Source: https://en.wikipedia.org/wiki/Fairness_%28machine_learning%29
tly connected to the "individual vs. group" aspect of fairness assessment.

Bias mitigation strategies

Fairness can be applied to machine learning algorithms in three different ways: data preprocessing, optimization during training, or post-processing of the algorithm's results.

Preprocessing

Usually, the classifier is not the only problem; the dataset is also biased. The discrimination of a dataset $D$ with respect to the group $A = a$ can be defined as follows:

$$\operatorname{disc}_{A=a}(D) = \frac{|\{X \in D : X(A) \neq a,\, X(Y) = +\}|}{|\{X \in D : X(A) \neq a\}|} - \frac{|\{X \in D : X(A) = a,\, X(Y) = +\}|}{|\{X \in D : X(A) = a\}|}$$

That is, an approximation to the difference between the probabilities of belonging to the positive class given that the subject has a protected characteristic different from $a$ and equal to $a$. Algorithms correcting bias at preprocessing remove information about dataset variables which might result in unfair decisions, while trying to alter the data as little as possible. This is not as simple as just removing the sensitive variable, because other attributes can be correlated with the protected one.
A way to do this is to map each individual in the initial dataset to an intermediate representation in which it is impossible to identify whether it belongs to a particular protected group, while maintaining as much information as possible. Then, the new representation of the data is adjusted to get the maximum accuracy in the algorithm. This way, individuals are mapped into a new multivariable representation where the probability of any member of a protected group being mapped to a certain value in the new representation is the same as the probability of an individual that does not belong to the protected group. Then, this representation is used to obtain the prediction for the individual, instead of the initial data. As the intermediate representation is constructed giving the same probability to individuals inside or outside the protected group, this attribute is hidden from the classifier.
An example is explained in Zemel et al.,[49] where a multinomial random variable is used as an intermediate representation. In the process, the system is encouraged to preserve all information except that which can lead to biased decisions, and to obtain a prediction as accurate as possible. On the one hand, this procedure has the advantage that the preprocessed data can be used for any machine learning task. Furthermore, the classifier does not need to be modified, as the correction is applied to the dataset before processing. On the other hand, the other methods obtain better results in accuracy and fairness.[citation needed]
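As an illustration of the idea (not Zemel et al.'s exact objective, which also includes reconstruction and prediction-accuracy terms), the following numpy sketch softly assigns each individual to a set of prototypes and measures how differently the two groups use those prototypes; the prototype representation and the penalty are simplified stand-ins:

```python
import numpy as np

def soft_assignments(X, prototypes):
    # Softmax over negative squared distances: each row of the result is a
    # probability distribution over the K prototypes (the "intermediate
    # representation" of one individual).
    d2 = ((X[:, None, :] - prototypes[None, :, :]) ** 2).sum(-1)
    e = np.exp(-d2)
    return e / e.sum(axis=1, keepdims=True)        # shape (n, K)

def fairness_gap(X, a, prototypes):
    # a is a 0/1 protected-attribute vector. A fair representation makes
    # the average prototype usage of both groups (approximately) equal,
    # so this gap is driven towards 0 during training.
    M = soft_assignments(X, prototypes)
    return np.abs(M[a == 1].mean(axis=0) - M[a == 0].mean(axis=0)).sum()
```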
Reweighing

Reweighing is an example of a preprocessing algorithm. The idea is to assign a weight to each dataset point such that the weighted discrimination is 0 with respect to the designated group.[50] If the dataset $D$ were unbiased, the sensitive variable $A$ and the target variable $Y$ would be statistically independent, and the probability of the joint distribution would be the product of the probabilities as follows:

$$P_{\text{exp}}(A = a \wedge Y = +) = P(A = a) \times P(Y = +) = \frac{|\{X \in D : X(A) = a\}|}{|D|} \times \frac{|\{X \in D : X(Y) = +\}|}{|D|}$$

In reality, however, the dataset is not unbiased and the variables are not statistically independent, so the observed probability is:

$$P_{\text{obs}}(A = a \wedge Y = +) = \frac{|\{X \in D : X(A) = a,\, X(Y) = +\}|}{|D|}$$

To compensate for the bias, the software adds a weight, lower for favored objects and higher for unfavored objects. For each $X \in D$ we get:

$$W(X) = \frac{P_{\text{exp}}(A = X(A) \wedge Y = X(Y))}{P_{\text{obs}}(A = X(A) \wedge Y = X(Y))}$$

When each $X$ has an associated weight $W(X)$, the weighted discrimination with respect to group $A = a$ is computed as in the unweighted definition above, with counts replaced by sums of weights. It can be shown that after reweighing this weighted discrimination is 0.
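The weights can be computed directly from empirical frequencies, as in the numpy sketch below (the function name and toy data are mine); on the toy dataset the weighted discrimination comes out to exactly 0, as claimed above:

```python
import numpy as np

def reweighing_weights(A, Y):
    # Weight of each example: P_expected(A=a and Y=y) / P_observed(A=a and Y=y),
    # so that, under the weights, A and Y look statistically independent.
    A, Y = np.asarray(A), np.asarray(Y)
    w = np.empty(len(A), dtype=float)
    for a in np.unique(A):
        for y in np.unique(Y):
            cell = (A == a) & (Y == y)
            w[cell] = ((A == a).mean() * (Y == y).mean()) / cell.mean()
    return w

A = np.array([1, 1, 1, 0, 0, 0, 0, 0])   # 1 = protected group
Y = np.array([0, 0, 1, 1, 1, 1, 0, 1])   # 1 = positive class
w = reweighing_weights(A, Y)
disc = (w[(A == 0) & (Y == 1)].sum() / w[A == 0].sum()
        - w[(A == 1) & (Y == 1)].sum() / w[A == 1].sum())
print(disc)  # 0.0
```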
Inprocessing

Another approach is to correct the bias at training time. This can be done by adding constraints to the optimization objective of the algorithm.[51] These constraints force the algorithm to improve fairness by keeping the same rates of certain measures for the protected group and the rest of the individuals. For example, we can add to the objective of the algorithm the condition that the false positive rate be the same for individuals in the protected group and the ones outside the protected group. The main measures used in this approach are false positive rate, false negative rate, and overall misclassification rate. It is possible to add just one or several of these constraints to the objective of the algorithm. Note that the equality of false negative rates implies the equality of true positive rates (since TPR = 1 − FNR), and so implies equality of opportunity. After adding the restrictions, the problem may become intractable, so a relaxation of them may be needed.
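As a sketch of what such a relaxation can look like, the logistic-regression objective below replaces the hard constraint with a soft penalty on the squared gap between the groups' smoothed false positive rates; the penalty weight lam and the use of predicted probabilities instead of hard decisions are illustrative choices rather than a specific published formulation:

```python
import numpy as np

def penalized_loss(theta, X, y, a, lam=1.0):
    # Logistic regression cross-entropy plus a fairness penalty.
    p = 1.0 / (1.0 + np.exp(-X @ theta))               # predicted P(Y=1 | x)
    ce = -np.mean(y * np.log(p + 1e-9) + (1 - y) * np.log(1 - p + 1e-9))
    neg = (y == 0)                                     # actual negatives
    fpr_gap = p[neg & (a == 1)].mean() - p[neg & (a == 0)].mean()
    return ce + lam * fpr_gap ** 2                     # relaxed constraint
```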
Adversarial debiasing

We train two classifiers at the same time through some gradient-based method (e.g., gradient descent). The first one, the predictor, tries to accomplish the task of predicting $Y$, the target variable, given $X$, the input, by modifying its weights $W$ to minimize some loss function $L_P(\hat{y}, y)$. The second one, the adversary, tries to accomplish the task of predicting $Z$, the sensitive variable, given $\hat{y}$, by modifying its weights $U$ to minimize some loss function $L_A(\hat{z}, z)$.[52] An important point here is that, in order to propagate correctly, $\hat{y}$ above must refer to the raw output of the classifier, not the discrete prediction; for example, with an artificial neural network and a classification problem, $\hat{y}$ could refer to the output of the softmax layer. Then we update $U$ to minimize $L_A$ at each training step according to the gradient $\nabla_U L_A$, and we modify $W$ according to the expression:

$$\nabla_W L_P - \operatorname{proj}_{\nabla_W L_A} \nabla_W L_P - \alpha \nabla_W L_A$$

where $\alpha$ is a tunable hyperparameter that can vary at each time step. The intuitive idea is that we want the predictor to try to minimize $L_P$ (therefore the term $\nabla_W L_P$) while, at the same time, maximizing $L_A$ (therefore the term $-\alpha \nabla_W L_A$), so that the adversary fails at predicting the sensitive variable from $\hat{y}$. The term $-\operatorname{proj}_{\nabla_W L_A} \nabla_W L_P$ prevents the predictor from moving in a direction that helps the adversary decrease its loss function. It can be shown that training a predictor classification model with this algorithm improves demographic parity with respect to training it without the adversary.
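A minimal numpy sketch of that predictor update, treating the two gradients as already-computed flat vectors (how they are obtained depends on the model; the learning rate and alpha are placeholder values):

```python
import numpy as np

def proj(u, v):
    # Projection of u onto v.
    return (u @ v) / (v @ v + 1e-9) * v

def predictor_step(W, grad_LP, grad_LA, lr=0.01, alpha=1.0):
    # grad_LP: gradient of the predictor loss L_P with respect to W.
    # grad_LA: gradient of the adversary loss L_A with respect to W.
    # Follow L_P, remove the component that helps the adversary, and
    # additionally move against the adversary's gradient.
    update = grad_LP - proj(grad_LP, grad_LA) - alpha * grad_LA
    return W - lr * update
```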
Postprocessing

The final method tries to correct the results of a classifier to achieve fairness. In this method, we have a classifier that returns a score for each individual, and we need to make a binary prediction for them. High scores are likely to get a positive outcome, while low scores are likely to get a negative one, but we can adjust the threshold that determines when to answer "yes" as desired. Note that variations in the threshold value affect the trade-off between the rates for true positives and true negatives. If the score function is fair in the sense that it is independent of the protected attribute, then any choice of the threshold will also be fair, but classifiers of this type tend to be biased, so a different threshold may be required for each protected group to achieve fairness.[53] A way to do this is to plot the true positive rate against the false positive rate at various threshold settings (this is called the ROC curve) and find a threshold where the rates for the protected group and other individuals are equal.[53]
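For example, to equalize true positive rates (the equal-opportunity criterion of Hardt et al.[53]) one can pick, per group, the smallest threshold that still attains a common target rate. The sketch below assumes scores, binary labels, and group membership are given as numpy arrays; the target rate is a placeholder:

```python
import numpy as np

def threshold_for_tpr(scores, y, target_tpr):
    # Smallest threshold whose true positive rate is still >= target_tpr.
    pos = np.sort(scores[y == 1])                  # scores of actual positives
    k = int(np.floor((1 - target_tpr) * len(pos)))
    return pos[min(k, len(pos) - 1)]

def group_thresholds(scores, y, a, target_tpr=0.8):
    # One threshold per protected-attribute value.
    return {g: threshold_for_tpr(scores[a == g], y[a == g], target_tpr)
            for g in np.unique(a)}
```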
Reject option based classification

Given a classifier, let $P(+|X)$ be the probability computed by the classifier as the probability that the instance $X$ belongs to the positive class +. When $P(+|X)$ is close to 1 or to 0, the instance $X$ is specified with a high degree of certainty to belong to class + or − respectively. However, when $P(+|X)$ is closer to 0.5, the classification is more unclear.[54] We say $X$ is a "rejected instance" if $\max(P(+|X),\, 1 - P(+|X)) \leq \theta$ for a certain $\theta$ such that $0.5 < \theta < 1$. The "ROC" algorithm consists of classifying the non-rejected instances following the rule above, and the rejected instances as follows: if the instance is an example of a deprived group ($X(A) = a$), then label it as positive; otherwise, label it as negative. We can optimize different measures of discrimination as functions of $\theta$ to find the optimal $\theta$ for each problem and avoid becoming discriminatory against the privileged group.[54]
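A compact numpy sketch of that decision rule (the default theta is a placeholder; in practice theta is tuned as described above):

```python
import numpy as np

def roc_predict(p_pos, deprived, theta=0.6):
    # p_pos: P(+|X) for each instance; deprived: True for the deprived group.
    p_pos = np.asarray(p_pos, dtype=float)
    deprived = np.asarray(deprived, dtype=bool)
    confident = np.maximum(p_pos, 1 - p_pos) > theta   # non-rejected instances
    yhat = (p_pos >= 0.5).astype(int)                  # usual rule
    yhat[~confident & deprived] = 1                    # rejected, deprived -> +
    yhat[~confident & ~deprived] = 0                   # rejected, favored  -> -
    return yhat
```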
References

1. Caton, Simon; Haas, Christian (4 October 2020). "Fairness in Machine Learning: A Survey". arXiv:2010.04053 [cs.LG].
2. Angwin, Julia; Larson, Jeff; Mattu, Surya; Kirchner, Lauren. "Machine Bias". ProPublica. Retrieved 16 April 2022.
3. Friedler, Sorelle A.; Scheidegger, Carlos; Venkatasubramanian, Suresh (April 2021). "The (Im)possibility of fairness: different value systems require different mechanisms for fair decision making". Communications of the ACM. 64 (4): 136–143. doi:10.1145/3433949.
4. Mehrabi, Ninareh; Morstatter, Fred; Saxena, Nripsuta; Lerman, Kristina; Galstyan, Aram (13 July 2021). "A Survey on Bias and Fairness in Machine Learning". ACM Computing Surveys. 54 (6): 115:1–115:35. arXiv:1908.09635. doi:10.1145/3457607.
5. "AI Fairness 360". aif360.mybluemix.net. Archived from the original on 29 June 2022. Retrieved 18 November 2022.
6. "IBM AI Fairness 360 open source toolkit adds new functionalities". TechRepublic. 4 June 2020.
7. "Responsible AI practices". Google AI. Retrieved 18 November 2022.
8. "Fairness Indicators". TensorFlow. 10 November 2022. Retrieved 18 November 2022.
9. "How we're using Fairness Flow to help build AI that works better for everyone". ai.facebook.com. Retrieved 18 November 2022.
10. "AI experts warn Facebook's anti-bias tool is 'completely insufficient'". VentureBeat. 31 March 2021. Retrieved 18 November 2022.
11. Hutchinson, Ben; Mitchell, Margaret (29 January 2019). "50 Years of Test (Un)fairness". Proceedings of the Conference on Fairness, Accountability, and Transparency (ACM FAT* '19). pp. 49–58. arXiv:1811.10104. doi:10.1145/3287560.3287600. ISBN 9781450361255.
12. Luo, Queenie; Puett, Michael J.; Smith, Michael D. (23 May 2023). A Perspectival Mirror of the Elephant: Investigating Language Bias on Google, ChatGPT, Wikipedia, and YouTube. arXiv:2303.16281.
13. Kotek, Hadas; Dockum, Rikker; Sun, David (5 November 2023). "Gender bias and stereotypes in Large Language Models". Proceedings of the ACM Collective Intelligence Conference (CI '23). pp. 12–24. doi:10.1145/3582269.3615599. ISBN 979-8-4007-0113-9.
14. Zhou, Karen; Tan, Chenhao (December 2023). "Entity-Based Evaluation of Political Bias in Automatic Summarization". Findings of the Association for Computational Linguistics: EMNLP 2023. pp. 10374–10386. arXiv:2305.02321. doi:10.18653/v1/2023.findings-emnlp.696.
15. "Attorney General Eric Holder Speaks at the National Association of Criminal Defense Lawyers 57th Annual Meeting and 13th State Criminal Justice Network Conference". www.justice.gov. 1 August 2014. Retrieved 16 April 2022.
16. Dieterich, William; Mendoza, Christina; Brennan, Tim (2016). "COMPAS Risk Scales: Demonstrating Accuracy Equity and Predictive Parity" (PDF). Northpointe Inc.
17. Larson, Jeff; Angwin, Julia (29 July 2016). "Technical Response to Northpointe". ProPublica. Retrieved 18 November 2022.
18. Rose, Adam (22 January 2010). "Are face-detection cameras racist?". Time. Retrieved 18 November 2022.
19. "Google says sorry for racist auto-tag in photo app". The Guardian. 1 July 2015. Retrieved 16 April 2022.
20. "A beauty contest was judged by AI and the robots didn't like dark skin". The Guardian. 8 September 2016. Retrieved 16 April 2022.
21. Buolamwini, Joy; Gebru, Timnit (February 2018). Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification (PDF). Conference on Fairness, Accountability and Transparency. pp. 77–91.
22. "Student proves Twitter algorithm 'bias' toward lighter, slimmer, younger faces". The Guardian. 10 August 2021. Retrieved 18 November 2022.
23. openai/dalle-2-preview. OpenAI. 17 November 2022. Retrieved 18 November 2022.
24. "No quick fix: How OpenAI's DALL·E 2 illustrated the challenges of bias in AI". NBC News. 27 July 2022. Retrieved 23 July 2024.
25. "Amazon scraps secret AI recruiting tool that showed bias against women". Reuters. 10 October 2018. Retrieved 18 November 2022.
26. "Apple Card algorithm sparks gender bias allegations against Goldman Sachs". Washington Post. Retrieved 18 November 2022.
27. Martinez, Emmanuel; Kirchner, Lauren (25 August 2021). "The Secret Bias Hidden in Mortgage-Approval Algorithms". The Markup. Retrieved 18 November 2022.
28. Ruggieri, Salvatore; Alvarez, Jose M.; Pugnana, Andrea; State, Laura; Turini, Franco (26 June 2023). "Can We Trust Fair-AI?". Proceedings of the AAAI Conference on Artificial Intelligence. 37 (13): 15421–15430. doi:10.1609/aaai.v37i13.26798.
29. Buyl, Maarten; De Bie, Tijl (2022). "Inherent Limitations of AI Fairness". Communications of the ACM. 67 (2): 48–55. arXiv:2212.06495. doi:10.1145/3624700.
30. Castelnovo, Alessandro; Inverardi, Nicole; Nanino, Gabriele; Penco, Ilaria Giuseppina; Regoli, Daniele (2023). "Fair Enough? A map of the current limitations of the requirements to have 'fair' algorithms". arXiv:2311.12435 [cs.AI].
31. Gaudeul, Alexia; Arrigoni, Ottla; Charisi, Vicky; Escobar-Planas, Marina; Hupont, Isabelle (2024). "Understanding the Impact of Human Oversight on Discriminatory Outcomes in AI-Supported Decision-Making". ECAI 2024. Frontiers in Artificial Intelligence and Applications. IOS Press. pp. 1067–1074. doi:10.3233/faia240598. ISBN 978-1-64368-548-9. Retrieved 24 January 2025.
32. Barocas, Solon; Hardt, Moritz; Narayanan, Arvind. Fairness and Machine Learning. Retrieved 15 December 2019.
33. Mayes, Elizabeth (2001). Handbook of Credit Scoring. New York: Glenlake Publishing. p. 282. ISBN 0-8144-0619-X.
34. Berk, Richard; Heidari, Hoda; Jabbari, Shahin; Kearns, Michael; Roth, Aaron (February 2021). "Fairness in Criminal Justice Risk Assessments: The State of the Art". Sociological Methods & Research. 50 (1): 3–44. arXiv:1703.09207. doi:10.1177/0049124118782533.
35. Räz, Tim (3 March 2021). "Group Fairness: Independence Revisited". Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency. pp. 129–137. arXiv:2101.02968. doi:10.1145/3442188.3445876. ISBN 978-1-4503-8309-7.
36. Verma, Sahil; Rubin, Julia (2018). "Fairness definitions explained". Proceedings of the International Workshop on Software Fairness. pp. 1–7. doi:10.1145/3194770.3194776. ISBN 9781450357463.
37. Gursoy, Furkan; Kakadiaris, Ioannis A. (November 2022). "Equal Confusion Fairness: Measuring Group-Based Disparities in Automated Decision Systems". 2022 IEEE International Conference on Data Mining Workshops (ICDMW). pp. 137–146. arXiv:2307.00472. doi:10.1109/ICDMW58026.2022.00027. ISBN 979-8-3503-4609-1.
38. Chen, Violet (Xinying); Hooker, J. N. (2021). "Welfare-based Fairness through Optimization". arXiv:2102.00311 [cs.AI].
39. Mullainathan, Sendhil (19 June 2018). Algorithmic Fairness and the Social Welfare Function. Keynote at the 19th ACM Conference on Economics and Computation (EC'18). YouTube. 48 minutes in. "In other words, if you have a social welfare function where what you care about is harm, and you care about harm to the African Americans, there you go: 12 percent less African Americans in jail overnight.... Before we get into the minutiae of relative harm, the welfare function is defined in absolute harm, so we should actually calculate the absolute harm first."
40. Mitchell, Shira; Potash, Eric; Barocas, Solon; d'Amour, Alexander; Lum, Kristian (2021). "Algorithmic Fairness: Choices, Assumptions, and Definitions". Annual Review of Statistics and Its Application. 8 (1): 141–163. arXiv:1811.07867. doi:10.1146/annurev-statistics-042720-125902.
41. Castelnovo, Alessandro; Crupi, Riccardo; Greco, Greta; Regoli, Daniele; Penco, Ilaria Giuseppina; Cosentini, Andrea Claudio (2022). "A clarification of the nuances in the fairness metrics landscape". Scientific Reports. 12 (1): 4209. arXiv:2106.00467. doi:10.1038/s41598-022-07939-1. PMC 8913820. PMID 35273279.
42. Mehrabi, Ninareh; Morstatter, Fred; Saxena, Nripsuta; Lerman, Kristina; Galstyan, Aram (2021). "A survey on bias and fairness in machine learning". ACM Computing Surveys (CSUR). 54 (6): 1–35.
43. Dwork, Cynthia; Hardt, Moritz; Pitassi, Toniann; Reingold, Omer; Zemel, Richard (2012). "Fairness through awareness". Proceedings of the 3rd Innovations in Theoretical Computer Science Conference (ITCS '12). pp. 214–226. doi:10.1145/2090236.2090255. ISBN 9781450311151.
44. Galhotra, Sainyam; Brun, Yuriy; Meliou, Alexandra (2017). "Fairness testing: Testing software for discrimination". Proceedings of the 2017 11th Joint Meeting on Foundations of Software Engineering. pp. 498–510. arXiv:1709.03221. doi:10.1145/3106237.3106277. ISBN 9781450351058.
45. Kusner, M. J.; Loftus, J.; Russell, C.; Silva, R. (2017). "Counterfactual fairness". Advances in Neural Information Processing Systems. 30.
46. Coston, Amanda; Mishler, Alan; Kennedy, Edward H.; Chouldechova, Alexandra (27 January 2020). "Counterfactual risk assessments, evaluation, and fairness". Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency (FAT* '20). pp. 582–593. doi:10.1145/3351095.3372851. ISBN 978-1-4503-6936-7.
47. Mishler, Alan; Kennedy, Edward H.; Chouldechova, Alexandra (1 March 2021). "Fairness in Risk Assessment Instruments". Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency (FAccT '21). pp. 386–400. doi:10.1145/3442188.3445902. ISBN 978-1-4503-8309-7.
48. Plecko, Drago; Bareinboim, Elias (2022). "Causal Fairness Analysis". arXiv:2207.11385.
49. Zemel, Richard; Wu, Yu (Ledell); Swersky, Kevin; Pitassi, Toniann; Dwork, Cynthia. Learning Fair Representations. Retrieved 1 December 2019.
50. Kamiran, Faisal; Calders, Toon. Data preprocessing techniques for classification without discrimination. Retrieved 17 December 2019.
51. Zafar, Muhammad Bilal; Valera, Isabel; Gómez Rodríguez, Manuel; Gummadi, Krishna P. Fairness Beyond Disparate Treatment & Disparate Impact: Learning Classification without Disparate Mistreatment. Retrieved 1 December 2019.
52. Zhang, Brian Hu; Lemoine, Blake; Mitchell, Margaret. Mitigating Unwanted Biases with Adversarial Learning. Retrieved 17 December 2019.
53. Hardt, Moritz; Price, Eric; Srebro, Nathan. Equality of Opportunity in Supervised Learning. Retrieved 1 December 2019.
54. Kamiran, Faisal; Karim, Asim; Zhang, Xiangliang. Decision Theory for Discrimination-aware Classification. Retrieved 17 December 2019.
Source: https://en.wikipedia.org/wiki/SixDegrees.com
SixDegrees.com

SixDegrees.com is a social network service web site that started in 1997[1][2][3] and is based on the Web of Contacts model of social networking. It was named after the concept of six degrees of separation[4] and allows users to list friends, family members and acquaintances, whether registered on the site or not. External contacts are invited to join. People who confirm a relationship with an existing user but did not go on to register with the site continue to receive occasional email updates and solicitations. Users can send messages and post bulletin board items to people in their first, second, and third degrees, and see their connection to any other user on the site. SixDegrees was one of the first social networking sites of the kind that later became highly popular. It was followed by more successful sites based on the "social-circles network model", such as Friendster, MySpace, LinkedIn, XING, and Facebook.

MacroView (later renamed SixDegrees Inc.), the company that developed the site, was founded by CEO Andrew Weinreich in May 1996[5] and was based in New York City. At its height, SixDegrees had around 100 employees, and the site had around 3,500,000 fully registered members.[6] The site was bought by YouthStream Media Networks in December 1999 for $125 million.[7][8] SixDegrees shut down one year later, on December 30, 2000,[3] and was brought back up a few years after.[2]

Andrew Weinreich

Andrew Weinreich (/ˈwaɪnrɪtʃ/ WYNE-ritch) is an American businessman primarily known for launching SixDegrees in 1997.[9] In 2001, Weinreich founded Joltage, an infrastructure services business devoted to building out a global network of Wi-Fi hotspots.[10] Joltage was forced to shut down in 2003 when the company ran out of funding.[11] In 2003, Weinreich started I Stand For, Inc. He sold the company in February 2006.[12] In February 2006, Weinreich co-founded MeetMoi with Jeremy Levy.[13][12][14] In 2008, Weinreich and Levy spun off the persistent tracking technologies of MeetMoi into a separate company called Xtify.[12] Xtify was acquired by IBM on October 3, 2013.[15][16][17]
References

1. boyd, danah m; Ellison, Nicole B. (2007). "Social Network Sites: Definition, History, and Scholarship". Journal of Computer-Mediated Communication. 13 (1): 210–230. doi:10.1111/j.1083-6101.2007.00393.x.
2. Barker, Melissa (2012). "10". Social Media Marketing: A Strategic Approach (1st ed.). Cengage Learning.
3. "sixdegrees sitedown message". 2001-03-02. Archived from the original on 2001-03-02. Retrieved 2021-08-12.
4. "TECHNOLOGY & MEDIA: PATENTS; Idea for Online Networking Brings Two Entrepreneurs Together". The New York Times. Archived from the original on 2010-11-02. Retrieved 2018-07-22.
5. "Then and now: a history of social networking sites". 2014-02-04. Archived from the original on 2018-07-23. Retrieved 2018-07-22.
6. Kirkpatrick, David (2010). The Facebook Effect: The Inside Story of the Company That Is Connecting the World. Simon & Schuster. ISBN 978-1439102121.
7. Angwin, Julia (2009). Stealing MySpace: The Battle to Control the Most Popular Website in America. Random House. p. 52.
8. Dow Jones (16 December 1999). "COMPANY NEWS; YOUTHSTREAM TO ACQUIRE SIXDEGREES FOR $125 MILLION". The New York Times. Archived from the original on 2018-07-22. Retrieved 2018-07-22.
9. Goble, Gordon (2012-09-06). "The History of Social Networking".
10. Metz, Cade (April 8, 2002). "Would You Like Wireless Access with That?". PC Magazine.
11. Shim, Richard (28 February 2003). "Start-up Joltage unplugs Wi-Fi service".
12. Boyd Myers, Courtney (July 11, 2011). "Where are they now? New York City's Dot Com Entrepreneurs". Archived from the original on March 6, 2018. Retrieved June 18, 2018.
13. Smith, Kevin (24 October 2012). "Check Out MeetMoi, The Dating App That Has Quietly Reached 3 Million Users". Business Insider.
14. Tsotsis, Alexia (3 August 2010). "Push Notifications Meet Dating: meetMoi NOW Alerts You When Matches Are Nearby".
15. Jackson, Joab (3 October 2013). "IBM buys e-commerce tool maker Xtify". Computerworld.
16. "Patent Issued for Location-Based Services Platform". Telecommunications Weekly. 5 June 2013. Archived from the original on 5 November 2013.
17. "U.S. Patents Awarded to Inventors in New York". 23 May 2013.

External links

- Official website at the Wayback Machine (archived 2000-05-11)

Further reading

- Bedell, Doug. "Meeting your new best friends: Six Degrees widens your contacts in exchange for sampling Web sites". The Dallas Morning News, October 27, 1998.
Source: https://en.wikipedia.org/wiki/Transformer_%28deep_learning_architecture%29
Transformer (deep learning architecture)

The transformer is a deep learning architecture that was developed by researchers at Google and is based on the multi-head attention mechanism, which was proposed in the 2017 paper "Attention Is All You Need".[1] Text is converted to numerical representations called tokens, and each token is converted into a vector via lookup from a word embedding table.[1] At each layer, each token is then contextualized within the scope of the context window with other (unmasked) tokens via a parallel multi-head attention mechanism, allowing the signal for key tokens to be amplified and less important tokens to be diminished. Transformers have the advantage of having no recurrent units, therefore requiring less training time than earlier recurrent neural architectures (RNNs) such as long short-term memory (LSTM).[2] Later variations have been widely adopted for training large language models (LLMs) on large (language) datasets.[3]

Transformers were first developed as an improvement over previous architectures for machine translation,[4][5] but have found many applications since. They are used in large-scale natural language processing, computer vision (vision transformers), reinforcement learning,[6][7] audio,[8] multimodal learning, robotics,[9] and even playing chess.[10] They have also led to the development of pre-trained systems, such as generative pre-trained transformers (GPTs)[11] and BERT[12] (bidirectional encoder representations from transformers).

History

Predecessors

For many years, sequence modelling and generation was done by using plain recurrent neural networks (RNNs). A well-cited early example was the Elman network (1990). In theory, the information from one token can propagate arbitrarily far down the sequence, but in practice the vanishing-gradient problem leaves the model's state at the end of a long sentence without precise, extractable information about preceding tokens. A key breakthrough was LSTM (1995),[note 1] an RNN which used various innovations to overcome the vanishing gradient problem, allowing efficient learning of long-sequence modelling. One key innovation was the use of an attention mechanism which used neurons that multiply the outputs of other neurons, so-called multiplicative units.[13] Neural networks using multiplicative units were later called sigma-pi networks[14] or higher-order networks.[15] LSTM became the standard architecture for long sequence modelling until the 2017 publication of Transformers. However, LSTM still used sequential processing, like most other RNNs.[note 2] Specifically, RNNs operate one token at a time from first to last; they cannot operate in parallel over all tokens in a sequence. Modern Transformers overcome this problem, but unlike RNNs, they require computation time that is quadratic in the size of the context window.

The linearly scaling fast weight controller (1992) learns to compute a weight matrix for further processing depending on the input.[16] One of its two networks has "fast weights" or "dynamic links" (1981).[17][18][19] A slow neural network learns by gradient descent to generate keys and values for computing the weight changes of the fast neural network which computes answers to queries.[16] This was later shown to be equivalent to the unnormalized linear Transformer.[20][21]
Attention with seq2seq

The idea of encoder-decoder sequence transduction had been developed in the early 2010s (see previous papers[22][23]). The papers most commonly cited as the originators of seq2seq are two concurrently published papers from 2014.[22][23] A 380M-parameter model for machine translation used two long short-term memories (LSTMs).[23] Its architecture consists of two parts. The encoder is an LSTM that takes in a sequence of tokens and turns it into a vector. The decoder is another LSTM that converts the vector into a sequence of tokens. Similarly, another 130M-parameter model used gated recurrent units (GRUs) instead of LSTM.[22] Later research showed that GRUs are neither better nor worse than LSTMs for seq2seq.[24][25]

These early seq2seq models had no attention mechanism, and the state vector was accessible only after the last word of the source text was processed. Although in theory such a vector retains the information about the whole original sentence, in practice the information is poorly preserved. This is because the input is processed sequentially by one recurrent network into a fixed-size output vector, which is then processed by another recurrent network into an output. If the input is long, then the output vector would not be able to contain all relevant information, degrading the output. As evidence, reversing the input sentence improved seq2seq translation.[26]

The RNNsearch model introduced an attention mechanism to seq2seq for machine translation to solve the bottleneck problem (of the fixed-size output vector), allowing the model to process long-distance dependencies more easily. The name is because it "emulates searching through a source sentence during decoding a translation".[4] The relative performances of global (that of RNNsearch) and local (sliding window) attention model architectures were compared for machine translation, finding that mixed attention had higher quality than global attention, while local attention reduced translation time.[27]

In 2016, Google Translate was revamped to Google Neural Machine Translation, which replaced the previous model based on statistical machine translation. The new model was a seq2seq model where the encoder and the decoder were both 8 layers of bidirectional LSTM.[28] It took nine months to develop, and it outperformed the statistical approach, which took ten years to develop.[29]
Parallelizing attention

Seq2seq models with attention (including self-attention) still suffered from the same issue with recurrent networks, which is that they are hard to parallelize, which prevented them from being accelerated on GPUs. In 2016, decomposable attention applied a self-attention mechanism to feedforward networks, which are easy to parallelize, and achieved state-of-the-art results in textual entailment with an order of magnitude fewer parameters than LSTMs.[30] One of its authors, Jakob Uszkoreit, suspected that attention without recurrence is sufficient for language translation, thus the title "attention is all you need".[31] That hypothesis was against conventional wisdom at the time, and even his father Hans Uszkoreit, a well-known computational linguist, was skeptical.[31] In the same year, self-attention (called intra-attention or intra-sentence attention) was proposed for LSTMs.[32]

In 2017, the original (100M-sized) encoder-decoder transformer model was proposed in the "Attention is all you need" paper. At the time, the focus of the research was on improving seq2seq for machine translation, by removing its recurrence to process all tokens in parallel, but preserving its dot-product attention mechanism to keep its text processing performance.[1] This led to the introduction of a multi-head attention model that was easier to parallelize due to the use of independent heads and the lack of recurrence. Its parallelizability was an important factor in its widespread use in large neural networks.[33]

AI boom era

Already in spring 2017, even before the "Attention is all you need" preprint was published, one of the co-authors applied the "decoder-only" variation of the architecture to generate fictitious Wikipedia articles.[34] The transformer architecture is now used alongside many generative models that contribute to the ongoing AI boom.

In language modelling, ELMo (2018) was a bi-directional LSTM that produces contextualized word embeddings, improving upon the line of research from bag of words and word2vec. It was followed by BERT (2018), an encoder-only Transformer model.[35] In October 2019, Google started using BERT to process search queries.[36] In 2020, Google Translate replaced the previous RNN-encoder–RNN-decoder model with a Transformer-encoder–RNN-decoder model.[37]

Starting in 2018, the OpenAI GPT series of decoder-only Transformers became state of the art in natural language generation. In 2022, a chatbot based on GPT-3, ChatGPT, became unexpectedly[38] popular, triggering a boom around large language models.[39][40]

Since 2020, Transformers have been applied in modalities beyond text, including the vision transformer,[41] speech recognition,[42] robotics,[6] and multimodal learning.[43] The vision transformer, in turn, stimulated new developments in convolutional neural networks.[44] Image and video generators like DALL-E (2021), Stable Diffusion 3 (2024),[45] and Sora (2024) use Transformers to analyse input data (like text prompts) by breaking it down into "tokens" and then calculating the relevance between each token using self-attention, which helps the model understand the context and relationships within the data.
Training

Methods for stabilizing training

The plain transformer architecture had difficulty converging. In the original paper[1] the authors recommended using learning rate warmup. That is, the learning rate should linearly scale up from 0 to the maximal value for the first part of the training (usually recommended to be 2% of the total number of training steps), before decaying again. A 2020 paper found that using layer normalization before (instead of after) multiheaded attention and feedforward layers stabilizes training, not requiring learning rate warmup.[46]
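For reference, the schedule used in the original paper rises linearly for a fixed number of warmup steps and then decays with the inverse square root of the step number:

```python
# Learning-rate schedule from "Attention Is All You Need":
# lr(step) = d_model^-0.5 * min(step^-0.5, step * warmup^-1.5)
def transformer_lr(step, d_model=512, warmup=4000):
    step = max(step, 1)  # avoid dividing by zero at the first step
    return d_model ** -0.5 * min(step ** -0.5, step * warmup ** -1.5)
```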
Pretrain-finetune

Transformers typically are first pretrained by self-supervised learning on a large generic dataset, followed by supervised fine-tuning on a small task-specific dataset. The pretrain dataset is typically an unlabeled large corpus, such as The Pile. Tasks for pretraining and fine-tuning commonly include:

- language modeling[12]
- next-sentence prediction[12]
- question answering[3]
- reading comprehension
- sentiment analysis[1]
- paraphrasing[1]

The T5 transformer report[47] documents a large number of natural language pretraining tasks. Some examples are:

- restoring or repairing incomplete or corrupted text. For example, the input, "Thank you ~~ me to your party ~~ week", might generate the output, "Thank you for inviting me to your party last week".
- translation between natural languages (machine translation)
- judging the pragmatic acceptability of natural language. For example, the following sentence might be judged "not acceptable",[48] because even though it is syntactically well-formed, it is improbable in ordinary human usage: The course is jumping well.

Note that while each of these tasks is trivial or obvious for human native speakers of the language (or languages), they have typically proved challenging for previous generations of machine learning architectures.
Tasks

In general, there are three classes of language modelling tasks: "masked",[49] "autoregressive",[50] and "prefixLM".[51] These classes are independent of a specific modeling architecture such as the Transformer, but they are often discussed in the context of the Transformer.

In a masked task,[49] one or more of the tokens is masked out, and the model would produce a probability distribution predicting what the masked-out tokens are based on the context. The loss function for the task is typically the sum of log-perplexities for the masked-out tokens:

$$\text{Loss} = -\sum_{t \in \text{masked tokens}} \ln(\text{probability of } x_t \text{ conditional on its context})$$

and the model is trained to minimize this loss function. The BERT series of models are trained for masked token prediction and another task.

In an autoregressive task,[50] the entire sequence is masked at first, and the model produces a probability distribution for the first token. Then the first token is revealed and the model predicts the second token, and so on. The loss function for the task is still typically the same. The GPT series of models are trained by autoregressive tasks.

In a prefixLM task,[51] the sequence is divided into two parts. The first part is presented as context, and the model predicts the first token of the second part. Then that would be revealed, and the model predicts the second token, and so on. The loss function for the task is still typically the same. The T5 series of models are trained by prefixLM tasks.

Note that "masked" as in "masked language modelling" is not "masked" as in "masked attention", and "prefixLM" (prefix language modeling) is not "prefixLM" (prefix language model).
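The three objectives differ only in which positions contribute to the loss. A small sketch, assuming the model's per-position output distributions over the vocabulary are already given:

```python
import numpy as np

# probs[t] is the model's predicted distribution over the vocabulary at
# position t; token_ids[t] is the token actually there.
def masked_lm_loss(token_ids, probs, positions):
    # BERT-style: only the masked-out positions contribute.
    return -sum(np.log(probs[t][token_ids[t]]) for t in positions)

def autoregressive_loss(token_ids, probs):
    # GPT-style: every position contributes, each predicted from its prefix.
    return masked_lm_loss(token_ids, probs, range(len(token_ids)))

def prefix_lm_loss(token_ids, probs, prefix_len):
    # T5-style: only the positions after the given prefix contribute.
    return masked_lm_loss(token_ids, probs, range(prefix_len, len(token_ids)))
```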
Architecture

All transformers have the same primary components:

- Tokenizers, which convert text into tokens.
- An embedding layer, which converts tokens and positions of the tokens into vector representations.
- Transformer layers, which carry out repeated transformations on the vector representations, extracting more and more linguistic information. These consist of alternating attention and feedforward layers. There are two major types of transformer layers: encoder layers and decoder layers, with further variants.
- An un-embedding layer, which converts the final vector representations back to a probability distribution over the tokens.

The following description follows exactly the Transformer as described in the original paper. There are variants, described in the following section. By convention, we write all vectors as row vectors. This, for example, means that pushing a vector $x$ through a linear layer means multiplying it by a weight matrix on the right, as $xW$.
Tokenization

As the Transformer architecture natively processes numerical data, not text, there must be a translation between text and tokens. A token is an integer that represents a character, or a short segment of characters. On the input side, the input text is parsed into a token sequence. Similarly, on the output side, the output tokens are parsed back to text. The module doing the conversion between texts and token sequences is a tokenizer. The set of all tokens is the vocabulary of the tokenizer, and its size is the vocabulary size $n_{\text{vocabulary}}$. When faced with tokens outside the vocabulary, typically a special token is used, written as "[UNK]" for "unknown". Some commonly used tokenizers are byte pair encoding, WordPiece, and SentencePiece.

Embedding

Each token is converted into an embedding vector via a lookup table. Equivalently stated, it multiplies a one-hot representation of the token by an embedding matrix $M$. For example, if the input token is $3$, then the one-hot representation is $[0, 0, 0, 1, 0, 0, \dots]$, and its embedding vector is $\mathrm{Embed}(3) = [0, 0, 0, 1, 0, 0, \dots]\, M$. The token embedding vectors are added to their respective positional encoding vectors (see below), producing the sequence of input vectors. The number of dimensions in an embedding vector is called the hidden size or embedding size and written as $d_{\text{emb}}$.[35] This size is written as $d_{\text{model}}$ in the original Transformer paper.[1]

Un-embedding

An un-embedding layer is almost the reverse of an embedding layer. Whereas an embedding layer converts a token into a vector, an un-embedding layer converts a vector into a probability distribution over tokens. The un-embedding layer is a linear-softmax layer:

$$\mathrm{UnEmbed}(x) = \mathrm{softmax}(x W + b)$$

The matrix $W$ has shape $(d_{\text{emb}}, n_{\text{vocabulary}})$. The embedding matrix $M$ and the un-embedding matrix $W$ are sometimes required to be transposes of each other, a practice called weight tying.[52]
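A minimal numpy sketch of the embedding lookup and a weight-tied un-embedding; the vocabulary and embedding sizes are placeholders:

```python
import numpy as np

rng = np.random.default_rng(0)
n_vocab, d_emb = 1000, 64
M = rng.normal(size=(n_vocab, d_emb))       # embedding matrix

def embed(token):
    # Table lookup; equivalent to one_hot(token) @ M.
    return M[token]

def unembed(x):
    # Linear-softmax layer; with weight tying the un-embedding matrix is M^T.
    logits = x @ M.T
    e = np.exp(logits - logits.max())
    return e / e.sum()                      # probability distribution over tokens
```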
Positional encoding

A positional encoding is a fixed-size vector representation of the relative positions of tokens within a sequence: it provides the transformer model with information about where the words are in the input sequence. This induces a bias towards the order of the input sequence, so that, for example, the input sequence "man bites dog" is processed differently from "dog bites man". The positional encoding is defined as a function of type $f: \mathbb{R} \to \mathbb{R}^d$, where $d$ is a positive even integer. The full positional encoding defined in the original paper[1] is:

$$f(t)_{2k} = \sin\!\left(\frac{t}{N^{2k/d}}\right), \qquad f(t)_{2k+1} = \cos\!\left(\frac{t}{N^{2k/d}}\right), \qquad k = 0, 1, \dots, d/2 - 1$$

Here, $N$ is a free parameter that should be significantly larger than the biggest $t$ that would be input into the positional encoding function. The original paper uses $N = 10000$. The function is in a simpler form when written as a complex function of type $f: \mathbb{R} \to \mathbb{C}^{d/2}$:

$$f(t) = \left(e^{it/r^k}\right)_{k = 0, 1, \dots, d/2 - 1}, \qquad r = N^{2/d}$$

The main reason for using this positional encoding function is that, using it, shifts are linear transformations:

$$f(t + \Delta t) = \mathrm{diag}\big(f(\Delta t)\big)\, f(t)$$

where $\Delta t$ is the distance one wishes to shift. This allows the transformer to take any encoded position, and find the encoding of the position n-steps-ahead or n-steps-behind, by a matrix multiplication. By taking a linear sum, any convolution can also be implemented as a linear transformation:

$$\sum_j c_j\, f(t + \Delta t_j) = \left( \sum_j c_j\, \mathrm{diag}\big(f(\Delta t_j)\big) \right) f(t)$$

for any constants $c_j$. This allows the transformer to take any encoded position and find a linear sum of the encoded locations of its neighbors. This sum of encoded positions, when fed into the attention mechanism, would create attention weights on its neighbors, much like what happens in a convolutional neural network language model. In the authors' words, "we hypothesized it would allow the model to easily learn to attend by relative position." In typical implementations, all operations are done over the real numbers, not the complex numbers, but since complex multiplication can be implemented as real 2-by-2 matrix multiplication, this is a mere notational difference.
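A direct numpy transcription of the sinusoidal encoding above, with the original paper's N = 10000 as the default:

```python
import numpy as np

def positional_encoding(t, d, N=10000):
    # Even indices get sin(t / N^(2k/d)), odd indices the matching cos.
    k = np.arange(d // 2)
    angle = t / N ** (2 * k / d)
    pe = np.empty(d)
    pe[0::2] = np.sin(angle)
    pe[1::2] = np.cos(angle)
    return pe
```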
Encoder-decoder (overview)

Like earlier seq2seq models, the original transformer model used an encoder-decoder architecture. The encoder consists of encoding layers that process all the input tokens together one layer after another, while the decoder consists of decoding layers that iteratively process the encoder's output and the decoder's output tokens so far. The purpose of each encoder layer is to create contextualized representations of the tokens, where each representation corresponds to a token that "mixes" information from other input tokens via a self-attention mechanism. Each decoder layer contains two attention sublayers: (1) cross-attention for incorporating the output of the encoder (contextualized input token representations), and (2) self-attention for "mixing" information among the input tokens to the decoder (i.e., the tokens generated so far during inference time).[53][54]

Both the encoder and decoder layers have a feed-forward neural network for additional processing of their outputs and contain residual connections and layer normalization steps.[54] These feed-forward layers contain most of the parameters in a Transformer model.

Feedforward network

The feedforward network (FFN) modules in a Transformer are 2-layered multilayer perceptrons:

$$\mathrm{FFN}(x) = \phi(x W^{(1)} + b^{(1)})\, W^{(2)} + b^{(2)}$$

where $W^{(1)}$ and $W^{(2)}$ are weight matrices, $b^{(1)}$ and $b^{(2)}$ are bias vectors, and $\phi$ is its activation function. The original Transformer used ReLU activation.

The number of neurons in the middle layer is called the intermediate size (GPT),[55] filter size (BERT),[35] or feedforward size (BERT).[35] It is typically larger than the embedding size. For example, in both the GPT-2 series and the BERT series, the intermediate size of a model is 4 times its embedding size: $d_{\text{ffn}} = 4\, d_{\text{emb}}$.
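A numpy sketch of one such block with ReLU, using the typical 4x ratio (the sizes and random initialization are placeholders):

```python
import numpy as np

def ffn(x, W1, b1, W2, b2):
    # Two linear layers around a ReLU activation.
    return np.maximum(x @ W1 + b1, 0.0) @ W2 + b2

d_emb, d_ffn = 512, 2048                     # d_ffn = 4 * d_emb
rng = np.random.default_rng(0)
W1, b1 = 0.02 * rng.normal(size=(d_emb, d_ffn)), np.zeros(d_ffn)
W2, b2 = 0.02 * rng.normal(size=(d_ffn, d_emb)), np.zeros(d_emb)
y = ffn(rng.normal(size=(10, d_emb)), W1, b1, W2, b2)   # shape (10, 512)
```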
Scaled dot-product attention

Attention head

The attention mechanism used in the Transformer architecture consists of scaled dot-product attention units. For each unit, the transformer model learns three weight matrices: the query weights $W^Q$, the key weights $W^K$, and the value weights $W^V$. The module takes three sequences: a query sequence, a key sequence, and a value sequence. The query sequence is a sequence of length $\ell_{\text{seq}}$, and each entry is a vector of dimension $d_{\text{emb}}$. Similarly for the key and value sequences.

Each vector $x_i$ in the query sequence is multiplied by the matrix $W^Q$ to produce a query vector $q_i = x_i W^Q$. The matrix of all query vectors is the query matrix:

$$Q = X W^Q$$

Similarly, we construct the key matrix $K = X W^K$ and the value matrix $V = X W^V$. It is usually the case that all $W^Q, W^K, W^V$ are square matrices.

Attention weights are calculated using the query and key vectors: the attention weight $a_{ij}$ from token $i$ to token $j$ is the dot product between $q_i$ and $k_j$. The attention weights are divided by the square root of the dimension of the key vectors, $\sqrt{d_k}$, which stabilizes gradients during training, and passed through a softmax which normalizes the weights. The fact that $W^Q$ and $W^K$ are different matrices allows attention to be non-symmetric: if token $i$ attends to token $j$ (i.e., $q_i \cdot k_j$ is large), this does not necessarily mean that token $j$ will attend to token $i$ (i.e., $q_j \cdot k_i$ could be small). The output of the attention unit for token $i$ is the weighted sum of the value vectors of all tokens, weighted by $a_{ij}$, the attention from token $i$ to each token.

The attention calculation for all tokens can be expressed as one large matrix calculation using the softmax function, which is useful for training due to computational matrix operation optimizations that quickly compute matrix operations:

$$\mathrm{Attention}(Q, K, V) = \mathrm{softmax}\!\left(\frac{Q K^{\mathsf{T}}}{\sqrt{d_k}}\right) V$$

where the matrices $Q$, $K$ and $V$ are defined as the matrices whose $i$th rows are the vectors $q_i$, $k_i$, and $v_i$ respectively, and the softmax is applied over each of the rows.
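The whole computation fits in a few lines of numpy; here X is the matrix of input row vectors and the weight matrices are assumed given:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def attention(X, Wq, Wk, Wv):
    # Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V, softmax row-wise.
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    d_k = K.shape[-1]
    weights = softmax(Q @ K.T / np.sqrt(d_k))   # (seq, seq) attention weights
    return weights @ V
```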