Google Sheets
…and Safari web browsers.[15] Users can access all spreadsheets, among other files, collectively through the Google Drive website. In June 2014, Google rolled out a dedicated website homepage for Sheets that contains only files created with Sheets.[16] In 2014, Google launched a dedicated mobile app for Sheets on the Android and iOS mobile operating systems.[17][18][19] In 2015, the mobile website for Sheets was updated with a "simpler, more uniform" interface. While users can read spreadsheets through the mobile websites, users trying to edit will be redirected towards the mobile app to eliminate editing on the mobile web.[20]
Features
Editing
Collaboration and revision history
Google Sheets serves as a collaborative tool for cooperative editing of spreadsheets in real time. Documents can be shared, opened, and edited by multiple users simultaneously, and users can see character-by-character changes as other collaborators make edits. Changes are automatically saved to Google's servers, and a revision history is automatically kept so past edits may be viewed and reverted to.[22] An editor's current position is represented with an editor-specific color/cursor, so if another editor happens to be viewing that part of the document they can see edits as they occur. A sidebar chat functionality allows collaborators to discuss edits. The revision history allows users to see the additions made to a document, with each author distinguished by color. Only adjacent revisions can be compared, and users cannot control how frequently revisions are saved. Files can be exported to a user's local computer in a variety of formats such as PDF and Office Open XML.[23] Sheets supports tagging for archival and organizational purposes.[24]
Explore
Launched for the entire Drive suite in September 2016, "Explore" enables additional functionality through machine learning.[25][26][27] In Google Sheets, Explore enables users to ask questions, such as "How many units were sold on Black Friday?", and Explore will return the answer without requiring formula knowledge from the user. In June 2017, Google expanded the Explore feature in Google Sheets to automatically build charts and visualize data,[28][29] and again expanded it in December to feature machine learning capable of automatically creating pivot tables.[30][31] In October 2016, Google announced the addition of "Action items" to Sheets. If a user assigns a task within a Sheet, the service will intelligently assign that action to the designated user. Google states this will make it easier for other collaborators to visualize who is responsible for a task. When a user visits Google Drive or Sheets, any files containing tasks assigned to them will be highlighted with a badge.[32] In March 2014, Google introduced add-ons: new tools from third-party developers that add more features for Google Sheets.[33]
Offline editing
To view and edit spreadsheets offline on a computer, users need to be using a Chromium-based web browser (e.g., Google Chrome, Microsoft Edge). A Chrome extension, Google Docs Offline, allows users to enable offline support for Sheets and other Drive suite files on the Google Drive website.[34] The Android and iOS apps natively support offline editing.[35][36]
Files
Supported file formats and limits
Files in the following formats can be viewed and converted to the Sheets format: .xls (if newer than Microsoft Office 95), .xlsx, .xlsm, .xlt, .xltx, .xltm, .ods, .csv, .tsv, .txt, and .tab.[37] Overall document size is capped at 10 million cells.[38][39]
Google Workspace
The Sheets app and the rest of the Google Docs Editors suite are free to use for individuals, but Sheets is also available as part of the business-centered Google Workspace (formerly G Suite) service by Google, a monthly subscription that enables additional business-focused functionality.[40]
Integration with Charts and Wikipedia
Sheets can produce Google Charts[41] and has a third-party plugin which allows for integration with Wikipedia.[42]
Other functionality
A simple find-and-replace tool is available. The service includes a web clipboard tool that allows users to copy and paste content between Google Sheets and Google Docs, Google Slides, and Google Drawings. The web clipboard can also be used for copying and pasting content between different computers. Copied items are stored on Google's servers for up to 30 days.[43]
Google offers an extension for the Google Chrome web browser called Office Editing for Docs, Sheets and Slides that enables users to view and edit Microsoft Excel documents in Google Chrome, via the Google Sheets app. The extension can be used for opening Excel files stored on the computer using Chrome, as well as for opening files encountered on the web (in the form of email attachments, web search results, etc.) without having to download them. The extension is installed on ChromeOS by default.[44] As of June 2019, this extension is no longer required, since the functionality exists natively.[45]
Google Cloud Connect was a plug-in for Microsoft Office 2003, 2007, and 2010 that could automatically store and synchronize any Excel document to Google Sheets (before the introduction of Drive). The online copy was automatically updated each time the Microsoft Excel document was saved. Microsoft Excel documents could be edited offline and synchronized later when online. Google Cloud Connect maintained previous Microsoft Excel document versions and allowed multiple users to collaborate by working on the same document at the same time.[46][47] However, Google Cloud Connect was discontinued as of April 30, 2013, because, according to Google, Google Drive achieves all of the above tasks "with better results".[48]
While Microsoft Excel maintains the 1900 leap year bug, Google Sheets "fixes" it by increasing the serial numbers of all dates before March 1, 1900 by one, so entering "0" and formatting it as a date returns December 30, 1899. Excel, on the other hand, interprets "0" as meaning December 31, 1899, which is formatted to read January 0, 1900.
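A short sketch makes the two conventions concrete. This is illustrative code only (the function names are ours, not Google's or Microsoft's): each converter anchors serial 0 at the respective product's epoch, and the Excel converter reproduces the nonexistent February 29, 1900.

```python
# Minimal sketch of the two epoch conventions described above.
# Excel treats serial 0 as the fictitious "January 0, 1900" (December 31,
# 1899) and counts a February 29, 1900 that never existed; Sheets anchors
# serial 0 at December 30, 1899, so every serial maps to a real date.
from datetime import date, timedelta

SHEETS_EPOCH = date(1899, 12, 30)  # serial 0 in Google Sheets

def sheets_serial_to_date(serial: int) -> date:
    """Convert a Google Sheets date serial to a calendar date."""
    return SHEETS_EPOCH + timedelta(days=serial)

def excel_serial_to_date(serial: int) -> date:
    """Convert an Excel 1900-system serial, reproducing the leap-year bug:
    serials from 60 up assume a phantom February 29, 1900. (Serial 60
    itself, Excel's nonexistent Feb 29, is mapped here to Feb 28.)"""
    base = date(1899, 12, 31)
    if serial >= 60:          # skip past the phantom Feb 29, 1900
        serial -= 1
    return base + timedelta(days=serial)

print(sheets_serial_to_date(0))   # 1899-12-30, as the article states
print(excel_serial_to_date(0))    # 1899-12-31 (displayed as "January 0, 1900")
print(sheets_serial_to_date(61) == excel_serial_to_date(61))  # True
```

From March 1, 1900 onward the two systems assign the same date to the same serial, which is why the discrepancy only affects earlier dates.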
Launched in December 2022, Simple ML is Google's add-on for machine learning.[49]
References
- ^
  - "Google Docs". Google Play. Retrieved May 19, 2025.
  - "Google Slides". Google Play. Retrieved May 19, 2025.
  - "Google Sheets". Google Play. Retrieved May 19, 2025.
- ^
  - "Google Docs 1.25.192.03". APKMirror. May 14, 2025. Retrieved May 19, 2025.
  - "Google Slides 1.25.192.01". APKMirror. May 7, 2025. Retrieved May 19, 2025.
  - "Google Sheets 1.25.192.01". APKMirror. May 7, 2025. Retrieved May 19, 2025.
- ^
  - "Google Docs". App Store. Retrieved May 19, 2025.
  - "Google Slides". App Store. Retrieved May 19, 2025.
  - "Google Sheets". App Store. Retrieved May 19, 2025.
- ^ Hill, Ian (June 18, 2013). "18 New Languages for Drive, Docs, Sheets, and Slides". Google Drive Blog. Retrieved October 29, 2016.
- ^ "Office editing makes it easier to work with Office files in Docs, Sheets, and Slides". G Suite Updates Blog. Retrieved August 12, 2019.
- ^ Surden, Esther (June 26, 2013). "At Madison Meetup, NJ's Rochelle Talks XL2Web Acquisition by Google". NJ Tech Weekly. Retrieved October 15, 2021.
- ^ Dawson, Christopher (October 30, 2010). "Google's 40 acquisitions in 2010: What about integration?". ZDNet. CBS Interactive. Retrieved June 1, 2017.
- ^ Rochelle, Jonathan (June 6, 2006). "It's nice to share". Official Google Blog. Retrieved October 29, 2016.
- ^ "Google Announces limited test on Google Labs: Google Spreadsheets". June 6, 2006. Retrieved October 29, 2016.
- ^ "Google Announces Google Docs & Spreadsheets". October 11, 2006. Retrieved October 29, 2016.
- ^ Jackson, Rob (March 5, 2010). "Google Buys DocVerse For Office Collaboration: Chrome, Android & Wave Implications?". Phandroid. Retrieved October 20, 2016.
- ^ Belomestnykh, Olga (April 15, 2010). "A rebuilt, more real-time Google documents". Google Drive Blog. Retrieved October 30, 2016.
- ^ Warren, Alan (June 5, 2012). "Google + Quickoffice = get more done anytime, anywhere". Official Google Blog. Retrieved October 30, 2016.
- ^ Sawers, Paul (October 23, 2012). "Google Drive apps renamed "Docs, Sheets and Slides", now available in the Chrome Web Store". The Next Web. Retrieved October 30, 2016.
- ^ "System requirements and browsers". Docs editors Help. Retrieved December 16, 2016.
- ^ "Dedicated desktop home pages for Google Docs, Sheets & Slides". G Suite Updates. June 25, 2014. Retrieved December 16, 2016.
- ^ Levee, Brian (April 30, 2014). "New mobile apps for Docs, Sheets and Slides—work offline and on the go". Official Google Blog. Retrieved December 16, 2016.
- ^ Tabone, Ryan (June 25, 2014). "Work with any file, on any device, any time with new Docs, Sheets, and Slides". Google Drive Blog. Retrieved December 16, 2016.
- ^ "New Google Slides, Docs, and Sheets apps for iOS". G Suite Updates. August 25, 2014. Retrieved December 16, 2016.
- ^ "A new look for the Google Docs, Sheets, and Slides viewers on the mobile web". G Suite Updates. July 27, 2015. Retrieved December 16, 2016.
- ^ Meyer, David (August 20, 2009). "Google Apps Script gets green light". CNet. Archived from the original on August 10, 2012. Retrieved March 26, 2011.
- ^ "See the history of changes made to a file". Docs editors Help. Retrieved February 20, 2017.
- ^ "Share a copy of a file in an Office format". G Suite Learning Center. Retrieved August 12, 2019.
- ^ "Google sheet for organizational access".
- ^ Ranjan, Ritcha (September 29, 2016). "Explore in Docs, Sheets and Slides makes work a breeze — and makes you look good, too". Google Docs Blog. Retrieved December 16, 2016.
- ^ Novet, Jordan (September 29, 2016). "Google updates Calendar, Drive, Docs, Sheets, and Slides with machine intelligence features". VentureBeat. Retrieved December 16, 2016.
- ^ Allan, Darren (September 30, 2016). "Google wants to better challenge Microsoft Office with these new features". TechRadar. Future plc. Retrieved December 16, 2016.
- ^ Lardinois, Frederic (June 1, 2017). "Google Sheets now uses machine learning to help you visualize your data". TechCrunch. AOL. Retrieved June 1, 2017.
- ^ Carman, Ashley (June 1, 2017). "Google Sheets is making it easier to create charts through natural language commands". The Verge. Vox Media. Retrieved June 1, 2017.
- ^ Miller, Ron (December 6, 2017). "Latest Google Sheets release helps automate pivot table creation". TechCrunch. Oath Inc. Retrieved December 14, 2017.
- ^ Gagliordi, Natalie (December 6, 2017). "Google brings new AI, machine learning features to Sheets". ZDNet. CBS Interactive. Retrieved December 14, 2017.
- ^ Weber, Ryan (October 19, 2016). "Five new ways to reach your goals faster with G Suite". The Keyword Google Blog. Retrieved December 14, 2016.
- ^ Gupta, Saurabh (March 11, 2014). "Bring a little something extra to Docs and Sheets with add-ons". Google Drive Blog. Retrieved October 30, 2016.
- ^ "Work on Google files offline". Drive Help. Retrieved January 14, 2017.
- ^ "Work on Google files offline". Drive Help. Retrieved January 14, 2017.
- ^ "Work on Google files offline". Drive Help. Retrieved January 14, 2017.
- ^ "Work with Office files". Docs editors Help. Retrieved October 30, 2016.
- ^ "Files you can store in Google Drive". Drive Help. Retrieved November 1, 2019.
- ^ "Insert or delete images or videos". Docs editors Help. Retrieved October 22, 2016.
- ^ "G Suite - Choose a Plan". Retrieved October 30, 2016.
- ^ "Google Spreadsheets | Charts". Google Developers. Retrieved March 1, 2020.
- ^ "Wikipedia and Wikidata Tools - G Suite Marketplace". gsuite.google.com.
- ^ "Copy and paste text and images". Retrieved October 30, 2016.
- ^ "Office Editing for Docs, Sheets & Slides". Chrome Web Store. Retrieved October 30, 2016.
- ^ "Remove the Office compatibility app". G Suite Admin Help. Retrieved August 12, 2019.
- ^ Sinha, Shan (February 24, 2011). "Google Cloud Connect for Microsoft Office available to all". Google Drive Blog. Retrieved October 30, 2016.
- ^ White, Charlie (February 24, 2011). "Now Anyone Can Sync Google Docs & Microsoft Office". Mashable. Retrieved October 30, 2016.
- ^ "Migrate from Google Cloud Connect to Google Drive". Apps Documentation and Support. Archived from the original on March 17, 2013. Retrieved October 30, 2016.
- ^ "Google Unveils a New Machine Learning Add-on for Google Sheets, Called Simple ML for Sheets, Which Allows Users to Leverage the Power of Machine Learning Without Any Coding Experience". December 13, 2022.
External links
- AI Spreadsheet. Sourcetable Inc., 2024. Retrieved November 14, 2024.
Google Sites
Google Sites is a structured wiki and web page creation tool included as part of the free, web-based Google Docs Editors suite offered by Google. The service includes Google Docs, Google Sheets, Google Slides, Google Drawings, Google Forms, and Google Keep. Google Sites is only available on the web.
History
Google Sites started out as JotSpot, the name and sole product of a software company that offered enterprise social software.[citation needed] It was targeted mainly at small- and medium-sized businesses. The company was founded by Joe Kraus and Graham Spencer, co-founders of Excite.
In February 2006, JotSpot was named to Business 2.0's "Next Net 25" list,[1] and in May 2006, it was honored as one of InfoWorld's "15 Start-ups to Watch".[2] In October 2006, JotSpot was acquired by Google.[3] Google announced a prolonged data transition of webpages created using Google Page Creator (also known as "Google Pages") to Google Sites servers in 2007. On February 28, 2008, Google Sites was unveiled using the JotSpot technology.[4] The service was free, but users needed a domain name, which Google offered for $10. However, as of May 21, 2008, Google Sites became available for free, separately from Google Apps, and without the need for a domain.[5]
In June 2016, Google introduced a complete rebuild of the Google Sites platform, named the New Google Sites,[6][7] along with a transition schedule from Classic Google Sites.[8] The new Google Sites does not use JotSpot technology.
In August 2020, the new Google Sites became the default option for website creation, and in November 2021, all websites made with classic Google Sites were archived.[9]
Censorship
Following a Turkish regional court ruling in 2009, all pages hosted on Google Sites were blocked in Turkey after it was alleged that one of the pages contained an insult to Turkey's founder, Mustafa Kemal Atatürk. In 2012, the European Court of Human Rights (ECHR) ruled the blockage a breach of Article 10 of the European Convention on Human Rights (Yildirim v Turkey, 2012).[10] The blockage was lifted in 2014.[11]
References
- ^ Schonfeld, Eric (February 28, 2008). "The Webtop". money.cnn.com. Retrieved February 28, 2008.
- ^ Gruman, Galen (May 15, 2006). "JotSpot delivers enterprise wikis and mashups". InfoWorld. Retrieved February 29, 2008.
- ^ "Spot on". Official Google Blog. November 1, 2006.
- ^ Auchard, Eric (February 28, 2008). "Google offers team Web site publishing service". Yahoo! News. Archived from the original on March 2, 2008. Retrieved February 28, 2008.
- ^ "Google Sites Help Group". May 22, 2008. Retrieved May 22, 2008.
- ^ Lardinois, Frederic. "Google's redesigned Google Sites goes live". TechCrunch. Retrieved January 11, 2018.
- ^ "Google Apps for Work – Email, Collaboration Tools And More". apps.google.com. Archived from the original on September 28, 2016. Retrieved June 20, 2016.
- ^ "An update on the classic Google Sites deprecation timeline". G Suite Updates Blog. Retrieved January 11, 2018.
- ^ "Convert your classic Sites to new Sites - Sites Help". Google Sites Help. Retrieved August 4, 2023.
- ^ 1 Crown Office Row (January 16, 2013). "Turkish block on Google site breached Article 10 rights, rules Strasbourg". UK Human Rights Blog. Retrieved June 15, 2013.
- ^ "Google Transparency Report – Turkey, Google Sites". Retrieved October 4, 2013.
Fairness (machine learning)
Fairness in machine learning (ML) refers to the various attempts to correct algorithmic bias in automated decision processes based on ML models. Decisions made by such models after a learning process may be considered unfair if they were based on variables considered sensitive (e.g., gender, ethnicity, sexual orientation, or disability).
As is the case with many ethical concepts, definitions of fairness and bias can be controversial. In general, fairness and bias are considered relevant when the decision process impacts people's lives.
Since machine-made decisions may be skewed by a range of factors, they might be considered unfair with respect to certain groups or individuals. An example could be the way social media sites deliver personalized news to consumers.
Context
Discussion about fairness in machine learning is a relatively recent topic. Since 2016 there has been a sharp increase in research into the topic.[1] This increase could be partly attributed to an influential report by ProPublica that claimed that the COMPAS software, widely used in US courts to predict recidivism, was racially biased.[2] One topic of research and discussion is the definition of fairness, as there is no universal definition, and different definitions can be in contradiction with each other, which makes it difficult to judge machine learning models.[3] Other research topics include the origins of bias, the types of bias, and methods to reduce bias.[4]
In recent years tech companies have made tools and manuals on how to detect and reduce bias in machine learning. IBM has tools for Python and R with several algorithms to reduce software bias and increase its fairness.[5][6] Google has published guidelines and tools to study and combat bias in machine learning.[7][8] Facebook has reported its use of a tool, Fairness Flow, to detect bias in its AI.[9] However, critics have argued that the company's efforts are insufficient, reporting little use of the tool by employees, as it cannot be used for all their programs, and even when it can, its use is optional.[10]
The discussion about quantitative ways to test fairness and unjust discrimination in decision-making predates the rather recent debate on fairness in machine learning by several decades.[11] In fact, a vivid discussion of this topic by the scientific community flourished during the mid-1960s and 1970s, mostly as a result of the American civil rights movement and, in particular, of the passage of the U.S. Civil Rights Act of 1964. However, by the end of the 1970s, the debate largely disappeared, as the different and sometimes competing notions of fairness left little room for clarity on when one notion of fairness may be preferable to another.
Language bias
Language bias refers to a type of statistical sampling bias tied to the language of a query that leads to "a systematic deviation in sampling information that prevents it from accurately representing the true coverage of topics and views available in their repository."[better source needed][12] Luo et al.[12] show that current large language models, as they are predominantly trained on English-language data, often present Anglo-American views as truth while systematically downplaying non-English perspectives as irrelevant, wrong, or noise. When queried with political ideologies like "What is liberalism?", ChatGPT, as it was trained on English-centric data, describes liberalism from the Anglo-American perspective, emphasizing aspects of human rights and equality, while equally valid aspects like "opposes state intervention in personal and economic life" from the dominant Vietnamese perspective and "limitation of government power" from the prevalent Chinese perspective are absent. Similarly, other political perspectives embedded in Japanese, Korean, French, and German corpora are absent in ChatGPT's responses. ChatGPT, presented as a multilingual chatbot, is in fact mostly "blind" to non-English perspectives.[12]
Gender bias
Gender bias refers to the tendency of these models to produce outputs that are unfairly prejudiced towards one gender over another. This bias typically arises from the data on which the models are trained. For example, large language models often assign roles and characteristics based on traditional gender norms; they might associate nurses or secretaries predominantly with women and engineers or CEOs with men.[13]
Political bias
Political bias refers to the tendency of algorithms to systematically favor certain political viewpoints, ideologies, or outcomes over others. Language models may also exhibit political biases. Since the training data includes a wide range of political opinions and coverage, the models might generate responses that lean towards particular political ideologies or viewpoints, depending on the prevalence of those views in the data.[14]
Controversies
The use of algorithmic decision making in the legal system has been a notable area under scrutiny. In 2014, then-U.S. Attorney General Eric Holder raised concerns that "risk assessment" methods may be putting undue focus on factors not under a defendant's control, such as their education level or socio-economic background.[15] The 2016 report by ProPublica on COMPAS claimed that black defendants were almost twice as likely as white defendants to be incorrectly labelled as higher risk, while the opposite mistake was made with white defendants.[2] The creator of COMPAS, Northpointe Inc., disputed the report, claiming their tool is fair and that ProPublica made statistical errors,[16] which ProPublica subsequently rebutted.[17]
Racial and gender bias has also been noted in image recognition algorithms. Facial and movement detection in cameras has been found to ignore or mislabel the facial expressions of non-white subjects.[18] In 2015, Google apologized after Google Photos mistakenly labeled a black couple as gorillas. Similarly, Flickr's auto-tag feature was found to have labeled some black people as "apes" and "animals".[19] A 2016 international beauty contest judged by an AI algorithm was found to be biased towards individuals with lighter skin, likely due to bias in training data.[20] A 2018 study of three commercial gender classification algorithms found that all three were generally most accurate when classifying light-skinned males and least accurate when classifying dark-skinned females.[21] In 2020, an image cropping tool from Twitter was shown to prefer lighter-skinned faces.[22] In 2022, the creators of the text-to-image model DALL-E 2 explained that the generated images were significantly stereotyped based on traits such as gender or race.[23][24]
Other areas where machine learning algorithms have been shown to be biased include job and loan applications. Amazon used software to review job applications that was sexist, for example by penalizing resumes that included the word "women".[25] In 2019, Apple's algorithm to determine credit limits for its new Apple Card gave significantly higher limits to males than to females, even for couples who shared their finances.[26] Mortgage-approval algorithms in use in the U.S. were shown by a 2021 report from The Markup to be more likely to reject non-white applicants.[27]
Limitations
Recent works underline the presence of several limitations in the current landscape of fairness in machine learning, particularly regarding what is realistically achievable in the ever-increasing real-world applications of AI.[28][29][30] For instance, the mathematical and quantitative approaches to formalizing fairness, and the related "de-biasing" approaches, may rely on overly simplistic and easily overlooked assumptions, such as the categorization of individuals into pre-defined social groups. Other delicate aspects include the interaction among several sensitive characteristics[21] and the lack of a clear, shared philosophical and/or legal notion of non-discrimination.
Finally, while machine learning models can be designed to adhere to fairness criteria, the ultimate decisions made by human operators may still be influenced by their own biases. This phenomenon occurs when decision-makers accept AI recommendations only when they align with their preexisting prejudices, thereby undermining the intended fairness of the system.[31]
Group fairness criteria
In classification problems, an algorithm learns a function to predict a discrete characteristic $Y$, the target variable, from known characteristics $X$. We model $A$ as a discrete random variable which encodes some characteristics contained or implicitly encoded in $X$ that we consider as sensitive characteristics (gender, ethnicity, sexual orientation, etc.). We finally denote by $R$ the prediction of the classifier. Now let us define three main criteria to evaluate if a given classifier is fair, that is, if its predictions are not influenced by some of these sensitive variables.[32]
Independence
We say the random variables $(R, A)$ satisfy independence if the sensitive characteristics $A$ are statistically independent of the prediction $R$, and we write $R \perp A$. We can also express this notion with the following formula:
$$P(R = r \mid A = a) = P(R = r \mid A = b) \qquad \forall r, \ \forall a, b$$
This means that the classification rate for each target class is equal for people belonging to different groups with respect to the sensitive characteristics $A$.
Yet another equivalent expression for independence can be given using the concept of mutual information between random variables, defined as
$$I(R, A) = H(R) + H(A) - H(R, A)$$
In this formula, $H$ is the entropy of a random variable. Then $(R, A)$ satisfy independence if $I(R, A) = 0$.
A possible relaxation of the independence definition includes introducing a positive slack $\epsilon > 0$ and is given by the formula:
$$P(R = r \mid A = a) \geq P(R = r \mid A = b) - \epsilon \qquad \forall r, \ \forall a, b$$
Finally, another possible relaxation is to require
$$\frac{P(R = r \mid A = a)}{P(R = r \mid A = b)} \geq 1 - \epsilon \qquad \forall r, \ \forall a, b.$$
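As an illustration of the mutual-information formulation, the sketch below (our own toy example with assumed variable names, not code from the cited literature) computes a plug-in estimate of $I(R, A)$ from sampled predictions; it is an empirical estimate, not a formal independence test.

```python
# Estimating I(R, A) = H(R) + H(A) - H(R, A) from finite samples of
# predictions R and a sensitive attribute A; values near 0 indicate the
# independence criterion approximately holds.
import numpy as np

def entropy(labels: np.ndarray) -> float:
    """Shannon entropy (in bits) of an empirical distribution."""
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

def mutual_information(r: np.ndarray, a: np.ndarray) -> float:
    """I(R, A); zero when R and A are independent."""
    joint = np.array([f"{ri}|{ai}" for ri, ai in zip(r, a)])  # joint labels
    return entropy(r) + entropy(a) - entropy(joint)

rng = np.random.default_rng(0)
a = rng.integers(0, 2, size=10_000)           # two demographic groups
r_fair = rng.integers(0, 2, size=10_000)      # predictions ignoring A
r_biased = (rng.random(10_000) < 0.3 + 0.4 * a).astype(int)  # rate depends on A

print(mutual_information(r_fair, a))    # ~0: independence holds
print(mutual_information(r_biased, a))  # clearly positive: A leaks into R
```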
Separation
We say the random variables $(R, A, Y)$ satisfy separation if the sensitive characteristics $A$ are statistically independent of the prediction $R$ given the target value $Y$, and we write $R \perp A \mid Y$. We can also express this notion with the following formula:
$$P(R = r \mid Y = q, A = a) = P(R = r \mid Y = q, A = b) \qquad \forall r, q, \ \forall a, b$$
This means that all the dependence of the decision $R$ on the sensitive attribute $A$ must be justified by the actual dependence of the true target variable $Y$.
Another equivalent expression, in the case of a binary target, is that the true positive rate and the false positive rate are equal (and therefore the false negative rate and the true negative rate are equal) for every value of the sensitive characteristics:
$$P(R = 1 \mid Y = 1, A = a) = P(R = 1 \mid Y = 1, A = b)$$
$$P(R = 1 \mid Y = 0, A = a) = P(R = 1 \mid Y = 0, A = b)$$
A possible relaxation of the given definitions is to allow the difference between rates to be a positive number lower than a given slack $\epsilon > 0$, rather than equal to zero.
In some fields, separation (the separation coefficient) in a confusion matrix is a measure of the distance (at a given level of the probability score) between the predicted cumulative percent negative and the predicted cumulative percent positive.
The greater this separation coefficient is at a given score value, the more effective the model is at differentiating between the set of positives and negatives at a particular probability cut-off. According to Mayes:[33] "It is often observed in the credit industry that the selection of validation measures depends on the modeling approach. For example, if the modeling procedure is parametric or semi-parametric, the two-sample K-S test is often used. If the model is derived by heuristic or iterative search methods, the measure of model performance is usually divergence. A third option is the coefficient of separation... The coefficient of separation, compared to the other two methods, seems to be most reasonable as a measure for model performance because it reflects the separation pattern of a model."
Sufficiency
We say the random variables $(R, A, Y)$ satisfy sufficiency if the sensitive characteristics $A$ are statistically independent of the target value $Y$ given the prediction $R$, and we write $Y \perp A \mid R$. We can also express this notion with the following formula:
$$P(Y = q \mid R = r, A = a) = P(Y = q \mid R = r, A = b) \qquad \forall q, r, \ \forall a, b$$
This means that the probability of actually being in each of the target classes is equal for two individuals with different sensitive characteristics, given that they were predicted to belong to the same class.
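All three criteria can be checked empirically by comparing conditional frequencies across groups. The sketch below is a toy illustration on assumed synthetic data, not a statistical test: it tabulates the group-wise rates whose equality independence, separation, and sufficiency respectively require.

```python
# Per group a of A, compare: P(R=1 | A=a) for independence,
# P(R=1 | Y=y, A=a) for separation, and P(Y=1 | R=r, A=a) for sufficiency.
import numpy as np

def rate(mask_num: np.ndarray, mask_den: np.ndarray) -> float:
    """Conditional frequency of mask_num within mask_den."""
    return mask_num[mask_den].mean() if mask_den.any() else float("nan")

def group_criteria(y, r, a):
    """Return the per-group rates behind the three group fairness criteria."""
    out = {}
    for g in np.unique(a):
        in_g = a == g
        out[g] = {
            "independence P(R=1|A)":    rate(r == 1, in_g),
            "separation   P(R=1|Y=1,A)": rate(r == 1, in_g & (y == 1)),
            "separation   P(R=1|Y=0,A)": rate(r == 1, in_g & (y == 0)),
            "sufficiency  P(Y=1|R=1,A)": rate(y == 1, in_g & (r == 1)),
            "sufficiency  P(Y=1|R=0,A)": rate(y == 1, in_g & (r == 0)),
        }
    return out

rng = np.random.default_rng(1)
a = rng.integers(0, 2, 20_000)
y = (rng.random(20_000) < 0.4 + 0.2 * a).astype(int)   # base rates differ by group
r = (rng.random(20_000) < 0.8 * y + 0.1).astype(int)   # imperfect classifier
for g, stats in group_criteria(y, r, a).items():
    print(g, {k: round(v, 3) for k, v in stats.items()})
```

Because the groups have different base rates in this toy data, the separation rates match across groups while the independence rates do not, which previews the incompatibility results summarized next.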
Relationships between definitions
Finally, we sum up some of the main results that relate the three definitions given above:
- Assuming $Y$ is binary, if $A$ and $Y$ are not statistically independent, and $R$ and $Y$ are not statistically independent either, then independence and separation cannot both hold except for rhetorical cases.
- If $(R, A, Y)$ as a joint distribution has positive probability for all its possible values and $A$ and $Y$ are not statistically independent, then separation and sufficiency cannot both hold except for rhetorical cases.
It is referred to as total fairness when independence, separation, and sufficiency are all satisfied simultaneously.[34] However, total fairness is not possible to achieve except in specific rhetorical cases.[35]
Mathematical formulation of group fairness definitions
Preliminary definitions
Most statistical measures of fairness rely on different metrics, so we will start by defining them. When working with a binary classifier, both the predicted and the actual classes can take two values: positive and negative. Let us start by explaining the different possible relations between predicted and actual outcomes:[36]
- True positive (TP): the case where both the predicted and the actual outcome are in the positive class.
- True negative (TN): the case where both the predicted and the actual outcome are assigned to the negative class.
- False positive (FP): a case predicted to be in the positive class when the actual outcome is in the negative one.
- False negative (FN): a case predicted to be in the negative class when the actual outcome is in the positive one.
These relations can be easily represented with a confusion matrix, a table that describes the accuracy of a classification model. In this matrix, columns and rows represent instances of the predicted and the actual cases, respectively.
By using these relations, we can define multiple metrics which can later be used to measure the fairness of an algorithm:
- Positive predicted value (PPV): the fraction of positive cases which were correctly predicted out of all the positive predictions. It is usually referred to as precision, and represents the probability of a correct positive prediction. It is given by the formula:
$$PPV = \frac{TP}{TP + FP}$$
- False discovery rate (FDR): the fraction of positive predictions which were actually negative out of all the positive predictions. It represents the probability of an erroneous positive prediction, and it is given by the formula:
$$FDR = \frac{FP}{TP + FP}$$
- Negative predicted value (NPV): the fraction of negative cases which were correctly predicted out of all the negative predictions. It represents the probability of a correct negative prediction, and it is given by the formula:
$$NPV = \frac{TN}{TN + FN}$$
- False omission rate (FOR): the fraction of negative predictions which were actually positive out of all the negative predictions. It represents the probability of an erroneous negative prediction, and it is given by the formula:
$$FOR = \frac{FN}{TN + FN}$$
- True positive rate (TPR): the fraction of positive cases which were correctly predicted out of all the positive cases. It is usually referred to as sensitivity or recall, and it represents the probability of the positive subjects being classified correctly as such. It is given by the formula:
$$TPR = \frac{TP}{TP + FN}$$
- False negative rate (FNR): the fraction of positive cases which were incorrectly predicted to be negative out of all the positive cases. It represents the probability of the positive subjects being classified incorrectly as negative ones, and it is given by the formula:
$$FNR = \frac{FN}{TP + FN}$$
- True negative rate (TNR): the fraction of negative cases which were correctly predicted out of all the negative cases. It represents the probability of the negative subjects being classified correctly as such, and it is given by the formula:
$$TNR = \frac{TN}{TN + FP}$$
- False positive rate (FPR): the fraction of negative cases which were incorrectly predicted to be positive out of all the negative cases. It represents the probability of the negative subjects being classified incorrectly as positive ones, and it is given by the formula:
$$FPR = \frac{FP}{TN + FP}$$
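These eight metrics translate directly into code. The following sketch (illustrative only, with assumed binary array inputs) computes them from vectors of actual outcomes and predictions:

```python
# Compute the eight confusion-matrix metrics defined above from binary
# arrays of actual outcomes y and predictions r.
import numpy as np

def confusion_metrics(y: np.ndarray, r: np.ndarray) -> dict:
    tp = int(((r == 1) & (y == 1)).sum())
    tn = int(((r == 0) & (y == 0)).sum())
    fp = int(((r == 1) & (y == 0)).sum())
    fn = int(((r == 0) & (y == 1)).sum())
    return {
        "PPV": tp / (tp + fp),  # precision
        "FDR": fp / (tp + fp),
        "NPV": tn / (tn + fn),
        "FOR": fn / (tn + fn),
        "TPR": tp / (tp + fn),  # sensitivity / recall
        "FNR": fn / (tp + fn),
        "TNR": tn / (tn + fp),
        "FPR": fp / (tn + fp),
    }

y = np.array([1, 1, 0, 0, 1, 0, 1, 0])
r = np.array([1, 0, 0, 1, 1, 0, 1, 0])
print(confusion_metrics(y, r))  # here TP=3, FN=1, FP=1, TN=3, so TPR=0.75, FPR=0.25
```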
The following criteria can be understood as measures of the three general definitions given at the beginning of this section, namely independence, separation, and sufficiency; the relationships between them are summarized in a table in the cited reference.[32]
To define these measures specifically, we will divide them into three big groups, as done in Verma et al.:[36] definitions based on the predicted outcome, definitions based on the predicted and actual outcomes, and definitions based on predicted probabilities and the actual outcome.
We will be working with a binary classifier and the following notation: $S$ refers to the score given by the classifier, which is the probability of a certain subject being in the positive or the negative class. $R$ represents the final classification predicted by the algorithm, and its value is usually derived from $S$; for example, $R$ will be positive when $S$ is above a certain threshold. $Y$ represents the actual outcome, that is, the real classification of the individual, and, finally, $A$ denotes the sensitive attributes of the subjects.
Definitions based on predicted outcome
The definitions in this section focus on the predicted outcome $R$ for various distributions of subjects. They are the simplest and most intuitive notions of fairness.
- Demographic parity, also referred to as statistical parity, acceptance rate parity, and benchmarking. A classifier satisfies this definition if the subjects in the protected and unprotected groups have an equal probability of being assigned to the positive predicted class, that is, if the following formula is satisfied:
$$P(R = 1 \mid A = a) = P(R = 1 \mid A = b)$$
- Conditional statistical parity. This basically consists of the definition above, but restricted to a subset of the instances; in mathematical notation, for a set $L$ of legitimate conditioning attributes:
$$P(R = 1 \mid L = l, A = a) = P(R = 1 \mid L = l, A = b) \qquad \forall l$$
Definitions based on predicted and actual outcomes
These definitions not only consider the predicted outcome $R$ but also compare it to the actual outcome $Y$.
- Predictive parity, also referred to as the outcome test. A classifier satisfies this definition if the subjects in the protected and unprotected groups have equal PPV, that is, if the following formula is satisfied:
$$P(Y = 1 \mid R = 1, A = a) = P(Y = 1 \mid R = 1, A = b)$$
- Mathematically, if a classifier has equal PPV for both groups, it will also have equal FDR, satisfying the formula:
$$P(Y = 0 \mid R = 1, A = a) = P(Y = 0 \mid R = 1, A = b)$$
- False positive error rate balance, also referred to as predictive equality. A classifier satisfies this definition if the subjects in the protected and unprotected groups have equal FPR, that is, if the following formula is satisfied:
$$P(R = 1 \mid Y = 0, A = a) = P(R = 1 \mid Y = 0, A = b)$$
- Mathematically, if a classifier has equal FPR for both groups, it will also have equal TNR, satisfying the formula:
$$P(R = 0 \mid Y = 0, A = a) = P(R = 0 \mid Y = 0, A = b)$$
- False negative error rate balance, also referred to as equal opportunity. A classifier satisfies this definition if the subjects in the protected and unprotected groups have equal FNR, that is, if the following formula is satisfied:
$$P(R = 0 \mid Y = 1, A = a) = P(R = 0 \mid Y = 1, A = b)$$
- Mathematically, if a classifier has equal FNR for both groups, it will also have equal TPR, satisfying the formula:
$$P(R = 1 \mid Y = 1, A = a) = P(R = 1 \mid Y = 1, A = b)$$
- Equalized odds, also referred to as conditional procedure accuracy equality and disparate mistreatment. A classifier satisfies this definition if the subjects in the protected and unprotected groups have equal TPR and equal FPR, satisfying the formula:
$$P(R = 1 \mid Y = y, A = a) = P(R = 1 \mid Y = y, A = b) \qquad y \in \{0, 1\}$$
- Conditional use accuracy equality. A classifier satisfies this definition if the subjects in the protected and unprotected groups have equal PPV and equal NPV, satisfying the formula:
$$P(Y = y \mid R = y, A = a) = P(Y = y \mid R = y, A = b) \qquad y \in \{0, 1\}$$
- Overall accuracy equality. A classifier satisfies this definition if the subjects in the protected and unprotected groups have equal prediction accuracy, that is, the probability of a subject being assigned to its correct class. This is, if it satisfies the following formula:
$$P(R = Y \mid A = a) = P(R = Y \mid A = b)$$
- Treatment equality. A classifier satisfies this definition if the subjects in the protected and unprotected groups have an equal ratio of FN and FP, satisfying the formula (see the sketch after this list for an empirical version):
$$\frac{FN_{A=a}}{FP_{A=a}} = \frac{FN_{A=b}}{FP_{A=b}}$$
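In practice, the definitions in this list are often operationalized as gaps between the group-wise metrics defined earlier. A minimal sketch follows, reusing the confusion_metrics helper from the previous snippet (the helper name and synthetic data are our assumptions, not a standard API):

```python
# Equalized odds asks the TPR and FPR gaps between groups to be (near)
# zero; predictive parity asks the same of PPV; equal opportunity of FNR.
import numpy as np

def group_metric_gaps(y, r, a) -> dict:
    """Absolute between-group differences of the fairness-relevant metrics."""
    g0, g1 = np.unique(a)[:2]
    m0 = confusion_metrics(y[a == g0], r[a == g0])
    m1 = confusion_metrics(y[a == g1], r[a == g1])
    return {
        "equalized odds (TPR gap)":    abs(m0["TPR"] - m1["TPR"]),
        "equalized odds (FPR gap)":    abs(m0["FPR"] - m1["FPR"]),
        "predictive parity (PPV gap)": abs(m0["PPV"] - m1["PPV"]),
        "equal opportunity (FNR gap)": abs(m0["FNR"] - m1["FNR"]),
    }

rng = np.random.default_rng(3)
a = rng.integers(0, 2, 30_000)
y = rng.integers(0, 2, 30_000)
# classifier whose error rate depends on group membership (0.15 vs 0.25):
r = np.where(rng.random(30_000) < 0.15 + 0.1 * a, 1 - y, y)
print(group_metric_gaps(y, r, a))  # nonzero gaps flag the violations
```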
Definitions based on predicted probabilities and actual outcome
These definitions are based on the actual outcome $Y$ and the predicted probability score $S$.
- Test-fairness, also known as calibration or matching conditional frequencies. A classifier satisfies this definition if individuals with the same predicted probability score $s$ have the same probability of being classified in the positive class, whether they belong to the protected or the unprotected group:
$$P(Y = 1 \mid S = s, A = a) = P(Y = 1 \mid S = s, A = b) \qquad \forall s$$
- Well-calibration is an extension of the previous definition. It states that when individuals inside or outside the protected group have the same predicted probability score $s$, they must have the same probability of being classified in the positive class, and this probability must be equal to $s$:
$$P(Y = 1 \mid S = s, A = a) = P(Y = 1 \mid S = s, A = b) = s \qquad \forall s$$
- Balance for the positive class. A classifier satisfies this definition if the subjects constituting the positive class from both protected and unprotected groups have an equal average predicted probability score $S$. This means that the expected value of the probability score for the protected and unprotected groups with a positive actual outcome is the same, satisfying the formula:
$$E[S \mid Y = 1, A = a] = E[S \mid Y = 1, A = b]$$
- Balance for the negative class. A classifier satisfies this definition if the subjects constituting the negative class from both protected and unprotected groups have an equal average predicted probability score $S$. This means that the expected value of the probability score for the protected and unprotected groups with a negative actual outcome is the same, satisfying the formula:
$$E[S \mid Y = 0, A = a] = E[S \mid Y = 0, A = b]$$
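Test-fairness can be probed by binning the score $S$ and comparing observed positive rates per group and per bin. The sketch below is a toy illustration on synthetic, perfectly calibrated data; the names and the binning choice are our assumptions:

```python
# Bin the scores S and compare the observed positive rate
# P(Y=1 | S in bin, A=a) per group; under well-calibration each bin's
# observed rate should also match the bin's score.
import numpy as np

def calibration_by_group(s, y, a, n_bins: int = 10):
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    table = {}
    for g in np.unique(a):
        rows = []
        for lo, hi in zip(bins[:-1], bins[1:]):
            in_bin = (a == g) & (s >= lo) & (s < hi)
            if in_bin.any():
                rows.append((round((lo + hi) / 2, 2), round(y[in_bin].mean(), 3)))
        table[g] = rows  # (mean bin score, observed positive rate) pairs
    return table

rng = np.random.default_rng(2)
a = rng.integers(0, 2, 50_000)
s = rng.random(50_000)                      # scores uniform in [0, 1]
y = (rng.random(50_000) < s).astype(int)    # outcomes drawn at rate s: calibrated
print(calibration_by_group(s, y, a)[0][:3]) # observed rates track bin scores
```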
Equal confusion fairness
With respect to confusion matrices, independence, separation, and sufficiency require the respective quantities listed below to show no statistically significant difference across sensitive characteristics.[35]
- Independence: (TP + FP) / (TP + FP + FN + TN) (i.e., the rate of positive predictions, $P(R = 1)$).
- Separation: TN / (TN + FP) and TP / (TP + FN) (i.e., specificity and recall).
- Sufficiency: TP / (TP + FP) and TN / (TN + FN) (i.e., precision and negative predictive value).
The notion of equal confusion fairness[37] requires the confusion matrix of a given decision system to have the same distribution when computed stratified over all sensitive characteristics.
Social welfare function
Some scholars have proposed defining algorithmic fairness in terms of a social welfare function. They argue that using a social welfare function enables an algorithm designer to consider fairness and predictive accuracy in terms of their benefits to the people affected by the algorithm. It also allows the designer to trade off efficiency and equity in a principled way.[38] Sendhil Mullainathan has stated that algorithm designers should use social welfare functions to recognize absolute gains for disadvantaged groups. For example, a study found that using a decision-making algorithm in pretrial detention rather than pure human judgment reduced the detention rates for Blacks, Hispanics, and racial minorities overall, even while keeping the crime rate constant.[39]
Individual fairness criteria
An important distinction among fairness definitions is the one between group and individual notions.[40][41][36][42] Roughly speaking, while group fairness criteria compare quantities at a group level, typically identified by sensitive attributes (e.g., gender, ethnicity, age, etc.), individual criteria compare individuals. In other words, individual fairness follows the principle that "similar individuals should receive similar treatments".
There is a very intuitive approach to fairness, which usually goes under the name of fairness through unawareness (FTU), or blindness, that prescribes not to explicitly employ sensitive features when making (automated) decisions. This is effectively a notion of individual fairness, since two individuals differing only in the value of their sensitive attributes would receive the same outcome.
However, in general, FTU is subject to several drawbacks, the main one being that it does not take into account possible correlations between sensitive attributes and the non-sensitive attributes employed in the decision-making process. For example, an agent with the (malignant) intention to discriminate on the basis of gender could introduce into the model a proxy variable for gender (i.e., a variable highly correlated with gender), effectively using gender information while at the same time remaining compliant with the FTU prescription.
The problem of which variables correlated to sensitive ones are fairly employable by a model in the decision-making process is a crucial one, and it is relevant for group concepts as well: independence metrics require a complete removal of sensitive information, while separation-based metrics allow for correlation, but only as far as the labeled target variable "justifies" it.
The most general concept of individual fairness was introduced in the pioneering work by Cynthia Dwork and collaborators in 2012[43] and can be thought of as a mathematical translation of the principle that the decision map taking features as input should be built such that it is able to "map similar individuals similarly", which is expressed as a Lipschitz condition on the model map. They call this approach fairness through awareness (FTA), precisely as a counterpoint to FTU, since they underline the importance of choosing the appropriate target-related distance metric to assess which individuals are similar in specific situations. Again, this problem is closely related to the point raised above about which variables can be seen as "legitimate" in particular contexts.
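A minimal sketch of the FTA condition follows. The Euclidean input distance and the Lipschitz constant of 1 are placeholder assumptions; choosing the right task-specific metric is precisely the hard part Dwork et al. emphasize.

```python
# A map f is individually fair (w.r.t. metrics d on inputs and D on
# outputs) if D(f(x1), f(x2)) <= L * d(x1, x2) for all pairs of inputs.
import numpy as np

def lipschitz_violations(f, xs: np.ndarray, lipschitz_l: float = 1.0):
    """Return input pairs on which f treats similar individuals too differently."""
    bad = []
    for i in range(len(xs)):
        for j in range(i + 1, len(xs)):
            d_in = np.linalg.norm(xs[i] - xs[j])   # d(x1, x2), placeholder metric
            d_out = abs(f(xs[i]) - f(xs[j]))       # D(f(x1), f(x2))
            if d_out > lipschitz_l * d_in:
                bad.append((i, j, float(d_in), float(d_out)))
    return bad

xs = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0]])
smooth = lambda x: 0.3 * x.sum()        # gently varying score: no violations
jumpy = lambda x: float(x[0] > 0.05)    # a threshold splits near-identical inputs
print(lipschitz_violations(smooth, xs))  # []
print(lipschitz_violations(jumpy, xs))   # flags the pair (0, 1)
```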
Causality-based metrics
Causal fairness measures the frequency with which two nearly identical users or applications that differ only in a set of characteristics with respect to which resource allocation must be fair receive identical treatment.[44][dubious – discuss]
An entire branch of academic research on fairness metrics is devoted to leveraging causal models to assess bias in machine learning models. This approach is usually justified by the fact that the same observational distribution of data may hide different causal relationships among the variables at play, possibly with different interpretations of whether the outcome is affected by some form of bias or not.[32]
Kusner et al.[45] propose to employ counterfactuals, and define a decision-making process counterfactually fair if, for any individual, the outcome does not change in the counterfactual scenario where the sensitive attributes are changed. The mathematical formulation reads:
$$P(R_{A \leftarrow a} = y \mid X = x, A = a) = P(R_{A \leftarrow a'} = y \mid X = x, A = a) \qquad \forall y, \ \forall a, a'$$
That is: taking a random individual with sensitive attribute $A = a$ and other features $X = x$, and the same individual if she had $A = a'$, they should have the same chance of being accepted. The symbol $R_{A \leftarrow a'}$ represents the counterfactual random variable $R$ in the scenario where the sensitive attribute $A$ is fixed to $a'$. The conditioning on $X = x, A = a$ means that this requirement is at the individual level, in that we are conditioning on all the variables identifying a single observation.
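A toy structural causal model can illustrate the idea (this is our own example, not Kusner et al.'s construction, and all names are hypothetical): holding an individual's latent background variable fixed, we intervene on $A$ and check whether the prediction changes.

```python
# Counterfactual fairness asks whether the prediction changes when we
# intervene on the sensitive attribute A while holding the latent
# background variable U (the "individual") fixed.

def structural_model(u: float, a: int) -> float:
    """Feature X is caused by both A and the latent U: X := 2U + 1.5A."""
    return 2.0 * u + 1.5 * a

def predictor_naive(x: float) -> int:
    return int(x > 2.0)        # uses X, which carries A's influence

def predictor_fair(u: float) -> int:
    return int(u > 0.75)       # uses only the A-independent part of X

u = 0.9                        # one individual's latent background
for a in (0, 1):               # factual vs. counterfactual world
    x = structural_model(u, a)
    print(a, predictor_naive(x), predictor_fair(u))
# predictor_naive flips with A (not counterfactually fair);
# predictor_fair returns the same output in both worlds.
```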
Machine learning models are often trained upon data where the outcome depended on the decision made at that time.[46] For example, if a machine learning model has to determine whether an inmate will recidivate, and the model's output will determine whether the inmate is released early, then the outcome could depend on whether the inmate was released early or not. Mishler et al.[47] propose a formula for counterfactual equalized odds:
$$P(R = 1 \mid Y^d = y, A = a) = P(R = 1 \mid Y^d = y, A = b)$$
where $Y^d$ is a random variable denoting the outcome given that the decision $d$ was taken, and $A$ is a sensitive feature.
Plecko and Bareinboim[48] propose a unified framework for the causal analysis of fairness. They suggest the use of a Standard Fairness Model, consisting of a causal graph with four types of variables:
- sensitive attributes ($X$),
- target variable ($Y$),
- mediators ($W$) between $X$ and $Y$, representing possible indirect effects of the sensitive attributes on the outcome,
- variables possibly sharing a common cause with $X$ ($Z$), representing possible spurious (i.e., non-causal) effects of the sensitive attributes on the outcome.
Within this framework, Plecko and Bareinboim[48] are therefore able to classify the possible effects that sensitive attributes may have on the outcome. Moreover, the granularity at which these effects are measured, namely the conditioning variables used to average the effect, is directly connected to the "individual vs. group" aspect …