During the debates over the design and ratification of the United States Constitution, in 1787 and 1788, a large number of writers in the popular press used pseudonyms. This list shows some of the more important commentaries and the (known or presumed) authors responsible for them. Note: the identity of the person behind several of these pseudonyms is not known for certain.
|
https://en.wikipedia.org/wiki/List_of_pseudonyms_used_in_the_American_Constitutional_debates
|
This list of stage names lists names used by those in the entertainment industry, alphabetically by their stage name's surname followed by their birth name. Individuals who dropped their last name and substituted their middle name as their last name are listed. Those with a one-word stage name are listed in a separate article.
In many cases, performers have legally changed their name to their stage name.[1]
Note: Many cultures have their own naming customs and systems, some rather intricate. Minor changes or alterations, including reversing Eastern-style formats, do not in and of themselves qualify as stage names and should not normally be included. For example, Björk is not a stage name; it is part of her full Icelandic name, Björk Guðmundsdóttir. Her second name is a patronymic rather than a family name, following Icelandic naming conventions.
People are not listed here if they fall into one or more of the following categories:
Note: Elton John is listed here because he used the name professionally before he legally adopted it in 1972.
Includes stage names that contain numbers or other non-alphabetic characters.
|
https://en.wikipedia.org/wiki/List_of_stage_names
|
A mononym is a name composed of only one word. An individual who is known and addressed by a mononym is a mononymous person.
A mononym may be the person's only name, given to them at birth. This was routine in most ancient societies, and remains common in modern societies such as in Afghanistan,[1] Bhutan, some parts of Indonesia (especially among older Javanese people), Myanmar, Mongolia, Tibet,[2] and South India.
In other cases, a person may select a single name from their polynym or adopt a mononym as a chosen name, pen name, stage name, or regnal name. A popular nickname may effectively become a mononym, in some cases adopted legally. For some historical figures, a mononym is the only name that is still known today.
The word mononym comes from English mono- ("one", "single") and -onym ("name", "word"), ultimately from Greek mónos (μόνος, "single") and ónoma (ὄνομα, "name").[a][b]
The structure of persons' names has varied across time and geography. In some societies, individuals have been mononymous, receiving only a single name. Alulim, first king of Sumer, is one of the earliest names known; Narmer, an ancient Egyptian pharaoh, is another. In addition, Biblical names like Adam, Eve, Moses, or Abraham were typically mononymous, as were names in the surrounding cultures of the Fertile Crescent.[4]
Ancient Greek names like Heracles, Homer, Plato, Socrates, and Aristotle also follow the pattern, with epithets (similar to second names) only used subsequently by historians to distinguish between individuals with the same name, as in the case of Zeno the Stoic and Zeno of Elea; likewise, patronymics or other biographic details (such as city of origin, or another place name or occupation the individual was associated with) were used to specify whom one was talking about, but these details were not considered part of the name.[5]
A departure from this custom occurred, for example, among the Romans, who by the Republican period and throughout the Imperial period used multiple names: a male citizen's name comprised three parts (this was mostly typical of the upper class, while others would usually have only two names): praenomen (given name), nomen (clan name) and cognomen (family line within the clan) – the nomen and cognomen were almost always hereditary.[6] Famous ancient Romans who today are usually referred to by mononym include Cicero (Marcus Tullius Cicero) and Terence (Publius Terentius Afer). Roman emperors, for example Augustus, Caligula, and Nero, are also often referred to in English by mononym.
Mononyms in other ancient cultures include Hannibal, the Celtic queen Boudica, and the Numidian king Jugurtha.
During the early Middle Ages, mononymity slowly declined, with northern and eastern Europe keeping the tradition longer than the south. The Dutch Renaissance scholar and theologian Erasmus is a late example of mononymity; though sometimes referred to as "Desiderius Erasmus" or "Erasmus of Rotterdam", he was christened only as "Erasmus", after the martyr Erasmus of Formiae.[7]
Composers in the ars nova and ars subtilior styles of late medieval music were often known mononymously—potentially because their names were sobriquets—such as Borlet, Egardus, Egidius, Grimace, Solage, and Trebor.[8]
Naming practices of indigenous peoples of the Americas are highly variable, with one individual often bearing more than one name over a lifetime. In European and American histories, prominent Native Americans are usually mononymous, using a name that was frequently garbled and simplified in translation. For example, the Aztec emperor whose name was preserved in Nahuatl documents as Motecuhzoma Xocoyotzin was called "Montezuma" in subsequent histories. In current histories he is often named Moctezuma II, using the European custom of assigning regnal numbers to hereditary heads of state.
Native Americans from the 15th through 19th centuries, whose names are often thinly documented in written sources, are still commonly referenced with a mononym. Examples include Anacaona (Haiti, 1464–1504), Agüeybaná (Puerto Rico, died 1510), Diriangén (Nicaragua, died 1523), Urracá (Panama, died 1531), Guamá (Cuba, died 1532), Atahualpa (Peru, 1497–1533), Lempira (Honduras, died 1537), Lautaro (Chile, 1534–1557), Tamanaco (Venezuela, died 1573), Pocahontas (United States, 1595–1617), Auoindaon (Canada, fl. 1623), Cangapol (Argentina, fl. 1735), and Tecumseh (United States, 1768–1813).
Prominent Native Americans having a parent of European descent often received a European-style polynym in addition to a name or names from their indigenous community. The name of the Dutch-Seneca diplomat Cornplanter is a translation of a Seneca-language mononym (Kaintwakon, roughly "corn-planter"). He was also called "John Abeel" after his Dutch father. His later descendants, including Jesse Cornplanter, used "Cornplanter" as a surname instead of "Abeel".
Some French authors have shown a preference for mononyms. In the 17th century, the dramatist and actor Jean-Baptiste Poquelin (1622–73) took the mononym stage name Molière.[9]
In the 18th century, François-Marie Arouet (1694–1778) adopted the mononym Voltaire, for both literary and personal use, in 1718 after his imprisonment in Paris' Bastille, to mark a break with his past. The new name combined several features. It was an anagram for a Latinized version (where "u" becomes "v", and "j" becomes "i") of his family surname, "Arouet, l[e] j[eune]" ("Arouet, the young"); it reversed the syllables of the name of the town his father came from, Airvault; and it has implications of speed and daring through similarity to French expressions such as voltige, volte-face and volatile. "Arouet" would not have served the purpose, given that name's associations with "roué" and with an expression that meant "for thrashing".[10]
The 19th-century French author Marie-Henri Beyle (1783–1842) used many pen names, most famously the mononym Stendhal, adapted from the name of the little Prussian town of Stendal, birthplace of the German art historian Johann Joachim Winckelmann, whom Stendhal admired.[11]
Nadar[12] (Gaspard-Félix Tournachon, 1820–1910) was an early French photographer.
In the 20th century, Sidonie-Gabrielle Colette (1873–1954, author of Gigi, 1945) used her actual surname as her mononym pen name, Colette.[13]
In the 17th and 18th centuries, most Italian castrato singers used mononyms as stage names (e.g. Caffarelli, Farinelli). The German writer, mining engineer, and philosopher Georg Friedrich Philipp Freiherr von Hardenberg (1772–1801) became famous as Novalis.[14]
The 18th-century Italian painter Bernardo Bellotto, who is now ranked as an important and original painter in his own right, traded on the mononymous pseudonym of his uncle and teacher, Antonio Canal (Canaletto), in those countries—Poland and Germany—where his famous uncle was not active, calling himself likewise "Canaletto". Bellotto remains commonly known as "Canaletto" in those countries to this day.[15]
The 19th-century Dutch writer Eduard Douwes Dekker (1820–87), better known by his mononymous pen name Multatuli[16] (from the Latin multa tuli, "I have suffered [or borne] many things"), became famous for the satirical novel Max Havelaar (1860), in which he denounced the abuses of colonialism in the Dutch East Indies (now Indonesia).
The 20th-century British author Hector Hugh Munro (1870–1916) became known by his pen name, Saki. In 20th-century Poland, the theater-of-the-absurd playwright, novelist, painter, photographer, and philosopher Stanisław Ignacy Witkiewicz (1885–1939) after 1925 often used the mononymous pseudonym Witkacy, a conflation of his surname (Witkiewicz) and middle name (Ignacy).[17]
Monarchs and other royalty, for example Napoleon, have traditionally availed themselves of the privilege of using a mononym, modified when necessary by an ordinal or epithet (e.g., Queen Elizabeth II or Charles the Great). This is not always the case: King Carl XVI Gustaf of Sweden has two names. While many European royals have formally sported long chains of names, in practice they have tended to use only one or two, and not to use surnames.[c]
In Japan, the emperor and his family have no surname, only a given name, such as Hirohito, which in practice in Japanese is rarely used: out of respect and as a measure of politeness, Japanese prefer to say "the Emperor" or "the Crown Prince".[19]
Roman Catholic popes have traditionally adopted a single regnal name upon their election. John Paul I broke with this tradition – adopting a double name honoring his two predecessors[20] – and his successor John Paul II followed suit, but Benedict XVI reverted to the use of a single name.
Surnames were introduced in Turkey only after World War I, by the country's first president, Mustafa Kemal Atatürk, as part of his Westernization and modernization programs.[21]
Some North American Indigenous people continue their nations' traditional naming practices, which may include the use of single names. In Canada, where government policy often included the imposition of Western-style names, one of the recommendations of the Truth and Reconciliation Commission of Canada was for all provinces and territories to waive fees to allow Indigenous people to legally assume traditional names, including mononyms.[22] In Ontario, for example, it is now legally possible to change to a single name, or register one at birth, for members of Indigenous nations which have a tradition of single names.[23]
In modern times, in countries that have long been part of the East Asian cultural sphere (Japan, the Koreas, Vietnam, and China), mononyms are rare. An exception pertains to the Emperor of Japan.
In the past, mononyms were common in Indonesia, especially in Javanese names.[24] Some younger people may still have them, but the practice is becoming rarer, since mononyms have not been allowed for newborns since 2022 (see Naming law § Indonesia).[25]
Single names still also occur in Tibet.[2] Most Afghans also have no surname.[26]
In Bhutan, most people use either only one name or a combination of two personal names, typically given by a Buddhist monk. There are no inherited family names; instead, Bhutanese differentiate themselves with nicknames or prefixes.[27]
In the Near East's Arab world, the Syrian poet Ali Ahmad Said Esber (born 1930) at age 17 adopted the mononym pseudonym Adunis, sometimes also spelled "Adonis". A perennial contender for the Nobel Prize in Literature, he has been described as the greatest living poet of the Arab world.[28]
In the West, mononymity, along with its use by royals in conjunction with titles, has primarily been associated with famous people such as prominent writers, artists, entertainers, musicians and athletes.[d]
The comedian and illusionist Teller, the silent half of the duo Penn & Teller, legally changed his original polynym, Raymond Joseph Teller, to the mononym "Teller" and possesses a United States passport issued in that single name.[30][31] Similarly, Kanye West legally changed his name to the mononym "Ye".[32]
In Brazil, it is very common for footballers to go by one name, for simplicity and as a personal brand. Examples include Pelé, Ronaldo and Kaká. Brazil's President Luiz Inácio Lula da Silva is known as "Lula", a nickname he officially added to his full name. Such mononyms, which take their origin in given names, surnames or nicknames, are often used because Portuguese names tend to be rather long.
In Australia, where nicknames and short names are extremely common, individuals with long names of European origin (such as former Premier of New South Wales Gladys Berejiklian, who is of Armenian descent, and soccer manager Ange Postecoglou, who was born in Greece) will often be referred to by a mononym, even in news headlines. Similarly, Greek basketball player Giannis Antetokounmpo is often referred to outside Greece as just "Giannis" due to the length of his last name.
Western computer systems do not always support mononyms, most still requiring both a given name and a surname. Some companies work around this by entering the mononym as both the given name and the surname.
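The workaround described above can be sketched as a small normalization step. The function and field names here are purely illustrative, not drawn from any particular system:

```python
def normalize_name(given_name, surname=None):
    """Fit a possibly mononymous user into a schema that requires
    both a given name and a surname, by duplicating the mononym."""
    if not surname:  # mononymous user: no family name to supply
        surname = given_name
    return {"given_name": given_name, "surname": surname}

# A mononymous performer ends up with the single name in both fields:
print(normalize_name("Teller"))
# A polynymous user is stored unchanged:
print(normalize_name("Raymond", "Teller"))
```

This keeps downstream code that assumes two fields working, at the cost of displaying the name twice in some interfaces.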
Mononyms are commonly used by many association footballers.
A large number of Brazilian footballers use mononyms, such as Alisson, Kaká, Neymar, Ronaldo and Ronaldinho.
Players from other countries where Portuguese is spoken, such as Portugal itself and Lusophone countries in Africa, also occasionally use mononyms, such as Bruma, Otávio, Pepe, Toti and Vitinha from Portugal.
Australian manager Ange Postecoglou and Spanish manager Pep Guardiola are commonly known as "Ange" and "Pep", even in news headlines.
|
https://en.wikipedia.org/wiki/Mononymous_person
|
A nickname, in some circumstances also known as a sobriquet or informally a "moniker", is an informal substitute for the proper name of a person, place, or thing,[1] used to express affection, playfulness, contempt, or a particular character trait. It is distinct from a pseudonym, stage name, or title, although the concepts can overlap.
The compound word ekename, meaning "additional name", was attested as early as 1303.[2] This word was derived from the Old English word eac, meaning "also",[3] related to eacian, meaning "to increase".[4] By the 15th century, the misdivision of the syllables of the phrase "an ekename" led to its rephrasing as "a nekename".[5] Though the spelling has changed, the meaning of the word has remained relatively stable ever since.
English nicknames are generally represented in quotes between the bearer's first and last names (e.g., Dwight David "Ike" Eisenhower and Daniel Lamont "Bubba" Franks). It is also common for the nickname to be identified after a comma following the full real name, or later in the body of the text, such as in an obituary (e.g., Frankie Frisch, "The Fordham Flash"). Any middle name is generally omitted, especially in speech. Like English, German uses (German-style) quotation marks between the first and last names (e.g., Andreas Nikolaus „Niki“ Lauda). Other languages may use other conventions; for example, Italian writes the nickname after the full name followed by detto, "called" (e.g., Salvatore Schillaci detto Totò); in Spanish the nickname is written in formal contexts at the end, in quotes, following alias (e.g., Alfonso Tostado, alias «el Abulense»); in Portuguese the nickname is written after the full name, followed by vulgo or in parentheses (e.g., Edson Arantes do Nascimento, vulgo Pelé / Edson Arantes do Nascimento (Pelé)); and Slovenian represents nicknames after a dash or hyphen (e.g., Franc Rozman – Stane). The latter may confuse, because it resembles an English convention sometimes used for married and maiden names.
In Viking societies, many people had heiti, viðrnefni, or kenningarnöfn (Old Norse terms for nicknames)[6] which were used in addition to, or instead of, the first name. In some circumstances, the giving of a nickname had a special status in Viking society, in that it created a relationship between the name maker and the recipient of the nickname, to the extent that the creation of a nickname also often entailed a formal ceremony and an exchange of gifts, known in Old Norse as nafnfestr ('fastening a name'). Nicknames are widely attested in Anglo-Saxon England, and similar social models have been applied to these names.[7]
In Bengali society, for example, people will often have two names: a daknam (pet name), which is the name used by family and friends, and a bhalonam, which is their formal name.[8][9]
In England, some surnames have nicknames traditionally attached. A man with the surname 'Clark' will be nicknamed 'Nobby'; the surname 'Miller' will have the nickname 'Dusty' (alluding to the flour dust of a miller at work); the surname 'Adams' has the nickname 'Nabby'. Several other nicknames are linked traditionally with surnames, including Chalky White, Bunny Warren, Tug Wilson, and Spud Baker. Other English nicknames allude to a person's origins. A Scotsman may be nicknamed 'Jock', an Irishman 'Paddy' (alluding to Saint Patrick, the patron saint of Ireland) or 'Mick' (alluding[clarification needed] to the preponderance of Roman Catholicism in Ireland), and a Welshman may be nicknamed 'Taffy' (from Welsh Dafydd, David). Some nicknames referred ironically to a person's physical characteristics, such as 'Lofty' for a short person, 'Curly' for a bald man, or 'Bluey' for a redhead.
In Chinese culture, nicknames are frequently used within a community among relatives, friends, and neighbors. A typical southern Chinese nickname often begins with a "阿" followed by another character, usually the last character of the person's given name.[10] For example, Taiwanese politician Chen Shui-bian (陳水扁) is sometimes referred to as "阿扁" (A-Bian). In many Chinese communities of Southeast Asia, nicknames may also connote one's occupation or status. For example, the landlord might be known simply as Towkay (simplified Chinese: 头家; traditional Chinese: 頭家; Pe̍h-ōe-jī: thâu-ke; Hokkien for "boss") to his tenants or workers, while a bread seller would be called "Mianbao Shu" 面包叔 (literally, Uncle Bread).
In the context of information technology, nickname is a common synonym for the screen name or handle of a user. In computer networks it has become common practice for every person to also have one or more nicknames, for pseudonymity, to avoid ambiguity, or simply because the natural name or technical address would be too long to type or would take too much space on the screen.
In the IRC (Internet Relay Chat) text-based messaging system, first developed in the late 1980s, a nickname (or "nick") was required for every client that connected to an IRC server.[11]
Nicknames are usually applied to a person and are not always chosen by the recipient. Some nicknames are derogatory name-calling.
A nickname can be a shortened or modified variation on a person's real name.
A nickname may also reflect the relationship between the bearer and the person using it; such a nickname is a term of endearment.
Many geographical places have titles, or alternative names, which have positive implications. Paris, for example, is the "City of Light", Rome is the "Eternal City", Venice is "La Serenissima", and New Jersey is the "Garden State". These alternative names are often used to boost the status of such places, contrary to the usual role of a nickname. Many places or communities, particularly in the US, adopt titles because they can help establish a civic identity, help outsiders recognize a community or attract people to it, promote civic pride, and build community unity.[13] Titles and slogans that successfully create a new community "ideology or myth"[14] are also believed to have economic value.[13] Their economic value is difficult to measure,[13] but there are anecdotal reports of cities that have achieved substantial economic benefits by "branding" themselves with new slogans.[14]
By contrast, older city nicknames may be critical: London is still occasionally referred to as "The Smoke" in memory of its notorious "pea-souper" smogs (smoke-filled fogs) of the 19th and early 20th centuries, and Edinburgh was "Auld Reekie" for the same reason, as countless coal fires polluted its atmosphere.
Besides or replacing the demonym, some places have collective nicknames for their inhabitants. Many examples of this practice are found in Wallonia and in Belgium in general, where such a nickname is referred to in French as a "blason populaire".
|
https://en.wikipedia.org/wiki/Nickname
|
A pseudonymous remailer or nym server, as opposed to an anonymous remailer, is an Internet software program designed to allow people to write pseudonymous messages on Usenet newsgroups and send pseudonymous email. Unlike purely anonymous remailers, it assigns its users a user name, and it keeps a database of instructions on how to return messages to the real user. These instructions usually involve the anonymous remailer network itself, thus protecting the true identity of the user.
Primordial pseudonymous remailers once recorded enough information to trace the identity of the real user, making it possible for someone to obtain the identity of the real user through legal or illegal means. This form of pseudonymous remailer is no longer common.
David Chaum wrote an article in 1981 that described many of the features present in modern pseudonymous remailers.[1]
The Penet remailer, which ran from 1993 to 1996, was a popular pseudonymous remailer.
A nym server (short for "pseudonym server") is a server that provides an untraceable e-mail address, such that neither the nym server operator nor the operators of the remailers involved can discover which nym corresponds to which real identity.
To set up a nym, one creates a PGP keypair and submits it to the nym server, along with instructions (called a reply block) to anonymous remailers (such as Cypherpunk or Mixmaster) on how to send a message to one's real address. The nym server returns a confirmation through this reply block. One then sends a message to the address in the confirmation.
To send a message through the nym server so that the From address is the nym, one adds a few headers,[clarification needed] signs the message with one's nym key, encrypts it with the nym server key, and sends the message to the nym server, optionally routing it through some anonymous remailers. When the nym server receives the message, it decrypts it and sends it on to the intended recipient, with the From address indicating one's nym.
When the nym server gets a message addressed to the nym, it appends it to the nym's reply block and sends it to the first remailer in the chain, which sends it to the next, and so on until it reaches the user's real address. It is considered good practice to include instructions to encrypt the message on the way, so that someone (or some organization) doing in/out traffic analysis on the nym server cannot easily match the message received by the user to the one sent by the nym server.
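The two directions of traffic described above can be modeled in a short sketch. Everything here (the class and method names, the plain-list reply block) is illustrative only; a real nym server signs and encrypts with PGP at every step, and the reply block itself is encrypted so the server cannot read the full path:

```python
class NymServer:
    """Toy model of a nym server: it maps each nym to a reply block,
    i.e. a remailer chain plus the user's real address."""

    def __init__(self):
        self.reply_blocks = {}  # nym -> (remailer chain, real address)

    def register(self, nym, chain, real_address):
        self.reply_blocks[nym] = (chain, real_address)

    def send_as_nym(self, nym, recipient, body):
        # Outgoing mail: the server rewrites the From header to the nym,
        # so the recipient never learns the sender's real address.
        return {"From": nym, "To": recipient, "Body": body}

    def deliver_to_nym(self, nym):
        # Incoming mail: forwarded hop by hop along the reply block;
        # each remailer sees only the next hop, and only the last hop
        # knows the real address.
        chain, real_address = self.reply_blocks[nym]
        return list(chain) + [real_address]

server = NymServer()
server.register("anon123", ["remailer-a", "remailer-b"], "alice@example.org")
out = server.send_as_nym("anon123", "bob@example.org", "hello")
path = server.deliver_to_nym("anon123")  # path an incoming message follows
```

The key design point this illustrates is the split of knowledge: the recipient sees only the nym, and each remailer in the chain sees only one neighboring hop.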
Existing "multi-use reply block" nym servers were shown to be susceptible to passive traffic analysis with one month's worth of incoming spam (based on 2005 figures) in a paper by Bram Cohen, Len Sassaman, and Nick Mathewson.[2]
|
https://en.wikipedia.org/wiki/Nym_server
|
Nymwars is a series of conflicts over policies that mandate that all users of certain internet communications platforms identify themselves using their legal names. The term is mostly associated with Google's name policies on Google+ and YouTube. Nymwars is a blend word composed from (pseudo)nym and wars. The name appears to have gained prominence as the hashtag "#nymwars" on Twitter.
Conflicts regarding Google+ began in July 2011, when the social networking site began enforcing its real-name-only policy by suspending the accounts of users it felt were not following the policy.[1] Pseudonyms, nicknames, and non-standard real names (for example, mononyms or names that include scripts from multiple languages) were suspended. The issue was settled in July 2014, when Google announced that it was ending its real-name-only policy.[2]
A predecessor to the Google+ conflict was Blizzard's RealID, which, starting in July 2010, exposed the name on the player's credit card; it was mandatory for some game features (cross-game chat) and was nearly made mandatory for posting on discussion forums.[3][4][5]
These issues have existed since the beginning of online identity, and are related to the alleged online disinhibition effect. The resulting discussions have raised many issues regarding naming, cultural sensitivity, public and private identity, privacy, and the role of social media in modern discourse. The debate has been covered widely in the press, including Wired,[6] The Atlantic,[7] and The New York Times.[8]
Google Plus was launched in late June 2011. At the time of launch, the site's user content and conduct policy stated, "To help fight spam and prevent fake profiles, use the name your friends, family or co-workers usually call you."[9] Many users signed up using nicknames, handles, stage names, or other names by which they were commonly known, but which did not necessarily match the name on their government-issued ID.
The first suspensions for name-related reasons occurred in July 2011, and included Limor Fried's account, which included the name "LadyAda" (by which she is widely known), nerdcore rapper Doctor Popular, and LA Weekly and Los Angeles Times columnist A.V. Flox.[10] Account suspensions over the following weeks included those who were using nicknames, handles, and pseudonyms; those whose legal names were unusual, including mononymous users; and some users who Google mistakenly believed were impersonating famous individuals, such as Facebook employee and Mozilla founder Blake Ross[11] and actor William Shatner.[12]
Awareness of the issue grew rapidly via Twitter, Google+ itself, and a variety of media outlets. By early August, the Electronic Frontier Foundation had posted "A Case for Pseudonymity" in response to the issue.[13]
Google initially responded on 25 July, when vice president Bradley Horowitz promised improvements to the suspension and enforcement process.[14] On 17 August, Google implemented a "grace period" before suspension,[15] and on 19 August, a "verified account" program for celebrities and high-profile users.[16]
On 19 October 2011, at the Web 2.0 Summit, Google executive Vic Gundotra revealed that Google+ would begin supporting pseudonyms and other types of identity within a few months.[17] However, as of the 16 October 2012 policy documents, Google still required that participants "Use your common first and last name", adding "our Name Policy may not be for everyone at this time."[18]
On 15 July 2014, Google indicated that the real-name policy was being ended, announcing that "Over the years, as Google+ grew and its community became established, we steadily opened up this policy, from allowing +Page owners to use any name of their choosing to letting YouTube users bring their usernames into Google+. Today, we are taking the last step: there are no more restrictions on what name you can use."[19]
Google's official support page described their real-names policy, which has since been abandoned:
Google+ makes connecting with people on the web more like connecting with people in the real world. Because of this, it’s important to use your common name so that the people you want to connect with can find you. Your common name is the name your friends, family or co-workers usually call you. For example, if your legal name is Charles Jones Jr. but you normally use Chuck Jones or Junior Jones, any of these would be acceptable.
Google offers support and assistance to anyone whose profile has been suspended, including an appeal process and a referral to its Content Policy. If an account is suspended, users will not be able to access Google services that require active profiles, such as Buzz, Reader, and Picasa. They will, however, be able to access other Google services, such as Gmail.
Google suggested that its naming policy might not be for everyone, and recommended that users who chose not to comply make a copy of their Google+ data and leave.[20]
A Google support worker has stated:[21]
Google Profiles is a product that works best in the identified state. This way you can be certain you’re connecting with the right person, and others will have confidence knowing that there is someone real behind the profile they’re checking out. For this reason, Google Profiles requires you to use the name that you commonly go by in daily life.
In August 2011, Google CEO Eric Schmidt was quoted as stating that Google+ was intended as an identity management service more than as a social network, and that the use of real names would be necessary for other planned Google products based on this service. He also asserted that "the Internet will work better if people know that you're a real person rather than a fake person".[22]
Google vice president Bradley Horowitz (in a Google+ post on 24 January 2012) announced that Google is updating its policy "to broaden support for established pseudonyms".[23]However, the updated policy has been criticized for being too vague concerning what is an "established" pseudonym,[24]and insufficiently flexible to protect online privacy.[25]
A number of high-profile commentators have publicly criticized Google's policies, including technologists Violet Blue,[26] Jamie Zawinski,[27] Kevin Marks,[28] and Robert Scoble,[29] and organisations such as the Electronic Frontier Foundation.[13]
Criticisms have been wide-ranging.
Facebook has always had a "real name" policy,[36] but enforcement has traditionally been sporadic and usually dependent on reports by users.
In November 2011, Facebook suspended Salman Rushdie's account, and then reinstated it under his little-used first name, Ahmed, before backing down and restoring it to his preferred name.[37][38]
In the fall of 2014, Facebook began requiring drag performers to change their Facebook identities to their legal names, notably including performer Sister Roma.[39] This is reported to have been the work of a single person systematically reporting them.[40]
In November 2011, the United States Department of Justice said that it wants to retain the ability under the Computer Fraud and Abuse Act to prosecute people who provide false information online with the intent to harm others. This statement, coming as it did shortly after the Google+ and Facebook actions, raised fears that web users could face criminal prosecution for using pseudonyms. The Justice Department said it would use that power only in select cases, such as a 2011 case in which it prosecuted a woman who used a MySpace account under a fake name to bully a 13-year-old girl who eventually committed suicide.[38]
|
https://en.wikipedia.org/wiki/Nymwars
|
Onion routing is a technique for anonymous communication over a computer network. In an onion network, messages are encapsulated in layers of encryption, analogous to the layers of an onion. The encrypted data is transmitted through a series of network nodes called "onion routers", each of which "peels" away a single layer, revealing the data's next destination. When the final layer is decrypted, the message arrives at its destination. The sender remains anonymous because each intermediary knows only the location of the immediately preceding and following nodes.[1] While onion routing provides a high level of security and anonymity, there are methods to break the anonymity of this technique, such as timing analysis.[2]
Onion routing was developed in the mid-1990s at the U.S. Naval Research Laboratory by employees Paul Syverson, Michael G. Reed, and David Goldschlag[3][4] to protect U.S. intelligence communications online.[5] It was then refined by the Defense Advanced Research Projects Agency (DARPA) and patented by the Navy in 1998.[4][6][7]
The same employees publicly disclosed the method that year in an article in the IEEE Journal on Selected Areas in Communications, describing its use to protect the user both from the network itself and from outside observers who eavesdrop and conduct traffic analysis attacks. A key part of this research was the configuration and application of onion routing to existing e-services, such as virtual private networks, web browsing, email, remote login, and electronic cash.[8]
Based on the existing onion routing technology, computer scientists Roger Dingledine and Nick Mathewson joined Paul Syverson in 2002 to develop what has become the largest and best-known implementation of onion routing, then called The Onion Routing project (Tor project).
After the Naval Research Laboratory released the code for Tor under a free license,[5][9][10] Dingledine, Mathewson and five others founded The Tor Project as a non-profit organization in 2006, with the financial support of the Electronic Frontier Foundation and several other organizations.[11][12]
Metaphorically, an onion is the data structure formed by "wrapping" a message with successive layers of encryption to be decrypted ("peeled" or "unwrapped") by as many intermediary computers as there are layers before arriving at its destination. The original message remains hidden as it is transferred from one node to the next, and no intermediary knows both the origin and final destination of the data, allowing the sender to remain anonymous.[13]
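The layered construction described above can be sketched in a few lines of Python. This is purely a toy illustration under stated assumptions: XOR stands in for real per-hop encryption (it is not secure), JSON is an invented layer framing, and the node names and keys are hypothetical; real onion routers such as Tor use an entirely different cell format.

```python
import json

def xor_crypt(data: bytes, key: bytes) -> bytes:
    # Toy stand-in for real per-hop encryption; XOR is its own inverse.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

def wrap(message: str, route: list, keys: dict) -> bytes:
    # Innermost content: the plaintext destined for the final recipient.
    onion = message.encode()
    # Wrap layers from the exit node outward; each layer names the next hop.
    for i in reversed(range(len(route))):
        next_hop = route[i + 1] if i + 1 < len(route) else "destination"
        layer = json.dumps({"next": next_hop,
                            "payload": xor_crypt(onion, keys[route[i]]).hex()})
        onion = layer.encode()
    return onion

def peel(onion: bytes, node: str, keys: dict):
    # A node removes exactly one layer: it learns the next hop, nothing more.
    layer = json.loads(onion.decode())
    return layer["next"], xor_crypt(bytes.fromhex(layer["payload"]), keys[node])

keys = {"A": b"key-a", "B": b"key-b", "C": b"key-c"}
onion = wrap("hello", ["A", "B", "C"], keys)
hop, onion = peel(onion, "A", keys)   # A learns only that B is next
hop, onion = peel(onion, "B", keys)   # B learns only that C is next
hop, onion = peel(onion, "C", keys)   # C (the exit) recovers the plaintext
```

After the third peel, `hop` is `"destination"` and `onion` is the original plaintext; no single node ever saw both the originator and the final content together with the full route.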
To create and transmit an onion, the originator selects a set of nodes from a list provided by a "directory node". The chosen nodes are arranged into a path, called a "chain" or "circuit", through which the message will be transmitted. To preserve the anonymity of the sender, no node in the circuit is able to tell whether the node before it is the originator or another intermediary like itself. Likewise, no node in the circuit is able to tell how many other nodes are in the circuit and only the final node, the "exit node", is able to determine its own location in the chain.[13]
Using asymmetric key cryptography, the originator obtains a public key from the directory node to send an encrypted message to the first ("entry") node, establishing a connection and a shared secret ("session key"). Using the established encrypted link to the entry node, the originator can then relay a message through the first node to a second node in the chain using encryption that only the second node, and not the first, can decrypt. When the second node receives the message, it establishes a connection with the first node. While this extends the encrypted link from the originator, the second node cannot determine whether the first node is the originator or just another node in the circuit. The originator can then send a message through the first and second nodes to a third node, encrypted such that only the third node is able to decrypt it. The third, as with the second, becomes linked to the originator but connects only with the second. This process can be repeated to build larger and larger chains but is typically limited to preserve performance.[13]
When the chain is complete, the originator can send data over the Internet anonymously. When the final recipient of the data sends data back, the intermediary nodes maintain the same link back to the originator, with data again layered, but in reverse such that the final node this time adds the first layer of encryption and the first node adds the last layer of encryption before sending the data, for example a web page, to the originator, who is able to decrypt all layers.[13]
One of the reasons why typical Internet connections are not considered anonymous is the ability of Internet service providers to trace and log connections between computers. For example, when a person accesses a particular website, the data itself may be secured through a connection like HTTPS such that the user's password, emails, or other content is not visible to an outside party, but there is a record of the connection itself, what time it occurred, and the amount of data transferred. Onion routing creates and obscures a path between two computers such that there is no discernible connection directly from a person to a website, but there still exist records of connections between computers. Traffic analysis searches those records of connections made by a potential originator and tries to match the timing and data transfers to connections made to a potential recipient. If an attacker has compromised both ends of a route, a sender may be seen to have transferred an amount of data to an unknown computer a certain number of seconds before a different unknown computer transferred data of exactly the same size to a particular destination.[14][15] Factors that may facilitate traffic analysis include nodes failing or leaving the network[15] and a compromised node keeping track of a session as it occurs when chains are periodically rebuilt.[16]
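A naive version of this end-to-end correlation can be sketched as follows; the log entries and the one-second matching window are invented for illustration, and real attacks are far more statistical than this exact-size matching.

```python
# Each record: (timestamp_seconds, bytes_transferred), as an ISP might log it.
sender_log   = [(100.0, 4096), (105.2, 1024), (162.7, 4096)]
receiver_log = [(100.4, 4096), (150.0, 2048), (163.1, 4096)]

def correlate(sender, receiver, max_delay=1.0):
    """Match sender records to receiver records of identical size arriving
    within max_delay seconds -- the naive correlation described above."""
    matches = []
    for t_s, size_s in sender:
        for t_r, size_r in receiver:
            if size_s == size_r and 0 <= t_r - t_s <= max_delay:
                matches.append((t_s, t_r, size_s))
    return matches

print(correlate(sender_log, receiver_log))
# → [(100.0, 100.4, 4096), (162.7, 163.1, 4096)]
```

Two matching (time, size) pairs are enough here to link the sender to the receiver, which is why deployed systems pad, batch, or delay traffic to blur exactly these correlations.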
Garlic routing is a variant of onion routing associated with the I2P network that encrypts multiple messages together, which both increases the speed of data transfer and makes it more difficult[17] for attackers to perform traffic analysis.[18]
Although the message being sent is transmitted inside several layers of encryption, the job of the exit node, as the final node in the chain, is to decrypt the final layer and deliver the message to the recipient. A compromised exit node is thus able to acquire the raw data being transmitted, potentially including passwords, private messages, bank account numbers, and other forms of personal information. Dan Egerstad, a Swedish researcher, used such an attack to collect the passwords of over 100 email accounts related to foreign embassies.[19]
Exit node vulnerabilities are similar to those on unsecured wireless networks, where the data being transmitted by a user on the network may be intercepted by another user or by the router operator. Both issues are solved by using a secure end-to-end connection like SSL/TLS or secure HTTP (S-HTTP). If there is end-to-end encryption between the sender and the recipient, and the sender is not lured into trusting a false SSL certificate offered by the exit node, then not even the last intermediary can view the original message.
|
https://en.wikipedia.org/wiki/Onion_routing
|
The Penet remailer (anon.penet.fi) was a pseudonymous remailer operated by Johan "Julf" Helsingius of Finland from 1993 to 1996. Its initial creation stemmed from an argument in a Finnish newsgroup over whether people should be required to tie their real name to their online communications. Julf believed that people should not—indeed, could not—be required to do so. In his own words:
Julf's remailer worked by receiving an e-mail from a person, stripping away all the technical information that could be used to identify the original source of the e-mail, and then remailing the message to its final destination. The result provided Internet users with the ability to send e-mail messages and post to Usenet newsgroups without revealing their identities.
In addition, the Penet remailer used a type of "post office box" system in which users could claim their own anonymous e-mail addresses of the form anxxxxx@anon.penet.fi, allowing them to assign pseudonymous identities to their anonymous messages, and to receive messages sent to their (anonymous) e-mail addresses.
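The two mechanisms just described — header stripping and the "post office box" mapping — can be modeled in a short sketch. This is a hypothetical reconstruction, not Penet's actual software: the class, address format, and sender address are all illustrative, and the stored address table is exactly the weak point the article goes on to discuss.

```python
import itertools

class PseudonymousRemailer:
    """Toy model of Penet-style remailing: assign each real sender a stable
    anonymous address, forward only the body, and route replies back."""
    def __init__(self):
        self._counter = itertools.count(1)
        self._real_to_anon = {}   # real address -> anonymous address
        self._anon_to_real = {}   # kept so replies can be delivered

    def remail(self, real_sender: str, body: str) -> dict:
        # First contact: allocate a fresh anonymous address for this sender.
        if real_sender not in self._real_to_anon:
            anon = f"an{next(self._counter):06d}@anon.penet.fi"
            self._real_to_anon[real_sender] = anon
            self._anon_to_real[anon] = real_sender
        # The outgoing message carries only the pseudonym, never the real address.
        return {"from": self._real_to_anon[real_sender], "body": body}

    def deliver_reply(self, anon_recipient: str, body: str) -> dict:
        # A reply sent to the pseudonym is forwarded to the stored real address.
        return {"to": self._anon_to_real[anon_recipient], "body": body}

r = PseudonymousRemailer()
msg = r.remail("alice@example.org", "hello, usenet")    # from an000001@anon.penet.fi
reply = r.deliver_reply(msg["from"], "hello back")      # delivered to alice@example.org
```

Note that `_anon_to_real` is a single table mapping every pseudonym back to a real address; anyone who can compel or steal that table unmasks every user at once.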
While the basic concept was effective, the Penet remailer had several vulnerabilities which threatened the anonymity of its users. Chief among them was the need to store a list of real e-mail addresses mapped to the corresponding anonymous e-mail addresses on the server. A potential attacker needed only to access that list to compromise the identities of all of Penet's users. The Penet remailer was on two occasions required by the legal system in Finland (the country where the Penet server hardware resided) to turn over the real e-mail address that was mapped to an anonymous e-mail address. Another potential vulnerability was that messages sent to and from the remailer were all sent in cleartext, making it vulnerable to electronic eavesdropping.
Later anonymous remailer designs, such as the Cypherpunk and Mixmaster designs, adopted more sophisticated techniques to try to overcome these vulnerabilities, including the use of encryption to prevent eavesdropping, and also the technique known as onion routing to allow the existence of pseudonymous remailers in which no record of a user's real e-mail address is stored by the remailer.
Despite its relatively weak security, the Penet remailer was a hugely popular remailer owing to its ease of anonymous account set-up and use compared to more secure but less user-friendly remailers, and had over 700,000 registered users at the time of its shutdown in September 1996.
In the summer of 1994, word spread online of the Penet remailer being compromised, with the announcement being made at the hacker convention DEF CON II. Wired magazine reported at the time:
An official announcement was made at this year's DefCon II that anon.penet.fi has been seriously compromised. We strongly suggest that you not trust this anonymous remailer. (Word has it that some folks are working on a PGP-based service.) We'll keep you posted.[2]
This was followed a year later by a mention in the announcement for DEF CON III:
SPEAKERS Sarah Gordon, AKA Theora, a veteran of DC II will be presenting another speech this year. Last year she organized a round table discussion with Phil Zimmermann and Presence, and revealed that the Anonymous remailer anon.penet.fi was compromised. TOPIC: Not Announced Yet.[3]
The second reported compromise of the Penet remailer occurred in February 1995 at the behest of the Church of Scientology. Claiming that a file had been stolen from one of the Church's internal computer servers and posted to the newsgroup alt.religion.scientology by a Penet user, representatives of the Church contacted Interpol, who in turn contacted the Finnish police, who issued a search warrant demanding that Julf hand over data on the users of the Penet remailer. Initially Julf was asked to turn over the identities of all users of his remailer (which numbered 200,000 at the time), but he managed a compromise and revealed only the single user being sought by the Church of Scientology.[4]
The anonymous user in question used the handle "-AB-" when posting anonymously, and their real e-mail address indicated that they were an alumnus or alumna of the California Institute of Technology. The document they posted was an internal report by a Scientology private investigator, Eugene Ingram, about an incident that had occurred involving a man named Tom Klemesrud, a BBS operator involved in a controversy. The confusing story became known on the Internet as the "Miss Blood Incident".[5]
Eventually the Church learned the real identity of "-AB-" to be Tom Rummelhart, a Scientologist and computer operator responsible for some of the maintenance of the Church of Scientology's INCOMM computer system. The fate of "-AB-" after the Church of Scientology learned his true identity is unknown. Years later in 2003, a two-part story entitled "What Really Happened in INCOMM – Part 1"[6] and "What Really Happened in INCOMM – Part 2"[7] was posted to alt.religion.scientology by a former Scientologist named Dan Garvin, which described events within the Church leading up to and stemming from the Penet posting by "-AB-".[6][7]
Julf was also contacted by the government of Singapore as part of an effort to discover who was posting messages critical of the nation's government in the newsgroup soc.culture.singapore, but as Finnish law did not recognise any crime as having been committed, Julf was not required to reveal the user's identity.
In August 1996, a British newspaper, The Observer, published an article describing the Penet remailer as a major hub of child pornography, quoting a United States FBI investigator named Toby Tyler as saying that Penet was responsible for between 75% and 90% of the child pornography being distributed on the Internet.[8] Investigations by online journalist Declan McCullagh demonstrated many errors and omissions in the Observer article. In an article penned by McCullagh, the alleged FBI investigator described himself as a sergeant in California's San Bernardino sheriff's office who only consulted with the FBI from time to time, a relationship which the Observer article had in his opinion purposefully misrepresented as some kind of employment relationship. Tyler also claimed that the Observer purposely misquoted him, and that he had actually said that most child pornography posted to newsgroups does not go through remailers.[9]
In addition, Julf claimed that he explained to the Observer the steps he took to prevent child pornography from being posted, by forbidding posting to the alt.binaries newsgroups and limiting the size of messages to 16 kilobytes, too small to allow uuencoded binaries such as pictures to be posted. He also informed the Observer of an investigation already performed by the Finnish police which had found no evidence that child pornography was being remailed through Penet. Julf claims that all this information was ignored, stating that the Observer "wanted to make a story so they made things up."[citation needed]
Despite voluminous reader mail pointing to the numerous errors in the news story, the Observer never issued a full retraction of its claims, only going so far as to clarify that Johan Helsingius had "consistently denied" the claims of child pornography distribution.[citation needed]
In September 1996, the Church of Scientology again sought information from Julf as part of its court case against a critic of the Church named Grady Ward. The Church wanted to know if Ward had posted any information through the Penet remailer. Ward gave Julf explicit permission to reveal the extent of his alleged use of the Penet remailer, and Julf told the Church that he could find no evidence that Ward had ever used the Penet remailer at all.[citation needed]
In September 1996, an anonymous user posted the confidential writings of the Church of Scientology through the Penet remailer. The Church once again demanded that Julf turn over the identity of one of its users, claiming that the poster had infringed the Church's copyright on the confidential material. The Church was successful in finding the originating e-mail address of the posting before Penet remailed it, but it turned out to be another anonymous remailer: the alpha.c2.org nymserver, a more advanced and more secure remailer which did not keep a mapping of e-mail addresses that could be subpoenaed.[10]
Facing much criticism and many attacks, and unable to guarantee the anonymity of Penet users, Julf shut down the remailer in September 1996.[11]
|
https://en.wikipedia.org/wiki/Penet.fi
|
Alice and Bob are fictional characters commonly used as placeholders in discussions about cryptographic systems and protocols,[1] and in other science and engineering literature where there are several participants in a thought experiment. The Alice and Bob characters were created by Ron Rivest, Adi Shamir, and Leonard Adleman in their 1978 paper "A Method for Obtaining Digital Signatures and Public-key Cryptosystems".[2] Subsequently, they have become common archetypes in many scientific and engineering fields, such as quantum cryptography, game theory and physics.[3] As the use of Alice and Bob became more widespread, additional characters were added, sometimes each with a particular meaning. These characters do not have to refer to people; they refer to generic agents which might be different computers or even different programs running on a single computer.
Alice and Bob are the names of fictional characters used for convenience and to aid comprehension. For example, "How can Bob send a private message M to Alice in a public-key cryptosystem?"[2] is believed to be easier to describe and understand than if the hypothetical people were simply named A and B as in "How can B send a private message M to A in a public-key cryptosystem?"
The names are conventional, and where relevant may use an alliterative mnemonic such as "Mallory" for "malicious" to associate the name with the typical role of that person.
Scientific papers about thought experiments with several participants often used letters to identify them:A,B,C, etc.
The first mention of Alice and Bob in the context of cryptography was in Rivest, Shamir, and Adleman's 1978 article "A method for obtaining digital signatures and public-key cryptosystems."[2] They wrote, "For our scenarios we suppose that A and B (also known as Alice and Bob) are two users of a public-key cryptosystem".[2]: 121 Prior to this article, cryptographers typically referred to message senders and receivers as A and B, or by other simple symbols. In fact, in the two previous articles by Rivest, Shamir, and Adleman introducing the RSA cryptosystem, there is no mention of Alice and Bob.[4][5] The choice of the first three names may have come from the film Bob & Carol & Ted & Alice.[6]
Within a few years, however, references to Alice and Bob in cryptological literature became a common trope. Cryptographers would often begin their academic papers with reference to Alice and Bob. For instance, Michael Rabin began his 1981 paper, "Bob and Alice each have a secret, SB and SA, respectively, which they want to exchange."[7] Early on, Alice and Bob were starting to appear in other domains, such as in Manuel Blum's 1981 article, "Coin Flipping by Telephone: A Protocol for Solving Impossible Problems," which begins, "Alice and Bob want to flip a coin by telephone."[8]
Although Alice and Bob were invented with no reference to their personality, authors soon began adding colorful descriptions. In 1983, Blum invented a backstory about a troubled relationship between Alice and Bob, writing, "Alice and Bob, recently divorced, mutually distrustful, still do business together. They live on opposite coasts, communicate mainly by telephone, and use their computers to transact business over the telephone."[9] In 1984, John Gordon delivered his famous[10] "After Dinner Speech" about Alice and Bob, which he imagines to be the first "definitive biography of Alice and Bob."[11]
In addition to adding backstories and personalities to Alice and Bob, authors soon added other characters, with their own personalities. The first to be added was Eve, the "eavesdropper". Eve was invented in 1988 by Charles Bennett, Gilles Brassard, and Jean-Marc Robert, in their paper "Privacy Amplification by Public Discussion".[12] In Bruce Schneier's book Applied Cryptography, other characters are listed.[13]
The most common characters are Alice and Bob. Eve, Mallory, and Trent are also common names, and have fairly well-established "personalities" (or functions). The names often use alliterative mnemonics (for example, Eve, "eavesdropper"; Mallory, "malicious") where different players have different motives. Other names are much less common and more flexible in use. Sometimes the genders are alternated: Alice, Bob, Carol, Dave, Eve, etc.[14]
For interactive proof systems there are other characters:
The names Alice and Bob are often used to name the participants in thought experiments in physics.[33][34] More alphabetical names, usually of alternating gender, are used as required, e.g. "Alice and Bob (and Carol and Dick and Eve)".[35]
In experiments involving robotic systems, the terms "Alice Robot" and "Bob Robot" refer to mobile platforms responsible for transmitting quantum information and receiving it with quantum detectors, respectively, within the context of the field of quantum robotics.[36][37][38][39][40][41]
|
https://en.wikipedia.org/wiki/Placeholder_names_in_cryptography
|
A pseudepigraph (also anglicized as "pseudepigraphon") is a falsely attributed work: a text whose claimed author is not the true author, or a work whose real author attributed it to a figure of the past. The name of the author to whom the work is falsely attributed is often prefixed with the particle "pseudo-",[1] as in "pseudo-Aristotle" or "pseudo-Dionysius": these terms refer to the anonymous authors of works falsely attributed to Aristotle and Dionysius the Areopagite, respectively.
In biblical studies, the term pseudepigrapha can refer to an assorted collection of Jewish religious works thought to be written c. 300 BCE to 300 CE. They are distinguished by Protestants from the deuterocanonical books (Catholic and Orthodox) or Apocrypha (Protestant), the books that appear in extant copies of the Septuagint in the fourth century or later[2] and the Vulgate, but not in the Hebrew Bible or in Protestant Bibles.[3] The Catholic Church distinguishes only between the deuterocanonical and all other books; the latter are called biblical apocrypha, which in Catholic usage includes the pseudepigrapha.[citation needed] In addition, two books considered canonical in the Orthodox Tewahedo churches, the Book of Enoch and Book of Jubilees, are categorized as pseudepigrapha from the point of view of Chalcedonian Christianity.[citation needed]
In addition to the sets of works generally agreed to be non-canonical, scholars also apply the term to canonical works that make a direct claim of authorship which is doubted. For example, the Book of Daniel is considered by some to have been written in the 2nd century BCE, 400 years after the prophet Daniel lived, and thus the work is pseudepigraphic.[4][5] A New Testament example might be the book of 2 Peter, considered by some to have been written approximately 80 years after Saint Peter's death. Early Christians, such as Origen, harbored doubts as to the authenticity of the book's authorship.[6]
The term has also been used by Quranist Muslims to describe hadiths: Quranists claim that most hadiths are fabrications[7] created in the 8th and 9th centuries CE and falsely attributed to the Islamic prophet Muhammad.[8]
The word pseudepigraph derives from the Greek ψευδής (pseudḗs, "false") and ἐπιγραφή (epigraphḗ, "name", "inscription", or "ascription"); taken together the elements mean "false superscription or title"[9] (see the related epigraphy). The plural of "pseudepigraph" (sometimes Latinized as "pseudepigraphon" or "pseudepigraphum") is "pseudepigrapha".
When a text has been shown to be falsely attributed to a particular author, and the true identity of the author is not known, the author can be referred to by a combination of pseudo- and the traditional author's name. For example, the Armenian History has been falsely attributed to the seventh-century Armenian historian Sebeos, and its author is therefore called Pseudo-Sebeos.[10]
Scholars have identified seven levels of authenticity which they have organized in a hierarchy ranging from literal authorship, meaning written in the author's own hand, to outright forgery:[11]
In biblical studies, pseudepigrapha refers particularly to works which purport to be written by noted authorities in either the Old or New Testaments or by persons involved in Jewish or Christian religious study or history. These works can also be written about biblical matters, often in such a way that they appear to be as authoritative as works which have been included in the many versions of the Judeo-Christian scriptures. Eusebius indicates this usage dates back at least to Serapion of Antioch, whom Eusebius records[12] as having said: "But those writings which are falsely inscribed with their name (ta pseudepigrapha), we as experienced persons reject...."
Many such works were also referred to as Apocrypha, which originally connoted "private" or "non-public": those not endorsed for public reading in the liturgy. An example of a text that is both apocryphal and pseudepigraphical is the Odes of Solomon.[13] It is considered pseudepigraphical because it was not actually written by Solomon but is instead a collection of early Christian (first to second century) hymns and poems, originally written not in Hebrew, and apocryphal because they were not accepted in either the Tanakh or the New Testament.
There is a tendency not to use the word pseudepigrapha when describing works later than about 300 CE when referring to biblical matters.[3]: 222–28 But the late-appearing Gospel of Barnabas, the Apocalypse of Pseudo-Methodius, the Pseudo-Apuleius (author of a fifth-century herbal ascribed to Apuleius), and the author traditionally referred to as "Pseudo-Dionysius the Areopagite" are classic examples of pseudepigraphy. In the fifth century the moralist Salvian published Contra avaritiam ("Against avarice") under the name of Timothy; the letter in which he explained to his former pupil, Bishop Salonius, his motives for doing so survives.[14]
The term pseudepigrapha is also commonly used to describe numerous works of Jewish religious literature written from about 300 BCE to 300 CE. Not all of these works are actually pseudepigraphical. It also refers to books of the New Testament canon whose authorship is misrepresented. Such works include the following:[3]
Various canonical works accepted as scripture have been reexamined by modern scholars from the 19th century onward and considered likely cases of pseudepigrapha. The Book of Daniel directly claims to be written by the prophet Daniel, yet there are strong reasons to believe it was not written until centuries after Daniel's death, such as references to the book appearing only from the 2nd century BCE onward. The book is an apocalypse in which Daniel offers a series of predictions of the future, and is meant to reassure the Jews of the period that the tyrant Antiochus IV Epiphanes would soon be overthrown. By backdating the book to the 6th century BCE and providing a series of correct prophecies covering the history of the preceding 400 years, the authorship claim would have strengthened the later author's predictions of the coming fall of the Seleucid Empire.[6][15]
Christian scholars traditionally maintain that nothing known to be pseudepigraphical was admitted to the New Testament canon.
The Catholic Encyclopedia notes,
The first four historical books of the New Testament are supplied with titles, which however ancient, do not go back to the respective authors of those sacred texts. The Canon of Muratori, Clement of Alexandria, and St. Irenaeus bear distinct witness to the existence of those headings in the latter part of the second century of our era. Indeed, the manner in which Clement (Strom. I, xxi), and St. Irenaeus (Adv. Haer. III, xi, 7) employ them implies that, at that early date, our present titles to the gospels had been in current use for some considerable time. Hence, it may be inferred that they were prefixed to the evangelical narratives as early as the first part of that same century. That however, they do not go back to the first century of the Christian era, or at least that they are not original, is a position generally held at the present day. It is felt that since they are similar for the four Gospels, although the same Gospels were composed at some interval from each other, those titles were not framed and consequently not prefixed to each individual narrative, before the collection of the four Gospels was actually made. Besides as well pointed out by Prof. Bacon, "the historical books of the New Testament differ from its apocalyptic and epistolary literature, as those of the Old Testament differ from its prophecy, in being invariably anonymous, and for the same reason. Prophecies, whether in the earlier or in the later sense, and letters, to have authority, must be referable to some individual; the greater his name, the better. But history was regarded as common possession. Its facts spoke for themselves. Only as the springs of common recollection began to dwindle, and marked differences to appear between the well-informed and accurate Gospels and the untrustworthy ... become worth while for the Christian teacher or apologist to specify whether the given representation of the current tradition was 'according to' this or that special compiler, and to state his qualifications". It thus appears that the present titles of the Gospels are not traceable to the Evangelists themselves.[16]
However, agnostic biblical scholar Bart D. Ehrman holds that only seven of Paul's epistles are convincingly genuine, and that all of the other 20 books in the New Testament appear to be written by unknown people who were not the well-known biblical figures to whom the early Christian leaders originally attributed authorship.[7] The earliest and best manuscripts of Matthew, Mark, Luke, and John were all written anonymously.[17] Furthermore, the books of Acts, Hebrews, 1 John, 2 John, and 3 John were also written anonymously.[17]
Thirteen New Testament letters are attributed to Paul and are still considered by Christians to carry Paul's authority. These letters are part of the Christian Bible and are foundational for the Christian Church. Therefore, letters which some claim to be pseudepigraphic are not considered any less valuable to Christians.[18]
Authorship of six of the 13 canonical epistles of Paul has been questioned by both Christian and non-Christian biblical scholars.[19] These are the Epistle to the Ephesians, the Epistle to the Colossians, the Second Epistle to the Thessalonians, the First Epistle to Timothy, the Second Epistle to Timothy, and the Epistle to Titus. These six books are referred to by sceptical scholars such as Bart Ehrman as "deutero-Pauline letters", meaning they have "secondary" standing in the corpus of Paul's writings, on the grounds of proposed evidence that they could not have been written by Paul, despite internal attribution to Paul. Those known as the "Pastoral Epistles" (1 Timothy, 2 Timothy, and Titus) are all so similar that they are thought to have been written by the same unknown author, either Paul or someone writing in Paul's name.[7]
Seven New Testament letters are attributed to several apostles, such as Saint Peter, John the Apostle, and Jesus's brothers James and Jude.
Three of the seven letters are anonymous. These three have traditionally been attributed to John the Apostle, the son of Zebedee and one of the Twelve Apostles of Jesus. Consequently, these letters have been labelled the Johannine epistles, despite the fact that none of them mentions any author. Most modern scholars believe the author is not John the Apostle, but there is no scholarly consensus for any particular historical figure (see Authorship of the Johannine works).[20][21]
Two of the letters claim to have been written or issued by Simon Peter, one of the Twelve Apostles of Jesus. Therefore, they have traditionally been called the Petrine epistles. However, most modern scholars agree the second epistle was probably not written by Peter, because it appears to have been written in the early 2nd century, long after Peter had died. Yet opinions on the first epistle are more divided; many scholars do think this letter is authentic.[22]
In one epistle, the author calls himself only James (Ἰάκωβος Iákobos). It is not known which James this is supposed to be. There are several different traditional Christian interpretations of other New Testament texts which mention a James, brother of Jesus. However, most modern scholars tend to reject this line of reasoning, since the author himself does not indicate any familial relationship with Jesus. A similar problem presents itself with the Epistle of Jude (Ἰούδας Ioudas): the writer names himself a brother of James (ἀδελφὸς δὲ Ἰακώβου adelphos de Iakóbou), but it is not clear which James is meant. According to some Christian traditions, this is the same James as the author of the Epistle of James, who was allegedly a brother of Jesus; and so, this Jude should also be a brother of Jesus, despite the fact that he does not indicate any such thing in his text.[22]
The Gospel of Peter[23] and the attribution to Paul of the Epistle to the Laodiceans are both examples of pseudepigrapha that were excluded from the New Testament canon.[24] They are often referred to as New Testament apocrypha. Further examples of New Testament pseudepigrapha include the Gospel of Barnabas[25] and the Gospel of Judas, which begins by presenting itself as "the secret account of the revelation that Jesus spoke in conversation with Judas Iscariot".[26]
TheVision of Ezrais an ancientapocryphaltext purportedly written by the biblicalscribeEzra. The earliest surviving manuscripts, composed inLatin, date to the 11th century CE, although textual peculiaritiesstrongly suggestthat the text was originally written inGreek. Like theGreek Apocalypse of Ezra, the work is clearly Christian, and features several apostles being seen inheaven. However, the text is significantly shorter than the Apocalypse.
TheDonation of Constantineis a forged Roman imperial decree by which the 4th-century emperorConstantine the Greatsupposedly transferred authority over Rome and thewestern partof theRoman Empireto thePope. Composed probably in the 8th century, it was used, especially in the 13th century, in support ofclaims of political authority by the papacy.[27]Lorenzo Valla, an ItalianCatholicpriest andRenaissance humanist, is credited with first exposing the forgery with solidphilologicalarguments in 1439–1440,[28]although the document's authenticity had been repeatedly contested since 1001.[27]
In Russian history, in 1561 Muscovites supposedly received a letter from thePatriarch of Constantinoplewhich asserted the right ofIvan the Terribleto claim the title ofTsar. This, too, turned out to be false.[29]While earlier Russian monarchs had on some occasions used the title "Tsar", Ivan the Terrible, previously known as "Grand Prince of all the Russias", was the first to be formally crowned as Tsar of All Rus (Russian:Царь Всея Руси). This was related to Russia's growing ambition to become an Orthodox "Third Rome" after theFall of Constantinople– an ambition to which the supposed approval by the Patriarch added weight.[30][31]
TheAnaphoraeofMar Nestorius, employed in theEastern Churches, is attributed toNestorius, but its earliest surviving manuscripts are in Syriac, which calls the traditional Greek authorship into question.[32][33]
TheZohar(Hebrew:זֹהַר, lit. Splendor or Radiance), a foundational work in the literature of Jewish mystical thought known asKabbalah,[34]first appeared inSpainin the 13th century, and was published by a Jewish writer namedMoses de León. De León ascribed the work toShimon bar Yochai("Rashbi"), arabbiof the 2nd century during the Roman persecution[35]who, according to Jewish legend,[36][37]hid in a cave for thirteen years studying the Torah and was inspired by theProphetElijahto write the Zohar. This accords with the traditional claim by adherents that Kabbalah is the concealed part of theOral Torah. Modern academic analysis of the Zohar, such as that by the 20th-century religious historianGershom Scholem, has theorized that de León was the actual author, as textual analysis points to a medieval Spanish Jewish writer rather than one living in Roman-ruled Palestine.
Conrad Celtes, a notedGermanhumanistscholar and poet of theGerman Renaissance, collected numerous Greek and Latin manuscripts in his function as librarian of the Imperial Library in Vienna. In a 1504 letter to the Venetian publisherAldus Manutius[38]Celtes claimed to have discovered the missing books ofOvid'sFasti. However, it turned out that the purported Ovid verses had actually been composed by an 11th-century monk and were known to theEmpire of Nicaeaaccording toWilliam of Rubruck. Even so, many contemporary scholars believed Celtes and continued to write about the existence of the missing books until well into the 17th century.[39]
Pseudepigraphy has been employed as ametafictionaltechnique. Authors who have made notable use of this device includeJames Hogg(The Private Memoirs and Confessions of a Justified Sinner),Thomas Carlyle(Sartor Resartus),Jorge Luis Borges("An Examination of the Works of Herbert Quain"; "Pierre Menard, Author of the Quixote"),Vladimir Nabokov(Pale Fire),Stanislaw Lem(A Perfect Vacuum;Imaginary Magnitude),Roberto Bolaño(Nazi Literature in the Americas) andStefan Heym(The Lenz Papers).
Edgar Rice Burroughsalso presented many of his works – including the most well-known, theTarzanbooks – as pseudepigrapha, prefacing each book with a detailed introduction presenting the supposed actual author, with Burroughs himself pretending to be no more than the literary editor.J.R.R. TolkieninThe Lord of the Ringspresents that story andThe Hobbitas translated from the fictionalRed Book of Westmarchwritten by characters within the novels. The twelve books ofThe Flashman Papersseries byGeorge MacDonald Frasersimilarly pretend to be transcriptions of the papers left by an "illustriousVictoriansoldier", each volume prefaced by a long semi-scholarly Explanatory Note stating that "additional packets of Flashman's papers have been found and are here presented to the public". A similar device was used byIan FleminginThe Spy Who Loved Meand by various other writers of popular fiction.
|
https://en.wikipedia.org/wiki/Pseudepigrapha
|
Pseudonymizationis adata managementandde-identificationprocedure by whichpersonally identifiable informationfields within adatarecord are replaced by one or more artificial identifiers, orpseudonyms.[1]A single pseudonym for each replaced field or collection of replaced fields makes the data record less identifiable while remaining suitable fordata analysisanddata processing.
Pseudonymization (or pseudonymisation, the spelling under European guidelines) is one way to comply with theEuropean Union'sGeneral Data Protection Regulation(GDPR) demands for secure data storage of personal information.[2]Pseudonymized data can be restored to its original state with the addition of information which allows individuals to be re-identified. In contrast,anonymizationis intended to preventre-identificationof individuals within the dataset. Clause 18, Module Four, footnote 2 of the Adoption by the European Commission of the Implementing Decision (EU) 2021/914 “requires rendering the data anonymous in such a way that the individual is no longer identifiable by anyone ... and that this process is irreversible.”[3]
The European Data Protection Supervisor (EDPS) on 9 December 2021 highlighted pseudonymization as the top technical supplementary measure for Schrems II compliance.[4]Less than two weeks later, the EU Commission highlighted pseudonymization as an essential element of the equivalency decision for South Korea, which is the status that was lost by the United States under the Schrems II ruling by the Court of Justice of the European Union (CJEU).[5]
The importance of GDPR-compliant pseudonymization increased dramatically in June 2021 when theEuropean Data Protection Board(EDPB) and the European Commission highlighted GDPR-compliant Pseudonymisation as the state-of-the-art technical supplementary measure for the ongoing lawful use of EU personal data when using third country (i.e., non-EU) cloud processors or remote service providers under the "Schrems II" ruling by the CJEU.[6]Under the GDPR and final EDPB Schrems II Guidance,[7]the term pseudonymization requires a new protected “state” of data, producing a protected outcome that:
(1) Protects direct, indirect, and quasi-identifiers, together with characteristics and behaviors;
(2) Protects at the record and data set level versus only the field level so that the protection travels wherever the data goes, including when it is in use; and
(3) Protects against unauthorized re-identification via the Mosaic Effect by generating high entropy (uncertainty) levels by dynamically assigning different tokens at different times for various purposes.
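The dynamic-token idea in point (3) can be sketched minimally as follows, assuming a simple in-memory dictionary standing in for the controller's separately kept "additional information". All names and data are illustrative; this is a conceptual sketch, not a compliance tool.

```python
import secrets

# Token -> (identity, purpose) mapping: the "additional information"
# that must be stored separately under the controller's control.
lookup_table = {}

def pseudonymize(identifier: str, purpose: str) -> str:
    # A fresh high-entropy token per use: tokens cannot be derived
    # from the identifier, and repeated uses are unlinkable.
    token = secrets.token_hex(8)
    lookup_table[token] = (identifier, purpose)
    return token

def reidentify(token: str) -> str:
    # Authorized re-identification requires access to the lookup table.
    return lookup_table[token][0]

t1 = pseudonymize("alice@example.com", "billing")
t2 = pseudonymize("alice@example.com", "analytics")
assert t1 != t2                              # different token per purpose
assert reidentify(t1) == "alice@example.com"
```

Because each use yields a different token, two datasets released for different purposes cannot be joined on the pseudonym alone, which is the point of the high-entropy requirement.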
The combination of these protections is necessary to prevent the re-identification of data subjects without the use of additional information kept separately, as required under GDPR Article 4(5) and as further underscored by paragraph 85(4) of the final EDPB Schrems II guidance:
GDPR-compliant pseudonymization requires that data is “anonymous” in the strictest EU sense of the word – globally anonymous – but for the additional information held separately and made available under controlled conditions as authorized by the data controller for permitted re-identification of individual data subjects.
Before the Schrems II ruling, pseudonymization was a technique used by security experts or government officials to hide personally identifiable information while maintaining data structure andprivacy of information. Common examples of such identifying fields include postal codes, locations, names, race, and gender.
After the Schrems II ruling, GDPR-compliant pseudonymization must satisfy the above-noted elements as an "outcome" versus merely a technique.
The choice of which data fields are to be pseudonymized is partly subjective. Less selective fields, such as Birth Date or Postal Code are often also included because they are usually available from other sources and therefore make a record easier to identify. Pseudonymizing these less identifying fields removes most of their analytic value and is therefore normally accompanied by the introduction of new derived and less identifying forms, such as year of birth or a largerpostal coderegion.
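The derived, less identifying forms mentioned above can be sketched as follows. The field names `birth_date` and `postal_code` are hypothetical, and the three-character postal region is just one possible granularity.

```python
def generalize(record: dict) -> dict:
    """Replace quasi-identifiers with coarser derived forms."""
    generalized = dict(record)
    # Full birth date ("YYYY-MM-DD") -> year of birth only
    generalized["birth_year"] = generalized.pop("birth_date")[:4]
    # Full postal code -> a larger postal region (first 3 characters)
    generalized["postal_region"] = generalized.pop("postal_code")[:3]
    return generalized

row = {"birth_date": "1984-07-19", "postal_code": "90210", "diagnosis": "flu"}
print(generalize(row))
```

The coarsened fields keep some analytic value (age cohorts, regional trends) while making linkage against outside sources much harder.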
Data fields that are less identifying, such as date of attendance, are usually not pseudonymized. This is because too much statistical utility is lost in doing so, not because the data cannot be identified. For example, given prior knowledge of a few attendance dates it is easy to identify someone's data in a pseudonymized dataset by selecting only those people with that pattern of dates. This is an example of aninference attack.
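The attendance-date inference attack can be demonstrated in a few lines; the pseudonyms and dates below are invented for illustration.

```python
# Pseudonymized records with attendance dates left in the clear.
dataset = [
    {"pseudonym": "7f3a", "dates": {"2023-01-05", "2023-02-11", "2023-03-02"}},
    {"pseudonym": "c91d", "dates": {"2023-01-05", "2023-04-20"}},
    {"pseudonym": "2b6e", "dates": {"2023-02-11", "2023-05-09"}},
]

# The attacker knows a few of the target's attendance dates.
known_dates = {"2023-01-05", "2023-02-11"}

# Select only records containing that pattern of dates.
matches = [r["pseudonym"] for r in dataset if known_dates <= r["dates"]]
assert matches == ["7f3a"]   # a unique match re-identifies the target
```

Two dates were enough here; in real datasets a handful of quasi-identifying values is often similarly singling-out.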
The vulnerability of pre-GDPR pseudonymized data to inference attacks is commonly overlooked. A famous example is theAOL search data scandal. The AOL case of unauthorized re-identification did not require access to separately kept “additional information” under the control of the data controller, as is now required for GDPR-compliant pseudonymization, outlined below under the section "New Definition for Pseudonymization Under GDPR".
Protecting statistically useful pseudonymized data from re-identification requires more than field-level substitution; the inference and linkage attacks described above must also be addressed.
The pseudonym allows tracking back of data to its origins, which distinguishes pseudonymization fromanonymization,[9]where all person-related data that could allow backtracking has been purged. Pseudonymization is an issue in, for example, patient-related data that has to be passed on securely between clinical centers.
The application of pseudonymization toe-healthintends to preserve thepatient's privacyanddata confidentiality. It allows primary use of medical records by authorized health care providers and privacy preserving secondary use by researchers.[10]In the US,HIPAAprovides guidelines on how health care data must be handled and data de-identification or pseudonymization is one way to simplify HIPAA compliance[citation needed]. However, plain pseudonymization for privacy preservation often reaches its limits whengenetic dataare involved (see alsogenetic privacy). Due to the identifying nature of genetic data, depersonalization is often not sufficient to hide the corresponding person. Potential solutions are the combination of pseudonymization with fragmentation andencryption.
An example of application of pseudonymization procedure is creation of datasets forde-identificationresearch by replacingidentifyingwords with words from the same category (e.g. replacing a name with a random name from the names dictionary),[11][12][13]however, in this case it is in general not possible to track data back to its origins.
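A hedged sketch of this category-preserving replacement, assuming the names to replace have already been located by some upstream recognizer; the dictionary and note text are made up.

```python
import random

# Surrogate names from the same category ("person name") keep the
# text realistic while severing the link to the original individuals.
NAME_DICTIONARY = ["Taylor", "Jordan", "Morgan", "Casey"]

def replace_names(text: str, names_found: list) -> str:
    rng = random.Random(0)  # fixed seed only to make the sketch repeatable
    for name in names_found:
        # Swap each detected name for a random same-category surrogate.
        text = text.replace(name, rng.choice(NAME_DICTIONARY))
    return text

note = "Patient Smith was seen by Dr. Jones on Tuesday."
print(replace_names(note, ["Smith", "Jones"]))
```

As the paragraph notes, no mapping is retained here, so the replacement is not reversible: this is de-identification for research corpora, not pseudonymization in the GDPR sense.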
Effective as of May 25, 2018, the EUGeneral Data Protection Regulation(GDPR) defines pseudonymization for the very first time at the EU level in Article 4(5). Under Article 4(5) definitional requirements, data is pseudonymized if it cannot be attributed to a specific data subject without the use of separately kept "additional information.” Pseudonymized data embodies the state of the art in Data Protection by Design and by Default[14]because it requires protection of both direct and indirect identifiers (not just direct). GDPR Data Protection by Design and by Default principles as embodied in pseudonymization require protection of both direct and indirect identifiers so that personal data is not cross-referenceable (or re-identifiable) via the "Mosaic Effect"[15]without access to “additional information” that is kept separately by the controller. Because access to separately kept “additional information” is required for re-identification, attribution of data to a specific data subject can be limited by the controller to support lawful purposes only.
GDPR Article 25(1) identifies pseudonymization as an “appropriate technical and organizational measure” and Article 25(2) requires controllers to:
“…implement appropriate technical and organizational measures for ensuring that, by default, only personal data which are necessary for each specific purpose of the processing are processed. That obligation applies to the amount of personal data collected, the extent of their processing, the period of their storage and their accessibility. In particular, such measures shall ensure that by default personal data are not made accessible without the individual's intervention to an indefinite number of natural persons.”
A central core of Data Protection by Design and by Default under GDPR Article 25 is enforcement of technology controls that support appropriate uses and the ability to demonstrate that you can, in fact, keep your promises. Technologies like pseudonymization that enforce Data Protection by Design and by Default show individual data subjects that in addition to coming up with new ways to derive value from data, organizations are pursuing equally innovative technical approaches to protecting data privacy—an especially sensitive and topical issue given the epidemic of data security breaches around the globe.
Vibrant and growing areas of economic activity—the “trust economy,” life sciences research, personalized medicine/education, the Internet of Things, personalization of goods and services—are based on individuals trusting that their data is private, protected, and used only for appropriate purposes that bring them and society maximum value. This trust cannot be maintained using outdated approaches to data protection. Pseudonymisation, as newly defined under the GDPR, is a means of helping to achieve Data Protection by Design and by Default to earn and maintain trust and more effectively serve businesses, researchers, healthcare providers, and everyone who relies on the integrity of data.
GDPR compliant pseudonymization not only enables greater privacy-respectful use of data in today's "big data" world of data sharing and combining, but it also enables data controllers and processors to reap explicit benefits under the GDPR for correctly pseudonymized data.The benefits of properly pseudonymized data are highlighted in multiple GDPR Articles, including:
|
https://en.wikipedia.org/wiki/Pseudonymization
|
Pseudonymous Bosch(/ˈsuːdənɪməsbɒʃ,bɔːʃ,bɔːs/) is thepen nameofRaphael Simon(born October 25, 1967), the author ofThe Secret SeriesandThe Bad Booksseries of fiction books, as well asThe Unbelievable Oliverchapter book mysteries and two stand-alone titles. He has written 12 books.[1]
Simon was born on October 25, 1967, to writers Dyanne Asimow and Roger L. Simon.[note 1][2]He was born inLos Angeles County, California.[4]His brother, Jesse, is a visual artist.[2]He also has a significantly younger half-sister, Madeleine, from his father's third marriage.[5]
Simon attended Yale,[6]where hecame outas gay when he was 20 years old.[7]Later he earned an MA in Comparative Literature from UC Irvine.[citation needed]He went on to teach courses about detective fiction, composition, and fiction for young readers at various colleges and universities in California.[8]He currently lives inPasadena, California, with his husband, Phillip de Leon.[9]They have twin children, who were born in 2007.[9]
Bosch had long been suspected to be the author Raphael Simon, although Bosch disputed this until he revealed himself as Simon in a May 8, 2016, editorial inThe New York Times.[10]
The pseudonym is a play on that of the artistHieronymus Bosch.[11]It may also play off the fictionalLos Angelesdetective,Hieronymus "Harry" Bosch, likewise named after the artist, created by the authorMichael Connellyand appearing in several of his novels since 1992.[citation needed]
Prior to becoming a novelist, Simon worked as a screenwriter, including as a staff writer on the Nickelodeon seriesRocket Power. He started writing his first novel,The Name of this Book Is Secret, as a series of letters to a fourth-grader. It was published in 2007, and was nominated for an Edgar Allan Poe award for best juvenile mystery. A sequel followed in 2008:If You're Reading This It's Too Late. Eventually there would be five titles in the Secret Series. TheNew York Timesbestselling series has sold millions of copies and has been translated into many languages.[citation needed]
In 2013, Bosch publishedWrite This Book!, ado it yourselfbook; he calls it "a book that readers will write for me". Bosch elaborated in an interview withWiredstating that "it is a kind of half-written, guided mystery. Parts of it are going to be multiple choice, choose-your-own adventure, parts of it will be more likeMad Libs, and some silly stuff".[12]
The following year, Bosch returned readers to the world of theSecret SeriesinBad Magic, the first novel in what became theBad Bookstrilogy.
On May 14, 2019, Bosch publishedThe Unbelievable Oliver and the Four Jokers, with illustrations by Shane Pangburn. The book is about an eight-year-old boy who longs to be a professional magician.[13]A follow-up,The Unbelievable Oliver and the Sawed-in-Half Dads, was released on May 12, 2020.[14]In 2021, Bosch publishedThe Anti-Book, his first book under his real name, Raphael Simon.[15]
|
https://en.wikipedia.org/wiki/Pseudonymous_Bosch
|
Apseudonymous remailerornym server, as opposed to ananonymous remailer, is anInternetsoftware program designed to allow people to writepseudonymousmessages onUsenetnewsgroups and send pseudonymousemail. Unlike purely anonymous remailers, it assigns its users a user name, and it keeps a database of instructions on how to return messages to the real user. These instructions usually involve the anonymous remailer network itself, thus protecting the true identity of the user.
Early pseudonymous remailers recorded enough information to trace the identity of the real user, making it possible for someone to obtain that identity through legal or illegal means. This form of pseudonymous remailer is no longer common.
David Chaumwrote an article in 1981 that described many of the features present in modern pseudonymous remailers.[1]
ThePenet remailer, which lasted from 1993 to 1996, was a popular pseudonymous remailer.
Anym server(short for "pseudonymserver") is aserverthat provides an untraceable e-mail address, such that neither the nym server operator nor the operators of the remailers involved can discover which nym corresponds to which real identity.
To set up a nym, one creates aPGPkeypair and submits it to the nym server, along with instructions (called areply block) toanonymous remailers(such asCypherpunkorMixmaster) on how to send a message to one's real address. The nym server returns a confirmation through this reply block. One then sends a message to the address in the confirmation.
To send a message through the nym server so that theFromaddress is the nym, one adds a few headers,[clarification needed]signs the message with one's nym key, encrypts it with the nym server key, and sends the message to the nym server, optionally routing it through some anonymous remailers. When the nym server receives the message it decrypts it and sends it on to the intended recipient, with theFromaddress indicating one's nym.
When the nym server gets a message addressedtothe nym, it appends it to the nym's reply block and sends it to the first remailer in the chain, which sends it to the next, and so on until it reaches the user's real address. It is considered good practice to include instructions to encrypt the message on the way, so that someone (or some organization) doing in/outtraffic analysison the nym server cannot easily match the message the user receives to the one the nym server sent.
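The layered reply block can be illustrated with a toy cipher. XOR stands in for the PGP encryption a real Cypherpunk/Mixmaster chain would use, and the keys and address are invented.

```python
def xor(data: bytes, key: bytes) -> bytes:
    """Toy reversible cipher: XOR with a repeating key."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

remailer_keys = [b"hop1-key", b"hop2-key", b"hop3-key"]
real_address = b"user@example.org"

# The user builds the reply block inside-out: the innermost layer is
# the real address, wrapped once per remailer in the chain.
block = real_address
for key in reversed(remailer_keys):
    block = xor(block, key)

# In transit, each remailer peels exactly one layer; no single hop
# sees both the nym and the real address.
for key in remailer_keys:
    block = xor(block, key)
assert block == real_address
```

The same onion structure underlies the encrypt-on-the-way practice described above: each hop's transformation re-randomizes the ciphertext seen by a traffic analyst.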
Existing "multi-use reply block" nym servers were shown to be susceptible to passive traffic analysis with one month's worth of incomingspam(based on 2005 figures) in a paper byBram Cohen,Len Sassaman, andNick Mathewson.[2]
|
https://en.wikipedia.org/wiki/Pseudonymous_remailer
|
Public-key cryptography, orasymmetric cryptography, is the field of cryptographic systems that use pairs of related keys. Each key pair consists of apublic keyand a correspondingprivate key.[1][2]Key pairs are generated withcryptographicalgorithmsbased onmathematicalproblems termedone-way functions. Security of public-key cryptography depends on keeping the private key secret; the public key can be openly distributed without compromising security.[3]There are many kinds of public-key cryptosystems, with different security goals, includingdigital signature,Diffie–Hellman key exchange,public-key key encapsulation, and public-key encryption.
Public key algorithms are fundamental security primitives in moderncryptosystems, including applications and protocols that offer assurance of the confidentiality and authenticity of electronic communications and data storage. They underpin numerous Internet standards, such asTransport Layer Security (TLS),SSH,S/MIME, andPGP. Compared tosymmetric cryptography, public-key cryptography can be too slow for many purposes,[4]so these protocols often combine symmetric cryptography with public-key cryptography inhybrid cryptosystems.
Before the mid-1970s, all cipher systems usedsymmetric key algorithms, in which the samecryptographic keyis used with the underlying algorithm by both the sender and the recipient, who must both keep it secret. Of necessity, the key in every such system had to be exchanged between the communicating parties in some secure way prior to any use of the system – for instance, via asecure channel. This requirement is never trivial and very rapidly becomes unmanageable as the number of participants increases, or when secure channels are not available, or when, as is sensible cryptographic practice, keys are frequently changed. In particular, if messages are meant to be secure from other users, a separate key is required for each possible pair of users.
By contrast, in a public-key cryptosystem, the public keys can be disseminated widely and openly, and only the corresponding private keys need be kept secret.
The two best-known types of public key cryptography aredigital signatureand public-key encryption:
For example, a software publisher can create a signature key pair and include the public key in software installed on computers. Later, the publisher can distribute an update to the software signed using the private key, and any computer receiving an update can confirm it is genuine by verifying the signature using the public key. As long as the software publisher keeps the private key secret, even if a forger can distribute malicious updates to computers, they cannot convince the computers that any malicious updates are genuine.
For example, a journalist can publish the public key of an encryption key pair on a web site so that sources can send secret messages to the news organization in ciphertext.
Only the journalist who knows the corresponding private key can decrypt the ciphertexts to obtain the sources' messages—an eavesdropper reading email on its way to the journalist cannot decrypt the ciphertexts. However, public-key encryption does not concealmetadatalike what computer a source used to send a message, when they sent it, or how long it is.[9][10][11][12]Public-key encryption on its own also does not tell the recipient anything about who sent a message[8]:283[13][14]—it just conceals the content of the message.
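The asymmetry between the two operations can be made concrete with textbook RSA on toy numbers. This is unpadded and insecure, purely for intuition; real systems use vetted libraries and much larger keys.

```python
# Classic small-number RSA example: p = 61, q = 53.
p, q = 61, 53
n = p * q                             # modulus, part of both keys
e = 17                                # public exponent
d = pow(e, -1, (p - 1) * (q - 1))     # private exponent (modular inverse)

def encrypt(m: int) -> int:           # anyone can do this with (n, e)
    return pow(m, e, n)

def decrypt(c: int) -> int:           # only the private-key holder can
    return pow(c, d, n)

def sign(m: int) -> int:              # the private key produces signatures
    return pow(m, d, n)

def verify(m: int, sig: int) -> bool:  # anyone can check with (n, e)
    return pow(sig, e, n) == m

message = 42
assert decrypt(encrypt(message)) == message
assert verify(message, sign(message))
```

Encryption and signing use the same mathematics with the key roles swapped, which is why one key pair can serve both goals (though in practice separate pairs are used for each).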
One important issue is confidence/proof that a particular public key is authentic, i.e. that it is correct and belongs to the person or entity claimed, and has not been tampered with or replaced by some (perhaps malicious) third party. There are several possible approaches, including:
Apublic key infrastructure(PKI), in which one or more third parties – known ascertificate authorities– certify ownership of key pairs.TLSrelies upon this. This implies that the PKI system (software, hardware, and management) is trust-able by all involved.
A "web of trust" decentralizes authentication by using individual endorsements of links between a user and the public key belonging to that user.PGPuses this approach, in addition to lookup in thedomain name system(DNS). TheDKIMsystem for digitally signing emails also uses this approach.
The most obvious application of a public key encryption system is for encrypting communication to provideconfidentiality– a message that a sender encrypts using the recipient's public key, which can be decrypted only by the recipient's paired private key.
Another application in public key cryptography is thedigital signature. Digital signature schemes can be used for senderauthentication.
Non-repudiationsystems use digital signatures to ensure that one party cannot successfully dispute its authorship of a document or communication.
Further applications built on this foundation include:digital cash,password-authenticated key agreement,time-stamping servicesand non-repudiation protocols.
Because asymmetric key algorithms are nearly always much more computationally intensive than symmetric ones, it is common to use a public/privateasymmetrickey-exchange algorithmto establish and exchange a symmetric key, which is then used bysymmetric-key cryptographyto transmit data using the now-sharedsymmetric key.PGP,SSH, and theSSL/TLSfamily of schemes use this procedure; they are thus calledhybrid cryptosystems. The initialasymmetriccryptography-based exchange of a server-generatedsymmetrickey has the advantage of not requiring that a symmetric key be pre-shared manually, such as on printed paper or discs transported by a courier, while providing the higher data throughput of symmetric-key cryptography for the remainder of the shared connection.
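A sketch of the hybrid pattern, assuming a toy Diffie–Hellman exchange over a deliberately small prime, with an XOR stream standing in for a real symmetric cipher such as AES.

```python
import hashlib
import secrets

# Toy group parameters: real DH uses primes of 2048+ bits.
P = 0xFFFFFFFB  # a prime just below 2**32, for illustration only
G = 5

a = secrets.randbelow(P - 2) + 1      # server's ephemeral secret
b = secrets.randbelow(P - 2) + 1      # client's ephemeral secret
A, B = pow(G, a, P), pow(G, b, P)     # public values, exchanged openly

# Both sides compute the same shared secret without transmitting it.
shared_server = pow(B, a, P)
shared_client = pow(A, b, P)
assert shared_server == shared_client

# Derive a symmetric key from the shared secret.
sym_key = hashlib.sha256(str(shared_server).encode()).digest()

def stream_xor(data: bytes, key: bytes) -> bytes:
    # Stand-in for a real symmetric cipher; do not use for real traffic.
    return bytes(c ^ key[i % len(key)] for i, c in enumerate(data))

ct = stream_xor(b"bulk application data", sym_key)
assert stream_xor(ct, sym_key) == b"bulk application data"
```

The expensive asymmetric step happens once per session; all subsequent bulk data flows under the fast symmetric key, which is the throughput argument made above.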
As with all security-related systems, there are various potential weaknesses in public-key cryptography. Aside from poor choice of an asymmetric key algorithm (there are few that are widely regarded as satisfactory) or too short a key length, the chief security risk is that the private key of a pair becomes known. All security of messages, authentication, etc., will then be lost.
Additionally, with the advent ofquantum computing, many asymmetric key algorithms are considered vulnerable to attacks, and new quantum-resistant schemes are being developed to overcome the problem.[15][16]
All public key schemes are in theory susceptible to a "brute-force key search attack".[17]However, such an attack is impractical if the amount of computation needed to succeed – termed the "work factor" byClaude Shannon– is out of reach of all potential attackers. In many cases, the work factor can be increased by simply choosing a longer key. But other algorithms may inherently have much lower work factors, making resistance to a brute-force attack (e.g., from longer keys) irrelevant. Some special and specific algorithms have been developed to aid in attacking some public key encryption algorithms; bothRSAandElGamal encryptionhave known attacks that are much faster than the brute-force approach.[citation needed]None of these are sufficiently improved to be actually practical, however.
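The notion of work factor can be illustrated by brute-force factoring a toy modulus. Trial division is used here; real attacks and key sizes differ by many orders of magnitude.

```python
import math

def trial_factor(n: int) -> int:
    """Smallest prime factor of n by trial division (n if prime)."""
    for candidate in range(2, math.isqrt(n) + 1):
        if n % candidate == 0:
            return candidate
    return n

# Breaking a toy RSA modulus is instant:
assert trial_factor(61 * 53) == 53

# The loop bound grows as sqrt(n), i.e. roughly 2**(bits/2): every two
# extra bits of modulus roughly double the work, which is the sense in
# which a longer key raises the attacker's "work factor".
```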
Major weaknesses have been found for several formerly promising asymmetric key algorithms. The"knapsack packing" algorithmwas found to be insecure after the development of a new attack.[18]As with all cryptographic functions, public-key implementations may be vulnerable toside-channel attacksthat exploit information leakage to simplify the search for a secret key. These are often independent of the algorithm being used. Research is underway to both discover, and to protect against, new attacks.
Another potential security vulnerability in using asymmetric keys is the possibility of a"man-in-the-middle" attack, in which the communication of public keys is intercepted by a third party (the "man in the middle") and then modified to provide different public keys instead. Encrypted messages and responses must, in all instances, be intercepted, decrypted, and re-encrypted by the attacker using the correct public keys for the different communication segments so as to avoid suspicion.[citation needed]
A communication is said to be insecure where data is transmitted in a manner that allows for interception (also called "sniffing"). These terms refer to reading the sender's private data in its entirety. A communication is particularly unsafe when interceptions can not be prevented or monitored by the sender.[19]
A man-in-the-middle attack can be difficult to implement due to the complexities of modern security protocols. However, the task becomes simpler when a sender is using insecure media such as public networks, theInternet, or wireless communication. In these cases an attacker can compromise the communications infrastructure rather than the data itself. A hypothetical malicious staff member at anInternet service provider(ISP) might find a man-in-the-middle attack relatively straightforward. Capturing the public key would only require searching for the key as it gets sent through the ISP's communications hardware; in properly implemented asymmetric key schemes, this is not a significant risk.[citation needed]
In some advanced man-in-the-middle attacks, one side of the communication will see the original data while the other will receive a malicious variant. Asymmetric man-in-the-middle attacks can prevent users from realizing their connection is compromised. This remains so even when one user's data is known to be compromised because the data appears fine to the other user. This can lead to confusing disagreements between users such as "it must be on your end!" when neither user is at fault. Hence, man-in-the-middle attacks are only fully preventable when the communications infrastructure is physically controlled by one or both parties; such as via a wired route inside the sender's own building. In summation, public keys are easier to alter when the communications hardware used by a sender is controlled by an attacker.[20][21][22]
One approach to prevent such attacks involves the use of apublic key infrastructure(PKI); a set of roles, policies, and procedures needed to create, manage, distribute, use, store andrevokedigital certificates and manage public-key encryption. However, this has potential weaknesses.
For example, the certificate authority issuing the certificate must be trusted by all participating parties to have properly checked the identity of the key-holder, to have ensured the correctness of the public key when it issues a certificate, to be secure from computer piracy, and to have made arrangements with all participants to check all their certificates before protected communications can begin.Web browsers, for instance, are supplied with a long list of "self-signed identity certificates" from PKI providers – these are used to check thebona fidesof the certificate authority and then, in a second step, the certificates of potential communicators. An attacker who could subvert one of those certificate authorities into issuing a certificate for a bogus public key could then mount a "man-in-the-middle" attack as easily as if the certificate scheme were not used at all. An attacker who penetrates an authority's servers and obtains its store of certificates and keys (public and private) would be able to spoof, masquerade, decrypt, and forge transactions without limit, assuming that they were able to place themselves in the communication stream.
Despite its theoretical and potential problems, public key infrastructure is widely used. Examples includeTLSand its predecessorSSL, which are commonly used to provide security for web browser transactions (for example, most websites utilize TLS forHTTPS).
Aside from the resistance to attack of a particular key pair, the security of the certificationhierarchymust be considered when deploying public key systems. A certificate authority – usually a purpose-built program running on a server computer – vouches for the identities assigned to specific private keys by producing a digital certificate.Public key digital certificatesare typically valid for several years at a time, so the associated private keys must be held securely over that time. When a private key used for certificate creation higher in the PKI server hierarchy is compromised, or accidentally disclosed, then a "man-in-the-middle attack" is possible, making any subordinate certificate wholly insecure.
Most of the available public-key encryption software does not concealmetadatain the message header, which might include the identities of the sender and recipient, the sending date, subject field, and the software they use etc. Rather, only the body of the message is concealed and can only be decrypted with the private key of the intended recipient. This means that a third party could construct quite a detailed model of participants in a communication network, along with the subjects being discussed, even if the message body itself is hidden.
However, there has been a recent demonstration of messaging with encrypted headers, which obscures the identities of the sender and recipient, and significantly reduces the available metadata to a third party.[23]The concept is based around an open repository containing separately encrypted metadata blocks and encrypted messages. Only the intended recipient is able to decrypt the metadata block, and having done so they can identify and download their messages and decrypt them. Such a messaging system is at present in an experimental phase and not yet deployed. Scaling this method would reveal to the third party only the inbox server being used by the recipient and the timestamp of sending and receiving. The server could be shared by thousands of users, making social network modelling much more challenging.
During the earlyhistory of cryptography, two parties would rely upon a key that they would exchange by means of a secure, but non-cryptographic, method such as a face-to-face meeting, or a trusted courier. This key, which both parties must then keep absolutely secret, could then be used to exchange encrypted messages. A number of significant practical difficulties arise with this approach todistributing keys.
In his 1874 bookThe Principles of Science,William Stanley Jevonswrote:[24]
Can the reader say what two numbers multiplied together will produce the number8616460799?[25]I think it unlikely that anyone but myself will ever know.[24]
Here he described the relationship ofone-way functionsto cryptography, and went on to discuss specifically thefactorizationproblem used to create atrapdoor function. In July 1996, mathematicianSolomon W. Golombsaid: "Jevons anticipated a key feature of the RSA Algorithm for public key cryptography, although he certainly did not invent the concept of public key cryptography."[26]
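Jevons's challenge no longer stands: a few lines of trial division factor his number almost instantly on modern hardware, which is why practical trapdoor functions rely on vastly larger numbers. A minimal sketch:

```python
# Factor Jevons's challenge number by plain trial division; this
# finishes in a fraction of a second, illustrating why modern
# trapdoor functions need far larger numbers.
def smallest_factor(n: int) -> int:
    """Return the smallest prime factor of n (n > 1)."""
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d
        d += 1
    return n  # n itself is prime

n = 8616460799
p = smallest_factor(n)
print(p, n // p)  # 89681 96079
```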
In 1970,James H. Ellis, a British cryptographer at the UKGovernment Communications Headquarters(GCHQ), conceived of the possibility of "non-secret encryption" (now called public key cryptography), but could see no way to implement it.[27][28]
In 1973, his colleagueClifford Cocksimplemented what has become known as theRSA encryption algorithm, giving a practical method of "non-secret encryption", and in 1974 another GCHQ mathematician and cryptographer,Malcolm J. Williamson, developed what is now known asDiffie–Hellman key exchange.
The scheme was also passed to the US'sNational Security Agency.[29]Both organisations had a military focus and only limited computing power was available in any case; the potential of public key cryptography remained unrealised by either organization:
I judged it most important for military use ... if you can share your key rapidly and electronically, you have a major advantage over your opponent. Only at the end of the evolution fromBerners-Leedesigning an open internet architecture forCERN, its adaptation and adoption for theArpanet... did public key cryptography realise its full potential.
—Ralph Benjamin[29]
These discoveries were not publicly acknowledged for 27 years, until the research was declassified by the British government in 1997.[30]
In 1976, an asymmetric key cryptosystem was published byWhitfield DiffieandMartin Hellmanwho, influenced byRalph Merkle's work on public key distribution, disclosed a method of public key agreement. This method of key exchange, which usesexponentiation in a finite field, came to be known asDiffie–Hellman key exchange.[31]This was the first published practical method for establishing a shared secret-key over an authenticated (but not confidential) communications channel without using a prior shared secret. Merkle's "public key-agreement technique" became known asMerkle's Puzzles, and was invented in 1974 and only published in 1978. This makes asymmetric encryption a rather new field in cryptography although cryptography itself dates back more than 2,000 years.[32]
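The exchange can be illustrated with a short sketch; the prime (a Mersenne prime, 2^61 − 1) and generator below are assumptions chosen for readability, while real deployments use standardized groups of 2048 bits or more:

```python
import secrets

# Diffie-Hellman sketch over a toy prime field; parameters are
# illustrative assumptions, far too small for real use.
p = 2**61 - 1          # public prime modulus
g = 3                  # public base

a = secrets.randbelow(p - 2) + 1   # Alice's private exponent
b = secrets.randbelow(p - 2) + 1   # Bob's private exponent

A = pow(g, a, p)       # Alice -> Bob over the open channel
B = pow(g, b, p)       # Bob -> Alice over the open channel

k_alice = pow(B, a, p) # Alice computes (g^b)^a mod p
k_bob = pow(A, b, p)   # Bob computes (g^a)^b mod p

assert k_alice == k_bob  # both sides now hold the same shared secret
```

An eavesdropper sees p, g, A and B, but recovering the shared secret from them is the discrete logarithm problem, believed intractable for well-chosen large groups.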
In 1977, a generalization of Cocks's scheme was independently invented byRon Rivest,Adi ShamirandLeonard Adleman, all then atMIT. The latter authors published their work in 1978 inMartin Gardner'sScientific Americancolumn, and the algorithm came to be known asRSA, from their initials.[33]RSA usesexponentiation moduloa product of two very largeprimes, to encrypt and decrypt, performing both public key encryption and public key digital signatures. Its security is connected to the extreme difficulty offactoring large integers, a problem for which there is no known efficient general technique. A description of the algorithm was published in theMathematical Gamescolumn in the August 1977 issue ofScientific American.[34]
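The modular-exponentiation scheme can be sketched with the small textbook parameters often used for illustration; these are assumptions for readability and are far too small to be secure:

```python
# Toy RSA with tiny textbook primes; illustrative only.
p, q = 61, 53
n = p * q                 # public modulus: 3233
phi = (p - 1) * (q - 1)   # 3120
e = 17                    # public exponent, coprime to phi
d = pow(e, -1, phi)       # private exponent: modular inverse of e

m = 65                    # message, encoded as an integer < n
c = pow(m, e, n)          # encrypt: c = m^e mod n
assert pow(c, d, n) == m  # decrypt: c^d mod n recovers m

s = pow(m, d, n)          # sign with the private key
assert pow(s, e, n) == m  # anyone can verify with the public key
```

The same pair of operations thus supports both encryption (public key in, private key out) and signatures (private key in, public key out), as the article notes.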
Since the 1970s, a large number and variety of encryption, digital signature, key agreement, and other techniques have been developed, including theRabin cryptosystem,ElGamal encryption,DSAandECC.
Examples of well-regarded asymmetric key techniques for varied purposes include:
Examples of asymmetric key algorithms not yet widely adopted include:
Examples of notable – yet insecure – asymmetric key algorithms include:
Examples of protocols using asymmetric key algorithms include:
|
https://en.wikipedia.org/wiki/Public_key_encryption
|
Aring name[1]is a type ofstage nameornicknameused by an athlete such as aprofessional wrestler,mixed martial artist, orboxerwhose real name is considered unattractive, dull, difficult to pronounce or spell, amusing for the wrong reasons, or projecting the wrong image. Since the advent of theInternet, it is relatively easy to discover a fighter's real name.[2]
Ring names are much more common in professional wrestling than any other sport; famous examples include Terry Bollea becomingHulk Hogan, Shawn Hickenbottom becomingShawn Michaels, Roderick Toombs becomingRoddy Piper, Dwayne Johnson becomingThe Rock, Christopher Irvine becomingChris Jericho, and Phillip Jack Brooks becomingCM Punk. A number of wrestlers adopted their real name or a variation of it, sometimes modifying the spelling to better fit theirgimmick, such as Dave Bautista becomingBatista(later reverting to his real name for his Hollywood acting career), Patricia Stratigeas becomingTrish Stratus, Jonathan Good becomingJon Moxley,Bryan Danielsonbecoming Daniel Bryan (when he wrestled in WWE), Richard Fliehr becomingRic Flairand Randall Poffo becomingRandy Savage. Others simply use part of their name, such as Bill Goldberg usingGoldberg, Nicole Garcia-Colace usingNikki Bella, Mike Mizanin usingThe Miz, Cody Runnels usingCody Rhodes, and Michael Wardlow usingWardlow. Many female wrestlers go solely by their first name. It is also common for wrestlers of all genders to use a nickname in addition to their real name for marketability and other reasons.Ricky Steamboatis an atypical instance of a wrestler adopting a ring name to sound less intimidating, as his legal name of Richard Blood was considered unfitting for hisbabyfacepersona.[3]
Some (mostlyindependent) wrestlers still go to great lengths to ensure that their real names are not publicly known. It is a highly respected tradition in Mexicanlucha librefor performers to hide their true identities, usually wrestling under masks, and revealing a luchador's identity without their permission is considered a serious offence with real-life consequences. Professional wrestlers are often referred to by their contemporaries by their ring name. In interviews,Bret Hartregularly referred to Mark Calaway, Curt Hennig, and Kevin Nash by their ring names (The Undertaker,Mr. Perfect, andDiesel). Ring names are often trademarked by thepromotionthat creates a character or gimmick for a performer. It is common to see one performer use a variety of ring names throughout their career, even if their overall persona remains similar.
This is especially true inWWE, which has largely forced most wrestlers that have debuted since 2006 to use a WWE-owned ring name instead of a ring name that they used on the independent circuit or, such as withDaniel Bryanand a few others, their real name. One notable exception was made forDavid Otungabecause of his real marriage to singerJennifer Hudsonat the time, which gave WWE some mainstream exposure.
Low Kiused the alias "Senshi" during his secondTNAstint to reserve his primary ring name for other use. A similar example is the team known asThe Dudley BoyzinECWand WWE and Team 3D elsewhere. WWE trademarked the "Dudley Boyz" name, leading them to have to change their name when they went to TNA. The members' individual names were also trademarked by WWE, forcing them to have to change their names. WWE partially repealed the policy in 2015, allowing wrestlers who were well known in other promotions such asSamoa Joe,A.J. Styles,Shinsuke Nakamura,Austin Aries,Bobby Roode, andEric Youngto use their long-standing ring names (or, in Nakamura's and Roode's cases, their real names) as well as wrestlers who sign "Tier 2"NXTbrand contracts such asJohnny GarganoandTommaso Ciampa, who wrestle both on NXT and the independent circuit to keep their ring names (or, in Gargano's case, his real name). Gargano and Ciampa have since signed exclusive WWE contracts. "In-house" WWE wrestlers still use WWE-owned ring names.
In rare cases, the rights to a wrestler's ring name may be owned by a company with little or no connection to professional wrestling, such asMarvel Comics' ownership of the nameHulk Hoganuntil early 2003, which was due to Hogan being advertised as "The Incredible HulkHogan" early in his career, while Marvel owned the trademark for their comic book character. Sometimes, a wrestler will buy the rights to their own ring name; for example, Steve Borden owns the rights to the nameStingand licenses it to themusician of the same name.[citation needed]The wrestler formerly known asTesttook this one step further and legally changed his name to "Andrew Test Martin". Jim Hellwig, known asThe Ultimate Warrior, had his name legally changed to simply "Warrior".
In many cases, ring names evolve over time as the wrestler's gimmick changes, either subtly or dramatically. After debuting in WWE as the "ConnecticutBlueblood" Hunter Hearst Helmsley, Paul Levesque's character later morphed intoTriple H. A more drastic change sometimes occurs when a wrestler turnsheroicorvillainous, such as when Hulk Hogan joined the villainousnWo(New World Order) and became "Hollywood" Hulk Hogan. His new attitude was enhanced by changing his costume color scheme from his famous red and yellow to nWo's black and white. Steve Williams adopted the ring nameSteve Austinto avoid confusion with the then-more established performer"Dr. Death" Steve Williams. Austin would wrestle under that name for several years before signing with the WWF and being given the name "the Ringmaster". This gimmick failed to catch on, and Austin reverted to his established name, reaching his greatest level of success with the prefix "Stone Cold" Steve Austin.
Numerous boxers have used ring names or nicknames as their mode of identification during their professional boxing careers, particularly during the late 19th century and the early 20th century. The ring name "Kid" was particularly popular, indicating the boxer's comparative youth. Since the mid 20th century, ring names for boxers have typically been less common, although nicknames have become more popular in recent years. Famous examples of boxers who used ring names include:
|
https://en.wikipedia.org/wiki/Ring_name
|
Asecret identityis a person'scryptonym,incognito,coverand/oralter egowhich is not known to the general populace, most often used infiction. Brought intopopular culturebythe Scarlet Pimpernelin 1903, the concept was widespread in pulp heroes and is particularly prevalent in theAmerican comic bookgenre, and is a trope of themasquerade.[1]
In American comic books, a character typically has dual identities, one overt and one covert. The false orpublic identitybeing known to the general public as the "superheropersona" and the other being the secret identity. The private or secret identity is typically the superhero'slegal name, true identity, and/or "civilianpersona" when they are not actively assuming the superhero persona. It is kept hidden from their enemies and the general public to protect themselves from legal ramifications, pressure, or public scrutiny, as well as to protect their friends and loved ones from harm secondary to their actions as superheroes.
Occasionally, this trope is inverted. Examples of this are:
|
https://en.wikipedia.org/wiki/Secret_identity
|
InPersian,Turkic,HindustaniandPunjabi, the wordtakhallus(fromArabicتخلّص,lit.'to get liberated or to become secure';[1][2]Persian:تخلّص,romanized:takhallos;Azerbaijani:təxəllüs;Uzbek:taxallus;Urdu:تخلّص,Hindi:तख़ल्लुस) means apen name.[3]Pen names were widely adopted by Persian, Turkic, Urdu and Punjabi poets.[4]
Thetakhallusis often included in themaqta', the last couplet (bayt) of aghazal.[5][6]
Whileghazaloriginated in Arabia evolving fromqasida, some of the common features of contemporaryghazal, such as including thetakhallusin themaqta', the concept ofmatla', etc., did not exist in Arabicghazal. It was Persianghazalwhich added these features.[5]
List oftakhalluses of somePersian poets:
List oftakhalluses of some Urdu poets:
|
https://en.wikipedia.org/wiki/Takhallus
|
In computer science, afingerprinting algorithmis a procedure that maps an arbitrarily large data item (such as a computer file) to a much shorter bit string, itsfingerprint, that uniquely identifies the original data for all practical purposes just as human fingerprints uniquely identify people for practical purposes. This fingerprint may be used for data deduplication purposes. This is also referred to asfile fingerprinting,data fingerprinting, orstructured data fingerprinting.
Fingerprints are typically used to avoid the comparison and transmission of bulky data. For instance, a web browser or proxy server can efficiently check whether a remote file has been modified, by fetching only its fingerprint and comparing it with that of the previously fetched copy.
Fingerprint functions may be seen as high-performance hash functions used to uniquely identify substantial blocks of data where cryptographic hash functions may be unnecessary.
Special algorithms exist for audio and video fingerprinting.
To serve its intended purposes, a fingerprinting algorithm must be able to capture the identity of a file with virtual certainty. In other words, the probability of acollision— two files yielding the same fingerprint — must be negligible, compared to the probability of other unavoidable causes of fatal errors (such as the system being destroyed bywaror by ameteorite): say, 10^(−20) or less.
This requirement is somewhat similar to that of achecksumfunction, but is much more stringent. To detect accidental data corruption or transmission errors, it is sufficient that the checksums of the original file and any corrupted version will differ with near certainty, given some statistical model for the errors. In typical situations, this goal is easily achieved with 16- or 32-bit checksums. In contrast, file fingerprints need to be at least 64 bits long to guarantee virtual uniqueness in large file systems (seebirthday attack).
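The 64-bit figure follows from the birthday bound: with n files and w-bit fingerprints, the chance of at least one collision is roughly n²/2^(w+1). A quick sketch of the arithmetic:

```python
# Back-of-the-envelope birthday-bound check: approximate the
# probability that at least two of n files share a w-bit fingerprint
# as n^2 / 2^(w+1), capped at 1.
def collision_probability(n: int, w: int) -> float:
    return min(1.0, n * n / 2 ** (w + 1))

million = 10**6
print(collision_probability(million, 32))   # 1.0  (near-certain collision)
print(collision_probability(million, 64))   # ~2.7e-8
print(collision_probability(million, 128))  # ~1.5e-27 (negligible)
```

With a million files, a 32-bit checksum virtually guarantees a collision, while 64 bits already pushes the probability below other sources of system failure.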
When proving the above requirement, one must take into account that files are generated by highly non-random processes that create complicated dependencies among files. For instance, in a typical business network, one usually finds many pairs or clusters of documents that differ only by minor edits or other slight modifications. A good fingerprinting algorithm must ensure that such "natural" processes generate distinct fingerprints, with the desired level of certainty.
Computer files are often combined in various ways, such as concatenation (as inarchive files) or symbolic inclusion (as with theC preprocessor's#includedirective). Some fingerprinting algorithms allow the fingerprint of a composite file to be computed from the fingerprints of its constituent parts. This "compounding" property may be useful in some applications, such as detecting when a program needs to be recompiled.
Rabin's fingerprinting algorithmis the prototype of the class.[1]It is fast and easy to implement, allows compounding, and comes with a mathematically precise analysis of the probability of collision. Namely, the probability of two strings r and s yielding the same w-bit fingerprint does not exceed max(|r|, |s|)/2^(w−1), where |r| denotes the length of r in bits. The algorithm requires the previous choice of a w-bit internal "key", and this guarantee holds as long as the strings r and s are chosen without knowledge of the key.
Rabin's method is not secure against malicious attacks. An adversarial agent can easily discover the key and use it to modify files without changing their fingerprint.
Mainstreamcryptographicgrade hash functions generally can serve as high-quality fingerprint functions, are subject to intense scrutiny fromcryptanalysts, and have the advantage that they are believed to be safe against malicious attacks.
A drawback of cryptographic hash algorithms such asMD5andSHAis that they take considerably longer to execute than Rabin's fingerprint algorithm. They also lack proven guarantees on the collision probability. Some of these algorithms, notablyMD5, are no longer recommended for secure fingerprinting. They are still useful for error checking, where purposeful data tampering is not a primary concern.
NISTdistributes a software reference library, the AmericanNational Software Reference Library, that uses cryptographic hash functions to fingerprint files and map them to software products. TheHashKeeperdatabase, maintained by theNational Drug Intelligence Center, is a repository of fingerprints of "known to be good" and "known to be bad" computer files, for use in law enforcement applications (e.g. analyzing the contents of seized disk drives).
Fingerprinting is currently the most widely applied approach to content similarity detection. This method forms representative digests of documents by selecting a set of multiple substrings (n-grams) from them. The sets represent the fingerprints and their elements are called minutiae.[4][5]
|
https://en.wikipedia.org/wiki/Fingerprint_(computing)
|
Key/Config-authenticationis used to solve the problem of authenticating the keys of a person (say "person A") that some other person ("person B") is talking to or trying to talk to. In other words, it is the process of assuring that the key of "person A", held by "person B", does in fact belong to "person A" and vice versa.
This is usually done after the keys have been shared between the two parties over some secure channel. However, some algorithms share the keys at the time of authentication.
The simplest solution for this kind of problem is for the two concerned users to communicate and exchange keys. However, for systems in which there are a large number of users or in which the users do not personally know each other (e.g., Internet shopping), this is not practical. There are various algorithms for both symmetric keys and asymmetric public key cryptography to solve this problem.
For key authentication using the traditionalsymmetric keycryptography, this is the problem of assuring that there is noman-in-the-middle attackerwho is trying to read or spoof the communication. There are various algorithms used nowadays to prevent such attacks. The most common among them areDiffie–Hellman key exchange, authentication using akey distribution center,Kerberos, and theNeedham–Schroeder protocol. Other methods that can be used includepassword-authenticated key agreementprotocols.[1]
Crypto systemsusingasymmetric keyalgorithms do not evade the problem either. That a public key can be known by all without compromising the security of an encryption algorithm (for some such algorithms, though not for all) is certainly useful, but does not prevent some kinds of attacks. For example, a spoofing attack in which public keyAis claimed publicly to be that of user Alice, but is in fact a public key belonging toman-in-the-middle attackerMallet, is easily possible. No public key is inherently bound to any particular user, and any user relying on a defective binding (including Alice herself when she sends herself protected messages) will have trouble.
The most common solution to this problem is the use ofpublic key certificatesand certificate authorities (CAs) for them in apublic-key infrastructure(PKI) system. Thecertificate authority(CA) acts as a 'trusted third party' for the communicating users and, using cryptographic binding methods (e.g.,digital signatures) represents to both parties involved that the public keys each holds which allegedly belong to the other, actually do so. A digitalnotaryservice, if you will. Such CAs can be private organizations providing such assurances, or government agencies, or some combination of the two. However, in a significant sense, this merely moves the key authentication problem back one level for any CA may make a good faith certification of some key but, through error or malice, be mistaken. Any reliance on a defective key certificate 'authenticating' a public key will cause problems. As a result, many people find all PKI designs unacceptably insecure.
Accordingly, key authentication methods are being actively researched.
|
https://en.wikipedia.org/wiki/Key_authentication
|
TheRabin fingerprinting scheme(akaPolynomial fingerprinting) is a method for implementingfingerprintsusingpolynomialsover afinite field. It was proposed byMichael O. Rabin.[1]
Given ann-bit messagem0,...,mn-1, we view it as a polynomial of degreen− 1 over thefinite fieldGF(2): f(x) = m0 + m1x + ⋯ + mn−1x^(n−1).
We then pick a randomirreducible polynomialp(x) of degreekover GF(2), and we define the fingerprint of the messagemto be the remainderr(x) after division off(x) byp(x) over GF(2), which can be viewed as a polynomial of degreek− 1 or as ak-bit number.
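A minimal sketch of this construction, representing GF(2) polynomials as Python integers (bit i holds the coefficient of x^i); the irreducible polynomial chosen below is an illustrative assumption:

```python
# Rabin-style fingerprint sketch: polynomial remainder over GF(2),
# with polynomials stored as Python integers.
def gf2_mod(f: int, p: int) -> int:
    """Remainder of f(x) divided by p(x), coefficients in GF(2)."""
    k = p.bit_length() - 1           # degree of p(x)
    while f.bit_length() - 1 >= k:
        shift = f.bit_length() - 1 - k
        f ^= p << shift              # subtract (== XOR) a multiple of p
    return f

# x^8 + x^4 + x^3 + x + 1, irreducible over GF(2) (an assumed choice;
# a real deployment would draw p(x) at random, unknown to adversaries)
p = 0b100011011
message = int.from_bytes(b"hello", "big")
print(f"{gf2_mod(message, p):02x}")  # a k-bit (here 8-bit) fingerprint
```

Because remainder-taking is linear over GF(2), fingerprints of concatenated or XORed data can be combined from the parts, which is the "compounding" property mentioned earlier.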
Many implementations of theRabin–Karp algorithminternally use Rabin fingerprints.
TheLow Bandwidth Network Filesystem(LBFS) from MIT uses Rabin fingerprints to implement variable size shift-resistant blocks.[2]The basic idea is that the filesystem computes thecryptographic hashof each block in a file. To save on transfers between the client and server,
they compare their checksums and only transfer blocks whose checksums differ. One problem with this scheme is that a single insertion at the beginning of the file will cause every checksum to change if fixed-size (e.g. 4 KB) blocks are used. So the idea is to select blocks not based on a specific offset but rather by some property of the block contents. LBFS does this by sliding a 48-byte window over the file and computing the Rabin fingerprint of each window. When the low 13 bits of the fingerprint are zero, LBFS calls those 48 bytes a breakpoint, ends the current block, and begins a new one. Since the output of Rabin fingerprints ispseudo-random, the probability of any given 48 bytes being a breakpoint is 2^(−13) (1 in 8192). This has the effect of shift-resistant variable-size blocks. Anyhash functioncould be used to divide a long file into blocks (as long as acryptographic hash functionis then used to find the checksum of each block): but the Rabin fingerprint is an efficientrolling hash, since the computation of the Rabin fingerprint of regionBcan reuse some of the computation of the Rabin fingerprint of regionAwhen regionsAandBoverlap.
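The boundary-selection idea can be sketched as follows; for brevity each window is hashed from scratch with a CRC standing in for the Rabin fingerprint, which a real implementation would compute as a rolling hash:

```python
import random
import zlib

WINDOW = 48            # bytes per sliding window, as in LBFS
MASK = (1 << 13) - 1   # boundary when the low 13 bits are zero

def chunk_boundaries(data: bytes) -> list[int]:
    """Offsets at which the windowed hash marks a block boundary."""
    cuts = []
    for i in range(WINDOW, len(data) + 1):
        if zlib.crc32(data[i - WINDOW:i]) & MASK == 0:
            cuts.append(i)             # ~1 in 8192 positions qualify
    return cuts

blob = random.Random(0).randbytes(1 << 16)
cuts = chunk_boundaries(blob)

# Prepending bytes shifts every boundary by the same amount, so all
# later blocks (and their checksums) survive the edit unchanged:
shifted = chunk_boundaries(b"XXXXX" + blob)
assert shifted[len(shifted) - len(cuts):] == [c + 5 for c in cuts]
```

Only boundaries whose windows overlap the edited region can change; everything downstream is preserved, which is exactly the shift-resistance the fixed-block scheme lacks.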
Note that this is a problem similar to that faced byrsync.[example needed]
|
https://en.wikipedia.org/wiki/Rabin_fingerprint
|
AnSSH clientis a software program which uses thesecure shellprotocol to connect to aremote computer. This article compares a selection of notable clients.
Theoperating systemsorvirtual machinesthe SSH clients are designed to run on withoutemulationinclude several possibilities:
The list is not exhaustive, but rather reflects the most common platforms today.
This table lists standard authentication key algorithms implemented by SSH clients. Some SSH implementations include both server and client implementations and support custom non-standard authentication algorithms not listed in this table.
|
https://en.wikipedia.org/wiki/Comparison_of_SSH_clients
|
AnSSH serveris a software program which uses theSecure Shellprotocol to accept connections from remote computers.SFTP/SCPfile transfers and remote terminal connections are popular use cases for an SSH server.
Theoperating systemsorvirtual machinesthe SSH servers are designed to run on withoutemulation; there are several possibilities:
The list is not exhaustive, but rather reflects the most common platforms today.
|
https://en.wikipedia.org/wiki/Comparison_of_SSH_servers
|
Corkscrewis a computer program, written by Patrick Padgett, that enables the user to tunnelSSHconnections through mostHTTPandHTTPSproxy servers.[1]Combined with features ofSSHsuch asport forwarding, this can allow many types of services to be run securely over the SSH via HTTP connections.
Supported proxy servers:
|
https://en.wikipedia.org/wiki/Corkscrew_(program)
|
TheIdent Protocol(Identification Protocol,Ident), specified inRFC 1413, is anInternetprotocolthat helps identify the user of a particularTCPconnection. One populardaemon programfor providing the ident service isidentd.
The Ident Protocol is designed to work as a serverdaemon, on auser's computer, where it receives requests to a specifiedTCP port, generally 113. In the query, a client specifies a pair ofTCP ports(a local and a remote port), encoded asASCIIdecimals and separated by a comma (,). The server then sends a response that identifies the username of the user who runs the program that uses the specified pair of TCP ports, or specifies an error.
Suppose host A wants to know the name of the user who is connecting to its TCP port 23 (Telnet) from the client's (host B) port 6191. Host A would then open a connection to the ident service on host B, and issue the following query:
6191, 23
As TCP connections generally use one unique local port (6191 in this case), host B can unambiguously identify the program that has initiated the specified connection to host A's port 23, should it exist. Host B would then issue a response, identifying the user ("stjohns" in this example) who owns the program that initiated this connection and the name of its localoperating system:
6191, 23 : USERID : UNIX : stjohns
But if it would turn out that no such connection exists on host B, it would instead issue an error response:
6191, 23 : ERROR : NO-USER
All ident messages should be delimited by anend of linesequence consisting of the carriage return and linefeed characters (CR+LF).[1]
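An exchange in this format can be sketched as a small client; the reply parser below and the host name in the usage comment are illustrative assumptions:

```python
import socket

# Sketch of an RFC 1413 exchange: parse_ident_reply handles both
# USERID and ERROR replies; ident_query performs the network call.

def parse_ident_reply(line: str) -> dict:
    """Split '<ports> : <type> : <info>...' into its fields."""
    ports, resp_type, *info = [part.strip() for part in line.split(":", 3)]
    server_port, client_port = [int(p) for p in ports.split(",")]
    return {"server_port": server_port, "client_port": client_port,
            "type": resp_type, "info": info}

def ident_query(host: str, server_port: int, client_port: int) -> dict:
    with socket.create_connection((host, 113), timeout=10) as s:
        # Query is the two ports, comma-separated, CR+LF terminated
        s.sendall(f"{server_port}, {client_port}\r\n".encode("ascii"))
        reply = s.makefile("r", encoding="ascii", errors="replace").readline()
    return parse_ident_reply(reply.rstrip("\r\n"))

# e.g. ident_query("host-b.example.org", 6191, 23) might return
# {'server_port': 6191, 'client_port': 23,
#  'type': 'USERID', 'info': ['UNIX', 'stjohns']}
```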
Dialup hosts or shared shell servers often provide ident to enable abuse to be tracked back to specific users. In the case that abuse is handled on this host, the concern about trusting the ident daemon is mostly irrelevant. Spoofing of the service and privacy concerns can be avoided by providing varyingcryptographically strongtokens instead of real usernames.
If abuse is to be handled by the administrators of the service that users connect to using the ident providing host, then the ident service must provide information identifying each user. Usually, it is impossible for the administrators of the remote service to know whether specific users are connecting via a trustable server or from a computer they themselves control. In the latter case the ident service provides no reliable information.
The usefulness of Ident for proving of a known identity to a remote host is limited to circumstances when:
The ident protocol is considered dangerous because it allowscrackersto gain a list ofusernameson acomputer systemwhich can later be used for attacks. A generally accepted solution to this is to set up a generic/generated identifier, returningnodeinformation or evengibberish(from the requester's point of view) rather than usernames. This gibberish may be turned into real usernames by the ident administrator, when they are contacted about possible abuse, which means the usefulness for tracking abuse is preserved.
Ident is important onIRCas a large number of people connect to IRC from a server shared by multiple users, often using abouncer. Without Ident, there would be no way to ban a single user without banning the entire host. The server administrator may also use this information to identify the abusive user.
On most IRC networks, when the server fails to get an Ident response, it falls back to the username given by client, but marks it as "not verified", usually by prefixing with a tilde; e.g.,~josh. Some IRC servers even go as far as blocking clients without an ident response,[2]the main reason being that it makes it much harder to connect via an "open proxy" or a system where you have compromised a single account of some form but do not haveroot(on Unix-like systems, only root can listen for network connections on ports below 1024).
However, Ident provides no additional authentication when the user is connecting directly from their personal computer, on which they have enough privileges to control the Ident daemon as well.[1]
|
https://en.wikipedia.org/wiki/Ident_protocol
|
OpenSSH(also known asOpenBSD Secure Shell[a]) is a suite ofsecurenetworkingutilities based on theSecure Shell(SSH) protocol, which provides asecure channelover an unsecured network in aclient–serverarchitecture.[4][5]
OpenSSH started as aforkof thefreeSSH program developed by Tatu Ylönen; later versions of Ylönen's SSH wereproprietary softwareoffered bySSH Communications Security.[6]OpenSSH was first released in 1999 and is currently developed as part of theOpenBSDoperating system.
OpenSSH is not a single computer program, but rather a suite of programs that serve as alternatives to unencrypted protocols likeTelnetandFTP. OpenSSH is integrated into several operating systems, namelyMicrosoft Windows,macOSand mostLinuxoperating systems,[7][8]while theportableversion is available as a package in other systems.[9][10][11]
OpenBSD Secure Shell was created by OpenBSD developers as an alternative to the original SSH software by Tatu Ylönen, which is now proprietary software.[12] Although source code is available for the original SSH, various restrictions are imposed on its use and distribution. OpenSSH was created as a fork of Björn Grönvall's OSSH, which itself was a fork of Tatu Ylönen's original free SSH 1.2.12 release,[13] the last one with a license suitable for forking.[14][15] The OpenSSH developers claim that their application is more secure than the original, due to their policy of producing clean and audited code and because it is released under the BSD license, the open-source license to which the word open in the name refers.
OpenSSH first appeared in OpenBSD 2.6. The first portable release was made in October 1999.[16] Developments since then have included the addition of ciphers (e.g., ChaCha20-Poly1305 in 6.5 of January 2014[17]), cutting the dependency on OpenSSL (6.7, October 2014[18]) and an extension to facilitate public-key discovery and rotation for trusted hosts (for the transition from DSA to Ed25519 public host keys, version 6.8 of March 2015[19]).
On 19 October 2015, Microsoft announced that OpenSSH would be natively supported on Microsoft Windows and accessible through PowerShell, releasing an early implementation and making the code publicly available.[20] OpenSSH-based client and server programs have been included in Windows 10 since version 1803. The SSH client and key agent are enabled and available by default, and the SSH server is an optional Feature-on-Demand.[21]
In October 2019, protection for private keys at rest in RAM against speculation and memory side-channel attacks was added in OpenSSH 8.1.[22]
OpenSSH is developed as part of the OpenBSD operating system. Rather than including changes for other operating systems directly into OpenSSH, a separate portability infrastructure is maintained by the OpenSSH Portability Team, and "portable releases" are made periodically. This infrastructure is substantial, partly because OpenSSH is required to perform authentication, a capability that has many varying implementations. This model is also used for other OpenBSD projects such as OpenNTPD.
The OpenSSH suite includes the following command-line utilities and daemons:
The OpenSSH server can authenticate users using the standard methods supported by the SSH protocol: with a password; public-key authentication, using per-user keys; host-based authentication, which is a secure version of rlogin's host trust relationships using public keys; keyboard-interactive, a generic challenge–response mechanism, which is often used for simple password authentication but can also make use of stronger authenticators such as tokens; and Kerberos/GSSAPI. The server makes use of authentication methods native to the host operating system; this can include using the BSD Authentication system or pluggable authentication modules (PAM) to enable additional authentication through methods such as one-time passwords. However, this occasionally has side effects: when using PAM with OpenSSH, it must be run as root, as root privileges are typically required to operate PAM. OpenSSH versions after 3.7 (16 September 2003) allow PAM to be disabled at run-time, so regular users can run sshd instances.
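These methods map onto sshd_config directives; a minimal sketch (the option names are standard OpenSSH options, but defaults vary between versions and distributions):

```
# /etc/ssh/sshd_config (excerpt)
PasswordAuthentication yes          # plain password
PubkeyAuthentication yes            # per-user public keys
HostbasedAuthentication no          # rlogin-style host trust via public keys
KbdInteractiveAuthentication yes    # challenge-response, e.g. one-time passwords
GSSAPIAuthentication no             # Kerberos/GSSAPI
UsePAM yes                          # route authentication through PAM where supported
```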
On OpenBSD, OpenSSH uses a dedicated sshd user by default to drop privileges and perform privilege separation in accordance with the principle of least privilege, applied throughout the operating system including the Xenocara X server.
OpenSSH includes the ability to set up a secured channel through which data sent to local, client-side Unix domain sockets or local, client-side TCP ports may be "forwarded" (sent across the secured channel) for routing on the server side; when this forwarding is set up, the server is instructed to send that forwarded data to some socket or TCP host/port (the host could be the server itself, "localhost"; or the host may be some other computer, so that it appears to the other computer that the server is the originator of the data). The forwarding of data is bidirectional, meaning that any return communication is itself forwarded back to the client side in the same manner. This is known as an "SSH tunnel",[23] and it can be used to multiplex additional TCP connections over a single SSH connection (since 2004[24]), to conceal connections, to encrypt protocols that are otherwise unsecured, and to circumvent firewalls by sending/receiving all manner of data through one port that is allowed by the firewall. For example, an X Window System tunnel may be created automatically when using OpenSSH to connect to a remote host, and other protocols, such as HTTP and VNC, may be forwarded easily.[25]
Tunneling a TCP-encapsulating payload (such as PPP) over a TCP-based connection (such as SSH's port forwarding) is known as "TCP-over-TCP", and doing so can induce a dramatic loss in transmission performance due to the TCP meltdown problem,[26][27] which is why virtual private network software may instead use a protocol simpler than TCP for the tunnel connection. However, this is often not a problem when using OpenSSH's port forwarding, because many use cases do not entail TCP-over-TCP tunneling; the meltdown is avoided because the OpenSSH client processes the local, client-side TCP connection in order to get to the actual payload that is being sent, and then sends that payload directly through the tunnel's own TCP connection to the server side, where the OpenSSH server similarly "unwraps" the payload in order to "wrap" it up again for routing to its final destination.[28]
In addition, some third-party software includes support for tunnelling over SSH. These include DistCC, CVS, rsync, and Fetchmail. On some operating systems, remote file systems can be mounted over SSH using tools such as sshfs (using FUSE).
An ad hoc SOCKS proxy server may be created using OpenSSH. This allows more flexible proxying than is possible with ordinary port forwarding.
Beginning with version 4.3, OpenSSH implements an OSI layer 2/3 tun-based VPN. This is the most flexible of OpenSSH's tunnelling capabilities, allowing applications to transparently access remote network resources without modifications to make use of SOCKS.[29]
OpenSSH supports the following public key types:[30][31]
Before version 5.2 of OpenSSH, it was possible for an attacker to recover up to 14 bits of plaintext with a success probability of 2⁻¹⁴.[39] The vulnerability was related to the CBC encryption mode. The AES CTR mode and arcfour ciphers are not vulnerable to this attack.
A local privilege escalation vulnerability existed in OpenSSH 6.8 to 6.9 (CVE-2015-6565) due to world-writable (mode 622) TTY devices, which was believed to be a denial-of-service vulnerability.[40] With the use of the TIOCSTI ioctl, it was possible for authenticated users to inject characters into other users' terminals and execute arbitrary commands on Linux.[41]
Malicious or compromised OpenSSH servers could read sensitive information on the client, such as private login keys for other systems, using a vulnerability that relies on the undocumented connection-resuming feature of the OpenSSH client, called roaming, which is enabled by default on the client but not supported on the OpenSSH server. This applies to versions 5.4 (released on 8 March 2010[42]) to 7.1 of the OpenSSH client, and was fixed in OpenSSH 7.1p2, released on 14 January 2016. The CVE numbers associated with this vulnerability are CVE-2016-0777 (information leak) and CVE-2016-0778 (buffer overflow).[43][44]
On March 29, 2024, a serious supply chain attack on XZ Utils was reported, indirectly targeting the OpenSSH server (sshd) running on Linux. The OpenSSH code is not directly affected; the backdoor is reached through sshd's dependency on liblzma via libsystemd, introduced by a third-party patch applied by various Linux distributions.[citation needed]
On July 1, 2024, the RegreSSHion security vulnerability was disclosed, which could enable a remote attacker to cause OpenSSH to execute arbitrary code and gain full root access. It was inadvertently introduced in OpenSSH 8.5p1 in October 2020, and was patched in version 9.8/9.8p1.[45][46]
In February 2001, Tatu Ylönen, chairman and CTO of SSH Communications Security, informed the OpenSSH development mailing list that the company intended to assert its ownership of the "SSH" and "Secure Shell" trademarks,[47] and sought to change references to the protocol to "SecSH" or "secsh" in order to maintain control of the "SSH" name. He proposed that OpenSSH change its name in order to avoid a lawsuit, a suggestion that developers resisted. OpenSSH developer Damien Miller replied urging Ylönen to reconsider, arguing that "SSH" had long since been a generic trademark.[48]
At the time, "SSH", "Secure Shell" and "ssh" had appeared in documents proposing the protocol as an open standard. Without marking these within the proposal as registered trademarks, Ylönen ran the risk of relinquishing all exclusive rights to the name as a means of describing the protocol. Improper use of a trademark, or allowing others to use a trademark incorrectly, results in the trademark becoming a generic term, like Kleenex or Aspirin, which opens the mark to use by others.[49] After studying the USPTO trademark database, many online pundits opined that the term "ssh" was not trademarked, merely the logo using the lowercase letters "ssh". In addition, the six years between the company's creation and the time when it began to defend its trademark, and the fact that only OpenSSH was receiving threats of legal repercussions, weighed against the trademark's validity.[50]
Both developers of OpenSSH and Ylönen himself were members of the IETF working group developing the new standard; after several meetings this group denied Ylönen's request to rename the protocol, citing concerns that it would set a bad precedent for other trademark claims against the IETF. The participants argued that both "Secure Shell" and "SSH" were generic terms and could not be trademarks.[6]
|
https://en.wikipedia.org/wiki/OpenSSH
|
In computer networks, a tunneling protocol is a communication protocol which allows for the movement of data from one network to another. It can, for example, allow private network communications to be sent across a public network (such as the Internet), or allow one network protocol to be carried over an incompatible network, through a process called encapsulation.
Because tunneling involves repackaging the traffic data into a different form, perhaps with encryption as standard, it can hide the nature of the traffic that is run through a tunnel.
Tunneling protocols work by using the data portion of a packet (the payload) to carry the packets that actually provide the service. Tunneling uses a layered protocol model such as those of the OSI or TCP/IP protocol suite, but usually violates the layering when using the payload to carry a service not normally provided by the network. Typically, the delivery protocol operates at an equal or higher level in the layered model than the payload protocol.
A tunneling protocol may, for example, allow a foreign protocol to run over a network that does not support that particular protocol, such as running IPv6 over IPv4.
Another important use is to provide services that are impractical or unsafe to be offered using only the underlying network services, such as providing a corporate network address to a remote user whose physical network address is not part of the corporate network.
Users can also use tunneling to "sneak through" a firewall, using a protocol that the firewall would normally block, but "wrapped" inside a protocol that the firewall does not block, such as HTTP. If the firewall policy does not specifically exclude this kind of "wrapping", this trick can function to get around the intended firewall policy (or any set of interlocked firewall policies).
Another HTTP-based tunneling method uses the HTTP CONNECT method/command. A client issues the HTTP CONNECT command to an HTTP proxy. The proxy then makes a TCP connection to a particular server:port, and relays data between that server:port and the client connection.[1] Because this creates a security hole, CONNECT-capable HTTP proxies commonly restrict access to the CONNECT method. The proxy allows connections only to specific ports, such as 443 for HTTPS.[2]
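The CONNECT exchange is plain text; a sketch of the request a client would send to the proxy (the host name is a placeholder):

```python
def connect_request(host: str, port: int) -> bytes:
    """Build the HTTP CONNECT request that asks a proxy to open a raw
    TCP relay to host:port (typically port 443 for HTTPS)."""
    return (f"CONNECT {host}:{port} HTTP/1.1\r\n"
            f"Host: {host}:{port}\r\n"
            f"\r\n").encode("ascii")
```

If the proxy accepts, it answers with a 2xx status line (commonly "HTTP/1.1 200 Connection established") and from then on relays raw bytes in both directions.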
Other tunneling methods able to bypass network firewalls make use of different protocols, such as DNS,[3] MQTT,[4] and SMS.[5]
As an example of network layer over network layer, Generic Routing Encapsulation (GRE), a protocol running over IP (IP protocol number 47), often serves to carry IP packets, with RFC 1918 private addresses, over the Internet using delivery packets with public IP addresses. In this case, the delivery and payload protocols are the same, but the payload addresses are incompatible with those of the delivery network.
It is also possible to establish a connection using the data link layer. The Layer 2 Tunneling Protocol (L2TP) allows the transmission of frames between two nodes. A tunnel is not encrypted by default: the TCP/IP protocol chosen determines the level of security.
SSH uses port 22 to enable data encryption of payloads being transmitted over a public network (such as the Internet) connection, thereby providing VPN functionality. IPsec has an end-to-end Transport Mode, but can also operate in a tunneling mode through a trusted security gateway.
To understand a particular protocol stack imposed by tunneling, network engineers must understand both the payload and delivery protocol sets.
Tunneling a TCP-encapsulating payload (such as PPP) over a TCP-based connection (such as SSH's port forwarding) is known as "TCP-over-TCP", and doing so can induce a dramatic loss in transmission performance, known as the TCP meltdown problem,[6][7] which is why virtual private network (VPN) software may instead use a protocol simpler than TCP for the tunnel connection. TCP meltdown occurs when a TCP connection is stacked on top of another. The underlying layer may detect a problem and attempt to compensate, and the layer above it then overcompensates because of that, and this overcompensation causes the delays and degraded transmission performance.
A Secure Shell (SSH) tunnel consists of an encrypted tunnel created through an SSH protocol connection. Users may set up SSH tunnels to transfer unencrypted traffic over a network through an encrypted channel. It is a software-based approach to network security and the result is transparent encryption.[8]
For example, Microsoft Windows machines can share files using the Server Message Block (SMB) protocol, a non-encrypted protocol. If one were to mount a Microsoft Windows file-system remotely through the Internet, someone snooping on the connection could see transferred files. To mount the Windows file-system securely, one can establish an SSH tunnel that routes all SMB traffic to the remote fileserver through an encrypted channel. Even though the SMB protocol itself contains no encryption, the encrypted SSH channel through which it travels offers security.
Once an SSH connection has been established, the tunnel starts with SSH listening to a port on the remote or local host. Any connections to it are forwarded to the specified address and port, originating from the opposing (remote or local, as previously) host.
The TCP meltdown problem is often not a problem when using OpenSSH's port forwarding, because many use cases do not entail TCP-over-TCP tunneling; the meltdown is avoided because the OpenSSH client processes the local, client-side TCP connection in order to get to the actual payload that is being sent, and then sends that payload directly through the tunnel's own TCP connection to the server side, where the OpenSSH server similarly "unwraps" the payload in order to "wrap" it up again for routing to its final destination.[9] Naturally, this wrapping and unwrapping also occurs in the reverse direction of the bidirectional tunnel.
SSH tunnels provide a means to bypass firewalls that prohibit certain Internet services – so long as a site allows outgoing connections. For example, an organization may prohibit a user from accessing Internet web pages (port 80) directly without passing through the organization's proxy filter (which provides the organization with a means of monitoring and controlling what the user sees through the web). But users may not wish to have their web traffic monitored or blocked by the organization's proxy filter. If users can connect to an external SSH server, they can create an SSH tunnel to forward a given port on their local machine to port 80 on a remote web server. To access the remote web server, users would point their browser to the local port at http://localhost/
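With the OpenSSH client this is a single `-L` flag; a sketch that assembles the invocation (the host names and local port 8080 are illustrative placeholders):

```python
def local_forward_cmd(local_port: int, remote_host: str,
                      remote_port: int, gateway: str) -> list:
    """Build an `ssh -L` command line: connections to 127.0.0.1:local_port
    travel over the SSH connection to `gateway`, which then connects on
    to remote_host:remote_port."""
    return ["ssh",
            "-N",  # no remote command; tunnel only
            "-L", f"{local_port}:{remote_host}:{remote_port}",
            gateway]
```

For example, `local_forward_cmd(8080, "webserver.example.org", 80, "user@ssh.example.org")` could be passed to `subprocess.run`, after which the browser is pointed at http://localhost:8080/.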
Some SSH clients support dynamic port forwarding that allows the user to create a SOCKS 4/5 proxy. In this case users can configure their applications to use their local SOCKS proxy server. This gives more flexibility than creating an SSH tunnel to a single port as previously described. SOCKS can free the user from the limitations of connecting only to a predefined remote port and server. If an application does not support SOCKS, a proxifier can be used to redirect the application to the local SOCKS proxy server. Some proxifiers, such as Proxycap, support SSH directly, thus avoiding the need for an SSH client.
In recent versions of OpenSSH it is even allowed to create layer 2 or layer 3 tunnels if both ends have enabled such tunneling capabilities. This creates tun (layer 3, default) or tap (layer 2) virtual interfaces on both ends of the connection. This allows normal network management and routing to be used, and when used on routers, the traffic for an entire subnetwork can be tunneled. A pair of tap virtual interfaces function like an Ethernet cable connecting both ends of the connection and can join kernel bridges.
Over the years, tunneling and data encapsulation in general have frequently been adopted for malicious reasons, in order to communicate covertly outside of a protected network.
In this context, known tunnels involve protocols such as HTTP,[10] SSH,[11] DNS,[12][13] and MQTT.[14]
|
https://en.wikipedia.org/wiki/Secure_Shell_tunneling
|
Web-based SSH is the provision of Secure Shell (SSH) access through a web browser. SSH is a secure network protocol that is commonly used to remotely control servers, network devices, and other devices. With web-based SSH, users can access and manage these devices using a standard web browser, without the need to install any additional software.
Web-based SSH clients are typically implemented using JavaScript and either Ajax or WebSockets. These clients communicate with the SSH server through a proxy, which allows them to bypass firewalls and other network security measures that may block SSH traffic. This makes web-based SSH a convenient and secure way to access remote servers and devices from any location with an internet connection.
Web-based SSH is useful for a variety of purposes, including system administration, network management, and remote development. It is often used by IT professionals and developers to remotely access and manage servers, as well as by individuals who need to access their home or office computers from a remote location.
Web-based SSH clients are applications that allow users to access Secure Shell (SSH) servers through a web browser. They consist of two main parts: a client-side component, which is typically implemented using JavaScript and dynamic HTML, and a server-side or web application component, which is typically implemented on an application server.
The client-side component captures keystrokes, transmits messages to and from the server, and displays the results in the user's web browser. The server-side component processes incoming requests and forwards keyboard events to a secure shell client that communicates with the connected SSH server. Terminal output is either passed to the client, where it is converted into HTML using JavaScript, or it is translated into HTML by the server before it is transmitted to the client.
Web-based SSH servers can use either client-side or server-side terminal emulation.
Client-side terminal emulation transmits the raw terminal output from the SSH server directly to the client, which has the advantage of offloading the process of translating terminal output into HTML onto the client. However, it can be limited by the capabilities of JavaScript and can use a significant amount of the client's CPU and memory.
An example of a client-side terminal emulator is vt100.js.[1]
Server-side terminal emulation keeps track of the terminal screen and state in memory and converts it to HTML when a screen update occurs or when the client requests an update. This method has the advantage of keeping the state of the terminal persistent even if the user connects to their existing session from a different web browser, but it can use more CPU and memory on the server.
An example of a server-side terminal emulator is terminal.py.[2]
The main advantages of web-based SSH can be summarized as follows:
The following issues have to be considered and are important when using a web-based SSH client:
|
https://en.wikipedia.org/wiki/Web-based_SSH
|
Broadcast encryption is the cryptographic problem of delivering encrypted content (e.g. TV programs or data on DVDs) over a broadcast channel in such a way that only qualified users (e.g. subscribers who have paid their fees or DVD players conforming to a specification) can decrypt the content.[1][2][3] The challenge arises from the requirement that the set of qualified users can change in each broadcast emission, and therefore revocation of individual users or user groups should be possible using broadcast transmissions only, without affecting any remaining users. As efficient revocation is the primary objective of broadcast encryption, solutions are also referred to as revocation schemes.[4][5][6]
Rather than directly encrypting the content for qualified users, broadcast encryption schemes distribute keying information that allows qualified users to reconstruct the content encryption key, whereas revoked users find insufficient information to recover the key.[1] The typical setting considered is that of a unidirectional broadcaster and stateless users (i.e., users do not keep records of previous messages by the broadcaster), which is especially challenging.[4] In contrast, the scenario where users are supported with a bi-directional communication link with the broadcaster and thus can more easily maintain their state, and where users are not only dynamically revoked but also added (joined), is often referred to as multicast encryption.[7]
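The simplest (and least space-efficient) construction illustrates the idea: the broadcaster wraps the content key once per qualified user, and revokes a user by simply omitting their entry from the next header. This toy sketch uses XOR with a per-user key as a stand-in for a real cipher; real schemes are far more compact:

```python
def make_header(content_key: bytes, qualified_keys: dict) -> dict:
    """Wrap content_key under each qualified user's key.
    Revoked users are simply left out of qualified_keys."""
    return {uid: bytes(a ^ b for a, b in zip(content_key, key))
            for uid, key in qualified_keys.items()}

def recover(header: dict, uid: str, user_key: bytes) -> bytes:
    """A qualified user unwraps their entry; a revoked user has no
    entry in the header and hence no way to recover the key."""
    wrapped = header[uid]
    return bytes(a ^ b for a, b in zip(wrapped, user_key))
```

The header here grows linearly with the number of qualified users, which is exactly the overhead that tree-based schemes such as subset difference are designed to avoid.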
The problem of practical broadcast encryption was first formally studied by Amos Fiat and Moni Naor in 1994.[1] Since then, several solutions have been described in the literature, including combinatorial constructions, one-time revocation schemes based on secret sharing techniques, and tree-based constructions.[2] In general, they offer various trade-offs between the increase in the size of the broadcast, the number of keys that each user needs to store, and the feasibility of an unqualified user or a collusion of unqualified users being able to decrypt the content. Luby and Staddon have used a combinatorial approach to study the trade-offs for some general classes of broadcast encryption algorithms.[3] A particularly efficient tree-based construction is the "subset difference" scheme, which is derived from a class of so-called subset cover schemes.[4] The subset difference scheme is notably implemented in the AACS for HD DVD and Blu-ray Disc encryption. A rather simple broadcast encryption scheme is used for the CSS for DVD encryption.
The problem of rogue users sharing their decryption keys or the decrypted content with unqualified users is mathematically insoluble. Traitor tracing algorithms aim to minimize the damage by retroactively identifying the user or users who leaked their keys, so that punitive measures, legal or otherwise, may be undertaken.[8][4] In practice, pay TV systems often employ set-top boxes with tamper-resistant smart cards that impose physical restraints on a user learning their own decryption keys. Some broadcast encryption schemes, such as AACS, also provide tracing capabilities.[9]
|
https://en.wikipedia.org/wiki/Broadcast_encryption
|
Distributed key generation (DKG) is a cryptographic process in which multiple parties contribute to the calculation of a shared public and private key set. Unlike most public key encryption models, distributed key generation does not rely on trusted third parties.[1] Instead, the participation of a threshold of honest parties determines whether a key pair can be computed successfully.[2] Distributed key generation prevents single parties from having access to a private key. The involvement of many parties requires distributed key generation to ensure secrecy in the presence of malicious contributions to the key calculation.[1]
Distributed key generation is commonly used to decrypt shared ciphertexts or create group digital signatures.[2]
The distributed key generation protocol was first specified by Torben Pedersen in 1991. This first model depended on the security of the Joint-Feldman protocol for verifiable secret sharing during the secret sharing process.[3]
In 1999, Rosario Gennaro, Stanislaw Jarecki, Hugo Krawczyk, and Tal Rabin produced a series of security proofs demonstrating that Feldman verifiable secret sharing was vulnerable to malicious contributions to Pedersen's distributed key generator that would leak information about the shared private key.[4][5] The same group also proposed an updated distributed key generation scheme preventing malicious contributions from impacting the value of the private key.
The distributed key generation protocol specified by Gennaro, Jarecki, Krawczyk, and Rabin assumes that a group of players has already been established by an honest party prior to the key generation. It also assumes the communication between parties is synchronous.[5]
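Stripped of the verifiable-secret-sharing checks that these protocols add to catch malicious dealers, the core idea is that each party Shamir-shares its own random contribution and sums the shares it receives; the joint secret (the sum of all contributions) is never held by any single party. A toy sketch over a small prime field (the prime, threshold, and party count are illustrative, and no commitments or complaint rounds are modelled):

```python
import random

P = 2**127 - 1  # a Mersenne prime; real systems work in a group of known prime order

def shamir_shares(secret, t, ids, rng):
    """Random degree-(t-1) polynomial with constant term `secret`,
    evaluated at each party id."""
    coeffs = [secret] + [rng.randrange(P) for _ in range(t - 1)]
    return {i: sum(c * pow(i, k, P) for k, c in enumerate(coeffs)) % P
            for i in ids}

def dkg(n, t, rng):
    """Each party deals a random contribution to every other party;
    party i's key share is the sum of the shares dealt to it."""
    ids = range(1, n + 1)
    contributions = [rng.randrange(P) for _ in ids]
    dealt = [shamir_shares(s, t, ids, rng) for s in contributions]
    key_shares = {i: sum(d[i] for d in dealt) % P for i in ids}
    joint_secret = sum(contributions) % P  # returned here only for checking
    return key_shares, joint_secret

def reconstruct(shares):
    """Lagrange interpolation at x = 0 from a dict {id: share}."""
    total = 0
    for i, s in shares.items():
        num = den = 1
        for j in shares:
            if j != i:
                num = num * (-j) % P
                den = den * (i - j) % P
        total = (total + s * num * pow(den, -1, P)) % P
    return total
```

Because the sum of degree-(t-1) polynomials is again a degree-(t-1) polynomial, any t of the summed key shares interpolate to the joint secret.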
In 2009, Aniket Kate and Ian Goldberg presented a distributed key generation protocol suitable for use over the Internet.[6] Unlike earlier constructions, this protocol does not require a broadcast channel or the synchronous communication assumption, and a ready-to-use library is available.
In many circumstances, a robust distributed key generator is necessary. Robust generator protocols can reconstruct public keys in order to remove malicious shares even if malicious parties still remain in the qualified group during the reconstruction phase.[5] For example, robust multi-party digital signatures can tolerate a number of malicious users roughly proportionate to the length of the modulus used during key generation.[7]
Distributed key generators can implement a sparse evaluation matrix in order to improve efficiency during verification stages. Sparse evaluation can improve run time from O(nt) (where n is the number of parties and t is the threshold of malicious users) to O(log³ n). Instead of robust verification, sparse evaluation requires that a small set of the parties verify a small, randomly picked set of shares. This results in a small probability that the key generation will fail in the case that a large number of malicious shares are not chosen for verification.[8]
Distributed key generation and distributed key cryptography are rarely applied over the internet because of the reliance on synchronous communication.[5]
Distributed key cryptography is useful in key escrow services where a company can meet a threshold to decrypt a ciphertext version of a private key. This way a company can require multiple employees to recover a private key without giving the escrow service a plaintext copy.[1]
Distributed key generation is also useful in server-side password authentication. If password hashes are stored on a single server, a breach of the server would result in all the password hashes being available for attackers to analyze offline. Variations of distributed key generation can authenticate user passwords across multiple servers and eliminate single points of failure.[9][10]
Distributed key generation is more commonly used for group digital signatures. This acts as a form of voting, where a threshold of group members would have to participate in order for the group to digitally sign a document.[2]
|
https://en.wikipedia.org/wiki/Distributed_key_generation
|
Secret sharing (also called secret splitting) refers to methods for distributing a secret among a group, in such a way that no individual holds any intelligible information about the secret, but when a sufficient number of individuals combine their 'shares', the secret may be reconstructed. Whereas insecure secret sharing allows an attacker to gain more information with each share, secure secret sharing is 'all or nothing' (where 'all' means the necessary number of shares).
In one type of secret sharing scheme there is one dealer and n players. The dealer gives a share of the secret to the players, but only when specific conditions are fulfilled will the players be able to reconstruct the secret from their shares. The dealer accomplishes this by giving each player a share in such a way that any group of t (for threshold) or more players can together reconstruct the secret but no group of fewer than t players can. Such a system is called a (t, n)-threshold scheme (sometimes it is written as an (n, t)-threshold scheme).
Secret sharing was invented independently by Adi Shamir[1] and George Blakley[2] in 1979.
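Shamir's construction realises a (t, n)-threshold scheme with polynomial interpolation over a finite field; a toy sketch (the prime here is illustrative — any prime larger than both the secret and n works):

```python
import random

def split(secret: int, t: int, n: int, prime: int, rng) -> list:
    """Hide `secret` as the constant term of a random degree-(t-1)
    polynomial, handing out its value at x = 1..n as the shares."""
    coeffs = [secret] + [rng.randrange(prime) for _ in range(t - 1)]
    def eval_at(x):
        return sum(c * pow(x, k, prime) for k, c in enumerate(coeffs)) % prime
    return [(x, eval_at(x)) for x in range(1, n + 1)]

def combine(shares: list, prime: int) -> int:
    """Lagrange-interpolate the polynomial at x = 0; any t shares suffice."""
    secret = 0
    for x_i, y_i in shares:
        num = den = 1
        for x_j, _ in shares:
            if x_j != x_i:
                num = num * (-x_j) % prime
                den = den * (x_i - x_j) % prime
        secret = (secret + y_i * num * pow(den, -1, prime)) % prime
    return secret
```

With t = 3 and n = 5, any three of the five (x, y) pairs recover the secret, while any two leave every possible secret equally consistent with what is held.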
Secret sharing schemes are ideal for storing information that is highly sensitive and highly important. Examples include: encryption keys, missile launch codes, and numbered bank accounts. Each of these pieces of information must be kept highly confidential, as their exposure could be disastrous; however, it is also critical that they should not be lost. Traditional methods for encryption are ill-suited for simultaneously achieving high levels of confidentiality and reliability. This is because when storing the encryption key, one must choose between keeping a single copy of the key in one location for maximum secrecy, or keeping multiple copies of the key in different locations for greater reliability. Increasing reliability of the key by storing multiple copies lowers confidentiality by creating additional attack vectors; there are more opportunities for a copy to fall into the wrong hands. Secret sharing schemes address this problem, and allow arbitrarily high levels of confidentiality and reliability to be achieved.[3]
Secret sharing also allows the distributor of the secret to trust a group 'in aggregate'. Traditionally, giving a secret to a group for safekeeping would require that the distributor completely trust all members of the group. Secret sharing schemes allow the distributor to securely store the secret with the group even if not all members can be trusted all the time. So long as the number of traitors is never more than the critical number needed to reconstruct the secret, the secret is safe.
Secret sharing schemes are important incloud computingenvironments. Thus a key can be distributed over many servers by a threshold secret sharing mechanism. The key is then reconstructed when needed.
Secret sharing has also been suggested[by whom?] for sensor networks where the links are liable to be tapped, by sending the data in shares which makes the task of the eavesdropper harder.[citation needed] The security in such environments can be made greater by continuous changing[how?] of the way the shares are constructed.[citation needed]
A secure secret sharing scheme distributes shares so that anyone with fewer than t shares has no more information about the secret than someone with 0 shares.
Consider for example the secret sharing scheme in which the secret phrase "password" is divided into the shares "pa––––––", "––ss––––", "––––wo––", and "––––––rd". A person with 0 shares knows only that the password consists of eight letters, and thus would have to guess the password from 26^8 = 208 billion possible combinations. A person with one share, however, would have to guess only the six remaining letters, from 26^6 = 308 million combinations, and so on as more persons collude. Consequently, this system is not a "secure" secret sharing scheme, because a player with fewer than t secret shares is able to reduce the problem of obtaining the inner secret without first needing to obtain all of the necessary shares.
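The leak is easy to quantify by counting the guesses remaining per number of colluding shares:

```python
ALPHABET = 26          # lowercase letters
TOTAL_LETTERS = 8      # "password"
LETTERS_PER_SHARE = 2  # each share reveals two letters

def guesses_remaining(shares_held: int) -> int:
    """Brute-force work left once `shares_held` two-letter shares are known."""
    unknown = TOTAL_LETTERS - LETTERS_PER_SHARE * shares_held
    return ALPHABET ** unknown
```

Zero shares leave 26^8 = 208,827,064,576 candidates (the "208 billion" above); one share already cuts this to 26^6 = 308,915,776.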
In contrast, consider the secret sharing scheme where X is the secret to be shared, P_i are public asymmetric encryption keys and Q_i their corresponding private keys. Each player j is provided with {P_1(P_2(...(P_N(X)))), Q_j}. In this scheme, any player with private key 1 can remove the outer layer of encryption, a player with keys 1 and 2 can remove the first and second layer, and so on. A player with fewer than N keys can never fully reach the secret X without first needing to decrypt a public-key-encrypted blob for which he does not have the corresponding private key – a problem that is currently believed to be computationally infeasible. Additionally we can see that any user with all N private keys is able to decrypt all of the outer layers to obtain X, the secret, and consequently this system is a secure secret distribution system.
Several secret-sharing schemes are said to be information-theoretically secure and can be proven to be so, while others give up this unconditional security for improved efficiency while maintaining enough security to be considered as secure as other common cryptographic primitives. For example, they might allow secrets to be protected by shares with 128 bits of entropy each, since each share would be considered enough to stymie any conceivable present-day adversary, requiring a brute force attack of average size 2^127.
Common to all unconditionally secure secret sharing schemes, there are limitations:
Note: n is the total number of 'players', among whom the shares are distributed, and t is the minimum number of players required to reveal the secret.
t = 1 secret sharing is trivial. The secret can simply be distributed to all n participants.
There are several (t, n) secret-sharing schemes with t = n, in which all shares are necessary to recover the secret – for example, schemes based on XOR-ing the secret with random values, or on adding random values to it modulo some fixed modulus.
The difficulty lies in creating schemes that are still secure but do not require all n shares.
When space efficiency is not a concern, trivial t = n schemes can be used to reveal a secret to any desired subsets of the players simply by applying the scheme once for each subset. For example, to reveal a secret s to any two of the three players Alice, Bob and Carol, create three (one per pair, since C(3, 2) = 3) different t = n = 2 sharings of s, giving the two shares of each sharing to Alice and Bob, Alice and Carol, and Bob and Carol respectively.
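The trivial t = n construction and the subset trick above can be sketched as follows (a minimal illustration using XOR shares; the names and values are illustrative):

```python
import secrets
from functools import reduce
from itertools import combinations

def split_xor(secret: bytes, n: int) -> list[bytes]:
    # Trivial t = n scheme: n - 1 random shares, plus one final share equal
    # to the XOR of the secret with all of the random shares.
    shares = [secrets.token_bytes(len(secret)) for _ in range(n - 1)]
    last = reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), shares, secret)
    return shares + [last]

def join_xor(shares: list[bytes]) -> bytes:
    # XOR of ALL shares reproduces the secret; any proper subset is random.
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), shares)

s = b"formula"
players = ["Alice", "Bob", "Carol"]

# Subset trick for 2-of-3: one independent t = n = 2 split per pair.
pair_shares = {pair: split_xor(s, 2) for pair in combinations(players, 2)}

# Any authorized pair pools its two shares from that pair's split:
assert join_xor(pair_shares[("Alice", "Carol")]) == s
```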
For example, imagine that the board of directors of a company would like to protect their secret formula. The president of the company should be able to access the formula when needed, but in an emergency any 3 of the 12 board members would be able to unlock the secret formula together. One of the ways this can be accomplished is by a secret-sharing scheme with t = 3 and n = 15, where 3 shares are given to the president and one share is given to each board member.
The trivial approach quickly becomes impractical as the number of subsets increases, for example when revealing a secret to any 50 of 100 players, which would require C(100, 50) ≈ 1.009 × 10^29 schemes to be created, and each player to maintain C(99, 49) ≈ 5.04 × 10^28 distinct shares, one for each scheme in which that player participates. In the worst case, the increase is exponential. This has led to the search for schemes that allow secrets to be shared efficiently with a threshold of players.
In this scheme, any t out of n shares may be used to recover the secret. The system relies on the idea that one can construct a unique polynomial of degree t − 1 such that each of t given points lies on the polynomial. It takes two points to define a straight line, three points to fully define a quadratic, four points to define a cubic curve, and so on. That is, it takes t points to define a polynomial of degree t − 1. The method is to create a polynomial of degree t − 1 with the secret as the first coefficient and the remaining coefficients picked at random. Next find n points on the curve and give one to each of the players. When at least t out of the n players reveal their points, there is sufficient information to fit a (t − 1)th degree polynomial to them, the first coefficient being the secret.
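A minimal sketch of this polynomial construction over a prime field (the prime, the parameters and the helper names are illustrative choices, not part of the scheme's definition):

```python
import random

P = 2**61 - 1  # an illustrative Mersenne prime; the field just needs P > n

def make_shares(secret: int, t: int, n: int) -> list[tuple[int, int]]:
    # Random polynomial of degree t - 1 with the secret as constant term.
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    def f(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def recover(shares: list[tuple[int, int]]) -> int:
    # Lagrange interpolation evaluated at x = 0 yields the constant term.
    total = 0
    for xj, yj in shares:
        num, den = 1, 1
        for xm, _ in shares:
            if xm != xj:
                num = num * xm % P
                den = den * (xm - xj) % P
        total = (total + yj * num * pow(den, -1, P)) % P
    return total

shares = make_shares(12345, t=3, n=6)
assert recover(shares[:3]) == 12345   # any 3 of the 6 shares suffice
assert recover(shares[2:5]) == 12345
```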
Two nonparallel lines in the same plane intersect at exactly one point. Three nonparallel planes in space intersect at exactly one point. More generally, any n nonparallel (n − 1)-dimensional hyperplanes intersect at a specific point. The secret may be encoded as any single coordinate of the point of intersection. If the secret is encoded using all the coordinates, even if they are random, then an insider (someone in possession of one or more of the (n − 1)-dimensional hyperplanes) gains information about the secret since he knows it must lie on his plane. If an insider can gain any more knowledge about the secret than an outsider can, then the system no longer has information-theoretic security. If only one of the n coordinates is used, then the insider knows no more than an outsider (i.e., that the secret must lie on the x-axis for a 2-dimensional system). Each player is given enough information to define a hyperplane; the secret is recovered by calculating the planes' point of intersection and then taking a specified coordinate of that intersection.
Blakley's scheme is less space-efficient than Shamir's; while Shamir's shares are each only as large as the original secret, Blakley's shares are t times larger, where t is the threshold number of players. Blakley's scheme can be tightened by adding restrictions on which planes are usable as shares. The resulting scheme is equivalent to Shamir's polynomial system.
The Chinese remainder theorem can also be used in secret sharing, for it provides a method to uniquely determine a number S modulo k pairwise coprime integers m_1, m_2, ..., m_k, given that S < m_1·m_2·...·m_k. There are two secret sharing schemes that make use of the Chinese remainder theorem: Mignotte's and Asmuth–Bloom's. Both are threshold secret sharing schemes, in which the shares are generated by reduction modulo the integers m_i, and the secret is recovered by essentially solving the system of congruences using the Chinese remainder theorem.
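The reconstruction step can be sketched with a plain CRT solver (the moduli and values below are illustrative; Mignotte's and Asmuth–Bloom's schemes impose further conditions on how the moduli are chosen):

```python
from math import prod

def crt(residues, moduli):
    # Solve x ≡ r_i (mod m_i) for pairwise coprime moduli m_i.
    M = prod(moduli)
    x = 0
    for r, m in zip(residues, moduli):
        Mi = M // m
        x += r * Mi * pow(Mi, -1, m)   # pow(Mi, -1, m) is Mi's inverse mod m
    return x % M

moduli = [97, 101, 103]     # pairwise coprime, illustrative
secret = 424242             # must satisfy secret < 97 * 101 * 103 = 1009091
shares = [secret % m for m in moduli]

assert crt(shares, moduli) == secret
```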
If the players store their shares on insecure computer servers, an attacker could break in and steal the shares. If it is not practical to change the secret, the uncompromised (Shamir-style) shares can be renewed. The dealer generates a new random polynomial with constant term zero and calculates for each remaining player a new ordered pair, where the x-coordinates of the old and new pairs are the same. Each player then adds the old and new y-coordinates to each other and keeps the result as the new y-coordinate of the secret.
All of the non-updated shares the attacker accumulated become useless. An attacker can only recover the secret if he can find enough other non-updated shares to reach the threshold. This situation should not happen because the players deleted their old shares. Additionally, an attacker cannot recover any information about the original secret from the update files because they contain only random information.
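The refresh step can be sketched as follows, assuming Shamir-style shares over a small illustrative prime field (all names and parameters here are illustrative):

```python
import random

q = 7919  # a small illustrative prime field

def eval_poly(coeffs, x):
    return sum(c * pow(x, i, q) for i, c in enumerate(coeffs)) % q

def interp_zero(points):
    # Lagrange interpolation at x = 0 recovers the constant term.
    total = 0
    for xj, yj in points:
        num, den = 1, 1
        for xm, _ in points:
            if xm != xj:
                num = num * xm % q
                den = den * (xm - xj) % q
        total = (total + yj * num * pow(den, -1, q)) % q
    return total

secret, t, n = 1234, 3, 5
poly = [secret] + [random.randrange(q) for _ in range(t - 1)]
shares = {x: eval_poly(poly, x) for x in range(1, n + 1)}

# Refresh: a new random polynomial with constant term ZERO is distributed,
# and each player adds the update to their old y-coordinate.
update = [0] + [random.randrange(q) for _ in range(t - 1)]
new_shares = {x: (y + eval_poly(update, x)) % q for x, y in shares.items()}

# Any t refreshed shares still reconstruct the same secret:
assert interp_zero(list(new_shares.items())[:t]) == secret
# A mix of old and new shares generally does not, so old shares must be deleted.
```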
The dealer can change the threshold number while distributing updates, but must remain vigilant about players keeping expired shares.
A player might lie about his own share to gain access to other shares. A verifiable secret sharing (VSS) scheme allows players to be certain that no other players are lying about the contents of their shares, up to a reasonable probability of error. Such schemes cannot be computed conventionally; the players must collectively add and multiply numbers without any individual knowing what exactly is being added and multiplied. Tal Rabin and Michael Ben-Or devised a multiparty computation (MPC) system that allows players to detect dishonesty on the part of the dealer or on the part of up to one third of the threshold number of players, even if those players are coordinated by an "adaptive" attacker who can change strategies in real time depending on what information has been revealed.
The disadvantage of unconditionally secure secret sharing schemes is that the storage and transmission of the shares requires an amount of storage and bandwidth resources equivalent to the size of the secret times the number of shares. If the size of the secret were significant, say 1 GB, and the number of shares were 10, then 10 GB of data must be stored by the shareholders. Alternate techniques have been proposed for greatly increasing the efficiency of secret sharing schemes, by giving up the requirement of unconditional security.
One of these techniques, known as secret sharing made short,[4] combines Rabin's information dispersal algorithm[5] (IDA) with Shamir's secret sharing. Data is first encrypted with a randomly generated key, using a symmetric encryption algorithm. Next this data is split into N pieces using Rabin's IDA. This IDA is configured with a threshold, in a manner similar to secret sharing schemes, but unlike secret sharing schemes the size of the resulting data grows only by a factor of (number of fragments / threshold). For example, if the threshold were 10 and the number of IDA-produced fragments were 15, the total size of all the fragments would be (15/10), or 1.5, times the size of the original input. In this case, this scheme is 10 times more efficient than if Shamir's scheme had been applied directly to the data. The final step in secret sharing made short is to use Shamir secret sharing to produce shares of the randomly generated symmetric key (which is typically on the order of 16–32 bytes) and then give one share and one fragment to each shareholder.
A related approach, known as AONT-RS,[6] applies an all-or-nothing transform to the data as a pre-processing step to an IDA. The all-or-nothing transform guarantees that any number of shares less than the threshold is insufficient to decrypt the data.
An information-theoretically secure k-of-n secret-sharing scheme generates n shares, each of size at least that of the secret itself, leading to the total required storage being at least n-fold larger than the secret. In multi-secret sharing, designed by Matthew K. Franklin and Moti Yung,[7] multiple points of the polynomial host secrets; the method was found useful in numerous applications from coding to multi-party computations. In space-efficient secret sharing, devised by Abhishek Parakh and Subhash Kak, each share is roughly the size of the secret divided by k − 1.[8]
This scheme makes use of repeated polynomial interpolation and has potential applications in secure information dispersal on the Web and in sensor networks. This method is based on data partitioning involving the roots of a polynomial in a finite field.[9] Some vulnerabilities of related space-efficient secret sharing schemes were pointed out later.[10] They show that a scheme based on the interpolation method cannot be used to implement a (k, n) scheme when the k secrets to be distributed are inherently generated from a polynomial of degree less than k − 1, and the scheme does not work if all of the secrets to be shared are the same, etc.[11]
A secret-sharing scheme can secure a secret over multiple servers and remain recoverable despite multiple server failures. The dealer may act as several distinct participants, distributing the shares among the participants. Each share may be stored on a different server, but the dealer can recover the secret even if several servers break down as long as it can recover at least t shares; however, crackers that break into one server would still not know the secret as long as fewer than t shares are stored on each server.
This is one of the major concepts behind the Vanish computer project at the University of Washington, where a random key is used to encrypt data, and the key is distributed as a secret across several nodes in a P2P network. In order to decrypt the message, at least t nodes on the network must be accessible; the principle for this particular project being that the number of secret-sharing nodes on the network will decrease naturally over time, therefore causing the secret to eventually vanish. However, the network is vulnerable to a Sybil attack, thus making Vanish insecure.[12]
Any shareholder who ever has enough information to decrypt the content at any point is able to take and store a copy of X. Consequently, although tools and techniques such as Vanish can make data irrecoverable within their own system after a time, it is not possible to force the deletion of data once a malicious user has seen it. This is one of the leading conundrums ofdigital rights management.
A dealer could send t shares, all of which are necessary to recover the original secret, to a single recipient. An attacker would have to intercept all t shares to recover the secret, a task which is more difficult than intercepting a single file, especially if the shares are sent using different media (e.g. some over the Internet, some mailed on CDs).
For large secrets, it may be more efficient to encrypt the secret and then distribute the key using secret sharing.
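A sketch of this hybrid approach, with a toy XOR keystream standing in for a real symmetric cipher such as AES, and a trivial 2-of-2 XOR split standing in for a general secret-sharing scheme (all names and sizes are illustrative):

```python
import hashlib, secrets

def stream_cipher(data: bytes, key: bytes) -> bytes:
    # Toy XOR keystream; a real deployment would use AES-GCM or similar.
    blocks = [hashlib.sha256(key + i.to_bytes(8, "big")).digest()
              for i in range((len(data) + 31) // 32)]
    stream = b"".join(blocks)
    return bytes(a ^ b for a, b in zip(data, stream))

large_secret = b"x" * 100_000        # stand-in for a bulky payload
key = secrets.token_bytes(16)        # only this small key is secret-shared

ciphertext = stream_cipher(large_secret, key)

# Trivial 2-of-2 XOR split of the key (stand-in for any sharing scheme);
# the bulky ciphertext itself can be stored anywhere.
share1 = secrets.token_bytes(16)
share2 = bytes(a ^ b for a, b in zip(key, share1))

recovered_key = bytes(a ^ b for a, b in zip(share1, share2))
assert recovered_key == key
assert stream_cipher(ciphertext, recovered_key) == large_secret
```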
Secret sharing is an important primitive in several protocols for secure multiparty computation.
Secret sharing can also be used for user authentication in a system.[13]
|
https://en.wikipedia.org/wiki/Secret_sharing
|
Shamir's secret sharing (SSS) is an efficient secret sharing algorithm for distributing private information (the "secret") among a group. The secret cannot be revealed unless a minimum number of the group's members act together to pool their knowledge. To achieve this, the secret is mathematically divided into parts (the "shares") from which the secret can be reassembled only when a sufficient number of shares are combined. SSS has the property of information-theoretic security, meaning that even if an attacker steals some shares, it is impossible for the attacker to reconstruct the secret unless they have stolen a sufficient number of shares.
Shamir's secret sharing is used in some applications to share the access keys to a master secret.
SSS is used to secure a secret in a distributed form, most often to secure encryption keys. The secret is split into multiple shares, which individually do not give any information about the secret.
To reconstruct a secret secured by SSS, a number of shares is needed, called the threshold. No information about the secret can be gained from any number of shares below the threshold (a property called perfect secrecy). In this sense, SSS is a generalisation of the one-time pad (which can be viewed as SSS with a two-share threshold and two shares in total).[1]
A company needs to secure their vault. If a single person knows the code to the vault, the code might be lost or unavailable when the vault needs to be opened. If there are several people who know the code, they may not trust each other to always act honestly.
SSS can be used in this situation to generate shares of the vault's code which are distributed to authorized individuals in the company. The minimum threshold and number of shares given to each individual can be selected such that the vault is accessible only by (groups of) authorized individuals. If fewer shares than the threshold are presented, the vault cannot be opened.
By accident, coercion or as an act of opposition, some individuals might present incorrect information for their shares. If the total of correct shares fails to meet the minimum threshold, the vault remains locked.
Shamir's secret sharing can be used to
SSS has useful properties, but also weaknesses[5] that mean it is unsuited to some uses.
Useful properties include:
Weaknesses include:
Adi Shamir, an Israeli scientist, first formulated the scheme in 1979.[6]
The scheme exploits the Lagrange interpolation theorem, specifically that k points uniquely determine a polynomial of degree less than or equal to k − 1. For instance, 2 points are sufficient to define a line, 3 points are sufficient to define a parabola, 4 points to define a cubic curve, and so forth.
Shamir's secret sharing is an ideal and perfect (k, n)-threshold scheme based on polynomial interpolation over finite fields. In such a scheme, the aim is to divide a secret S (for example, the combination to a safe) into n pieces of data S_1, …, S_n (known as shares) in such a way that:
If n = k, then all of the shares are needed to reconstruct the secret S.
Assume that the secret S can be represented as an element a_0 of a finite field GF(q) (where q is greater than the number n of shares being generated). Randomly choose k − 1 elements a_1, …, a_{k−1} from GF(q) and construct the polynomial f(x) = a_0 + a_1 x + a_2 x^2 + a_3 x^3 + ⋯ + a_{k−1} x^{k−1}. Compute any n points on the curve, for instance set i = 1, …, n to find the points (i, f(i)). Every participant is given a point (a non-zero input to the polynomial, and the corresponding output).[7] Given any subset of k of these pairs, a_0 can be obtained using interpolation, with one possible formula for doing so being a_0 = f(0) = Σ_{j=0}^{k−1} y_j ∏_{m=0, m≠j}^{k−1} x_m / (x_m − x_j), where the list of points on the polynomial is given as k pairs of the form (x_i, y_i). Note that f(0) is equal to the first coefficient of the polynomial f(x).
The following example illustrates the basic idea. Note, however, that calculations in the example are done using integer arithmetic rather than finite field arithmetic, to make the idea easier to understand. Therefore the example below does not provide perfect secrecy and is not a proper example of Shamir's scheme. The next example will explain the problem.
Suppose that the secret to be shared is 1234 (S = 1234).
In this example, the secret will be split into 6 shares (n = 6), where any subset of 3 shares (k = 3) is sufficient to reconstruct the secret. k − 1 = 2 numbers are taken at random. Let them be 166 and 94.
The polynomial to produce secret shares (points) is therefore: f(x) = 1234 + 166x + 94x^2
Six points D_{x−1} = (x, f(x)) from the polynomial are constructed as: D_0 = (1, 1494); D_1 = (2, 1942); D_2 = (3, 2578); D_3 = (4, 3402); D_4 = (5, 4414); D_5 = (6, 5614)
Each participant in the scheme receives a different point (a pair of x and f(x)). Because D_{x−1} is used instead of D_x, the points start from (1, f(1)) and not (0, f(0)). This is necessary because f(0) is the secret.
In order to reconstruct the secret, any 3 points are sufficient.
Consider using the 3 points (x_0, y_0) = (2, 1942); (x_1, y_1) = (4, 3402); (x_2, y_2) = (5, 4414).
Computing the Lagrange basis polynomials:
ℓ_0(x) = ((x − 4)(x − 5)) / ((2 − 4)(2 − 5)) = (1/6)x^2 − (3/2)x + 10/3
ℓ_1(x) = ((x − 2)(x − 5)) / ((4 − 2)(4 − 5)) = −(1/2)x^2 + (7/2)x − 5
ℓ_2(x) = ((x − 2)(x − 4)) / ((5 − 2)(5 − 4)) = (1/3)x^2 − 2x + 8/3
Using the formula for polynomial interpolation, f(x) is: f(x) = y_0·ℓ_0(x) + y_1·ℓ_1(x) + y_2·ℓ_2(x) = 1942·ℓ_0(x) + 3402·ℓ_1(x) + 4414·ℓ_2(x) = 1234 + 166x + 94x^2
Recalling that the secret is the free coefficient, this means that S = 1234, and the secret has been recovered.
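The reconstruction can be verified with exact rational arithmetic (a quick check of the article's numbers):

```python
from fractions import Fraction

points = [(2, 1942), (4, 3402), (5, 4414)]

# Lagrange interpolation evaluated at x = 0 yields the free coefficient.
secret = Fraction(0)
for xj, yj in points:
    term = Fraction(yj)
    for xm, _ in points:
        if xm != xj:
            term *= Fraction(0 - xm, xj - xm)
    secret += term

assert secret == 1234
```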
Using polynomial interpolation to find a coefficient in a source polynomial S = f(0) using Lagrange polynomials is not efficient, since unused constants are calculated.
Considering this, an optimized formula to use Lagrange polynomials to find f(0) is defined as follows: f(0) = Σ_{j=0}^{k−1} y_j ∏_{m=0, m≠j}^{k−1} x_m / (x_m − x_j)
Although the simplified version of the method demonstrated above, which uses integer arithmetic rather than finite field arithmetic, works, there is a security problem: Eve gains information about S with every D_i that she finds.
Suppose that she finds the 2 points D_0 = (1, 1494) and D_1 = (2, 1942). She still does not have k = 3 points, so in theory she should not have gained any more information about S. But she could combine the information from the 2 points with the public information: n = 6, k = 3, f(x) = a_0 + a_1 x + ⋯ + a_{k−1} x^{k−1}, a_0 = S, a_i ∈ ℤ. Doing so, Eve could perform the following algebra: from f(1) = a_0 + a_1 + a_2 = 1494 and f(2) = a_0 + 2a_1 + 4a_2 = 1942, eliminating a_0 gives a_1 + 3a_2 = 448, so a_1 = 448 − 3a_2 and S = a_0 = 1494 − a_1 − a_2 = 1046 + 2a_2. In particular, S must be even.
The above attack exploits constraints on the values that the polynomial may take by virtue of how it was constructed: the polynomial must have coefficients that are integers, and the polynomial must take an integer value when evaluated at each of the coordinates used in the scheme. This reduces its possible values at unknown points, including the resultant secret, given fewer than k shares.
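The leak can be made concrete by enumerating coefficient values consistent with Eve's two points (an illustrative brute force over a small range):

```python
# Eve's two points under integer arithmetic give two linear equations:
#   f(1) = S + a1 + a2     = 1494
#   f(2) = S + 2*a1 + 4*a2 = 1942
# Subtracting: a1 + 3*a2 = 448, hence S = 1046 + 2*a2 — always even.
candidates = set()
for a2 in range(300):                   # small illustrative range
    a1 = 448 - 3 * a2
    S = 1494 - a1 - a2
    assert S + 2 * a1 + 4 * a2 == 1942  # consistent with both points
    candidates.add(S)

assert all(S % 2 == 0 for S in candidates)  # the parity of S has leaked
assert 1234 in candidates                   # the true secret is among them
```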
This problem can be remedied by using finite field arithmetic. A finite field always has size q = p^r, where p is a prime and r is a positive integer. The size q of the field must satisfy q > n, and q must also be greater than the number of possible values for the secret, though the latter condition may be circumvented by splitting the secret into smaller secret values and applying the scheme to each of these. In the example below, a prime field is used (i.e. r = 1). The figure shows a polynomial curve over a finite field.
In practice this is only a small change. The order q of the field (i.e. the number of values that it has) must be chosen to be greater than the number of participants and the number of values that the secret a_0 = S may take. All calculations involving the polynomial must also be calculated over the field (mod p in our example, in which p = q is taken to be a prime) instead of over the integers. Both the choice of the field and the mapping of the secret to a value in this field are considered to be publicly known.
For this example, choose p = 1613, so the polynomial becomes f(x) = 1234 + 166x + 94x^2 mod 1613, which gives the points: (1, 1494); (2, 329); (3, 965); (4, 176); (5, 1188); (6, 775)
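The shares and the recovery step for this example can be verified directly (a quick check of the numbers above):

```python
p = 1613

def f(x):
    return (1234 + 166 * x + 94 * x * x) % p

# The six shares listed above:
assert [f(x) for x in range(1, 7)] == [1494, 329, 965, 176, 1188, 775]

# Any three shares recover the secret via Lagrange interpolation at x = 0:
pts = [(2, 329), (4, 176), (6, 775)]
secret = 0
for xj, yj in pts:
    num, den = 1, 1
    for xm, _ in pts:
        if xm != xj:
            num = num * xm % p
            den = den * (xm - xj) % p
    secret = (secret + yj * num * pow(den, -1, p)) % p

assert secret == 1234
```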
This time Eve doesn't gain any information when she finds a D_x (until she has k points).
Suppose again that Eve finds D_0 = (1, 1494) and D_1 = (2, 329), and the public information is: n = 6, k = 3, p = 1613, f(x) = a_0 + a_1 x + ⋯ + a_{k−1} x^{k−1} mod p, a_0 = S, a_i ∈ ℕ. Attempting the previous attack, Eve can derive only that a_1 ≡ 448 − 3a_2 (mod 1613), giving the candidate values [448, 445, 442, …] for a_1 as a_2 increases.
There are p possible values for a_1. She knows that the sequence [448, 445, 442, …] always decreases by 3, so if p were divisible by 3 she could conclude a_1 ∈ [1, 4, 7, …]. However, p is prime, so she cannot conclude this. Thus, using a finite field avoids this possible attack.
Also, even though Eve can conclude that S ≡ 1046 + 2a_2 (mod 1613), it does not provide any additional information, since the "wrapping around" behavior of modular arithmetic prevents the leakage of "S is even", unlike the example with integer arithmetic above.
For purposes of keeping the code clearer, a prime field is used here. In practice, for convenience a scheme constructed using a smaller binary field may be separately applied to small substrings of bits of the secret (e.g. GF(256) for byte-wise application), without loss of security. The strict condition that the size of the field must be larger than the number of shares must still be respected (e.g., if the number of shares could exceed 255, the field GF(256) might be replaced by say GF(65536)).
|
https://en.wikipedia.org/wiki/Shamir%27s_Secret_Sharing
|
Threshold may refer to:
|
https://en.wikipedia.org/wiki/Threshold_(disambiguation)
|
A cryptosystem is a set of cryptographic algorithms that map ciphertexts and plaintexts to each other.[1]
Private-key cryptosystems use the same key for encryption and decryption.
Public-key cryptosystems use a public key for encryption and a private key for decryption.
|
https://en.wikipedia.org/wiki/List_of_cryptosystems
|
The vast majority of the National Security Agency's work on encryption is classified, but from time to time the NSA participates in standards processes or otherwise publishes information about its cryptographic algorithms. The NSA has categorized encryption items into four product types, and algorithms into two suites. The following is a brief and incomplete summary of public knowledge about NSA algorithms and protocols.
A Type 1 Product refers to an NSA endorsed classified or controlled cryptographic item for classified or sensitive U.S. government information, including cryptographic equipment, assembly or component classified or certified by NSA for encrypting and decrypting classified and sensitive national security information when appropriately keyed.[1]
A Type 2 Product refers to NSA-endorsed unclassified cryptographic equipment, assemblies or components for sensitive but unclassified U.S. government information.
Unclassified cryptographic equipment, assembly, or component used, when appropriately keyed, for encrypting or decrypting unclassified sensitive U.S. Government or commercial information, and to protect systems requiring protection mechanisms consistent with standard commercial practices. A Type 3 Algorithm refers to NIST endorsed algorithms, registered and FIPS published, for sensitive but unclassified U.S. government and commercial information.
A Type 4 Algorithm refers to algorithms that are registered by the NIST but are not FIPS published. Unevaluated commercial cryptographic equipment, assemblies, or components that are neither NSA nor NIST certified for any Government usage.
A set of NSA unpublished algorithms that is intended for highly sensitive communication and critical authentication systems.
A set of NSA endorsed cryptographic algorithms for use as an interoperable cryptographic base for both unclassified information and most classified information. Suite B was announced on 16 February 2005, and phased out in 2016.[3]
A set of cryptographic algorithms promulgated by the National Security Agency as a replacement for NSA Suite B Cryptography until post-quantum cryptography standards are promulgated.
In August 2015, the NSA announced that it is planning to transition "in the not distant future" to a new cipher suite that is resistant to quantum attacks. "Unfortunately, the growth of elliptic curve use has bumped up against the fact of continued progress in the research on quantum computing, necessitating a re-evaluation of our cryptographic strategy." The NSA advised: "For those partners and vendors that have not yet made the transition to Suite B algorithms, we recommend not making a significant expenditure to do so at this point but instead to prepare for the upcoming quantum resistant algorithm transition."[4]
|
https://en.wikipedia.org/wiki/NSA_cryptography
|
Attempts, unofficially dubbed the "Crypto Wars", have been made by the United States (US) and allied governments to limit the public's and foreign nations' access to cryptography strong enough to thwart decryption by national intelligence agencies, especially the National Security Agency (NSA).[1][2]
In the early days of the Cold War, the U.S. and its allies developed an elaborate series of export control regulations designed to prevent a wide range of Western technology from falling into the hands of others, particularly the Eastern bloc. All export of technology classed as 'critical' required a license. CoCom was organized to coordinate Western export controls.
Two types of technology were protected: technology associated only with weapons of war ("munitions") and dual-use technology, which also had commercial applications. In the U.S., dual-use technology export was controlled by the Department of Commerce, while munitions were controlled by the State Department. Since in the immediate post-WWII period the market for cryptography was almost entirely military, the encryption technology (techniques as well as equipment and, after computers became important, crypto software) was included as a Category XIII item in the United States Munitions List. The multinational control of the export of cryptography on the Western side of the Cold War divide was done via the mechanisms of CoCom.
By the 1960s, however, financial organizations were beginning to require strong commercial encryption in the rapidly growing field of wired money transfer. The U.S. government's introduction of the Data Encryption Standard in 1975 meant that commercial uses of high-quality encryption would become common, and serious problems of export control began to arise. Generally these were dealt with through case-by-case export license request proceedings brought by computer manufacturers, such as IBM, and by their large corporate customers.
Encryption export controls became a matter of public concern with the introduction of the personal computer. Phil Zimmermann's PGP cryptosystem and its distribution on the Internet in 1991 was the first major 'individual level' challenge to controls on the export of cryptography. The growth of electronic commerce in the 1990s created additional pressure for reduced restrictions.[3] Shortly afterward, Netscape's SSL technology was widely adopted as a method for protecting credit card transactions using public key cryptography.
SSL-encrypted messages used the RC4 cipher with 128-bit keys. U.S. government export regulations would not permit crypto systems using 128-bit keys to be exported.[4] At this stage Western governments had, in practice, a split personality when it came to encryption; policy was made by the military cryptanalysts, who were solely concerned with preventing their 'enemies' acquiring secrets, but that policy was then communicated to commerce by officials whose job was to support industry.
The longest key size allowed for export without individual license proceedings was 40 bits, so Netscape developed two versions of its web browser. The "U.S. edition" had the full 128-bit strength. The "International Edition" had its effective key length reduced to 40 bits by revealing 88 bits of the key in the SSL protocol. Acquiring the 'U.S. domestic' version turned out to be sufficient hassle that most computer users, even in the U.S., ended up with the 'International' version,[5] whose weak 40-bit encryption could be broken in a matter of days using a single personal computer. A similar situation occurred with Lotus Notes for the same reasons.[6]
Legal challenges by Peter Junger and other civil libertarians and privacy advocates, the widespread availability of encryption software outside the U.S., and the perception by many companies that adverse publicity about weak encryption was limiting their sales and the growth of e-commerce led to a series of relaxations in US export controls, culminating in 1996 in President Bill Clinton signing Executive Order 13026,[7] which transferred commercial encryption from the Munitions List to the Commerce Control List. Furthermore, the order stated that "the software shall not be considered or treated as 'technology'" in the sense of the Export Administration Regulations. This order permitted the United States Department of Commerce to implement rules that greatly simplified the export of proprietary and open source software containing cryptography, which it did in 2000.[8]
As of 2009, non-military cryptography exports from the U.S. are controlled by the Department of Commerce's Bureau of Industry and Security.[9] Some restrictions still exist, even for mass market products, particularly with regard to export to "rogue states" and terrorist organizations. Militarized encryption equipment, TEMPEST-approved electronics, custom cryptographic software, and even cryptographic consulting services still require an export license[9](pp. 6–7). Furthermore, encryption registration with the BIS is required for the export of "mass market encryption commodities, software and components with encryption exceeding 64 bits" (75 FR 36494). In addition, other items require a one-time review by or notification to BIS prior to export to most countries.[9] For instance, the BIS must be notified before open-source cryptographic software is made publicly available on the Internet, though no review is required.[10] Export regulations have been relaxed from pre-1996 standards, but are still complex.[9] Other countries, notably those participating in the Wassenaar Arrangement,[11] have similar restrictions.[12]
Until 1996, the government of the United Kingdom withheld export licenses from exporters unless they used weak ciphers or short keys, and generally discouraged practical public cryptography.[13] A debate about cryptography for the NHS brought this out into the open.[13]
The Clipper chip was designed by the NSA in the 1990s for secure landline phones; it implemented encryption with an announced backdoor for the US government.[3] The US government tried to get manufacturers to adopt the chip, but without success. Meanwhile, much stronger software encryption became available worldwide, and academics demonstrated fatal flaws in the chip's backdoor protocol. The effort was finally abandoned by 1996.
A5/1 is a stream cipher used to provide over-the-air communication privacy in the GSM cellular telephone standard.
Security researcher Ross Anderson reported in 1994 that "there was a terrific row between the NATO signal intelligence agencies in the mid-1980s over whether GSM encryption should be strong or not. The Germans said it should be, as they shared a long border with the Warsaw Pact; but the other countries didn't feel this way, and the algorithm as now fielded is a French design."[14]
According to professor Jan Arild Audestad, at the standardization process which started in 1982, A5/1 was originally proposed to have a key length of 128 bits. At that time, 128 bits was projected to be secure for at least 15 years; it is now estimated that 128 bits would in fact also still be secure as of 2014. Audestad, Peter van der Arend, and Thomas Haug say that the British insisted on weaker encryption, with Haug saying he was told by the British delegate that this was to allow the British secret service to eavesdrop more easily. The British proposed a key length of 48 bits, while the West Germans wanted stronger encryption to protect against East German spying, so the compromise became a key length of 56 bits.[15] In general, a 56-bit key is 2^(128−56) = 2^72 ≈ 4.7 × 10^21 times easier to break by brute force than a 128-bit key.
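The ratio quoted above can be checked directly:

```python
# Reproducing the article's arithmetic: a 56-bit key is 2^(128-56) times
# easier to brute-force than a 128-bit key.
ratio = 2 ** (128 - 56)
print(ratio)            # 4722366482869645213696
print(f"{ratio:.1e}")   # 4.7e+21
```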
The widely used DES encryption algorithm was originally planned by IBM to have a key size of 128 bits;[16] the NSA lobbied for a key size of 48 bits. The eventual compromise was a key size of 64 bits, 8 of which were parity bits, for an effective key security parameter of 56 bits.[17] DES was considered insecure as early as 1977,[18] and documents leaked in the 2013 Snowden disclosures show that it was in fact easily crackable by the NSA, but was still recommended by NIST.[19] The DES Challenges were a series of brute-force attack contests created by RSA Security to highlight the lack of security provided by the Data Encryption Standard. As part of the successful cracking of DES-encoded messages, the EFF constructed a specialized DES-cracking computer nicknamed Deep Crack.
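The relationship between the 64-bit key and the 56-bit security parameter can be illustrated with a small sketch. The helper below is hypothetical, written only to show how DES reserves each key byte's least-significant bit as an odd-parity bit, leaving 7 security-relevant bits per byte:

```python
# Illustrative sketch (not part of the article): in DES, each of the 8 key
# bytes uses its low bit for odd parity, so only 56 of 64 bits carry key material.
def set_des_odd_parity(key: bytes) -> bytes:
    """Force each byte's low bit so the byte has an odd number of 1 bits."""
    out = bytearray()
    for b in key:
        high7 = b & 0xFE                     # the 7 security-relevant bits
        ones = bin(high7).count("1")
        out.append(high7 | (ones % 2 == 0))  # low bit set iff the 7 bits have even parity
    return bytes(out)

key = bytes(range(8))
adjusted = set_des_odd_parity(key)
# Every adjusted byte now has odd parity:
assert all(bin(b).count("1") % 2 == 1 for b in adjusted)
print(f"effective key bits: {8 * 7}")  # 56
```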
The successful cracking of DES likely helped gather both political and technical support for more advanced encryption in the hands of ordinary citizens.[20] In 1997, NIST began a competition to select a replacement for DES, resulting in the publication in 2000 of the Advanced Encryption Standard (AES).[21] AES is still considered secure as of 2019, and the NSA considers AES strong enough to protect information classified at the Top Secret level.[22]
Fearing widespread adoption of encryption, the NSA set out to stealthily influence and weaken encryption standards and obtain master keys—either by agreement, by force of law, or by computer network exploitation (hacking).[3][23]
According to The New York Times: "But by 2006, an N.S.A. document notes, the agency had broken into communications for three foreign airlines, one travel reservation system, one foreign government's nuclear department and another's Internet service by cracking the virtual private networks that protected them. By 2010, the Edgehill program, the British counterencryption effort, was unscrambling VPN traffic for 30 targets and had set a goal of an additional 300."[23]
As part of Bullrun, the NSA has also been actively working to "insert vulnerabilities into commercial encryption systems, IT systems, networks, and endpoint communications devices used by targets".[24] The New York Times has reported that the random number generator Dual EC DRBG contains a back door from the NSA, which would allow the NSA to break encryption relying on that random number generator.[25] Even though Dual_EC_DRBG was known to be an insecure and slow random number generator soon after the standard was published, the potential NSA backdoor was found in 2007, and alternative random number generators without these flaws were certified and widely available, RSA Security continued using Dual_EC_DRBG in the company's BSAFE toolkit and Data Protection Manager until September 2013. While RSA Security has denied knowingly inserting a backdoor into BSAFE, it has not given an explanation for the continued use of Dual_EC_DRBG after its flaws became apparent in 2006 and 2007.[26] It was reported on December 20, 2013, that RSA had accepted a payment of $10 million from the NSA to set the random number generator as the default.[27][28] Leaked NSA documents state that their effort was "a challenge in finesse" and that "Eventually, N.S.A. became the sole editor" of the standard.
By 2010, the NSA had developed "groundbreaking capabilities" against encrypted Internet traffic. A GCHQ document warned, however: "These capabilities are among the Sigint community's most fragile, and the inadvertent disclosure of the simple 'fact of' could alert the adversary and result in immediate loss of the capability."[23] Another internal document stated that "there will be NO 'need to know.'"[23] Several experts, including Bruce Schneier and Christopher Soghoian, have speculated that a successful attack against RC4, a 1987 encryption algorithm still used in at least 50 percent of all SSL/TLS traffic, is a plausible avenue, given several publicly known weaknesses of RC4.[29] Others have speculated that the NSA has gained the ability to crack 1024-bit RSA and Diffie–Hellman public keys.[30] A team of researchers has pointed out that there is wide reuse of a few non-ephemeral 1024-bit primes in Diffie–Hellman implementations, and that the NSA having done precomputation against those primes in order to break encryption using them in real time is very plausibly what the "groundbreaking capabilities" refer to.[31]
The Bullrun program is controversial in that it is believed the NSA deliberately inserts or keeps secret vulnerabilities which affect both law-abiding US citizens and the NSA's targets, under its NOBUS policy.[32] In theory, the NSA has two jobs: prevent vulnerabilities that affect the US, and find vulnerabilities that can be used against US targets; but, as argued by Bruce Schneier, the NSA seems to prioritize finding (or even creating) and keeping vulnerabilities secret. Schneier has called for the NSA to be broken up so that the group charged with strengthening cryptography is not subservient to the groups that want to break the cryptography of its targets.[33]
As part of the Snowden leaks, it became widely known that intelligence agencies could bypass encryption of data stored on Android and iOS smartphones by legally ordering Google and Apple to bypass the encryption on specific phones. Around 2014, as a reaction to this, Google and Apple redesigned their encryption so that they did not have the technical ability to bypass it, and it could only be unlocked by knowing the user's password.[34][35]
Various law enforcement officials, including the Obama administration's Attorney General Eric Holder,[36] responded with strong condemnation, calling it unacceptable that the state could not access alleged criminals' data even with a warrant. In one of the more iconic responses, the chief of detectives for Chicago's police department stated that "Apple will become the phone of choice for the pedophile".[37] The Washington Post posted an editorial insisting that "smartphone users must accept that they cannot be above the law if there is a valid search warrant", and, after agreeing that backdoors would be undesirable, suggested implementing a "golden key" backdoor which would unlock the data with a warrant.[38][39]
FBI Director James Comey cited a number of cases to support the need to decrypt smartphones. However, in none of the cases, presumably carefully handpicked, did the smartphone have anything to do with the identification or capture of the culprits, and the FBI seems to have been unable to find any strong cases supporting the need for smartphone decryption.[40]
Bruce Schneier has labelled the right-to-smartphone-encryption debate Crypto Wars II,[41] while Cory Doctorow has called it Crypto Wars redux.[42]
Legislators in the US states of California[43]and New York[44]have proposed bills to outlaw the sale of smartphones with unbreakable encryption. As of February 2016, no bills have been passed.
In February 2016, the FBI obtained a court order demanding that Apple create and electronically sign new software which would enable the FBI to unlock an iPhone 5c it had recovered from one of the shooters in the 2015 terrorist attack in San Bernardino, California. Apple challenged the order. In the end, the FBI hired a third party to crack the phone. See FBI–Apple encryption dispute.
In April 2016, Dianne Feinstein and Richard Burr sponsored a bill, described as "overly vague" by some,[45] that would be likely to criminalise all forms of strong encryption.[46][47]
In December 2019, the United States Senate Committee on the Judiciary convened a hearing on Encryption and Lawful Access, focusing on encrypted smartphone storage.[48] District Attorney Cyrus Vance Jr., Professor Matt Tait, Erik Neuenschwander from Apple, and Jay Sullivan from Facebook testified. Chairman Lindsey Graham stated in his opening remarks, "all of us want devices that protect our privacy." He also said law enforcement should be able to read encrypted data on devices, threatening to pass legislation if necessary: "You're going to find a way to do this or we're going to do this for you."[49]
In October 2017, Deputy Attorney General Rod Rosenstein called for key escrow under the euphemism "responsible encryption"[50] as a solution to the ongoing problem of "going dark".[51] This refers to wiretapping court orders and police measures becoming ineffective as strong end-to-end encryption is increasingly added to widespread messenger products. Rosenstein suggested key escrow would provide their customers with a way to recover their encrypted data if they forget their password, so that it is not lost forever. From a law enforcement perspective, this would allow a judge to issue a search warrant instructing the company to decrypt the data; without escrow or other undermining of encryption, it is impossible for a service provider to comply with this request. In contrast to previous proposals, the decentralized storage of keys by companies instead of government agencies is claimed to be an additional safeguard.
In 2015, the head of the NSA, Admiral Michael S. Rogers, suggested further decentralizing the key escrow by introducing "front doors" instead of back doors into encryption.[52] This way, the key would be split into two halves: one kept by government authorities and the other by the company responsible for the encryption product. The government would thus still need a search warrant to obtain the company's half-key, while the company would be unable to abuse the key escrow to access users' data without the government's half-key. Experts were not impressed.[52][53]
In 2018, the NSA promoted the use of "lightweight encryption", in particular its ciphers Simon and Speck, for Internet of Things devices.[54] However, the attempt to have those ciphers standardized by ISO failed because of severe criticism from the board of cryptography experts, which provoked fears that the NSA had non-public knowledge of how to break them.[55]
Following the 2015 Charlie Hebdo shooting, a terrorist attack, former UK Prime Minister David Cameron called for outlawing non-backdoored cryptography, saying that there should be no "means of communication" which "we cannot read".[56][57] US President Barack Obama sided with Cameron on this.[58] This call for action does not seem to have resulted in any legislation or changes in the status quo of non-backdoored cryptography being legal and available.
The Eliminating Abusive and Rampant Neglect of Interactive Technologies (EARN IT) Act of 2020 provides for a 19-member National Commission which will develop a set of "best practice" guidelines to which technology providers will have to conform in order to "earn" immunity (traditionally provided 'automatically' by Section 230 of the Communications Decency Act) from liability for child sexual abuse material on their platforms. Proponents present it as a way to tackle child sexual abuse material on Internet platforms, but it has been criticized by advocates of encryption because it is likely that the "best practices" devised by the commission will include refraining from using end-to-end encryption, as such encryption would make it impossible to screen for illegal content.[59][60]
|
https://en.wikipedia.org/wiki/Crypto_wars
|
The export of cryptography from the United States to other countries has experienced various levels of restriction over time.[2] World War II illustrated that code-breaking and cryptography can play an integral part in national security and the ability to prosecute war. Changes in technology and the preservation of free speech have been competing factors in the regulation and constraint of cryptographic technologies for export.
In the early days of the Cold War, the U.S. and its allies developed an elaborate series of export control regulations designed to prevent a wide range of Western technology from falling into the hands of others, particularly the Eastern bloc. All export of technology classed as 'critical' required a license. CoCom was organized to coordinate Western export controls.
Two types of technology were protected: technology associated only with weapons of war ("munitions"), and dual-use technology, which also had commercial applications. In the U.S., dual-use technology export was controlled by the Department of Commerce, while munitions were controlled by the State Department. Since in the immediate post-WWII period the market for cryptography was almost entirely military, encryption technology (techniques as well as equipment and, after computers began to play a larger role in modern life, crypto software) was included on the United States Munitions List as "Category XI – Miscellaneous Articles", and later "Category XIII – Auxiliary Military Equipment", on November 17, 1954. The multinational control of the export of cryptography on the Western side of the Cold War divide was done via the mechanisms of CoCom.
By the 1960s, however, financial organizations were beginning to require strong commercial encryption for the rapidly growing field of wired money transfer. The U.S. government's introduction of the Data Encryption Standard in 1975 meant that commercial uses of high-quality encryption would become common, and serious problems of export control began to arise. Generally these were dealt with through case-by-case export license request proceedings brought by computer manufacturers, such as IBM, and by their large corporate customers.
Encryption export controls became a matter of public concern with the introduction of the personal computer. Phil Zimmermann's PGP encryption software and its distribution on the Internet in 1991 was the first major 'individual-level' challenge to controls on the export of cryptography. The growth of electronic commerce in the 1990s created additional pressure for reduced restrictions. VideoCipher II also used DES to scramble satellite TV audio.
In 1989, non-encryption use of cryptography (such as access control and message authentication) was removed from export control with a Commodity Jurisdiction.[1] In 1992, an exception was formally added to the USML for non-encryption use of cryptography (and satellite TV descramblers), and a deal between the NSA and the Software Publishers Association made 40-bit RC2 and RC4 encryption easily exportable using a Commodity Jurisdiction with special "7-day" and "15-day" review processes (which transferred control from the State Department to the Commerce Department). At this stage Western governments had, in practice, a split personality when it came to encryption: policy was made by the military cryptanalysts, who were solely concerned with preventing their 'enemies' acquiring secrets, but that policy was then communicated to commerce by officials whose job was to support industry.
Shortly afterward, Netscape's SSL technology was widely adopted as a method for protecting credit card transactions using public key cryptography. Netscape developed two versions of its web browser. The "U.S. edition" supported full-size (typically 1024-bit or larger) RSA public keys in combination with full-size symmetric keys (secret keys) (128-bit RC4 or 3DES in SSL 3.0 and TLS 1.0). The "International Edition" had its effective key lengths reduced to 512 bits and 40 bits respectively (RSA_EXPORT with 40-bit RC2 or RC4 in SSL 3.0 and TLS 1.0).[3] Acquiring the 'U.S. domestic' version turned out to be sufficient hassle that most computer users, even in the U.S., ended up with the 'International' version,[4] whose weak 40-bit encryption can currently be broken in a matter of days using a single computer. A similar situation occurred with Lotus Notes for the same reasons.
Legal challenges by Peter Junger and other civil libertarians and privacy advocates, the widespread availability of encryption software outside the U.S., and the perception by many companies that adverse publicity about weak encryption was limiting their sales and the growth of e-commerce, led to a series of relaxations in US export controls, culminating in 1996 in President Bill Clinton signing Executive Order 13026, transferring commercial encryption from the Munitions List to the Commerce Control List. Furthermore, the order stated that "the software shall not be considered or treated as 'technology'" in the sense of the Export Administration Regulations. The Commodity Jurisdiction process was replaced with a Commodity Classification process, and a provision was added to allow export of 56-bit encryption if the exporter promised to add "key recovery" backdoors by the end of 1998. In 1999, the EAR was changed to allow 56-bit encryption (based on RC2, RC4, RC5, DES or CAST) and 1024-bit RSA to be exported without any backdoors, and new SSL cipher suites were introduced to support this (RSA_EXPORT1024 with 56-bit RC4 or DES). In 2000, the Department of Commerce implemented rules that greatly simplified the export of commercial and open source software containing cryptography, including allowing the key length restrictions to be removed after going through the Commodity Classification process (to classify the software as "retail") and adding an exception for publicly available encryption source code.[5]
As of 2009, non-military cryptography exports from the U.S. are controlled by the Department of Commerce's Bureau of Industry and Security.[6] Some restrictions still exist, even for mass market products, particularly with regard to export to "rogue states" and terrorist organizations. Militarized encryption equipment, TEMPEST-approved electronics, custom cryptographic software, and even cryptographic consulting services still require an export license.[6]: 6–7 Furthermore, encryption registration with the BIS is required for the export of "mass market encryption commodities, software and components with encryption exceeding 64 bits" (75 FR 36494). For elliptic-curve algorithms and asymmetric algorithms, the key length thresholds are 128 bits and 768 bits, respectively.[7] In addition, other items require a one-time review by, or notification to, BIS prior to export to most countries.[6] For instance, the BIS must be notified before open-source cryptographic software is made publicly available on the Internet, though no review is required.[8] Export regulations have been relaxed from pre-1996 standards but are still complex.[6] Other countries, notably those participating in the Wassenaar Arrangement,[9] have similar restrictions.[10] On March 29, 2021, the Implementation of Wassenaar Arrangement 2019 Plenary Decisions[11] was published in the Federal Register. This rule included changes to license exception ENC, Section 740.17 of the EAR.[12][13]
U.S. non-military exports are controlled by the Export Administration Regulations (EAR), a short name for the U.S. Code of Federal Regulations (CFR) Title 15, chapter VII, subchapter C.
Encryption items specifically designed, developed, configured, adapted or modified for military applications (including command, control and intelligence applications) are controlled by the Department of State on the United States Munitions List.
Encryption export terminology is defined in EAR part 772.1.[14]
Export destinations are classified by the EAR Supplement No. 1 to Part 740 into four country groups (A, B, D, E) with further subdivisions;[15] a country can belong to more than one group. For the purposes of encryption, groups B, D:1, and E:1 are the most important.
The EAR Supplement No. 1 to Part 738 (Commerce Country Chart) contains the table of country restrictions.[16] If the line of the table that corresponds to a country contains an X in the reason-for-control column, the export of a controlled item requires a license unless an exception can be applied. For the purposes of encryption, three reasons for control matter most.
For export purposes, each item is classified with an Export Control Classification Number (ECCN) with the help of the Commerce Control List (CCL, Supplement No. 1 to the EAR part 774).[6]
An item can be either self-classified, or a classification ("review") requested from the BIS. A BIS review is required for typical items to get the 5A992 or 5D992 classification.
|
https://en.wikipedia.org/wiki/Export_of_cryptography_from_the_United_States
|
A number of countries have attempted to restrict the import of cryptography tools.
Countries may wish to restrict the import of cryptography technologies for a number of reasons.
The Electronic Privacy Information Center and Global Internet Liberty Campaign reports use a color code to indicate each country's level of restriction.
|
https://en.wikipedia.org/wiki/Restrictions_on_the_import_of_cryptography
|
IP address blocking or IP banning is a configuration of a network service that blocks requests from hosts with certain IP addresses. IP address blocking is commonly used to protect against brute force attacks and to prevent access by a disruptive address. It can also be used to restrict access to or from a particular geographic area; for example, syndicating content to a specific region through the use of Internet geolocation.[1]
IP address blocking can be implemented with a hosts file (e.g., on Mac, Windows, Android, or OS X) or with a TCP wrapper (for Unix-like operating systems). It can be bypassed using methods such as proxy servers, or by obtaining a new IP address through DHCP lease renewal.
Every device connected to the Internet is assigned a unique IP address, which is needed to enable devices to communicate with each other. With appropriate software on the host website, the IP address of visitors to the site can be logged and can also be used to determine the visitor's geographical location.[2][3]
Logging IP addresses can, for example, reveal whether a person has visited the site before (e.g., to prevent someone voting more than once), track their viewing pattern, and show how long it has been since they last performed any activity on the site (so a timeout limit can be enforced), among other things.
Knowing the visitor's geolocation indicates, among other things, the visitor's country. In some cases, requests from or responses to a certain country are blocked entirely. Geo-blocking has been used, for example, to block shows in certain countries, such as censoring shows deemed inappropriate. This is especially frequent in places such as China.[4][5]
Internet users may circumvent geo-blocking and censorship, and protect their personal identity, by using a virtual private network.[4]
On a website, an IP address block can prevent a disruptive address from access, though a warning and/or account block may be used first. Dynamic allocation of IP addresses by ISPs can complicate IP address blocking by making it difficult to block a specific user without blocking a whole range of IP addresses, thereby creating collateral damage. For websites with a small enough audience (often intentionally so, when most potential visitors are explicitly out of scope), this large-scale collateral damage is often tolerable: most accesses from addresses in the same IP range come from people who merely share a dynamic IP pool from the same Internet service provider (ISP), country, city, or city district, since ISPs assign IP ranges on that basis. On a website with a low enough total visitor count, it is unlikely that all of these attributes match more than a single person. For large websites, the Terms of Service usually reserve the administrators' right to block access at their own discretion, accepting such collateral damage.
Unix-like operating systems commonly implement IP address blocking using a TCP wrapper, configured by the host access control files /etc/hosts.deny and /etc/hosts.allow.
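As an illustration of the two files mentioned above, a minimal default-deny TCP wrapper setup might look like the following. The service name and subnet are placeholder examples, not a recommendation for any particular network:

```
# /etc/hosts.allow — permit SSH only from one trusted subnet
sshd: 192.168.1.0/255.255.255.0

# /etc/hosts.deny — refuse every service/client pair not matched above
ALL: ALL
```

The wrapper consults hosts.allow first; a request that matches no rule there falls through to hosts.deny, so the `ALL: ALL` line turns the configuration into a whitelist.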
Both companies and schools offering remote user access use Linux programs such as DenyHosts or Fail2ban to protect against unauthorized access while still allowing permitted remote access. Such tools are also used for Internet censorship.
IP address blocking is possible on many systems using a hosts file, which is a simple text file containing hostnames and IP addresses. Hosts files are used by many operating systems, including Microsoft Windows, Linux, Android, and OS X.
Proxy servers and other methods can be used to bypass the blocking of traffic from IP addresses.[7] However, anti-proxy strategies are available. Consumer-grade Internet routers can sometimes obtain a new public IP address on demand from the ISP using DHCP lease renewal to circumvent individual IP address blocks. This, in turn, can be countered by blocking the range of IP addresses from which the Internet service provider assigns new addresses, which is usually a shared IP address prefix. However, this may impact legitimate users of the same Internet service provider who have IP addresses in the same range, which effectively amounts to a denial-of-service attack on them.
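The trade-off described above — blocking a provider's whole prefix rather than a single address — can be sketched with Python's standard ipaddress module. The prefixes below are reserved documentation ranges, chosen purely for illustration:

```python
# Sketch of range-based blocking: banning an ISP's shared /24 prefix catches
# the abuser's renewed DHCP address, but would also catch every other
# customer whose address falls inside that range.
import ipaddress

blocked = ipaddress.ip_network("203.0.113.0/24")  # assumed shared ISP prefix

def is_blocked(addr: str) -> bool:
    """True if the address falls inside the blocked prefix."""
    return ipaddress.ip_address(addr) in blocked

print(is_blocked("203.0.113.7"))   # True  — renewed lease in the same range, still caught
print(is_blocked("198.51.100.7"))  # False — a different provider, unaffected
```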
In Craigslist v. 3Taps (2013), US federal judge Charles R. Breyer held that circumventing an address block to access a website is a violation of the Computer Fraud and Abuse Act ("unauthorized access"), and is thus punishable by civil damages.
|
https://en.wikipedia.org/wiki/IP_address_blocking
|
Internet censorship is the legal control or suppression of what can be accessed, published, or viewed on the Internet. Censorship is most often applied to specific Internet domains (such as Wikipedia.org, for example) but exceptionally may extend to all Internet resources located outside the jurisdiction of the censoring state. Internet censorship may also put restrictions on what information can be made Internet-accessible.[1] Organizations providing Internet access – such as schools and libraries – may choose to preclude access to material that they consider undesirable, offensive, age-inappropriate or even illegal, and regard this as ethical behavior rather than censorship. Individuals and organizations may engage in self-censorship of material they publish, for moral, religious, or business reasons, to conform to societal norms or political views, due to intimidation, or out of fear of legal or other consequences.[2][3]
The extent of Internet censorship varies on a country-to-country basis. While some countries have moderate Internet censorship, others go as far as to limit access to information such as news and to suppress and silence discussion among citizens.[3] Internet censorship also occurs in response to or in anticipation of events such as elections, protests, and riots. An example is the increased censorship due to the events of the Arab Spring. Other types of censorship include the use of copyright, defamation, harassment, and various obscene-material claims as a way to deliberately suppress content.
Support for and opposition to Internet censorship also varies. In a 2012 Internet Society survey, 71% of respondents agreed that "censorship should exist in some form on the Internet". In the same survey, 83% agreed that "access to the Internet should be considered a basic human right" and 86% agreed that "freedom of expression should be guaranteed on the Internet". Perception of Internet censorship in the US is largely based on the First Amendment and the right to expansive free speech and access to content without regard to the consequences.[4] According to GlobalWebIndex, over 400 million people use virtual private networks to circumvent censorship or for increased user privacy.[5]
Many of the challenges associated with Internet censorship are similar to those for offline censorship of more traditional media such as newspapers, magazines, books, music, radio, television, and film. One difference is that national borders are more permeable online: residents of a country that bans certain information can find it on websites hosted outside the country. Thus censors must work to prevent access to information even though they lack physical or legal control over the websites themselves. This in turn requires the use of technical censorship methods that are unique to the Internet, such as site blocking and content filtering.[6]
Views about the feasibility and effectiveness of Internet censorship have evolved in parallel with the development of the Internet and censorship technologies:
Blocking and filtering can be based on relatively static blacklists or be determined more dynamically based on a real-time examination of the information being exchanged. Blacklists may be produced manually or automatically and are often not available to non-customers of the blocking software. Blocking or filtering can be done at a centralized national level, at a decentralized sub-national level, or at an institutional level, e.g., in libraries, universities or Internet cafés.[3] Blocking and filtering may also vary within a country across different ISPs.[10] Countries may filter sensitive content on an ongoing basis and/or introduce temporary filtering during key time periods such as elections. In some cases, the censoring authorities may surreptitiously block content to mislead the public into believing that censorship has not been applied. This is achieved by returning a fake "Not Found" error message when an attempt is made to access a blocked website.[11]
Unless the censor has total control over all Internet-connected computers, such as in North Korea (which employs an intranet that only privileged citizens can access) or Cuba, total censorship of information is very difficult or impossible to achieve due to the underlying distributed technology of the Internet. Pseudonymity and data havens (such as Hyphanet) protect free speech using technologies that guarantee material cannot be removed and that prevent the identification of authors. Technologically savvy users can often find ways to access blocked content. Nevertheless, blocking remains an effective means of limiting access to sensitive information for most users when censors, such as those in China, are able to devote significant resources to building and maintaining a comprehensive censorship system.[6]
The term "splinternet" is sometimes used to describe the effects of national firewalls. The verb "rivercrab" colloquially refers to censorship of the Internet, particularly in Asia.[12]
Various parties are using different technical methods of preventing public access to undesirable resources, with varying levels of effectiveness, costs and side effects.
Entities mandating and implementing the censorship usually identify targets by one of the following items: keywords, domain names, and IP addresses. Lists are populated from different sources, ranging from private suppliers through courts to specialized government agencies (the Ministry of Industry and Information Technology of China, the Ministry of Culture and Islamic Guidance in Iran).[13]
According to Hoffmann, different methods are used to block certain websites or pages, including DNS spoofing, blocking access to IPs, analyzing and filtering URLs, inspecting and filtering packets, and resetting connections.[14]
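A minimal sketch of how the identifier-based matching described above might work in a filter. The blocklist entries and function name here are hypothetical, for illustration only; real systems combine such checks with packet inspection and connection resets.

```python
from urllib.parse import urlparse

# Hypothetical blocklist entries -- illustrative only, not any real censor's list.
BLOCKED_KEYWORDS = {"forbidden-topic"}
BLOCKED_DOMAINS = {"blocked.example"}
BLOCKED_IPS = {"203.0.113.7"}

def is_blocked(url: str, resolved_ip: str) -> bool:
    """Return True if the URL matches a keyword, domain, or IP blocklist entry."""
    host = urlparse(url).hostname or ""
    # Domain match covers the domain itself and its subdomains.
    if host in BLOCKED_DOMAINS or any(host.endswith("." + d) for d in BLOCKED_DOMAINS):
        return True
    # Keyword match scans the full URL string.
    if any(kw in url for kw in BLOCKED_KEYWORDS):
        return True
    # IP match blocks every site hosted at that address.
    return resolved_ip in BLOCKED_IPS
```

For example, `is_blocked("http://news.example/forbidden-topic", "198.51.100.2")` returns `True` via the keyword rule even though neither the domain nor the IP is listed.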
The censor-nominated technologies can be enforced at various levels of a country's Internet infrastructure.[13]
Internet content is subject to a range of technical censorship methods.[3][6]
Technical censorship techniques are subject to both over- and under-blocking, since it is often impossible to block exactly the targeted content without also blocking permissible material or allowing some access to the targeted material, thus providing more or less protection than desired.[6] An example is blocking the IP address of a server that hosts multiple websites, which prevents access to all of the websites rather than just those that contain content deemed offensive.[20]
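The shared-hosting over-blocking problem can be made concrete with a small sketch. The domain-to-IP map below is hypothetical: three unrelated sites share one server address, so blocking that IP to reach one of them takes down the other two as collateral.

```python
# Hypothetical shared-hosting map: several unrelated sites, one server IP.
HOSTING = {
    "target.example":  "192.0.2.10",
    "recipes.example": "192.0.2.10",
    "school.example":  "192.0.2.10",
    "other.example":   "192.0.2.99",
}

def collateral_damage(target_domain: str) -> set:
    """Domains unintentionally blocked when the target's IP address is blocked."""
    ip = HOSTING[target_domain]
    return {d for d, addr in HOSTING.items() if addr == ip} - {target_domain}
```

Here `collateral_damage("target.example")` yields `{"recipes.example", "school.example"}`: two sites blocked that contain nothing objectionable.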
Writing in 2009, Ronald Deibert, professor of political science at the University of Toronto and co-founder and one of the principal investigators of the OpenNet Initiative, and, writing in 2011, Evgeny Morozov, a visiting scholar at Stanford University and an op-ed contributor to The New York Times, explain that companies in the United States, Finland, France, Germany, Britain, Canada, and South Africa are in part responsible for the increasing sophistication of online content filtering worldwide. While the off-the-shelf filtering software sold by Internet security companies is primarily marketed to businesses and individuals seeking to protect themselves and their employees and families, it is also used by governments to block what they consider sensitive content.[21][22]
Among the most popular filtering software programs is SmartFilter by Secure Computing in California, which was bought by McAfee in 2008. SmartFilter has been used by Tunisia, Saudi Arabia, Sudan, the UAE, Kuwait, Bahrain, Iran, and Oman, as well as the United States and the UK.[23] Myanmar and Yemen have used filtering software from Websense. The Canadian-made commercial filter Netsweeper[24] is used in Qatar, the UAE, and Yemen.[25] The Canadian organization CitizenLab has reported that Sandvine and Procera products are used in Turkey and Egypt.[26]
On 12 March 2013, in a Special Report on Internet Surveillance, Reporters Without Borders named five "Corporate Enemies of the Internet": Amesys (France), Blue Coat Systems (U.S.), Gamma (UK and Germany), Hacking Team (Italy), and Trovicor (Germany). The companies sell products that are liable to be used by governments to violate human rights and freedom of information. RWB said that the list is not exhaustive and will be expanded in the coming months.[27]
In a U.S. lawsuit filed in May 2011, Cisco is accused of helping the Chinese government build a firewall, known widely as the Golden Shield, to censor the Internet and keep tabs on dissidents.[28] Cisco said it had made nothing special for China. Cisco is also accused of aiding the Chinese government in monitoring and apprehending members of the banned Falun Gong group.[29]
Many filtering programs allow blocking to be configured based on dozens of categories and sub-categories such as these from Websense: "abortion" (pro-life, pro-choice), "adult material" (adult content, lingerie and swimsuit, nudity, sex, sex education), "advocacy groups" (sites that promote change or reform in public policy, public opinion, social practice, economic activities, and relationships), "drugs" (abused drugs, marijuana, prescribed medications, supplements and unregulated compounds), "religion" (non-traditional religions, occult and folklore, traditional religions), and so on.[25] The blocking categories used by the filtering programs may contain errors leading to the unintended blocking of websites.[21] The blocking of Dailymotion in early 2007 by Tunisian authorities was, according to the OpenNet Initiative, due to Secure Computing wrongly categorizing Dailymotion as pornography for its SmartFilter filtering software. It was initially thought that Tunisia had blocked Dailymotion due to satirical videos about human rights violations in Tunisia, but after Secure Computing corrected the mistake, access to Dailymotion was gradually restored in Tunisia.[30]
Organizations such as the Global Network Initiative, the Electronic Frontier Foundation, Amnesty International, and the American Civil Liberties Union have successfully lobbied some vendors such as Websense to make changes to their software, to refrain from doing business with repressive governments, and to educate schools that have inadvertently reconfigured their filtering software too strictly.[31][32][33] Nevertheless, regulations and accountability related to the use of commercial filters and services are often non-existent, and there is relatively little oversight from civil society or other independent groups. Vendors often consider information about what sites and content are blocked valuable intellectual property that is not made available outside the company, sometimes not even to the organizations purchasing the filters. Thus, by relying upon out-of-the-box filtering systems, the detailed task of deciding what is or is not acceptable speech may be outsourced to the commercial vendors.[25]
Internet content is also subject to censorship methods similar to those used with more traditional media.[6]
Deplatforming is a form of Internet censorship in which controversial speakers or speech are suspended, banned, or otherwise shut down by social media platforms and other service providers that generally provide a venue for free speech or expression.[38] Banking and financial service providers, among other companies, have also denied services to controversial activists or organizations, a practice known as "financial deplatforming".
Law professor Glenn Reynolds dubbed 2018 the "Year of Deplatforming" in an August 2018 article in The Wall Street Journal.[38] According to Reynolds, in 2018 "the internet giants decided to slam the gates on a number of people and ideas they don't like."[38] On 6 August 2018, for example, several major platforms, including YouTube and Facebook, executed a coordinated, permanent ban on all accounts and media associated with conservative talk show host Alex Jones and his media platform InfoWars, citing "hate speech" and "glorifying violence."[39]
Most major web service operators reserve broad rights to remove or pre-screen content, and to suspend or terminate user accounts, sometimes without giving a specific reason or giving only a vague general list of the reasons allowing the removal. The phrases "at our sole discretion", "without prior notice", and "for other reasons" are common in Terms of Service agreements.
Internet censorship circumvention is the set of processes used by technologically savvy Internet users to bypass the technical aspects of Internet filtering and gain access to otherwise censored material. Circumvention is an inherent problem for those wishing to censor the Internet because filtering and blocking do not remove content from the Internet but instead block access to it. Therefore, as long as there is at least one publicly accessible uncensored system, it will often be possible to gain access to otherwise censored material. However, circumvention may not be possible for non-tech-savvy users, so blocking and filtering remain effective means of censoring the Internet access of large numbers of users.[6]
Different techniques and resources are used to bypass Internet censorship, including proxy websites, virtual private networks, sneakernets, the dark web, and circumvention software tools. Solutions differ in ease of use, speed, security, and risk. Most, however, rely on gaining access to an Internet connection that is not subject to filtering, often in a different jurisdiction not subject to the same censorship laws. According to GlobalWebIndex, over 400 million people use virtual private networks to circumvent censorship or for an increased level of privacy.[5] The majority of circumvention techniques are not suitable for day-to-day use.[48]
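The simplest of the techniques above, a proxy, can be sketched with the Python standard library. The proxy address here is hypothetical; in practice it would be a server in a jurisdiction that does not filter the requested content, and real circumvention tools add encryption and traffic obfuscation that a plain HTTP proxy lacks.

```python
import urllib.request

# Hypothetical proxy address -- a relay outside the filtered network.
PROXY_URL = "http://proxy.example:8080"

def make_proxied_opener(proxy_url: str) -> urllib.request.OpenerDirector:
    """Build an opener that routes HTTP and HTTPS traffic through a proxy."""
    handler = urllib.request.ProxyHandler({"http": proxy_url, "https": proxy_url})
    return urllib.request.build_opener(handler)

opener = make_proxied_opener(PROXY_URL)
# opener.open("http://blocked.example/") would now travel via the proxy,
# so the local filter sees only a connection to proxy.example.
```

This is only a sketch of the routing idea: an unencrypted proxy still reveals to the proxy operator (and, for plain HTTP, to the censor) what is being fetched.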
There are risks to using circumvention software or other methods to bypass Internet censorship. In some countries, individuals who gain access to otherwise restricted content may be violating the law, and if caught can be expelled, fired, jailed, or subject to other punishments and loss of access.[3][49]
In June 2011, The New York Times reported that the U.S. is engaged in a "global effort to deploy 'shadow' Internet and mobile phone systems that dissidents can use to undermine repressive governments that seek to silence them by censoring or shutting down telecommunications networks."[50]
Another way to circumvent Internet censorship is to physically go to an area where the Internet is not censored. In 2017, a so-called "Internet refugee camp" was established by IT workers in the village of Bonako, just outside an area of Cameroon where the Internet is regularly blocked.[51][52]
The use of HTTPS, versus what was originally HTTP, in web searches created greater accessibility to most sites originally blocked or heavily monitored. Many social media sites, including Facebook, Google, and Twitter, have added an automatic redirection to HTTPS as of 2017.[53] With the added adoption of HTTPS, censors are left with the limited options of either completely blocking all content or none of it.[54]
The use of HTTPS does not inherently prevent the censorship of an entire domain, as the domain name is left unencrypted in the ClientHello of the TLS handshake. The Encrypted Client Hello TLS extension expands on HTTPS and encrypts the entire ClientHello, but this depends on both client and server support.[55][56]
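Why the domain leaks can be shown by encoding the server_name (SNI) extension as specified in RFC 6066: the hostname is carried as plain ASCII bytes inside the otherwise unencrypted ClientHello, so any on-path observer can read it. The sketch below builds just that extension; the hostname is a placeholder.

```python
import struct

def sni_extension(hostname: str) -> bytes:
    """Encode the TLS server_name extension (RFC 6066) for a hostname."""
    name = hostname.encode("ascii")
    # One ServerName entry: name_type 0 (host_name), 2-byte length, the name.
    entry = struct.pack("!BH", 0, len(name)) + name
    # ServerNameList: 2-byte list length, then the entries.
    server_name_list = struct.pack("!H", len(entry)) + entry
    # Extension: type 0 (server_name), 2-byte data length, the list.
    return struct.pack("!HH", 0, len(server_name_list)) + server_name_list

ext = sni_extension("blocked.example")
# The hostname appears verbatim -- a censor reading the handshake sees it.
assert b"blocked.example" in ext
```

Encrypted Client Hello addresses exactly this: it wraps the whole ClientHello, including this extension, in an encrypted envelope.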
There are several motives or rationales for Internet filtering: politics and power, social norms and morals, and security concerns. Protecting existing economic interests is an additional emergent motive for Internet filtering. In addition, networking tools and applications that allow the sharing of information related to these motives are themselves subjected to filtering and blocking. And while there is considerable variation from country to country, the blocking of web sites in a local language is roughly twice that of web sites available only in English or other international languages.[11]
Censorship directed at political opposition to the ruling government is common in authoritarian and repressive regimes. Some countries block web sites related to religion and minority groups, often when these movements represent a threat to the ruling regimes.[11]
Examples include:
Social filtering is censorship of topics that are held to be antithetical to accepted societal norms.[11] In particular, censorship of child pornography and content deemed inappropriate for children enjoys very widespread public support, and such content is subject to censorship and other restrictions in most countries.
Examples include:
Many organizations implement filtering as part of a defense-in-depth strategy to protect their environments from malware,[62] and to protect their reputations in the event of their networks being used, for example, to carry out sexual harassment.
Internet filtering related to threats to national security that targets the Web sites of insurgents, extremists, and terrorists often enjoys wide public support.[11]
Examples include:
The protection of existing economic interests is sometimes the motivation for blocking new Internet services such as low-cost telephone services that use Voice over Internet Protocol (VoIP). These services can reduce the customer base of telecommunications companies, many of which enjoy entrenched monopoly positions and some of which are government sponsored or controlled.[11]
Anti-copyright activists Christian Engström, Rick Falkvinge, and Oscar Swartz have alleged that censorship of child pornography is being used as a pretext by copyright lobby organizations to get politicians to implement similar site-blocking legislation against copyright-related piracy.[65]
Examples include:
Blocking the intermediate tools and applications of the Internet that can be used to assist users in accessing and sharing sensitive material is common in many countries.[11]
Examples include:
The right to be forgotten is a concept that has been discussed and put into practice in the European Union. In May 2014, the European Court of Justice ruled against Google in Costeja, a case brought by a Spanish man who requested the removal of a link to a digitized 1998 article in the La Vanguardia newspaper about an auction for his foreclosed home, for a debt that he had subsequently paid.[72] He initially attempted to have the article removed by complaining to Spain's data protection agency, the Agencia Española de Protección de Datos, which rejected the claim on the grounds that it was lawful and accurate, but accepted a complaint against Google and asked Google to remove the results.[73] Google sued in Spain and the lawsuit was transferred to the European Court of Justice. The court ruled in Costeja that search engines are responsible for the content they point to and thus Google was required to comply with EU data privacy laws.[74][75] It began compliance on 30 May 2014, during which it received 12,000 requests to have personal details removed from its search engine.[76]
Index on Censorship claimed that the "Costeja ruling ... allows individuals to complain to search engines about information they do not like with no legal oversight. This is akin to marching into a library and forcing it to pulp books. Although the ruling is intended for private individuals it opens the door to anyone who wants to whitewash their personal history. ... The Court's decision is a retrograde move that misunderstands the role and responsibility of search engines and the wider internet. It should send chills down the spine of everyone in the European Union who believes in the crucial importance of free expression and freedom of information."[77]
Various contexts influence whether or not an internet user will be resilient to censorship attempts. Users are more resilient to censorship if they are aware that information is being manipulated. This awareness of censorship leads to users finding ways to circumvent it. Awareness of censorship also allows users to factor this manipulation into their belief systems. Knowledge of censorship also offers some citizens incentive to try to discover information that is being concealed. In contrast, those that lack awareness of censorship cannot easily compensate for information manipulation.[78]
Other important factors for censorship resiliency are the demand for the information being concealed, and the ability to pay the costs to circumvent censorship. Entertainment content is more resilient to online censorship than political content, and users with more education, technology access, and wider, more diverse social networks are more resilient to censorship attempts.[78]
From 1995 to 2002, the government of South Korea passed the Telecommunications Business Act (TBA), the first internet censorship law in the world.[79] As more people in more places begin using the Internet for important activities, there is an increase in online censorship, using increasingly sophisticated techniques. The motives, scope, and effectiveness of Internet censorship vary widely from country to country. The countries engaged in state-mandated filtering are clustered in three main regions of the world: east Asia, central Asia, and the Middle East/North Africa.
Countries in other regions also practice certain forms of filtering. In the United States, state-mandated Internet filtering occurs on some computers in libraries and K–12 schools. Content related to Nazism or Holocaust denial is blocked in France and Germany. Child pornography and hate speech are blocked in many countries throughout the world.[80] In fact, many countries throughout the world, including some democracies with long traditions of strong support for freedom of expression and freedom of the press, are engaged in some amount of online censorship, often with substantial public support.[81]
Internet censorship in China is among the most stringent in the world. The government blocks Web sites that discuss the Dalai Lama, the 1989 crackdown on Tiananmen Square protesters, the banned spiritual practice Falun Gong, as well as many general Internet sites.[82] The government requires Internet search firms and state media to censor issues deemed officially "sensitive," and blocks access to foreign websites including Facebook, Twitter, and YouTube.[83] According to a study in 2014,[84] censorship in China is used to muzzle those outside government who attempt to spur the creation of crowds for any reason, whether in opposition to, in support of, or unrelated to the government.
There are international bodies that oppose internet censorship, for example "Internet censorship is open to challenge at the World Trade Organization (WTO) as it can restrict trade in online services, a forthcoming study argues".[85]
Generally, national laws affecting content within a country only apply to services that operate within that country and do not affect international services, but this has not been established clearly by international case law. There are concerns that, due to the vast differences in freedom of speech between countries, the ability for one country to affect speech across the global Internet could have chilling effects.
For example, Google won a case at the European Court of Justice in September 2019 that ruled that the EU's right to be forgotten only applied to services within the EU, and not globally.[86] But in a contrary decision in October 2019, the same court ruled that Facebook was required to comply globally with a takedown request made in relation to defamatory material that an Austrian had posted to Facebook about another person, which had been determined to be illegal under Austrian law. The case created a problematic precedent that the Internet may become subject to regulation under the strictest national defamation laws, and would limit free speech that may be acceptable in other countries.[87]
Several governments have resorted to shutting down most or all Internet connections in all or part of the country.
This appears to have been the case on 27 and 28 January 2011 during the 2011 Egyptian revolution, in what has been widely described as an "unprecedented" internet block.[88][89] About 3,500 Border Gateway Protocol (BGP) routes to Egyptian networks were shut down from about 22:10 to 22:35 UTC on 27 January.[88] This full block was implemented without cutting off major intercontinental fibre-optic links, with Renesys stating on 27 January, "Critical European-Asian fiber-optic routes through Egypt appear to be unaffected for now."[88] Full blocks also occurred in Myanmar/Burma in 2007,[90] Libya in 2011,[91] Iran in 2019,[92] and Syria during the Syrian civil war.
Almost all Internet connections in Sudan were disconnected from 3 June to 9 July 2019, in response to a political opposition sit-in seeking civilian rule.[93][94] A near-complete shutdown in Ethiopia lasted for a week after the Amhara Region coup attempt.[95] A week-long shutdown in Mauritania followed disputes over the 2019 Mauritanian presidential election.[96] Other country-wide shutdowns in 2019 include Zimbabwe, after gasoline price protests triggered police violence; Gabon, during the 2019 Gabonese coup attempt; and during or after elections in the Democratic Republic of the Congo, Benin, Malawi, and Kazakhstan.[97]
Local shutdowns are frequently ordered in India during times of unrest and security concerns.[98][99] Some countries have used localized Internet shutdowns to combat cheating during exams, including Iraq,[100] Ethiopia, India, Algeria, and Uzbekistan.[97]
The Iranian government imposed a total internet shutdown from 16 to 23 November 2019, in response to the fuel protests.[101] Doug Madory, the director of Internet analysis at Oracle, described the operation as "unusual in its scale" and far more advanced than past efforts.[102] Beginning Saturday afternoon on 16 November 2019, the government of Iran ordered the disconnection of much of the country's internet connectivity as a response to widespread protests against the government's decision to raise gas prices. While Iran is no stranger to government-directed interference in its citizens' access to the internet, this outage was notable in how it differed from past events: unlike previous efforts at censorship and bandwidth throttling, Iran's internet experienced a multi-day wholesale disconnection for much of its population, arguably the largest such event ever for Iran.[102][103][104]
Detailed country-by-country information on Internet censorship is provided by the OpenNet Initiative, Reporters Without Borders, Freedom House, the V-Dem Institute, Access Now, and in the US State Department Bureau of Democracy, Human Rights, and Labor's Human Rights Reports.[105] The ratings produced by several of these organizations are summarized in the Internet censorship and surveillance by country and the Censorship by country articles.
Through 2010, the OpenNet Initiative had documented Internet filtering by governments in over forty countries worldwide.[25] The level of filtering in 26 countries in 2007 and in 25 countries in 2009 was classified in the political, social, and security areas. Of the 41 separate countries classified, seven were found to show no evidence of filtering in all three areas (Egypt, France, Germany, India, Ukraine, United Kingdom, and United States), while one was found to engage in pervasive filtering in all three areas (China), 13 were found to engage in pervasive filtering in one or more areas, and 34 were found to engage in some level of filtering in one or more areas. Of the 10 countries classified in both 2007 and 2009, one reduced its level of filtering (Pakistan), five increased their level of filtering (Azerbaijan, Belarus, Kazakhstan, South Korea, and Uzbekistan), and four maintained the same level of filtering (China, Iran, Myanmar, and Tajikistan).[6][106]
The Freedom on the Net reports from Freedom House provide analytical reports and numerical ratings regarding the state of Internet freedom for countries worldwide.[107] The countries surveyed represent a sample with a broad range of geographical diversity and levels of economic development, as well as varying levels of political and media freedom. The surveys ask a set of questions designed to measure each country's level of Internet and digital media freedom, as well as the access and openness of other digital means of transmitting information, particularly mobile phones and text messaging services. Results are presented for three areas: Obstacles to Access, Limits on Content, and Violations of User Rights.
The results from the three areas are combined into a total score for a country (from 0 for best to 100 for worst) and countries are rated as "Free" (0 to 30), "Partly Free" (31 to 60), or "Not Free" (61 to 100) based on the totals.
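The banding just described can be restated directly in code. This is only an illustrative restatement of the article's thresholds, not anything published by Freedom House.

```python
def freedom_rating(score: int) -> str:
    """Map a Freedom on the Net total (0 best .. 100 worst) to its band,
    per the thresholds described above."""
    if not 0 <= score <= 100:
        raise ValueError("score must be between 0 and 100")
    if score <= 30:
        return "Free"
    if score <= 60:
        return "Partly Free"
    return "Not Free"
```

For example, `freedom_rating(30)` is `"Free"` while `freedom_rating(31)` is `"Partly Free"`: the bands are inclusive at their upper ends.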
Starting in 2009 Freedom House has produced nine editions of the report.[108][109][110][111][112][113][114][115][107]There was no report in 2010. The reports generally cover the period from June through May.
The 2014 report assessed 65 countries and reported that 36 countries experienced a negative trajectory in Internet freedom since the previous year, with the most significant declines in Russia, Turkey, and Ukraine. According to the report, few countries demonstrated any gains in Internet freedom, and the improvements that were recorded reflected less vigorous application of existing controls rather than new steps taken by governments to actively increase Internet freedom. The year's largest improvement was recorded in India, where restrictions to content and access were relaxed from what had been imposed in 2013 to stifle rioting in the northeastern states. Notable improvement was also recorded in Brazil, where lawmakers approved the bill Marco Civil da Internet, which contains significant provisions governing net neutrality and safeguarding privacy protection.[112]
In 2006, Reporters Without Borders (Reporters sans frontières, RSF), a Paris-based international non-governmental organization that advocates freedom of the press, started publishing a list of "Enemies of the Internet".[116] The organization classifies a country as an enemy of the internet because "all of these countries mark themselves out not just for their capacity to censor news and information online but also for their almost systematic repression of Internet users."[117] In 2007, a second list of countries "Under Surveillance" (originally "Under Watch") was added.[118]
Past Countries Under Surveillance:
When the "Enemies of the Internet" list was introduced in 2006, it listed 13 countries. From 2006 to 2012 the number of countries listed fell to 10 and then rose to 12. The list was not updated in 2013. In 2014 the list grew to 19 with an increased emphasis onsurveillancein addition to censorship. The list has not been updated since 2014.
When the "Countries under surveillance" list was introduced in 2008, it listed 10 countries. Between 2008 and 2012 the number of countries listed grew to 16 and then fell to 11. The number grew to 12 with the addition of Norway in 2020. The list was last updated in 2020.[citation needed]
On 12 March 2013, Reporters Without Borders published a Special report on Internet Surveillance.[27] The report includes two new lists:
The five "State Enemies of the Internet" named in March 2013 are: Bahrain, China, Iran, Syria, and Vietnam.[27]
The five "Corporate Enemies of the Internet" named in March 2013 are: Amesys (France), Blue Coat Systems (U.S.), Gamma Group (UK and Germany), Hacking Team (Italy), and Trovicor (Germany).[27]
The V-Dem Digital Societies Project measures a range of questions related to internet censorship, misinformation online, and internet shutdowns.[121]This annual report includes 35 indicators assessing five areas: disinformation, digital media freedom, state regulation of digital media, polarization of online media, and online social cleavages.[122]The data set uses V-Dem's methodology of aggregating surveys of experts from around the world.[122]It has been updated each year starting in 2019, with data covering from 2000–2021.[122]These ratings are more similar to other expert analyses like Freedom House than remotely sensed data from Access Now.[123]
Access Now maintains an annual list of internet shutdowns, throttling, and blockages as part of the #KeepItOn project.[123][124][125] These data track several features of shutdowns, including their location, their duration, the particular services impacted, the government's justification for the shutdown, and actual reasons for the shutdown as reported by independent media.[126] Unlike Freedom House or V-Dem, Access Now detects shutdowns using remote sensing and then confirms these instances with reports from civil society, government, in-country volunteers, or ISPs.[126][123] These methods have been found to be less prone to false positives.[123]
A poll of 27,973 adults in 26 countries, including 14,306 Internet users,[127] was conducted for the BBC World Service by the international polling firm GlobeScan using telephone and in-person interviews between 30 November 2009 and 7 February 2010. GlobeScan Chairman Doug Miller felt, overall, that the poll showed that:
Findings from the poll include:[128]
In July and August 2012, the Internet Society conducted online interviews of more than 10,000 Internet users in 20 countries. Some of the results relevant to Internet censorship are summarized below.[130]
Among the countries that filter or block online content, few openly admit to or fully disclose their filtering and blocking activities. States are frequently opaque and/or deceptive about the blocking of access to political information.[10]For example:
During the Arab Spring of 2011, media jihad (media struggle) was extensive. Internet and mobile technologies, particularly social networks such as Facebook and Twitter, played important new and unique roles in organizing and spreading the protests and making them visible to the rest of the world. An activist in Egypt tweeted, "we use Facebook to schedule the protests, Twitter to coordinate, and YouTube to tell the world".[133]
This successful use of digital media in turn led to increased censorship, including the complete loss of Internet access for periods of time in Egypt[88][89][134] and Libya in 2011.[91][135] In Syria, the Syrian Electronic Army (SEA), an organization that operates with at least tacit support of the government, claims responsibility for defacing or otherwise compromising scores of websites that it contends spread news hostile to the Syrian government. SEA disseminates denial-of-service (DoS) software designed to target media websites including those of Al Jazeera, BBC News, Syrian satellite broadcaster Orient TV, and Dubai-based Al Arabiya TV.[136]
In response to the greater freedom of expression brought about by the Arab Spring revolutions in countries that were previously subject to very strict censorship, in March 2011 Reporters Without Borders moved Tunisia and Egypt from its "Internet enemies" list to its list of countries "under surveillance"[137] and in 2012 dropped Libya from the list entirely.[119] At the same time, there were warnings that Internet censorship might increase in other countries following the events of the Arab Spring.[138][139] However, in 2013, the Libyan communication company LTT blocked pornographic websites.[140] It even blocked the family-filtered videos of ordinary websites like Dailymotion.[141]
During the Russian invasion of Ukraine in 2022, Russia was reported to have blocked Twitter and Facebook. Facebook was suspended over its policy of reviewing news stories produced by Russian state-backed media for authenticity before allowing them to be published on its platform; it was subject to a total ban, whereas Twitter was suspended regionally. Reports have identified that VPN use has enabled people to circumvent the restrictions by installing software.[142]
It has been reported that the European Union would seek to censor Russian media outlets regarded as producing propaganda.[citation needed]
|
https://en.wikipedia.org/wiki/Internet_censorship
|
In broadcasting, the term blackout refers to the non-airing of television or radio programming in a certain media market.[1]
It is particularly prevalent in the broadcasting of sports events, although other television or radio programs may be blacked out as well. Most blackout policies serve to protect local broadcasters (primarily regional sports networks) from competition by "out-of-market" networks that carry different teams, by only allowing viewers to watch non-national telecasts of teams within their designated markets (with television providers blacking out regional telecasts of teams that are outside their market, in turn encouraging viewers to purchase subscription-based out-of-market sports packages), and by allowing teams to black out national telecasts of games that are also being shown by a local broadcaster. In these situations, the national stations would close in those areas for the duration of the game, and in some cases be replaced with other stations until the game ends.
By contrast, some blackout policies, such as those of the U.S. National Football League and English association football (soccer), serve to encourage attendance at games by respectively requiring that a specific percentage of tickets be sold in order for a game to be televised in the home team's market, or by enforcing a blanket prohibition on any domestic telecasts of the sport during specific windows.
The term is also used in relation to situations where programming is removed or replaced on international feeds of a television service, because the broadcaster does not hold the territorial rights to air the programs outside of their home country. In some cases, replacement programming airs, but when there's no replacement programming required, the feed would temporarily close, and would not resume broadcasting until the next programme was due to begin.
Perhaps the most notable non-sports-related blackout in television was the blackout of Canadian federal election coverage. Because there are six time zones across Canada, polls close in different parts of the country at different times. Section 329 of the Canada Elections Act outlawed disseminating election results from other ridings in constituencies where polls were still open, ostensibly to prevent the results from the East from influencing voters in western ridings.[2]
However, in the federal election in 2000, Paul Charles Bryan published results from Atlantic Canada online despite being told not to by the authorities. Bryan was charged before the Provincial Court of British Columbia, but fought the charges as unconstitutional under section 2 of the Canadian Charter of Rights and Freedoms, which protects freedom of expression and freedom of association. Bryan's victory before the British Columbia Supreme Court meant that voters in British Columbia and the rest of Canada legally learned of election results in other ridings during the federal election in 2004. However, Elections Canada appealed, and Bryan lost his case before the British Columbia Court of Appeal. Bryan further appealed to the Supreme Court of Canada, but in a ruling made on March 15, 2007 (R. v. Bryan), the Court ruled 5–4 that Section 329 of the Canada Elections Act is constitutional and justified under section 1 of the Canadian Charter of Rights and Freedoms. Stephen Harper, who later became Prime Minister, labelled Elections Canada "jackasses" and tried to raise money for Bryan. The Canadian Broadcasting Corporation also supported Bryan, hoping to "make election night a bigger event than it already is".[3]
Before the 2000 election, Elections Canada moved to reduce the effects of the blackout and the influence of unauthorized knowledge of election results in Western ridings by altering the times that polls close, so that polls no longer close at the same local time throughout the country. Polls in Atlantic Canada close at 9 p.m. Atlantic (9:30 in Newfoundland), polls from Alberta to Quebec close an hour later (9 p.m. Eastern, 8 p.m. Central and 7 p.m. Mountain), and polls in British Columbia close an hour after that (7 p.m. Pacific). Historically, the results of the election are often not decisively known until more than an hour after polls close in the Eastern Time Zone, but are usually known within two hours of those polls closing.
Provincial elections are not subject to blackout restrictions – in provinces that have two time zones, the vast majority of the population lives in one time zone or the other. Election laws in these provinces stipulate that all polls are to close at the same time – invariably 8:00 p.m. (or 9:00 p.m. in Ontario beginning with the 2007 provincial election) in the time zone of the majority.
On August 17, 2011, Elections Canada Chief Electoral Officer Marc Mayrand suggested improvements of the voting system to Parliament; among them was a proposal to remove the blackout rule. Mayrand argued that "the growing use of social media puts in question not only the practical enforceability of the rule, but also its very intelligibility and usefulness in a world where the distinction between private communication and public transmission is quickly eroding. The time has come for Parliament to consider revoking the current rule."[4][5] On January 13, 2012, it was announced that the federal government would introduce legislation to repeal the blackout rule, citing the increased use of social media. The blackout rule was officially repealed in October 2015, prior to the 2015 Canadian federal election.[2]
The Canadian Football League's constitution does provide the option for teams to black out games in their home markets in order to encourage attendance; at one point, the CFL required games to be blacked out within a radius of 120 kilometres (75 miles) around the closest over-the-air signal carrying the game, or 56 kilometres (35 miles) of the stadium for cable broadcasts (and, for the Saskatchewan Roughriders, the entirety of the province).[6][7]
The policy received significant criticism in 2002 when the Hamilton Tiger-Cats enforced a blackout on a game against the Toronto Argonauts that had playoff implications; the range of the blackout was considered too wide for the market.[7]
Under the league's 2008–2013 contract with TSN, teams were given a cap on the number of blackouts they could impose per season (with the number varying by media and CFL reports, ranging from 2 for Hamilton and Toronto to 5 for teams in Western Canada), and final decisions were assigned to the league if at least 90% of tickets were sold within 48 hours of the game. Although the CFL stated that the league's current contract with TSN (which began in 2014) does allow for blackouts, they have been seldom used, if at all.[6][8]
As in the U.S., National Hockey League games that are not scheduled as national telecasts by Sportsnet or TVA Sports are broadcast by regional feeds of either Sportsnet, TSN, or RDS (French), and are blacked out for viewers outside the team's home market. Sportsnet's regional feeds correspond with each of its NHL teams' designated markets; the Ontario and Pacific feeds are designated to the Toronto Maple Leafs and Vancouver Canucks respectively, while Sportsnet West and its corresponding market (which includes all of Alberta and Saskatchewan) is shared by the Edmonton Oilers and Calgary Flames. Although West is also the main feed for Manitoba, Flames and Oilers games are blacked out there to protect the Winnipeg Jets. As of August 2014, TSN is similarly structured, with the Ottawa Senators on TSN5 (East), Maple Leafs on TSN4 (Ontario), and Jets on TSN3 (Manitoba and Saskatchewan). The Montreal Canadiens were added in 2017 on TSN2 (which was originally promoted as being a secondary national channel).[9] The Canadiens and Senators share the same market, which includes parts of Eastern Ontario (primarily the Ottawa Valley) and the entirety of Quebec and Atlantic Canada, while Saskatchewan is shared by the Jets, Flames, and Oilers.[10][11]
Until the 2014–15 season, all French-language broadcasts of the Montreal Canadiens were available nationally on RDS, which was previously the national French-language rightsholder of the NHL in Canada. As RDS was, until 2011, the only French-language cable sports channel in Canada,[12] the team forwent a separate regional rights deal and allowed all of its games to be broadcast as part of the national package. As of the 2014–15 season, Quebecor Media's TVA Sports is the national French rightsholder as part of a sub-licensing agreement with Rogers Communications.[13][14][15] RDS negotiated a 12-year deal with the team for regional rights to the Canadiens: games are now blacked out for viewers outside Quebec, Atlantic Canada, and parts of Eastern Ontario.[11][16]
Out-of-market games can be viewed using the subscription-based NHL Centre Ice and Sportsnet+; in-market games are blacked out from Centre Ice to protect local broadcasters,[17][18][19] but Sportsnet+ does not black out in-market broadcasts of games televised by Sportsnet, since it is a direct-to-consumer version of the Sportsnet channels themselves.[20][21]
Many programs carried on Internet television in other parts of the world are not available in Canada because the major broadcast networks in Canada secure exclusive rights to them and prevent Internet television aggregators (one notable example being Hulu) from distributing them in Canada. The National Football League, for example, sold worldwide Internet broadcast rights to a package of its Thursday Night Football games during the 2016 season to Twitter; however, Rogers Media forced Twitter to block the streams in Canada by virtue of its holding of terrestrial television rights in the country.[22] Numerous organizations have attempted to establish workarounds that route Canadians' Internet traffic through the United States; local broadcasters have opposed these workarounds, with one, Bell Media, calling such practices "stealing",[23] and aggregators such as Netflix have actively fought against them.[24]
Indian law requires all sporting events of "national importance" whose broadcast rights are owned by a pay television service to be simulcast by the state broadcaster Doordarshan (DD) on its DD National TV channel. Tata Sky (which is partially owned by the parent company of Star India, owner of the Star Sports networks) filed a lawsuit over the rule, arguing that these simulcasts devalued the exclusive broadcast rights because DD National is a must-carry channel. In 2017, the Supreme Court of India ruled that pay television services must black out DD National when it is airing such events in order to protect the pay TV broadcaster, restricting availability of DD's simulcasts of such events to terrestrial television and DD Free Dish.[25][26]
UEFA Article 48.2 and the major association football leagues of the United Kingdom enforce a blackout on all television broadcasts of football between 2:45 p.m. and 5:15 p.m. on Saturday matchdays. This applies to all matches, regardless of whether they are part of a domestic or international competition. A match which kicks off within the window may be joined in progress once the blackout window ends.[27][28]
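The Saturday window amounts to a simple time check a broadcaster's scheduling system could apply. The sketch below is purely illustrative (the function and constants are not from any UEFA or FA system); it only encodes the rule described above, including joining an in-window kickoff in progress once the window ends.

```python
from datetime import time

# UK domestic blackout window on Saturday matchdays (illustrative constants).
BLACKOUT_START = time(14, 45)  # 2:45 p.m.
BLACKOUT_END = time(17, 15)    # 5:15 p.m.

def may_broadcast(now: time, is_saturday_matchday: bool) -> bool:
    """Return True if a domestic telecast is permitted at local time `now`.

    On Saturday matchdays, broadcasts are barred only inside the window;
    a match that kicked off inside it may be joined in progress as soon
    as this function starts returning True again (from 5:15 p.m.).
    """
    if not is_saturday_matchday:
        return True
    return not (BLACKOUT_START <= now < BLACKOUT_END)
```

Note that the rule restricts the broadcast time, not the kickoff time, which is why coverage can legally begin mid-match at 5:15 p.m.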
This policy is ostensibly intended to encourage fans to attend football matches in person, especially in lower divisions that compete with top-flight matches on television. The practice originated in the 1960s; Burnley chairman Bob Lord was opposed to television broadcasts of football matches, going as far as banning the BBC from televising Match of the Day from Turf Moor for a time. He pushed the Football League to adopt this stance as an organization-wide policy; it has since been adopted by The Football Association and the current Premier League, which broke away from the Football League in 1992 to become the highest level of club football in England.[29][30][31]
Affected matches can still be broadcast internationally; hence, more Premier League matches can be shown outside the United Kingdom by other rightsholders than within it. This intricacy created a "grey market" for obtaining the broadcasts from alternative sources, such as foreign satellite providers or unofficial online streaming services. The Premier League and other stakeholders have historically considered this practice to be a violation of the copyright of the broadcasts. In 2014, for taking inadequate steps to prevent unauthorized online retransmissions of its streaming broadcasts, the Premier League briefly restricted MENA region rightsholder beIN Sports to one 3 p.m. match per week, on television only.[29][30][31]
Critics, including Advocate General at the Court of Justice of the European Union Juliane Kokott, have argued that 3 p.m. blackouts are outdated, as their purpose is hindered — especially within the Premier League — by the high demand for the few tickets available to the public, and that there was little evidence that television broadcasts actually affected attendance.[32][33][31][34][35] To preserve the value of its domestic broadcast rights and allow more games to be televised, the Premier League has added more matches in windows outside of Saturday afternoons, such as weekdays and Sundays — including the final matchday of the season.[34][35]
In 2018, after complying by blacking out the first 15 minutes of a Serie A match that saw Cristiano Ronaldo's on-field debut for Juventus, streaming service Eleven Sports UK & Ireland began to defy the ban and show selected Serie A and La Liga matches during this period. On 17 October 2018, Eleven announced that it would cease its telecasts of 3 p.m. kickoffs, but argued that the rule was outdated because only the UK and Montenegro have such blackout rules, and that the blackout period encourages illegal streaming. A representative of La Liga backed Eleven Sports' position.[36][37][38]
In April 2020, due to the COVID-19 pandemic, UEFA authorised the suspension of the blackout rule for the remainder of the season.[39][40] Upon the resumption of the 2019–20 Premier League, all matches were shown on domestic television because they were played behind closed doors, and a number of free-to-air broadcasts (via Sky Sports' sister channels Pick and Sky One, Amazon Prime Video and its sister service Twitch, and the BBC, which usually holds rights to free-to-air highlights programmes) were also aired.[41][42][43] This arrangement continued into the first month of the 2020–21 Premier League.[44][45] After an attempted pay-per-view scheme folded in November 2020, the Premier League returned to allocating the matches to the four broadcasters through at least the end of 2020.[46][47][48][49]
In 2023, the Premier League sought a rare private prosecution against members of a fraud "gang" who sold £10-a-month subscriptions to retransmitted games. The illegal streams brought in more than £7m in revenue from more than 50,000 subscribers, with five members receiving jail sentences between three and eleven years.[50]
Major League Baseball and the National Hockey League have very similar blackout rules. Unlike in the National Football League, the blackout of games has nothing to do with attendance; instead, it is implemented to protect broadcasters with contracts to air games. Unless one of MLB's national partners holds exclusive rights to a certain regular season game (such as ESPN's Sunday Night Baseball or Apple TV+'s Friday Night Baseball), the local broadcaster of a game has priority over a national broadcaster, and the national broadcast is blacked out in markets where a local broadcaster is also showing coverage.[51][52] The blackout rules do not apply during the postseason, as there are no regional television broadcasts.
The NHL utilizes a similar policy of exclusive and non-exclusive national games; under the new broadcast deals enacted with the 2021–22 season, all regular season games carried by ABC, ESPN, and ESPN+ are exclusive national broadcasts. All TNT games were exclusive national broadcasts during the 2021–22 season, but became subject to blackouts the following season.[53] In some cases, national games are scheduled in windows where no other games involving U.S. teams are being played. NHL Network still carries non-exclusive national games, most of which are simulcast from one of the regional broadcasts or a Canadian national broadcast.[54] All games in the first round of the Stanley Cup playoffs are non-exclusive national games (though with no blackouts of the national broadcaster), after which they are exclusive to ESPN, TNT, or TBS.[55][56]
Out-of-market games can be viewed using the subscription-based MLB Extra Innings, MLB.tv, and NHL Center Ice services, as well as ESPN+ for the NHL. In-market games are blacked out from all four services to protect local broadcasters, and the services do not offer nationally televised games (except for NHL games exclusively carried or simulcast by ESPN+).
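The in-market blackout logic these packages apply reduces to a lookup: map the subscriber's location to the set of designated team markets it falls inside, then withhold any game involving one of those teams, plus any nationally exclusive game. The sketch below is a minimal illustration only; the team names, market regions, and function are hypothetical, not any league's actual data or implementation.

```python
# Hypothetical market definitions: each team mapped to the regions
# that make up its designated (in-market) territory.
TEAM_MARKETS = {
    "Maple Leafs": {"Ontario"},
    "Canadiens": {"Quebec", "Atlantic", "Eastern Ontario"},
}

def is_blacked_out(subscriber_region: str, home_team: str, away_team: str,
                   national_exclusive: bool = False) -> bool:
    """Return True if an out-of-market package must hide this game.

    Nationally exclusive games are withheld everywhere; otherwise the
    game is blacked out only inside either participating team's
    designated market, where a local broadcaster holds the rights.
    """
    if national_exclusive:
        return True
    for team in (home_team, away_team):
        if subscriber_region in TEAM_MARKETS.get(team, set()):
            return True
    return False
```

Real services resolve the subscriber's region from a billing ZIP/postal code or IP geolocation, but the decision itself follows this shape.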
In Major League Baseball, there are no radio blackouts. However, for many years the local radio networks of the two participating ballclubs in the World Series were not allowed to air games, forcing flagship stations, if they wanted to carry the Series, to simulcast the network broadcast. For example, while Boston Red Sox radio flagship WHDH and St. Louis Cardinals flagship station KMOX both broadcast the 1967 World Series, both stations had to simulcast the NBC Radio broadcast along with Boston's WCOP and St. Louis's KSD, the nominal NBC Radio affiliates in those cities.
This changed after 1980, when fans of the Philadelphia Phillies were angry that they could not hear their popular broadcasting team of Harry Kalas and Richie Ashburn call the team's appearance in that year's World Series. Their complaints led to a provision in Major League Baseball's next broadcasting contract permitting the radio flagships of the participating ballclubs to produce and air their own Series broadcasts locally.[57] Since then, only the flagship stations of the two participating ballclubs can originate coverage (though their broadcasts, as well as the national English and Spanish broadcasts, are also available out of market via subscription-based packages on platforms such as MLB.com, Sirius XM, and TuneIn). Flagship stations are required to mention the presenting sponsor of the national ESPN Radio broadcasts as also sponsoring the team's own broadcasts during the World Series (as of 2016, AutoZone). All other network affiliates of the two clubs must carry the feed from MLB's national partner (currently ESPN Radio). Should another ESPN Radio affiliate exist in the same market, that station can claim exclusivity, forcing a blackout of the team network affiliate from carrying the game, although this is rarely done, as listener pushback against the ESPN Radio affiliate blocking the local play-by-play would likely be untenable. For instance, in 2016, ESPN Radio O&O WMVP in Chicago broadcast the national ESPN feed as expected, but made no move to block the official Cubs broadcaster WSCR from carrying local play-by-play, to the point of only mentioning its national coverage through promos in national ESPN Radio programming.
Additionally, radio stations (including flagships) may not include MLB games in the live Internet streams of their station programming. MLB itself offers radio feeds as a pay service via the league and team websites, along with being part of the monthly premium fee service from streaming provider TuneIn. Some stations will simply stream the station's regularly scheduled programming that is being pre-empted by the game.
The NHL has no radio blackouts for local broadcasts, although NBC Sports Radio broadcasts are, similarly to some cable broadcasts, not carried within the local markets of participating teams. Internet streaming of radio calls from the NHL's team radio networks, unlike in MLB, is allowed for free nationwide with no geoblocking. Also, unlike in other leagues, the Stanley Cup Finals (should a team make it to that point in the playoffs) can be carried on all affiliates of that team's radio network with no restrictions.
Prior to the 1998–99 NBA lockout, the NBA and the WNBA blacked out nationally televised games on cable television within 35 miles (56 km) of the home team's market; these blackouts are now restricted to games on NBA TV, WatchESPN and other streaming providers.
The NFL has engaged in various blackout policies to protect both local ticket sales and local rightsholders of specific games.
In the NFL, any broadcaster whose signal reaches any area within a 75-mile (121 km) radius of an NFL stadium may only broadcast a game if that game is a road game (also known as an away game), or if the game sells out 72 hours or more before its start time.[58][59] If the game sells out in less than 72 hours, or is close to being sold out by the deadline, the team can sometimes request a time extension. Furthermore, broadcasters with NFL contracts are required to show their markets' road games, even if the secondary markets have substantial fanbases for other teams (like in Harrisburg, Pennsylvania, officially a Baltimore Ravens secondary market, but home to many Pittsburgh Steelers fans[citation needed]). Sometimes,[when?] if a game is within a few hundred tickets of selling out, a broadcaster[example needed] with rights to show the nearly sold-out game will buy the remaining tickets (and give them to local charities) so it can broadcast the game. Other teams elect to close off sections of their stadium, but cannot then sell those tickets for any game that season.[60] As a result, if the home team's game is a Sunday day game, both networks can air only one game each in that market (until 2000, this rule applied whether or not the game was blacked out; however, this was changed because some markets virtually never aired doubleheaders as a result). Usually, but not always, when each network can show only one game each in a market, the two stations work out between themselves which will show an early game and which will show a late game. This only affects the primary market, and not other markets within a 75-mile (121 km) radius, which always get a doubleheader each Sunday. For the NFL International Series, the network broadcasting an International Series game will not have the game blacked out in the teams' markets, as the game is played outside of the United States; however, some blackout regulations do apply.
There have been two exceptions to the rule, of which one has never been implemented and the other no longer applies. The first is for the Green Bay Packers, which have two overlapping 75-mile blackout zones – one surrounding the team's stadium in Green Bay and another surrounding Milwaukee. The team's radio flagship station is in Milwaukee, and the Packers played part of their home schedule in Milwaukee from 1953 through 1994. However, this policy has never been implemented in the Packers' case, as they have sold out every home game in Green Bay since 1960 and have a decades-long season-ticket waiting list (games in Milwaukee also sold out during this period). The second exception was for the Bills Toronto Series; by a technicality, Rogers Communications (the team's lessee) owned all tickets to those games and resold them to potential fans. Even when Rogers failed to sell all of the tickets, the games were still technically defined as sellouts by the league, since Rogers was said to have "bought" the tickets. The technicality came into play for both Toronto Series preseason games, and again for the last two regular season games of the series.[61][62] The Bills Toronto Series was cancelled after the 2013 season, largely due to the aforementioned lackluster attendance.
In June 2012, NFL blackout regulations were revised so that, for the first time in NFL history, home games would no longer require a total sellout to be televised locally; instead, teams would be allowed to set a benchmark anywhere from 85 to 100 percent of the stadium's non-premium seats. Any seats sold beyond that benchmark are subject to heavier revenue sharing with the league.[63] Four teams (the Buffalo Bills, Cleveland Browns, Indianapolis Colts and San Diego Chargers) opted out of the new rules, as they would require the teams to pay a higher percentage of gate fees to the NFL's revenue fund.[64] In the 2013 NFL season, the Oakland Raiders began to artificially limit the capacity of Oakland Coliseum by 11,000 seats in order to improve their chances of meeting the 85% threshold; the seats comprised sections of "Mount Davis", an extended upper deck originally built as part of the Raiders' 1995 return to Oakland. Under NFL rules, the stadium had to remain in this configuration for the entirety of the season.[65]
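Under the revised regulations, the televising decision reduces to a threshold check against the team's chosen benchmark. The sketch below only illustrates that arithmetic; the function name and inputs are assumptions for illustration, not the NFL's actual formula.

```python
def may_air_locally(seats_sold_by_deadline: int, non_premium_seats: int,
                    benchmark: float = 0.85) -> bool:
    """Sketch of the post-2012 rule: a home game may be televised in the
    local market if non-premium ticket sales, measured 72 hours before
    kickoff, reach the team's chosen benchmark (85-100% of capacity).
    """
    if not 0.85 <= benchmark <= 1.0:
        raise ValueError("benchmark must be between 85% and 100%")
    return seats_sold_by_deadline >= benchmark * non_premium_seats
```

Under the pre-2012 rule the benchmark was effectively 1.0 (a total sellout); teams adopting a lower threshold traded it for heavier revenue sharing on seats sold above the benchmark.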
In the 2015 NFL season, after no games were blacked out at all in the 2014 season, the league voted to "suspend" the blackout policy as an experiment.[66] The suspension continued into the 2016 season (a season that included the return of the Rams to the Los Angeles Memorial Coliseum as an interim home until the completion of SoFi Stadium; the Coliseum has had long-standing issues with NFL sell-outs); commissioner Roger Goodell stated that the league needed to further investigate the impact of removing the blackout rules before such a change was made permanent.[67] The suspension quietly continued into the 2017 NFL season as well, which saw the San Diego Chargers also relocate to Los Angeles, temporarily using the 27,000-seat, soccer-specific Dignity Health Sports Park (known as StubHub Center before 2019) as an interim venue until the completion of SoFi Stadium, shared with the Rams, for the 2020 season.[68]
The suspension came a year after the Federal Communications Commission (FCC) ended a policy that formally forbade multichannel television providers from distributing telecasts of sporting events that had been blacked out by local broadcast television stations. Then-FCC chairman Tom Wheeler considered such policies to be "obsolete".[69] The policies are still enforced via contractual agreements between the NFL and its media partners.[70][71][72]
Per NFL policies, all games that are exclusively televised on pay television or streaming, including ESPN's Monday Night Football and Amazon Prime Video's Thursday Night Football, are syndicated to over-the-air broadcasters in the markets of the teams involved, and blacked out on the cable channel in defense of the local simulcast. The local market for these rights is defined as any station within a 75-mile (121 km) radius of a team's respective stadium. When this happens, the affected cable network goes dark in the region, with cable operators choosing either to leave the space blank for the duration of the game or to replace it with a relay of another station.
This policy attracted controversy in December 2007, when Hartford, Connecticut CBS affiliate WFSB was refused permission to air the local simulcast of a New England Patriots–New York Giants game on December 29, 2007. The game, part of the Thursday Night Football package on NFL Network, would see the Patriots attempt to become the first NFL team since 1972, and the first since the expansion of the regular season to 16 games, to finish the regular season undefeated. At the time, NFL Network was available only on a sports tier of cable provider Comcast in the immediate viewing areas of the Patriots and Giants.[73] Senator John Kerry and Rep. Ed Markey, both of Massachusetts and fans of the Patriots, wrote to the NFL as well as Comcast and Time Warner Cable to request that the Patriots–Giants game be aired at least on basic cable in order to reach the highest possible number of television-viewing fans, citing the "potentially historic" nature of the game.[74] Kerry clarified the next week that he did not intend to interrupt current negotiations between the cable operators and the NFL.[75]
On December 19, 2007, Joe Courtney and other members of the Connecticut Congressional Delegation wrote to NFL commissioner Roger Goodell to try to have the NFL allow wider broadcast access to the game.[73] Consequently, on December 26, the NFL announced that the game would be simulcast nationally on CBS and NBC, in addition to WCVB-TV (ABC) in Boston and WWOR-TV (MyNetworkTV) in Secaucus, New Jersey (part of the New York City media market), which had both acquired the local rights to the game.[76]
Although NFL Network would later become more established, in 2014 the NFL began to sub-license the rights to produce the Thursday Night Football telecasts, and to air selected games from the package in simulcast with NFL Network, to a broadcast television rightsholder (initially CBS). This was part of a move to help heighten the profile of the fledgling Thursday night games.[77][78]
For radio broadcasts, the NFL follows a nearly identical policy to MLB's. There are no radio blackouts, but only each team's flagship station can carry local broadcasts during the conference championships or Super Bowl; all other markets must carry the NFL on Westwood One feed for those games. For all other weeks, within 75 miles of a team's stadium, only stations that the team or its flagship station contracts with can carry those games, regardless of whether the team is home or away. Thus, any competing station that carries Westwood One broadcasts cannot air those games. Like MLB, the NFL makes local broadcasts (except for those of the Tennessee Titans) available on the NFL's Game Pass service and Sirius Satellite Radio; as a result, radio stations that carry NFL games from any source and stream on the Internet are prohibited from streaming games online outside of their DMA, although this provision seems to be loosely enforced in some cases: WBBM in Chicago and WWL (AM) in New Orleans regularly air live broadcasts of their teams' games over their Internet streams, as does WTMJ in Milwaukee with the Packers, though both stations went to a desktop-only streaming policy in 2015 due to the introduction of Game Pass and the absorption of the NFL Audio Pass streaming system into Game Pass. Since the 2022–23 season, WXTB (the Buccaneers' flagship station) has blacked out coverage on all devices outside the station's coverage area, likely due to the launch of NFL+.
In order to protect high school and college football, the federal Sports Broadcasting Act of 1961 cancels antitrust protection for television broadcasts of any professional football game on Friday evenings or Saturdays by television stations within 75 miles (121 km) of the venue of a college or high school game that had been announced in a general-circulation newspaper prior to August 1 of the calendar year. This restriction lasts from the second Friday in September through the second Friday in December.[79][80]
To comply with this law, the NFL largely avoids scheduling games on Saturdays until the final weeks of the regular season (which begin in mid-December), which usually feature several Saturday double- or triple-headers.[81][79][82][83] A notable effect of this law occurred in the 2004 NFL season, when a Tennessee Titans–Miami Dolphins game in week 1 was moved up to Saturday, September 11, due to Hurricane Ivan; presumably to comply with the Act, the game was only broadcast locally and blacked out on NFL Sunday Ticket.[79]
To encourage local attendance, the live television broadcast of the Indianapolis 500 is blacked out on the Indianapolis affiliate of its broadcaster if the race is not a sellout. Since 1992, the station that airs the race in the Indianapolis market (ABC affiliate WRTV from 1986 to 2018, NBC affiliate WTHR from 2019 to 2024, and Fox affiliate WXIN from 2025 onwards) airs the race on tape delay in prime time, and carries the network's prime time programming in the race's timeslot under special dispensation from the network.[84] Prior to this, from 1986 to 1991, local and syndicated Sunday programming would continue to air in their regular timeslots.
The blackout has only been lifted five times since live flag-to-flag coverage of the 500 officially began in 1986:
Prior to 1986, ABC had aired an edited broadcast of the race in prime time.[88][89]
Until 2001, the same blackout policy applied to the Brickyard 400, a NASCAR Cup Series event also held at Indianapolis Motor Speedway; at the time, television rights to NASCAR events were sold by the owners of their respective tracks,[90] and IMS had packaged the 400 with ABC's rights to the Indianapolis 500.[91] This policy ended in 2001, when NASCAR centralized the television rights to all events and sold them in two packages, to Fox Sports and NBC/TNT respectively.[90]
A 1963 episode of the CBS television drama series East Side/West Side, focusing on an African-American couple in Harlem, was blacked out by network affiliates in Shreveport, Louisiana (KSLA) and Atlanta, Georgia (WAGA-TV).[92]
|
https://en.wikipedia.org/wiki/Blackout_(broadcasting)
|
The blocking of YouTube videos in Germany was part of a former dispute between the video sharing platform YouTube and the Gesellschaft für musikalische Aufführungs- und mechanische Vervielfältigungsrechte (GEMA, or "Society for Musical Performance and Mechanical Reproduction Rights" in English), a performance rights organization in Germany.
According to a German court in Hamburg, Google's subsidiary YouTube could be held liable for damages when it hosts copyrighted videos without the copyright holder's permission.[1] As a result, music videos for major-label artists on YouTube, as well as many videos containing background music, were geoblocked in Germany from the end of March 2009, after the previous agreement had expired and negotiations for a new license agreement were stopped. On 30 June 2015, Google won a partial victory against GEMA in a state court in Munich, which ruled that it could not be held liable for such damages.[2]
In July 2015, the higher regional court of Hamburg also rejected GEMA's claim for €1.6 million in damages.[3]
In 2016, YouTube and GEMA, which represents 70,000 composers and publishers, reached a settlement agreement. The settlement sum is unknown.
According to Google, GEMA sought to raise the fee it charged YouTube to a "prohibitive" 12 eurocents per streamed video; GEMA spokesperson Bettina Müller disputed this claim, stating that their proposal was only 1 eurocent, plus a breakdown by composer.[4][5][6] The issue was set to be taken up by a California court.[7] Google, the world's biggest Internet search engine company, partly lost a German copyright infringement suit over how much it must do to remove illegal music videos from its YouTube website.[8]
A study sponsored by the video hosting website MyVideo estimated that 61.5% of the 1,000 most-viewed YouTube clips were blocked in Germany, significantly higher than in, for example, the United States (0.9%) or Switzerland (1.2%).[9]
Another study found that around 3% of all YouTube videos, and 10% of those videos with over a million views, are blocked in Germany.[10]
Sony Music's CEO of international business, Edgar Berger, said in an interview in February 2012 that the Internet is a blessing for the music industry. Nevertheless, he said, problems remain to be overcome, such as restrictive copyright enforcement by music rights collecting agencies. Berger claimed that YouTube revenue running into the millions was being lost because GEMA's policies prevented artists' videos from being shown online in the country.[11]
Conversely, it can be questioned how much of this lost revenue would have actually benefitted GEMA members, given that licensing agreements in other territories are subject to a confidentiality agreement that prevents even the membership of the collecting societies from knowing the royalty rates.[12]
An academic study by Tobias Kretschmer and Christian Peukert published in 2020 shows that the blocking of music videos decreased recorded music sales in Germany by about 5%–10%. The effect is much stronger (more negative) for newcomer artists, and less strong (less negative) for mainstream artists. Also, German artists suffered relatively less from the YouTube blackout and gained market share as a result.[13]
GEMA's stance has elicited considerable criticism from Google and foreign record companies.
Edgar Berger, CEO of Sony Music Entertainment in Munich, told Billboard: "I suspect that some members of GEMA's supervisory board have not yet arrived in the digital era. We want to see streaming services like VEVO and Spotify in the German market. Spotify must not be blocked by GEMA any longer. Artists and music companies are losing sales in the millions".[14]
Google spokesman Kay Oberbeck told Billboard in Hamburg that YouTube had entered into 20 agreements with collection societies from 33 countries. "We therefore regret all the more that GEMA has decided to commence legal proceedings against us despite the promising talks which we have held, thus removing the basis for conducting any further negotiations in a spirit of mutual trust. A solution can only be found at the negotiating table without any legal proceedings. We are prepared to resume negotiations at any time."[14]
Frank Briegmann, President of Universal Music Germany, has described Germany as "a developing country in the digital music market. GEMA apparently has not yet understood the new developments in the international music market".[15]
A common way of viewing blocked videos in Germany is to use browser add-ons that fake a foreign IP address, which are available for all common browsers and Spotify; in some cases these add-ons even come prebundled with the browser setup. Another way is to go through a foreign proxy or VPN server. Although intellectual property rights in music are licensed by territory, employing such methods to circumvent local restrictions is legal.[citation needed]
On 31 October 2016, GEMA released a press statement stating that YouTube will pay GEMA for video views of GEMA-protected artists. No further details regarding payment were disclosed.[16]
|
https://en.wikipedia.org/wiki/Blocking_of_YouTube_videos_in_Germany
|
In computer networking, 451 Unavailable For Legal Reasons is an HTTP status code used when the user requests a resource which cannot be served for legal reasons, such as a web page censored by a government. The number 451 is a reference to Ray Bradbury's 1953 dystopian novel Fahrenheit 451, in which books are outlawed.[2] 451 provides more information than HTTP 403, which is often used for the same purpose.[3] This status code is currently a proposed standard in RFC 7725, which updated the IANA HTTP Status Codes Registry to include 451.[4]
Examples of situations where an HTTP 451 error code could be displayed include web pages deemed a danger to national security, or web pages deemed to violate copyright, privacy,blasphemy laws, or any other law or court order.
After the introduction of the GDPR in the EEA, it became common practice for websites located outside the EEA to serve HTTP 451 errors to EEA visitors instead of trying to comply with the new privacy law. For instance, many regional U.S. news sites no longer serve web browsers from the EU.[5][6][7]
The RFC is specific that a 451 response does not indicate whether the resource exists but requests for it have been blocked, whether the resource has been removed for legal reasons and no longer exists, or even whether the resource never existed but any discussion of its topic has been legally forbidden (see injunction).[8] Some sites have previously returned HTTP 404 (Not Found) or similar if they are not legally permitted to disclose that the resource has been removed. This tactic is used in the United Kingdom by some Internet service providers utilising the Internet Watch Foundation blacklist, which return a 404 message or another error message instead of a message indicating the site is blocked.[9][10]
The status code was formally proposed in 2013 by Tim Bray, following earlier informal proposals by Chris Applegate[11] in 2008 and Terence Eden[12] in 2012. It was approved by the IETF on 18 December 2015[13] and published as the Proposed Standard RFC 7725 in February 2016.
HTTP 451 was mentioned by the BBC's From Our Own Correspondent programme as an indication of the effects of sanctions on Sudan and the resulting inability to access Airbnb, the App Store, or other Western web services.[14]
When an entity intercepts the request and returns status 451, it should include a "Link" HTTP header field whose value is a URI reference identifying itself. The "Link" header field must then have a "rel" parameter whose value is "blocked-by". This is intended to identify the entity implementing the blocking (an ISP, DNS provider, caching system, etc.), not the legal authority mandating the block.[15]At an IETF hackathon, participants used a web crawler to discover that several implementations misunderstood this header and gave the legal authority instead.[16]
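A minimal 451 response carrying the RFC 7725 "blocked-by" link can be sketched as follows; the blocking-entity URI is a placeholder, and the helper function is illustrative rather than part of any real server framework.

```python
def make_451_response(blocking_entity_uri: str) -> bytes:
    """Build a minimal HTTP/1.1 451 response per RFC 7725.

    The Link header identifies the entity implementing the block
    (rel="blocked-by"), not the legal authority demanding it.
    """
    body = b"<html><body>Unavailable For Legal Reasons</body></html>"
    headers = (
        "HTTP/1.1 451 Unavailable For Legal Reasons\r\n"
        f'Link: <{blocking_entity_uri}>; rel="blocked-by"\r\n'
        "Content-Type: text/html\r\n"
        f"Content-Length: {len(body)}\r\n"
        "\r\n"
    )
    return headers.encode("ascii") + body

resp = make_451_response("https://isp.example")
assert resp.startswith(b"HTTP/1.1 451")
assert b'rel="blocked-by"' in resp
```

Note that the sketch deliberately says nothing about whether the resource exists, matching the RFC's intent that a 451 reveal only that access is legally restricted.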
The meaning of "a resource which cannot be served for legal reasons" has been interpreted to extend beyond government censorship.
Media related to HTTP 451 at Wikimedia Commons
|
https://en.wikipedia.org/wiki/HTTP_451
|
In cryptography, black-box obfuscation was a proposed cryptographic primitive which would allow a computer program to be obfuscated in a way such that it was impossible to determine anything about it except its input and output behavior.[1] Black-box obfuscation has been proven to be impossible, even in principle.[2]
Barak et al. constructed a family of unobfuscatable programs, for which an efficient attacker can always learn more from any obfuscated code than from black-box access.[2][3]
Broadly, they start by engineering a special pair of programs that cannot be obfuscated together. For some randomly selected strings $\alpha, \beta$ of a fixed, pre-determined length $k$, define one program to be one that computes

$$C_{\alpha,\beta}(x) := \begin{cases} \beta & \text{if } x = \alpha \\ 0 & \text{otherwise} \end{cases}$$

and the other to be one that computes

$$D_{\alpha,\beta}(X) := \begin{cases} 1 & \text{if } X(\alpha) = \beta \text{ and } X \text{ runs in time} \leq \mathrm{poly}(k) \\ 0 & \text{otherwise.} \end{cases}$$

(Here, $D_{\alpha,\beta}$ interprets its input $X$ as the code for a Turing machine. The second condition in the definition of $D_{\alpha,\beta}$ is to prevent the function from being uncomputable.)

If an efficient attacker only has black-box access, Barak et al. argued, then the attacker has only an exponentially small chance of guessing the password $\alpha$, and so cannot distinguish the pair of programs from a pair where $C_{\alpha,\beta}$ is replaced by some program $Z$ that always outputs 0. However, if the attacker has access to any obfuscated implementations $C'_{\alpha,\beta}, D'_{\alpha,\beta}$ of $C_{\alpha,\beta}, D_{\alpha,\beta}$, then the attacker will find $D'_{\alpha,\beta}(C'_{\alpha,\beta}) = 1$ with probability 1, whereas the attacker will always find $D'_{\alpha,\beta}(Z) = 0$ unless $\beta = 0$ (which should happen only with negligible probability). This means that the attacker can always distinguish the pair $(C'_{\alpha,\beta}, D'_{\alpha,\beta})$ from the pair $(Z, D'_{\alpha,\beta})$ with obfuscated-code access, but not with black-box access. Since no obfuscator can prevent this attack, Barak et al. conclude that no black-box obfuscator for pairs of programs exists.[2][3]

To conclude the argument, Barak et al. define a third program that implements the functionality of the previous two:

$$F_{\alpha,\beta}(b,x) := \begin{cases} C_{\alpha,\beta}(x) & \text{if } b = 0 \\ D_{\alpha,\beta}(x) & \text{if } b = 1. \end{cases}$$

Since equivalently efficient implementations of $C_{\alpha,\beta}, D_{\alpha,\beta}$ can be recovered from one of $F_{\alpha,\beta}$ by hardwiring the value of $b$, Barak et al. conclude that $F_{\alpha,\beta}$ cannot be obfuscated either, which concludes their argument.[2]
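The counterexample pair can be sketched in ordinary Python, with callables standing in for Turing-machine code and the poly(k) running-time bound from the paper omitted; all names here are illustrative.

```python
import secrets

K = 16  # security parameter (byte length of the secret strings), illustrative


def make_pair():
    """Build the Barak et al. counterexample pair (C, D) for random alpha, beta."""
    alpha = secrets.token_bytes(K)
    beta = secrets.token_bytes(K)

    def C(x):
        # C_{alpha,beta}: outputs beta on the secret input alpha, else 0.
        return beta if x == alpha else 0

    def D(X):
        # D_{alpha,beta}: checks whether the *program* X maps alpha to beta.
        # Here X is a Python callable standing in for Turing-machine code;
        # the poly(k) running-time condition is omitted in this sketch.
        try:
            return 1 if X(alpha) == beta else 0
        except Exception:
            return 0

    return C, D


C, D = make_pair()
Z = lambda x: 0  # the all-zero program

# Any functionally equivalent implementation of C is detected by D...
assert D(C) == 1
# ...while the all-zero program Z is not (beta is a random byte string,
# so it never equals 0):
assert D(Z) == 0
```

The point of the sketch is that D only needs to *run* its argument, so it distinguishes obfuscated code of C from Z regardless of how the code is scrambled, which is exactly what black-box access cannot do.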
In their paper, Barak et al. also prove several further impossibility results, conditional on appropriate cryptographic assumptions.[2]
In their original paper exploring black-box obfuscation, Barak et al. defined two weaker notions of cryptographic obfuscation which they did not rule out: indistinguishability obfuscation and extractability obfuscation (which they called "differing-inputs obfuscation"). Informally, an indistinguishability obfuscator should convert input programs with the same functionality into output programs such that the outputs cannot be efficiently related to the inputs by a bounded attacker, and an extractability obfuscator should be an obfuscator such that if the efficient attacker could relate the outputs to the inputs for any two programs, then the attacker could also produce an input on which the two programs being obfuscated produce different outputs. (Note that an extractability obfuscator is necessarily an indistinguishability obfuscator.)[2][4]
As of 2020, a candidate implementation of indistinguishability obfuscation is under investigation.[5] In 2013, Boyle et al. explored several candidate implementations of extractability obfuscation.[4]
|
https://en.wikipedia.org/wiki/Black-box_obfuscation
|
Conditional access (CA) is a term commonly used in relation to software and to digital television systems. Conditional access is an evaluation to ensure that the person seeking access to content is authorized to access that content. Access is managed by requiring certain criteria to be met before granting it.
Conditional access is a function that lets an organization manage people's access to software such as email, applications, and documents. It is usually offered as SaaS (software as a service) and deployed in organizations to keep company data safe. By setting conditions on access to this data, the organization has more control over who accesses the data, and where and how the information is accessed.
When setting up conditional access, access can be limited or prevented based on the policy defined by the system administrator. For example, a policy might require that access is available only from certain networks, or that access is blocked when a specific web browser is requesting it.
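A policy of the kind just described might be evaluated as in the following sketch; the networks, browser names, and function here are hypothetical illustrations, not any vendor's actual API.

```python
import ipaddress

# Hypothetical policy: allow access only from the corporate network,
# and block requests coming from a specific (e.g. unsupported) browser.
TRUSTED_NETWORKS = [ipaddress.ip_network("10.0.0.0/8")]
BLOCKED_BROWSERS = {"LegacyBrowser"}


def evaluate_policy(client_ip: str, browser: str) -> bool:
    """Return True if the request satisfies the conditional-access policy."""
    ip = ipaddress.ip_address(client_ip)
    on_trusted_network = any(ip in net for net in TRUSTED_NETWORKS)
    browser_allowed = browser not in BLOCKED_BROWSERS
    return on_trusted_network and browser_allowed


assert evaluate_policy("10.1.2.3", "ModernBrowser") is True
assert evaluate_policy("203.0.113.5", "ModernBrowser") is False  # off-network
assert evaluate_policy("10.1.2.3", "LegacyBrowser") is False     # blocked browser
```

Real products combine many more signals (device compliance, user group, risk score), but the shape is the same: each condition is checked and access is granted only when the policy as a whole is satisfied.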
Under the Digital Video Broadcasting (DVB) standard, conditional access system (CAS) standards are defined in the specification documents for DVB-CA (conditional access), DVB-CSA (the common scrambling algorithm) and DVB-CI (the Common Interface).[1] These standards define a method by which one can obfuscate a digital television stream, with access provided only to those with valid decryption smart cards. The DVB specifications for conditional access are available from the standards page on the DVB website.
This is achieved by a combination of scrambling and encryption. The data stream is scrambled with a 48-bit secret key, called the control word. Knowing the value of the control word at a given moment is of relatively little value, as under normal conditions content providers will change it several times per minute. The control word is generated automatically in such a way that successive values are not usually predictable; the DVB specification recommends using a physical process for that.
In order for the receiver to unscramble the data stream, it must be permanently informed of the current value of the control word. In practice, it must be informed slightly in advance, so that no viewing interruption occurs. Encryption is used to protect the control word during transmission to the receiver: the control word is encrypted as an entitlement control message (ECM). The CA subsystem in the receiver will decrypt the control word only when authorised to do so; that authority is sent to the receiver in the form of an entitlement management message (EMM). The EMMs are specific to each subscriber, as identified by the smart card in the receiver, or to groups of subscribers, and are issued much less frequently than ECMs, usually at monthly intervals. As this was apparently not sufficient to prevent unauthorized viewing, TPS lowered the interval to about 12 minutes. The interval can differ for every provider; BSkyB uses a term of six weeks. When Nagravision 2 was hacked, Digital+ started sending a new EMM every three days to make unauthorized viewing more cumbersome.
The contents of ECMs and EMMs are not standardized, and as such they depend on the conditional access system being used.[2]
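The control-word mechanism described above can be sketched as a toy model. To keep it self-contained, a SHA-256 counter construction stands in for DVB-CSA, the ECM is a bare XOR encryption of the control word under the service key, and EMM delivery of the service key to the subscriber's card is assumed to have already happened; none of this reflects the real algorithms.

```python
import hashlib
import os


def keystream(key: bytes, n: int) -> bytes:
    # Toy keystream (SHA-256 in counter mode); real DVB uses CSA, not this.
    out = b""
    ctr = 0
    while len(out) < n:
        out += hashlib.sha256(key + ctr.to_bytes(4, "big")).digest()
        ctr += 1
    return out[:n]


def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))


# Head-end side: pick a fresh 48-bit control word, scramble the payload
# with it, and send the CW encrypted under the service key as an ECM.
service_key = os.urandom(16)   # delivered to subscribers via EMMs (assumed)
control_word = os.urandom(6)   # 48-bit CW, rotated several times per minute
payload = b"transport stream packet payload"
scrambled = xor(payload, keystream(control_word, len(payload)))
ecm = xor(control_word, keystream(service_key, 6))  # toy ECM encryption

# Receiver side: an authorized card holds service_key, recovers the CW
# from the ECM, and descrambles the stream.
cw = xor(ecm, keystream(service_key, 6))
clear = xor(scrambled, keystream(cw, len(scrambled)))
assert clear == payload
```

The sketch shows why frequent control-word rotation matters: capturing one CW descrambles only a few seconds of content, so an attacker needs either the service key or a continuous feed of decrypted CWs.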
The control word can be transmitted through different ECMs at once. This allows the use of several conditional access systems at the same time, a DVB feature called simulcrypt, which saves bandwidth and encourages multiplex operators to cooperate. DVB Simulcrypt is widespread in Europe; some channels, like CNN International Europe on the Hot Bird satellites, can use seven different CA systems in parallel.
The decryption cards are read, and sometimes updated with specific access rights, either through a conditional-access module (CAM), a PC Card-format card reader meeting DVB-CI standards, or through a built-in ISO/IEC 7816 card reader, such as that in the Sky Digibox.
Several companies provide competing CA systems; ABV, VideoGuard, Irdeto, Nagravision, Conax, Viaccess, Synamedia and Mediaguard (a.k.a. SECA) are among the most commonly used.
Due to the common usage of CA in DVB systems, many tools exist to aid in or even directly circumvent encryption. CAM emulators and multiple-format CAMs exist which can either read several card formats or even directly decrypt a compromised encryption scheme. Most multiple-format CAMs, and all CAMs that directly decrypt a signal, are based on reverse engineering of the CA systems. A large proportion of the systems currently in use for DVB encryption have been opened to full decryption at some point, including Nagravision, Conax, Viaccess and Mediaguard (v1), as well as the first version of VideoGuard.
In Canada and the United States, the standard for conditional access is provided with CableCARDs, whose specification was developed by the cable company consortium CableLabs.
Cable companies in the United States are required by the Federal Communications Commission to support CableCARDs. Standards exist for two-way communication (M-card), but satellite television has separate standards. Next-generation approaches in the United States eschew such physical cards and employ schemes using downloadable software for conditional access, such as DCAS.
The main appeal of such approaches is that the access control may be upgraded dynamically in response to security breaches without requiring expensive exchanges of physical conditional-access modules.
Conditional access systems include BISS-E and many others.
Despite the Programming Center shutting down its consumer use of DigiCipher 2 (as 4DTV) on August 24, 2016, the system is still used by cable headends across the United States, as well as by Shaw Direct in Canada.
|
https://en.wikipedia.org/wiki/Conditional_access
|
Pirate decryption is the decryption, or decoding, of pay TV or pay radio signals without permission from the original broadcaster. The term "pirate" is used in the sense of copyright infringement. The MPAA and other groups which lobby in favour of intellectual property (specifically copyright and trademark) regulations have labelled such decryption as "signal theft"[1] and object to it, arguing that losing out on a potential chance to profit from a consumer's subscription fees counts as a loss of actual profit.
The concept of pay TV or pay television involves a broadcaster deliberately transmitting signals in a non-standard, scrambled or encrypted format in order to charge viewers a subscription fee for the use of a special decoder needed to receive the scrambled broadcast signal.[citation needed]
Early pay TV broadcasts in countries such as the United States used standard over-the-air transmitters; many restrictions applied, as anti-siphoning laws were enacted to prevent broadcasters of scrambled signals from engaging in activities that would harm the development of standard free-to-air commercial broadcasting. Scrambled signals were limited to large communities which already had a certain minimum number of unencrypted broadcast stations, and relegated to certain frequencies. Restrictions were placed on pay TV broadcasters' access to content such as recent feature films, in order to give free TV broadcasters a chance to air these programs before they were siphoned away by pay channels.
Under these conditions, the pay TV concept was very slow to become commercially viable; most television and radio broadcasts remained in the clear and were funded by commercial advertising, individual and corporate donations to educational broadcasters, direct funding by governments, or license fees charged to the owners of receiving apparatus (the BBC in the UK, for example).
Pay TV only began to become common after the widespread installation of cable television systems in the 1970s and 1980s; early premium channels were most often movie broadcasters such as the US-based Home Box Office and Cinemax, both currently owned by Warner Bros. Discovery. Signals were obtained for distribution by cable companies using C-band satellite dish antennae of up to ten feet in diameter; the first satellite signals were originally unencrypted, as extremely few individual end-users could afford the large and expensive satellite receiving apparatus.
As satellite dishes became smaller and more affordable, most satellite signal providers adopted various forms of encryption in order to limit reception to certain groups (such as hotels, cable companies, or paid subscribers) or to specific political regions. Early encryption attempts such as Videocipher II were common targets for pirate decryption, as dismayed viewers saw large amounts of formerly unencrypted programming vanishing. Nowadays some free-to-air satellite content in the USA still remains, but many of the channels still in the clear are ethnic channels, local over-the-air TV stations, international broadcasters, religious programming, backfeeds of network programming destined for local TV stations, or signals uplinked from mobile satellite trucks to provide live news and sports coverage.
Specialty channels and premium movie channels are most often encrypted; in most countries,[citation needed] broadcasts containing explicit pornography must be encrypted to prevent accidental viewing.
Initial attempts to encrypt broadcast signals were based on analogue techniques of questionable security, the most common being one of, or a combination of, several simple signal-manipulation techniques.
These systems were designed to provide decoders to cable operators at low cost; a serious tradeoff was made in security. Some analogue decoders were addressable so that cable companies could turn channels on or off remotely, but this only gave the cable companies control of their own descramblers — valuable if needed to deactivate a stolen cable company decoder but useless against hardware designed by signal pirates.
The first encryption methods used for big-dish satellite systems used a hybrid approach; analogue video and digital encrypted audio. This approach was somewhat more secure, but not completely free of problems due to piracy of video signals.
Digital TV services, by their nature, can more easily implement encryption technologies. When first introduced, digital DBS broadcasts were touted as being secure enough to put an end to piracy once and for all. Often these claims would be made in press releases.
The enthusiasm was short-lived. In theory the system was an ideal solution, but some corners had been cut in the initial implementations in the rush to launch the service. The first US DirecTV smart cards were based on the BSkyB VideoCrypt card known as the Sky 09 card. The Sky 09 card had been introduced in 1994 as a replacement for the compromised Sky 07 card, but by 1995 it, too, had been totally compromised in Europe. The countermeasure employed by NDS Group, the designers of the VideoCrypt system, was to issue a new smartcard (known as the Sky 10 card) that included an ASIC in addition to the card's microcontroller. This innovation made it harder for pirates to manufacture pirate VideoCrypt cards. Previously, the program in the Sky card's microcontroller could be rewritten for other microcontrollers without too much difficulty. The addition of an ASIC took the battle between the system designers and pirates to another level, and it bought BSkyB at least six months of almost piracy-free broadcasting before the pirate Sky 10 cards appeared on the market in 1996. Initial pirate Sky 10 cards had an implementation of this ASIC, but once supplies ran out, pirates resorted to extracting the ASICs from deactivated Sky cards and reusing them.
The first US DirecTV "F" card did not contain an ASIC, and it was quickly compromised. Pirate DirecTV cards based on microcontrollers that were often ironically more secure than those used in the official card became a major problem for DirecTV. Similar errors had been made by the developers of the UK's terrestrial digital Xtraview Encryption System, which provided no encryption and relied on hiding channels from listings.
The DirecTV "F" card was replaced with the "H" card, which contained an ASIC to handle decryption. However, due to similarities between the "H" and other existing cards, it became apparent that while the signal could not be received without the card and its ASIC, the card itself was vulnerable to tampering: it could be reprogrammed to add channel tiers or additional programming, allowing those channels to be viewed without paying.
Two more card swaps would be necessary before DirecTV could reduce piracy by a significant amount; a number of other providers are also in the middle of swapping out all of their subscribers' smartcards due to compromised encryption methods or technology.
A number of vulnerabilities exist even with digital encryption.
On May 15, 2008, a jury in the Echostar vs NDS civil lawsuit (8:2003cv00950) awarded Echostar just over US$1,500 in damages; Echostar originally sought $1 billion in damages from NDS. However, a jury was not convinced of the allegations Echostar had made against NDS and awarded damages only for the factual claims that were proven and for which the jury believed an award should be given in accordance with the laws of the United States.
In some cases, fraudulent cloning has been used to assign identical serial numbers to multiple receivers or cards; subscribe (or unsubscribe) one receiver, and the same programming changes appear on all of the others. Various techniques have also been used to provide write protection for memory on the smartcards or receivers, to make deactivation or sabotage of tampered cards by signal providers more difficult.
Systems based on removable smartcards do facilitate the implementation of renewable security, where compromised systems can be repaired by sending new and redesigned cards to legitimate subscribers, but they also make the task of replacing smartcards with tampered cards, or inserting devices between card and receiver, easier for pirates. In some European systems, the conditional-access module (CAM), which serves as a standardized interface between smartcard and DVB receiver, has also been targeted for tampering or replaced by third-party hardware.
Improvements in hardware and system design can be used to significantly reduce the risks of any encryption system being compromised, but many systems once thought secure have been proven vulnerable to sufficiently sophisticated and malicious attackers.
Two-way communication has also been used by designers of proprietary digital cable TV equipment in order to make tampering more difficult or easier to detect. A system involving the use of a high-pass filter on the line to prevent two-way communication has been widely promoted by some businesses as a means of disabling communication of billing information for pay-per-view programming, but this device is effectively worthless, as a cable operator remains free to unsubscribe a digital set-top box if two-way communication has been lost. As a device intended to pass signals in one direction only, the line filter offers nothing that couldn't be done (with the same results) by an inexpensive signal booster: a simple one-way RF amplifier widely and cheaply available for other purposes. Also, many such boxes will disallow access to pay-per-view content after a set number of programs are watched before the box can transmit this data to the headend, further reducing the usefulness of such a filter.
Some of the terminology used to describe various devices, programs and techniques dealing with pay-TV piracy is named for the particular hacks. The "Season" interface, for example, is named after the Season7 hack on Sky TV, which allowed a PC to emulate a legitimate Sky TV smartcard. The name referred to the seventh and final season of Star Trek: The Next Generation, which was then showing on Sky One. The "Phoenix" hack was named after the mythical bird which can reanimate itself; the hack itself reactivated smartcards that had been switched off by the providers.
Some of the terminology used on Internet discussion sites to describe the various devices, programs and techniques used in dealing with video piracy is strange, non-standard, or specific to one system. The terms are often no different from the brand names used by legitimate products and serve the same function.
Smart card piracy involves the unauthorised use of conditional-access smart cards in order to gain, and potentially provide to others, unauthorised access to pay TV or even private media broadcasts. Smart card piracy generally occurs after a breach of security in the smart card, exploited by computer hackers in order to gain complete access to the card's encryption system.
Once access has been gained to the smart card's encryption system, the hacker can make changes to the card's internal information, which in turn tricks the conditional-access system into believing that the card has been allowed access, by the legitimate card provider, to other television channels using the same encryption system. In some cases, the channels do not even have to be from the same television provider, since many providers use similar encryption systems, or use cards which have the capacity to store information for decoding those channels as well. The information on how to hack the card is normally held within small, underground groups, to which public access is not possible. Instead, the hacking groups may release their hack in several forms. One such way is simply to release the encryption algorithm and key. Another common release method is to publish a computer program which can be used by the smart card user to reprogram their card. Once complete, the now illegally modified smart card is known as a "MOSC" (modified original smart card). A third such method, more common in recent times, is to sell the information gained on the encryption to a third party, who will then release their own smart card, such as the K3 card. This third party, for legal reasons, will then use a fourth party to release encrypted files which allow the card to decode encrypted content.
Along with modifying original cards, it is possible to use the information provided by the smart card to create an encryption emulator. This, in turn, can be programmed into a cable or satellite receiver's internal software and offered for download on the Internet as a firmware upgrade. This allows access to the encrypted channels by those who do not even own a smart card. In recent times, many underground forum websites dedicated to the hobby of satellite piracy and encryption-emulating free-to-air (FTA) receivers have been set up, giving up-to-date information on satellite and cable piracy, including firmware downloads for receivers and very detailed encryption system information.
Upon gaining the knowledge that their system has been compromised, the smart card providers often have several counter measure systems against unauthorised viewing, which can be put in place over the air, in most cases causing virtually no disruption to legitimate viewers. One such measure isCI revocation. The simplest form of counter measure is a key change. This simply halts viewing for those viewing without authorisation temporarily, since the new key can easily be accessed in the hacked card, and implemented. There are often other more complicated procedures which update a part of the smart card in order to make it inaccessible. These procedures can also, however, be hacked, once again allowing access. This leads to a game of "cat and mouse" between the smart card provider, and the hackers. This, after several stages of progression, can leave the smart card provider in a situation where they no longer have any further counter measures to implement. This leaves them in a situation where they must perform a card and encryption change with all legitimate viewers, in order to eliminate the viewing of the service without permission, at least for the foreseeable future.
Such has been the success of implementing new smart card systems, that another form of smart card piracy has grown in popularity. This method is calledcard sharing, which works by making available the smart card decoding information in real time to other users, via a computer network. Police monitoring of unsecured card sharing networks has led to prosecutions.
Virtually every common encryption system is publicly known to have been compromised. These includeViaccess,Nagravision, SECAMediaguardandConax. The MediaCipher system, owned by Motorola, along with Scientific Atlanta's PowerKEY system, are the only digital TV encryption systems which have not publicly been compromised. This is largely thanks to there being noPC cardconditional-access modules(CAMs) available for either encryption system.
Despite the unauthorised decryption of media being illegal in many countries, smart card piracy is a crime which is very rarely punished, due to it being virtually undetectable, particularly in the case of satellite viewing. Laws in many countries do not clearly specify whether the decryption of foreign media services is illegal or not. This has caused much confusion in places such as Europe, where the proximity of many countries, coupled with the large land mass covered by satellite beams, allows signal access to many different providers. These providers are reluctant to pursue criminal charges against many viewers as they live in different countries. There have, however, been several high-profile prosecution cases in the USA, where satellite dealers have been taken to court resulting in large fines or jail time.[2]
An Internet key sharing scheme consists of one smart card with a valid, paid subscription which is located on an Internet server. It generates a stream of real-time decryption keys which are broadcast over the Internet to remotely located satellite receivers. The factors limiting the number of remotely located satellite receivers are the network latency, the period between updated keys, and the ability of the card client's receiver to use the decrypted key stream.[3]
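The latency constraint described above can be sketched with a toy check (the function and all numbers are hypothetical; no decryption is involved — this only illustrates the timing arithmetic that limits such networks):

```python
# Toy model of why latency bounds a key-sharing network. A control word
# (key) is rotated every `crypto_period_s` seconds; a remote receiver can
# only use a relayed key if its round-trip delay to the server, plus local
# processing time, is shorter than the key's remaining lifetime.

def key_is_usable(crypto_period_s: float, round_trip_s: float,
                  processing_s: float = 0.1) -> bool:
    """Return True if a relayed key would still be valid on arrival."""
    return (round_trip_s + processing_s) < crypto_period_s

# With a ~10 s crypto period, a 0.5 s round trip is workable, but a
# congested 12 s round trip means every key expires in transit.
print(key_is_usable(10.0, 0.5))   # True
print(key_is_usable(10.0, 12.0))  # False
```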
Each receiver is configured in an identical manner: a clone receiving the same television signal from a satellite and, from the Internet server, the same decryption keys to unlock that signal. As the server must have individually subscribed smart cards for each channel to be viewed, its continued operation tends to be costly and may require multiple subscriptions under different names and addresses. There is also a risk that, as the number of card clients on the card sharing network grows, it will attract the attention of the satellite TV service provider and law enforcement agencies, and that monitoring of IP addresses associated with the network may identify individual users and server operators, who then become targets for legal action by the satellite TV service provider or by legal authorities.
Key sharing schemes are typically used where replacement of compromised smart card systems (such as the deprecation of Nagra 1/2 in favour of Nagra 3) has made other pirate decryption methods non-functional.
In February 2014, an episode of the BBC's "Inside Out" disclosed that the complete Sky TV package could be obtained from black-market sources for as little as £10 per month through Internet key sharing. Swansea and Cardiff were highlighted as areas with significant activity in pubs using cracked boxes to show Premier League football.[4]
In some countries such as Canada and many Caribbean nations (except for the Dominican Republic), the black market in satellite TV piracy is closely tied to the gray market activity of using direct broadcast satellite signals to watch broadcasts intended for one country in some other, adjacent country. Many smaller countries have no domestic DBS operations and therefore few or no legal restrictions on the use of decoders which capture foreign signals.
The refusal of most providers to knowingly issue subscriptions outside their home country leads to a situation where pirate decryption is perceived as being one of the few ways to obtain certain programming. If there is no domestic provider for a channel, a grey market (subscribing using another address) or black market (pirate) system is a prerequisite for receiving many specific ethnic, sports or premium movie services.
Pirate or grey-market reception also provides viewers a means to bypass local blackout restrictions on sporting events and to access hard-core pornography where some content is not otherwise available.
The grey market for US satellite receivers in Canada at one point was estimated to serve as many as several hundred thousand English-speaking Canadian households. Canadian authorities, acting under pressure from cable companies and domestic broadcasters, have made many attempts to prevent Canadians from subscribing to US direct-broadcast services such as AT&T's DirecTV and Echostar's Dish Network.
While litigation has gone as far as the Supreme Court of Canada, no judicial ruling has yet been made on whether such restrictions violate the safeguards of the Canadian Charter of Rights and Freedoms which are intended to protect freedom of expression and prevent linguistic or ethnic discrimination. Domestic satellite and cable providers have adopted a strategy of judicial delay in which their legal counsel will file an endless series of otherwise-useless motions before the courts to ensure that the proponents of the grey-market systems run out of money before the "Charter Challenge" issue is decided.[citation needed]
According to K. William McKenzie, the Orillia, Ontario lawyer who won the case in the Supreme Court of Canada, a consortium headed by David Fuss and supported by Dawn Branton and others later launched a constitutional challenge to defeat section 9(1)(c) of the Radiocommunication Act on the basis that it breached the guarantee of freedom of expression enshrined in section 2(b) of the Canadian Charter of Rights.
The evidence compiled by Mr. McKenzie from his broadcasting clients in opposition to this challenge was so overwhelming that it was abandoned and the Court ordered that substantial costs be paid by the applicants.
In most cases, broadcast distributors will require a domestic billing address before issuing a subscription; post boxes and commercial mail receiving agencies are often used by grey-market subscribers to foreign providers to circumvent this restriction.
The situation in the US itself differs, as it is complicated by the legal question of subscriber access to distant local TV stations. Satellite providers are severely limited in their ability to offer subscriptions to distant locals due to the risk of further lawsuits by local affiliates of the same network in the subscriber's home designated market area. California stations have sued satellite providers who distributed New York signals nationally, as the distant stations would have an unfair advantage by broadcasting the same programming three hours earlier.
There is also a small "reverse gray market" for Canadian signals, transmitted with a footprint which sends full-strength DBS signals to many if not all of the contiguous 48 US states. This is desirable not only to receive Canadian-only content, but because some US-produced programs air in Canada in advance of their US broadcast. The question of signal substitution, by which Canadian cable and satellite providers substitute the signal of a local or domestic channel over a foreign or distant channel carrying the same program, is rendered more complex by the existence of a reverse grey market. Signal substitution had already been the cause of strong diplomatic protests by the United States, which considers the practice to constitute theft of advertising revenue.
The lack of domestic competition for premium movie channels in Canada is one factor encouraging grey-market reception; language is another key issue, as most Spanish-language programming in North America is on the US system and most French-language programming is on the Canadian system. A larger selection of sports and ethnic programming is also available to grey-market subscribers.
It could be said that the 1000-channel universe is a "reality" in North America, but only for the signal pirates, as many legal and geographic restrictions are placed on the ability to subscribe to many if not most of the physically available channels.
Other countries, such as Nicaragua during Sandinista rule, Cuba, Iran (Islamic Republic of Iran), Afghanistan during Taliban rule and Iraq during the Saddam Hussein regime, have attempted to prohibit their citizens from receiving any satellite broadcasts from foreign sources.
The situation in Europe differs somewhat, due to the much greater linguistic diversity in that region and due to the use of standardized DVB receivers capable of receiving multiple providers and free-to-air signals. North American providers normally lock their subscribers into "package receivers" unable to tune outside their one package; often the receivers are sold at artificially low prices and the subscription cost for programming is increased in order to favour new subscribers over existing ones. Providers are also notorious for using sales tactics such as bundling, in which to obtain one desired channel a subscriber must purchase a block of anywhere from several to more than a hundred other channels at substantial cost.
Many European companies such as British Sky Broadcasting prohibit subscriptions outside the UK and Ireland, but other satellite providers such as Sky Deutschland do sell yearly subscription cards legally to customers in other European countries without the need for an address or other personal information. The latter also applies to virtually all the adult channel cards sold in Europe.
The Middle East entered the picture with the Kingdom of Saudi Arabia. In July 2019, global football authorities of various competitions collectively condemned BeoutQ, a pirate broadcasting channel operating out of Saudi Arabia. The rights holders of the Premier League, FIFA World Cup and UEFA Champions League called on the authorities of the Arab nation to halt the operations of its homegrown pirate TV and broadcasting service, which illegally streams matches internationally.[5]
BeoutQ emerged in 2017 and has since been widely available across Saudi Arabia. The country, however, denied that the service is based in Riyadh, stating that its authorities are committed to fighting piracy. In February 2019, several sports bodies and broadcasters, including the U.S. National Basketball Association, the U.S. Tennis Association and Sky, demanded that the United States add Saudi Arabia to its "Priority Watch List" over TV piracy.[6] In April 2019, the Office of the United States Trade Representative (USTR) released a report placing Saudi Arabia on the watch list.[7]
A number of strategies have been used by providers to control or prevent the widespread pirate decryption of their signals.
One approach has been to take legal action against dealers who sell equipment which may be of use to satellite pirates; in some cases the objective has been to obtain lists of clients in order to take or threaten to take costly legal action against end-users. Providers have created departments with names like the "office of signal integrity" or the "end-users group" to pursue alleged pirate viewers.
As some equipment (such as a computer interface to communicate with standard ISO/IEC 7816 smartcards) is useful for other purposes, this approach has drawn strong opposition from groups such as the Electronic Frontier Foundation. There have also been US counter-suits alleging that the legal tactics used by some DBS providers to demand large amounts of money from end-users may themselves appear unlawful or border on extortion.
Much of the equipment is perfectly lawful to own; in these cases, only the misuse of the equipment to pirate signals is prohibited. This makes provider attempts at legal harassment of would-be pirates awkward at best, a problem for providers which is growing due to the Internet distribution of third-party software to reprogram some otherwise legitimate free-to-air DVB receivers to decrypt pay TV broadcasts with no extra hardware.
US-based Internet sites containing information about the compromised encryption schemes have also been targeted by lawyers, often with the objective of costing the defendants enough in legal fees that they have to shut down or move their sites to offshore or foreign Internet hosts.
In some cases, the serial numbers of unsubscribed smartcards have been blacklisted by providers, causing receivers to display error messages. A "hashing" approach of writing arbitrary data to every available location on the card and requiring that this data be present as part of the decryption algorithm has also been tried as a way of leaving less available free space for third-party code supplied by pirates.
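The "hashing" countermeasure can be illustrated with a small sketch. The key-derivation scheme and all values here are hypothetical — not the algorithm of any real provider — but they show the principle: if a digest of the entire card image feeds into the decryption key, pirates cannot repurpose free space without breaking decryption.

```python
import hashlib

def derive_key(card_image: bytes, entitlement_key: bytes) -> bytes:
    # Any change to any byte of the card image changes the digest,
    # and with it the derived decryption key.
    digest = hashlib.sha256(card_image).digest()
    return hashlib.sha256(digest + entitlement_key).digest()

clean = bytes(range(256)) * 4                        # stand-in for full card memory
tampered = bytearray(clean)
tampered[512] ^= 0xFF                                # a pirate patches one byte

k1 = derive_key(clean, b"monthly-entitlement")
k2 = derive_key(bytes(tampered), b"monthly-entitlement")
print(k1 != k2)  # True: the patched card derives the wrong key
```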
Another approach has been to load malicious code onto smartcards or receivers; these programs are intended to detect tampered cards and maliciously damage the cards or corrupt the contents of non-volatile memories within the receiver. This particular Trojan horse attack is often used as an ECM (electronic countermeasure) by providers, especially in North America where cards and receivers are sold by the providers themselves and are easy targets for insertion of backdoors in their computer firmware. The most famous ECM incident was the "Black Sunday" attack launched against tampered DirecTV "H" cards on January 21, 2001.[8] It was intended to destroy the cards by overwriting a non-erasable part of the card's internal memory in order to lock the processor into an endless loop.
The results of a provider resorting to the use of malicious code are usually temporary at best, as knowledge of how to repair most damage tends to be distributed rapidly by hobbyists through various Internet forums. There is also a potential legal question involved (which has yet to be addressed), as the equipment is normally the property not of the provider but of the end user. Providers will often print on the smartcard itself that the card is the property of the signal provider, but at least one legal precedent indicates that marking "this is mine" on a card, putting it in a box with a receiver and then selling it can legally mean "this is not mine anymore". Malicious damage to receiver firmware puts providers on even shakier legal ground in the unlikely event that the matter were ever to be heard by the judiciary.
The only solution which has shown any degree of long-term success against tampered smartcards has been the use of digital renewable security; if the code has been broken and the contents of the smartcard's programming widely posted across the Internet, replacing every smartcard in every subscriber's receiver with one of a different, uncompromised design will effectively put an end to a piracy problem. Providers tend to be slow to go this route due to cost (as many have millions of legitimate subscribers, each of whom must be sent a new card) and due to concern that someone may eventually crack the code used in whatever new replacement card is used, causing the process to begin anew.
Premiere in Germany has replaced all of its smartcards with the Nagravision Aladin card; the US DirecTV system has replaced its three compromised card types ("F" had no encryption chip, "H" was vulnerable to being reprogrammed by pirates and "HU" was vulnerable to a "glitch" which could be used to make the card skip an instruction). Both providers have been able to eliminate their problems with signal piracy by replacing the compromised smartcards after all other approaches had proved to provide at best limited results.
Dish Network and Bell Satellite TV had released new and more tamper-resistant smart cards over the years, known as the ROM2, ROM3, ROM10 and ROM11 series. All these cards used the Nagravision 1 access system. Despite the introduction of progressively newer security measures, older cards typically remained able to decrypt the satellite signal after new cards were released (a lack of EEPROM space on the ROM2 cards eventually left them unable to receive the updates necessary to view programming). In an effort to stop piracy, as by this point the Nagravision 1 system had been thoroughly reverse-engineered by resourceful hobbyists, an incompatible Nagravision 2 encryption system was introduced along with a smart card swap-out for existing customers. As more cards were swapped, channel groups were slowly converted to the new encryption system, starting with pay-per-view and HDTV channels, followed by the premium movie channels. This effort culminated in a complete shutdown of the Nagravision 1 datastream for all major channels in September 2005. Despite these efforts to secure their programming, a software hack was released in late August 2005, allowing for the decryption of the new Nagravision 2 channels with a DVB-S card and a PC. Just a few months later, early revisions of the Nagravision 2 cards had themselves been compromised. Broadcast programming currently[when?] uses a simulcrypt of Nagravision 2 and Nagravision 3, a first step toward a possible future shutdown of Nagravision 2 systems.
Various groups have been targeted for lawsuits in connection with pirate decryption issues:
One of the most severe sentences handed out for satellite TV piracy in the United States was to a Canadian businessman, Martin Clement Mullen, widely known for over a decade in the satellite industry as "Marty" Mullen.
Mullen was sentenced to seven years in prison with no parole and ordered to pay DirecTV and smart card provider NDS Ltd. US$24 million in restitution. He pleaded guilty in a Tampa, Florida court in September 2003 after being arrested when he entered the United States using a British passport in the name "Martin Paul Stewart".
Mr. Mullen had operated his satellite piracy business from Florida, the Cayman Islands and from his home in London, Ontario, Canada. Testimony in the Florida court showed that he had a network of over 100 sub-dealers working for him and that during one six-week period, he cleared US$4.4 million in cash from re-programming DirecTV smartcards that had been damaged in an electronic countermeasure.
NDS Inc. Chief of Security John Norris pursued Mullen for a decade in three different countries. When Mullen originally fled the United States to Canada in the mid-1990s, Norris launched an investigation that saw an undercover operator (a former Canadian police officer named Don Best) become one of Mullen's sub-dealers and his closest personal friend for over a year. In the summer of 2003, when Mullen travelled under another identity to visit his operations in Florida, US federal authorities were waiting for him at the airport, after being tipped off by Canadian investigators working for NDS Inc.
However, the NDS Group was accused (in several lawsuits) by Canal+ (a suit dismissed as part of an otherwise-unrelated corporate takeover deal) and by EchoStar (now Dish Network) of hacking the Nagra encryption and releasing the information on the Internet. The jury awarded EchoStar only US$45.69 in actual damages (one month's average subscription fee) in Claim 3.
Bell Satellite TV (as Bell ExpressVu) was sued by Vidéotron, a Québécor-owned rival which operates cable television systems in major Québec markets. Québécor also owns TVA, a broadcaster. Bell's inferior security and failure to replace compromised smartcards in a timely fashion cost Vidéotron cable subscribers, as viewers could obtain the same content for free from satellite under the compromised Nagra1 system from 1999 to 2005; pirate decryption also deprived TVA's French-language news channel LCN of a monthly 48¢/subscriber fee. The Superior Court of Quebec awarded $339,000 and $262,000 in damages and interest to Vidéotron and TVA Group in 2012. Québec's Court of Appeal ruled these dollar amounts "erroneous" and increased them in 2015; despite an attempt to appeal to the Supreme Court of Canada, a final award of $141 million in damages and interest was upheld.[21]
|
https://en.wikipedia.org/wiki/Pirate_decryption
|
Adaptive redaction is a form of redaction whereby sensitive parts of a document are automatically removed based on policy. It is primarily used in next-generation Data Loss Prevention (DLP) solutions.[1]
The policy is a set of rules based on content and context. Context can include:
The content can be 'visible' information, such as that you see on the screen. For example, sending unprotected credit card information outside an organisation breaches the Payment Card Industry Data Security Standard (PCI DSS). Many organisations accept credit card information through incoming email, but a reply to an email containing such information would send the prohibited information out, causing a breach of policy. Adaptive redaction can therefore be used to remove just the credit card number while allowing the email to be sent.
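A minimal sketch of that credit-card case follows. The pattern is deliberately simplified: production DLP engines also validate check digits and apply contextual policy, so everything below (pattern, function name, replacement token) is an assumption for illustration only.

```python
import re

# Simplified PAN detector: 13-16 digits, optionally separated by spaces
# or hyphens, anchored on word boundaries. Instead of blocking the whole
# email (a false positive for the rest of the text), only the matched
# card number is removed so the message can still be sent.
PAN_PATTERN = re.compile(r"\b\d(?:[ -]?\d){12,15}\b")

def redact_pans(text: str) -> str:
    return PAN_PATTERN.sub("[REDACTED]", text)

msg = "Thanks! Card 4111 1111 1111 1111 processed, see you Tuesday."
print(redact_pans(msg))
# Thanks! Card [REDACTED] processed, see you Tuesday.
```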
Content can also be 'invisible' information, such as that in document properties and revision history, and it can also be 'active' content which has been embedded in an electronic document, such as a macro. Release of 'invisible' information has on several occasions created embarrassment for government bodies.[2][3]
Adaptive redaction is designed to alleviate "false positive" events created by Data Loss Prevention (DLP) security solutions. False positives occur when a DLP policy triggers on, and prevents, legitimate outgoing communication; in the majority of cases this is caused by oversight on the part of the sender.
|
https://en.wikipedia.org/wiki/Adaptive_redaction
|
PAN truncation is an anti-fraud measure available on some credit-card-processing point of sale (POS) terminals as part of a merchant account service.
"PAN" is an acronym forprimary account number, i.e., the "card number" on either adebitor acredit card.PAN truncationsimply replaces the card number printed on a customer receipt with a printout of only the last four digits, the remainder being replaced usually byasterisks. This hides the card number from anyone who obtains the receipt when discarded, or by other means, while still allowing a card holder with multiple cards to identify which was used, and thus accurately record the transaction.
PAN truncation is a measure to combat payment card fraud, which is increasing worldwide,[1] particularly in a global market where "card not present" (CNP) transactions are increasingly[2] popular over the Internet, by mail, and by telephone.
|
https://en.wikipedia.org/wiki/PAN_truncation
|
Computer security (also cybersecurity, digital security, or information technology (IT) security) is a subdiscipline within the field of information security. It consists of the protection of computer software, systems and networks from threats that can lead to unauthorized information disclosure, theft or damage to hardware, software, or data, as well as from the disruption or misdirection of the services they provide.[1][2]
The significance of the field stems from the expanded reliance on computer systems, the Internet,[3] and wireless network standards. Its importance is further amplified by the growth of smart devices, including smartphones, televisions, and the various devices that constitute the Internet of things (IoT). Cybersecurity has emerged as one of the most significant new challenges facing the contemporary world, due to both the complexity of information systems and the societies they support. Security is particularly crucial for systems that govern large-scale systems with far-reaching physical effects, such as power distribution, elections, and finance.[4][5]
Although many aspects of computer security involve digital security, such as electronic passwords and encryption, physical security measures such as metal locks are still used to prevent unauthorized tampering. IT security is not a perfect subset of information security, and therefore does not align completely with the security convergence schema.
A vulnerability refers to a flaw in the structure, execution, functioning, or internal oversight of a computer or system that compromises its security. Most of the vulnerabilities that have been discovered are documented in the Common Vulnerabilities and Exposures (CVE) database.[6] An exploitable vulnerability is one for which at least one working attack or exploit exists.[7] Actors maliciously seeking vulnerabilities are known as threats. Vulnerabilities can be researched, reverse-engineered, hunted, or exploited using automated tools or customized scripts.[8][9]
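CVE entries are catalogued under identifiers of the documented form CVE-YYYY-NNNN, with four or more digits in the sequence number. A small helper, sketched here for illustration, can pull such identifiers out of an advisory text:

```python
import re

# CVE IDs: "CVE-", a four-digit year, then a sequence of four or more digits.
CVE_RE = re.compile(r"\bCVE-\d{4}-\d{4,}\b")

def extract_cves(text: str) -> list[str]:
    """Return all CVE identifiers found in a block of text, in order."""
    return CVE_RE.findall(text)

advisory = "Fixed CVE-2021-44228 and CVE-2022-0847; unrelated ticket BUG-1234."
print(extract_cves(advisory))  # ['CVE-2021-44228', 'CVE-2022-0847']
```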
Various people or parties are vulnerable to cyber attacks; however, different groups are likely to experience different types of attacks more than others.[10]
In April 2023, the United Kingdom Department for Science, Innovation & Technology released a report on cyber attacks over the previous 12 months.[11] They surveyed 2,263 UK businesses, 1,174 UK registered charities, and 554 education institutions. The research found that "32% of businesses and 24% of charities overall recall any breaches or attacks from the last 12 months." These figures were much higher for "medium businesses (59%), large businesses (69%), and high-income charities with £500,000 or more in annual income (56%)."[11] Yet, although medium or large businesses are more often the victims, since larger companies have generally improved their security over the last decade, small and midsize businesses (SMBs) have also become increasingly vulnerable as they often "do not have advanced tools to defend the business."[10] SMBs are most likely to be affected by malware, ransomware, phishing, man-in-the-middle attacks, and denial-of-service (DoS) attacks.[10]
Normal internet users are most likely to be affected by untargeted cyberattacks.[12] These are where attackers indiscriminately target as many devices, services, or users as possible. They do this using techniques that take advantage of the openness of the Internet. These strategies mostly include phishing, ransomware, water holing and scanning.[12]
To secure a computer system, it is important to understand the attacks that can be made against it, and these threats can typically be classified into one of the following categories:
A backdoor in a computer system, a cryptosystem, or an algorithm is any secret method of bypassing normal authentication or security controls. These weaknesses may exist for many reasons, including original design or poor configuration.[13] Due to the nature of backdoors, they are of greater concern to companies and databases as opposed to individuals.
Backdoors may be added by an authorized party to allow some legitimate access or by an attacker for malicious reasons. Criminals often use malware to install backdoors, giving them remote administrative access to a system.[14] Once they have access, cybercriminals can "modify files, steal personal information, install unwanted software, and even take control of the entire computer."[14]
Backdoors can be difficult to detect, as they often remain hidden within the source code or system firmware; uncovering them typically requires access to that source code or intimate knowledge of the operating system of the computer.
Denial-of-service attacks (DoS) are designed to make a machine or network resource unavailable to its intended users.[15] Attackers can deny service to individual victims, such as by deliberately entering a wrong password enough consecutive times to cause the victim's account to be locked, or they may overload the capabilities of a machine or network and block all users at once. While a network attack from a single IP address can be blocked by adding a new firewall rule, many forms of distributed denial-of-service (DDoS) attacks are possible, where the attack comes from a large number of points. In this case, defending against these attacks is much more difficult. Such attacks can originate from the zombie computers of a botnet or from a range of other possible techniques, including distributed reflective denial-of-service (DRDoS), where innocent systems are fooled into sending traffic to the victim.[15] With such attacks, the amplification factor makes the attack easier for the attacker because they have to use little bandwidth themselves. To understand why attackers may carry out these attacks, see the 'attacker motivation' section.
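The contrast between blocking a single flooding address and defending against a distributed attack can be illustrated with a toy per-IP threshold (the limit, addresses, and function are hypothetical):

```python
from collections import Counter

def flagged_ips(requests: list[str], limit: int = 100) -> set[str]:
    """Return the source IPs whose request count exceeds the limit."""
    counts = Counter(requests)
    return {ip for ip, n in counts.items() if n > limit}

# One host sending 10,000 requests is trivially identified...
single_source = ["10.0.0.9"] * 10_000
print(flagged_ips(single_source))  # {'10.0.0.9'}

# ...but the same volume spread across 10,000 botnet hosts stays under
# every per-IP limit, so the simple rule flags nothing.
botnet = [f"10.8.{i // 256}.{i % 256}" for i in range(10_000)]
print(flagged_ips(botnet))  # set()
```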
A direct-access attack is when an unauthorized user (an attacker) gains physical access to a computer, most likely to directly copy data from it or steal information.[16] Attackers may also compromise security by making operating system modifications, installing software worms, keyloggers, covert listening devices or using wireless microphones. Even when the system is protected by standard security measures, these may be bypassed by booting another operating system or tool from a CD-ROM or other bootable media. Disk encryption and the Trusted Platform Module standard are designed to prevent these attacks.
Direct service attackers are related in concept to direct memory attacks, which allow an attacker to gain direct access to a computer's memory.[17] The attacks "take advantage of a feature of modern computers that allows certain devices, such as external hard drives, graphics cards, or network cards, to access the computer's memory directly."[17]
Eavesdropping is the act of surreptitiously listening to a private computer conversation (communication), usually between hosts on a network. It typically occurs when a user connects to a network where traffic is not secured or encrypted and sends sensitive business data to a colleague, which, when listened to by an attacker, could be exploited.[18] Data transmitted across an open network allows an attacker to exploit a vulnerability and intercept it via various methods.
Unlike malware, direct-access attacks, or other forms of cyber attacks, eavesdropping attacks are unlikely to negatively affect the performance of networks or devices, making them difficult to notice.[18] In fact, "the attacker does not need to have any ongoing connection to the software at all. The attacker can insert the software onto a compromised device, perhaps by direct insertion or perhaps by a virus or other malware, and then come back some time later to retrieve any data that is found or trigger the software to send the data at some determined time."[19]
Using a virtual private network (VPN), which encrypts data between two points, is one of the most common forms of protection against eavesdropping. Using the best form of encryption possible for wireless networks is best practice, as well as using HTTPS instead of unencrypted HTTP.[20]
Programs such as Carnivore and NarusInSight have been used by the Federal Bureau of Investigation (FBI) and NSA to eavesdrop on the systems of internet service providers. Even machines that operate as a closed system (i.e., with no contact with the outside world) can be eavesdropped upon by monitoring the faint electromagnetic transmissions generated by the hardware. TEMPEST is a specification by the NSA referring to these attacks.
Malicious software (malware) is any software code or computer program "intentionally written to harm a computer system or its users."[21]Once present on a computer, it can leak sensitive details such as personal information, business information and passwords, can give control of the system to the attacker, and can corrupt or delete data permanently.[22][23]
Man-in-the-middle attacks (MITM) involve a malicious attacker trying to intercept, surveil or modify communications between two parties by spoofing one or both parties' identities and injecting themselves in between.[24] Types of MITM attacks include:
Surfacing in 2017, a new class of multi-vector,[25] polymorphic[26] cyber threats combine several types of attacks and change form to avoid cybersecurity controls as they spread.
Multi-vector polymorphic attacks, as the name describes, are both multi-vectored and polymorphic.[27] Firstly, they are a singular attack that involves multiple methods of attack. In this sense, they are multi-vectored, i.e. the attack can use multiple means of propagation, such as via the Web, email and applications. However, they are also multi-staged, meaning that "they can infiltrate networks and move laterally inside the network."[27] The attacks can be polymorphic, meaning that the cyberattacks used, such as viruses, worms or trojans, "constantly change ("morph") making it nearly impossible to detect them using signature-based defences."[27]
Phishing is the attempt to acquire sensitive information such as usernames, passwords, and credit card details directly from users by deceiving them.[28] Phishing is typically carried out by email spoofing, instant messaging, text message, or on a phone call. Attackers often direct users to enter details at a fake website whose look and feel are almost identical to the legitimate one.[29] The fake website often asks for personal information, such as login details and passwords. This information can then be used to gain access to the individual's real account on the real website.
Preying on a victim's trust, phishing can be classified as a form ofsocial engineering. Attackers can use creative ways to gain access to real accounts. A common scam is for attackers to send fake electronic invoices[30]to individuals showing that they recently purchased music, apps, or others, and instructing them to click on a link if the purchases were not authorized. A more strategic type of phishing is spear-phishing which leverages personal or organization-specific details to make the attacker appear like a trusted source. Spear-phishing attacks target specific individuals, rather than the broad net cast by phishing attempts.[31]
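One common phishing technique mentioned above is directing victims to a fake website whose domain name closely resembles a legitimate one. The following is a minimal, illustrative sketch (not drawn from the cited sources) of how a mail gateway might flag such lookalike domains; the allow-list and threshold are hypothetical:

```python
from difflib import SequenceMatcher

# Hypothetical allow-list of domains the organization trusts.
KNOWN_DOMAINS = ["paypal.com", "apple.com", "amazon.com"]

def lookalike_score(domain: str) -> tuple[str, float]:
    """Return the closest known domain and its similarity ratio (0..1)."""
    best = max(KNOWN_DOMAINS, key=lambda d: SequenceMatcher(None, domain, d).ratio())
    return best, SequenceMatcher(None, domain, best).ratio()

def is_suspicious(domain: str, threshold: float = 0.85) -> bool:
    """Flag domains that are near (but not exact) matches to a trusted name."""
    best, score = lookalike_score(domain)
    return domain != best and score >= threshold

print(is_suspicious("paypa1.com"))   # True: near-match to paypal.com
print(is_suspicious("example.org"))  # False: unrelated domain
print(is_suspicious("paypal.com"))   # False: exact match to a trusted name
```

Real anti-phishing systems combine many more signals (homoglyph detection, domain age, reputation feeds); this only illustrates the lookalike-domain idea.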
Privilege escalationdescribes a situation where an attacker with some level of restricted access is able to, without authorization, elevate their privileges or access level.[32]For example, a standard computer user may be able to exploit avulnerabilityin the system to gain access to restricted data; or even becomerootand have full unrestricted access to a system. The severity of attacks can range from attacks simply sending an unsolicited email to aransomware attackon large amounts of data. Privilege escalation usually starts withsocial engineeringtechniques, oftenphishing.[32]
Privilege escalation can be separated into two strategies, horizontal and vertical privilege escalation:
Any computational system affects its environment in some form. This effect can range from electromagnetic radiation, to residual data in RAM cells (which, as a consequence, makes aCold boot attackpossible), to hardware implementation faults that allow access to, or guessing of, values that normally should be inaccessible. In side-channel attack scenarios, the attacker gathers such information about a system or network to infer its internal state and, as a result, access information that the victim assumes to be secure. The target information in a side channel can be challenging to detect due to its low amplitude when combined with other signals.[33]
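A classic software-level side channel is timing: a string comparison that returns at the first mismatching byte leaks how many leading bytes of a guess were correct. The sketch below contrasts a naive comparison with Python's standard constant-time comparison; the token value is illustrative:

```python
import hmac

def naive_equal(a: bytes, b: bytes) -> bool:
    # Returns at the first mismatching byte, so the running time leaks
    # how many leading bytes of the attacker's guess were correct.
    if len(a) != len(b):
        return False
    for x, y in zip(a, b):
        if x != y:
            return False
    return True

def constant_time_equal(a: bytes, b: bytes) -> bool:
    # hmac.compare_digest examines every byte regardless of mismatches,
    # removing the timing signal an attacker could measure.
    return hmac.compare_digest(a, b)

secret = b"s3cret-token"
print(naive_equal(secret, b"s3cret-token"))          # True, but timing-leaky
print(constant_time_equal(secret, b"s3cret-token"))  # True, constant time
```

This is why security-sensitive code (e.g. checking authentication tokens) should use a constant-time comparison rather than `==`.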
Social engineering, in the context of computer security, aims to convince a user to disclose secrets such as passwords, card numbers, etc. or grant physical access by, for example, impersonating a senior executive, bank, a contractor, or a customer.[34]This generally involves exploiting people's trust, and relying on theircognitive biases. A common scam involves emails sent to accounting and finance department personnel, impersonating their CEO and urgently requesting some action. One of the main techniques of social engineering isphishing.
In early 2016, theFBIreported that suchbusiness email compromise(BEC) scams had cost US businesses more than $2 billion in about two years.[35]
In May 2016, theMilwaukee BucksNBAteam was the victim of this type of cyber scam with a perpetrator impersonating the team's presidentPeter Feigin, resulting in the handover of all the team's employees' 2015W-2tax forms.[36]
Spoofing is an act of pretending to be a valid entity through the falsification of data (such as an IP address or username), in order to gain access to information or resources that one is otherwise unauthorized to obtain. Spoofing is closely related tophishing.[37][38]There are several types of spoofing, including:
In 2018, the cybersecurity firmTrellixpublished research on the life-threatening risk of spoofing in the healthcare industry.[40]
Tamperingdescribes amalicious modificationor alteration of data. It is an intentional but unauthorized act resulting in the modification of a system, components of systems, its intended behavior, or data. So-calledEvil Maid attacksand security services planting ofsurveillancecapability into routers are examples.[41]
HTMLsmuggling allows an attacker tosmugglea malicious code inside a particular HTML or web page.[42]HTMLfiles can carry payloads concealed as benign, inert data in order to defeatcontent filters. These payloads can be reconstructed on the other side of the filter.[43]
When a target user opens the HTML, the malicious code is activated; the web browser thendecodesthe script, which then unleashes the malware onto the target's device.[42]
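From a defender's perspective, one heuristic against HTML smuggling is to flag pages whose scripts contain unusually long base64-like runs, since smuggled payloads are typically carried as encoded text. The following sketch is purely illustrative (the regexes and the 200-character threshold are assumptions, not a production filter):

```python
import re

# Heuristic: a long run of base64-alphabet characters inside a <script>
# block is a common sign of a payload concealed as inert text.
B64_RUN = re.compile(r"[A-Za-z0-9+/]{200,}={0,2}")

def flag_possible_smuggling(html: str, min_hits: int = 1) -> bool:
    scripts = re.findall(r"<script[^>]*>(.*?)</script>", html, re.S | re.I)
    hits = sum(len(B64_RUN.findall(s)) for s in scripts)
    return hits >= min_hits

benign = "<html><body><p>Hello</p></body></html>"
payload = "<script>var d = atob('" + "QUJD" * 100 + "');</script>"
print(flag_possible_smuggling(benign))   # False
print(flag_possible_smuggling(payload))  # True
```

Real content filters inspect far more than this (JavaScript decoding behavior, MIME types, blob URLs), which is precisely why smuggling techniques keep evolving.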
Employee behavior can have a big impact oninformation securityin organizations. Cultural concepts can help different segments of the organization work effectively or work against effectiveness toward information security within an organization. Information security culture is the "...totality of patterns of behavior in an organization that contributes to the protection of information of all kinds."[44]
Andersson and Reimers (2014) found that employees often do not see themselves as part of their organization's information security effort and often take actions that impede organizational changes.[45]Indeed, the Verizon Data Breach Investigations Report 2020, which examined 3,950 security breaches, discovered 30% of cybersecurity incidents involved internal actors within a company.[46]Research shows information security culture needs to be improved continuously. In "Information Security Culture from Analysis to Change", authors commented, "It's a never-ending process, a cycle of evaluation and change or maintenance." To manage the information security culture, five steps should be taken: pre-evaluation, strategic planning, operative planning, implementation, and post-evaluation.[47]
In computer security, acountermeasureis an action, device, procedure or technique that reduces a threat, a vulnerability, or anattackby eliminating or preventing it, by minimizing the harm it can cause, or by discovering and reporting it so that corrective action can be taken.[48][49][50]
Some common countermeasures are listed in the following sections:
Security by design, or alternately secure by design, means that the software has been designed from the ground up to be secure. In this case, security is considered a main feature.
The UK government's National Cyber Security Centre separates secure cyber design principles into five sections:[51]
These design principles of security by design can include some of the following techniques:
Security architecture can be defined as the "practice of designing computer systems to achieve security goals."[52]These goals have overlap with the principles of "security by design" explored above, including to "make initial compromise of the system difficult," and to "limit the impact of any compromise."[52]In practice, the role of a security architect would be to ensure the structure of a system reinforces the security of the system, and that new changes are safe and meet the security requirements of the organization.[53][54]
Similarly, Techopedia defines security architecture as "a unified security design that addresses the necessities and potential risks involved in a certain scenario or environment. It also specifies when and where to apply security controls. The design process is generally reproducible." The key attributes of security architecture are:[55]
Practicing security architecture provides the right foundation to systematically address business, IT and security concerns in an organization.
A state of computer security is the conceptual ideal, attained by the use of three processes: threat prevention, detection, and response. These processes are based on various policies and system components, which include the following:
Today, computer security consists mainly of preventive measures, likefirewallsor anexit procedure. A firewall can be defined as a way of filtering network data between a host or a network and another network, such as theInternet. They can be implemented as software running on the machine, hooking into thenetwork stack(or, in the case of mostUNIX-based operating systems such asLinux, built into the operating systemkernel) to provide real-time filtering and blocking.[56]Another implementation is a so-calledphysical firewall, which consists of a separate machine filtering network traffic. Firewalls are common amongst machines that are permanently connected to the Internet.
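The core logic of a filtering firewall can be sketched as a first-match-wins rule table with a default-deny policy. The ruleset below is hypothetical and greatly simplified (real firewalls also match protocol, direction, connection state, etc.):

```python
from dataclasses import dataclass
from ipaddress import ip_address, ip_network
from typing import Optional

@dataclass
class Rule:
    action: str           # "allow" or "deny"
    network: str          # source CIDR the rule applies to
    port: Optional[int]   # destination port, or None for any port

# Illustrative ruleset: first matching rule wins; final rule is default-deny.
RULES = [
    Rule("allow", "10.0.0.0/8", 22),    # SSH from the internal network only
    Rule("allow", "0.0.0.0/0", 443),    # HTTPS from anywhere
    Rule("deny",  "0.0.0.0/0", None),   # everything else is dropped
]

def filter_packet(src_ip: str, dst_port: int) -> str:
    for rule in RULES:
        if ip_address(src_ip) in ip_network(rule.network) and \
           rule.port in (None, dst_port):
            return rule.action
    return "deny"

print(filter_packet("10.1.2.3", 22))     # allow
print(filter_packet("203.0.113.9", 22))  # deny
print(filter_packet("203.0.113.9", 443)) # allow
```

In-kernel firewalls apply exactly this kind of ordered rule matching, but per packet at line rate and against many more header fields.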
Some organizations are turning tobig dataplatforms, such asApache Hadoop, to extend data accessibility andmachine learningto detectadvanced persistent threats.[58]
In order to ensure adequate security, the confidentiality, integrity and availability of a network, better known as the CIA triad, must be protected; the triad is considered the foundation of information security.[59]To achieve those objectives, administrative, physical and technical security measures should be employed. The amount of security afforded to an asset can only be determined when its value is known.[60]
Vulnerability management is the cycle of identifying, fixing or mitigatingvulnerabilities,[61]especially in software andfirmware. Vulnerability management is integral to computer security andnetwork security.
Vulnerabilities can be discovered with avulnerability scanner, which analyzes a computer system in search of known vulnerabilities,[62]such asopen ports, insecure software configuration, and susceptibility tomalware. In order for these tools to be effective, they must be kept up to date with every new update the vendor releases. Typically, these updates will scan for the new vulnerabilities that were introduced recently.
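The most basic building block of such a scanner is a TCP connect probe: attempt a connection to each port and record which ones accept. The sketch below demonstrates this against a listener it creates itself on the loopback interface, so it touches no external host:

```python
import socket

def scan_ports(host: str, ports) -> list:
    """Return the subset of `ports` accepting TCP connections on `host`."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(0.5)
            if s.connect_ex((host, port)) == 0:  # 0 means connect succeeded
                open_ports.append(port)
    return open_ports

# Demonstration against a listener we control on the loopback interface.
listener = socket.socket()
listener.bind(("127.0.0.1", 0))          # let the OS pick a free port
listener.listen(1)
port = listener.getsockname()[1]

print(scan_ports("127.0.0.1", [port]))   # the listening port is reported open
listener.close()
```

Full scanners layer service fingerprinting and a vulnerability database on top of this probe; note that scanning hosts you do not own is illegal in many jurisdictions.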
Beyond vulnerability scanning, many organizations contract outside security auditors to run regularpenetration testsagainst their systems to identify vulnerabilities. In some sectors, this is a contractual requirement.[63]
The act of assessing and reducing vulnerabilities to cyber attacks is commonly referred to asinformation technology security assessments. They aim to assess systems for risk and to predict and test for their vulnerabilities. Whileformal verificationof the correctness of computer systems is possible,[64][65]it is not yet common. Operating systems that have been formally verified include seL4[66]and SYSGO'sPikeOS[67][68]– but these make up a very small percentage of the market.
It is possible to reduce an attacker's chances by keeping systems up to date with security patches and updates and by hiring people with expertise in security. Large companies with significant threats can hire Security Operations Centre (SOC) Analysts. These are specialists in cyber defences, with their role ranging from "conducting threat analysis to investigating reports of any new issues and preparing and testing disaster recovery plans."[69]
Whilst no measures can completely guarantee the prevention of an attack, these measures can help mitigate the damage of possible attacks. The effects of data loss/damage can be also reduced by carefulbacking upandinsurance.
Outside of formal assessments, there are various methods of reducing vulnerabilities.Two-factor authenticationis a method for mitigating unauthorized access to a system or sensitive information.[70]It requiressomething you know: a password or PIN, andsomething you have: a card, dongle, cellphone, or another piece of hardware. This increases security, as an unauthorized person needs both of these to gain access.
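The "something you have" factor is often a time-based one-time password (TOTP) app. The standard algorithm (RFC 6238, built on RFC 4226 HOTP) is short enough to sketch with only the Python standard library; the final line reproduces a published RFC 6238 test vector:

```python
import hashlib
import hmac
import struct

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """RFC 4226 HOTP: HMAC-SHA1 over the counter, dynamically truncated."""
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(secret: bytes, unix_time: int, step: int = 30, digits: int = 6) -> str:
    """RFC 6238 TOTP: HOTP over the number of elapsed 30-second steps."""
    return hotp(secret, unix_time // step, digits)

# RFC 6238 (Appendix B) test vector: secret "12345678901234567890",
# time 59 seconds, SHA-1, 8 digits -> "94287082".
print(totp(b"12345678901234567890", 59, digits=8))  # 94287082
```

Because server and phone each derive the same short-lived code from a shared secret and the current time, a stolen password alone is not enough to log in.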
Protecting against social engineering and direct computer access (physical) attacks can only happen by non-computer means, which can be difficult to enforce, relative to the sensitivity of the information. Training is often involved to help mitigate this risk by improving people's knowledge of how to protect themselves and by increasing people's awareness of threats.[71]However, even in highly disciplined environments (e.g. military organizations), social engineering attacks can still be difficult to foresee and prevent.
Inoculation, derived frominoculation theory, seeks to prevent social engineering and other fraudulent tricks and traps by instilling a resistance to persuasion attempts through exposure to similar or related attempts.[72]
Hardware-based or assisted computer security also offers an alternative to software-only computer security. Using devices and methods such asdongles,trusted platform modules, intrusion-aware cases, drive locks, disabling USB ports, and mobile-enabled access may be considered more secure due to the physical access (or sophisticated backdoor access) required in order to be compromised. Each of these is covered in more detail below.
One use of the termcomputer securityrefers to technology that is used to implementsecure operating systems. Using secure operating systems is a good way of ensuring computer security. These are systems that have achieved certification from an external security-auditing organization; the most widely used evaluation standard is theCommon Criteria(CC).[86]
In software engineering,secure codingaims to guard against the accidental introduction of security vulnerabilities. It is also possible to create software designed from the ground up to be secure. Such systems aresecure by design. Beyond this, formal verification aims to prove thecorrectnessof thealgorithmsunderlying a system;[87]important forcryptographic protocolsfor example.
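A canonical secure-coding example is preventing SQL injection with parameterized queries. The sketch below, using Python's built-in sqlite3 module and an in-memory database, contrasts a vulnerable string-built query with the parameterized form:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, is_admin INTEGER)")
conn.execute("INSERT INTO users VALUES ('alice', 0)")

user_input = "alice' OR '1'='1"  # a classic injection attempt

# Vulnerable: string interpolation lets the input rewrite the query logic.
unsafe = f"SELECT * FROM users WHERE name = '{user_input}'"
print(len(conn.execute(unsafe).fetchall()))  # 1 -- the injection matched every row

# Secure coding: a parameterized query treats the input purely as data.
safe = conn.execute("SELECT * FROM users WHERE name = ?", (user_input,))
print(len(safe.fetchall()))                  # 0 -- no user literally has that name
```

The same principle generalizes: never splice untrusted input into code (SQL, shell commands, HTML); pass it through an API that keeps data and code separate.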
Within computer systems, two of the mainsecurity modelscapable of enforcing privilege separation areaccess control lists(ACLs) androle-based access control(RBAC).
Anaccess-control list(ACL), with respect to a computer file system, is a list of permissions associated with an object. An ACL specifies which users or system processes are granted access to objects, as well as what operations are allowed on given objects.
Role-based access control is an approach to restricting system access to authorized users,[88][89][90]used by the majority of enterprises with more than 500 employees,[91]and can implementmandatory access control(MAC) ordiscretionary access control(DAC).
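The structural difference between the two models can be shown in a few lines: an ACL attaches permissions directly to each object, while RBAC attaches permissions to roles and assigns users to roles. The object names, roles, and users below are illustrative:

```python
# Access-control list: permissions are attached to each object per user.
acl = {
    "payroll.xlsx": {"alice": {"read", "write"}, "bob": {"read"}},
}

def acl_allows(user: str, obj: str, op: str) -> bool:
    return op in acl.get(obj, {}).get(user, set())

# Role-based access control: permissions attach to roles, users to roles.
role_permissions = {"auditor": {"read"}, "admin": {"read", "write"}}
user_roles = {"carol": {"auditor"}}

def rbac_allows(user: str, op: str) -> bool:
    return any(op in role_permissions[r] for r in user_roles.get(user, set()))

print(acl_allows("bob", "payroll.xlsx", "write"))  # False: bob is read-only
print(rbac_allows("carol", "read"))                # True: via the auditor role
```

RBAC's indirection through roles is what makes it scale: granting a new hire access means assigning one role, not editing the ACL of every object.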
A further approach,capability-based securityhas been mostly restricted to research operating systems. Capabilities can, however, also be implemented at the language level, leading to a style of programming that is essentially a refinement of standard object-oriented design. An open-source project in the area is theE language.
The end-user is widely recognized as the weakest link in the security chain[92]and it is estimated that more than 90% of security incidents and breaches involve some kind of human error.[93][94]Among the most commonly recorded forms of errors and misjudgment are poor password management, sending emails containing sensitive data and attachments to the wrong recipient, the inability to recognize misleading URLs and to identify fake websites and dangerous email attachments. A common mistake that users make is saving their user id/password in their browsers to make it easier to log in to banking sites. This is a gift to attackers who have obtained access to a machine by some means. The risk may be mitigated by the use of two-factor authentication.[95]
As the human component of cyber risk is particularly relevant in determining the global cyber risk[96]an organization is facing, security awareness training, at all levels, not only provides formal compliance with regulatory and industry mandates but is considered essential[97]in reducing cyber risk and protecting individuals and companies from the great majority of cyber threats.
The focus on the end-user represents a profound cultural change for many security practitioners, who have traditionally approached cybersecurity exclusively from a technical perspective, and moves along the lines suggested by major security centers[98]to develop a culture of cyber awareness within the organization, recognizing that a security-aware user provides an important line of defense against cyber attacks.
Related to end-user training,digital hygieneorcyber hygieneis a fundamental principle relating to information security and, as the analogy withpersonal hygieneshows, is the equivalent of establishing simple routine measures to minimize the risks from cyber threats. The assumption is that good cyber hygiene practices can give networked users another layer of protection, reducing the risk that one vulnerable node will be used to either mount attacks or compromise another node or network, especially from common cyberattacks.[99]Cyber hygiene should also not be mistaken forproactive cyber defence, a military term.[100]
The most common acts of digital hygiene can include updating malware protection, cloud back-ups, passwords, and ensuring restricted admin rights and network firewalls.[101]As opposed to a purely technology-based defense against threats, cyber hygiene mostly regards routine measures that are technically simple to implement and mostly dependent on discipline[102]or education.[103]It can be thought of as an abstract list of tips or measures that have been demonstrated as having a positive effect on personal or collective digital security. As such, these measures can be performed by laypeople, not just security experts.
Cyber hygiene relates to personal hygiene as computer viruses relate to biological viruses (or pathogens). However, while the termcomputer viruswas coined almost simultaneously with the creation of the first working computer viruses,[104]the termcyber hygieneis a much later invention, perhaps as late as 2000[105]by Internet pioneerVint Cerf. It has since been adopted by theCongress[106]andSenateof the United States,[107]the FBI,[108]EUinstitutions[99]and heads of state.[100]
Responding to attemptedsecurity breachesis often very difficult for a variety of reasons, including:
Where an attack succeeds and a breach occurs, many jurisdictions now have in place mandatorysecurity breach notification laws.
The growth in the number of computer systems and the increasing reliance upon them by individuals, businesses, industries, and governments means that there are an increasing number of systems at risk.
The computer systems of financial regulators and financial institutions like theU.S. Securities and Exchange Commission, SWIFT, investment banks, and commercial banks are prominent hacking targets forcybercriminalsinterested in manipulating markets and making illicit gains.[109]Websites and apps that accept or storecredit card numbers, brokerage accounts, andbank accountinformation are also prominent hacking targets, because of the potential for immediate financial gain from transferring money, making purchases, or selling the information on theblack market.[110]In-store payment systems andATMshave also been tampered with in order to gather customer account data andPINs.
TheUCLAInternet Report: Surveying the Digital Future (2000) found that the privacy of personal data created barriers to online sales and that more than nine out of 10 internet users were somewhat or very concerned aboutcredit cardsecurity.[111]
The most common web technologies for improving security between browsers and websites are SSL (Secure Sockets Layer) and its successor TLS (Transport Layer Security). Together withidentity managementandauthenticationservices anddomain nameservices, these allow companies and consumers to engage in secure communications and commerce. Several versions of SSL and TLS are commonly used today in applications such as web browsing, e-mail, internet faxing,instant messaging, andVoIP(voice-over-IP). There are variousinteroperableimplementations of these technologies, including at least one implementation that isopen source. Open source allows anyone to view the application'ssource code, and look for and report vulnerabilities.
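What makes TLS effective against man-in-the-middle attacks is not just encryption but certificate and hostname verification. As a small sketch, Python's standard library exposes these protections as defaults, which can be inspected without making any network connection:

```python
import ssl

# ssl.create_default_context() enables the settings that defeat simple
# man-in-the-middle attacks: the peer must present a certificate that
# chains to a trusted CA, and it must match the hostname requested.
ctx = ssl.create_default_context()

print(ctx.verify_mode == ssl.CERT_REQUIRED)  # True: cert is mandatory
print(ctx.check_hostname)                    # True: hostname is checked
```

Applications that disable either check (a common shortcut during development) silently reintroduce the interception risk TLS was deployed to prevent.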
The credit card companiesVisaandMasterCardcooperated to develop the secureEMVchip which is embedded in credit cards. Further developments include theChip Authentication Programwhere banks give customers hand-held card readers to perform online secure transactions. Other developments in this arena include the development of technology such as Instant Issuance which has enabled shoppingmall kiosksacting on behalf of banks to issue on-the-spot credit cards to interested customers.
Computers control functions at many utilities, including coordination oftelecommunications, thepower grid,nuclear power plants, and valve opening and closing in water and gas networks. The Internet is a potential attack vector for such machines if connected, but theStuxnetworm demonstrated that even equipment controlled by computers not connected to the Internet can be vulnerable. In 2014, theComputer Emergency Readiness Team, a division of theDepartment of Homeland Security, investigated 79 hacking incidents at energy companies.[112]
Theaviationindustry is very reliant on a series of complex systems which could be attacked.[113]A simple power outage at one airport can cause repercussions worldwide,[114]much of the system relies on radio transmissions which could be disrupted,[115]and controlling aircraft over oceans is especially dangerous because radar surveillance only extends 175 to 225 miles offshore.[116]There is also potential for attack from within an aircraft.[117]
Implementing fixes in aerospace systems poses a unique challenge because efficient air transportation is heavily affected by weight and volume. Improving security by adding physical devices to airplanes could increase their unloaded weight, and could potentially reduce cargo or passenger capacity.[118]
In Europe, with the (Pan-European Network Service)[119]and NewPENS,[120]and in the US with the NextGen program,[121]air navigation service providersare moving to create their own dedicated networks.
Many modern passports are nowbiometric passports, containing an embeddedmicrochipthat stores a digitized photograph and personal information such as name, gender, and date of birth. In addition, more countries[which?]are introducingfacial recognition technologyto reduceidentity-related fraud. The introduction of the ePassport has assisted border officials in verifying the identity of the passport holder, thus allowing for quick passenger processing.[122]Plans are under way in the US, theUK, andAustraliato introduce SmartGate kiosks with both retina andfingerprint recognitiontechnology.[123]The airline industry is moving from the use of traditional paper tickets towards the use ofelectronic tickets(e-tickets). These have been made possible by advances in online credit card transactions in partnership with the airlines. Long-distance bus companies[which?]are also switching over to e-ticketing transactions today.
The consequences of a successful attack range from loss of confidentiality to loss of system integrity,air traffic controloutages, loss of aircraft, and even loss of life.
Desktop computers and laptops are commonly targeted to gather passwords or financial account information or to construct a botnet to attack another target.Smartphones,tablet computers,smart watches, and othermobile devicessuch asquantified selfdevices likeactivity trackershave sensors such as cameras, microphones, GPS receivers, compasses, andaccelerometerswhich could be exploited, and may collect personal information, including sensitive health information. WiFi, Bluetooth, and cell phone networks on any of these devices could be used as attack vectors, and sensors might be remotely activated after a successful breach.[124]
The increasing number ofhome automationdevices such as theNest thermostatare also potential targets.[124]
Today many healthcare providers andhealth insurancecompanies use the internet to provide enhanced products and services. Examples are the use oftele-healthto potentially offer better quality and access to healthcare, or fitness trackers to lower insurance premiums.[citation needed]Patient records are increasingly being placed on secure in-house networks, alleviating the need for extra storage space.[125]
Large corporations are common targets. In many cases attacks are aimed at financial gain throughidentity theftand involvedata breaches. Examples include the loss of millions of clients' credit card and financial details byHome Depot,[126]Staples,[127]Target Corporation,[128]andEquifax.[129]
Medical records have been targeted in general identity theft, health insurance fraud, and impersonation of patients to obtain prescription drugs for recreational purposes or resale.[130]Although cyber threats continue to increase, 62% of all organizations did not increase security training for their business in 2015.[131]
Not all attacks are financially motivated, however: security firmHBGary Federalhad a serious series of attacks in 2011 fromhacktivistgroupAnonymousin retaliation for the firm's CEO claiming to have infiltrated their group,[132][133]andSony Pictureswashacked in 2014with the apparent dual motive of embarrassing the company through data leaks and crippling the company by wiping workstations and servers.[134][135]
Vehicles are increasingly computerized, with engine timing,cruise control,anti-lock brakes, seat belt tensioners, door locks,airbagsandadvanced driver-assistance systemson many models. Additionally,connected carsmay use WiFi and Bluetooth to communicate with onboard consumer devices and the cell phone network.[136]Self-driving carsare expected to be even more complex. All of these systems carry some security risks, and such issues have gained wide attention.[137][138][139]
Simple examples of risk include a maliciouscompact discbeing used as an attack vector,[140]and the car's onboard microphones being used for eavesdropping. However, if access is gained to a car's internalcontroller area network, the danger is much greater[136]– and in a widely publicized 2015 test, hackers remotely carjacked a vehicle from 10 miles away and drove it into a ditch.[141][142]
Manufacturers are reacting in numerous ways, withTeslain 2016 pushing out some security fixesover the airinto its cars' computer systems.[143]In the area of autonomous vehicles, in September 2016 theUnited States Department of Transportationannounced some initial safety standards, and called for states to come up with uniform policies.[144][145][146]
Additionally, e-Drivers' licenses are being developed using the same technology. For example, Mexico's licensing authority (ICV) has used a smart card platform to issue the first e-Drivers' licenses to the city ofMonterrey, in the state ofNuevo León.[147]
Shipping companies[148]have adoptedRFID(Radio Frequency Identification) technology as an efficient, digitally secure,tracking device. Unlike abarcode, RFID can be read up to 20 feet away. RFID is used byFedEx[149]andUPS.[150]
Government andmilitarycomputer systems are commonly attacked by activists[151][152][153]and foreign powers.[154][155][156][157]Local and regional government infrastructure, such astraffic lightcontrols, police and intelligence agency communications,personnel records, and student records, is also at risk.[158]
TheFBI,CIA, andPentagon, all utilize secure controlled access technology for any of their buildings. However, the use of this form of technology is spreading into the entrepreneurial world. More and more companies are taking advantage of the development of digitally secure controlled access technology. GE's ACUVision, for example, offers a single panel platform for access control, alarm monitoring and digital recording.[159]
TheInternet of things(IoT) is the network of physical objects such as devices, vehicles, and buildings that areembeddedwithelectronics,software,sensors, andnetwork connectivitythat enables them to collect and exchange data.[160]Concerns have been raised that this is being developed without appropriate consideration of the security challenges involved.[161][162]
While the IoT creates opportunities for more direct integration of the physical world into computer-based systems,[163][164]it also provides opportunities for misuse. In particular, as the Internet of Things spreads widely, cyberattacks are likely to become an increasingly physical (rather than simply virtual) threat.[165]If a front door's lock is connected to the Internet, and can be locked/unlocked from a phone, then a criminal could enter the home at the press of a button from a stolen or hacked phone. People could stand to lose much more than their credit card numbers in a world controlled by IoT-enabled devices. Thieves have also used electronic means to circumvent non-Internet-connected hotel door locks.[166]
An attack aimed at physical infrastructure or human lives is often called a cyber-kinetic attack. As IoT devices and appliances become more widespread, the prevalence and potential damage of cyber-kinetic attacks can increase substantially.
Medical deviceshave either been successfully attacked or had potentially deadly vulnerabilities demonstrated, including both in-hospital diagnostic equipment[167]and implanted devices includingpacemakers[168]andinsulin pumps.[169]There are many reports of hospitals and hospital organizations getting hacked, includingransomwareattacks,[170][171][172][173]Windows XPexploits,[174][175]viruses,[176][177]and data breaches of sensitive data stored on hospital servers.[178][171][179][180]On 28 December 2016 the USFood and Drug Administrationreleased its recommendations for how medicaldevice manufacturersshould maintain the security of Internet-connected devices – but no structure for enforcement.[181][182]
In distributed generation systems, the risk of a cyber attack is real, according toDaily Energy Insider. An attack could cause a loss of power in a large area for a long period of time, and such an attack could have just as severe consequences as a natural disaster. The District of Columbia is considering creating a Distributed Energy Resources (DER) Authority within the city, with the goal being for customers to have more insight into their own energy use and giving the local electric utility,Pepco, the chance to better estimate energy demand. The D.C. proposal, however, would "allow third-party vendors to create numerous points of energy distribution, which could potentially create more opportunities for cyber attackers to threaten the electric grid."[183]
Perhaps the most widely known digitally secure telecommunication device is theSIM(Subscriber Identity Module) card, a device that is embedded in most of the world's cellular devices before any service can be obtained. The SIM card is just the beginning of this digitally secure environment.
The Smart Card Web Servers draft standard (SCWS) defines the interfaces to anHTTP serverin asmart card.[184]Tests are being conducted to secure OTA ("over-the-air") payment and credit card information from and to a mobile phone.
Combination SIM/DVD devices are being developed through Smart Video Card technology which embeds aDVD-compliantoptical discinto the card body of a regular SIM card.
Other telecommunication developments involving digital security includemobile signatures, which use the embedded SIM card to generate a legally bindingelectronic signature.
Serious financial damage has been caused bysecurity breaches, but because there is no standard model for estimating the cost of an incident, the only data available is that which is made public by the organizations involved. "Several computer security consulting firms produce estimates of total worldwide losses attributable tovirusand worm attacks and to hostile digital acts in general. The 2003 loss estimates by these firms range from $13 billion (worms and viruses only) to $226 billion (for all forms of covert attacks). The reliability of these estimates is often challenged; the underlying methodology is basically anecdotal."[185]
However, reasonable estimates of the financial cost of security breaches can actually help organizations make rational investment decisions. According to the classicGordon-Loeb Modelanalyzing the optimal investment level in information security, one can conclude that the amount a firm spends to protect information should generally be only a small fraction of the expected loss (i.e., theexpected valueof the loss resulting from a cyber/informationsecurity breach).[186]
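The Gordon-Loeb result has a simple quantitative upper bound: the optimal investment in protecting an information set should not exceed 1/e (roughly 37%) of the expected loss from its breach. A worked example with a hypothetical loss figure:

```python
import math

# Gordon-Loeb upper bound: optimal security spending on an information set
# never exceeds 1/e (~36.8%) of the expected breach loss.
# The dollar figure below is purely illustrative.
expected_loss = 10_000_000  # hypothetical expected loss from a breach, in dollars

upper_bound = expected_loss / math.e
print(round(upper_bound))   # 3678794 -- spending beyond this is suboptimal
```

So under the model, a firm facing an expected breach loss of $10 million should spend at most about $3.68 million on protecting that information, regardless of how vulnerable it is.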
As withphysical security, the motivations for breaches of computer security vary between attackers. Some are thrill-seekers orvandals, some are activists, others are criminals looking for financial gain. State-sponsored attackers are now common and well resourced but started with amateurs such as Markus Hess who hacked for theKGB, as recounted byClifford StollinThe Cuckoo's Egg.
Attackers' motivations can vary for all types of attacks, from pleasure to political goals.[15] For example, hacktivists may target a company or organization that carries out activities they do not agree with, aiming to create bad publicity for the company by making its website crash.
Highly capable hackers, often with larger backing or state sponsorship, may attack based on the demands of their financial backers. Such attacks tend to be more serious. An example was the 2015 Ukraine power grid hack, which reportedly used spear-phishing, destruction of files, and denial-of-service attacks to carry out the full attack.[187][188]
Additionally, recent attacker motivations can be traced back to extremist organizations seeking to gain political advantage or disrupt social agendas.[189] The growth of the internet, mobile technologies, and inexpensive computing devices has led to a rise in capabilities but also to increased risk to environments deemed vital to operations. All critical targeted environments are susceptible to compromise, and this has led to a series of proactive studies on how to mitigate the risk by taking into consideration the motivations of these types of actors. Several stark differences exist between the hacker motivation and that of nation state actors seeking to attack based on an ideological preference.[190]
A key aspect of threat modeling for any system is identifying the motivations behind potential attacks and the individuals or groups likely to carry them out. The level and detail of security measures will differ based on the specific system being protected. For instance, a home personal computer, a bank, and a classified military network each face distinct threats, despite using similar underlying technologies.[191]
Computer security incident management is an organized approach to addressing and managing the aftermath of a computer security incident or compromise, with the goal of preventing a breach or thwarting a cyberattack. An incident that is not identified and managed at the time of intrusion typically escalates to a more damaging event such as a data breach or system failure. The intended outcome of a computer security incident response plan is to contain the incident, limit damage and assist recovery to business as usual. Responding to compromises quickly can mitigate exploited vulnerabilities, restore services and processes and minimize losses.[192] Incident response planning allows an organization to establish a series of best practices to stop an intrusion before it causes damage. Typical incident response plans contain a set of written instructions that outline the organization's response to a cyberattack. Without a documented plan in place, an organization may not successfully detect an intrusion or compromise, and stakeholders may not understand their roles, processes and procedures during an escalation, slowing the organization's response and resolution.
There are four key components of a computer security incident response plan:
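The incident response life cycle is commonly drawn from NIST SP 800-61's four phases; the sketch below models those phases and the feedback loop from post-incident review back into preparation. The enum and helper names are illustrative, not part of any standard API:

```python
from enum import Enum

class IncidentPhase(Enum):
    # Phases as commonly given in NIST SP 800-61's incident handling life cycle
    PREPARATION = "preparation"
    DETECTION_AND_ANALYSIS = "detection and analysis"
    CONTAINMENT_ERADICATION_RECOVERY = "containment, eradication, and recovery"
    POST_INCIDENT_ACTIVITY = "post-incident activity"

def next_phase(phase: IncidentPhase) -> IncidentPhase:
    """Advance through the life cycle; lessons learned in the post-incident
    phase feed back into preparation, hence the wrap-around."""
    order = list(IncidentPhase)
    return order[(order.index(phase) + 1) % len(order)]

print(next_phase(IncidentPhase.DETECTION_AND_ANALYSIS).value)
# → containment, eradication, and recovery
```

The wrap-around in `next_phase` captures the point made above: without the post-incident review feeding back into preparation, the organization repeats the same response failures in the next escalation.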
Some illustrative examples of different types of computer security breaches are given below.
In 1988, 60,000 computers were connected to the Internet, and most were mainframes, minicomputers and professional workstations. On 2 November 1988, many started to slow down, because they were running malicious code that demanded processor time and that spread itself to other computers – the first internet computer worm.[194] The software was traced back to 23-year-old Cornell University graduate student Robert Tappan Morris, who said "he wanted to count how many machines were connected to the Internet".[194]
In 1994, over a hundred intrusions were made by unidentified crackers into the Rome Laboratory, the US Air Force's main command and research facility. Using trojan horses, hackers were able to obtain unrestricted access to Rome's networking systems and remove traces of their activities. The intruders were able to obtain classified files, such as air tasking order systems data, and were furthermore able to penetrate connected networks of the National Aeronautics and Space Administration's Goddard Space Flight Center, Wright-Patterson Air Force Base, some defense contractors, and other private sector organizations, by posing as a trusted Rome center user.[195]
In early 2007, American apparel and home goods company TJX announced that it was the victim of an unauthorized computer systems intrusion[196] and that the hackers had accessed a system that stored data on credit card, debit card, check, and merchandise return transactions.[197]
In 2010, the computer worm known as Stuxnet reportedly ruined almost one-fifth of Iran's nuclear centrifuges.[198] It did so by disrupting industrial programmable logic controllers (PLCs) in a targeted attack. This is generally believed to have been launched by Israel and the United States to disrupt Iran's nuclear program[199][200][201][202] – although neither has publicly admitted this.
In early 2013, documents provided by Edward Snowden were published by The Washington Post and The Guardian,[203][204] exposing the massive scale of NSA global surveillance. There were also indications that the NSA may have inserted a backdoor in a NIST standard for encryption.[205] This standard was later withdrawn due to widespread criticism.[206] The NSA was additionally revealed to have tapped the links between Google's data centers.[207]
A Ukrainian hacker known as Rescator broke into Target Corporation computers in 2013, stealing roughly 40 million credit cards,[208] and then Home Depot computers in 2014, stealing between 53 and 56 million credit card numbers.[209] Warnings were delivered at both corporations, but were ignored; physical security breaches using self-checkout machines are believed to have played a large role. "The malware utilized is absolutely unsophisticated and uninteresting," says Jim Walter, director of threat intelligence operations at security technology company McAfee – meaning that the heists could have easily been stopped by existing antivirus software had administrators responded to the warnings. The size of the thefts has resulted in major attention from state and federal United States authorities, and the investigation is ongoing.
In April 2015, the Office of Personnel Management discovered it had been hacked more than a year earlier in a data breach, resulting in the theft of approximately 21.5 million personnel records handled by the office.[210] The Office of Personnel Management hack has been described by federal officials as among the largest breaches of government data in the history of the United States.[211] Data targeted in the breach included personally identifiable information such as Social Security numbers, names, dates and places of birth, addresses, and fingerprints of current and former government employees as well as anyone who had undergone a government background check.[212][213] It is believed the hack was perpetrated by Chinese hackers.[214]
In July 2015, a hacker group known as The Impact Team successfully breached the extramarital relationship website Ashley Madison, created by Avid Life Media. The group claimed that they had taken not only company data but user data as well. After the breach, The Impact Team dumped emails from the company's CEO to prove their point, and threatened to dump customer data unless the website was taken down permanently.[215] When Avid Life Media did not take the site offline, the group released two more compressed files, one 9.7GB and the second 20GB. After the second data dump, Avid Life Media CEO Noel Biderman resigned, but the website remained functional.
In June 2021, a cyberattack took down the largest fuel pipeline in the U.S. and led to shortages across the East Coast.[216]
International legal issues of cyber attacks are complicated in nature. There is no global base of common rules to judge, and eventually punish, cybercrimes and cybercriminals – and where security firms or agencies do locate the cybercriminal behind the creation of a particular piece of malware or form of cyber attack, often the local authorities cannot take action due to lack of laws under which to prosecute.[217][218] Proving attribution for cybercrimes and cyberattacks is also a major problem for all law enforcement agencies. "Computer viruses switch from one country to another, from one jurisdiction to another – moving around the world, using the fact that we don't have the capability to globally police operations like this. So the Internet is as if someone [had] given free plane tickets to all the online criminals of the world."[217] The use of techniques such as dynamic DNS, fast flux and bulletproof servers adds to the difficulty of investigation and enforcement.
The role of the government is to make regulations to force companies and organizations to protect their systems, infrastructure and information from any cyberattacks, but also to protect its own national infrastructure, such as the national power grid.[219]
The government's regulatory role in cyberspace is complicated. For some, cyberspace was seen as a virtual space that was to remain free of government intervention, as can be seen in many of today's libertarian blockchain and bitcoin discussions.[220]
Many government officials and experts think that the government should do more and that there is a crucial need for improved regulation, mainly due to the failure of the private sector to solve the cybersecurity problem efficiently. R. Clarke said during a panel discussion at the RSA Security Conference in San Francisco that he believes the "industry only responds when you threaten regulation. If the industry doesn't respond (to the threat), you have to follow through."[221] On the other hand, executives from the private sector agree that improvements are necessary, but think that government intervention would affect their ability to innovate efficiently. Daniel R. McCarthy analyzed this public-private partnership in cybersecurity and reflected on the role of cybersecurity in the broader constitution of political order.[222]
On 22 May 2020, the UN Security Council held its second-ever informal meeting on cybersecurity, focusing on cyber challenges to international peace. According to UN Secretary-General António Guterres, new technologies are too often used to violate rights.[223]
Many different teams and organizations exist, including:
On 14 April 2016, the European Parliament and the Council of the European Union adopted the General Data Protection Regulation (GDPR). The GDPR, which came into force on 25 May 2018, grants individuals within the European Union (EU) and the European Economic Area (EEA) the right to the protection of personal data. The regulation requires that any entity that processes personal data incorporate data protection by design and by default. It also requires that certain organizations appoint a Data Protection Officer (DPO).
The IT security association TeleTrusT, an international competence network for IT security, has existed in Germany since June 1986.
Most countries have their own computer emergency response team to protect network security.
Since 2010, Canada has had a cybersecurity strategy.[229][230] This functions as a counterpart document to the National Strategy and Action Plan for Critical Infrastructure.[231] The strategy has three main pillars: securing government systems, securing vital private cyber systems, and helping Canadians to be secure online.[230][231] There is also a Cyber Incident Management Framework to provide a coordinated response in the event of a cyber incident.[232][233]
The Canadian Cyber Incident Response Centre (CCIRC) is responsible for mitigating and responding to threats to Canada's critical infrastructure and cyber systems. It provides support to mitigate cyber threats, technical support to respond to and recover from targeted cyber attacks, and online tools for members of Canada's critical infrastructure sectors.[234] It posts regular cybersecurity bulletins[235] and operates an online reporting tool where individuals and organizations can report a cyber incident.[236]
To inform the general public on how to protect themselves online, Public Safety Canada has partnered with STOP.THINK.CONNECT, a coalition of non-profit, private sector, and government organizations,[237] and launched the Cyber Security Cooperation Program.[238][239] They also run the GetCyberSafe portal for Canadian citizens, and Cyber Security Awareness Month during October.[240]
Public Safety Canada aims to begin an evaluation of Canada's cybersecurity strategy in early 2015.[231]
The Australian federal government announced an $18.2 million investment to strengthen the cybersecurity resilience of small and medium enterprises (SMEs) and enhance their capabilities in responding to cyber threats. This funding is an integral component of the 2023-2030 Australian Cyber Security Strategy, slated for release within the current week. A substantial allocation of $7.2 million is earmarked for the establishment of a voluntary cyber health check program, enabling businesses to conduct a comprehensive and tailored self-assessment of their cybersecurity maturity.
This health check serves as a diagnostic tool, enabling enterprises to gauge their standing against Australia's cyber security regulations. It also affords them access to a repository of educational resources and materials, fostering the acquisition of skills necessary for an improved cybersecurity posture. The initiative was jointly announced by Minister for Cyber Security Clare O'Neil and Minister for Small Business Julie Collins.[241]
Some provisions for cybersecurity have been incorporated into rules framed under the Information Technology Act 2000.[242]
The National Cyber Security Policy 2013 is a policy framework by the Ministry of Electronics and Information Technology (MeitY) which aims to protect the public and private infrastructure from cyberattacks and safeguard "information, such as personal information (of web users), financial and banking information and sovereign data". CERT-In is the nodal agency which monitors cyber threats in the country. The post of National Cyber Security Coordinator has also been created in the Prime Minister's Office (PMO).
The Indian Companies Act 2013 has also introduced cyber law and cybersecurity obligations on the part of Indian directors. Some provisions for cybersecurity have been incorporated into rules framed under the Information Technology Act 2000, updated in 2013.[243]
Following cyberattacks in the first half of 2013, when the government, news media, television stations, and bank websites were compromised, the national government committed to the training of 5,000 new cybersecurity experts by 2017. The South Korean government blamed its northern counterpart for these attacks, as well as incidents that occurred in 2009, 2011,[244] and 2012, but Pyongyang denies the accusations.[245]
The United States has its first fully formed cyber plan in 15 years, as a result of the release of this National Cyber plan.[246] In this policy, the US says it will: protect the country by keeping networks, systems, functions, and data safe; promote American wealth by building a strong digital economy and encouraging strong domestic innovation; preserve peace and safety by making it easier for the US to stop people from using computer tools for malicious purposes, working with friends and partners to do this; and increase the United States' impact around the world to support the main ideas behind an open, safe, reliable, and compatible Internet.[247]
The new U.S. cyber strategy[248] seeks to allay some of those concerns by promoting responsible behavior in cyberspace, urging nations to adhere to a set of norms, both through international law and voluntary standards. It also calls for specific measures to harden U.S. government networks from attacks, like the June 2015 intrusion into the U.S. Office of Personnel Management (OPM), which compromised the records of about 4.2 million current and former government employees. And the strategy calls for the U.S. to continue to name and shame bad cyber actors, calling them out publicly for attacks when possible, along with the use of economic sanctions and diplomatic pressure.[249]
The key legislation is the 1986 Computer Fraud and Abuse Act (18 U.S.C. § 1030). It prohibits unauthorized access to or damage of protected computers as defined in 18 U.S.C. § 1030(e)(2). Although various other measures have been proposed,[250][251] none have succeeded.
In 2013, Executive Order 13636, Improving Critical Infrastructure Cybersecurity, was signed, which prompted the creation of the NIST Cybersecurity Framework.
In response to the Colonial Pipeline ransomware attack,[252] President Joe Biden signed Executive Order 14028[253] on May 12, 2021, to increase software security standards for sales to the government, tighten detection and security on existing systems, improve information sharing and training, establish a Cyber Safety Review Board, and improve incident response.
The General Services Administration (GSA) has[when?] standardized the penetration test service as a pre-vetted support service, to rapidly address potential vulnerabilities and stop adversaries before they impact US federal, state and local governments. These services are commonly referred to as Highly Adaptive Cybersecurity Services (HACS).
The Department of Homeland Security has a dedicated division responsible for the response system, risk management program and requirements for cybersecurity in the United States called the National Cyber Security Division.[254][255] The division is home to US-CERT operations and the National Cyber Alert System.[255] The National Cybersecurity and Communications Integration Center brings together government organizations responsible for protecting computer networks and networked infrastructure.[256]
The third priority of the FBI is to: "Protect the United States against cyber-based attacks and high-technology crimes",[257] and they, along with the National White Collar Crime Center (NW3C) and the Bureau of Justice Assistance (BJA), are part of the multi-agency task force, the Internet Crime Complaint Center, also known as IC3.[258]
In addition to its own specific duties, the FBI participates alongside non-profit organizations such as InfraGard.[259][260]
The Computer Crime and Intellectual Property Section (CCIPS) operates in the United States Department of Justice Criminal Division. The CCIPS is in charge of investigating computer crime and intellectual property crime and is specialized in the search and seizure of digital evidence in computers and networks.[261] In 2017, CCIPS published A Framework for a Vulnerability Disclosure Program for Online Systems to help organizations "clearly describe authorized vulnerability disclosure and discovery conduct, thereby substantially reducing the likelihood that such described activities will result in a civil or criminal violation of law under the Computer Fraud and Abuse Act (18 U.S.C. § 1030)."[262]
The United States Cyber Command, also known as USCYBERCOM, "has the mission to direct, synchronize, and coordinate cyberspace planning and operations to defend and advance national interests in collaboration with domestic and international partners."[263] It has no role in the protection of civilian networks.[264][265]
The U.S. Federal Communications Commission's role in cybersecurity is to strengthen the protection of critical communications infrastructure, to assist in maintaining the reliability of networks during disasters, to aid in swift recovery after, and to ensure that first responders have access to effective communications services.[266]
The Food and Drug Administration has issued guidance for medical devices,[267] and the National Highway Traffic Safety Administration[268] is concerned with automotive cybersecurity. After being criticized by the Government Accountability Office,[269] and following successful attacks on airports and claimed attacks on airplanes, the Federal Aviation Administration has devoted funding to securing systems on board the planes of private manufacturers, and the Aircraft Communications Addressing and Reporting System.[270] Concerns have also been raised about the future Next Generation Air Transportation System.[271]
The US Department of Defense (DoD) issued DoD Directive 8570 in 2004, supplemented by DoD Directive 8140, requiring all DoD employees and all DoD contract personnel involved in information assurance roles and activities to earn and maintain various industry Information Technology (IT) certifications. This is intended to ensure that all DoD personnel involved in network infrastructure defense have minimum levels of IT industry recognized knowledge, skills and abilities (KSA). Andersson and Reimers (2019) report that these certifications range from CompTIA's A+ and Security+ through (ISC)²'s CISSP, etc.[272]
Computer emergency response team is a name given to expert groups that handle computer security incidents. In the US, two distinct organizations exist, although they do work closely together.
In the context of U.S. nuclear power plants, the U.S. Nuclear Regulatory Commission (NRC) outlines cybersecurity requirements under 10 CFR Part 73, specifically in §73.54.[274]
The Nuclear Energy Institute's NEI 08-09 document, Cyber Security Plan for Nuclear Power Reactors,[275] outlines a comprehensive framework for cybersecurity in the nuclear power industry. Drafted with input from the U.S. NRC, this guideline is instrumental in aiding licensees to comply with the Code of Federal Regulations (CFR), which mandates robust protection of digital computers and equipment and communications systems at nuclear power plants against cyber threats.[276]
There is growing concern that cyberspace will become the next theater of warfare. As Mark Clayton from The Christian Science Monitor wrote in a 2015 article titled "The New Cyber Arms Race":
In the future, wars will not just be fought by soldiers with guns or with planes that drop bombs. They will also be fought with the click of a mouse a half a world away that unleashes carefully weaponized computer programs that disrupt or destroy critical industries like utilities, transportation, communications, and energy. Such attacks could also disable military networks that control the movement of troops, the path of jet fighters, the command and control of warships.[277]
This has led to new terms such as cyberwarfare and cyberterrorism. The United States Cyber Command was created in 2009[278] and many other countries have similar forces.
There are a few critical voices that question whether cybersecurity is as significant a threat as it is made out to be.[279][280][281]
Cybersecurity is a fast-growing field of IT concerned with reducing organizations' risk of hacks or data breaches.[282] According to research from the Enterprise Strategy Group, 46% of organizations said that they had a "problematic shortage" of cybersecurity skills in 2016, up from 28% in 2015.[283] Commercial, government and non-governmental organizations all employ cybersecurity professionals. The fastest increases in demand for cybersecurity workers are in industries managing increasing volumes of consumer data such as finance, health care, and retail.[284] However, the use of the term cybersecurity is more prevalent in government job descriptions.[285]
Typical cybersecurity job titles and descriptions include:[286]
Student programs are also available for people interested in beginning a career in cybersecurity.[290][291] Meanwhile, a flexible and effective option for information security professionals of all experience levels to keep studying is online security training, including webcasts.[292][293] A wide range of certified courses are also available.[294]
In the United Kingdom, a nationwide set of cybersecurity forums, known as the U.K. Cyber Security Forum, was established, supported by the Government's cybersecurity strategy,[295] in order to encourage start-ups and innovation and to address the skills gap[296] identified by the U.K. Government.
In Singapore, the Cyber Security Agency has issued a Singapore Operational Technology (OT) Cybersecurity Competency Framework (OTCCF). The framework defines emerging cybersecurity roles in Operational Technology. The OTCCF was endorsed by the Infocomm Media Development Authority (IMDA). It outlines the different OT cybersecurity job positions as well as the technical skills and core competencies necessary. It also depicts the many career paths available, including vertical and lateral advancement opportunities.[297]
The following terms used with regards to computer security are explained below:
Since the Internet's arrival and with the digital transformation initiated in recent years, the notion of cybersecurity has become a familiar subject in both our professional and personal lives. Cybersecurity and cyber threats have been consistently present for the last 60 years of technological change. In the 1970s and 1980s, computer security was mainly limited to academia until the conception of the Internet, where, with increased connectivity, computer viruses and network intrusions began to take off. After the spread of viruses in the 1990s, the 2000s marked the institutionalization of organized attacks such as distributed denial of service.[301] This led to the formalization of cybersecurity as a professional discipline.[302]
The April 1967 session organized by Willis Ware at the Spring Joint Computer Conference, and the later publication of the Ware Report, were foundational moments in the history of the field of computer security.[303] Ware's work straddled the intersection of material, cultural, political, and social concerns.[303]
A 1977 NIST publication[304] introduced the CIA triad of confidentiality, integrity, and availability as a clear and simple way to describe key security goals.[305] While still relevant, many more elaborate frameworks have since been proposed.[306][307]
However, in the 1970s and 1980s, there were no grave computer threats because computers and the internet were still developing, and security threats were easily identifiable. More often, threats came from malicious insiders who gained unauthorized access to sensitive documents and files. Although malware and network breaches existed during the early years, attackers did not use them for financial gain. By the second half of the 1970s, established computer firms like IBM started offering commercial access control systems and computer security software products.[308]
One of the earliest examples of an attack on a computer network was the computer worm Creeper, written by Bob Thomas at BBN, which propagated through the ARPANET in 1971.[309] The program was purely experimental in nature and carried no malicious payload. A later program, Reaper, was created by Ray Tomlinson in 1972 and used to destroy Creeper.[citation needed]
Between September 1986 and June 1987, a group of German hackers performed the first documented case of cyber espionage.[310] The group hacked into American defense contractors, universities, and military base networks and sold gathered information to the Soviet KGB. The group was led by Markus Hess, who was arrested on 29 June 1987. He was convicted of espionage (along with two co-conspirators) on 15 February 1990.
In 1988, one of the first computer worms, called the Morris worm, was distributed via the Internet. It gained significant mainstream media attention.[311]
Netscape started developing the protocol SSL shortly after the National Center for Supercomputing Applications (NCSA) launched Mosaic 1.0, the first popular web browser, in 1993.[312][313] Netscape had SSL version 1.0 ready in 1994, but it was never released to the public due to many serious security vulnerabilities.[312] However, in 1995, Netscape launched Version 2.0.[314]
The National Security Agency (NSA) is responsible for the protection of U.S. information systems and also for collecting foreign intelligence.[315] The agency analyzes commonly used software and system configurations to find security flaws, which it can use for offensive purposes against competitors of the United States.[316]
NSA contractors created and sold click-and-shoot attack tools to US agencies and close allies, but eventually, the tools made their way to foreign adversaries.[317] In 2016, the NSA's own hacking tools were hacked, and they have been used by Russia and North Korea.[citation needed] NSA employees and contractors have been recruited at high salaries by adversaries anxious to compete in cyberwarfare.[citation needed] In 2007, the United States and Israel began exploiting security flaws in the Microsoft Windows operating system to attack and damage equipment used in Iran to refine nuclear materials. Iran responded by heavily investing in its own cyberwarfare capability, which it began using against the United States.[316]
Computer security compromised by hardware failure is a branch of computer security applied to hardware.
The objective of computer security includes protection of information and property from theft, corruption, or natural disaster, while allowing the information and property to remain accessible and productive to its intended users.[1] Such secret information can be retrieved in different ways. This article focuses on the retrieval of data through misused hardware or hardware failure. Hardware can be misused or exploited to obtain secret data. This article collects the main types of attack that can lead to data theft.
Computer security can be compromised through devices such as keyboards, monitors or printers (via electromagnetic or acoustic emanations, for example) or through components of the computer, such as the memory, the network card or the processor (via timing or temperature analysis, for example).
The monitor is the main device used to access data on a computer. It has been shown that monitors radiate or reflect data on their environment, potentially giving attackers access to information displayed on the monitor.
Video display units radiate:
Known as compromising emanations or TEMPEST radiation (a code word for a U.S. government programme aimed at attacking the problem), the electromagnetic broadcast of data has been a significant concern in sensitive computer applications. Eavesdroppers can reconstruct video screen content from radio frequency emanations.[3] Each (radiated) harmonic of the video signal shows a remarkable resemblance to a broadcast TV signal. It is therefore possible to reconstruct the picture displayed on the video display unit from the radiated emission by means of a normal television receiver.[2] If no preventive measures are taken, eavesdropping on a video display unit is possible at distances up to several hundreds of meters, using only a normal black-and-white TV receiver, a directional antenna and an antenna amplifier. It is even possible to pick up information from some types of video display units at a distance of over 1 kilometer. If more sophisticated receiving and decoding equipment is used, the maximum distance can be much greater.[4]
What is displayed by the monitor is reflected on the environment. The time-varying diffuse reflections of the light emitted by a CRT monitor can be exploited to recover the original monitor image.[5] This is an eavesdropping technique for spying at a distance on data that is displayed on an arbitrary computer screen, including the currently prevalent LCD monitors.
The technique exploits reflections of the screen's optical emanations in various objects that one commonly finds close to the screen and uses those reflections to recover the original screen content. Such objects include eyeglasses, tea pots, spoons, plastic bottles, and even the eye of the user. This attack can be successfully mounted to spy on even small fonts using inexpensive, off-the-shelf equipment (less than 1500 dollars) from a distance of up to 10 meters. Relying on more expensive equipment allowed to conduct this attack from over 30 meters away, demonstrating that similar attacks are feasible from the other side of the street or from a close by building.[6]
Many objects found at a typical workplace can be exploited by an outsider to retrieve information on a computer's display.[7] Particularly good results were obtained from reflections in a user's eyeglasses or a tea pot on the desk next to the screen. Reflections from the eye of the user also provide good results. However, eyes are harder to spy on at a distance because they are fast-moving objects and require high exposure times. Using more expensive equipment with lower exposure times helps to remedy this problem.[8]
The reflections gathered from curved surfaces on nearby objects pose a substantial threat to the confidentiality of data displayed on the screen. Fully eliminating this threat without at the same time hiding the screen from the legitimate user seems difficult, short of using curtains on the windows or similar forms of strong optical shielding. Most users, however, will not be aware of this risk and may not be willing to close the curtains on a nice day.[9] The reflection of an object, a computer display, in a curved mirror creates a virtual image located behind the reflecting surface. For a flat mirror this virtual image has the same size and is located behind the mirror at the same distance as the original object. For curved mirrors, however, the situation is more complex.[10]
Computer keyboards are often used to transmit confidential data such as passwords. Since they contain electronic components, keyboards emit electromagnetic waves. These emanations could reveal sensitive information such as keystrokes.[11]Electromagnetic emanations have turned out to constitute a security threat to computer equipment.[9]The figure below presents how a keystroke is retrieved and what material is necessary.
The approach is to acquire the raw signal directly from the antenna and to process the entire captured electromagnetic spectrum. Thanks to this method, four different kinds of compromising electromagnetic emanations have been detected, generated by wired and wireless keyboards. These emissions lead to a full or partial recovery of the keystrokes. The best practical attack fully recovered 95% of the keystrokes of a PS/2 keyboard at a distance of up to 20 meters, even through walls.[11] Because each keyboard has a specific fingerprint based on clock-frequency inconsistencies, the source keyboard of a compromising emanation can be determined even if multiple keyboards of the same model are used at the same time.[12]
The four kinds of compromising electromagnetic emanations are described below.
When a key is pressed, released or held down, the keyboard sends a packet of information known as a scan code to the computer.[13]The protocol used to transmit these scan codes is a bidirectional serial communication, based on four wires: Vcc (5 volts), ground, data and clock.[13]Clock and data signals are identically generated. Hence, the compromising emanation detected is the combination of both signals. However, the edges of the data and the clock lines are not superposed. Thus, they can be easily separated to obtain independent signals.[14]
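Once clean clock and data traces have been separated, decoding is mechanical. The sketch below is a simplified model, not the published attack: it samples the data line on each falling clock edge and groups the bits into the standard 11-bit PS/2 frame (one start bit, 8 data bits LSB-first, an odd-parity bit, a stop bit). The waveform generator is purely illustrative.

```python
def make_waveform(scan_code):
    """Synthesize an idealized PS/2 frame as sampled clock/data lines."""
    payload = [(scan_code >> k) & 1 for k in range(8)]   # LSB first
    parity = (sum(payload) + 1) % 2                      # odd parity
    frame = [0] + payload + [parity, 1]                  # start..stop
    clock, data = [], []
    for b in frame:
        clock += [1, 0]                                  # one clock period per bit
        data += [b, b]
    return clock, data

def decode_ps2_frames(clock, data):
    """Recover scan codes from sampled clock/data lines (lists of 0/1).

    The host samples data on each falling clock edge; a frame is
    1 start bit (0), 8 data bits LSB-first, 1 odd-parity bit, 1 stop bit (1).
    """
    bits = []
    for i in range(1, len(clock)):
        if clock[i - 1] == 1 and clock[i] == 0:          # falling clock edge
            bits.append(data[i])
    frames = []
    for j in range(0, len(bits) - 10, 11):
        frame = bits[j:j + 11]
        start, payload, parity, stop = frame[0], frame[1:9], frame[9], frame[10]
        if start == 0 and stop == 1 and (sum(payload) + parity) % 2 == 1:
            frames.append(sum(b << k for k, b in enumerate(payload)))
    return frames

clk, dat = make_waveform(0x1C)          # scan code of "A" in scan code set 2
codes = decode_ps2_frames(clk, dat)     # → [0x1C]
```

An eavesdropper working from emanations faces the same frame structure, only with noise; the parity and stop-bit checks then serve to reject badly demodulated frames.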
The Falling Edge Transition attack is limited to a partial recovery of the keystrokes, which is a significant limitation.[15] The GTT is an improved falling edge transition attack that recovers almost all keystrokes. Indeed, between two traces, there is exactly one data rising edge. If attackers are able to detect this transition, they can fully recover the keystrokes.[15]
Harmonic compromising electromagnetic emissions come from unintentional emanations such as radiation emitted by the clock, non-linear elements, crosstalk, ground pollution, etc. Theoretically determining the causes of these compromising radiations is a very complex task.[16] These harmonics correspond to a carrier of approximately 4 MHz, very likely the internal clock of the micro-controller inside the keyboard. They are correlated with both the clock and data signals, describing modulated signals (in amplitude and frequency) and the full state of both lines. This means that the scan code can be completely recovered from these harmonics.[16]
Keyboard manufacturers arrange the keys in a matrix. The keyboard controller, often an 8-bit processor, parses columns one-by-one and recovers the state of 8 keys at once. This matrix scan process can be described as 192 keys (some keys may not be used, for instance modern keyboards use 104/105 keys) arranged in 24 columns and 8 rows.[17]These columns are continuously pulsed one-by-one for at least 3μs. Thus, these leads may act as an antenna and generate electromagnetic emanations. If an attacker is able to capture these emanations, he can easily recover the column of the pressed key. Even if this signal does not fully describe the pressed key, it still gives partial information on the transmitted scan code, i.e. the column number.[17]
Note that the matrix scan routine loops continuously. When no key is pressed, we still have a signal composed of multiple equidistant peaks. These emanations may be used to remotely detect the presence of powered computers. Concerning wireless keyboards, the wireless data burst transmission can be used as an electromagnetic trigger to detect exactly when a key is pressed, while the matrix scan emanations are used to determine the column it belongs to.[17]
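The partial leak of the matrix scan can be illustrated with a toy simulation (the 24×8 layout, key numbering and amplitude model below are assumptions, not captured signals): the scan routine produces one equidistant pulse per column, and a pressed key perturbs only its own column's pulse, so reading the trace narrows the key down to 8 candidates.

```python
# Hypothetical 24x8 layout: key number = column * ROWS + row.
COLS, ROWS = 24, 8

def scan_trace(pressed_key=None, cycles=2):
    """Simulate the matrix-scan emanation: one peak per column pulse,
    with a stronger peak when the pulsed column holds a pressed key
    (a simplified model of the real leakage)."""
    trace = []
    for _ in range(cycles):
        for col in range(COLS):
            amplitude = 1.0
            if pressed_key is not None and pressed_key // ROWS == col:
                amplitude = 2.0        # key press alters this column's pulse
            trace.append(amplitude)
    return trace

def recover_column(trace):
    """Column of the strongest pulse, i.e. the pressed key's column."""
    peak = max(range(len(trace)), key=lambda i: trace[i])
    return peak % COLS

column = recover_column(scan_trace(pressed_key=42))     # key 42 sits in column 5
candidates = [column * ROWS + r for r in range(ROWS)]   # 8 keys remain possible
```

With no key pressed, the simulated trace is just the uniform train of equidistant peaks described above, which is itself enough to detect a powered keyboard remotely.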
Some techniques can only target certain keyboards. The table below sums up which technique can be used to recover keystrokes for each kind of keyboard.
In their paper "Compromising Electromagnetic Emanations of Wired and Wireless Keyboards", Martin Vuagnoux and Sylvain Pasini tested 12 different keyboard models, with PS/2 and USB connectors and wireless communication, in different setups: a semi-anechoic chamber, a small office, an adjacent office and a flat in a building. The table below presents their results.
Attacks against emanations caused by human typing have attracted interest in recent years. In particular, works showed that keyboard acoustic emanations do leak information that can be exploited to reconstruct the typed text.[18]
PC and notebook keyboards are vulnerable to attacks based on differentiating the sounds emitted by different keys.[19] One attack takes as input an audio signal containing a recording of a single word typed by a single person on a keyboard, together with a dictionary of words. It is assumed that the typed word is present in the dictionary, and the aim of the attack is to reconstruct the original word from the signal.[20] Another attack takes as input a 10-minute sound recording of a user typing English text on a keyboard and recovers up to 96% of the typed characters.[21] These attacks are inexpensive, because the only additional hardware required is a parabolic microphone, and non-invasive, because they do not require physical intrusion into the system. One attack employs a neural network to recognize the key being pressed;[19] another combines signal processing with efficient data structures and algorithms to reconstruct single words of 7–13 characters from a recording of the clicks made when typing them.[18] The sound of the clicks can differ slightly from key to key, because the keys are positioned at different locations on the keyboard plate, even though the clicks of different keys sound similar to the human ear.[19]
On average, there were only 0.5 incorrect recognitions per 20 clicks, which shows how exposed keyboards are to eavesdropping via this attack.[22] The attack is very efficient, taking under 20 seconds per word on a standard PC. It achieves a 90% or better success rate of finding the correct word for words of 10 or more characters, and a success rate of 73% over all the words tested.[18] In practice, a human attacker can typically determine whether text is random. An attacker can also identify occasions when the user types user names and passwords.[23] Short audio signals containing a single word of seven or more characters were considered, meaning the signal is only a few seconds long. Such short words are often chosen as passwords.[18] The dominant factors affecting the attack's success are the word length and, more importantly, the number of repeated characters within the word.[18]
This procedure makes it possible to efficiently uncover a word from audio recordings of keyboard click sounds.[24] More recently, extracting information from another type of emanation was demonstrated: acoustic emanations from mechanical devices such as dot-matrix printers.[18]
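The core idea of these acoustic attacks can be sketched on synthetic data: each key's click is reduced to a spectral feature vector, and unknown clicks are matched to the nearest trained profile. Everything below is a stand-in for the real systems: the tone-based "clicks", the naive DFT band energies and the nearest-centroid matching replace real recordings, FFT/cepstrum features and the neural-network or HMM classifiers of the published work.

```python
import math

N = 128  # samples per recorded click

def synth_click(bin_freq, amp=1.0):
    """Toy key click: a pure tone at DFT bin `bin_freq` (real keys differ
    because they sit at different positions on the keyboard plate)."""
    return [amp * math.sin(2 * math.pi * bin_freq * t / N) for t in range(N)]

def band_energies(samples, bands=8):
    """Spectral energy in `bands` equal bands of a naive DFT."""
    per = (N // 2) // bands
    out = []
    for b in range(bands):
        e = 0.0
        for k in range(b * per, (b + 1) * per):
            re = sum(s * math.cos(2 * math.pi * k * t / N) for t, s in enumerate(samples))
            im = sum(s * math.sin(2 * math.pi * k * t / N) for t, s in enumerate(samples))
            e += re * re + im * im
        out.append(e)
    return out

def train(labelled_clicks):
    """One feature centroid per key, from labelled training clicks."""
    return {key: band_energies(click) for key, click in labelled_clicks}

def classify(click, centroids):
    """Label of the nearest centroid in feature space."""
    feats = band_energies(click)
    return min(centroids, key=lambda key: sum((a - b) ** 2
               for a, b in zip(feats, centroids[key])))

centroids = train([("q", synth_click(5)), ("w", synth_click(21))])
guess = classify(synth_click(21, amp=0.8), centroids)   # → "w"
```

Per-click classification like this yields a noisy character stream; the dictionary and language-model stages described above then turn it into words.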
While extracting private information by watching somebody typing on a keyboard might seem to be an easy task, it becomes extremely challenging if it has to be automated. However, an automated tool is needed in the case of long-lasting surveillance procedures or long user activity, as a human being is able to reconstruct only a few characters per minute. The paper "ClearShot: Eavesdropping on Keyboard Input from Video" presents a novel approach to automatically recovering the text being typed on a keyboard, based solely on a video of the user typing.[25]
Automatically recognizing the keys being pressed by a user is a hard problem that requires sophisticated motion analysis. Experiments show that, for a human, reconstructing a few sentences requires hours of slow-motion analysis of the video.[26] The attacker might install a surveillance device in the victim's room, take control of an existing camera by exploiting a vulnerability in its control software, or simply point a mobile phone with an integrated camera at the laptop's keyboard when the victim is working in a public space.[26]
Balzarotti's analysis is divided into two main phases (figure below).
The first phase analyzes the video recorded by the camera using computer vision techniques. For each frame of the video, the computer vision analysis computes the set of keys that were likely pressed, the set of keys that were certainly not pressed, and the position of space characters. Because the results of this phase of the analysis are noisy, a second phase, called the text analysis, is required. The goal of this phase is to remove errors using both language and context-sensitive techniques. The result of this phase is the reconstructed text, where each word is represented by a list of possible candidates, ranked by likelihood.[26]
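A minimal sketch of such a text-analysis phase is shown below. The scoring rule and the word list are invented for illustration (the real system uses language- and context-sensitive models): each dictionary word of the right length is ranked by how many of its characters fall in the per-position candidate sets produced by the vision phase.

```python
def rank_candidates(likely_sets, dictionary):
    """likely_sets[i] is the set of keys the (noisy) computer-vision
    phase considered likely for character position i; return the
    dictionary words of matching length, best match first."""
    def score(word):
        return sum(1 for ch, likely in zip(word, likely_sets) if ch in likely) / len(word)
    matches = [w for w in dictionary if len(w) == len(likely_sets)]
    return sorted(matches, key=score, reverse=True)

# Hypothetical vision-phase output for an 8-character word:
frames = [{"p", "o"}, {"a", "q"}, {"s"}, {"s", "d"}, {"w"}, {"o"}, {"r", "t"}, {"d", "f"}]
ranked = rank_candidates(frames, ["password", "passwort", "keyboard"])
```

Here "password" matches all eight candidate sets and is ranked first, while "passwort" (7/8) and "keyboard" (2/8) follow, mirroring the ranked candidate lists the paper describes.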
With acoustic emanations, an attack that recovers the English text a dot-matrix printer is printing is possible. It is based on a recording of the sound the printer makes, provided the microphone is close enough. This attack recovers up to 72% of printed words, and up to 95% if knowledge about the text is assumed, with a microphone at a distance of 10 cm from the printer.[27]
After an upfront training phase ("a" in the picture below), the attack ("b" in the picture below) is fully automated and uses a combination of machine learning, audio processing, and speech recognition techniques, including spectrum features, Hidden Markov Models and linear classification.[5] The fundamental reason why the reconstruction of the printed text works is that the emitted sound becomes louder if more needles strike the paper at a given time.[9] There is a correlation between the number of needles and the intensity of the acoustic emanation.[9]
A training phase was conducted in which words from a dictionary were printed and characteristic sound features of these words were extracted and stored in a database. The trained characteristic features were then used to recognize the printed English text.[9] But this task is not trivial. Major challenges include:
Timing attacks enable an attacker to extract secrets maintained in a security system by observing the time it takes the system to respond to various queries.[28]
SSH is designed to provide a secure channel between two hosts. Despite the encryption and authentication mechanisms it uses, SSH has weaknesses. In interactive mode, every individual keystroke that a user types is sent to the remote machine in a separate IP packet immediately after the key is pressed, which leaks the inter-keystroke timing information of the user's typing. Below, the picture represents the command su processed through an SSH connection.
Very simple statistical techniques suffice to reveal sensitive information such as the length of users' passwords or even root passwords. By using advanced statistical techniques on timing information collected from the network, an eavesdropper can learn significant information about what users type in SSH sessions.[29] Because the time it takes the operating system to send out the packet after the keypress is in general negligible compared to the inter-keystroke timing, an eavesdropper can also learn the precise inter-keystroke timings of users' typing from the arrival times of packets.[30]
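The first, simplest step of such an eavesdropper can be sketched directly: given the arrival times of the per-keystroke packets of an interactive session, the packet count bounds the password length and the inter-arrival gaps approximate the inter-keystroke timings that the more advanced statistical models then consume. The packet timestamps below are invented for illustration.

```python
def analyse_ssh_session(packet_times):
    """Given arrival times (seconds) of the per-keystroke IP packets of
    an interactive SSH session, recover the number of keystrokes (e.g.
    a password length) and the inter-keystroke timings in milliseconds."""
    gaps = [b - a for a, b in zip(packet_times, packet_times[1:])]
    return {
        "keystrokes": len(packet_times),
        "inter_keystroke_ms": [round(g * 1000, 1) for g in gaps],
    }

# Hypothetical sniffer output: four keystroke packets.
session = analyse_ssh_session([0.00, 0.12, 0.31, 0.45])
```

In the published attacks, these gap sequences are fed to a Hidden Markov Model over key pairs, since the time between two keystrokes depends on which keys are involved.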
Data remanence problems not only affect obvious areas such as RAM and non-volatile memory cells but can also occur in other areas of the device through hot-carrier effects (which change the characteristics of the semiconductors in the device) and various other effects which are examined alongside the more obvious memory-cell remanence problems.[31]It is possible to analyse and recover data from these cells and from semiconductor devices in general long after it should (in theory) have vanished.[32]
Electromigration, which physically moves atoms to new locations (physically altering the device itself), is another type of attack.[31] It involves the relocation of metal atoms due to high current densities, a phenomenon in which atoms are carried along by an "electron wind" in the opposite direction to the conventional current, producing voids at the negative electrode and hillocks and whiskers at the positive electrode. Void formation leads to a local increase in current density and Joule heating (the interaction of electrons and metal ions producing thermal energy), causing further electromigration effects. When the external stress is removed, the disturbed system tends to relax back to its original equilibrium state, resulting in a backflow that heals some of the electromigration damage. In the long term this can cause device failure, but in less extreme cases it simply alters a device's operating characteristics in noticeable ways.
For example, the excavation of voids leads to increased wiring resistance, and the growth of whiskers leads to contact formation and current leakage.[33] An example of a conductor that exhibits whisker growth due to electromigration is shown in the figure below:
One example which exhibits void formation (in this case severe enough to have led to complete failure) is shown in this figure:
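Electromigration lifetime is commonly estimated with Black's equation, MTTF = A · J⁻ⁿ · exp(Ea / kT), relating median time to failure to current density J and temperature T. The sketch below evaluates it; the constants used (n = 2, Ea = 0.7 eV, typical of aluminium interconnect) are illustrative assumptions, not values taken from the sources cited here.

```python
import math

K_BOLTZMANN_EV = 8.617e-5  # Boltzmann constant, eV/K

def black_mttf(j, temp_k, a=1.0, n=2.0, ea_ev=0.7):
    """Median time to failure under electromigration, per Black's
    equation MTTF = A * J**-n * exp(Ea / (k*T)).  A, n and the
    activation energy Ea are empirical, device-specific constants."""
    return a * j ** (-n) * math.exp(ea_ev / (K_BOLTZMANN_EV * temp_k))

# With n = 2, doubling the current density quarters the expected lifetime:
ratio = black_mttf(1e6, 350.0) / black_mttf(2e6, 350.0)   # → 4.0
```

The strong dependence on J and T is why sustained overdriving of a conductor, as in the attack described above, measurably degrades or alters the device.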
Contrary to popular assumption, DRAMs used in most modern computers retain their contents for several seconds after power is lost, even at room temperature and even if removed from a motherboard.[34]
Many products do cryptographic and other security-related computations using secret keys or other variables that the equipment's operator must not be able to read out or alter. The usual solution is for the secret data to be kept in volatile memory inside a tamper-sensing enclosure. Security processors typically store secret key material in static RAM, from which power is removed if the device is tampered with. At temperatures below −20 °C, the contents of SRAM can be ‘frozen’. It is interesting to know the period of time for which a static RAM device will retain data once the power has been removed. Low temperatures can increase the data retention time of SRAM to many seconds or even minutes.[35]
Maximillian Dornseif presented a technique in these slides which let him take control of an Apple computer using an iPod. The attack required a first generic phase in which the iPod software was modified so that it behaved as master on the FireWire bus. The iPod then had full read/write access to the Apple computer when plugged into a FireWire port.[36] FireWire is used by audio devices, printers, scanners, cameras, GPS devices, and so on. Generally, a device connected by FireWire has full access (read/write). Indeed, the OHCI standard (the FireWire standard) states:
Physical requests, including physical read, physical write and lock requests to some CSR registers (section 5.5), are handled directly by the Host Controller without assistance by system software.
So, any device connected by FireWire can read and write data in the computer's memory. For example, a device can:
or
To increase computational power, processors are generally equipped with a cache memory, which decreases memory access latency. Below, the figure shows the hierarchy between the processor and the memory. First the processor looks for data in the L1 cache, then L2, then in main memory.
When the data is not where the processor expects it, a cache miss occurs. Below, the pictures show how the processor fetches data when there are two cache levels.
Unfortunately, caches contain only a small portion of the application data and can introduce additional latency to the memory transaction in the case of a miss. This also involves additional power consumption, due to the activation of memory devices lower in the memory hierarchy. The miss penalty has already been used to attack symmetric encryption algorithms such as DES.[37] The basic idea proposed in this paper is to force a cache miss while the processor is executing the AES encryption algorithm on a known plaintext.[38] The attacks allow an unprivileged process to attack another process running in parallel on the same processor, despite partitioning methods such as memory protection, sandboxing and virtualization.[39]
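The gist of such cache attacks can be shown on a simulated direct-mapped cache. This is a toy model, not a real covert-channel measurement: real attacks distinguish hits from misses by timing the access, and in the AES case the evicted sets depend on key-dependent table lookups, which is what makes the leak exploitable.

```python
class DirectMappedCache:
    """Tiny direct-mapped cache model: each address maps to one set."""
    def __init__(self, n_sets):
        self.n_sets = n_sets
        self.tags = [None] * n_sets

    def access(self, addr):
        s, tag = addr % self.n_sets, addr // self.n_sets
        hit = self.tags[s] == tag       # in reality: fast vs slow access
        self.tags[s] = tag
        return hit

def probe_victim(victim_addrs, n_sets=16):
    """Prime+probe in miniature: infer which cache sets a victim touched."""
    cache = DirectMappedCache(n_sets)
    spy = list(range(n_sets))           # one spy line per cache set
    for a in spy:                       # prime: fill every set
        cache.access(a)
    for a in victim_addrs:              # victim runs, evicting some sets
        cache.access(a)
    # probe: a miss on a spy line reveals the victim touched that set
    return [a % n_sets for a in spy if not cache.access(a)]

touched_sets = probe_victim([19, 23])   # → victim used sets 3 and 7
```

An unprivileged spy process never reads the victim's data directly; it only observes which of its own lines became slow, which is exactly why memory protection does not stop the attack.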
By carefully measuring the amount of time required to perform private-key operations, attackers may be able to find fixed Diffie-Hellman exponents, factor RSA keys, and break other cryptosystems. Against a vulnerable system, the attack is computationally inexpensive and often requires only known ciphertext.[40] The attack can be treated as a signal detection problem. The signal consists of the timing variation due to the target exponent bit, and noise results from measurement inaccuracies and timing variations due to unknown exponent bits. The properties of the signal and noise determine the number of timing measurements required for the attack. Timing attacks can potentially be used against other cryptosystems, including symmetric functions.[41]
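The source of the leak in exponentiation-based cryptosystems is easy to see in the classic square-and-multiply loop: the extra multiplication performed only for the 1 bits of the secret exponent makes the running time depend on the exponent itself. The operation counter below stands in for the timing measurement an attacker would take.

```python
def modexp(base, exponent, modulus):
    """Left-to-right square-and-multiply modular exponentiation.
    Returns (result, operation count); the count models running time."""
    result, ops = 1, 0
    for bit in bin(exponent)[2:]:
        result = (result * result) % modulus   # always: one squaring per bit
        ops += 1
        if bit == "1":                         # only for 1 bits: extra multiply
            result = (result * base) % modulus
            ops += 1
    return result, ops

# Two exponents of equal bit length, different Hamming weight, leak apart:
_, heavy = modexp(3, 0b1111111, 1009)   # 7 squarings + 7 multiplies
_, light = modexp(3, 0b1000000, 1009)   # 7 squarings + 1 multiply
```

Constant-time implementations (e.g. Montgomery ladders or blinding) exist precisely to remove this data-dependent work.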
A simple and generic processor backdoor can be used by attackers as a means of privilege escalation to obtain privileges equivalent to those of any given running operating system.[42] Similarly, a non-privileged process in one of the non-privileged guest domains running on top of a virtual machine monitor can obtain privileges equivalent to those of the virtual machine monitor.[42]
Loïc Duflot studied Intel processors in the paper "CPU bugs, CPU backdoors and consequences on security"; he explains that the processor defines four privilege rings numbered from 0 (most privileged) to 3 (least privileged). Kernel code usually runs in ring 0, whereas user-space code generally runs in ring 3. The use of some security-critical assembly language instructions is restricted to ring 0 code. In order to escalate privileges through the backdoor, the attacker must:[43]
The backdoors Loïc Duflot presents are simple: they only modify the behavior of three assembly language instructions and have very simple and specific activation conditions, so they are very unlikely to be accidentally activated. Recent inventions have begun to target these types of processor-based escalation attacks.
|
https://en.wikipedia.org/wiki/Computer_security_compromised_by_hardware_failure
|
A family of computer models is said to be compatible if certain software that runs on one of the models can also be run on all other models of the family. The computer models may differ in performance, reliability or some other characteristic. These differences may affect the outcome of running the software.
Software compatibility can refer to the compatibility of a particular piece of software with a particular CPU architecture such as Intel or PowerPC.[1] It can also refer to the ability of the software to run on a particular operating system. Compiled software is rarely compatible with multiple different CPU architectures; normally, an application is compiled separately for each CPU architecture and operating system to make it compatible with the different systems. Interpreted software, on the other hand, can normally run on many different CPU architectures and operating systems if the interpreter is available for them. Software incompatibility often arises when new software is released for a newer version of an operating system and is incompatible with the older version, which may lack features and functionality that the software depends on.
Hardware compatibility can refer to the compatibility of computer hardware components with a particular CPU architecture, bus, motherboard or operating system.[1] Hardware that is compatible may not always run at its highest stated performance, but it can nevertheless work with legacy components. An example is RAM chips, some of which can run at a lower (or sometimes higher) clock rate than rated.[2] Hardware that was designed for one operating system may not work with another if device or kernel drivers are unavailable. For example, Android cannot be run on a phone running iOS.[3]
Sometimes, FOSS is not compatible with proprietary hardware or specific software. This is often due to manufacturers obstructing FOSS, such as by not disclosing the interfaces or other specifications needed for members of the FOSS movement to write drivers for their hardware – for instance because they wish customers to run only their own proprietary software, or because they might benefit from partnerships.[4][5][6][7][8][9][10]
|
https://en.wikipedia.org/wiki/Computer_compatibility
|
Proprietary software is software that grants its creator, publisher, or other rightsholder or rightsholder partner a legal monopoly by modern copyright and intellectual property law to exclude the recipient from freely sharing the software or modifying it, and, in some cases (as with some patent-encumbered and EULA-bound software), from making use of the software on their own, thereby restricting their freedoms.[1]
Proprietary software is a subset of non-free software, a term defined in contrast to free and open-source software; non-commercial licenses such as CC BY-NC are not deemed proprietary, but are non-free. Proprietary software may either be closed-source software or source-available software.[1][2]
Until the late 1960s, computers – especially large and expensive mainframe computers, machines in specially air-conditioned computer rooms – were usually leased to customers rather than sold.[3][4] Service and all software available were usually supplied by manufacturers without separate charge until 1969. Computer vendors usually provided the source code for installed software to customers.[citation needed] Customers who developed software often made it available to the public without charge.[5] Closed source means computer programs whose source code is not published except to licensees. It is available to be modified only by the organization that developed it and those licensed to use the software.
In 1969, IBM, which had antitrust lawsuits pending against it, led an industry change by starting to charge separately for mainframe software[6][7] and services, by unbundling hardware and software.[8]
Bill Gates' "Open Letter to Hobbyists" in 1976 decried computer hobbyists' rampant copyright infringement of software, particularly Microsoft's Altair BASIC interpreter, and asserted that their unauthorized use hindered his ability to produce quality software. But the legal status of software copyright, especially for object code, was not clear until the 1983 appeals court ruling in Apple Computer, Inc. v. Franklin Computer Corp.[9][10][11]
According to Brewster Kahle, the legal characteristics of software also changed due to the U.S. Copyright Act of 1976.[12]
Starting in February 1983 IBM adopted an "object-code-only" model for a growing list of their software and stopped shipping much of the source code,[13][14]even to licensees.
In 1983, binary software became copyrightable in the United States as well, by the Apple v. Franklin court decision,[15] before which only source code was copyrightable.[16] Additionally, the growing availability of millions of computers based on the same microprocessor architecture created for the first time an unfragmented and large enough market for binary-distributed software.[16]
The tendency to license proprietary software, rather than sell it, dates from the period before first the existence, then the scope, of software copyright protection was clear. These licenses have continued in use after software copyright was recognized in the courts, and are considered to grant the company extra protection compared to copyright law.[17] According to United States federal law, a company can restrict the parties to which it sells, but it cannot prevent a buyer from reselling the product. Software licensing agreements usually prohibit resale, enabling the company to maximize revenue.[18]
Software distributions considered proprietary may in fact incorporate a "mixed source" model including both free and non-free software in the same distribution.[26] Most if not all so-called proprietary UNIX distributions are mixed-source software, bundling open-source components like BIND, Sendmail, the X Window System, DHCP, and others along with a purely proprietary kernel and system utilities.[27][28]
Some free software packages are also simultaneously available under proprietary terms. Examples include MySQL, Sendmail and ssh. The original copyright holders for a work of free software, even copyleft free software, can use dual-licensing to allow themselves or others to redistribute proprietary versions. Non-copyleft free software (i.e. software distributed under a permissive free software license or released to the public domain) allows anyone to make proprietary redistributions.[29][30] Free software that depends on proprietary software is considered "trapped" by the Free Software Foundation. This includes software written only for Microsoft Windows,[31] or software that could only run on Java, before it became free software.[32]
Most software is covered by copyright which, along with contract law, patents, and trade secrets, provides the legal basis for its owner to establish exclusive rights.[33]
A software vendor delineates the specific terms of use in an end-user license agreement (EULA). The user may agree to this contract in writing, interactively on screen (clickwrap), or by opening the box containing the software (shrink-wrap licensing). License agreements are usually not negotiable.[34] Software patents grant exclusive rights to algorithms, software features, or other patentable subject matter, with coverage varying by jurisdiction. Vendors sometimes grant patent rights to the user in the license agreement.[35] The source code for a piece of proprietary software is routinely handled as a trade secret.[36] Software can be made available with fewer restrictions on licensing or source-code access; software that satisfies certain conditions of freedom and openness is known as "free" or "open-source".[37]
Since license agreements do not override applicable copyright law or contract law, provisions in conflict with applicable law are not enforceable.[38] Some software is specifically licensed and not sold, in order to avoid limitations of copyright such as the first-sale doctrine.[39]
The owner of proprietary software exercises certain exclusive rights over the software. The owner can restrict use, inspection of source code, modification of source code, and redistribution.
Vendors typically limit the number of computers on which software can be used, and prohibit the user from installing the software on extra computers.[citation needed] Restricted use is sometimes enforced through a technical measure, such as product activation, a product key or serial number, a hardware key, or copy protection.
Vendors may also distribute versions that remove particular features, or versions which allow only certain fields of endeavor, such as non-commercial, educational, or non-profit use.
Use restrictions vary by license:
Vendors typically distribute proprietary software in compiled form, usually the machine language understood by the computer's central processing unit. They typically retain the source code, or human-readable version of the software, often written in a higher-level programming language.[43] This scheme is often referred to as closed source.[44]
While most proprietary software is distributed without the source code, some vendors distribute the source code or otherwise make it available to customers. For example, users who have purchased a license for the Internet forum software vBulletin can modify the source for their own site but cannot redistribute it. This is true for many web applications, which must be in source code form when being run by a web server. The source code is covered by a non-disclosure agreement or a license that allows, for example, study and modification, but not redistribution.[45] The text-based email client Pine and certain implementations of Secure Shell are distributed with proprietary licenses that make the source code available.[citation needed] Some licenses for proprietary software allow distributing changes to the source code, but only to others licensed for the product, and some[46] of those modifications are eventually picked up by the vendor.
Some governments fear that proprietary software may include defects or malicious features which would compromise sensitive information. In 2003 Microsoft established a Government Security Program (GSP) to allow governments to view source code and Microsoft security documentation, of which the Chinese government was an early participant.[47][48] The program is part of Microsoft's broader Shared Source Initiative, which provides source code access for some products. The Reference Source License (Ms-RSL) and Limited Public License (Ms-LPL) are proprietary software licenses where the source code is made available.
Governments have also been accused of adding such malware to software themselves. According to documents released by Edward Snowden, the NSA has used covert partnerships with software companies to make commercial encryption software exploitable to eavesdropping, or to insert backdoors.[49][50]
Software vendors sometimes use obfuscated code to impede users who would reverse engineer the software.[51] This is particularly common with certain programming languages.[citation needed] For example, the bytecode for programs written in Java can be easily decompiled to somewhat usable code,[citation needed] and the source code for programs written in scripting languages such as PHP or JavaScript is available at run time.[52]
Proprietary software vendors can prohibit users from sharing the software with others. A separate license is required for another party to use the software.
In the case of proprietary software with source code available, the vendor may also prohibit customers from distributing their modifications to the source code.
Shareware is closed-source software whose owner encourages redistribution at no cost, but which the user sometimes must pay to use after a trial period. The fee usually allows use by a single user or computer. In some cases, software features are restricted during or after the trial period, a practice sometimes called crippleware.
Proprietary software often[citation needed] stores some of its data in file formats that are incompatible with other software, and may also communicate using protocols which are incompatible. Such formats and protocols may be restricted as trade secrets or subject to patents.[citation needed]
A proprietaryapplication programming interface(API) is asoftware libraryinterface "specific to one device or, more likely to a number of devices within a particular manufacturer's product range."[53]The motivation for using a proprietary API can bevendor lock-inor because standard APIs do not support the device's functionality.[53]
The European Commission, in its March 24, 2004, decision on Microsoft's business practices,[54] quotes, in paragraph 463, Microsoft general manager for C++ development Aaron Contorer as stating in a February 21, 1997, internal Microsoft memo drafted for Bill Gates:
Early versions of the iPhone SDK were covered by a non-disclosure agreement. The agreement forbade independent developers from discussing the content of the interfaces. Apple discontinued the NDA in October 2008.[55]
Any dependency on the future versions and upgrades for a proprietary software package can create vendor lock-in, entrenching a monopoly position.[56]
Proprietary software may also have licensing terms that limit the usage of that software to a specific set of hardware. Apple has such a licensing model for macOS, an operating system which is limited to Apple hardware, both by licensing and various design decisions. This licensing model has been affirmed by the United States Court of Appeals for the Ninth Circuit.[57]
Proprietary software which is no longer marketed, supported or sold by its owner is called abandonware, the digital form of orphaned works. If the proprietor of a software package should cease to exist, or decide to cease or limit production or support for a proprietary software package, recipients and users of the package may have no recourse if problems are found with the software. Proprietors can fail to improve and support software because of business problems.[58] Support for older or existing versions of a software package may be ended to force users to upgrade and pay for newer versions[59] (planned obsolescence). Sometimes another vendor or a software's community themselves can provide support for the software, or the users can migrate to either competing systems with longer support life cycles or to FOSS-based systems.[60]
Some proprietary software is released by its owner at end-of-life as open-source or source-available software, often to prevent the software from becoming unsupported and unavailable abandonware.[61][62][63] 3D Realms and id Software are famous for the practice of releasing closed-source software into the open source.[further explanation needed] Some of these releases are free-of-charge downloads (freeware); some are still commercially sold (e.g. Arx Fatalis).[further explanation needed] More examples of formerly closed-source software can be found in the List of commercial software with available source code and the List of commercial video games with available source code.
Proprietary software is not synonymous with commercial software,[64][65] although the two terms are sometimes used synonymously in articles about free software.[66][67] Proprietary software can be distributed at no cost or for a fee, and free software can be distributed at no cost or for a fee.[68] The difference is that whether proprietary software can be distributed, and what the fee would be, is at the proprietor's discretion. With free software, anyone who has a copy can decide whether, and how much, to charge for a copy or related services.[69]
Proprietary software that comes for no cost is called freeware.
Proponents of commercial proprietary software argue that requiring users to pay for software as a product increases funding or time available for the research and development of software. For example, Microsoft says that per-copy fees maximize the profitability of software development.[70]
Proprietary software generally generates greater commercial activity than free software, especially with regard to market revenues.[71] Proprietary software is often sold with a license that gives the end user the right to use the software.
Technical support for proprietary software can often be provided only by employees of the company that created the program and such service is included with the software. However, a dedicated technical support system increases the cost of software maintenance, which has an impact on its price.[72]
|
https://en.wikipedia.org/wiki/Proprietary_software
|
Free and open-source software (FOSS) is software available under a license that grants users the right to use, modify, and distribute the software – modified or not – to everyone free of charge. FOSS is an inclusive umbrella term encompassing free software and open-source software.[a][1] The rights guaranteed by FOSS originate from the "Four Essential Freedoms" of The Free Software Definition and the criteria of The Open Source Definition.[4][6] All FOSS must have publicly available source code, but not all source-available software is FOSS. FOSS is the opposite of proprietary software, which is licensed restrictively or has undisclosed source code.[4]
The historical precursor to FOSS was the hobbyist and academic public domain software ecosystem of the 1960s to 1980s. Free and open-source operating systems such as Linux distributions and descendants of BSD are widely used, powering millions of servers, desktops, smartphones, and other devices.[9][10] Free-software licenses and open-source licenses have been adopted by many software packages. Reasons for using FOSS include decreased software costs, increased security against malware, stability, privacy, opportunities for educational usage, and giving users more control over their own hardware.
The free software movement and the open-source software movement are online social movements behind widespread production, adoption and promotion of FOSS, with the former preferring to use the equivalent term free/libre and open-source software (FLOSS). FOSS is supported by a loosely associated movement of multiple organizations, foundations, communities and individuals who share basic philosophical perspectives and collaborate practically, but may diverge in detail questions.
"Free and open-source software" (FOSS) is an umbrella term for software that is considered free software and open-source software.[1] The precise definition of the terms "free software" and "open-source software" applies them to any software distributed under terms that allow users to use, modify, and redistribute said software in any manner they see fit, without requiring that they pay the author(s) of the software a royalty or fee for engaging in the listed activities.[11]
Although there is an almost complete overlap between free-software licenses and open-source-software licenses, there is a strong philosophical disagreement between the advocates of these two positions. The terminology of FOSS was created to be neutral on these philosophical disagreements between the Free Software Foundation (FSF) and the Open Source Initiative (OSI) and to provide a single unified term that could refer to both concepts, although Richard Stallman argues that it fails to be neutral, unlike the similar term "Free/Libre and Open Source Software" (FLOSS).[12]
Richard Stallman's The Free Software Definition, adopted by the FSF, defines free software as a matter of liberty, not price,[13][14] and as that which upholds the Four Essential Freedoms. The earliest known publication of this definition was in the February 1986 edition[15] of the FSF's now-discontinued GNU's Bulletin publication. The canonical source for the document is in the philosophy section of the GNU Project website. As of August 2017[update], it is published in 40 languages.[16]
To meet the definition of "free software", the FSF requires that the software's licensing respect the civil liberties / human rights of what the FSF calls the software user's "Four Essential Freedoms".[17]
The Open Source Definition is used by the Open Source Initiative (OSI) to determine whether a software license qualifies for the organization's insignia for open-source software. The definition was based on the Debian Free Software Guidelines, written and adapted primarily by Bruce Perens.[18][19] Perens did not base his writing on the Four Essential Freedoms of free software from the Free Software Foundation, which were only later available on the web.[20] Perens subsequently stated that he felt Eric Raymond's promotion of open source unfairly overshadowed the Free Software Foundation's efforts and reaffirmed his support for free software.[21] In the 2000s, he spoke about open source again.[22][23]
In the early decades of computing, particularly from the 1950s through the 1970s, software development was largely collaborative. Programs were commonly shared in source code form among academics, researchers, and corporate developers. Most companies at the time made their revenue from hardware sales, and software—including source code—was distributed freely alongside it, often as public-domain software.[24][25]
By the late 1960s and 1970s, a distinct software industry began to emerge. Companies started selling software as a separate product, leading to the use of restrictive licenses and technical measures—such as distributing only binary executables—to limit user access and control. This shift was driven by growing competition and the U.S. government's antitrust scrutiny of bundled software, exemplified by the 1969 antitrust caseUnited States v. IBM.[26]
A key turning point came in 1980 when U.S. copyright law was formally extended to cover computer software.[27][28]This enabled companies like IBM to further enforce closed-source distribution models. In 1983, IBM introduced its "object code only" policy, ceasing the distribution of source code for its system software.[29]
In response to the growing restrictions on software, Richard Stallman launched the GNU Project in 1983 at MIT. His goal was to develop a complete Free software operating system and restore user freedom. The Free Software Foundation (FSF) was established in 1985 to support this mission. Stallman'sGNU Manifestoand the Four Essential Freedoms outlined the movement's ethical stance, emphasizing user control over software.[17]
The release of the Linux kernel by Linus Torvalds in 1991, and its relicensing under the GNU General Public License (GPL) in 1992, marked a major step toward a fully free operating system.[30] Other free software projects like FreeBSD, NetBSD, and OpenBSD also gained traction following the resolution of the USL v. BSDi lawsuit in 1993.
In 1997, Eric Raymond's essay The Cathedral and the Bazaar explored the development model of free software, influencing Netscape's decision in 1998 to release the source code for its browser suite. This code base later became the basis for Mozilla Firefox and Thunderbird.
To broaden business adoption, a group of developers including Raymond, Bruce Perens, Tim O’Reilly, and Linus Torvalds rebranded the Free software movement as “Open Source.” The Open Source Initiative (OSI) was founded in 1998 to promote this new term and emphasize collaborative development benefits over ideology.[31]
Despite initial resistance—such as Microsoft's 2001 claim that "Open-source is an intellectual property destroyer"—FOSS eventually gained widespread acceptance in the corporate world. Companies like Red Hat proved that commercial success and Free software principles could coexist.[32][33][34]
Users of FOSS benefit from the Four Essential Freedoms to make unrestricted use of, and to study, copy, modify, and redistribute such software with or without modification. If they would like to change the functionality of software, they can make changes to the code and, if they wish, distribute such modified versions of the software or often – depending on the software's decision-making model and its other users – even push or request such changes to be made via updates to the original software.[35][36][37][38][39]
Manufacturers of proprietary, closed-source software are sometimes pressured to build backdoors or other covert, undesired features into their software.[40][41][42][43] Instead of having to trust software vendors, users of FOSS can inspect and verify the source code themselves and can put trust in a community of volunteers and users.[39] As proprietary code is typically hidden from public view, only the vendors themselves and hackers may be aware of any vulnerabilities in it,[39] while FOSS involves as many people as possible in exposing bugs quickly.[44][45]
FOSS is often free of charge, although donations are often encouraged. This also allows users to better test and compare software.[39]
FOSS allows for better collaboration among various parties and individuals with the goal of developing the most efficient software for its users or use-cases, while proprietary software is typically meant to generate profits. Furthermore, in many cases more organizations and individuals contribute to such projects than to proprietary software.[39] It has been shown that technical superiority is typically the primary reason why companies choose open-source software.[39]
According to Linus's law, the more people who can see and test a set of code, the more likely any flaws will be caught and fixed quickly. However, this does not guarantee a high level of participation. Having a grouping of full-time professionals behind a commercial product can in some cases be superior to FOSS.[39][44][46]
Furthermore, publicized source code might make it easier for hackers to find vulnerabilities in it and write exploits. This, however, assumes that such malicious hackers are more effective than white-hat hackers who responsibly disclose or help fix the vulnerabilities, that no code leaks or exfiltrations occur, and that reverse engineering of proprietary code is a significant hindrance for malicious hackers.[44]
Sometimes, FOSS is not compatible with proprietary hardware or specific software. This is often due to manufacturers obstructing FOSS, such as by not disclosing the interfaces or other specifications needed for members of the FOSS movement to write drivers for their hardware – for instance because they wish customers to run only their own proprietary software or because they might benefit from partnerships.[47][48][49][50][51][52][53]
While FOSS can be superior to proprietary equivalents in terms of software features and stability, in many cases it has more unfixed bugs and missing features when compared to similar commercial software.[54][additional citation(s) needed] This varies per case, and usually depends on the level of interest in a particular project. However, unlike closed-source software, improvements can be made by anyone who has the motivation, time and skill to do so.[46][additional citation(s) needed]
A common obstacle in FOSS development is the lack of access to some common official standards, due to costly royalties or required non-disclosure agreements (e.g., for the DVD-Video format).[55]
There is often less certainty of FOSS projects gaining the required resources and participation for continued development than commercial software backed by companies.[56][additional citation(s) needed] However, companies also often abandon projects for being unprofitable, and large companies may rely on, and hence co-develop, open-source software.[45] On the other hand, if the vendor of proprietary software ceases development, there are no alternatives; whereas with FOSS, any user who needs it still has the right, and the source code, to continue to develop it themselves, or to pay a third party to do so.
Because FOSS operating system distributions such as Linux have a lower market share among end users, fewer applications are available for them.[57][58]
"We migrated key functions from Windows to Linux because we needed an operating system that was stable and reliable -- one that would give us in-house control. So if we needed to patch, adjust, or adapt, we could."
In 2017, the European Commission stated that "EU institutions should become open source software users themselves, even more than they already are" and listed open-source software as one of the nine key drivers of innovation, together with big data, mobility, cloud computing and the internet of things.[96]
In 2020, the European Commission adopted its Open Source Strategy 2020-2023,[97] including encouraging sharing and reuse of software and publishing the Commission's source code as key objectives. Among the concrete actions was to set up an Open Source Programme Office in 2020,[98] and in 2022 it launched its own FOSS repository, https://code.europa.eu/.[99]
In 2021, the Commission Decision on the open source licensing and reuse of Commission software (2021/C 495 I/01)[100] was adopted, under which, as a general principle, the European Commission may release software under the EUPL or another FOSS license, if more appropriate, though there are exceptions.
In May 2022,[101] the Expert Group on the Interoperability of European Public Services published 27 recommendations to strengthen the interoperability of public administrations across the EU. These recommendations were to be taken into account later the same year in the Commission's proposal for the "Interoperable Europe Act".
Open-source software development (OSSD) is the process by which open-source software is developed. The software's source code is publicly available to be used, modified, and enhanced.[102] Notable examples of open-source software products are Mozilla Firefox, Android, and VLC media player.[103] The development process typically differs from traditional methods such as Waterfall, instead favoring early releases and community involvement.[103] Agile development strategies are most often employed in OSSD and are characterized by iterative and incremental frameworks.[104] Open-source software developers typically use methods such as e-mail, wikis, web forums, and instant messaging services for communication, as individuals are not typically working in close proximity to one another.[105] Version control systems such as Git are used to make code collaboration easier.[103]
The GNU General Public License (GPL) is one of the most widely used copyleft licenses in the free and open-source software (FOSS) community and was created by the Free Software Foundation (FSF). Version 2 (GPLv2), published in 1991, played a central role in protecting the freedom of software to be run, studied, modified, and shared by users.[106] However, as technology and legal landscapes evolved, particularly with the rise of digital rights management (DRM) and software patents, some developers and legal experts argued that GPLv2 did not adequately protect user freedoms in newer contexts.[107] This led to the development of GPLv3, which sought to address these concerns.[108]
While copyright is the primary legal mechanism that FOSS authors use to ensure license compliance for their software, other mechanisms such as legislation, patents, and trademarks have implications as well. In response to legal issues with patents and the Digital Millennium Copyright Act (DMCA), the Free Software Foundation released version 3 of its GNU General Public License (GNU GPLv3) in 2007, which explicitly addressed the DMCA and patent rights.
One of the key issues GPLv3 aimed to address was a practice known as Tivoization, named after the company TiVo, which used GPL-covered software but implemented hardware restrictions that prevented users from running modified versions of the software. This was seen by the Free Software Foundation (FSF) as a direct violation of software freedom, prompting GPLv3 to include language explicitly forbidding such restrictions.[109] Additionally, GPLv3 introduced clauses to protect users against aggressive enforcement of software patents and reinforced the idea that users should retain control over the software they use.
After the development of the GNU GPLv3 in 2007, the FSF (as the copyright holder of many pieces of the GNU system) updated many[citation needed] of the GNU programs' licenses from GPLv2 to GPLv3. The adoption of the new GPL version was heavily debated in the FOSS ecosystem,[110] and several projects decided against upgrading. For instance, the Linux kernel,[111][112] the BusyBox project,[113][114] AdvFS,[115] Blender,[116] and the VLC media player decided against adopting the GPLv3.[117]
Apple, a user of GCC and a heavy user of both DRM and patents, switched the compiler in its Xcode IDE from GCC to Clang, which is another FOSS compiler[118] but is under a permissive license.[119] LWN speculated that Apple was motivated partly by a desire to avoid GPLv3.[118] The Samba project also switched to GPLv3, so Apple replaced Samba in their software suite with a closed-source, proprietary software alternative.[120]
The controversy with GPLv3 mirrored a more general philosophical split in the open source community: whether people should hold licenses that aggressively defend user freedoms (as with copyleft) or take a more permissive, collaborative yet ambiguous approach. Supporters applauded GPLv3 for fortifying protections against restrictions imposed by hardware and patent threats,[121]while critics felt it created legal and ideological barriers that complicated its development and made it less appealing to adopt.[122]The fallout helped to raise the acceptance of permissive licenses like the MIT and Apache licenses, especially by commercial software developers.[123]
Leemhuis criticizes the prioritization of skilled developers who – instead of fixing issues in already popular open-source applications and desktop environments – create new, mostly redundant software to gain fame and fortune.[124]
He also criticizes notebook manufacturers for optimizing their own products only privately or creating workarounds instead of helping fix the actual causes of the many issues with Linux on notebooks, such as unnecessary power consumption.[124]
Mergers have affected major open-source software. Sun Microsystems (Sun) acquired MySQL AB, owner of the popular open-source MySQL database, in 2008.[125]
Oracle in turn purchased Sun in January 2010, acquiring their copyrights, patents, and trademarks. Thus, Oracle became the owner of both the most popular proprietary database and the most popular open-source database. Oracle's attempts to commercialize the open-source MySQL database have raised concerns in the FOSS community.[126] Partly in response to uncertainty about the future of MySQL, the FOSS community forked the project into new database systems outside of Oracle's control. These include MariaDB, Percona, and Drizzle.[127] All of these have distinct names; they are distinct projects and cannot use the trademarked name MySQL.[128]
In August 2010, Oracle sued Google, claiming that its use of Java in Android infringed on Oracle's copyrights and patents. In May 2012, the trial judge determined that Google did not infringe on Oracle's patents and ruled that the structure of the Java APIs used by Google was not copyrightable. The jury found that Google infringed a small number of copied files, but the parties stipulated that Google would pay no damages.[129] Oracle appealed to the Federal Circuit, and Google filed a cross-appeal on the literal copying claim.[130]
By defying ownership regulations in the construction and use of information – a key area of contemporary growth – the free/open-source software (FOSS) movement counters neoliberalism and privatization in general.[131][132]
By realizing the historical potential of an "economy of abundance" for the new digital world, FOSS may lay down a plan for political resistance or show the way towards a potential transformation of capitalism.[132]
According to Yochai Benkler, Jack N. and Lillian R. Berkman Professor for Entrepreneurial Legal Studies at Harvard Law School, free software is the most visible part of a new economy of commons-based peer production of information, knowledge, and culture. As examples, he cites a variety of FOSS projects, including both free software and open-source.[133]
|
https://en.wikipedia.org/wiki/Free_and_open-source_software
|
These tables compare free software / open-source operating systems. Where not all of the versions support a feature, the first version which supports it is listed.
|
https://en.wikipedia.org/wiki/Comparison_of_open-source_operating_systems
|
Unified Extensible Firmware Interface (UEFI, /ˈjuːɪfaɪ/ or as an acronym)[c] is a specification for the firmware architecture of a computing platform. When a computer is powered on, the UEFI implementation is typically the first software that runs, before starting the operating system. Examples include AMI Aptio, Phoenix SecureCore, TianoCore EDK II, and InsydeH2O.
UEFI replaces the BIOS that was present in the boot ROM of all personal computers that are IBM PC compatible,[5][6] although it can provide backwards compatibility with the BIOS using CSM booting. Unlike its predecessor, BIOS, which is a de facto standard originally created by IBM as proprietary software, UEFI is an open standard maintained by an industry consortium. Like BIOS, most UEFI implementations are proprietary.
Intel developed the original Extensible Firmware Interface (EFI) specification. The last Intel version of EFI was 1.10, released in 2005. Subsequent versions have been developed as UEFI by the UEFI Forum.
UEFI is independent of platform and programming language, but C is used for the reference implementation, TianoCore EDK II.
The original motivation for EFI came during early development of the first Intel–HP Itanium systems in the mid-1990s. BIOS limitations (such as 16-bit real mode, 1 MB addressable memory space,[7] assembly language programming, and PC AT hardware) had become too restrictive for the larger server platforms Itanium was targeting.[8] The effort to address these concerns began in 1998 and was initially called Intel Boot Initiative.[9] It was later renamed to Extensible Firmware Interface (EFI).[10][11]
The first open-source UEFI implementation, Tiano, was released by Intel in 2004. Tiano has since been superseded by EDK[12] and EDK II[13] and is now maintained by the TianoCore community.[14]
In July 2005, Intel ceased its development of the EFI specification at version 1.10, and contributed it to the Unified EFI Forum, which has developed the specification as the Unified Extensible Firmware Interface (UEFI). The original EFI specification remains owned by Intel, which exclusively provides licenses for EFI-based products, but the UEFI specification is owned by the UEFI Forum.[8][15]
Version 2.0 of the UEFI specification was released on 31 January 2006. It added cryptography and security.
Version 2.1 of the UEFI specification was released on 7 January 2007. It added network authentication and the user interface architecture ('Human Interface Infrastructure' in UEFI).
In October 2018, Arm announced Arm ServerReady, a compliance certification program for landing generic off-the-shelf operating systems and hypervisors on Arm-based servers. The program requires the system firmware to comply with Server Base Boot Requirements (SBBR). SBBR requires UEFI, ACPI and SMBIOS compliance. In October 2020, Arm announced the extension of the program to the edge and IoT market. The new program name is Arm SystemReady. Arm SystemReady defined the Base Boot Requirements (BBR) specification that currently provides three recipes, two of which are related to UEFI: 1) SBBR, which requires UEFI, ACPI and SMBIOS compliance suitable for enterprise-level operating environments such as Windows, Red Hat Enterprise Linux, and VMware ESXi; and 2) EBBR, which requires compliance with a set of UEFI interfaces as defined in the Embedded Base Boot Requirements (EBBR), suitable for embedded environments such as Yocto. Many Linux and BSD distros can support both recipes.
In December 2018, Microsoft announced Project Mu, a fork of TianoCore EDK II used in Microsoft Surface and Hyper-V products. The project promotes the idea of firmware as a service.[16]
The latest UEFI specification, version 2.11, was published in December 2024.[17]
The interface defined by the EFI specification includes data tables that contain platform information, and boot and runtime services that are available to the OS loader and OS. UEFI firmware provides several technical advantages over a BIOS:[18]
With UEFI, it is possible to store product keys for operating systems such as Windows on the UEFI firmware of the device.[21][22][23] UEFI is required for Secure Boot on devices shipping with Windows 8[24][25] and above.
It is also possible for operating systems to access UEFI configuration data.[26]
As of version 2.5, processor bindings exist for Itanium, x86, x86-64, ARM (AArch32) and ARM64 (AArch64).[27] Only little-endian processors can be supported.[28] Unofficial UEFI support is under development for POWERPC64 by implementing TianoCore on top of OPAL,[29] the OpenPOWER abstraction layer, running in little-endian mode.[30] Similar projects exist for MIPS[31] and RISC-V.[32] As of UEFI 2.7, RISC-V processor bindings have been officially established for 32-, 64- and 128-bit modes.[33]
Standard PC BIOS is limited to a 16-bit processor mode and 1 MB of addressable memory space, a result of the design based on the IBM 5150, which used a 16-bit Intel 8088 processor.[8][34] In comparison, the processor mode in a UEFI environment can be either 32-bit (IA-32, AArch32) or 64-bit (x86-64, Itanium, and AArch64).[8][35] 64-bit UEFI firmware implementations support long mode, which allows applications in the preboot environment to use 64-bit addressing to get direct access to all of the machine's memory.[36]
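The 1 MB figure can be checked with quick arithmetic (simplified: real mode's segment:offset quirks such as the A20 wrap-around are glossed over, and the 48-bit figure is a common x86-64 implementation choice rather than a UEFI requirement):

```python
# Back-of-the-envelope arithmetic for the address-space limits mentioned above.

real_mode_space = 2 ** 20            # 20-bit physical addresses -> 1 MiB
assert real_mode_space == 1 * 1024 ** 2

# A real-mode segment:offset pair addresses segment * 16 + offset:
def physical(segment, offset):
    return segment * 16 + offset

assert physical(0xFFFF, 0x000F) == 0xFFFFF   # last byte of the 1 MiB space

# 64-bit long mode: current x86-64 hardware commonly implements 48-bit
# virtual addresses, already vastly more than 1 MiB.
assert 2 ** 48 == 256 * 1024 ** 4            # 256 TiB of virtual address space
```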
UEFI requires the firmware and operating system loader (or kernel) to be size-matched; that is, a 64-bit UEFI firmware implementation can load only a 64-bit operating system (OS) boot loader or kernel (unless the CSM-based legacy boot is used), and the same applies to 32-bit. After the system transitions from boot services to runtime services, the operating system kernel takes over. At this point, the kernel can change processor modes if it desires, but this bars usage of the runtime services (unless the kernel switches back again).[37]: sections 2.3.2 and 2.3.4 As of version 3.15, the Linux kernel supports 64-bit kernels being booted on 32-bit UEFI firmware implementations running on x86-64 CPUs, with UEFI handover support from a UEFI boot loader as the requirement.[38] The UEFI handover protocol deduplicates the UEFI initialization code between the kernel and UEFI boot loaders, leaving the initialization to be performed only by the Linux kernel's UEFI boot stub.[39][40]
In addition to the standard PC disk partition scheme that uses a master boot record (MBR), UEFI also works with the GUID Partition Table (GPT) partitioning scheme, which is free from many of the limitations of MBR. In particular, the MBR limits on the number and size of disk partitions (up to four primary partitions per disk, and up to 2 TB (2 × 2^40 bytes) per disk) are relaxed.[41] More specifically, GPT allows for a maximum disk and partition size of 8 ZiB (8 × 2^70 bytes).[42][43]
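Both limits follow directly from the width of the logical block address (LBA) field in each scheme, assuming the traditional 512-byte sector:

```python
# MBR stores 32-bit LBAs; GPT stores 64-bit LBAs. With 512-byte sectors
# this yields the 2 TiB and 8 ZiB figures quoted above.
SECTOR = 512                       # bytes per logical sector (traditional)

mbr_max = 2 ** 32 * SECTOR         # 32-bit sector count
assert mbr_max == 2 * 2 ** 40      # = 2 TiB

gpt_max = 2 ** 64 * SECTOR         # 64-bit sector count
assert gpt_max == 8 * 2 ** 70      # = 8 ZiB
```

On drives with 4 KiB native sectors the same field widths give correspondingly larger limits, which is one reason the "2 TB" MBR figure is tied to the 512-byte-sector assumption.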
Support for GPT in Linux is enabled by turning on the option CONFIG_EFI_PARTITION (EFI GUID Partition Support) during kernel configuration.[44] This option allows Linux to recognize and use GPT disks after the system firmware passes control over the system to Linux.
For reverse compatibility, Linux can use GPT disks in BIOS-based systems for both data storage and booting, as both GRUB 2 and Linux are GPT-aware. Such a setup is usually referred to as BIOS-GPT.[45][unreliable source?] As GPT incorporates the protective MBR, a BIOS-based computer can boot from a GPT disk using a GPT-aware boot loader stored in the protective MBR's bootstrap code area.[43] In the case of GRUB, such a configuration requires a BIOS boot partition for GRUB to embed its second-stage code, due to the absence of the post-MBR gap in GPT-partitioned disks (which is taken over by the GPT's Primary Header and Primary Partition Table). Commonly 1 MB in size, this partition's globally unique identifier (GUID) in the GPT scheme is 21686148-6449-6E6F-744E-656564454649 and is used by GRUB only in BIOS-GPT setups. From GRUB's perspective, no such partition type exists in the case of MBR partitioning. This partition is not required if the system is UEFI-based, because no embedding of the second-stage code is needed in that case.[19][43][45]
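Incidentally, the BIOS boot partition GUID quoted above is a mnemonic: read in GPT's on-disk mixed-endian byte order (first three GUID fields little-endian, last two big-endian), its bytes spell an ASCII phrase. A quick check using Python's `uuid` module, whose `bytes_le` attribute yields exactly that byte order:

```python
# Decode the BIOS boot partition type GUID in its on-disk byte order.
import uuid

g = uuid.UUID("21686148-6449-6E6F-744E-656564454649")
# bytes_le: time_low/time_mid/time_hi little-endian, remaining bytes as-is,
# matching how GPT stores GUIDs on disk.
print(g.bytes_le.decode("ascii"))   # -> Hah!IdontNeedEFI
```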
UEFI systems can access GPT disks and boot directly from them, which allows Linux to use UEFI boot methods. Booting Linux from GPT disks on UEFI systems involves creation of an EFI system partition (ESP), which contains UEFI applications such as bootloaders, operating system kernels, and utility software.[46][47][48][unreliable source?]Such a setup is usually referred to as UEFI-GPT, while the ESP is recommended to be at least 512 MB in size and formatted with a FAT32 filesystem for maximum compatibility.[43][45][49][unreliable source?]
For backward compatibility, some UEFI implementations also support booting from MBR-partitioned disks through the Compatibility Support Module (CSM) that provides legacy BIOS compatibility.[50]In that case, booting Linux on UEFI systems is the same as on legacy BIOS-based systems.
Some of the EFI's practices and data formats mirror those of Microsoft Windows.[51][52]
The 64-bit versions of Windows Vista SP1 and later, and the 64-bit versions of Windows 8, 8.1, 10, and 11, can boot from a GPT disk that is larger than 2 TB.
EFI defines two types of services: boot services and runtime services. Boot services are available only while the firmware owns the platform (i.e., before the ExitBootServices() call), and they include text and graphical consoles on various devices, and bus, block and file services. Runtime services are still accessible while the operating system is running; they include services such as date, time and NVRAM access.
Beyond loading an OS, UEFI can run UEFI applications, which reside as files on the EFI system partition. They can be executed from the UEFI Shell, by the firmware's boot manager, or by other UEFI applications. UEFI applications can be developed and installed independently of the original equipment manufacturers (OEMs).
A type of UEFI application is an OS boot loader such as GRUB, rEFInd, Gummiboot, and Windows Boot Manager, which loads some OS files into memory and executes them. Also, an OS boot loader can provide a user interface to allow the selection of another UEFI application to run. Utilities like the UEFI Shell are also UEFI applications.
EFI defines protocols as a set of software interfaces used for communication between two binary modules. All EFI drivers must provide services to others via protocols. The EFI protocols are similar to the BIOS interrupt calls.
In addition to standard instruction set architecture (ISA)-specific device drivers, EFI provides for an ISA-independent device driver stored in non-volatile memory as EFI byte code or EBC. System firmware has an interpreter for EBC images. In that sense, EBC is analogous to Open Firmware, the ISA-independent firmware used in PowerPC-based Apple Macintosh and Sun Microsystems SPARC computers, among others.
Some architecture-specific (non-EFI Byte Code) EFI drivers for some device types can have interfaces for use by the OS. This allows the OS to rely on EFI for drivers to perform basic graphics and network functions before, and if, operating-system-specific drivers are loaded.
In other cases, the EFI driver can be a filesystem driver that allows booting from other types of disk volumes. Examples include efifs for 37 file systems (based on GRUB2 code),[56]used by Rufus for chain-loading NTFS ESPs.[57]
The EFI 1.0 specification defined a UGA (Universal Graphic Adapter) protocol as a way to support graphics features. UEFI did not include UGA and replaced it with GOP (Graphics Output Protocol).[58]
UEFI 2.1 defined a "Human Interface Infrastructure" (HII) to manage user input, localized strings, fonts, and forms (in the HTML sense). These enable original equipment manufacturers (OEMs) or independent BIOS vendors (IBVs) to design graphical interfaces for pre-boot configuration. UEFI uses UTF-16 to encode strings by default.
Most early UEFI firmware implementations were console-based. Today many UEFI firmware implementations are GUI-based.
An EFI system partition, often abbreviated to ESP, is a data storage device partition that is used in computers adhering to the UEFI specification. Accessed by the UEFI firmware when a computer is powered up, it stores UEFI applications and the files these applications need to run, including operating system boot loaders. Supported partition table schemes include MBR and GPT, as well as El Torito volumes on optical discs.[37]: section 2.6.2 For use on ESPs, UEFI defines a specific version of the FAT file system, which is maintained as part of the UEFI specification and independently from the original FAT specification, encompassing the FAT32, FAT16 and FAT12 file systems.[37]: section 12.3[59][60][61]The ESP also provides space for a boot sector as part of the backward BIOS compatibility.[50]
Unlike the legacy PC BIOS, UEFI does not rely on boot sectors, defining instead a boot manager as part of the UEFI specification. When a computer is powered on, the boot manager checks the boot configuration and, based on its settings, then executes the specified OS boot loader or operating system kernel (usually the boot loader[62]). The boot configuration is defined by variables stored in NVRAM, including variables that indicate the file system paths to OS loaders or OS kernels.
OS boot loaders can be automatically detected by UEFI, which enables easy booting from removable devices such as USB flash drives. This automated detection relies on standardized file paths to the OS boot loader, with the path varying depending on the computer architecture. The format of the file path is defined as <EFI_SYSTEM_PARTITION>\EFI\BOOT\BOOT<MACHINE_TYPE_SHORT_NAME>.EFI; for example, the file path to the OS loader on an x86-64 system is \efi\boot\bootx64.efi,[37]and \efi\boot\bootaa64.efi on the ARM64 architecture.
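The fallback path construction can be sketched as a small lookup. The helper and its name below are hypothetical (not part of any UEFI API); the short names follow the architecture table in the UEFI specification:

```python
# Hypothetical helper: builds the architecture-specific removable-media
# boot path described above. Short names per the UEFI specification.
SHORT_NAMES = {
    "ia32": "IA32",
    "x86-64": "X64",
    "itanium": "IA64",
    "arm": "ARM",
    "arm64": "AA64",
}

def default_boot_path(arch: str) -> str:
    """Return the fallback boot path \\EFI\\BOOT\\BOOT<NAME>.EFI."""
    name = SHORT_NAMES[arch.lower()]
    return "\\EFI\\BOOT\\BOOT{}.EFI".format(name)

print(default_boot_path("x86-64"))  # \EFI\BOOT\BOOTX64.EFI
print(default_boot_path("arm64"))   # \EFI\BOOT\BOOTAA64.EFI
```

Case is irrelevant in practice because the ESP uses a FAT filesystem, whose lookups are case-insensitive.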
Booting UEFI systems from GPT-partitioned disks is commonly called UEFI-GPT booting. Although the UEFI specification requires MBR partition tables to be fully supported,[37]some UEFI firmware implementations immediately switch to BIOS-based CSM booting depending on the type of the boot disk's partition table, effectively preventing UEFI booting from being performed from the EFI system partition on MBR-partitioned disks.[50]Such a boot scheme is commonly called UEFI-MBR.
It is also common for a boot manager to have a textual user interface so the user can select the desired OS (or setup utility) from a list of available boot options.
On PC platforms, BIOS firmware that supports UEFI boot can be called a UEFI BIOS, although it may not support the CSM boot method, as modern x86 PCs have deprecated the use of CSM.
To ensure backward compatibility, UEFI firmware implementations on PC-class machines could support booting in legacy BIOS mode from MBR-partitioned disks through the Compatibility Support Module (CSM) that provides legacy BIOS compatibility. In this scenario, booting is performed in the same way as on legacy BIOS-based systems, by ignoring the partition table and relying on the content of a boot sector.[50]
BIOS-style booting from MBR-partitioned disks is commonly called BIOS-MBR, regardless of it being performed on UEFI or legacy BIOS-based systems. Furthermore, booting legacy BIOS-based systems from GPT disks is also possible, and such a boot scheme is commonly called BIOS-GPT.
The Compatibility Support Module allows legacy operating systems and some legacy option ROMs that do not support UEFI to still be used.[63]It also provides required legacy System Management Mode (SMM) functionality, called CompatibilitySmm, as an addition to features provided by the UEFI SMM. An example of such a legacy SMM functionality is providing USB legacy support for keyboard and mouse, by emulating their classic PS/2 counterparts.[63]
In November 2017, Intel announced that it planned to phase out CSM support for client platforms by 2020.[64]
In July 2022, Kaspersky Lab published information regarding a rootkit designed to chain-load malicious code on machines using Intel's H81 chipset and the Compatibility Support Module of affected motherboards.[65]
In August 2023, Intel announced that it planned to phase out CSM support for server platforms by 2024.[66]
Currently,[when?] most computers based on Intel platforms do not support CSM.[citation needed]
The UEFI specification includes support for booting over a network via the Preboot eXecution Environment (PXE). PXE booting network protocols include the Internet Protocol (IPv4 and IPv6), User Datagram Protocol (UDP), Dynamic Host Configuration Protocol (DHCP), Trivial File Transfer Protocol (TFTP) and iSCSI.[37][67]
OS images can be remotely stored on storage area networks (SANs), with Internet Small Computer System Interface (iSCSI) and Fibre Channel over Ethernet (FCoE) as supported protocols for accessing the SANs.[37][68][69]
Version 2.5 of the UEFI specification adds support for accessing boot images over HTTP.[70]
The UEFI specification defines a protocol known as Secure Boot, which can secure the boot process by preventing the loading of UEFI drivers or OS boot loaders that are not signed with an acceptable digital signature. The details of how these drivers are signed are specified in the UEFI specification.[71]When Secure Boot is enabled, it is initially placed in "setup" mode, which allows a public key known as the "platform key" (PK) to be written to the firmware. Once the key is written, Secure Boot enters "User" mode, where only UEFI drivers and OS boot loaders signed with the platform key can be loaded by the firmware. Additional "key exchange keys" (KEK) can be added to a database stored in memory to allow other certificates to be used, but they must still have a connection to the private portion of the platform key.[72]Secure Boot can also be placed in "Custom" mode, where additional public keys can be added to the system that do not match the private key.[73]
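The firmware's allow/deny decision can be modeled in a few lines. This is a toy sketch only: real Secure Boot verifies X.509 certificate chains and Authenticode signatures, whereas here a SHA-256 hash of the image stands in for its signature; `db` plays the role of the authorized-signatures database and `dbx` the forbidden (revoked) one:

```python
import hashlib

# Toy model of the Secure Boot signature databases. Real firmware checks
# X.509 signatures and certificate chains; hashing the image stands in
# for that here. `db` = approved entries, `dbx` = revoked entries.
def may_load(image: bytes, db: set, dbx: set) -> bool:
    digest = hashlib.sha256(image).hexdigest()
    if digest in dbx:        # the deny list takes precedence
        return False
    return digest in db      # otherwise the image must be approved

loader = b"hypothetical boot loader image"
h = hashlib.sha256(loader).hexdigest()
print(may_load(loader, db={h}, dbx=set()))  # True: approved
print(may_load(loader, db={h}, dbx={h}))    # False: revocation wins
```

The precedence of `dbx` over `db` mirrors the real design: once an entry is revoked, re-approving the same binary is not possible without removing the revocation.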
Secure Boot is supported by Windows 8 and 8.1, Windows Server 2012 and 2012 R2, Windows 10, Windows Server 2016, 2019, and 2022, Windows 11, VMware vSphere 6.5,[74]and a number of Linux distributions including Fedora (since version 18), openSUSE (since version 12.3), RHEL (since version 7), CentOS (since version 7[75]), Debian (since version 10),[76]Ubuntu (since version 12.04.2), Linux Mint (since version 21.3),[77][78]and AlmaLinux OS (since version 8.4[79]). As of January 2025[update], FreeBSD support is in a planning stage.[80]
UEFI provides a shell environment, which can be used to execute other UEFI applications, including UEFI boot loaders.[48]Apart from that, commands available in the UEFI shell can be used for obtaining various other information about the system or the firmware, including getting the memory map (memmap), modifying boot manager variables (bcfg), running partitioning programs (diskpart), loading UEFI drivers, and editing text files (edit).[81][unreliable source?][82][83]
Source code for a UEFI shell can be downloaded from Intel's TianoCore UDK/EDK2 project.[84]A pre-built ShellBinPkg is also available.[85]Shell v2 works best in UEFI 2.3+ systems and is recommended over Shell v1 in those systems. Shell v1 should work in all UEFI systems.[81][86][87]
Methods used for launching the UEFI shell depend on the manufacturer and model of the system motherboard. Some of them already provide a direct option in the firmware setup for launching it; for these, a compiled x86-64 version of the shell needs to be made available as <EFI_SYSTEM_PARTITION>/SHELLX64.EFI. Other systems have an embedded UEFI shell which can be launched by appropriate key press combinations.[88][unreliable source?][89]For other systems, the solution is either creating an appropriate USB flash drive or manually adding (bcfg) a boot option associated with the compiled version of the shell.[83][88][90][unreliable source?][91][unreliable source?]
The following is a list of commands supported by the EFI shell.[82]
Extensions to UEFI can be loaded from virtually any non-volatile storage device attached to the computer. For example, an original equipment manufacturer (OEM) can distribute systems with an EFI system partition on the hard drive, which would add additional functions to the standard UEFI firmware stored on the motherboard's ROM.
UEFI Capsule defines a firmware-to-OS firmware update interface, marketed as modern and secure.[92]Windows 8, Windows 8.1, Windows 10,[93]and fwupd for Linux each support the UEFI Capsule.
Like BIOS, UEFI initializes and tests system hardware components (e.g. memory training, PCIe link training, USB link training on typical x86 systems), and then loads the boot loader from a mass storage device or through a network connection. In x86 systems, the UEFI firmware is usually stored in the NOR flash chip of the motherboard.[94][95]In some ARM-based Android and Windows Phone devices, the UEFI boot loader is stored in the eMMC or UFS flash memory.
UEFI machines can have one of the following classes, which were used to help ease the transition to UEFI:[96]
Starting from the 10th Gen Intel Core, Intel no longer provides a legacy Video BIOS for the iGPU (Intel Graphics Technology). Legacy boot with those CPUs requires a legacy Video BIOS, which can still be provided by a video card.[citation needed]
This is the first stage of the UEFI boot, but it may have platform-specific binary code that precedes it (e.g., Intel ME, AMD PSP, CPU microcode). It consists of minimal code written in assembly language for the specific architecture. It initializes a temporary memory (often CPU cache-as-RAM (CAR), or SoC on-chip SRAM) and serves as the system's software root of trust, with the option of verifying PEI before hand-off.
The second stage of UEFI boot consists of a dependency-aware dispatcher that loads and runs PEI modules (PEIMs) to handle early hardware initialization tasks such as main memory initialization (initializing the memory controller and DRAM) and firmware recovery operations. Additionally, it is responsible for discovery of the current boot mode and handling many ACPI S3 operations. In the case of ACPI S3 resume, it is responsible for restoring many hardware registers to a pre-sleep state. PEI also uses CAR. Initialization at this stage involves creating data structures in memory and establishing default values within these structures.[98]
This stage has several components, including the PEI Foundation, PEIMs, and PPIs. Because fewer resources are available in this stage, it must be kept minimal, performing only the minimal preparations for the next stage (DXE), which is much richer.
After the SEC phase hands off, the PEI Foundation takes responsibility for the platform.
A dispatcher component of the PEI Foundation is responsible for invoking PEIMs and managing their dependencies.
These are minimal PEI drivers responsible for initializing hardware such as permanent memory, the CPU, the chipset, and the motherboard. Each PEIM has a single responsibility and focuses on a single initialization task. These drivers may come from different vendors.
This is a data structure composed of GUID/pointer pairs. PPIs are discovered by PEIMs through PEI services.
After the minimal initialization of the system needed for DXE, the PEI Foundation locates the DXE Foundation and passes control to it, dispatching it through a special PPI called the IPL (Initial Program Load).
This stage consists of C modules and a dependency-aware dispatcher. With main memory now available, the CPU, chipset, mainboard and other I/O devices are initialized in DXE and BDS. Initialization at this stage involves assigning EFI device paths to the hardware connected to the motherboard, and transferring configuration data to the hardware.[99]
BDS is a part of the DXE.[100][101]In this stage, boot devices are initialized, and UEFI drivers or Option ROMs of PCI devices are executed according to architecturally defined NVRAM variables.
This is the stage between boot device selection and hand-off to the OS. At this point one may enter a UEFI shell, or execute a UEFI application such as the OS boot loader.
The UEFI hands off to the operating system (OS) after ExitBootServices() is executed. A UEFI-compatible OS is now responsible for exiting boot services, triggering the firmware to unload all no-longer-needed code and data, leaving only runtime services code/data, e.g. SMM and ACPI.[102][failed verification]A typical modern OS will prefer to use its own programs (such as kernel drivers) to control hardware devices.
When a legacy OS is used, CSM will handle this call, ensuring the system is compatible with legacy BIOS expectations.
Intel's implementation of EFI is the Intel Platform Innovation Framework, codenamed Tiano. Tiano runs on Intel's XScale, Itanium, IA-32 and x86-64 processors, and is proprietary software, although a portion of the code has been released under the BSD license or Eclipse Public License (EPL) as TianoCore EDK II. TianoCore can be used as a payload for coreboot.[103]
Phoenix Technologies' implementation of UEFI is branded as SecureCore Technology (SCT).[104]American Megatrends offers its own UEFI firmware implementation known as Aptio,[105]while Insyde Software offers InsydeH2O,[106]and Byosoft offers ByoCore.
In December 2018, Microsoft released an open source version of its TianoCore EDK2-based UEFI implementation from the Surface line, Project Mu.[107]
An implementation of the UEFI API was introduced into the Universal Boot Loader (Das U-Boot) in 2017.[108]On the ARMv8 architecture, Linux distributions use the U-Boot UEFI implementation in conjunction with GNU GRUB for booting (e.g. SUSE Linux[109]); the same holds true for OpenBSD.[110]For booting from iSCSI, iPXE can be used as a UEFI application loaded by U-Boot.[111]
Intel's first Itanium workstations and servers, released in 2000, implemented EFI 1.02.
Hewlett-Packard's first Itanium 2 systems, released in 2002, implemented EFI 1.10; they were able to boot Windows, Linux, FreeBSD and HP-UX; OpenVMS added UEFI capability in June 2003.
In January 2006, Apple Inc. shipped its first Intel-based Macintosh computers. These systems used EFI instead of Open Firmware, which had been used on its previous PowerPC-based systems.[112]On 5 April 2006, Apple first released Boot Camp, which produces a Windows drivers disk and a non-destructive partitioning tool to allow the installation of Windows XP or Vista without requiring a reinstallation of Mac OS X (now macOS). A firmware update was also released that added BIOS compatibility to its EFI implementation. Subsequent Macintosh models shipped with the newer firmware.[113]
During 2005, more than one million Intel systems shipped with Intel's implementation of UEFI.[114][failed verification]New mobile, desktop and server products, using Intel's implementation of UEFI, started shipping in 2006. For instance, boards that use the Intel 945 chipset series use Intel's UEFI firmware implementation.
Since 2005, EFI has also been implemented on non-PC architectures, such as embedded systems based on XScale cores.[114]
The EDK (EFI Developer Kit) includes an NT32 target, which allows EFI firmware and EFI applications to run within a Windows application. However, no direct hardware access is allowed by EDK NT32, which means only a subset of EFI applications and drivers can be executed by the EDK NT32 target.
In 2008, more x86-64 systems adopted UEFI. While many of these systems still allowed booting only BIOS-based OSes via the Compatibility Support Module (CSM) (thus not appearing to the user to be UEFI-based), other systems started to allow booting UEFI-based OSes. Examples include the IBM x3450 server, MSI motherboards with ClickBIOS, and HP EliteBook notebook PCs.
In 2009, IBM shipped System x machines (x3550 M2, x3650 M2, iDataPlex dx360 M2) and BladeCenter HS22 with UEFI capability. Dell shipped PowerEdge T610, R610, R710, M610 and M710 servers with UEFI capability. More commercially available systems are mentioned in a UEFI whitepaper.[115]
In 2011, major vendors (such as ASRock, Asus, Gigabyte, and MSI) launched several consumer-oriented motherboards using the Intel 6-series LGA 1155 chipset and AMD 9 Series AM3+ chipsets with UEFI.[116]
With the release of Windows 8 in October 2012, Microsoft's certification requirements now require that computers include firmware that implements the UEFI specification. Furthermore, if the computer supports the "Connected Standby" feature of Windows 8 (which allows devices to have power management comparable to smartphones, with an almost instantaneous return from standby mode), then the firmware is not permitted to contain a Compatibility Support Module (CSM). As such, systems that support Connected Standby are incapable of booting legacy BIOS operating systems.[117][118]
In October 2017, Intel announced that it would remove legacy PC BIOS support from all its products by 2020, in favor of UEFI Class 3.[119]By 2019, all computers based on Intel platforms no longer have legacy PC BIOS support.
An operating system that can be booted from a (U)EFI is called a (U)EFI-aware operating system, defined by the (U)EFI specification. Here the term booted from a (U)EFI means directly booting the system using a (U)EFI operating system loader stored on any storage device. The default location for the operating system loader is <EFI_SYSTEM_PARTITION>\EFI\BOOT\BOOT<MACHINE_TYPE_SHORT_NAME>.EFI, where the short name of the machine type can be IA32, X64, IA64, ARM or AA64.[37]Some operating system vendors may have their own boot loaders. They may also change the default boot location.
The EDK2 Application Development Kit (EADK) makes it possible to use standard C library functions in UEFI applications. EADK can be freely downloaded from Intel's TianoCore UDK/EDK2 SourceForge project. As an example, a port of the Python interpreter is made available as a UEFI application by using the EADK.[158]The development has moved to GitHub since UDK2015.[159]
A minimalistic "hello, world" C program written using the EADK looks similar to its usual C counterpart:
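A representative sketch of such a program, reconstructed from TianoCore EADK/EDK II conventions (header and entry-point names may vary between releases):

```c
/* Reconstructed example; exact headers depend on the EDK II release. */
#include <Uefi.h>
#include <Library/UefiLib.h>
#include <Library/ShellCEntryLib.h>

EFI_STATUS
EFIAPI
ShellAppMain (
  IN UINTN Argc,
  IN CHAR16 **Argv
  )
{
  Print (L"hello, world\n");  /* Print() takes a CHAR16 (UTF-16) string */
  return EFI_SUCCESS;
}
```

The resulting `.efi` binary runs from the UEFI Shell rather than from an operating system.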
Numerous digital rights activists have protested UEFI. Ronald G. Minnich, a co-author of coreboot, and Cory Doctorow, a digital rights activist, have criticized UEFI as an attempt to remove the ability of the user to truly control the computer.[160][161]It does not solve the BIOS's long-standing problem of requiring two different drivers—one for the firmware and one for the operating system—for most hardware.[162]
The open-source project TianoCore also provides UEFI implementations.[163]TianoCore lacks the specialized firmware drivers and modules that initialize chipset functions, but it is one of many payload options of coreboot. The development of coreboot requires cooperation from chipset manufacturers to provide the specifications needed to develop initialization drivers.
In 2011, Microsoft announced that computers certified to run its Windows 8 operating system had to ship with Microsoft's public key enrolled and Secure Boot enabled, which implies that using UEFI is a requirement for these devices.[164][165]Following the announcement, the company was accused by critics and free software/open source advocates (including the Free Software Foundation) of trying to use the Secure Boot functionality of UEFI to hinder or outright prevent the installation of alternative operating systems such as Linux. Microsoft denied that the Secure Boot requirement was intended to serve as a form of lock-in, and clarified its requirements by stating that x86-based systems certified for Windows 8 must allow Secure Boot to enter custom mode or be disabled, but not on systems using the ARM architecture.[73][166]Windows 10 allows OEMs to decide whether or not Secure Boot can be managed by users of their x86 systems.[167]
Other developers raised concerns about the legal and practical issues of implementing support for Secure Boot on Linux systems in general. Former Red Hat developer Matthew Garrett noted that conditions in the GNU General Public License version 3 may prevent the use of the GNU GRand Unified Bootloader without a distribution's developer disclosing the private key (however, the Free Software Foundation has since clarified its position, assuring that the responsibility to make keys available was held by the hardware manufacturer),[168][122]and that it would also be difficult for advanced users to build custom kernels that could function with Secure Boot enabled without self-signing them.[166]Other developers suggested that signed builds of Linux with another key could be provided, but noted that it would be difficult to persuade OEMs to ship their computers with the required key alongside the Microsoft key.[6]
Several major Linux distributions have developed different implementations for Secure Boot. Garrett himself developed a minimal bootloader known as a shim, which is a precompiled, signed bootloader that allows the user to individually trust keys provided by Linux distributions.[169]Ubuntu 12.10 uses an older version of shim[which?]pre-configured for use with Canonical's own key that verifies only the bootloader and allows unsigned kernels to be loaded; developers believed that the practice of signing only the bootloader is more feasible, since a trusted kernel is effective at securing only the user space, and not the pre-boot state for which Secure Boot is designed to add protection. That also allows users to build their own kernels and use custom kernel modules as well, without the need to reconfigure the system.[122][170][171]Canonical also maintains its own private key to sign installations of Ubuntu pre-loaded on certified OEM computers that run the operating system, and also plans to enforce a Secure Boot requirement as well—requiring both a Canonical key and a Microsoft key (for compatibility reasons) to be included in their firmware. Fedora also uses shim,[which?]but requires that both the kernel and its modules be signed as well.[170]Shim has a Machine Owner Key (MOK) facility that can be used to sign locally compiled kernels and other software not signed by the distribution maintainer.[172]
It has been disputed whether the operating system kernel and its modules must be signed as well; while the UEFI specifications do not require it, Microsoft has asserted that their contractual requirements do, and that it reserves the right to revoke any certificates used to sign code that can be used to compromise the security of the system.[171]In Windows, if Secure Boot is enabled, all kernel drivers must be digitally signed; non-WHQL drivers may be refused loading. In February 2013, another Red Hat developer attempted to submit a patch to the Linux kernel that would allow it to parse Microsoft's authenticode signing using a master X.509 key embedded in PE files signed by Microsoft. However, the proposal was criticized by Linux creator Linus Torvalds, who attacked Red Hat for supporting Microsoft's control over the Secure Boot infrastructure.[173]
On 26 March 2013, the Spanish free software development group Hispalinux filed a formal complaint with the European Commission, contending that Microsoft's Secure Boot requirements on OEM systems were "obstructive" and anti-competitive.[174]
At the Black Hat conference in August 2013, a group of security researchers presented a series of exploits in specific vendor implementations of UEFI that could be used to exploit Secure Boot.[175]
In August 2016 it was reported that two security researchers had found the "golden key" security key Microsoft uses in signing operating systems.[176]Technically, no key was exposed; however, an exploitable binary signed by the key was. This allows any software to run as though it was genuinely signed by Microsoft and exposes the possibility of rootkit and bootkit attacks. This also makes patching the fault impossible, since any patch can be replaced (downgraded) by the (signed) exploitable binary. Microsoft responded in a statement that the vulnerability only exists in ARM architecture and Windows RT devices, and has released two patches; however, the patches do not (and cannot) remove the vulnerability, which would require key replacements in end user firmware to fix.[citation needed]
On March 1, 2023, researchers from the cybersecurity firm ESET reported "the first in-the-wild UEFI bootkit bypassing UEFI Secure Boot", named BlackLotus, in their published analysis, describing the theory behind its mechanics exploiting the patches that "do not (and cannot) remove the vulnerability".[177][178]
In August 2024, the Windows 11 and Windows 10 security updates applied the Secure Boot Advanced Targeting (SBAT) settings to the device's UEFI NVRAM, which caused some Linux distributions to fail to load. SBAT is a protocol supported in newer versions of Windows Boot Manager and shim that refuses to load buggy or vulnerable intermediate bootloaders (usually older versions of Windows Boot Manager and GRUB) in the boot process. The change was reverted the next month.[179]
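The generation-number idea behind SBAT can be sketched as follows. All identifiers are hypothetical; real SBAT data is a CSV section embedded in the signed binary, compared against revocation entries stored in a UEFI variable:

```python
# Toy SBAT sketch: each signed component advertises (name, generation)
# pairs; the verifier holds a policy of minimum acceptable generations.
# A component loads only if every listed generation meets the minimum,
# so revocation means raising the minimum rather than re-signing keys.
def sbat_allows(component: dict, policy: dict) -> bool:
    return all(gen >= policy.get(name, 0)
               for name, gen in component.items())

policy = {"grub": 3}                      # generations below 3 revoked
print(sbat_allows({"grub": 3}, policy))   # True: patched build
print(sbat_allows({"grub": 2}, policy))   # False: vulnerable build refused
```

This is what made the August 2024 incident possible: raising the policy minimum in NVRAM instantly refuses every older build, including ones a dual-boot Linux installation still relied on.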
Many Linux distributions support UEFI Secure Boot as of January 2025[update], such as RHEL (RHEL 7 and later), CentOS (CentOS 7 and later[180]), Ubuntu, Fedora, Debian (Debian 10 and later[181]), OpenSUSE, and SUSE Linux Enterprise.[182]
The increased prominence of UEFI firmware in devices has also led to a number of technical problems blamed on their respective implementations.[183]
Following the release of Windows 8 in late 2012, it was discovered that certain Lenovo computer models with Secure Boot had firmware that was hardcoded to allow only executables named "Windows Boot Manager" or "Red Hat Enterprise Linux" to load, regardless of any other setting.[184]Other problems were encountered by several Toshiba laptop models with Secure Boot that were missing certain certificates required for its proper operation.[183]
In January 2013, a bug surrounding the UEFI implementation on some Samsung laptops was publicized, which caused them to be bricked after installing a Linux distribution in UEFI mode. While potential conflicts with a kernel module designed to access system features on Samsung laptops were initially blamed (also prompting kernel maintainers to disable the module on UEFI systems as a safety measure), Matthew Garrett discovered that the bug was actually triggered by storing too many UEFI variables to memory, and that the bug could also be triggered under Windows under certain conditions. In conclusion, he determined that the offending kernel module had caused kernel message dumps to be written to the firmware, thus triggering the bug.[54][185][186]
|
https://en.wikipedia.org/wiki/Unified_Extensible_Firmware_Interface#Secure_Boot_criticism
|
Trusted Computing (TC) is a technology developed and promoted by the Trusted Computing Group.[1]The term is taken from the field of trusted systems and has a specialized meaning that is distinct from the field of confidential computing.[2]With Trusted Computing, the computer will consistently behave in expected ways, and those behaviors will be enforced by computer hardware and software.[1]Enforcing this behavior is achieved by loading the hardware with a unique encryption key that is inaccessible to the rest of the system and the owner.
TC is controversial as the hardware is not only secured for its owner, but also against its owner, leading opponents of the technology like free software activist Richard Stallman to deride it as "treacherous computing",[3][4]and certain scholarly articles to use scare quotes when referring to the technology.[5][6]
Trusted Computing proponents such as International Data Corporation,[7]the Enterprise Strategy Group[8]and Endpoint Technologies Associates[9]state that the technology will make computers safer, less prone to viruses and malware, and thus more reliable from an end-user perspective. They also state that Trusted Computing will allow computers and servers to offer improved computer security over that which is currently available. Opponents often state that this technology will be used primarily to enforce digital rights management policies (imposed restrictions to the owner) and not to increase computer security.[3][10]: 23
Chip manufacturers Intel and AMD, hardware manufacturers such as HP and Dell, and operating system providers such as Microsoft include Trusted Computing in their products if enabled.[11][12]The U.S. Army requires that every new PC it purchases comes with a Trusted Platform Module (TPM).[13][14]As of July 3, 2007, so does virtually the entire United States Department of Defense.[15]
Trusted Computing encompasses six key technology concepts, all of which are required for a fully Trusted system, that is, a system compliant with the TCG specifications:
The endorsement key is a 2048-bit RSA public and private key pair that is created randomly on the chip at manufacture time and cannot be changed. The private key never leaves the chip, while the public key is used for attestation and for encryption of sensitive data sent to the chip, as occurs during the TPM_TakeOwnership command.[16]
This key is used to allow the execution of secure transactions: every Trusted Platform Module (TPM) is required to be able to sign a random number (in order to allow the owner to show that he has a genuine trusted computer), using a particular protocol created by the Trusted Computing Group (the direct anonymous attestation protocol), in order to ensure its compliance with the TCG standard and to prove its identity. This makes it impossible for a software TPM emulator with an untrusted endorsement key (for example, a self-generated one) to start a secure transaction with a trusted entity. The TPM should be designed to make the extraction of this key by hardware analysis hard, but tamper resistance is not a strong requirement.
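The challenge-response idea behind this can be sketched in a few lines of Python. The sketch is purely illustrative: a real TPM signs the challenge with its 2048-bit RSA endorsement key (or runs the DAA protocol), whereas here a shared HMAC secret stands in for the key pair, and the `FakeTPM` class is a hypothetical stand-in for the chip.

```python
import hashlib
import hmac
import secrets

class FakeTPM:
    """Hypothetical stand-in for a TPM chip (HMAC replaces RSA)."""
    def __init__(self):
        # In a real TPM this key is burned in at manufacture time
        # and the private half never leaves the chip.
        self._ek = secrets.token_bytes(32)

    def verification_key(self):
        # A real TPM exports only the *public* EK; with HMAC the
        # verifier must share the secret -- a deliberate simplification.
        return self._ek

    def sign_challenge(self, nonce: bytes) -> bytes:
        return hmac.new(self._ek, nonce, hashlib.sha256).digest()

def verify(key: bytes, nonce: bytes, response: bytes) -> bool:
    expected = hmac.new(key, nonce, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

tpm = FakeTPM()
nonce = secrets.token_bytes(16)        # verifier's fresh random challenge
response = tpm.sign_challenge(nonce)   # only a genuine TPM can answer
assert verify(tpm.verification_key(), nonce, response)
# A replayed response fails against a new challenge:
assert not verify(tpm.verification_key(), secrets.token_bytes(16), response)
```

Because the challenge is a fresh random number, a software emulator without the real key cannot precompute or replay a valid response, which is the property the EK protocol relies on.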
Memory curtaining extends common memory protection techniques to provide full isolation of sensitive areas of memory—for example, locations containing cryptographic keys. Even the operating system does not have full access to curtained memory. The exact implementation details are vendor specific.
Sealed storage protects private information by binding it to platform configuration information, including the software and hardware being used. This means the data can be released only to a particular combination of software and hardware. Sealed storage can be used for DRM enforcement. For example, users who keep a song on their computer that has not been licensed for listening will not be able to play it. Currently, a user can locate the song, listen to it, send it to someone else, play it in the software of their choice, or back it up (and in some cases, use circumvention software to decrypt it). Alternatively, the user may use software to modify the operating system's DRM routines to have it leak the song data once, say, a temporary license is acquired. Using sealed storage, the song is securely encrypted using a key bound to the trusted platform module, so that only the unmodified and untampered music player on his or her computer can play it. In this DRM architecture, this might also prevent people from listening to the song after buying a new computer, or upgrading parts of their current one, except after explicit permission of the vendor of the song.
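A minimal sketch of the sealing idea, assuming a chip-held secret and a hash of the platform state as the binding value. The XOR keystream below is a toy cipher for payloads up to 32 bytes, not the symmetric encryption a real TPM uses, and the names (`chip_secret`, `good_state`) are illustrative.

```python
import hashlib
import hmac
import secrets

def seal(data: bytes, platform_measurement: bytes, chip_secret: bytes):
    # Derive a sealing key bound to the current software/hardware state.
    key = hashlib.sha256(chip_secret + platform_measurement).digest()
    # Toy XOR cipher (handles <= 32 bytes); real TPMs use proper crypto.
    stream = hashlib.sha256(key + b"stream").digest()
    blob = bytes(a ^ b for a, b in zip(data, stream))
    tag = hmac.new(key, blob, hashlib.sha256).digest()
    return blob, tag

def unseal(blob: bytes, tag: bytes, platform_measurement: bytes,
           chip_secret: bytes) -> bytes:
    key = hashlib.sha256(chip_secret + platform_measurement).digest()
    if not hmac.compare_digest(tag, hmac.new(key, blob, hashlib.sha256).digest()):
        raise PermissionError("platform state changed: refusing to unseal")
    stream = hashlib.sha256(key + b"stream").digest()
    return bytes(a ^ b for a, b in zip(blob, stream))

chip_secret = secrets.token_bytes(32)
good_state = hashlib.sha256(b"trusted-player-v1").digest()
blob, tag = seal(b"song-key", good_state, chip_secret)
assert unseal(blob, tag, good_state, chip_secret) == b"song-key"
```

If the platform measurement changes (a modified player, different hardware), the derived key changes too and the blob cannot be opened, which is exactly why sealed data can become inaccessible after an upgrade.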
Remote attestation allows changes to the user's computer to be detected by authorized parties. For example, software companies can identify unauthorized changes to software, including users modifying their software to circumvent commercial digital rights restrictions. It works by having the hardware generate a certificate stating what software is currently running. The computer can then present this certificate to a remote party to show that unaltered software is currently executing. Numerous remote attestation schemes have been proposed for various computer architectures, including Intel,[17] RISC-V,[18] and ARM.[19]
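In TCG terms, the "certificate" is a signed quote over Platform Configuration Register (PCR) values, which are built up by the extend operation as each component loads. The simplified sketch below (plain Python, signing of the quote omitted, component names hypothetical) shows why the final PCR value pins down both the content and the order of every measured component.

```python
import hashlib

def extend(pcr: bytes, component: bytes) -> bytes:
    # TCG-style "extend": the new PCR value chains the old value with
    # the hash of the newly loaded component, so both the content and
    # the order of measurements determine the final value.
    return hashlib.sha256(pcr + hashlib.sha256(component).digest()).digest()

# PCRs start zeroed at platform reset.
pcr = bytes(32)
for component in (b"firmware", b"bootloader", b"kernel"):
    pcr = extend(pcr, component)

# An identical, unmodified boot chain reproduces the same PCR value...
check = bytes(32)
for component in (b"firmware", b"bootloader", b"kernel"):
    check = extend(check, component)
assert check == pcr

# ...while any tampered component yields a different one.
tampered = bytes(32)
for component in (b"firmware", b"evil-bootloader", b"kernel"):
    tampered = extend(tampered, component)
assert tampered != pcr
```

A remote verifier that knows the expected PCR value for an approved software stack can therefore detect any deviation from it, without needing the full measurement log.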
Remote attestation is usually combined with public-key encryption so that the information sent can only be read by the programs that requested the attestation, and not by an eavesdropper.
To take the song example again, the user's music player software could send the song to other machines, but only if they could attest that they were running an authorized copy of the music player software. Combined with the other technologies, this provides a more restricted path for the music: encrypted I/O prevents the user from recording it as it is transmitted to the audio subsystem, memory locking prevents it from being dumped to regular disk files as it is being worked on, sealed storage curtails unauthorized access to it when saved to the hard drive, and remote attestation prevents unauthorized software from accessing the song even when it is used on other computers. To preserve the privacy of attestation responders, Direct Anonymous Attestation has been proposed as a solution, which uses a group signature scheme to prevent revealing the identity of individual signers.
Proofs of space (PoSpace) have been proposed for malware detection, by determining whether the L1 cache of a processor is empty (i.e., has enough space to evaluate the PoSpace routine without cache misses) or contains a routine that resisted being evicted.[20][21]
The Microsoft products Windows Vista, Windows 7, Windows 8 and Windows RT make use of a Trusted Platform Module to facilitate BitLocker Drive Encryption.[22] Other known applications with runtime encryption and the use of secure enclaves include the Signal messenger[23] and the e-prescription service ("E-Rezept")[24] by the German government.
Trusted Computing would allow companies to create a digital rights management (DRM) system which would be very hard to circumvent, though not impossible. An example is downloading a music file. Sealed storage could be used to prevent the user from opening the file with an unauthorized player or computer. Remote attestation could be used to authorize play only by music players that enforce the record company's rules. The music would be played from curtained memory, which would prevent the user from making an unrestricted copy of the file while it is playing, and secure I/O would prevent capturing what is being sent to the sound system. Circumventing such a system would require either manipulation of the computer's hardware, capturing the analogue (and thus degraded) signal using a recording device or a microphone, or breaking the security of the system.
New business models for the use of software (services) over the Internet may be boosted by the technology. By strengthening the DRM system, one could base a business model on renting programs for specific time periods or on "pay as you go" models. For instance, one could download a music file which could only be played a certain number of times before it becomes unusable, or the music file could be used only within a certain time period.
Trusted Computing could be used to combat cheating in online games. Some players modify their game copy in order to gain unfair advantages in the game; remote attestation, secure I/O and memory curtaining could be used to determine that all players connected to a server were running an unmodified copy of the software.[25]
Trusted Computing could be used to guarantee that participants in a grid computing system are returning the results of the computations they claim to have performed, instead of forging them. This would allow large-scale simulations to be run (say, a climate simulation) without expensive redundant computations to guarantee that malicious hosts are not undermining the results to achieve the conclusion they want.[26]
The Electronic Frontier Foundation and the Free Software Foundation argue that trust in the underlying companies is not deserved and that the technology puts too much power and control into the hands of those who design systems and software. They also state that it may cause consumers to lose anonymity in their online interactions, as well as mandating technologies Trusted Computing opponents say are unnecessary. They regard Trusted Computing as a possible enabler for future versions of mandatory access control, copy protection, and DRM.
Some security experts, such as Alan Cox[27] and Bruce Schneier,[28] have spoken out against Trusted Computing, believing it will provide computer manufacturers and software authors with increased control to impose restrictions on what users are able to do with their computers. There are concerns that Trusted Computing would have an anti-competitive effect on the IT market.[10]
There is concern amongst critics that it will not always be possible to examine the hardware components on which Trusted Computing relies, the Trusted Platform Module, which is the ultimate hardware system where the core 'root' of trust in the platform has to reside.[10] If not implemented correctly, it presents a security risk to overall platform integrity and protected data. The specifications, as published by the Trusted Computing Group, are open and are available for anyone to review. However, the final implementations by commercial vendors will not necessarily be subjected to the same review process. In addition, the world of cryptography can often move quickly, and hardware implementations of algorithms might become inadvertently obsolete. Trusting networked computers to controlling authorities rather than to individuals may create digital imprimaturs.
Cryptographer Ross Anderson, of the University of Cambridge, has great concerns that:[10]
TC can support remote censorship [...] In general, digital objects created using TC systems remain under the control of their creators, rather than under the control of the person who owns the machine on which they happen to be stored [...] So someone who writes a paper that a court decides is defamatory can be compelled to censor it — and the software company that wrote the word processor could be ordered to do the deletion if she refuses. Given such possibilities, we can expect TC to be used to suppress everything from pornography to writings that criticize political leaders.
He goes on to state that:
[...] software suppliers can make it much harder for you to switch to their competitors' products. At a simple level, Word could encrypt all your documents using keys that only Microsoft products have access to; this would mean that you could only read them using Microsoft products, not with any competing word processor. [...]
The [...] most important benefit for Microsoft is that TC will dramatically increase the costs of switching away from Microsoft products (such as Office) to rival products (such as OpenOffice). For example, a law firm that wants to change from Office to OpenOffice right now merely has to install the software, train the staff and convert their existing files. In five years' time, once they have received TC-protected documents from perhaps a thousand different clients, they would have to get permission (in the form of signed digital certificates) from each of these clients in order to migrate their files to a new platform. The law firm won't in practice want to do this, so they will be much more tightly locked in, which will enable Microsoft to hike its prices.
Anderson summarizes the case by saying:
The fundamental issue is that whoever controls the TC infrastructure will acquire a huge amount of power. Having this single point of control is like making everyone use the same bank, or the same accountant, or the same lawyer. There are many ways in which this power could be abused.
One of the early motivations behind trusted computing was a desire by media and software corporations for stricter DRM technology to prevent users from freely sharing and using potentially copyrighted or private files without explicit permission.
An example could be downloading a music file from a band: the band's record company could come up with rules for how the band's music can be used. For example, they might want the user to play the file only three times a day without paying additional money. Also, they could use remote attestation to only send their music to a music player that enforces their rules: sealed storage would prevent the user from opening the file with another player that did not enforce the restrictions. Memory curtaining would prevent the user from making an unrestricted copy of the file while it is playing, and secure output would prevent capturing what is sent to the sound system.
A user who wanted to switch to a competing program might find that it would be impossible for that new program to read old data, as the information would be "locked in" to the old program. It could also make it impossible for the user to read or modify their data except as specifically permitted by the software.
The law in many countries allows users certain rights over data whose copyright they do not own (including text, images, and other media), often under headings such as fair use or public interest. Depending on jurisdiction, these may cover issues such as whistleblowing, production of evidence in court, quoting or other small-scale usage, backups of owned media, and making a copy of owned material for personal use on other owned devices or systems. The steps implicit in trusted computing have the practical effect of preventing users from exercising these legal rights.[3]
A service that requires external validation or permission - such as a music file or game that requires connection with the vendor to confirm permission to play or use - is vulnerable to that service being withdrawn or no longer updated. A number of incidents have already occurred where users, having purchased music or video media, have found their ability to watch or listen to it suddenly stop due to vendor policy or cessation of service,[29][30][31] or server inaccessibility,[32] at times with no compensation.[33] Alternatively, in some cases the vendor refuses to provide services in the future, which leaves purchased material usable only on the present (and increasingly obsolete) hardware, so long as it lasts, but not on any hardware that may be purchased in the future.[29]
Some opponents of Trusted Computing advocate "owner override": allowing an owner who is confirmed to be physically present to allow the computer to bypass restrictions and use the secure I/O path. Such an override would allow remote attestation to a user's specification, e.g., to create certificates that say Internet Explorer is running, even if a different browser is used. Instead of preventing software change, remote attestation would indicate when the software has been changed without owner's permission.
Trusted Computing Group members have refused to implement owner override.[34] Proponents of trusted computing believe that owner override defeats the trust in other computers, since remote attestation can be forged by the owner. Owner override offers the security and enforcement benefits to a machine owner, but does not allow them to trust other computers, because their owners could waive rules or restrictions on their own computers. Under this scenario, once data is sent to someone else's computer, whether it be a diary, a DRM music file, or a joint project, that other person controls what security, if any, their computer will enforce on their copy of those data. This has the potential to undermine the applications of trusted computing to enforce DRM, control cheating in online games and attest to remote computations for grid computing.
Because a Trusted Computing equipped computer is able to uniquely attest to its own identity, it will be possible for vendors and others who possess the ability to use the attestation feature to zero in on the identity of the user of TC-enabled software with a high degree of certainty.
Such a capability is contingent on the reasonable chance that the user at some time provides user-identifying information, whether voluntarily, indirectly, or simply through inference of many seemingly benign pieces of data. (e.g. search records, as shown through simple study of the AOL search records leak[35]). One common way that information can be obtained and linked is when a user registers a computer just after purchase. Another common way is when a user provides identifying information to the website of an affiliate of the vendor.
While proponents of TC point out that online purchases and credit transactions could potentially be more secure as a result of the remote attestation capability, this may cause the computer user to lose expectations of anonymity when using the Internet.
Critics point out that this could have a chilling effect on political free speech, the ability of journalists to use anonymous sources, whistle blowing, political blogging and other areas where the public needs protection from retaliation through anonymity.
The TPM specification offers features and suggested implementations that are meant to address the anonymity requirement. By using a third-party Privacy Certification Authority (PCA), the information that identifies the computer could be held by a trusted third party. Additionally, the use of direct anonymous attestation (DAA), introduced in TPM v1.2, allows a client to perform attestation while not revealing any personally identifiable or machine information.
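The PCA indirection can be illustrated with a short sketch. This is not the real enrollment protocol (which involves several more messages and asymmetric certificates): HMAC secrets stand in for key pairs, and the names `ek`, `aik` and `pca_key` are hypothetical.

```python
import hashlib
import hmac
import secrets

# Keys held by the parties (HMAC secrets standing in for key pairs).
pca_key = secrets.token_bytes(32)   # held by the Privacy CA
ek = secrets.token_bytes(32)        # the platform's endorsement key

# The platform generates a fresh Attestation Identity Key (AIK) and asks
# the PCA to certify it. Only the PCA ever sees anything tied to the EK.
aik = secrets.token_bytes(32)
enrollment_proof = hmac.new(ek, aik, hashlib.sha256).digest()  # shown to PCA only
aik_cert = hmac.new(pca_key, aik, hashlib.sha256).digest()     # PCA's certificate

def verifier_accepts(aik: bytes, cert: bytes, pca_key: bytes) -> bool:
    # The verifier checks the PCA's certificate on the AIK; it learns
    # nothing that identifies the platform's EK, keeping the
    # attestation pseudonymous.
    return hmac.compare_digest(cert, hmac.new(pca_key, aik, hashlib.sha256).digest())

assert verifier_accepts(aik, aik_cert, pca_key)
```

The privacy property rests on the PCA: it can link AIKs to EKs, which is exactly the concentration of knowledge that DAA was later introduced to remove.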
The kind of data that must be supplied to the TTP in order to obtain trusted status is at present not entirely clear, but the TCG itself admits that "attestation is an important TPM function with significant privacy implications".[36] It is, however, clear that both static and dynamic information about the user's computer may be supplied (the EK public key) to the TTP (v1.1b);[37] it is not clear what data will be supplied to the "verifier" under v1.2. The static information will uniquely identify the endorser of the platform, the model, details of the TPM, and the fact that the platform (PC) complies with the TCG specifications. The dynamic information is described as software running on the computer.[37] If a program like Windows is registered in the user's name, this in turn will uniquely identify the user. Another dimension of privacy-infringing capability might also be introduced with this new technology: how often you use your programs might be information provided to the TTP. In an exceptional, but practical, situation where a user purchases a pornographic movie on the Internet, the purchaser nowadays must accept the fact that he has to provide credit card details to the provider, thereby possibly risking being identified. With the new technology, a purchaser might also risk someone finding out that he (or she) has watched this pornographic movie 1000 times. This adds a new dimension to the possible privacy infringement. The extent of data that will be supplied to the TTP/verifiers is at present not exactly known; only when the technology is implemented and used will we be able to assess the exact nature and volume of the data that is transmitted.
Trusted Computing requires that all software and hardware vendors follow the technical specifications released by the Trusted Computing Group in order to allow interoperability between different trusted software stacks. However, since at least mid-2006, there have been interoperability problems between the TrouSerS trusted software stack (released as open source software by IBM) and Hewlett-Packard's stack.[38] Another problem is that the technical specifications are still changing, so it is unclear which is the standard implementation of the trusted stack.
People have voiced concerns that trusted computing could be used to keep or discourage users from running software created by companies outside of a small industry group. Microsoft has received a great deal of bad press surrounding its Palladium software architecture, evoking comments such as "Few pieces of vaporware have evoked a higher level of fear and uncertainty than Microsoft's Palladium", "Palladium is a plot to take over cyberspace", and "Palladium will keep us from running any software not personally approved by Bill Gates".[39] The concerns about trusted computing being used to shut out competition exist within a broader framework of consumers being concerned about the use of bundling of products to obscure prices of products and to engage in anti-competitive practices.[5] Trusted Computing is seen as harmful or problematic to independent and open source software developers.[40]
In widely used public-key cryptography, creation of keys can be done on the local computer and the creator has complete control over who has access to them, and consequently over their own security policies.[41] In some proposed encryption-decryption chips, a private/public key is permanently embedded into the hardware when it is manufactured,[42] and hardware manufacturers would have the opportunity to record the key without leaving evidence of doing so. With this key it would be possible to access data encrypted with it, and to impersonate the machine.[43] It would be trivial for a manufacturer to give a copy of this key to the government or to software manufacturers, as the platform must go through steps so that it works with authenticated software.
Therefore, to trust anything that is authenticated by or encrypted by a TPM or a Trusted computer, an end user has to trust the company that made the chip, the company that designed the chip, the companies allowed to make software for the chip, and the ability and interest of those companies not to compromise the whole process.[44] A security breach breaking that chain of trust happened to the SIM card manufacturer Gemalto, which in 2010 was infiltrated by US and British spies, resulting in compromised security of cellphone calls.[45]
It is also critical that one be able to trust that the hardware manufacturers and software developers properly implement trusted computing standards. Incorrect implementation could be hidden from users, and thus could undermine the integrity of the whole system without users being aware of the flaw.[46]
Since 2004, most major manufacturers have shipped systems that have included Trusted Platform Modules, with associated BIOS support.[47] In accordance with the TCG specifications, the user must enable the Trusted Platform Module before it can be used.
The Linux kernel has included trusted computing support since version 2.6.13, and there are several projects to implement trusted computing for Linux. In January 2005, members of Gentoo Linux's "crypto herd" announced their intention of providing support for TC—in particular support for the Trusted Platform Module.[48] There is also a TCG-compliant software stack for Linux named TrouSerS, released under an open source license. There are several open-source projects that facilitate the use of confidential computing technology, including EGo, EdgelessDB and MarbleRun from Edgeless Systems, as well as Enarx, which originates from security research at Red Hat.
Some limited form of trusted computing can be implemented on current versions of Microsoft Windows with third-party software. Major cloud providers such as Microsoft Azure,[49] AWS[50] and Google Cloud Platform[51] have virtual machines with trusted computing features available. With the Intel Software Guard Extensions (SGX) and AMD Secure Encrypted Virtualization (SEV) processors, hardware is available for runtime memory encryption and remote attestation features.[52]
The Intel Classmate PC (a competitor to the One Laptop Per Child) includes a Trusted Platform Module.[53]
PrivateCore vCage software can be used to attest x86 servers with TPM chips.
The Mobile T6 secure operating system simulates TPM functionality in mobile devices using the ARM TrustZone technology.[54]
Samsung smartphones come equipped with Samsung Knox, which depends on features like Secure Boot, TIMA, MDM, TrustZone and SE Linux.[55]
|
https://en.wikipedia.org/wiki/Trusted_Computing
|
In information security, computational trust is the generation of trusted authorities or user trust through cryptography. In centralised systems, security is typically based on the authenticated identity of external parties. Rigid authentication mechanisms, such as public key infrastructures (PKIs)[1] or Kerberos,[2] have allowed this model to be extended to distributed systems within a few closely collaborating domains or within a single administrative domain. During recent years, computer science has moved from centralised systems to distributed computing. This evolution has several implications for the security models, policies and mechanisms needed to protect users' information and resources in an increasingly interconnected computing infrastructure.[3]
Identity-based security mechanisms cannot authorise an operation without authenticating the claiming entity. This means that no interaction can occur unless both parties are known by their authentication frameworks. Spontaneous interactions would, therefore, require a single trusted certificate authority (CA), or a few of them. In the present context, PKIs have not been considered, since they have known issues, and it is thus unlikely that they will establish themselves as a reference standard in the near future. A user who wishes to collaborate with another party can choose between enabling security and thereby disabling spontaneous collaboration, or disabling security and enabling spontaneous collaboration. It is fundamental that mobile users and devices can authenticate in an autonomous way without relying on a common authentication infrastructure. In order to face this problem, we need to examine the challenges introduced by "global computing",[4] a term coined by the EU for the future of the global information society, and to identify their impact on security.
Cryptocurrencies, such as Bitcoin, use methods such as proof of work (PoW) to achieve computational trust inside the transaction network.
Computational trust applies the human notion of trust to the digital world, which is seen as malicious rather than cooperative. The expected benefits, according to Marsh et al., result in the use of others' ability through delegation, and in increased cooperation in an open and less protected environment. Research in the area of computational mechanisms for trust and reputation in virtual societies is directed towards increased reliability and performance of digital communities.[5]
A trust-based decision in a specific domain is a multi-stage process. The first step of this process consists in identifying and selecting the proper input data, that is, the trust evidence. In general, these are domain-specific and are derived from an analysis conducted over the application involved. In the next step, a trust computation is performed on the evidence to produce trust values, that is, the estimation of the trustworthiness of entities in that particular domain. The selection of evidence and the subsequent trust computation are informed by a notion of trust defined in the trust model. Finally, the trust decision is taken by considering the computed values and exogenous factors, like disposition or risk assessments.
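The stages above can be sketched as follows; the evidence format, the trust function and the risk-scaled threshold are all hypothetical choices made for illustration, not part of any specific trust model.

```python
def trust_value(evidence):
    # Stage 2: compute a trust estimate from domain-specific evidence,
    # here simply the fraction of positive past interactions.
    if not evidence:
        return 0.5  # no evidence: neutral prior
    return sum(1 for outcome in evidence if outcome) / len(evidence)

def decide(evidence, risk):
    # Stage 3: combine the computed trust value with an exogenous risk
    # factor; riskier operations demand a higher trust threshold.
    threshold = 0.5 + 0.4 * risk
    return trust_value(evidence) >= threshold

# Hypothetical interaction log: True = successful interaction.
history = [True, True, False, True, True]      # trust value 0.8
assert decide(history, risk=0.2) is True       # 0.8 >= 0.58
assert decide(history, risk=0.9) is False      # 0.8 <  0.86
```

The same evidence thus yields different decisions depending on the stakes, which is the point of keeping risk assessment outside the trust computation itself.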
These concepts have heightened relevance in the last decade in computer science, particularly in the area of distributed artificial intelligence. The multi-agent system paradigm and the growth of e-commerce have increased interest in trust and reputation. In fact, trust and reputation systems have been recognized as key factors for electronic commerce. These systems are used by intelligent software agents as an incentive in decision-making, when deciding whether or not to honor contracts, and as a mechanism to search for trustworthy exchange partners. In particular, reputation is used in electronic markets as a trust-enforcing mechanism or as a method to avoid cheaters and frauds.[6]
Another area of application of these concepts in agent technology is teamwork and cooperation.[7] Several definitions of the human notion of trust have been proposed in recent years in different domains, from sociology and psychology to political and business science. These definitions may even change in accordance with the application domain. For example, Romano's recent definition[8] tries to encompass the previous work in all these domains:
Trust is a subjective assessment of another’s influence in terms of the extent of one’s perception about the quality and significance of another’s impact over one’s outcomes in a given situation, such that one’s expectation of, openness to, and inclination toward such influence provide a sense of control over the potential outcomes of the situation.
Trust and reputation both have a social value. When someone is trustworthy, that person may be expected to perform in a beneficial, or at least not in a suspicious, way that assures others, with high probability, of good collaborations with him. On the contrary, when someone appears not to be trustworthy, others refrain from collaborating, since there is a lower probability that these collaborations will be successful.[9]
Trust is a particular level of the subjective probability with which an agent assesses that another agent or group of agents will perform a particular action, both before he can monitor such action (or independently of his capacity ever to be able to monitor it) and in a context in which it affects his own action.
Trust is strongly connected to confidence, and it implies some degree of uncertainty, hopefulness or optimism. Eventually, Marsh[10] addressed the issue of formalizing trust as a computational concept in his PhD thesis. His trust model is based on social and psychological factors.
Many proposals have appeared in the literature; a selection of computational trust and reputation models, representing a good sample of the current research, is presented here.[11]
Trust and reputation can be analysed from different points of view and can be applied in many situations. The following classification is based on the particular characteristics of these models and the environment in which they evolve.
Trust and reputation models can be characterized as:
In models based on a cognitive approach, trust and reputation are made up of underlying beliefs and are a function of the degree of these beliefs.[12] The mental states that lead to trusting another agent or to assigning a reputation are an essential part of the model, as are the mental consequences of the decision and of the act of relying on another agent;
In neurological trust models, the interaction between affective and cognitive states is modeled at a neurological level, using theories on the embodiment of emotions.[13] In these models, the trust dynamics relate to experiences with (external) sources, from both a cognitive and an affective perspective. More specifically, to capture the feeling of the emotion associated with a mental state, converging recursive body loops are modeled. In addition, based on Hebbian learning (for the strength of the connections to the emotional responses), different adaptation processes are introduced, which are inspired by the Somatic Marker Hypothesis.[14]
Trust and reputation are considered subjective probabilities by which individual A expects individual B to perform a given action on which A's welfare depends.[15]
In this approach, trust and reputation are not the result of a mental state of the agent in a cognitive sense, but the result of a more pragmatic game with utility functions and the numerical aggregation of past interactions.
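One widely cited example of this numerical style is the beta reputation approach, which estimates trust as the expected value of a Beta distribution over past positive and negative interactions. A minimal sketch:

```python
def beta_trust(r: int, s: int) -> float:
    # Expected value of Beta(r + 1, s + 1): r positive and s negative
    # past interactions, with a uniform prior when there is no history.
    return (r + 1) / (r + s + 2)

assert beta_trust(0, 0) == 0.5    # no history: neutral prior
assert beta_trust(8, 2) == 0.75   # mostly positive outcomes
assert beta_trust(2, 8) == 0.25   # mostly negative outcomes
# More evidence with the same ratio pulls the estimate further
# from the neutral prior:
assert beta_trust(80, 20) > beta_trust(8, 2)
```

No beliefs or mental states appear anywhere; the trust value is purely a function of aggregated outcomes, which is exactly the contrast with the cognitive approach described earlier.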
It is possible to sort models by considering the information sources used to compute trust and reputation values. The traditional information sources are direct experiences and witness information, but recent models have started to consider the connection between information and the sociological aspects of agents' behavior. When a model contains several information sources, it can increase the reliability of the results, but, conversely, it can also increase its complexity.
Direct experience is the most relevant and reliable information source for a trust/reputation model. Two types of direct experience can be recognized:
Witness information, also called indirect information, is what comes from the experience of other members of community. It can be based on their own direct experience or on other data they gathered from others’ experience. Witness information is usually the most abundant but its use is complex for trust and reputation modelling. In fact, it introduces uncertainty and agents can manipulate or hide parts of the information for their own benefit.
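A toy sketch of one way to merge the two sources, discounting each witness report by the trust placed in the witness itself; the weighting scheme and the 0.7/0.3 split are arbitrary illustration values, not taken from any particular model.

```python
def combined_trust(direct, witness_reports, direct_weight=0.7):
    """direct: own trust estimate in [0, 1].
    witness_reports: list of (reported_value, trust_in_witness) pairs."""
    if witness_reports:
        total = sum(w for _, w in witness_reports)
        # Discount each report by how much the witness itself is trusted.
        indirect = (sum(v * w for v, w in witness_reports) / total
                    if total else 0.5)
    else:
        indirect = direct  # nothing to merge: fall back to direct experience
    return direct_weight * direct + (1 - direct_weight) * indirect

# A trusted witness (weight 0.8) and a barely trusted one (weight 0.1):
reports = [(0.9, 0.8), (0.2, 0.1)]
value = combined_trust(0.6, reports)
assert 0.0 <= value <= 1.0
# The discounted indirect estimate is dominated by the trusted witness.
indirect = (0.9 * 0.8 + 0.2 * 0.1) / 0.9
assert abs(value - (0.7 * 0.6 + 0.3 * indirect)) < 1e-6
```

Weighting direct experience more heavily reflects the point made above: witness information is abundant but less reliable, since witnesses can manipulate or hide parts of it.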
People that belong to a community establish different types of relations. Each individual plays one or several roles in that society, influencing their behavior and their interaction with other people. In a multi-agent system, where there are plenty of interactions, the social relations among agents are a simplified reflection of the more complex relations of their human counterparts.[16] Only a few trust and reputation models adopt this sociological information, using techniques like social network analysis. These methods study social relationships among individuals in a society; they emerged as a set of methods for the analysis of social structures, methods that specifically allow an investigation of the relational aspects of these structures.[17]
Prejudice is another, though uncommon, mechanism that influences trust and reputation. According to this method, an individual is given properties of a particular group that make him recognisable as a member. These can be signs such as a uniform, a definite behavior, etc.[18]
As most people use the word today, prejudice refers to a negative or hostile attitude towards another social group, often racially defined. This negative connotation has to be revised, however, when the notion is applied to agent communities: the sets of signs used in computational trust and reputation models usually lie outside ethical discussion, unlike the signs used in human societies, such as skin color or gender.
Most of the literature in the cognitive and social sciences claims that humans exhibit non-rational, biased behavior with respect to trust. Recently, biased human trust models have been designed, analyzed and validated against empirical data. The results show that such biased trust models predict human trust significantly better than unbiased ones.[19][20]
The most relevant sources of information considered by the trust and reputation models presented above are direct experience and witness information. In e-markets, sociological information is almost non-existent; to increase the efficiency of current trust and reputation models, it should be considered. However, there is no reason to increase the complexity of models by introducing trust evidence if they are later used in an environment where those capabilities cannot be realised. Aggregating more trust and reputation evidence is useful in a computational model, but it can increase complexity and make a general solution difficult. Several models depend on the characteristics of the environment, and a possible solution is the use of adaptive mechanisms that can modify how different sources of information are combined in a given environment. Many definitions of trust and reputation have been presented, and several works give meaning to both concepts.[21][22][23][24]
There is a relation between the two concepts that should be considered in depth: reputation is a concept that helps to build trust in others. Nowadays, game theory is the predominant paradigm used to design computational trust and reputation models, in all likelihood because a significant number of economists and computer scientists with a strong background in game theory and artificial intelligence techniques are working in multi-agent and e-commerce contexts. Game-theoretical models produce good results but may become too restrictive as the complexity of the agents, in terms of social relations and interactions, increases. New possibilities should be explored, for example merging cognitive approaches with game-theoretical ones. Apart from that, more trust evidence should be considered, as well as time-sensitive trust metrics; works such as [25][26] represent a first step toward encouraging the improvement of computational trust.[27]
An important issue in modeling trust is the transferability of trust judgements between agents. Social scientists generally consider unqualified trust values not to be transferable, but a more pragmatic approach would conclude that qualified trust judgments are worth transferring as long as decisions taken considering others' opinions are better than those taken in isolation.
In [28] the authors investigated the problem of trust transferability in open distributed environments, proposing a translation mechanism able to make information exchanged from one agent to another more accurate and useful.
Currently, there is no commonly accepted evaluation framework or benchmark that would allow a comparison of the models under a set of representative and common conditions. A game-theoretic approach in this direction has been proposed,[29] in which the configuration of a trust model is optimized assuming attackers with optimal attack strategies; this then allows the expected utility of different trust models to be compared. Similarly, a model-based analytical framework for predicting the effectiveness of reputation mechanisms against arbitrary attack models in arbitrary system models has been proposed for peer-to-peer systems.[30]
|
https://en.wikipedia.org/wiki/Computational_trust
|
Power analysis is a form of side channel attack in which the attacker studies the power consumption of a cryptographic hardware device. These attacks rely on basic physical properties of the device: semiconductor devices are governed by the laws of physics, which dictate that changes in voltages within the device require very small movements of electric charges (currents). By measuring those currents, it is possible to learn a small amount of information about the data being manipulated.
Simple power analysis (SPA) involves visually interpreting power traces, or graphs of electrical activity over time. Differential power analysis (DPA) is a more advanced form of power analysis, which can allow an attacker to compute the intermediate values within cryptographic computations through statistical analysis of data collected from multiple cryptographic operations. SPA and DPA were introduced to the open cryptography community in 1998 by Paul Kocher, Joshua Jaffe and Benjamin Jun.[1]
In cryptography, a side channel attack is used to extract secret data from some secure device (such as a smart card, tamper-resistant "black box", or integrated circuit). Side-channel analysis typically attempts to extract cryptographic keys and other secret information from the device non-invasively. A simple example is the German tank problem: the serial numbers of tanks revealed details of their production. In physical security, a non-invasive attack would be similar to lock-picking, where a successful attack leaves no trace of the attacker being present.
Simple power analysis (SPA) is a side-channel attack which involves visual examination of graphs of the current used by a device over time. Variations in power consumption occur as the device performs different operations. For example, different instructions performed by a microprocessor will have differing power consumption profiles.
Code flow that depends on a secret value will thus leak that flow, and with it the secret value, through power-consumption monitoring. As a simple example, consider a password check as follows:
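The code listing referenced here did not survive extraction; the following Python sketch reconstructs the usual textbook form of such a data-dependent comparison (an assumption, not the article's original listing):

```python
def insecure_check(entered: str, secret: str) -> bool:
    # Returns at the first mismatching character, so the number of loop
    # iterations (visible in a power trace) reveals how many leading
    # characters of the guess are correct.
    if len(entered) != len(secret):
        return False
    for a, b in zip(entered, secret):
        if a != b:
            return False
    return True
```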
This password check potentially contains a timing attack, since the execution time is not constant, and observing the power consumption makes the number of loop iterations clear. Mitigations are possible, however. The function can avoid giving the user an exploitable result, for example by adding a compensating delay before the response is returned. Alternatively, the code can be structured to complete all the loops regardless of where the input first mismatches the secret password. Sample code of this kind appears in the algorithm section of Timing attack.
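The loop-completing mitigation mentioned above can be sketched as a comparison that always processes every character, accumulating differences without branching on secret data:

```python
def constant_time_check(entered: str, secret: str) -> bool:
    # Processes all characters regardless of where a mismatch occurs,
    # so the loop count no longer depends on the secret.
    if len(entered) != len(secret):
        return False
    diff = 0
    for a, b in zip(entered, secret):
        diff |= ord(a) ^ ord(b)  # becomes non-zero if any character differs
    return diff == 0
```

In practice one would use a vetted library primitive for this (such as Python's `hmac.compare_digest`) rather than hand-rolling the comparison.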
Similarly, squaring and multiplication operations in RSA implementations can often be distinguished, enabling an adversary to compute the secret key. Even if the magnitude of the variations in power consumption is small, standard digital oscilloscopes can easily show the data-induced variations. Frequency filters and averaging functions (such as those built into oscilloscopes) are often used to filter out high-frequency components.
Differential power analysis (DPA) is a side-channel attack which involves statistically analyzing power consumption measurements from a cryptosystem. The attack exploits biases in the varying power consumption of microprocessors or other hardware while performing operations using secret keys. DPA attacks have signal processing and error correction properties which can extract secrets from measurements that contain too much noise to be analyzed using simple power analysis. Using DPA, an adversary can obtain secret keys by analyzing power consumption measurements from multiple cryptographic operations performed by a vulnerable smart card or other device.
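A minimal sketch of the statistical core of DPA — the classic single-bit difference-of-means test — assuming traces are given as equal-length lists of power samples and the selection bits come from some key hypothesis:

```python
def dpa_difference_of_means(traces, selection_bits):
    """Partition power traces by a predicted intermediate bit and return
    the pointwise difference of the group means. A pronounced spike at
    some sample index suggests the key guess behind the selection
    function was correct."""
    ones = [t for t, b in zip(traces, selection_bits) if b]
    zeros = [t for t, b in zip(traces, selection_bits) if not b]
    def mean_at(group, i):
        return sum(t[i] for t in group) / len(group)
    n_points = len(traces[0])
    return [mean_at(ones, i) - mean_at(zeros, i) for i in range(n_points)]
```

With enough traces, noise averages out in both partitions while the data-dependent component survives only where the predicted bit actually influenced the power draw.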
High-Order Differential Power Analysis(HO-DPA) is an advanced form of DPA attack. HO-DPA enables multiple data sources and different time offsets to be incorporated in the analysis. HO-DPA is less widely practiced than SPA and DPA, as the analysis is complex and most vulnerable devices can be broken more easily with SPA or DPA.[2]
Power analysis provides a way to "see inside" otherwise 'tamperproof' hardware. For example, DES's key schedule involves rotating 28-bit key registers. Many implementations check the least significant bit to see if it is a 1. If so, the device shifts the register right and prepends the 1 at the left end. If the bit is a zero, the register is shifted right without prepending a 1. Power analysis can distinguish between these processes, enabling an adversary to determine the bits of the secret key.
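The branch described above can be sketched as follows; the data-dependent `if` is exactly what an SPA trace distinguishes (a simplified illustration of the described behavior, not actual DES key-schedule code):

```python
def rotate_right_28(reg: int) -> int:
    # Branching on the register's least significant bit makes the two
    # cases consume different power, leaking one key bit per rotation.
    if reg & 1:                        # bit is 1: shift and prepend a 1
        return (reg >> 1) | (1 << 27)
    return reg >> 1                    # bit is 0: shift only
```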
Implementations of algorithms such as AES and triple DES that are believed to be mathematically strong may be trivially breakable using power analysis attacks. As a result, power analysis attacks combine elements of algorithmic cryptanalysis and implementation security.
For applications where devices may fall into the physical possession of an adversary, protection against power analysis is generally a major design requirement. Power analyses have also been reportedly used against conditional access modules used in pay television systems.[3]
The equipment necessary for performing power analysis attacks is widely available. For example, most digital storage oscilloscopes provide the necessary data collection functionality, and the data analysis is typically performed using conventional PCs.
Power analysis attacks cannot generally be detected by a device, since the adversary's monitoring is normally passive. In addition, the attack is non-invasive. As a result, physical enclosures, auditing capabilities, and attack detectors are ineffective. Instead, cryptosystem engineers must ensure that devices' power variations do not reveal information usable by adversaries.
Simple power analysis can easily distinguish the outcome of conditional branches in the execution of cryptographic software, since a device does different things (consuming different power) depending on whether the conditional branch is taken. For this reason, care should be taken to ensure there are no secret values which affect the conditional branches within cryptographic software implementations. Other sources of variation, such as microcode differences, branches introduced by compilers, and power consumption variations in multipliers, also commonly lead to SPA vulnerabilities.
Differential power analysis is more difficult to prevent, since even small biases in the power consumption can lead to exploitable weaknesses. Some countermeasure strategies involve algorithmic modifications such that the cryptographic operations occur on data that is related to the actual value by some mathematical relationship that survives the cryptographic operation. One approach involves blinding parameters to randomize their value. Other countermeasure strategies to reduce the effectiveness of DPA attacks involve hardware modifications: varying the chip's internal clock frequency has been considered as a way to desynchronize electric signals, which in turn has led to algorithmic enhancements of traditional DPA.[4][5]
|
https://en.wikipedia.org/wiki/Power_analysis
|
In cryptography, electromagnetic attacks are side-channel attacks performed by measuring the electromagnetic radiation emitted from a device and performing signal analysis on it. These attacks are a more specific type of what is sometimes referred to as Van Eck phreaking, with the intention to capture encryption keys. Electromagnetic attacks are typically non-invasive and passive, meaning that they can be performed by observing the normal functioning of the target device without causing physical damage.[1] However, an attacker may get a better signal with less noise by depackaging the chip and collecting the signal closer to the source. These attacks are successful against cryptographic implementations that perform different operations based on the data currently being processed, such as the square-and-multiply implementation of RSA. Different operations emit different amounts of radiation, and an electromagnetic trace of encryption may show the exact operations being performed, allowing an attacker to retrieve full or partial private keys.
Like many other side-channel attacks, electromagnetic attacks are dependent on the specific implementation of the cryptographic protocol and not on the algorithm itself. Electromagnetic attacks are often done in conjunction with other side-channel attacks, like power analysis attacks.
All electronic devices emit electromagnetic radiation. Because every wire that carries current creates a magnetic field, electronic devices create small magnetic fields when in use. These magnetic fields can unintentionally reveal information about the operation of a device if not properly designed. Because all electronic devices are affected by this phenomenon, the term 'device' can refer to anything from a desktop computer, to a mobile phone, to a smart card.
Electromagnetic waves are a type of wave that originate from charged particles, are characterized by varying wavelength and are categorized along the electromagnetic spectrum. Any device that uses electricity will emit electromagnetic radiation due to the magnetic field created by charged particles moving along a medium. For example, radio waves are emitted by electricity moving along a radio transmitter, or even from a satellite.
In the case of electromagnetic side-channel attacks, attackers are often looking at electromagnetic radiation emitted by computing devices, which are made up of circuits. Electronic circuits consist of semiconducting materials upon which billions of transistors are placed. When a computer performs computations, such as encryption, electricity running through the transistors creates a magnetic field and electromagnetic waves are emitted.[2][3][4]
Electromagnetic waves can be captured using an induction coil, and an analog-to-digital converter can then sample the waves at a given clock rate and convert the trace to a digital signal for further processing by computer.
The electronic device performing the computations is synced with a clock running at frequencies on the order of megahertz (MHz) to gigahertz (GHz). However, due to hardware pipelining and the complexity of some instructions, some operations take multiple clock cycles to complete.[5] Therefore, it is not always necessary to sample the signal at such a high clock rate. It is often possible to get information on all or most of the operations while sampling on the order of kilohertz (kHz). Different devices leak information at different frequencies. For example, Intel's Atom processor will leak keys during RSA and AES encryption at frequencies between 50 MHz and 85 MHz.[6] Android version 4.4's Bouncy Castle library implementation of ECDSA is vulnerable to key extraction side channel attacks around the 50 kHz range.[7]
Every operation performed by a computer emits electromagnetic radiation, and different operations emit radiation at different frequencies. In electromagnetic side-channel attacks, an attacker is only interested in the few frequencies at which encryption is occurring. Signal processing is responsible for isolating these frequencies from the vast multitude of extraneous radiation and noise. To isolate certain frequencies, a bandpass filter, which blocks frequencies outside of a given range, must be applied to the electromagnetic trace. Sometimes, the attacker does not know at which frequencies encryption is performed. In this case, the trace can be represented as a spectrogram, which can help determine which frequencies are most prevalent at different points of execution. Depending on the device being attacked and the level of noise, several filters may need to be applied.
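When only one narrow band is of interest, a full filter bank or FFT is unnecessary. The Goertzel algorithm, sketched below in plain Python, measures the power of a single frequency bin; this is an illustrative aside (the article itself only requires some bandpass filter), and the sample data in the usage test is synthetic:

```python
import math

def goertzel_power(samples, sample_rate, target_freq):
    """Return the power of `samples` at `target_freq` using the
    Goertzel algorithm, a cheap single-bin alternative to an FFT."""
    n = len(samples)
    k = round(n * target_freq / sample_rate)  # nearest DFT bin index
    w = 2.0 * math.pi * k / n
    coeff = 2.0 * math.cos(w)
    s_prev = s_prev2 = 0.0
    for x in samples:
        # Second-order recursion that accumulates the bin's response.
        s_prev2, s_prev = s_prev, x + coeff * s_prev - s_prev2
    return s_prev2 ** 2 + s_prev ** 2 - coeff * s_prev * s_prev2
```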
Electromagnetic attacks can be broadly separated into simple electromagnetic analysis (SEMA) attacks and differential electromagnetic analysis (DEMA) attacks.
In simple electromagnetic analysis (SEMA) attacks, the attacker deduces the key directly by observing the trace. It is very effective against asymmetric cryptography implementations.[8] Typically, only a few traces are needed, though the attacker needs to have a strong understanding of the cryptographic device and of the implementation of the cryptographic algorithm. An implementation vulnerable to SEMA attacks will perform a different operation depending on whether the bit of the key is 0 or 1, which will use different amounts of power and/or different chip components. This method is prevalent in many different types of side-channel attacks, in particular power analysis attacks. Thus, the attacker can observe the entire computation of encryption and can deduce the key.
For example, a common attack on asymmetric RSA relies on the fact that the encryption steps rely on the value of the key bits. Every bit is processed with a square operation and then a multiplication operation if and only if the bit is equal to 1. An attacker with a clear trace can deduce the key simply by observing where the multiplication operations are performed.
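This leak can be sketched by recording the operation sequence explicitly; in a real SEMA attack, the analogous squaring/multiplication pattern is read directly off the electromagnetic trace (an illustrative sketch, not production RSA code):

```python
def square_and_multiply_trace(base, exp, mod):
    # Left-to-right square-and-multiply. The recorded sequence of
    # squarings ('S') and multiplications ('M') reveals the exponent:
    # every 'M' marks a '1' bit.
    result, ops = 1, []
    for bit in bin(exp)[2:]:
        result = result * result % mod
        ops.append('S')
        if bit == '1':
            result = result * base % mod
            ops.append('M')
    return result, ''.join(ops)
```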
In some cases, simple electromagnetic analysis is not possible or does not provide enough information. Differential electromagnetic analysis (DEMA) attacks are more complex, but are effective against symmetric cryptography implementations, against which SEMA attacks are not.[6] Additionally, unlike SEMA, DEMA attacks do not require much knowledge about the device being attacked.
Although the fact that circuits emitting high-frequency signals may leak secret information had been known to the NSA since 1982, it remained classified until 2000,[9] right around the time the first electromagnetic attack against encryption was shown by researchers.[10] Since then, many more complex attacks have been introduced.
Smart cards, often colloquially referred to as "chip cards", were designed to provide a more secure financial transaction than a traditional credit card. They contain simple embedded integrated circuits designed to perform cryptographic functions.[11] They connect directly to a card reader, which provides the power necessary to perform an encrypted financial transaction. Many side-channel attacks have been shown to be effective against smart cards because they obtain their power supply and clock directly from the card reader. By tampering with a card reader, it is simple to collect traces and perform side-channel attacks. Other works, however, have also shown that smart cards are vulnerable to electromagnetic attacks.[12][13][14]
Field-programmable gate arrays (FPGAs) have been commonly used to implement cryptographic primitives in hardware to increase speed. These hardware implementations are just as vulnerable as software-based primitives. In 2005, an implementation of elliptic curve encryption was shown to be vulnerable to both SEMA and DEMA attacks.[15] The ARIA block cipher is a common primitive implemented with FPGAs that has been shown to leak keys.[16]
In contrast to smart cards, which are simple devices performing a single function, personal computers are doing many things at once. Thus, it is much more difficult to perform electromagnetic side-channel attacks against them, due to high levels of noise and fast clock rates. Despite these issues, researchers in 2015 and 2016 showed attacks against a laptop using a near-field magnetic probe. The resulting signal, observed for only a few seconds, was filtered, amplified, and digitized for offline key extraction. Most attacks require expensive, lab-grade equipment and require the attacker to be extremely close to the victim computer.[17][18] However, some researchers were able to show attacks using cheaper hardware and from distances of up to half a meter.[19] These attacks, however, required the collection of more traces than the more expensive attacks.
Smartphones are of particular interest for electromagnetic side-channel attacks. Since the advent of mobile phone payment systems such as Apple Pay, e-commerce systems have become increasingly commonplace. Likewise, the amount of research dedicated to mobile phone security side channel attacks has also increased.[20] Currently most attacks are proofs of concept that use expensive lab-grade signal processing equipment.[21] One of these attacks demonstrated that a commercial radio receiver could detect mobile phone leakage up to three meters away.[22]
However, attacks using low-end consumer-grade equipment have also proven successful. By using an external USB sound card and an induction coil salvaged from a wireless charging pad, researchers were able to extract a user's signing key from Android's OpenSSL and Apple's CommonCrypto implementations of ECDSA.[20][21][22]
Widely used theoretical encryption schemes are mathematically secure, yet this type of security does not consider their physical implementations and thus does not necessarily protect against side-channel attacks. The vulnerability therefore lies in the code itself, and it is the specific implementation that is shown to be insecure. Fortunately, many of the vulnerabilities shown have since been patched. Vulnerable implementations include, but are not limited to, the following:
The attacks described thus far have mainly focused on the use of induction to detect unintended radiation. However, far-field communication technologies like those of AM radios can also be used for side-channel attacks, although no key extraction methods for far-field signal analysis have been demonstrated.[23] A rough characterization of potential adversaries using this attack therefore ranges from highly educated individuals to low- to medium-funded cartels. The following are a few possible scenarios:
Point of sale systems that accept payment from mobile phones or smart cards are vulnerable. Induction coils can be hidden on these systems to record financial transactions from smart cards or mobile phone payments. With keys extracted, a malicious attacker could forge their own card or make fraudulent charges with the private key. Belgarric et al. propose a scenario where mobile payments are performed with bitcoin transactions. Since the Android implementation of the bitcoin client uses ECDSA, the signing key can be extracted at the point of sale.[7] These types of attacks are only slightly more complex than the magnetic stripe skimmers currently used on traditional magnetic stripe cards.
Many public venues such as Starbucks locations already offer free public wireless charging pads.[24] It was previously shown that the same coils used in wireless charging can be used for detection of unintended radiation. Therefore, these charging pads pose a potential hazard: malicious charging pads might attempt to extract keys in addition to charging a user's phone. When coupled with the packet-sniffing capabilities of public Wi-Fi networks, the keys extracted could be used to perform man-in-the-middle attacks on users. If far-field attacks are discovered, an attacker only needs to point an antenna at a victim to perform these attacks; the victim need not be actively charging their phone on one of these public pads.
Several countermeasures against electromagnetic attacks have been proposed, though there is no one perfect solution. Many of the following countermeasures will make electromagnetic attacks harder, not impossible.
One of the most effective ways to prevent electromagnetic attacks is to make it difficult for an attacker to collect an electromagnetic signal at the physical level. Broadly, the hardware designer could design the encryption hardware to reduce signal strength[25] or to protect the chip. Circuit and wire shielding, such as a Faraday cage, is effective in reducing the signal, as is filtering the signal or introducing extraneous noise to mask it. Additionally, most electromagnetic attacks require the attacking equipment to be very close to the target, so distance is an effective countermeasure. Circuit designers can also use certain glues or design components in order to make it difficult or impossible to depackage the chip without destroying it.
Recently, white-box modeling was utilized to develop a low-overhead generic circuit-level countermeasure[26] against both electromagnetic and power side-channel attacks. To minimize the effects of the higher-level metal layers in an IC acting as more efficient antennas,[27] the idea is to embed the crypto core within a signature suppression circuit,[28][29] routed locally within the lower-level metal layers, leading to both power and electromagnetic side-channel attack immunity.
As many electromagnetic attacks, especially SEMA attacks, rely on asymmetric implementations of cryptographic algorithms, an effective countermeasure is to ensure that a given operation performed at a given step of the algorithm gives no information on the value of that bit. Randomization of the order of bit encryption, process interrupts, and clock cycle randomization are all effective ways to make attacks more difficult.[1]
The classified National Security Agency program TEMPEST focuses on both the spying on systems by observing electromagnetic radiation and the securing of equipment to protect against such attacks.
The Federal Communications Commission outlines the rules regulating the unintended emissions of electronic devices in Part 15 of the Code of Federal Regulations Title 47. The FCC does not certify that devices do not produce excess emissions, but instead relies on a self-verification procedure.[30]
|
https://en.wikipedia.org/wiki/Electromagnetic_attack
|
Acoustic cryptanalysis is a type of side-channel attack that exploits sounds emitted by computers or other devices.
Most modern acoustic cryptanalysis focuses on the sounds produced by computer keyboards and internal computer components, but historically it has also been applied to impact printers and electromechanical deciphering machines.
Victor Marchetti and John D. Marks eventually negotiated the declassification of CIA acoustic intercepts of the sounds of cleartext printing from encryption machines.[1] Technically this method of attack dates to the era when FFT hardware became cheap enough to perform the task; in this case the late 1960s to mid-1970s. However, acoustic attacks using other, more primitive means were made as early as the mid-1950s.
In his book Spycatcher, former MI5 operative Peter Wright discusses the use of an acoustic attack against Egyptian Hagelin cipher machines in 1956. The attack was codenamed "ENGULF".[2]
In 2004, Dmitri Asonov and Rakesh Agrawal of the IBM Almaden Research Center announced that computer keyboards and keypads used on telephones and automated teller machines (ATMs) are vulnerable to attacks based on the sounds produced by different keys. Their attack employed a neural network to recognize the key being pressed. By analyzing recorded sounds, they were able to recover the text of data being entered. These techniques allow an attacker using covert listening devices to obtain passwords, passphrases, personal identification numbers (PINs), and other information entered via keyboards. In 2005, a group of UC Berkeley researchers performed a number of practical experiments demonstrating the validity of this kind of threat.[3]
Also in 2004, Adi Shamir and Eran Tromer demonstrated that it may be possible to conduct timing attacks against a CPU performing cryptographic operations by analyzing variations in acoustic emissions. The analyzed emissions were ultrasonic noise emanating from capacitors and inductors on computer motherboards, not electromagnetic emissions or the human-audible humming of a cooling fan.[4] Shamir and Tromer, along with new collaborator Daniel Genkin and others, then went on to successfully implement the attack on a laptop running a version of GnuPG (an RSA implementation), using either a mobile phone located close to the laptop or a laboratory-grade microphone located up to 4 m away, and published their experimental results in December 2013.[5]
Acoustic emissions occur in coils and capacitors because of small movements when a current surge passes through them. Capacitors in particular change diameter slightly as their many layers experience electrostatic attraction/repulsion or piezoelectric size change.[6] A coil or capacitor which emits acoustic noise will, conversely, also be microphonic, and the high-end audio industry takes steps with coils[7] and capacitors[8] to reduce these microphonics because they can muddy a hi-fi amplifier's sound.
In March 2015, it was made public that some inkjet printers using ultrasonic heads can be read back using high-frequency MEMS microphones to record the unique acoustic signals from each nozzle, combined with timing reconstruction against known printed data ("confidential" in 12-point font). Thermal printers can also be read using similar methods, but with less fidelity, as the signals from the bursting bubbles are weaker. The hack also involved implanting a microphone, a storage chip and a burst transmitter with a long-life Li+ battery into doctored cartridges substituted for genuine ones sent by post to the target, typically a bank, then retrieved from the garbage using a challenge-response RFID chip. A similar work on reconstructing printouts made by dot-matrix printers was publicized in 2011.[9]
A new acoustic cryptanalysis technique discovered by a research team at Israel's Ben-Gurion University Cybersecurity Research Center allows data to be extracted using a computer's speakers and headphones. Forbes published a report stating that researchers found a way to see information being displayed by using a microphone, with 96.5% accuracy.[10]
In 2016, Genkin, Shamir, and Tromer published another paper that described a key extraction attack that relied on the acoustic emissions from laptop devices during the decryption process. They demonstrated the success of their attack with both a simple mobile phone and a more sensitive microphone.[11]
This kind of cryptanalysis can be defeated by generating sounds that are in the same spectrum and of the same form as keypresses. If the sounds of actual keypresses are randomly replayed, it may be possible to defeat such attacks entirely. It is advisable to use at least five different recorded variations (36 × 5 = 180 variations) for each keypress to get around the issue of FFT fingerprinting.[12] Alternatively, white noise of a sufficient volume (which may be simpler to generate for playback) will also mask the acoustic emanations of individual keypresses.
|
https://en.wikipedia.org/wiki/Acoustic_cryptanalysis
|
In cryptography, a timing attack is a side-channel attack in which the attacker attempts to compromise a cryptosystem by analyzing the time taken to execute cryptographic algorithms. Every logical operation in a computer takes time to execute, and the time can differ based on the input; with precise measurements of the time for each operation, an attacker can work backwards to the input. Finding secrets through timing information may be significantly easier than using cryptanalysis of known plaintext–ciphertext pairs. Sometimes timing information is combined with cryptanalysis to increase the rate of information leakage.[1]
Information can leak from a system through measurement of the time it takes to respond to certain queries. How much this information can help an attacker depends on many variables: cryptographic system design, the CPU running the system, the algorithms used, assorted implementation details, timing attack countermeasures, the accuracy of the timing measurements, etc. Timing attacks can be applied to any algorithm that has data-dependent timing variation. Removing timing-dependencies is difficult in some algorithms that use low-level operations that frequently exhibit varied execution time.
Timing attacks are often overlooked in the design phase because they are so dependent on the implementation and can be introduced unintentionally with compiler optimizations. Avoidance of timing attacks involves design of constant-time functions and careful testing of the final executable code.[1]
Many cryptographic algorithms can be implemented (or masked by a proxy) in a way that reduces or eliminates data-dependent timing information, known as a constant-time algorithm. An implementation of such an algorithm is sometimes called a timing-safe implementation.[2] Consider an implementation in which every call to a subroutine always returns in exactly x seconds, where x is the maximum time it ever takes to execute that routine on every possible authorized input. In such an implementation, the timing of the algorithm is less likely to leak information about the data supplied to that invocation.[3] The downside of this approach is that the time used for all executions becomes that of the worst-case performance of the function.
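One common building block for constant-time implementations is a branchless select, which picks between two values without a conditional jump. The sketch below is illustrative (the name `ct_select` is invented, not taken from any particular library): the same instructions execute regardless of the condition, so running time does not depend on the (possibly secret) flag.

```c
#include <stdint.h>

/* Branchless select: returns a when cond is 1 and b when cond is 0.
   No data-dependent branch is taken, so execution time does not
   depend on the secret condition. */
uint32_t ct_select(uint32_t cond, uint32_t a, uint32_t b) {
    uint32_t mask = (uint32_t)0 - cond;  /* all-ones if cond==1, zero if cond==0 */
    return (a & mask) | (b & ~mask);
}
```

A compiler may still transform such code, so real constant-time libraries inspect the generated machine code as the article's note on compiler optimizations suggests.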
The data-dependency of timing may stem from branching and conditional statements, memory and cache access patterns, or processor instructions (such as multiplication and division) that run in non-fixed time.[1]
Timing attacks can also be performed remotely over a network. Delay measurements are affected by random perturbations, which become even more significant when the observation occurs across a network. In most cases, timing attacks require the attacker to have knowledge of the implementation details. However, such attacks can also be leveraged to identify the algorithms in use and facilitate reverse engineering.
The execution time for the square-and-multiply algorithm used in modular exponentiation depends linearly on the number of '1' bits in the key. While the number of '1' bits alone is not nearly enough information to make finding the key easy, repeated executions with the same key and different inputs can be used to perform statistical correlation analysis of timing information to recover the key completely, even by a passive attacker. Observed timing measurements often include noise (from such sources as network latency, disk drive access differences from access to access, and the error correction techniques used to recover from transmission errors). Nevertheless, timing attacks are practical against a number of encryption algorithms, including RSA, ElGamal, and the Digital Signature Algorithm.
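A sketch of left-to-right square-and-multiply makes the data dependence visible: the multiply step executes only for '1' bits of the exponent, so total running time grows with the exponent's Hamming weight. This toy version (not from the article) ignores 64-bit multiplication overflow and is therefore only valid for small moduli.

```c
#include <stdint.h>

/* Computes (base^exp) mod m by left-to-right square-and-multiply.
   The squaring runs on every bit, but the extra multiply runs only
   when the current exponent bit is 1 -- exactly the data-dependent
   timing that the attack described above exploits. */
uint64_t modexp(uint64_t base, uint64_t exp, uint64_t m) {
    uint64_t result = 1 % m;
    for (int i = 63; i >= 0; i--) {
        result = (result * result) % m;      /* always: square */
        if ((exp >> i) & 1)
            result = (result * base) % m;    /* only on '1' bits: multiply */
    }
    return result;
}
```

A constant-time variant would perform the multiply unconditionally and discard the result on '0' bits, trading the worst-case running time on every call for the absence of the leak.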
In 2003, Boneh and Brumley demonstrated a practical network-based timing attack on SSL-enabled web servers, based on a vulnerability arising from the use of RSA with Chinese remainder theorem optimizations. The actual network distance was small in their experiments, but the attack successfully recovered a server private key in a matter of hours. This demonstration led to the widespread deployment and use of blinding techniques in SSL implementations. In this context, blinding is intended to remove correlations between key and encryption time.[4]
Some versions of Unix use a relatively expensive implementation of the crypt library function for hashing an 8-character password into an 11-character string. On older hardware, this computation took a deliberately and measurably long time: as much as two or three seconds in some cases.[citation needed] The login program in early versions of Unix executed the crypt function only when the login name was recognized by the system. This leaked information through timing about the validity of the login name, even when the password was incorrect. An attacker could exploit such leaks by first applying brute force to produce a list of login names known to be valid, then attempting to gain access by combining only these names with a large set of passwords known to be frequently used. Without any information on the validity of login names, the time needed to execute such an approach would increase by orders of magnitude, effectively rendering it useless. Later versions of Unix have fixed this leak by always executing the crypt function, regardless of login name validity.[citation needed]
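The fix can be sketched as follows. This is a toy illustration, not the historical Unix code: `toy_hash` stands in for the expensive crypt routine, and `check_login` is an invented name. The point is that the expensive hash runs on every attempt, whether or not the login name is known, so response time no longer reveals which usernames are valid.

```c
#include <string.h>

/* Stand-in for an expensive password hash (a real system would use
   crypt() or a modern KDF). The busy loop simulates the cost. */
unsigned long toy_hash(const char *pw) {
    unsigned long h = 5381;
    for (volatile long i = 0; i < 100000; i++) ;  /* simulate slow work */
    for (; *pw; pw++) h = h * 33 + (unsigned char)*pw;
    return h;
}

/* Timing-safe login check: the hash is computed unconditionally,
   so valid and invalid usernames take the same time to reject. */
int check_login(const char *user, const char *pw,
                const char *valid_user, unsigned long stored_hash) {
    unsigned long h = toy_hash(pw);       /* always pay the hash cost */
    int user_ok = (strcmp(user, valid_user) == 0);
    return user_ok && (h == stored_hash);
}
```

The insecure version would test the username first and return early for unknown names, skipping the hash and leaking its absence through timing.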
Two otherwise securely isolated processes running on a single system with either cache memory or virtual memory can communicate by deliberately causing page faults and/or cache misses in one process, then monitoring the resulting changes in access times from the other. Likewise, if an application is trusted, but its paging/caching is affected by branching logic, it may be possible for a second application to determine the values of the data compared to the branch condition by monitoring access time changes; in extreme examples, this can allow recovery of cryptographic key bits.[5][6]
The 2017 Meltdown and Spectre attacks, which forced CPU manufacturers (including Intel, AMD, ARM, and IBM) to redesign their CPUs, both rely on timing attacks.[7] As of early 2018, almost every computer system in the world was affected by Spectre.[8][9][10]
Timing attacks are difficult to prevent and can often be used to extend other attacks. For example, in 2018, an old attack on RSA was rediscovered in a timing side-channel variant, two decades after the original bug.[11]
The following C code demonstrates a typical insecure string comparison which stops testing as soon as a character doesn't match. For example, when comparing "ABCDE" with "ABxDE" it will return after 3 loop iterations:
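The code block itself appears to have been lost from this copy; a reconstruction consistent with the description (the function name is illustrative) would look like:

```c
#include <stddef.h>

/* Insecure comparison: bails out at the first mismatch, so the
   running time reveals how many leading characters matched.
   Returns 1 if the two buffers are equal, 0 otherwise. */
int insecure_compare(const char *a, const char *b, size_t length) {
    for (size_t i = 0; i < length; i++) {
        if (a[i] != b[i])
            return 0;  /* early exit: 3 iterations for "ABCDE" vs "ABxDE" */
    }
    return 1;
}
```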
By comparison, the following version runs in constant time by testing all characters and using a bitwise operation to accumulate the result:
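This code block is also missing from this copy; a reconstruction matching the description (again with an illustrative name) is:

```c
#include <stddef.h>

/* Constant-time comparison: always inspects every byte and ORs the
   XOR differences together, so execution time does not depend on
   where (or whether) the inputs differ. Returns 1 if equal. */
int constant_time_compare(const char *a, const char *b, size_t length) {
    unsigned char diff = 0;
    for (size_t i = 0; i < length; i++)
        diff |= (unsigned char)(a[i] ^ b[i]);
    return diff == 0;
}
```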
In the world of C library functions, the first function is analogous to memcmp(), while the latter is analogous to NetBSD's consttime_memequal()[12] or OpenBSD's timingsafe_bcmp() and timingsafe_memcmp(). On other systems, the comparison function from cryptographic libraries like OpenSSL and libsodium can be used.
Timing attacks are easier to mount if the adversary knows the internals of the hardware implementation, and even more so, the cryptographic system in use. Since cryptographic security should never depend on the obscurity of either (see security through obscurity, specifically both Shannon's maxim and Kerckhoffs's principle), resistance to timing attacks should not either. If nothing else, an exemplar can be purchased and reverse engineered. Timing attacks and other side-channel attacks may also be useful in identifying, or possibly reverse-engineering, a cryptographic algorithm used by some device.
|
https://en.wikipedia.org/wiki/Timing_attack
|
Supply chain security (also "supply-chain security") activities aim to enhance the security of the supply chain or value chain, the transport and logistics systems for the world's cargo, and to "facilitate legitimate trade".[1] Their objective is to combine traditional practices of supply-chain management with the security requirements driven by threats such as terrorism, piracy, and theft. A healthy and robust supply chain free of security threats requires safeguarding against disturbances at all levels, such as facilities, information flow, and transportation of goods. A secure supply chain is critical for organizational performance.[2]
Typical supply-chain security activities include:
According to the Office of the Director of National Intelligence in the United States, "adversaries exploit supply chain vulnerabilities to steal America’s intellectual property, corrupt our software, surveil our critical infrastructure, and carry out other malicious activities. They infiltrate trusted suppliers and vendors to target equipment, systems, and information used every day by the government, businesses, and individuals."[3]
Local police departments often lack the resources to properly address supply chain security.[4]
Theft and shrinkage can take place anywhere in the logistics chain: at the shipper, carrier, or consignee. It may involve packages, pallet loads, or full truckloads, and can involve individuals with an opportunity to take cargo or organized crime. Security systems involving surveillance systems, tracking systems, and broader corporate security are needed to reduce the theft of material.[5]
The terrorist attacks of 9/11 were the defining event for modern supply chain security. Before 9/11, supply chain security was primarily the concern of the insurance and risk-management industries; after the attacks, more structured approaches were implemented. Early efforts were dominated by concerns over the use of maritime shipping to deliver weapons of mass destruction. From 2001 to 2006, efforts focused on the physical security of goods and shipments, but from 2012 on, focus shifted to cybersecurity as awareness of cyber threats grew.[6] The European Union also treated the 2010 transatlantic aircraft bomb plot, involving packages sent from Yemen to the United States, as a significant factor in relation to supply chain security.[7]: 3
The European Union's regulation on the Community Customs Code (2008) includes reference to supply chain security in its outline of the mission of each EU customs authority:
Customs authorities shall be primarily responsible for the supervision of the Community's international trade, thereby contributing to ... overall supply chain security.[7]: Sect. 2.1
In February 2021, US President Joe Biden made supply chain security one of his administration's priorities.[8]
There are a number of supply-chain security initiatives around the world, including:
Supply chain cyber security is a subset of supply chain security which focuses on the digital aspects of the traditional supply chain as well as the supply chain for electronic and digital goods.[6]
|
https://en.wikipedia.org/wiki/Supply_chain_security
|
Current notable computer hardware manufacturers:
List of computer case manufacturers:
Top motherboard manufacturers:
List of motherboard manufacturers:
Defunct:
Note: most of these companies only create designs and do not manufacture them themselves.
Top x86 CPU manufacturers:
List of CPU manufacturers (most of the companies sell ARM-based CPUs, assumed if nothing else stated):
List of current hard disk drive manufacturers:
Note: the HDDs internal to these devices are manufactured only by the internal HDD manufacturers listed above.
List of external hard disk drive manufacturers:
Many companies manufacture SSDs, but there are only a few major manufacturers[4] of the NAND flash devices that are the storage element in most SSDs. The five major NAND flash manufacturers are:
List of optical disc drive manufacturers:
List of computer cooling system manufacturers:
List of non-refillable liquid cooling manufacturers:
List of refillable liquid cooling kits manufacturers:
List of water block manufacturers:
List of graphics card cooling manufacturers:
List of companies that are actively manufacturing and selling computer monitors:
List of video card manufacturers:
List of keyboard manufacturers:
List of mouse manufacturers:
List of joystick manufacturers:
List of computer speaker manufacturers:
List of modem manufacturers:
List of network card manufacturers:
There are a number of other companies (AMD, Microchip, Altera, etc.) making specialized chipsets as part of other ICs, and they are not often found in PC hardware (laptop, desktop, or server). There are also a number of now-defunct companies (like 3Com, DEC, SGI) that produced network-related chipsets for use in general computers.
List of power supply unit (PSU) designers:
Note that the actual memory chips are manufactured by a small number of DRAM manufacturers. List of memory module manufacturers:
List of current DRAM manufacturers:[5]
List of former or defunct DRAM manufacturers:
List of fabless DRAM companies:
In addition, other semiconductor manufacturers include SRAM or eDRAM embedded in larger chips.
List of headphone manufacturers:
List of image scanner manufacturers:
List of sound card manufacturers:
List of TV tuner card manufacturers:
List of USB flash drive manufacturers:
List of webcam manufacturers:
|
https://en.wikipedia.org/wiki/List_of_computer_hardware_manufacturers
|
Consumer protection is the practice of safeguarding buyers of goods and services, and the public, against unfair practices in the marketplace. Consumer protection measures are often established by law. Such laws are intended to prevent businesses from engaging in fraud or specified unfair practices to gain an advantage over competitors or to mislead consumers. They may also provide additional protection for the general public which may be impacted by a product (or its production) even when they are not the direct purchaser or consumer of that product. For example, government regulations may require businesses to disclose detailed information about their products—particularly in areas where public health or safety is an issue, such as with food or automobiles.
Consumer protection is linked to the idea of consumer rights and to the formation of consumer organizations, which help consumers make better choices in the marketplace and pursue complaints against businesses. Entities that promote consumer protection include government organizations (such as the Federal Trade Commission in the United States), self-regulating business organizations (such as the Better Business Bureaus in the US, Canada, England, etc.), and non-governmental organizations that advocate for consumer protection laws and help to ensure their enforcement (such as consumer protection agencies and watchdog groups).[citation needed]
A consumer is defined as someone who acquires goods or services for direct use or ownership rather than for resale or use in production and manufacturing. Consumer interests can also be protected by promoting competition in markets, consistent with economic efficiency, but this topic is treated in competition law. Consumer protection can also be asserted via non-governmental organizations and individuals as consumer activism.
Efforts made to protect consumers' rights and interests include:
Consumer protection law, or consumer law, is an area of law that regulates private law relationships between individual consumers and the businesses that sell goods and services.
Consumer protection covers a wide range of topics, including but not necessarily limited to product liability, privacy rights, unfair business practices, fraud, misrepresentation, and other consumer/business interactions. It is a way of preventing frauds and scams arising from service and sales contracts, eligible fraud, bill collector regulation, pricing, utility turnoffs, consolidation, and personal loans that may lead to bankruptcy. There have been some arguments that consumer law is also a better way to engage in large-scale redistribution than tax law because it does not necessitate legislation and can be more efficient, given the complexities of tax law.[1]
In Australia, the corresponding agency is the Australian Competition and Consumer Commission or the individual state consumer affairs agencies. The Australian Securities and Investments Commission has responsibility for consumer protection regulation of financial services and products. However, in practice, it does so through privately run EDR schemes such as the Australian Financial Complaints Authority.
In Brazil, consumer protection is regulated by the Consumer's Defense Code (Código de Defesa do Consumidor),[2] as mandated by the 1988 Constitution of Brazil. Brazilian law mandates: "The offer and presentation of products or services must ensure correct, clear, accurate and conspicuous information in the Portuguese language about their characteristics, qualities, quantity, composition, price, guarantee, validity and origin, among other data, as well as the risks they pose to the health and safety of consumers."[3] In Brazil, the consumer does not have to bring forward evidence that the defendant is guilty; instead, the defense has to bring forward evidence that they are innocent.[2] Brazilian law narrowly defines what a consumer, supplier, product, and service are, so that consumers can be protected under international trade law and from negligence and misconduct by international suppliers.
Several regulations in the European Union are concerned with consumer protection, including the Regulation on general product safety (GPSR) and the Directive (EU) 2024/2853 on liability for defective products.
Germany, as a member state of the European Union, is bound by the consumer protection directives of the European Union; residents may be directly bound by EU regulations.
A minister of the federal cabinet is responsible for consumer rights and protection (Verbraucherschutzminister). In the current cabinet of Friedrich Merz, this is Carsten Schneider.
When issuing public warnings about products and services, the issuing authority has to take into account that this affects the supplier's constitutionally protected economic liberty; see Bundesverwaltungsgericht (Federal Administrative Court) Case 3 C 34.84, 71 BVerwGE 183.[4]
In India, consumer protection is specified in the Consumer Protection Act, 2019. Under this law, separate consumer dispute redressal forums have been set up throughout India in every district, in which a consumer can file a complaint on simple paper with nominal court fees; the complaint is decided by the presiding officer at the district level. Complaints can be filed by consumers of both goods and services. An appeal can be filed with the State Consumer Disputes Redressal Commissions and, after that, with the National Consumer Disputes Redressal Commission (NCDRC).[5] The procedures in these tribunals are relatively less formal and more people-friendly, and they take less time to decide upon a consumer dispute[6] when compared to the years-long time taken by the traditional Indian judiciary. In recent years, many effective judgments have been passed by some state and national consumer forums.
The Indian Contract Act, 1872 lays down the conditions in which promises made by parties to a contract will be legally binding on each other. It also lays down the remedies available to the aggrieved party if the other party fails to honor their promise.
The Sale of Goods Act of 1930 provides some safeguards to buyers of goods if goods purchased do not fulfill the express or implied conditions and warranties.
The Agriculture Produce Act of 1937 provides grade standards for agricultural commodities and livestock products. It specifies the conditions which govern the use of standards and lays down the procedure for grading, marking, and packaging of agricultural produce. The quality mark provided under the act is known as AGMARK (Agriculture Marketing).
The Nigerian government must protect its people from any form of harm to human health through the use and purchase of items to meet daily needs. In light of this, the Federal Competition and Consumer Protection Commission (FCCPC), whose aim is to protect and enhance consumers' interests through information, education, and enforcement of the rights of consumers, was established by an Act of Parliament to promote and protect the interest of consumers over all products and services. In a nutshell, it is empowered to eliminate hazardous and substandard goods from the market and to provide speedy redress to consumer complaints and petitions arising from fraud, unfair practice, and exploitation of consumers.
On 5 February 2019, the President of Nigeria, Muhammadu Buhari, assented to the new Federal Competition and Consumer Protection Commission Bill, 2018. Thus, the bill became a law of the Federal Republic of Nigeria and binding on entities and organizations so specified in the Act.
The long title of the Act reads: "This Act establishes the Federal Competition and Consumer Protection Commission and the Competition and Consumer Protection Tribunal for the promotion of competition in the Nigerian market at all levels by eliminating monopolies, prohibiting abuse of dominant market position and penalizing other restrictive trade and business practices."
The Act further repealed the hitherto Nigerian Consumer Protection Council Act and transferred its core mandate to the new Commission.
Modern Taiwanese law has been heavily influenced by the European civil law systems, particularly German and Swiss law. The Civil Code in Taiwan contains five books: General Principles, Obligations, Rights over Things, Family, and Succession. The second book of the Code, the Book of Obligations, provided the basis from which consumers could bring product liability actions prior to the enactment of the CPL.[7][8]
The Consumer Protection Law (CPL) in Taiwan, as promulgated on 11 January 1994 and effective on 13 January 1994, specifically protects the interests and safety of customers using the products or services provided by business operators. The Consumer Protection Commission of the Executive Yuan serves as an ombudsman supervising, coordinating, reporting any unsafe products/services, and periodically reviewing the legislation.
In a 1997 critical study, the Pacific Rim Law & Policy Association and the American Chamber of Commerce criticized the law, stating that "although many agree that the intent of the CPL is fair, the CPL's various problems, such as ambiguous terminology, favoritism towards consumer protection groups, and the compensation liability defense, must be addressed before the CPL becomes a truly effective piece of legislation that will protect consumers".[9]
The main consumer protection laws in the UK are the Consumer Protection Act 1987 and the Consumer Rights Act 2015.
The United Kingdom has left the European Union, but during the transition period (until the end of 2020) the UK was still bound by directives of the European Union. Specifics of the division of roles between the EU and the UK are detailed here.[10] Domestic (UK) laws originated within the ambit of contract and tort but, with the influence of EU law, it is emerging as an independent area of law. In many circumstances, where domestic law is in question, the matter is judicially treated as tort, contract, restitution or even criminal law.[citation needed]
Consumer protection issues were dealt with by the Office of Fair Trading before 2014. Since then, the Competition and Markets Authority has taken on this role.[11]
In the United States, a variety of laws at both the federal and state levels regulate consumer affairs. Among them are the Federal Food, Drug, and Cosmetic Act, the Fair Debt Collection Practices Act, the Fair Credit Reporting Act, the Truth in Lending Act, the Fair Credit Billing Act, and the Gramm–Leach–Bliley Act. Federal consumer protection laws are mainly enforced by the Federal Trade Commission, the Consumer Financial Protection Bureau, the Food and Drug Administration, and the U.S. Department of Justice.
At the state level, many states have adopted the Uniform Deceptive Trade Practices Act,[12] including, but not limited to, Delaware,[13] Illinois,[14] Maine,[15] and Nebraska.[16] The deceptive trade practices prohibited by the Uniform Act can be roughly subdivided into conduct involving either a) unfair or fraudulent business practices and b) untrue or misleading advertising. The Uniform Act contains a private remedy with attorneys' fees for prevailing parties where the losing party "willfully engaged in the trade practice knowing it to be deceptive". Uniform Act §3(b).
Missouri has a similar statute called the Merchandising Practices Act.[17]This statute allows local prosecutors or the Attorney General to press charges against people who knowingly use deceptive business practices in a consumer transaction and authorizes consumers to hire a private attorney to bring an action seeking their actual damages, punitive damages, and attorney's fees.
Also, the majority of states have a department of consumer affairs devoted to regulating certain industries and protecting consumers who use goods and services from those industries. For example, in California, the California Department of Consumer Affairs regulates about 2.3 million professionals in over 230 different professions, through its forty regulatory entities. In addition, California encourages its consumers to act as private attorneys general through the liberal provisions of its Consumers Legal Remedies Act.
State and federal laws provide for "cooling off" periods giving consumers the right to cancel contracts within a certain time period for several specified types of transactions, potentially including transactions entered into at home, and warranty and repair services contracts.[18][19]
Other states have been the leaders in specific aspects of consumer protection. For example, Florida, Delaware, and Minnesota have legislated requirements that contracts be written at reasonable readability levels as a large proportion of contracts cannot be understood by most consumers who sign them.[20]
In Massachusetts, the Massachusetts Consumer Protection Law, MGL 93A, clearly highlights the rights and violations of consumer protection law in the state. The chapter explains what actions are considered illegal under the law, for which a party can seek monetary damages from the party at fault.[21] Some examples of practices that constitute a Chapter 93A violation would be when:
The laws under MGL 93A prohibit activities that relate to overpricing to a consumer and the use of "Bait and Switch" techniques. A court will award the plaintiff the damages if they can prove the (1) defendant knowingly and intentionally violated the MGL 93A agreement or (2) the defendant would not "grant relief in bad faith" knowing that the actions violated the MGL 93A agreement.[22]Additionally, failure to disclose refund/ return policy, warranties, and critical information about the product/service are all in violation of the legislation, and can result in triple damages and lawyer fees.[22]
|
https://en.wikipedia.org/wiki/Consumer_protection
|
A security switch is a hardware device designed to protect computers, laptops, smartphones and similar devices from unauthorized access or operation, distinct from a virtual security switch which offers software protection. Security switches should be operated by an authorized user only; for this reason, they should be isolated from other devices, in order to prevent unauthorized access, and it should not be possible to bypass them, in order to prevent malicious manipulation.
The primary purpose of a security switch is to provide protection against surveillance, eavesdropping, malware, spyware, and theft of digital devices.
Unlike other protections or techniques, a security switch can provide protection even if security has already been breached, since it cannot be reached by other components and is not accessible by software. It can additionally disconnect or block peripheral devices, and perform "man in the middle" operations.[citation needed]
A security switch can be used for human presence detection, since it can only be initiated by a human operator. It can also be used as a firewall.
A hardware kill switch (HKS) is a physical switch that cuts the signal or power line to a device, or disables the chip running it.
Google started to work on a hardware kill switch for AI in 2016.[2]
In 2019, Apple and Google, along with a handful of smaller players, were designing "kill switches" that cut the power to the microphones or cameras in their devices. Google's first product that implemented this is the Nest Hub Max.[1]
Hardware kill switches are already available and widely tested on the PinePhone, Librem, and Shiftphone, where they cut power to the input peripherals (microphone, camera) and also to the network connectivity modules (Wi-Fi, cellular).
|
https://en.wikipedia.org/wiki/Security_switch
|
In computability theory, a system of data-manipulation rules (such as a model of computation, a computer's instruction set, a programming language, or a cellular automaton) is said to be Turing-complete or computationally universal if it can be used to simulate any Turing machine[1][2] (devised by English mathematician and computer scientist Alan Turing). This means that this system is able to recognize or decode other data-manipulation rule sets. Turing completeness is used as a way to express the power of such a data-manipulation rule set. Virtually all programming languages today are Turing-complete.[a]
A related concept is that of Turing equivalence – two computers P and Q are called equivalent if P can simulate Q and Q can simulate P.[4] The Church–Turing thesis conjectures that any function whose values can be computed by an algorithm can be computed by a Turing machine, and therefore that if any real-world computer can simulate a Turing machine, it is Turing equivalent to a Turing machine. A universal Turing machine can be used to simulate any Turing machine and by extension the purely computational aspects of any possible real-world computer.[5][6]
To show that something is Turing-complete, it is enough to demonstrate that it can be used to simulate some Turing-complete system. No physical system can have infinite memory, but if the limitation of finite memory is ignored, most programming languages are otherwise Turing-complete.[7][8]
In colloquial usage, the terms "Turing-complete" and "Turing-equivalent" are used to mean that any real-world general-purpose computer or computer language can approximately simulate the computational aspects of any other real-world general-purpose computer or computer language. In real life, this leads to the practical concepts of computing virtualization and emulation.[citation needed]
Real computers constructed so far can be functionally analyzed like a single-tape Turing machine (which uses a "tape" for memory); thus the associated mathematics can apply by abstracting their operation far enough. However, real computers have limited physical resources, so they are only linear bounded automaton complete. In contrast, the abstraction of a universal computer is defined as a device with a Turing-complete instruction set, infinite memory, and infinite available time.[citation needed]
In computability theory, several closely related terms are used to describe the computational power of a computational system (such as an abstract machine or programming language):
Turing completeness is significant in that every real-world design for a computing device can be simulated by a universal Turing machine. The Church–Turing thesis states that this is a law of mathematics – that a universal Turing machine can, in principle, perform any calculation that any other programmable computer can. This says nothing about the effort needed to write the program, or the time it may take for the machine to perform the calculation, or any abilities the machine may possess that have nothing to do with computation.
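To make the notion concrete, here is a minimal single-tape machine simulator (a sketch; the machine, rule table, and all names are invented for illustration). The example machine appends one '1' to a unary string, computing the successor function n → n+1. Informally, a Turing-complete system is one that can host such a rule table for an arbitrary machine, given unbounded tape.

```c
#include <string.h>

/* One transition: symbol to write, head movement, next state. */
typedef struct { char write; int move; int next; } Rule;

enum { HALT = -1 };

/* rules[state][symbol]: index 0 = blank ('0'), index 1 = mark ('1').
   This single-state machine scans right over '1's, then writes one
   more '1' on the first blank and halts: unary successor. */
static const Rule rules[1][2] = {
    { { '1', +1, HALT },   /* state 0 reading blank: write '1', halt   */
      { '1', +1, 0    } }, /* state 0 reading '1':  keep it, move right */
};

/* Run the machine on 'tape' (a '0'/'1' string, modified in place),
   starting in state 0 with the head on cell 0. Returns steps taken. */
int run_tm(char *tape, size_t tape_len) {
    int state = 0, steps = 0;
    size_t head = 0;
    while (state != HALT && head < tape_len) {
        const Rule *r = &rules[state][tape[head] == '1'];
        tape[head] = r->write;   /* write */
        head += r->move;         /* move  */
        state = r->next;         /* jump  */
        steps++;
    }
    return steps;
}
```

Simulating an arbitrary machine only requires enlarging the rule table and the state count; the driver loop stays the same, which is the intuition behind a universal machine.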
Charles Babbage's analytical engine (1830s) would have been the first Turing-complete machine if it had been built at the time it was designed. Babbage appreciated that the machine was capable of great feats of calculation, including primitive logical reasoning, but he did not appreciate that no other machine could do better.[citation needed] From the 1830s until the 1940s, mechanical calculating machines such as adders and multipliers were built and improved, but they could not perform a conditional branch and therefore were not Turing-complete.
In the late 19th century, Leopold Kronecker formulated notions of computability, defining primitive recursive functions. These functions can be calculated by rote computation, but they are not enough to make a universal computer, because the instructions that compute them do not allow for an infinite loop. In the early 20th century, David Hilbert led a program to axiomatize all of mathematics with precise axioms and precise logical rules of deduction that could be performed by a machine. Soon it became clear that a small set of deduction rules are enough to produce the consequences of any set of axioms. These rules were proved by Kurt Gödel in 1930 to be enough to produce every theorem.
The actual notion of computation was isolated soon after, starting with Gödel's incompleteness theorem. This theorem showed that axiom systems were limited when reasoning about the computation that deduces their theorems. Church and Turing independently demonstrated that Hilbert's Entscheidungsproblem (decision problem) was unsolvable,[9] thus identifying the computational core of the incompleteness theorem. This work, along with Gödel's work on general recursive functions, established that there are sets of simple instructions, which, when put together, are able to produce any computation. The work of Gödel showed that the notion of computation is essentially unique.
In 1941Konrad Zusecompleted theZ3computer. Zuse was not familiar with Turing's work on computability at the time. In particular, the Z3 lacked dedicated facilities for a conditional jump, thereby precluding it from being Turing complete. However, in 1998, it was shown by Rojas that the Z3 is capable of simulating conditional jumps, and therefore Turing complete in theory. To do this, its tape program would have to be long enough to execute every possible path through both sides of every branch.[10]
The first computer capable of conditional branching in practice, and therefore Turing complete in practice, was theENIACin 1946. Zuse'sZ4computer was operational in 1945, but it did not support conditional branching until 1950.[11]
Computability theory uses models of computation to analyze problems and determine whether they are computable and under what circumstances. The first result of computability theory is that there exist problems for which it is impossible to predict what a (Turing-complete) system will do over an arbitrarily long time.
The classic example is the halting problem: create an algorithm that takes as input a program in some Turing-complete language and some data to be fed to that program, and determines whether the program, operating on the input, will eventually stop or will continue forever. It is trivial to create an algorithm that can do this for some inputs, but impossible to do this in general. For any characteristic of the program's eventual output, it is impossible to determine whether this characteristic will hold.
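The classic diagonal argument behind this impossibility can be written down directly. The sketch below assumes a hypothetical decider halts(program, data); the names are illustrative, and the decider cannot actually be implemented, which is exactly the point:

```python
def halts(program, data):
    # Hypothetical halting decider. If it existed, the construction
    # below would contradict it, so it cannot exist.
    raise NotImplementedError("no such decider can exist")

def paradox(program):
    # Ask whether `program`, run on its own encoding, halts -- then do
    # the opposite of what the decider predicts.
    if halts(program, program):
        while True:   # halts() said "halts": loop forever instead
            pass
    # halts() said "loops forever": halt immediately

# paradox(paradox) has no consistent answer: whatever halts() predicts
# for it is wrong, so halts() cannot exist.
```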
This impossibility poses problems when analyzing real-world computer programs. For example, one cannot write a tool that entirely protects programmers from writing infinite loops or protects users from supplying input that would cause infinite loops.
One can instead limit a program to executing only for a fixed period of time (timeout) or limit the power of flow-control instructions (for example, providing only loops that iterate over the items of an existing array). However, another theorem shows that there are problems solvable by Turing-complete languages that cannot be solved by any language with only finite looping abilities (i.e., languages that guarantee that every program will eventually halt). So any such language is not Turing-complete. For example, a language in which programs are guaranteed to complete and halt cannot compute the computable function produced by Cantor's diagonal argument on all computable functions in that language.
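The timeout idea can be sketched as an interpreter that threads a step budget ("fuel") through the computation; the step-function protocol below is an illustrative assumption, not a standard API. Termination is guaranteed, at the price that some computations which would eventually finish get cut off:

```python
def run_with_fuel(step_fn, state, fuel):
    # step_fn(state) -> (done, new_state); run it for at most `fuel` steps.
    # Every call terminates, so this interpreter is not Turing-complete.
    for _ in range(fuel):
        done, state = step_fn(state)
        if done:
            return ("halted", state)
    return ("timeout", state)

def count_to_ten(n):
    # A tiny "program": increment until 10, then signal completion.
    n += 1
    return (n >= 10, n)
```

With a budget of 5 steps the program is cut off ("timeout"), while a budget of 20 lets the same program halt normally.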
A computer with access to an infinite tape of data may be more powerful than a Turing machine: for instance, the tape might contain the solution to the halting problem or some other Turing-undecidable problem. Such an infinite tape of data is called a Turing oracle. Even a Turing oracle with random data is not computable (with probability 1), since there are only countably many computations but uncountably many oracles. So a computer with a random Turing oracle can compute things that a Turing machine cannot.
All known laws of physics have consequences that are computable by a series of approximations on a digital computer. A hypothesis called digital physics states that this is no accident, because the universe itself is computable on a universal Turing machine. This would imply that no computer more powerful than a universal Turing machine can be built physically.[12]
The computational systems (algebras, calculi) that are discussed as Turing-complete systems are those intended for studying theoretical computer science. They are intended to be as simple as possible, so that it would be easier to understand the limits of computation. Here are a few:
Most programming languages (their abstract models, maybe with some particular constructs that assume finite memory omitted), conventional and unconventional, are Turing-complete. This includes:
Some rewrite systems are Turing-complete.
Turing completeness is an abstract statement of ability, rather than a prescription of specific language features used to implement that ability. The features used to achieve Turing completeness can be quite different; Fortran systems would use loop constructs or possibly even goto statements to achieve repetition; Haskell and Prolog, lacking looping almost entirely, would use recursion. Most programming languages are describing computations on von Neumann architectures, which have memory (RAM and registers) and a control unit. These two elements make this architecture Turing-complete. Even pure functional languages are Turing-complete.[15][16]
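How recursion alone supplies repetition can be seen with a fixed-point combinator. The sketch below emulates the Z combinator (the strict-evaluation variant of the Y combinator) in Python; no loop construct appears anywhere:

```python
# Z combinator: builds recursion purely out of function application.
Z = lambda f: (lambda x: f(lambda v: x(x)(v)))(lambda x: f(lambda v: x(x)(v)))

# A non-recursive "step" function; Z ties the knot and makes it recursive.
fact = Z(lambda rec: lambda n: 1 if n == 0 else n * rec(n - 1))
```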
Turing completeness in declarative SQL is implemented through recursive common table expressions. Unsurprisingly, procedural extensions to SQL (PL/SQL, etc.) are also Turing-complete. This illustrates one reason why relatively powerful non-Turing-complete languages are rare: the more powerful the language is initially, the more complex are the tasks to which it is applied, and the sooner its lack of completeness becomes perceived as a drawback, encouraging its extension until it is Turing-complete.
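A recursive common table expression can be demonstrated with any SQL engine that supports them; the sketch below uses Python's built-in sqlite3 to compute a factorial declaratively, with the CTE carrying all of the iteration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# The recursive CTE plays the role of a loop: each row is computed from
# the previous one until the WHERE condition stops the recursion.
row = conn.execute("""
    WITH RECURSIVE fact(n, f) AS (
        SELECT 0, 1
        UNION ALL
        SELECT n + 1, f * (n + 1) FROM fact WHERE n < 10
    )
    SELECT f FROM fact WHERE n = 10
""").fetchone()
# row[0] is 10! = 3628800
```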
The untyped lambda calculus is Turing-complete, but many typed lambda calculi, including System F, are not. The value of typed systems is based in their ability to represent most typical computer programs while detecting more errors.
Rule 110 and Conway's Game of Life, both cellular automata, are Turing-complete.
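Rule 110's update table is small enough to write out directly. A one-step simulator (using wrap-around boundaries, a common simplification; universality proofs actually rely on particular infinite background patterns) might look like:

```python
def rule110_step(cells):
    # The new value of each cell is looked up from its (left, self, right)
    # neighbourhood, read as a 3-bit number; the bits of 110 (binary
    # 01101110) form the lookup table that gives the rule its name.
    table = [(110 >> i) & 1 for i in range(8)]
    n = len(cells)
    return [table[(cells[(i - 1) % n] << 2) | (cells[i] << 1) | cells[(i + 1) % n]]
            for i in range(n)]
```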
Some software and video games are Turing-complete by accident, i.e. not by design.
Software:
Games:
Social media:
Computational languages:
Biology:
Many computational languages exist that are not Turing-complete. One such example is the set of regular languages, which are generated by regular expressions and which are recognized by finite automata. A more powerful but still not Turing-complete extension of finite automata is the category of pushdown automata and context-free grammars, which are commonly used to generate parse trees in an initial stage of program compiling. Further examples include some of the early versions of the pixel shader languages embedded in Direct3D and OpenGL extensions.[citation needed]
In total functional programming languages, such as Charity and Epigram, all functions are total and must terminate. Charity uses a type system and control constructs based on category theory, whereas Epigram uses dependent types. The LOOP language is designed so that it computes only the functions that are primitive recursive. All of these compute proper subsets of the total computable functions, since the full set of total computable functions is not computably enumerable. Also, since all functions in these languages are total, algorithms for recursively enumerable sets cannot be written in these languages, in contrast with Turing machines.
Although (untyped) lambda calculus is Turing-complete, simply typed lambda calculus is not.
|
https://en.wikipedia.org/wiki/Turing_completeness
|
In computer science, a universal Turing machine (UTM) is a Turing machine capable of computing any computable sequence,[1] as described by Alan Turing in his seminal paper "On Computable Numbers, with an Application to the Entscheidungsproblem". Common sense might say that a universal machine is impossible, but Turing proves that it is possible.[a] He suggested that we may compare a human in the process of computing a real number to a machine which is only capable of a finite number of conditions q1, q2, …, qR, which will be called "m-configurations".[2] He then described the operation of such a machine, as described below, and argued:
It is my contention that these operations include all those which are used in the computation of a number.[3]
Turing introduced the idea of such a machine in 1936–1937.
Martin Davis makes a persuasive argument that Turing's conception of what is now known as "the stored-program computer", of placing the "action table"—the instructions for the machine—in the same "memory" as the input data, strongly influenced John von Neumann's conception of the first American discrete-symbol (as opposed to analog) computer—the EDVAC. Davis quotes Time magazine to this effect, that "everyone who taps at a keyboard ... is working on an incarnation of a Turing machine", and that "John von Neumann [built] on the work of Alan Turing".[4]
Davis makes a case that Turing's Automatic Computing Engine (ACE) computer "anticipated" the notions of microprogramming (microcode) and RISC processors.[5] Donald Knuth cites Turing's work on the ACE computer as designing "hardware to facilitate subroutine linkage";[6] Davis also references this work as Turing's use of a hardware "stack".[7]
As the Turing machine was encouraging the construction of computers, the UTM was encouraging the development of the fledgling computer sciences. An early, if not the first, assembler was proposed "by a young hot-shot programmer" for the EDVAC.[8] Von Neumann's "first serious program ... [was] to simply sort data efficiently".[9] Knuth observes that the subroutine return embedded in the program itself rather than in special registers is attributable to von Neumann and Goldstine.[b] Knuth furthermore states that
The first interpretive routine may be said to be the "Universal Turing Machine" ... Interpretive routines in the conventional sense were mentioned by John Mauchly in his lectures at the Moore School in 1946 ... Turing took part in this development also; interpretive systems for the Pilot ACE computer were written under his direction.[10]
Davis briefly mentions operating systems and compilers as outcomes of the notion of program-as-data.[11]
With this encoding of action tables as strings, it becomes possible, in principle, for Turing machines to answer questions about the behaviour of other Turing machines. Most of these questions, however, are undecidable, meaning that the function in question cannot be calculated mechanically. For instance, the problem of determining whether an arbitrary Turing machine will halt on a particular input, or on all inputs, known as the halting problem, was shown to be, in general, undecidable in Turing's original paper. Rice's theorem shows that any non-trivial question about the output of a Turing machine is undecidable.
A universal Turing machine can calculate any recursive function, decide any recursive language, and accept any recursively enumerable language. According to the Church–Turing thesis, the problems solvable by a universal Turing machine are exactly those problems solvable by an algorithm or an effective method of computation, for any reasonable definition of those terms. For these reasons, a universal Turing machine serves as a standard against which to compare computational systems, and a system that can simulate a universal Turing machine is called Turing complete.
An abstract version of the universal Turing machine is the universal function, a computable function which can be used to calculate any other computable function. The UTM theorem proves the existence of such a function.
Without loss of generality, the input of a Turing machine can be assumed to be in the alphabet {0, 1}; any other finite alphabet can be encoded over {0, 1}. The behavior of a Turing machine M is determined by its transition function. This function can be easily encoded as a string over the alphabet {0, 1} as well. The size of the alphabet of M, the number of tapes it has, and the size of the state space can be deduced from the transition function's table. The distinguished states and symbols can be identified by their position, e.g. the first two states can by convention be the start and stop states. Consequently, every Turing machine can be encoded as a string over the alphabet {0, 1}. Additionally, we stipulate that every invalid encoding maps to a trivial Turing machine that immediately halts, and that every Turing machine has infinitely many encodings, obtained by padding the encoding with an arbitrary number of (say) 1's at the end, just as comments work in a programming language. It should be no surprise that we can achieve this encoding given the existence of a Gödel number and the computational equivalence between Turing machines and μ-recursive functions. Similarly, our construction associates to every binary string α a Turing machine Mα.
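A toy version of such an encoding (the concrete scheme here is invented for illustration, not taken from any particular textbook) writes each transition in unary with "0" separators, so that any transition table becomes a single string over {0, 1}:

```python
def encode_tm(transitions):
    # transitions: {(state, read_symbol): (next_state, write_symbol, move)}
    # with every component a small non-negative integer.
    def unary(k):
        return "1" * (k + 1)   # k encoded as k+1 ones, so 0 is still visible
    parts = []
    for (q, a), (q2, w, d) in sorted(transitions.items()):
        parts.append("0".join([unary(q), unary(a), unary(q2), unary(w), unary(d)]))
    return "00".join(parts)    # "00" separates whole transitions
```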
Starting from the above encoding, in 1966 F. C. Hennie and R. E. Stearns showed that given a Turing machine Mα that halts on input x within N steps, there exists a multi-tape universal Turing machine that halts on inputs α, x (given on different tapes) within CN log N steps, where C is a machine-specific constant that does not depend on the length of the input x, but does depend on M's alphabet size, number of tapes, and number of states. Effectively this is an O(N log N) simulation, in Big O notation.[12] The corresponding result for space complexity rather than time complexity is that we can simulate in a way that uses at most CN cells at any stage of the computation, an O(N) simulation.[13]
When Alan Turing came up with the idea of a universal machine he had in mind the simplest computing model powerful enough to calculate all possible functions that can be calculated. Claude Shannon first explicitly posed the question of finding the smallest possible universal Turing machine in 1956. He showed that two symbols were sufficient so long as enough states were used (or vice versa), and that it was always possible to exchange states for symbols. He also showed that no universal Turing machine of one state could exist.
Marvin Minsky discovered a 7-state 4-symbol universal Turing machine in 1962 using 2-tag systems. Other small universal Turing machines have since been found by Yurii Rogozhin and others by extending this approach of tag system simulation. If we denote by (m, n) the class of UTMs with m states and n symbols, the following tuples have been found: (15, 2), (9, 3), (6, 4), (5, 5), (4, 6), (3, 9), and (2, 18).[14][15][16] Rogozhin's (4, 6) machine uses only 22 instructions, and no standard UTM of lesser descriptional complexity is known.
However, generalizing the standard Turing machine model admits even smaller UTMs. One such generalization is to allow an infinitely repeated word on one or both sides of the Turing machine input, thus extending the definition of universality, known as "semi-weak" or "weak" universality, respectively. Small weakly universal Turing machines that simulate the Rule 110 cellular automaton have been given for the (6, 2), (3, 3), and (2, 4) state-symbol pairs.[17] The proof of universality for Wolfram's 2-state 3-symbol Turing machine further extends the notion of weak universality by allowing certain non-periodic initial configurations. Other variants on the standard Turing machine model that yield small UTMs include machines with multiple tapes or tapes of multiple dimension, and machines coupled with a finite automaton.
If multiple heads are allowed on a Turing machine then no internal states are required, as "states" can be encoded in the tape. For example, consider a tape with 6 colours: 0, 1, 2, 0A, 1A, 2A. Consider a tape such as 0, 0, 1, 2, 2A, 0, 2, 1 where a 3-headed Turing machine is situated over the triple (2, 2A, 0). The rules then convert any triple to another triple and move the 3 heads left or right. For example, the rules might convert (2, 2A, 0) to (2, 1, 0) and move the head left. Thus in this example the machine acts like a 3-colour Turing machine with internal states A and B (represented by no letter). The case for a 2-headed Turing machine is very similar. Thus a 2-headed Turing machine can be universal with 6 colours. It is not known what the smallest number of colours needed for a multi-headed Turing machine is, or whether a 2-colour universal Turing machine is possible with multiple heads. It also means that rewrite rules are Turing complete, since the triple rules are equivalent to rewrite rules. Extending the tape to two dimensions, with a head sampling a letter and its 8 neighbours, only 2 colours are needed, as, for example, a colour can be encoded in a vertical triple pattern such as 110.
Also, if the distance between the two heads is variable (the tape has "slack" between the heads), then it can simulate any Post tag system, some of which are universal.[18]
For those who would undertake the challenge of designing a UTM exactly as Turing specified, see the article by Davies in Copeland (2004). Davies corrects the errors in the original and shows what a sample run would look like. He successfully ran a (somewhat simplified) simulation.
The following example is taken from Turing (1937). For more about this example, see Turing machine examples.
Turing used seven symbols { A, C, D, R, L, N, ; } to encode each 5-tuple; as described in the article Turing machine, his 5-tuples are only of types N1, N2, and N3. The number of each "m‑configuration" (instruction, state) is represented by "D" followed by a unary string of A's, e.g. "q3" = DAAA. In a similar manner, he encodes the symbols: blank as "D", the symbol "0" as "DC", the symbol "1" as "DCC", etc. The symbols "R", "L", and "N" remain as is.
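Turing's scheme for states and symbols is mechanical enough to express as two small helper functions (the function names are ours, invented for the sketch):

```python
def encode_state(i):
    # m-configuration q_i becomes "D" followed by i copies of "A":
    # q3 -> "DAAA".
    return "D" + "A" * i

def encode_symbol(j):
    # Symbol S_j becomes "D" followed by j copies of "C":
    # blank -> "D", "0" -> "DC", "1" -> "DCC", ...
    return "D" + "C" * j
```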
After encoding each 5-tuple is then "assembled" into a string in order as shown in the following table:
Finally, the codes for all four 5-tuples are strung together into a code started by ";" and separated by ";" i.e.:
This code he placed on alternate squares—the "F-squares" – leaving the "E-squares" (those liable to erasure) empty. The final assembly of the code on the tape for the U-machine consists of placing two special symbols ("e") one after the other, then the code separated out on alternate squares, and lastly the double-colon symbol "::" (blanks shown here with "." for clarity):
The U-machine's action table (state-transition table) is responsible for decoding the symbols. Turing's action table keeps track of its place with markers "u", "v", "x", "y", "z" by placing them in "E-squares" to the right of "the marked symbol" – for example, to mark the current instruction, z is placed to the right of ";", while x keeps the place with respect to the current "m‑configuration" DAA. The U-machine's action table will shuttle these symbols around (erasing them and placing them in different locations) as the computation progresses:
Turing's action-table for his U-machine is very involved.
Roger Penrose provides examples of ways to encode instructions for the Universal machine using only binary symbols { 0, 1 }, or { blank, mark | }. Penrose goes further and writes out his entire U-machine code. He asserts that it truly is a U-machine code, an enormous number that spans almost 2 full pages of 1's and 0's.[19]
Asperti and Ricciotti described a multi-tape UTM defined by composing elementary machines with very simple semantics, rather than explicitly giving its full action table. This approach was sufficiently modular to allow them to formally prove the correctness of the machine in the Matita proof assistant.[20]
|
https://en.wikipedia.org/wiki/Universal_Turing_machine
|
Automata theory is the study of abstract machines and automata, as well as the computational problems that can be solved using them. It is a theory in theoretical computer science with close connections to cognitive science and mathematical logic. The word automata comes from the Greek word αὐτόματος, which means "self-acting, self-willed, self-moving". An automaton (plural: automata) is an abstract self-propelled computing device which follows a predetermined sequence of operations automatically. An automaton with a finite number of states is called a finite automaton (FA) or finite-state machine (FSM). The figure on the right illustrates a finite-state machine, which is a well-known type of automaton. This automaton consists of states (represented in the figure by circles) and transitions (represented by arrows). As the automaton sees a symbol of input, it makes a transition (or jump) to another state, according to its transition function, which takes the previous state and current input symbol as its arguments.
Automata theory is closely related to formal language theory. In this context, automata are used as finite representations of formal languages that may be infinite. Automata are often classified by the class of formal languages they can recognize, as in the Chomsky hierarchy, which describes a nesting relationship between major classes of automata. Automata play a major role in the theory of computation, compiler construction, artificial intelligence, parsing and formal verification.
The theory of abstract automata was developed in the mid-20th century in connection with finite automata.[1] Automata theory was initially considered a branch of mathematical systems theory, studying the behavior of discrete-parameter systems. Early work in automata theory differed from previous work on systems by using abstract algebra to describe information systems rather than differential calculus to describe material systems.[2] The theory of the finite-state transducer was developed under different names by different research communities.[3] The earlier concept of the Turing machine was also included in the discipline, along with new forms of infinite-state automata, such as pushdown automata.
1956 saw the publication of Automata Studies, which collected work by scientists including Claude Shannon, W. Ross Ashby, John von Neumann, Marvin Minsky, Edward F. Moore, and Stephen Cole Kleene.[4] With the publication of this volume, "automata theory emerged as a relatively autonomous discipline".[5] The book included Kleene's description of the set of regular events, or regular languages, and a relatively stable measure of complexity in Turing machine programs by Shannon.[6] In the same year, Noam Chomsky described the Chomsky hierarchy, a correspondence between automata and formal grammars,[7] and Ross Ashby published An Introduction to Cybernetics, an accessible textbook explaining automata and information using basic set theory.
The study of linear bounded automata led to the Myhill–Nerode theorem,[8] which gives a necessary and sufficient condition for a formal language to be regular, and an exact count of the number of states in a minimal machine for the language. The pumping lemma for regular languages, also useful in regularity proofs, was proven in this period by Michael O. Rabin and Dana Scott, along with the computational equivalence of deterministic and nondeterministic finite automata.[9]
In the 1960s, a body of algebraic results known as "structure theory" or "algebraic decomposition theory" emerged, which dealt with the realization of sequential machines from smaller machines by interconnection.[10] While any finite automaton can be simulated using a universal gate set, this requires that the simulating circuit contain loops of arbitrary complexity. Structure theory deals with the "loop-free" realizability of machines.[5] The theory of computational complexity also took shape in the 1960s.[11][12] By the end of the decade, automata theory came to be seen as "the pure mathematics of computer science".[5]
What follows is a general definition of an automaton, which restricts a broader definition of a system to one viewed as acting in discrete time-steps, with its state behavior and outputs defined at each step by unchanging functions of only its state and input.[5]
An automaton runs when it is given some sequence of inputs in discrete (individual) time steps (or just steps). An automaton processes one input picked from a set of symbols or letters, which is called an input alphabet. The symbols received by the automaton as input at any step are a sequence of symbols called words. An automaton has a set of states. At each moment during a run of the automaton, the automaton is in one of its states. When the automaton receives new input, it moves to another state (or transitions) based on a transition function that takes the previous state and current input symbol as parameters. At the same time, another function called the output function produces symbols from the output alphabet, also according to the previous state and current input symbol. The automaton reads the symbols of the input word and transitions between states until the word is read completely, if it is finite in length, at which point the automaton halts. A state at which the automaton halts is called the final state.
To investigate the possible state/input/output sequences in an automaton using formal language theory, a machine can be assigned a starting state and a set of accepting states. Then, depending on whether a run starting from the starting state ends in an accepting state, the automaton can be said to accept or reject an input sequence. The set of all the words accepted by an automaton is called the language recognized by the automaton. A familiar example of a machine recognizing a language is an electronic lock, which accepts or rejects attempts to enter the correct code.
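The electronic-lock example corresponds to a small DFA: state i means "the first i symbols of the code have matched", and any wrong symbol (or extra input) sends the machine to a dead, rejecting state. A sketch:

```python
def lock_accepts(code, attempt):
    DEAD = -1
    state = 0                           # start state: nothing matched yet
    for symbol in attempt:
        if state == DEAD:
            break                       # dead state absorbs all further input
        if state < len(code) and symbol == code[state]:
            state += 1                  # advance: one more symbol matched
        else:
            state = DEAD                # mismatch or extra input: reject
    return state == len(code)           # accepting state: whole code matched
```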
Automata are defined to study useful machines under mathematical formalism. So the definition of an automaton is open to variations according to the "real world machine" that we want to model using the automaton. People have studied many variations of automata. The following are some popular variations in the definition of different components of automata.
Different combinations of the above variations produce many classes of automata.
Automata theory is a subject matter that studies properties of various types of automata. For example, the following questions are studied about a given type of automata.
Automata theory also studies the existence or nonexistence of anyeffective algorithmsto solve problems similar to the following list:
The following is an incomplete list of types of automata.
Normally automata theory describes the states of abstract machines, but there are discrete automata, analog automata or continuous automata, or hybrid discrete-continuous automata, which use digital data, analog data or continuous time, or digital and analog data, respectively.
The following is an incomplete hierarchy in terms of powers of different types of virtual machines. The hierarchy reflects the nested categories of languages the machines are able to accept.[14]
Deterministic finite automaton (DFA)
= (same power)
Nondeterministic finite automaton (NFA)
⊂ (below is stronger)
Deterministic push-down automaton with 1 push-down store (DPDA-I)
⊂
Nondeterministic push-down automaton with 1 push-down store (NPDA-I)
⊂
Linear bounded automaton (LBA)
⊂
Deterministic push-down automaton with 2 push-down stores (DPDA-II)
= (same power)
Nondeterministic push-down automaton with 2 push-down stores (NPDA-II)
= (same power)
Deterministic Turing machine (DTM)
= (same power)
Nondeterministic Turing machine (NTM)
= (same power)
Probabilistic Turing machine (PTM)
= (same power)
Multitape Turing machine (MTM)
= (same power)
Multidimensional Turing machine
Each model in automata theory plays important roles in several applied areas. Finite automata are used in text processing, compilers, and hardware design. Context-free grammars (CFGs) are used in programming languages and artificial intelligence. Originally, CFGs were used in the study of human languages. Cellular automata are used in the field of artificial life, the most famous example being John Conway's Game of Life. Some other examples which could be explained using automata theory in biology include mollusk and pine cone growth and pigmentation patterns. Going further, a theory suggesting that the whole universe is computed by some sort of a discrete automaton is advocated by some scientists. The idea originated in the work of Konrad Zuse, and was popularized in America by Edward Fredkin. Automata also appear in the theory of finite fields: the set of irreducible polynomials that can be written as a composition of degree-two polynomials is in fact a regular language.[15] Another problem for which automata can be used is the induction of regular languages.
Automata simulators are pedagogical tools used to teach, learn and research automata theory. An automata simulator takes as input the description of an automaton and then simulates its working for an arbitrary input string. The description of the automaton can be entered in several ways. An automaton can be defined in a symbolic language, or its specification may be entered in a predesigned form, or its transition diagram may be drawn by clicking and dragging the mouse. Well-known automata simulators include Turing's World, JFLAP, VAS, TAGS and SimStudio.[16]
One can define several distinct categories of automata[17] following the automata classification into different types described in the previous section. The mathematical category of deterministic automata, sequential machines or sequential automata, and Turing machines, with automata homomorphisms defining the arrows between automata, is a Cartesian closed category;[18] it has both categorical limits and colimits. An automata homomorphism maps a quintuple of an automaton Ai onto the quintuple of another automaton Aj. Automata homomorphisms can also be considered as automata transformations or as semigroup homomorphisms, when the state space, S, of the automaton is defined as a semigroup Sg. Monoids are also considered as a suitable setting for automata in monoidal categories.[19][20][21]
One could also define a variable automaton, in the sense of Norbert Wiener in his book The Human Use of Human Beings, via the endomorphisms Ai → Ai. Then one can show that such variable automata homomorphisms form a mathematical group. In the case of non-deterministic, or other complex kinds of automata, the latter set of endomorphisms may become, however, a variable automaton groupoid. Therefore, in the most general case, categories of variable automata of any kind are categories of groupoids or groupoid categories. Moreover, the category of reversible automata is then a 2-category, and also a subcategory of the 2-category of groupoids, or the groupoid category.
|
https://en.wikipedia.org/wiki/Automata_theory
|
In cryptography, key wrap constructions are a class of symmetric encryption algorithms designed to encapsulate (encrypt) cryptographic key material.[1] The Key Wrap algorithms are intended for applications such as protecting keys while in untrusted storage or transmitting keys over untrusted communications networks. The constructions are typically built from standard primitives such as block ciphers and cryptographic hash functions.
Key Wrap may be considered as a form of key encapsulation algorithm, although it should not be confused with the more commonly known asymmetric (public-key) key encapsulation algorithms (e.g., PSEC-KEM). Key Wrap algorithms can be used in a similar application: to securely transport a session key by encrypting it under a long-term encryption key.
In the late 1990s, the National Institute of Standards and Technology (NIST) posed the "Key Wrap" problem: to develop secure and efficient cipher-based key encryption algorithms. The resulting algorithms would be formally evaluated by NIST, and eventually approved for use in NIST-certified cryptographic modules. NIST did not precisely define the security goals of the resulting algorithm, and left further refinement to the algorithm developers. Based on the resulting algorithms, the design requirements appear to be (1) confidentiality, (2) integrity protection (authentication), (3) efficiency, (4) use of standard (approved) underlying primitives such as the Advanced Encryption Standard (AES) and the Secure Hash Algorithm (SHA-1), and (5) consideration of additional circumstances (e.g., resilience to operator error, low-quality random number generators). Goals (3) and (5) are particularly important, given that many widely deployed authenticated encryption algorithms (e.g., AES-CCM) are already sufficient to accomplish the remaining goals.
Several constructions have been proposed. These include:
Each of the proposed algorithms can be considered as a form of authenticated encryption algorithm providing confidentiality for highly entropic messages such as cryptographic keys. The AES Key Wrap Specification, AESKW, TDKW, and AKW1 are intended to maintain confidentiality under adaptive chosen ciphertext attacks, while the AKW2 algorithm is designed to be secure only under known-plaintext (or weaker) attacks. (The stated goal of AKW2 is for use in legacy systems and computationally limited devices where use of the other algorithms would be impractical.) AESKW, TDKW and AKW2 also provide the ability to authenticate a cleartext "header", an associated block of data that is not encrypted.
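For the AES Key Wrap of RFC 3394 in particular, a maintained implementation is available in the third-party Python cryptography package (an assumption: that package must be installed). The sketch below round-trips the RFC's 128-bit-key test vector:

```python
from cryptography.hazmat.primitives.keywrap import aes_key_wrap, aes_key_unwrap

kek = bytes.fromhex("000102030405060708090A0B0C0D0E0F")   # key-encryption key
key = bytes.fromhex("00112233445566778899AABBCCDDEEFF")   # key to protect

# Wrapping adds an 8-byte integrity check block: 16-byte key -> 24 bytes.
wrapped = aes_key_wrap(kek, key)

# Unwrapping verifies the check block (raises InvalidUnwrap on tampering).
unwrapped = aes_key_unwrap(kek, wrapped)
```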
Rogaway and Shrimpton evaluated the design of the ANS X9.102 algorithms with respect to the stated security goals. Among their general findings, they noted the lack of clearly stated design goals for the algorithms, and the absence of security proofs for all constructions.
In their paper, Rogaway and Shrimpton proposed a provable key-wrapping algorithm (SIV, the Synthetic Initialization Vector mode) that authenticates and encrypts an arbitrary string and authenticates, but does not encrypt, associated data which can be bound into the wrapped key. This has been standardized as a new AES mode in RFC 5297.
|
https://en.wikipedia.org/wiki/Key_Wrap
|
In cryptography, Optimal Asymmetric Encryption Padding (OAEP) is a padding scheme often used together with RSA encryption. OAEP was introduced by Bellare and Rogaway,[1] and subsequently standardized in PKCS#1 v2 and RFC 2437.
The OAEP algorithm is a form of Feistel network which uses a pair of random oracles G and H to process the plaintext prior to asymmetric encryption. When combined with any secure trapdoor one-way permutation f, this processing is proved in the random oracle model to result in a combined scheme which is semantically secure under chosen plaintext attack (IND-CPA). When implemented with certain trapdoor permutations (e.g., RSA), OAEP is also proven to be secure against chosen ciphertext attack. OAEP can be used to build an all-or-nothing transform.
OAEP satisfies the following two goals:
The original version of OAEP (Bellare/Rogaway, 1994) showed a form of "plaintext awareness" (which they claimed implies security against chosen ciphertext attack) in the random oracle model when OAEP is used with any trapdoor permutation. Subsequent results contradicted this claim, showing that OAEP was only IND-CCA1 secure. However, the original scheme was proved in the random oracle model to be IND-CCA2 secure when OAEP is used with the RSA permutation using standard encryption exponents, as in the case of RSA-OAEP.[2] An improved scheme (called OAEP+) that works with any trapdoor one-way permutation was offered by Victor Shoup to solve this problem.[3] More recent work has shown that in the standard model (that is, when hash functions are not modeled as random oracles) it is impossible to prove the IND-CCA2 security of RSA-OAEP under the assumed hardness of the RSA problem.[4][5]
RFC 8017[6] for PKCS#1 v2.2 specifies the OAEP scheme as follows for encoding:
Decoding works by reversing the steps taken in the encoding algorithm:
Usage in RSA: The encoded message can then be encrypted with RSA. The deterministic property of RSA is now avoided by using the OAEP encoding, because the seed is randomly generated and influences the entire encoded message.
The "all-or-nothing" security is from the fact that to recover M, one must recover the entire maskedDB and the entire maskedSeed; maskedDB is required to recover the seed from the maskedSeed, and the seed is required to recover the data block DB from maskedDB. Since any changed bit of a cryptographic hash completely changes the result, the entire maskedDB and the entire maskedSeed must both be completely recovered.
In the PKCS#1 standard, the random oracles are identical. The PKCS#1 standard further requires that the random oracles be MGF1 with an appropriate hash function.[7]
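Since PKCS#1 fixes the mask generation function to MGF1, both masking steps in OAEP reduce to the same few lines: hash the seed with an incrementing 4-byte counter and concatenate the digests. A minimal sketch following RFC 8017 appendix B.2.1 (the SHA-256 hash choice here is an assumption for the example):

```python
import hashlib
import struct

def mgf1(seed: bytes, length: int, hash_fn=hashlib.sha256) -> bytes:
    # MGF1 (RFC 8017, B.2.1): concatenate Hash(seed || C) for C = 0, 1, ...
    # where C is a 4-byte big-endian counter, then truncate to `length`.
    hlen = hash_fn(b"").digest_size
    output = b""
    for counter in range((length + hlen - 1) // hlen):
        output += hash_fn(seed + struct.pack(">I", counter)).digest()
    return output[:length]
```

Note that MGF1 output is a prefix-consistent stream: requesting fewer bytes from the same seed yields a prefix of the longer output, since the counter blocks are identical.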
|
https://en.wikipedia.org/wiki/Optimal_Asymmetric_Encryption_Padding
|
In cryptography, a hybrid cryptosystem is one which combines the convenience of a public-key cryptosystem with the efficiency of a symmetric-key cryptosystem.[1] Public-key cryptosystems are convenient in that they do not require the sender and receiver to share a common secret in order to communicate securely.[2] However, they often rely on complicated mathematical computations and are thus generally much more inefficient than comparable symmetric-key cryptosystems. In many applications, the high cost of encrypting long messages in a public-key cryptosystem can be prohibitive. This is addressed by hybrid systems by using a combination of both.[3]
A hybrid cryptosystem can be constructed using any two separate cryptosystems:
The hybrid cryptosystem is itself a public-key system, whose public and private keys are the same as in the key encapsulation scheme.[4]
Note that for very long messages the bulk of the work in encryption/decryption is done by the more efficient symmetric-key scheme, while the inefficient public-key scheme is used only to encrypt/decrypt a short key value.[3]
All practical implementations of public key cryptography today employ the use of a hybrid system. Examples include the TLS protocol[5] and the SSH protocol,[6] which use a public-key mechanism for key exchange (such as Diffie-Hellman) and a symmetric-key mechanism for data encapsulation (such as AES). The OpenPGP[7] file format and the PKCS#7[8] file format are other examples.
Hybrid Public Key Encryption (HPKE, published as RFC 9180) is a modern standard for generic hybrid encryption. HPKE is used within multiple IETF protocols, including MLS and TLS Encrypted Client Hello.
Envelope encryption is an example of a usage of hybrid cryptosystems in cloud computing. In a cloud context, hybrid cryptosystems also enable centralized key management.[9][10]
To encrypt a message addressed to Alice in a hybrid cryptosystem, Bob does the following:
To decrypt this hybrid ciphertext, Alice does the following:
If both the key encapsulation and data encapsulation schemes in a hybrid cryptosystem are secure against adaptive chosen ciphertext attacks, then the hybrid scheme inherits that property as well.[4] However, it is possible to construct a hybrid scheme secure against adaptive chosen ciphertext attacks even if the key encapsulation has a slightly weakened security definition (though the security of the data encapsulation must be slightly stronger).[12]
Envelope encryption is the term used for encrypting with a hybrid cryptosystem as done by all major cloud service providers,[9] often as part of a centralized key management system in cloud computing.[13]
Envelope encryption gives names to the keys used in hybrid encryption: Data Encryption Keys (abbreviated DEK, and used to encrypt data) and Key Encryption Keys (abbreviated KEK, and used to encrypt the DEKs). In a cloud environment, encryption with envelope encryption involves generating a DEK locally, encrypting one's data using the DEK, and then issuing a request to wrap (encrypt) the DEK with a KEK stored in a potentially more secure service. This wrapped DEK and the encrypted message together constitute a ciphertext for the scheme. To decrypt a ciphertext, the wrapped DEK is unwrapped (decrypted) via a call to a service, and then the unwrapped DEK is used to decrypt the encrypted message.[10] In addition to the normal advantages of a hybrid cryptosystem, using asymmetric encryption for the KEK in a cloud context provides easier key management and separation of roles, but can be slower.[13]
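The DEK/KEK flow described above condenses to a few lines of code. In this sketch, a toy SHA-256 counter-mode keystream stands in for a real symmetric cipher such as AES-GCM, and the KMS wrap call is simulated locally with the same toy cipher, so it illustrates only the structure (and, unlike AES-GCM, provides no authentication):

```python
import hashlib
import secrets

def _keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    # Toy stream cipher (SHA-256 in counter mode), a stand-in for AES-GCM.
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:length]

def encrypt_envelope(kek: bytes, message: bytes):
    # Generate a fresh DEK locally, encrypt the message with it, then wrap
    # the DEK under the KEK (in a real system, the wrap is a call to a KMS).
    dek = secrets.token_bytes(32)
    nonce = secrets.token_bytes(16)
    ciphertext = bytes(a ^ b for a, b in zip(message, _keystream(dek, nonce, len(message))))
    wrapped_dek = bytes(a ^ b for a, b in zip(dek, _keystream(kek, b"wrap" + nonce, 32)))
    return wrapped_dek, nonce, ciphertext

def decrypt_envelope(kek: bytes, wrapped_dek: bytes, nonce: bytes, ciphertext: bytes) -> bytes:
    # Unwrap the DEK (in a real system, via a KMS call), then decrypt the data.
    dek = bytes(a ^ b for a, b in zip(wrapped_dek, _keystream(kek, b"wrap" + nonce, 32)))
    return bytes(a ^ b for a, b in zip(ciphertext, _keystream(dek, nonce, len(ciphertext))))
```

The point of the structure is that only the short `wrapped_dek` ever travels to the key service; the bulk data is encrypted and decrypted locally under the DEK.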
In cloud systems, such as Google Cloud Platform and Amazon Web Services, a key management system (KMS) can be available as a service.[13][10][14] In some cases, the key management system will store keys in hardware security modules, which are hardware systems that protect keys with hardware features like intrusion resistance.[15] This means that KEKs can also be more secure because they are stored on secure specialized hardware.[13] Envelope encryption makes centralized key management easier because a centralized key management system only needs to store KEKs, which occupy less space, and requests to the KMS only involve sending wrapped and unwrapped DEKs, which use less bandwidth than transmitting entire messages. Since one KEK can be used to encrypt many DEKs, this also allows for less storage space to be used in the KMS. It also allows for centralized auditing and access control at one point of access.[10]
|
https://en.wikipedia.org/wiki/Hybrid_cryptosystem
|
The term door security or door security gate may refer to any of a range of measures used to strengthen doors against door breaching, ram-raiding and lock picking, and to prevent crimes such as burglary and home invasions. Door security is used in commercial and government buildings, as well as in residential settings.
Some strengthened doors function as fire doors to prevent or inhibit the spread of fire.
Alarms— designed to warn of burglaries.
The following types of doors are typically used in residential applications: solid wood doors, panel doors (hollow or solid core), metal-skinned wood-edged doors, metal edge-wrapped doors, and fiberglass doors (the strongest of the residential types). Door frames are typically solid wood, and residential doors also frequently contain wood.
Construction professionals recommend steel doors with multi-point locking systems as important items on a security checklist. This type of door often comes with a wooden finish to maintain a natural aesthetic in its external appearance.
Security tests by Consumer Reports Magazine in the 1990s found that many residential doors fail or delaminate when force is applied to them. Solid wood doors withstood more force than the very common metal-skinned wood-edged doors used in newer construction. A broad-range door manufacturer, Premdor (now Masonite), once stated in a 1990s brochure entitled "Premdor Entry Systems" (page 6) that "The results of tests were overwhelming, Steel edged doors outperform wood-edged doors by a ratio of 7 to 1. When you consider the practically two-thirds of all illegal entries were made through doors... One hit of 100 lb [lbf] strike force broke the wood-edged stile and opened the door. To actually open the steel-edged door required 7 strikes of 100 lb pressure [force]." Most door manufacturers offer a number of different types of doors with varying levels of strength.
Consumer Reports Magazine also reported in its test results that door frames often split with little force applied, and that lower-quality deadbolts simply failed when force was applied to the door.
The Chula Vista Residential Burglary Reduction Project which studied over 1,000 incidents found that "methods found to have relatively low effectiveness included: sliding glass door braces, such as wooden dowels, as opposed to sliding door channel or pin locks; deadbolts installed in the front door only; and outdoor lights on dusk-to-dawn timers".[2]
The Chula Vista Residential Burglary-Reduction Project yielded the following findings: "From victim interviews, we learned that in 87% of the break-ins that occurred when intruders defeated locked doors with tools such as screwdrivers or crowbars, the burglars targeted "the one door that had no deadbolt lock ... not one burglar attempted to break a double-pane window during the course of successful or attempted burglary."[2]
|
https://en.wikipedia.org/wiki/Door_security
|
Anexit control lock(also known as anexit control device,exit lock, or simply anexit control) prevents or deters unauthorized exit.
Many exit control locks incorporate magnetic locks. One type, called "delayed egress magnetic locks", will not allow the door to open immediately. This delay reserves time for security personnel to reach the door before it opens. The lock will also release if there is a fire alarm or power failure, but otherwise these locks hold the exit doors shut.
Exit control systems can include a "request to exit detector" such as a pushbutton that opens the exit, if exit requests are enabled.
In some facilities, entrances as well as exits require authentication, such as swiping or otherwise reading a card with a card reader. If an intruder slips by the entrance controls of a building, they will not be able to exit undetected, and can be detained for questioning.
Exit control locks are often used in retail establishments to deter shoplifting. They are also used in airports and other controlled areas, where people are held until they clear customs or quarantine stations. Exit control locks are also used in libraries, where there is one well-staffed entrance and exit, and a number of other exits that are intended for emergency use only.
Exit control devices are often used in hospitals, and can be interfaced to wireless sensors worn by newborn children, so that all exits will lock if a baby is stolen from one of the hospital rooms. For example, if a newborn baby is removed from a specialized section of the hospital without proper exit procedures, all exit control locks in the area switch to the locked state. Attempts to remove the transmitter from the baby's ankle also lock the exits. If the transmitter falls out, an alarm also sounds. The exits remain locked while the alarm is sounding, and unlock only after the alarm is cleared.
Similar devices are often used in Alzheimer's disease housing facilities.
Often, retail stores will install emergency exits in a way that discourages their misuse for shoplifting. Usually, the door is locked, with an emergency exit button next to it. Pushing the emergency exit button will unlock the door and also trigger the fire alarm. This deters shoplifting because a person who unlocks the door in order to take an item out of the building when it is not an emergency may be reported to the police, with CCTV footage if available.
|
https://en.wikipedia.org/wiki/Exit_control_lock
|
In master locksmithing, key relevance is the measurable difference between an original key and a copy made of that key, either from a wax impression or directly from the original, and how similar the two keys are in size and shape.[1] It can also refer to the measurable difference between a key and the size required to fit and operate the keyway of its paired lock.
No two copies of keys are exactly the same, even if they were both made from key blanks that are struck from the same mould or cut from the same duplicating/milling machine with no changes to the bitting settings in between. Even under these favorable circumstances, there will be minute differences between the two key shapes, though their key relevance is extremely high.
In all machining work, there are measurable amounts of difference between the design specification[2] of an object and its actual manufactured size. In locksmithing, the allowable tolerance[3] is decided by the range of minute differences between a key's size and shape in comparison to the size and shape required to turn the tumblers within the lock. Key relevance is the measure of similarity between the key and the optimal size needed to fit the lock, or the similarity between a duplicate key and the original it seeks to replicate.
Key relevance cannot be deduced from a key code, since the key code merely refers to a central authoritative source for designed shapes and sizes of keys.
Typical modern keys require a key relevance of approximately 0.03 to 0.07 millimetres (0.0012 to 0.0028 in) (accuracy within 0.75% to 1.75%) in order to operate.[4]
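The millimetre tolerances and the percentage figures quoted above are mutually consistent if they are read against a reference dimension of about 4 mm (for example, a maximum bitting depth; the 4 mm figure is an assumption inferred from the numbers, not stated in the text):

```python
# Check that 0.03-0.07 mm corresponds to 0.75%-1.75% of an assumed
# 4 mm reference dimension (e.g., maximum bitting depth; hypothetical).
tolerances_mm = (0.03, 0.07)
reference_mm = 4.0  # assumed reference dimension, not stated in the text
percentages = tuple(round(100 * t / reference_mm, 2) for t in tolerances_mm)
```

Under that assumption, `percentages` reproduces the quoted accuracy range.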
"Key relevance" may also be applied to types of electronic locks when used to refer to similarities in magnetic signatures or radio codes, though truly digital-coded radio signals do not have key relevance ratings because they must have precise unlocking codes, with no degree of difference whatsoever, in order to operate.
The term has also occasionally been co-opted by broader mechanical engineering to refer to the similarity of fit between two interlocking machined parts, although "tolerance" or "backlash" has gained far wider adoption and acceptance in usage.
|
https://en.wikipedia.org/wiki/Key_relevance
|
Physical security describes security measures that are designed to deny unauthorized access to facilities, equipment, and resources and to protect personnel and property from damage or harm (such as espionage, theft, or terrorist attacks).[1] Physical security involves the use of multiple layers of interdependent systems that can include CCTV surveillance, security guards, protective barriers, locks, access control, perimeter intrusion detection, deterrent systems, fire protection, and other systems designed to protect persons and property.
Physical security systems for protected facilities can be intended to:[2][3][4]
It is up to security designers, architects and analysts to balance security controls against risks, taking into account the costs of specifying, developing, testing, implementing, using, managing, monitoring and maintaining the controls, along with broader issues such as aesthetics, human rights, health and safety, and societal norms or conventions. Physical access security measures that are appropriate for a high security prison or a military site may be inappropriate in an office, a home or a vehicle, although the principles are similar.
The goal of deterrence methods is to convince potential attackers that a successful attack is unlikely due to strong defenses.
The initial layer of security for a campus, building, office, or other physical space can use crime prevention through environmental design to deter threats. Some of the most common examples are also the most basic: warning signs or window stickers, fences, vehicle barriers, vehicle height-restrictors, restricted access points, security lighting and trenches.[5][6][7][8]
For example, tall fencing, topped with barbed wire, razor wire or metal spikes, is often emplaced on the perimeter of a property, generally with some type of signage that warns people not to attempt entry. However, in some facilities imposing perimeter walls or fencing will not be possible (e.g. an urban office building that is directly adjacent to public sidewalks) or may be aesthetically unacceptable (e.g. surrounding a shopping center with tall fences topped with razor wire); in this case, the outer security perimeter will generally be defined as the walls, windows and doors of the structure itself.[9]
Security lighting is another effective form of deterrence. Intruders are less likely to enter well-lit areas for fear of being seen. Doors, gates, and other entrances, in particular, should be well lit to allow close observation of people entering and exiting. When lighting the grounds of a facility, widely distributed low-intensity lighting is generally superior to small patches of high-intensity lighting, because the latter can have a tendency to create blind spots for security personnel and CCTV cameras. It is important to place lighting in a manner that makes it difficult to tamper with (e.g. suspending lights from tall poles), and to ensure that there is a backup power supply so that security lights will not go out if the electricity is cut off.[10] The introduction of low-voltage LED-based lighting products has enabled new security capabilities, such as instant-on or strobing, while substantially reducing electrical consumption.[11]
For nuclear power plants in the United States, per the U.S. Nuclear Regulatory Commission (NRC), 10 CFR Part 73 mentions [security] lighting four times. The most notable mention is in 10 CFR 73.55(i)(6), Illumination, which identifies that licensees "shall provide a minimum illumination level of 0.2 foot-candles, measured horizontally at ground level, in the isolation zones and appropriate exterior areas within the protected area".[Ref] This is also the minimum illumination level specified in Table H-2, Minimum Night Firing Criteria, of 10 CFR 73 Appendix H for night firing. Per 10 CFR 73.46(b)(7), "Tactical Response Team members, armed response personnel, and guards shall qualify and requalify, at least every 12 months, for day and night firing with assigned weapons in accordance with Appendix H"; therefore, on the respective shooting range at night, per Appendix H, Table H-2, "all courses [shall have] 0.2 foot-candles at center mass of target area", applicable to handguns, shotguns, and rifles.[Ref] One foot-candle is approximately 10.76 lux, so the minimum illumination requirements in the above sections correspond to about 2.152 lux.
Security alarms can be installed to alert security personnel when unauthorized access is attempted. Alarm systems work in tandem with physical barriers, mechanical systems, and security guards, serving to trigger a response when these other forms of security have been breached. They consist of sensors including perimeter sensors, motion sensors, contact sensors, and glass break detectors.[12]
However, alarms are only useful if there is a prompt response when they are triggered. In the reconnaissance phase prior to an actual attack, some intruders will test the response time of security personnel to a deliberately tripped alarm system. By measuring the length of time it takes for a security team to arrive (if they arrive at all), the attacker can determine if an attack could succeed before authorities arrive to neutralize the threat. Loud audible alarms can also act as a psychological deterrent, by notifying intruders that their presence has been detected.[13]
In some U.S. jurisdictions, law enforcement will not respond to alarms from intrusion detection systems unless the activation has been verified by an eyewitness or video.[14]Policies like this one have been created to combat the 94–99 percent rate of false alarm activation in the United States.[15]
Surveillance cameras can be a deterrent[16] when placed in highly visible locations and are useful for incident assessment and historical analysis. For example, if alarms are being generated and there is a camera in place, security personnel assess the situation via the camera feed. In instances when an attack has already occurred and a camera is in place at the point of attack, the recorded video can be reviewed. Although the term closed-circuit television (CCTV) is common, it is quickly becoming outdated as more video systems lose the closed circuit for signal transmission and instead transmit on IP camera networks.
Video monitoring does not necessarily guarantee a human response. A human must be monitoring the situation in real time in order to respond in a timely manner; otherwise, video monitoring is simply a means to gather evidence for later analysis. However, technological advances like video analytics are reducing the amount of work required for video monitoring, as security personnel can be automatically notified of potential security events.[17][18][19]
Access control methods are used to monitor and control traffic through specific access points and areas of the secure facility. This is done using a variety of methods, including CCTV surveillance, identification cards, security guards, biometric readers, locks, doors, turnstiles and gates.[20][21][22]
Mechanical access control systems include turnstiles, gates, doors, and locks. Key control of the locks becomes a problem with large user populations and any user turnover. Keys quickly become unmanageable, often forcing the adoption of electronic access control.
Electronic access control systems provide secure access to buildings or facilities by controlling who can enter and exit. Some aspects of these systems can include:
Electronic access control uses credential readers, advanced software, and electrified locks to provide programmable, secure access management for facilities. Integration of cameras, alarms and other systems is also common.
An additional sub-layer of mechanical/electronic access control protection is reached by integrating a key management system to manage the possession and usage of mechanical keys to locks or property within a building or campus.[citation needed]
Another form of access control (procedural) includes the use of policies, processes and procedures to manage the ingress into the restricted area. An example of this is the deployment of security personnel conducting checks for authorized entry at predetermined points of entry. This form of access control is usually supplemented by the earlier forms of access control (i.e. mechanical and electronic access control), or simple devices such as physical passes.
Security personnel play a central role in all layers of security. All of the technological systems that are employed to enhance physical security are useless without a security force that is trained in their use and maintenance, and which knows how to properly respond to breaches in security. Security personnel perform many functions: patrolling facilities, administering electronic access control, responding to alarms, and monitoring and analyzing video footage.[23]
|
https://en.wikipedia.org/wiki/Physical_security
|
In computing, the Java Secure Socket Extension (JSSE) is a Java API and a provider implementation named SunJSSE that enable secure Internet communications in the Java Runtime Environment. It implements a Java technology version of the Secure Sockets Layer (SSL) and Transport Layer Security (TLS) protocols. It includes functionality for data encryption,[1] server authentication, message integrity, and optional client authentication.
JSSE was originally developed as an optional package for Java versions 1.2 and 1.3, but was added as a standard API and implementation into JDK 1.4.
|
https://en.wikipedia.org/wiki/Java_Secure_Socket_Extension
|
In cryptography, a keyring stores known encryption keys (and, in some cases, passwords). For example, GNU Privacy Guard makes use of keyrings.[1]
|
https://en.wikipedia.org/wiki/Keyring_(cryptography)
|
IEEE P1619 is an Institute of Electrical and Electronics Engineers (IEEE) standardization project for encryption of stored data, but the term more generically refers to the Security in Storage Working Group (SISWG), which includes a family of standards for protection of stored data and for the corresponding cryptographic key management.
SISWG oversees work on the following standards:
The base IEEE 1619 Standard Architecture for Encrypted Shared Storage Media uses the XTS-Advanced Encryption Standard (XEX-based Tweaked CodeBook mode (TCB) with ciphertext stealing (CTS);[1] the proper name should be XTC (XEX TCB CTS), but that acronym is already used to denote the drug ecstasy).
The P1619.1 Authenticated Encryption with Length Expansion for Storage Devices uses the following algorithms:
The P1619.2 Standard for Wide-Block Encryption for Shared Storage Media has proposed algorithms including:
The P1619.3 Standard for Key Management Infrastructure for Cryptographic Protection of Stored Data defines a system for managing encryption data at rest security objects which includes architecture, namespaces, operations, messaging and transport.
P1619 also standardized key backup in the XML format.
An encryption algorithm used for data storage has to support independent encryption and decryption of portions of data. So-called narrow-block algorithms operate on relatively small portions of data, while wide-block algorithms encrypt or decrypt a whole sector. Narrow-block algorithms have the advantage of more efficient hardware implementation. On the other hand, the smaller block size provides finer granularity for data modification attacks. There is no standardized "acceptable granularity"; however, for example, the possibility of data modification with the granularity of one bit (a bit-flipping attack) is generally considered unacceptable.
For these reasons, the working group selected the narrow-block (128 bits) encryption with no authentication in the standard P1619, assuming that the added efficiency warrants the additional risk. But recognizing that wide-block encryption might be useful in some cases, another project P1619.2 has been started to study the usage of wide-block encryption.
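The per-sector independence requirement discussed above can be illustrated with a toy sector cipher in which the sector number acts as a tweak, which is the same idea XEX/XTS applies with AES. The construction below is a hash-based keystream for illustration only, not XTS, and being a plain XOR stream it is deliberately malleable at one-bit granularity, which is exactly the kind of data modification attack the standard worries about:

```python
import hashlib

def _sector_keystream(key: bytes, sector: int, length: int) -> bytes:
    # Keystream bound to the sector number (the "tweak"): identical plaintext
    # stored in different sectors produces different ciphertext.
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(
            key + sector.to_bytes(8, "big") + counter.to_bytes(4, "big")
        ).digest()
        counter += 1
    return out[:length]

def crypt_sector(key: bytes, sector: int, data: bytes) -> bytes:
    # XOR with the keystream; the same call encrypts and decrypts a sector,
    # and each sector can be processed independently of all the others.
    return bytes(a ^ b for a, b in zip(data, _sector_keystream(key, sector, len(data))))
```

Each sector can be re-encrypted in place without touching any other sector, which is the property storage encryption needs; a real design then uses a block cipher (as in XTS-AES) rather than a stream to limit malleability.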
The project is maintained by the IEEE Security in Storage Working Group (SISWG). Both the disk storage standard P1619 (sometimes called P1619.0) and the tape storage standard P1619.1 were standardized in December 2007.[2]
A discussion was ongoing on standardization of wide-block encryption for disk drives, like CMC and EME, as P1619.2, and on key management as P1619.3.
From 2004 to 2006, drafts of the P1619 standards used the Advanced Encryption Standard (AES) in LRW mode. In the 30 August 2006 meeting of the SISWG, a straw poll showed that most members would not approve P1619 as it was. Consequently, LRW-AES was replaced by the XEX-AES tweakable block cipher in P1619.0 Draft 7 (and renamed to XTS-AES in Draft 11). Some members of the group found it non-trivial to abandon LRW, because it had been available for public peer review for many years (unlike most of the newly suggested variants). The issues of LRW were:
|
https://en.wikipedia.org/wiki/IEEE_P1619
|
BATON is a Type 1 block cipher in use since at least 1995 by the United States government to secure classified information.
While the BATON algorithm itself is secret (as is the case with all algorithms in the NSA's Suite A), the public PKCS#11 standard includes some general information about how it is used. It has a 320-bit key and uses a 128-bit block in most modes, and also supports a 96-bit electronic codebook mode. 160 bits of the key are checksum material. It supports a "shuffle" mode of operation, like the NSA cipher JUNIPER. It may use up to 192 bits as an initialization vector, regardless of the block size.[1]
In response to a Senate question about encrypted video links, the NSA said that BATON could be used for encryption at speeds higher than those possible with Skipjack.[2]
BATON is used in a variety of products and standards:
|
https://en.wikipedia.org/wiki/BATON
|
In cryptography, a pseudorandom permutation (PRP) is a function that cannot be distinguished from a random permutation (that is, a permutation selected at random with uniform probability from the family of all permutations on the function's domain) with practical effort.
Let F be a mapping {0,1}^n × {0,1}^s → {0,1}^n. F is a PRP if and only if
A pseudorandom permutation family is a collection of pseudorandom permutations, where a specific permutation may be chosen using a key.
The idealized abstraction of a (keyed) block cipher is a truly random permutation on the mappings between plaintext and ciphertext. If a distinguishing algorithm exists that achieves significant advantage with less effort than specified by the block cipher's security parameter (this usually means the effort required should be about the same as a brute force search through the cipher's key space), then the cipher is considered broken at least in a certificational sense, even if such a break doesn't immediately lead to a practical security failure.[2]
Modern ciphers are expected to have super pseudorandomness.
That is, the cipher should be indistinguishable from a randomly chosen permutation on the same message space, even if the adversary has black-box access to the forward and inverse directions of the cipher.[3]
Michael Luby and Charles Rackoff[4] showed that a "strong" pseudorandom permutation can be built from a pseudorandom function using a Luby–Rackoff construction, which is built using a Feistel cipher.
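The Luby–Rackoff construction can be demonstrated directly: a few Feistel rounds over a pseudorandom function yield an invertible keyed permutation, even though the round function itself need not be invertible. The sketch below uses HMAC-SHA256, truncated to the half-block length, as the round function; the four-round count follows the Luby–Rackoff result for strong PRPs (the PRF choice here is an assumption for the example):

```python
import hashlib
import hmac

def _prf(key: bytes, rnd: int, half: bytes) -> bytes:
    # Round function: HMAC-SHA256 keyed per round, truncated to the half-block.
    return hmac.new(key + bytes([rnd]), half, hashlib.sha256).digest()[:len(half)]

def feistel_encrypt(key: bytes, block: bytes, rounds: int = 4) -> bytes:
    # Feistel network: (L, R) -> (R, L xor F(R)) repeated for each round.
    half = len(block) // 2
    l, r = block[:half], block[half:]
    for rnd in range(rounds):
        l, r = r, bytes(a ^ b for a, b in zip(l, _prf(key, rnd, r)))
    return l + r

def feistel_decrypt(key: bytes, block: bytes, rounds: int = 4) -> bytes:
    # Run the rounds in reverse; the XOR structure makes each round invertible.
    half = len(block) // 2
    l, r = block[:half], block[half:]
    for rnd in reversed(range(rounds)):
        l, r = bytes(a ^ b for a, b in zip(r, _prf(key, rnd, l))), l
    return l + r
```

Because each round only swaps halves and XORs in a PRF output, decryption never needs to invert the PRF, which is why any pseudorandom function suffices as the building block.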
An unpredictable permutation (UP) F_k is a permutation whose values cannot be predicted by a fast randomized algorithm. Unpredictable permutations may be used as a cryptographic primitive, a building block for cryptographic systems with more complex properties.
An adversary for an unpredictable permutation is defined to be an algorithm that is given access to an oracle for both forward and inverse permutation operations. The adversary is given a challenge input k and is asked to predict the value of F_k. It is allowed to make a series of queries to the oracle to help it make this prediction, but is not allowed to query the value of k itself.[5]
A randomized algorithm for generating permutations generates an unpredictable permutation if its outputs are permutations on a set of items (described by length-n binary strings) that cannot be predicted with accuracy significantly better than random by an adversary that makes a polynomial (in n) number of queries to the oracle prior to the challenge round, whose running time is polynomial in n, and whose error probability is less than 1/2 for all instances. That is, it cannot be predicted in the complexity class PP, relativized by the oracle for the permutation.[5]
It can be shown that a functionFkis not a securemessage authentication code(MAC) if it satisfies only the unpredictability requirement. It can also be shown that one cannot build an efficient variable input length MAC from a block cipher which is modelled as a UP ofnbits. It has been shown that the output of ak=n/ω(logλ) round Feistel construction with unpredictable round functions may leak all the intermediate round values.[5]Even for realistic Unpredictable Functions (UF), some partial information about the intermediate round values may be leaked through the output. It was later shown that if a super-logarithmic number of rounds in the Feistel construction is used, then the resulting UP construction is secure even if the adversary gets all the intermediate round values along with the permutation output.[6]
There is also a theorem that has been proven in this regard which states that if there exists an efficient UP adversary A_π that has non-negligible advantage ε_π in the unpredictability game against the UP construction ψ_{U,k}, and which makes a polynomial number of queries to the challenger, then there also exists a UF adversary A_f that has non-negligible advantage in the unpredictability game against a UF sampled from the UF family F. From this, it can be shown that the maximum advantage of the UP adversary A_π is ε_π = O(ε_f · (qk)^6). Here ε_f denotes the maximum advantage of a UF adversary running in time O(t + (qk)^5) against a UF sampled from F, where t is the running time of the PRP adversary A_ψ and q is the number of queries made by it.[6][7]
In addition, a signature scheme that satisfies the property of unpredictability, but not necessarily pseudorandomness, is essentially a verifiable unpredictable function (VUF). A verifiable unpredictable function is defined analogously to a verifiable pseudorandom function (VRF), but with pseudorandomness substituted by the weaker unpredictability. Verifiable unpredictable permutations are the permutation analogs of VUFs, or the unpredictable analogs of VRPs. A VRP is also a VUP, and a VUP can in fact be built by building a VRP via the Feistel construction applied to a VRF. But this is not viewed as useful, since VUFs appear to be much easier to construct than VRFs.[8]
|
https://en.wikipedia.org/wiki/Pseudorandom_permutation
|
In number theory, a Wilson prime is a prime number p such that p^2 divides (p − 1)! + 1, where "!" denotes the factorial function; compare this with Wilson's theorem, which states that every prime p divides (p − 1)! + 1. Both are named for the 18th-century English mathematician John Wilson; in 1770, Edward Waring credited the theorem to Wilson,[1] although it had been stated centuries earlier by Ibn al-Haytham.[2]
The only known Wilson primes are 5, 13, and 563 (sequence A007540 in the OEIS). Costa et al. write that "the case p = 5 is trivial", and credit the observation that 13 is a Wilson prime to Mathews (1892).[3][4] Early work on these numbers included searches by N. G. W. H. Beeger and Emma Lehmer,[5][3][6] but 563 was not discovered until the early 1950s, when computer searches could be applied to the problem.[3][7][8] If any others exist, they must be greater than 2 × 10^13.[3] It has been conjectured that infinitely many Wilson primes exist, and that the number of Wilson primes in an interval [x, y] is about log log_x y = log(log y / log x).[9]
Several computer searches have been done in the hope of finding new Wilson primes.[10][11][12] The Ibercivis distributed computing project includes a search for Wilson primes.[13] Another search was coordinated at the Great Internet Mersenne Prime Search forum.[14]
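The defining congruence translates directly into a search sketch (function names are mine; real searches use far faster primality tests and factorial accumulation than this naive version):

```python
# A Wilson prime p satisfies (p-1)! ≡ -1 (mod p^2). Building the factorial
# up modulo p^2 keeps every intermediate value small.

def is_prime(n):
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def wilson_primes(limit):
    found = []
    for p in range(2, limit):
        if not is_prime(p):
            continue
        m = p * p
        fact = 1
        for k in range(2, p):
            fact = fact * k % m
        if (fact + 1) % m == 0:
            found.append(p)
    return found

print(wilson_primes(1000))  # [5, 13, 563]
```

For 5, for instance, 4! + 1 = 25 is exactly 5^2, and for 13, 12! + 1 = 479001601 = 169 · 2834329.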
Wilson's theorem can be expressed in general as (n − 1)!(p − n)! ≡ (−1)^n (mod p) for every integer n ≥ 1 and prime p ≥ n. Generalized Wilson primes of order n are the primes p such that p^2 divides (n − 1)!(p − n)! − (−1)^n.
It was conjectured that for every natural number n, there are infinitely many Wilson primes of order n.
The smallest generalized Wilson primes of order n are:
A prime p satisfying the congruence (p − 1)! ≡ −1 + Bp (mod p^2) with small |B| can be called a near-Wilson prime. Near-Wilson primes with B = 0 are bona fide Wilson primes. The table on the right lists all such primes with |B| ≤ 100 from 10^6 up to 4 × 10^11.[3]
A Wilson number is a natural number n such that W(n) ≡ 0 (mod n^2), where W(n) = ±1 + ∏_{1 ≤ k ≤ n, gcd(k, n) = 1} k, and where the ±1 term is positive if and only if n has a primitive root and negative otherwise.[15] For every natural number n, W(n) is divisible by n, and the quotients (called generalized Wilson quotients) are listed in OEIS: A157249. The Wilson numbers are
If a Wilson number n is prime, then n is a Wilson prime. There are 13 Wilson numbers up to 5 × 10^8.[16]
|
https://en.wikipedia.org/wiki/Wilson_prime
|
In number theory the Agoh–Giuga conjecture on the Bernoulli numbers B_k postulates that p is a prime number if and only if
pB_{p−1} ≡ −1 (mod p).
It is named after Takashi Agoh and Giuseppe Giuga.
The conjecture as stated above is due to Takashi Agoh (1990); an equivalent formulation is due to Giuseppe Giuga, from 1950, to the effect that p is prime if and only if
∑_{i=1}^{p−1} i^{p−1} ≡ −1 (mod p),
which may also be written as
∑_{i=1}^{p−1} i^{p−1} ≡ p − 1 (mod p).
It is trivial to show that p being prime is sufficient for the second equivalence to hold, since if p is prime, Fermat's little theorem states that
a^{p−1} ≡ 1 (mod p)
for a = 1, 2, …, p − 1, and the equivalence follows, since each of the p − 1 terms of the sum is congruent to 1 and p − 1 ≡ −1 (mod p).
The statement is still a conjecture, since it has not yet been proven that if a number n is not prime (that is, n is composite), then the formula does not hold. It has been shown that a composite number n satisfies the formula if and only if it is both a Carmichael number and a Giuga number, and that if such a number exists, it has at least 13,800 digits (Borwein, Borwein, Borwein, Girgensohn 1996). Laerte Sorini, in a work of 2001, showed that a possible counterexample should be a number n greater than 10^36067, which represents the limit suggested by Bedocchi for the demonstration technique specified by Giuga for his own conjecture.
The Agoh–Giuga conjecture bears a similarity to Wilson's theorem, which has been proven to be true. Wilson's theorem states that a number p is prime if and only if
(p − 1)! ≡ −1 (mod p),
which may also be written as
∏_{i=1}^{p−1} i ≡ −1 (mod p).
For an odd prime p we have
∏_{i=1}^{p−1} i^{p−1} ≡ (−1)^{p−1} = 1 (mod p),
and for p = 2 we have
∏_{i=1}^{p−1} i^{p−1} ≡ (−1)^{p−1} = −1 (mod p).
So, the truth of the Agoh–Giuga conjecture combined with Wilson's theorem would give: a number p is prime if and only if
∑_{i=1}^{p−1} i^{p−1} ≡ −1 (mod p)
and
∏_{i=1}^{p−1} i^{p−1} ≡ (−1)^{p−1} (mod p).
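Giuga's power-sum criterion is easy to check numerically. Since any composite counterexample is known to have at least 13,800 digits, a small-range scan like this sketch (function names are mine) necessarily agrees with primality:

```python
# For prime p, Fermat's little theorem makes every term of the sum 1, so the
# sum is p - 1 ≡ -1 (mod p). The conjecture asserts that no composite number
# passes this test.

def satisfies_giuga(n):
    return sum(pow(i, n - 1, n) for i in range(1, n)) % n == n - 1

def is_prime(n):
    return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))

# Perfect agreement on a small range, as guaranteed by the known lower bounds.
assert all(satisfies_giuga(n) == is_prime(n) for n in range(2, 500))
```

For example, 12 fails because 11 of its power-sum terms vanish or collapse modulo 12, while 13 passes with all twelve terms equal to 1.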
|
https://en.wikipedia.org/wiki/Agoh%E2%80%93Giuga_conjecture
|
In algebra, more specifically group theory, a p-elementary group is a direct product of a finite cyclic group of order relatively prime to p and a p-group. A finite group is an elementary group if it is p-elementary for some prime number p. An elementary group is nilpotent.
Brauer's theorem on induced characters states that a character on a finite group is a linear combination with integer coefficients of characters induced from elementary subgroups.
More generally, a finite group G is called p-hyperelementary if it has the extension
1 → C → G → P → 1,
where C is cyclic of order prime to p and P is a p-group. Not every hyperelementary group is elementary: for instance the non-abelian group of order 6 is 2-hyperelementary, but not 2-elementary.
|
https://en.wikipedia.org/wiki/Elementary_group
|
In mathematics, a trivial semigroup (a semigroup with one element) is a semigroup for which the cardinality of the underlying set is one. The number of distinct nonisomorphic semigroups with one element is one. If S = {a} is a semigroup with one element, then the Cayley table of S has the single entry a · a = a.
The only element in S is the zero element 0 of S and is also the identity element 1 of S.[1] However, not all semigroup theorists consider the unique element in a semigroup with one element as the zero element of the semigroup. They define zero elements only in semigroups having at least two elements.[2][3]
In spite of its extreme triviality, the semigroup with one element is important in many situations. It is the starting point for understanding the structure of semigroups. It serves as a counterexample in illuminating many situations. For example, the semigroup with one element is the only semigroup in which 0 = 1, that is, the zero element and the identity element are equal.
Further, if S is a semigroup with one element, the semigroup obtained by adjoining an identity element to S is isomorphic to the semigroup obtained by adjoining a zero element to S.
The semigroup with one element is also a group.
In the language of category theory, any semigroup with one element is a terminal object in the category of semigroups.
|
https://en.wikipedia.org/wiki/Semigroup_with_one_element
|
Zech logarithms are used to implement addition in finite fields when elements are represented as powers of a generator α.
Zech logarithms are named after Julius Zech,[1][2][3][4] and are also called Jacobi logarithms,[5] after Carl G. J. Jacobi who used them for number theoretic investigations.[6]
Given a primitive element α of a finite field, the Zech logarithm relative to the base α is defined by the equation
α^{Z_α(n)} = 1 + α^n,
which is often rewritten as
Z_α(n) = log_α(1 + α^n).
The choice of base α is usually dropped from the notation when it is clear from the context.
To be more precise, Z_α is a function on the integers modulo the multiplicative order of α, and takes values in the same set. In order to describe every element, it is convenient to formally add a new symbol −∞, along with the definitions
α^{−∞} = 0, Z_α(−∞) = 0, Z_α(e) = −∞,
where e is an integer satisfying α^e = −1, that is e = 0 for a field of characteristic 2, and e = (q − 1)/2 for a field of odd characteristic with q elements.
Using the Zech logarithm, finite field arithmetic can be done in the exponential representation:
α^m + α^n = α^m · (1 + α^{n−m}) = α^{m + Z(n−m)},
−α^n = (−1) · α^n = α^{e+n},
α^m − α^n = α^m + (−α^n) = α^{m + Z(e+n−m)},
α^m · α^n = α^{m+n},
(α^m)^{−1} = α^{−m}.
These formulas remain true with our conventions with the symbol −∞, with the caveat that subtraction of −∞ is undefined. In particular, the addition and subtraction formulas need to treat m = −∞ as a special case.
This can be extended to arithmetic of the projective line by introducing another symbol +∞ satisfying α^{+∞} = ∞ and other rules as appropriate.
For fields of characteristic 2, Z(n) = m if and only if Z(m) = n, since 1 + α^n = α^m is equivalent to 1 + α^m = α^n.
For sufficiently small finite fields, a table of Zech logarithms allows an especially efficient implementation of all finite field arithmetic in terms of a small number of integer addition/subtractions and table look-ups.
The utility of this method diminishes for large fields where one cannot efficiently store the table. This method is also inefficient when doing very few operations in the finite field, because one spends more time computing the table than one does in actual calculation.
Let α ∈ GF(2^3) be a root of the primitive polynomial x^3 + x^2 + 1. The traditional representation of elements of this field is as polynomials in α of degree 2 or less.
A table of Zech logarithms for this field is Z(−∞) = 0, Z(0) = −∞, Z(1) = 5, Z(2) = 3, Z(3) = 2, Z(4) = 6, Z(5) = 1, and Z(6) = 4. The multiplicative order of α is 7, so the exponential representation works with integers modulo 7.
Since α is a root of x^3 + x^2 + 1, that means α^3 + α^2 + 1 = 0, or, if we recall that all coefficients are in GF(2) so that subtraction is the same as addition, α^3 = α^2 + 1.
The conversion from exponential to polynomial representations is given by
α^{−∞} = 0, α^0 = 1, α^1 = α, α^2 = α^2, α^3 = α^2 + 1, α^4 = α^2 + α + 1, α^5 = α + 1, α^6 = α^2 + α.
Using Zech logarithms to compute α^6 + α^3:
α^6 + α^3 = α^6 · (1 + α^{3−6}) = α^6 · (1 + α^4) = α^6 · α^{Z(4)} = α^6 · α^6 = α^{12} = α^5,
or, more efficiently,
α^6 + α^3 = α^3 · (1 + α^3) = α^3 · α^{Z(3)} = α^3 · α^2 = α^5,
and verifying it in the polynomial representation:
α^6 + α^3 = (α^2 + α) + (α^2 + 1) = α + 1 = α^5.
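The GF(2^3) example can be reproduced in a few lines. The bit-vector encoding and the function names below are my own; bit i of an integer holds the coefficient of α^i.

```python
# Sketch: Zech-logarithm arithmetic in GF(2^3) with the primitive polynomial
# x^3 + x^2 + 1 from the example above.

MOD_POLY = 0b1101   # x^3 + x^2 + 1
ORDER = 7           # multiplicative order of alpha in GF(8)

def times_alpha(v):
    """Multiply a field element by alpha, reducing modulo the polynomial."""
    v <<= 1
    if v & 0b1000:          # degree 3 appeared: subtract (= XOR) the modulus
        v ^= MOD_POLY
    return v

alpha_pow = [1]             # alpha_pow[i] = alpha^i in polynomial form
for _ in range(ORDER - 1):
    alpha_pow.append(times_alpha(alpha_pow[-1]))
log = {v: i for i, v in enumerate(alpha_pow)}

# Zech logarithm: alpha^Z(n) = 1 + alpha^n ("+" is XOR in characteristic 2).
# Z(0) is the formal symbol -infinity, so n = 0 is skipped here.
zech = {n: log[1 ^ alpha_pow[n]] for n in range(1, ORDER)}

def add_exp(m, n):
    """Exponent of alpha^m + alpha^n, or None when the sum is zero."""
    if (n - m) % ORDER == 0:
        return None         # x + x = 0 in characteristic 2
    return (m + zech[(n - m) % ORDER]) % ORDER

print(zech)           # {1: 5, 2: 3, 3: 2, 4: 6, 5: 1, 6: 4}
print(add_exp(6, 3))  # 5: alpha^6 + alpha^3 = alpha^5
```

After the table is built, every addition costs one table lookup and two modular index additions, which is the efficiency argument made above.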
|
https://en.wikipedia.org/wiki/Zech%27s_logarithm
|
In mathematics, the projective line over a ring is an extension of the concept of projective line over a field. Given a ring A (with 1), the projective line P1(A) over A consists of points identified by projective coordinates. Let A× be the group of units of A; pairs (a, b) and (c, d) from A × A are related when there is a u in A× such that ua = c and ub = d. This relation is an equivalence relation. A typical equivalence class is written U[a, b].
P1(A) = {U[a, b] | aA + bA = A}, that is, U[a, b] is in the projective line if the one-sided ideal generated by a and b is all of A.
The projective line P1(A) is equipped with a group of homographies. The homographies are expressed through use of the matrix ring over A and its group of units V as follows: If c is in Z(A×), the center of A×, then the group action of the scalar matrix (c 0; 0 c) on P1(A) is the same as the action of the identity matrix. Such matrices represent a normal subgroup N of V. The homographies of P1(A) correspond to elements of the quotient group V/N.
P1(A) is considered an extension of the ring A since it contains a copy of A due to the embedding E : a → U[a, 1]. The multiplicative inverse mapping u → 1/u, ordinarily restricted to A×, is expressed by a homography on P1(A):
Furthermore, for u, v ∈ A×, the mapping a → uav can be extended to a homography:
Since u is arbitrary, it may be substituted for u^{−1}.
Homographies on P1(A) are called linear-fractional transformations since
Rings that are fields are most familiar: The projective line over GF(2) has three elements: U[0, 1], U[1, 0], and U[1, 1]. Its homography group is the permutation group on these three.[1]: 29
The ring Z/3Z, or GF(3), has the elements 1, 0, and −1; its projective line has the four elements U[1, 0], U[1, 1], U[0, 1], U[1, −1], since both 1 and −1 are units. The homography group on this projective line has 12 elements, also described with matrices or as permutations.[1]: 31 For a finite field GF(q), the projective line is the Galois geometry PG(1, q). J. W. P. Hirschfeld has described the harmonic tetrads in the projective lines for q = 4, 5, 7, 8, 9.[2]
Consider P1(Z/nZ) when n is a composite number. If p and q are distinct primes dividing n, then ⟨p⟩ and ⟨q⟩ are maximal ideals in Z/nZ and by Bézout's identity there are a and b in Z such that ap + bq = 1, so that U[p, q] is in P1(Z/nZ) but it is not an image of an element under the canonical embedding. The whole of P1(Z/nZ) is filled out by elements U[up, vq], where u ≠ v and u, v ∈ A×, A× being the units of Z/nZ. The instances Z/nZ are given here for n = 6, 10, and 12, where according to modular arithmetic the group of units of the ring is (Z/6Z)× = {1, 5}, (Z/10Z)× = {1, 3, 7, 9}, and (Z/12Z)× = {1, 5, 7, 11}, respectively. Modular arithmetic will confirm that, in each table, a given letter represents multiple points. In these tables a point U[m, n] is labeled by m in the row at the table bottom and n in the column at the left of the table. For instance, the point at infinity A = U[v, 0], where v is a unit of the ring.
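The points of P1(Z/nZ) can be enumerated directly from the definition. The encoding below (a point as the frozen set of its unit-equivalent coordinate pairs) and the function name are my own; a pair (a, b) generates Z/nZ as an ideal exactly when gcd(a, b, n) = 1.

```python
from itertools import product
from math import gcd

# Sketch: enumerate the points U[a, b] of the projective line P1(Z/nZ).

def projective_line(n):
    units = [u for u in range(n) if gcd(u, n) == 1]
    points = set()
    for a, b in product(range(n), repeat=2):
        if gcd(gcd(a, b), n) != 1:
            continue                       # (a, b) does not generate Z/nZ
        # U[a, b]: the orbit of (a, b) under multiplication by units
        points.add(frozenset((u * a % n, u * b % n) for u in units))
    return points

# |P1(Z/nZ)| = n * prod_{p | n} (1 + 1/p), which exceeds the n + 1 points
# coming from the embedding a -> U[a, 1] together with U[1, 0].
for n in (6, 10, 12):
    print(n, len(projective_line(n)))   # 12, 18 and 24 points respectively
```

For n = 6 this exhibits the extra points such as U[2, 3] discussed above, which lie outside the canonical embedding.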
The extra points can be associated with Q ⊂ R ⊂ C, the rationals in the extended complex upper-half plane. The group of homographies on P1(Z/nZ) is called a principal congruence subgroup.[3]
For the rational numbers Q, homogeneity of coordinates means that every element of P1(Q) may be represented by an element of P1(Z). Similarly, a homography of P1(Q) corresponds to an element of the modular group, the automorphisms of P1(Z).
The projective line over a division ring results in a single auxiliary point ∞ = U[1, 0]. Examples include the real projective line, the complex projective line, and the projective line over quaternions. These examples of topological rings have the projective line as their one-point compactifications. The case of the complex number field C has the Möbius group as its homography group.
The projective line over the dual numbers was described by Josef Grünwald in 1906.[4] This ring includes a nonzero nilpotent n satisfying n^2 = 0. The plane {z = x + yn | x, y ∈ R} of dual numbers has a projective line including a line of points U[1, xn], x ∈ R.[5] Isaak Yaglom has described it as an "inversive Galilean plane" that has the topology of a cylinder when the supplementary line is included.[6]: 149–153 Similarly, if A is a local ring, then P1(A) is formed by adjoining points corresponding to the elements of the maximal ideal of A.
The projective line over the ring M of split-complex numbers introduces auxiliary lines {U[1, x(1 + j)] | x ∈ R} and {U[1, x(1 − j)] | x ∈ R}. Using stereographic projection the plane of split-complex numbers is closed up with these lines to a hyperboloid of one sheet.[6]: 174–200 [7] The projective line over M may be called the Minkowski plane when characterized by behaviour of hyperbolas under homographic mapping.
The projective line P1(A) over a ring A can also be identified as the space of projective modules in the module A ⊕ A. An element of P1(A) is then a direct summand of A ⊕ A. This more abstract approach follows the view of projective geometry as the geometry of subspaces of a vector space, sometimes associated with the lattice theory of Garrett Birkhoff[8] or the book Linear Algebra and Projective Geometry by Reinhold Baer. In the case of the ring of rational integers Z, the module summand definition of P1(Z) narrows attention to the U[m, n] with m coprime to n, and sheds the embeddings that are a principal feature of P1(A) when A is topological. The 1981 article by W. Benz, Hans-Joachim Samaga, & Helmut Scheaffer mentions the direct summand definition.
In an article "Projective representations: projective lines over rings"[9] the group of units of a matrix ring M2(R) and the concepts of module and bimodule are used to define a projective line over a ring. The group of units is denoted by GL(2, R), adopting notation from the general linear group, where R is usually taken to be a field.
The projective line is the set of orbits under GL(2, R) of the free cyclic submodule R(1, 0) of R × R. Extending the commutative theory of Benz, the existence of a right or left multiplicative inverse of a ring element is related to P1(R) and GL(2, R). The Dedekind-finite property is characterized. Most significantly, representation of P1(R) in a projective space over a division ring K is accomplished with a (K, R)-bimodule U that is a left K-vector space and a right R-module. The points of P1(R) are subspaces of P1(K, U × U) isomorphic to their complements.
A homography h that takes three particular ring elements a, b, c to the projective line points U[0, 1], U[1, 1], U[1, 0] is called the cross-ratio homography. Sometimes[10][11] the cross-ratio is taken as the value of h on a fourth point x: (x, a, b, c) = h(x).
To build h from a, b, c the generator homographies
are used, with attention to fixed points: +1 and −1 are fixed under inversion, U[1, 0] is fixed under translation, and the "rotation" with u leaves U[0, 1] and U[1, 0] fixed. The instructions are to place c first, then bring a to U[0, 1] with translation, and finally to use rotation to move b to U[1, 1].
Lemma: If A is a commutative ring and b − a, c − b, c − a are all units, then (b − c)^{−1} + (c − a)^{−1} is a unit.
Proof: Evidently (b − c)^{−1} + (c − a)^{−1} = ((c − a) + (b − c)) / ((b − c)(c − a)) = (b − a) / ((b − c)(c − a)), which is a unit, as required.
Theorem: If (b − c)^{−1} + (c − a)^{−1} is a unit, then there is a homography h in G(A) such that h(a) = U[0, 1], h(b) = U[1, 1], and h(c) = U[1, 0].
Proof: The point p = (b − c)^{−1} + (c − a)^{−1} is the image of b after a was put to 0 and then inverted to U[1, 0], and the image of c is brought to U[0, 1]. As p is a unit, its inverse used in a rotation will move p to U[1, 1], resulting in a, b, c being all properly placed. The lemma refers to sufficient conditions for the existence of h.
One application of cross-ratio defines the projective harmonic conjugate of a triple a, b, c as the element x satisfying (x, a, b, c) = −1. Such a quadruple is a harmonic tetrad. Harmonic tetrads on the projective line over a finite field GF(q) were used in 1954 to delimit the projective linear groups PGL(2, q) for q = 5, 7, and 9, and demonstrate accidental isomorphisms.[12]
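Over a commutative field the harmonic conjugate can be computed explicitly. The sketch below assumes the usual normalization of the cross-ratio in which a maps to 0, b to 1, and c to infinity, matching h(a) = U[0, 1], h(b) = U[1, 1], h(c) = U[1, 0] above; the function names are mine.

```python
from fractions import Fraction

# Cross-ratio normalized so that (a, a, b, c) = 0, (b, a, b, c) = 1, and
# x = c maps to infinity (denominator zero).
def cross_ratio(x, a, b, c):
    return ((x - a) * (b - c)) / ((x - c) * (b - a))

def harmonic_conjugate(a, b, c):
    # Solve (x, a, b, c) = -1:  (x - a)(b - c) = -(x - c)(b - a)
    # =>  x((b - c) + (b - a)) = a(b - c) + c(b - a)
    return (a * (b - c) + c * (b - a)) / ((b - c) + (b - a))

a, b, c = map(Fraction, (0, 1, 3))
x = harmonic_conjugate(a, b, c)
print(x)                             # -3
print(cross_ratio(x, a, b, c))       # -1
```

The same formula works over any field in which the relevant differences are invertible, e.g. with arithmetic modulo a prime in place of Fraction.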
The real line in the complex plane gets permuted with circles and other real lines under Möbius transformations, which actually permute the canonical embedding of the real projective line in the complex projective line. Suppose A is an algebra over a field F, generalizing the case where F is the real number field and A is the field of complex numbers. The canonical embedding of P1(F) into P1(A) is z → U[z, 1], together with U[1, 0].
A chain is the image of P1(F) under a homography on P1(A). Four points lie on a chain if and only if their cross-ratio is in F. Karl von Staudt exploited this property in his theory of "real strokes" [reeler Zug].[13]
Two points of P1(A) are parallel if there is no chain connecting them. The convention has been adopted that points are parallel to themselves. This relation is invariant under the action of a homography on the projective line. Given three pair-wise non-parallel points, there is a unique chain that connects the three.[14]
August Ferdinand Möbius investigated the Möbius transformations between his book Barycentric Calculus (1827) and his 1855 paper "Theorie der Kreisverwandtschaft in rein geometrischer Darstellung". Karl Wilhelm Feuerbach and Julius Plücker are also credited with originating the use of homogeneous coordinates. Eduard Study in 1898, and Élie Cartan in 1908, wrote articles on hypercomplex numbers for German and French Encyclopedias of Mathematics, respectively, where they use these arithmetics with linear fractional transformations in imitation of those of Möbius. In 1902 Theodore Vahlen contributed a short but well-referenced paper exploring some linear fractional transformations of a Clifford algebra.[15] The ring of dual numbers D gave Josef Grünwald opportunity to exhibit P1(D) in 1906.[4] Corrado Segre (1912) continued the development with that ring.[5]
Arthur Conway, one of the early adopters of relativity via biquaternion transformations, considered the quaternion-multiplicative-inverse transformation in his 1911 relativity study.[16] In 1947 some elements of inversive quaternion geometry were described by P. G. Gormley in Ireland.[17] In 1968 Isaak Yaglom's Complex Numbers in Geometry appeared in English, translated from Russian. There he uses P1(D) to describe line geometry in the Euclidean plane and P1(M) to describe it for Lobachevski's plane. Yaglom's text A Simple Non-Euclidean Geometry appeared in English in 1979. There in pages 174 to 200 he develops Minkowskian geometry and describes P1(M) as the "inversive Minkowski plane". The Russian original of Yaglom's text was published in 1969. Between the two editions, Walter Benz (1973) published his book,[7] which included the homogeneous coordinates taken from M.
|
https://en.wikipedia.org/wiki/Projective_line_over_a_ring#Over_discrete_rings
|
In mathematics, a pseudo-finite field F is an infinite model of the first-order theory of finite fields. This is equivalent to the condition that F is quasi-finite (perfect with a unique extension of every positive degree) and pseudo algebraically closed (every absolutely irreducible variety over F has a point defined over F). Every hyperfinite field is pseudo-finite and every pseudo-finite field is quasi-finite. Every non-principal ultraproduct of finite fields is pseudo-finite.
Pseudo-finite fields were introduced by Ax (1968).
|
https://en.wikipedia.org/wiki/Pseudo-finite_field
|
In algebra, Hua's identity,[1] named after Hua Luogeng, states that for any elements a, b in a division ring,
a − (a^{−1} + (b^{−1} − a)^{−1})^{−1} = aba
whenever ab ≠ 0, 1. Replacing b with −b^{−1} gives another equivalent form of the identity:
(a + ab^{−1}a)^{−1} + (a + b)^{−1} = a^{−1}.
The identity is used in a proof of Hua's theorem,[2] which states that if σ is a function between division rings satisfying
σ(a + b) = σ(a) + σ(b), σ(1) = 1, σ(a^{−1}) = σ(a)^{−1},
then σ is a homomorphism or an antihomomorphism. This theorem is connected to the fundamental theorem of projective geometry.
One has
(a − aba)(a^{−1} + (b^{−1} − a)^{−1}) = 1 − ab + ab(b^{−1} − a)(b^{−1} − a)^{−1} = 1.
The proof is valid in any ring as long as a, b, ab − 1 are units.[3]
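The identity is easy to check numerically. The sketch below works over the rationals, a commutative division ring, so it exercises only a special case of the statement; the function name is mine.

```python
from fractions import Fraction
from itertools import product

# Left-hand side of Hua's identity: a - (a^{-1} + (b^{-1} - a)^{-1})^{-1}
def hua_lhs(a, b):
    return a - (a**-1 + (b**-1 - a)**-1)**-1

# Check against aba (= a*b*a, commutative here) on a grid of rationals,
# skipping the excluded cases a = 0, b = 0, ab = 1.
values = [Fraction(n, d) for n in (-3, -1, 1, 2, 5) for d in (1, 2, 3)]
for a, b in product(values, repeat=2):
    if a != 0 and b != 0 and a * b != 1:
        assert hua_lhs(a, b) == a * b * a
```

For instance a = 2, b = 3 gives 2 − (1/2 + (1/3 − 2)^{−1})^{−1} = 2 − (−1/10)^{−1} = 12 = 2·3·2.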
|
https://en.wikipedia.org/wiki/Hua%27s_identity
|
In mathematics, an algebraic structure (R, T) consisting of a non-empty set R and a ternary mapping T : R^3 → R may be called a ternary system. A planar ternary ring (PTR) or ternary field is a special type of ternary system used by Marshall Hall[1] to construct projective planes by means of coordinates. A planar ternary ring is not a ring in the traditional sense, but any field gives a planar ternary ring where the operation T is defined by T(a, b, c) = ab + c. Thus, we can think of a planar ternary ring as a generalization of a field where the ternary operation takes the place of both addition and multiplication.
There is wide variation in the terminology. Planar ternary rings or ternary fields as defined here have been called by other names in the literature, and the term "planar ternary ring" can mean a variant of the system defined here. The term "ternary ring" often means a planar ternary ring, but it can also simply mean a ternary system.
A planar ternary ring is a structure (R, T) where R is a set containing at least two distinct elements, called 0 and 1, and T : R^3 → R is a mapping which satisfies these five axioms:[2]
1. T(a, 0, b) = T(0, a, b) = b for all a, b ∈ R;
2. T(a, 1, 0) = T(1, a, 0) = a for all a ∈ R;
3. for all a, b, c, d ∈ R with a ≠ c, there is a unique x ∈ R such that T(x, a, b) = T(x, c, d);
4. for all a, b, c ∈ R, there is a unique x ∈ R such that T(a, b, x) = c;
5. for all a, b, c, d ∈ R with a ≠ c, there is a unique pair (x, y) ∈ R^2 such that T(a, x, y) = b and T(c, x, y) = d.
When R is finite, the third and fifth axioms are equivalent in the presence of the fourth.[3]
No other pair (0′, 1′) in R^2 can be found such that T still satisfies the first two axioms.
Define a ⊕ b = T(a, 1, b).[4] The structure (R, ⊕) is a loop with identity element 0.
Define a ⊗ b = T(a, b, 0). The set R₀ = R ∖ {0} is closed under this multiplication. The structure (R₀, ⊗) is also a loop, with identity element 1.
A planar ternary ring (R, T) is said to be linear if T(a, b, c) = (a ⊗ b) ⊕ c for all a, b, c ∈ R.
For example, the planar ternary ring associated to a quasifield is (by construction) linear.
Given a planar ternary ring (R, T), one can construct a projective plane with point set P and line set L as follows:[5][6] (Note that ∞ is an extra symbol not in R.)
Let
P = {(a, b) | a, b ∈ R} ∪ {(a) | a ∈ R} ∪ {(∞)} and L = {[a, b] | a, b ∈ R} ∪ {[a] | a ∈ R} ∪ {[∞]}.
Then define, for all a, b, c, d ∈ R, the incidence relation I in this way:
(a, b) I [c, d] if and only if b = T(a, c, d); (a, b) I [c] if and only if a = c; (a) I [c, d] if and only if a = c; (a) I [∞] for all a; (∞) I [c] for all c; (∞) I [∞]; and no other incidences hold.
Every projective plane can be constructed in this way, starting with an appropriate planar ternary ring. However, two nonisomorphic planar ternary rings can lead to the construction of isomorphic projective planes.
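The construction can be carried out concretely for the linear PTR T(a, b, c) = ab + c over GF(3). The encoding of points (tuples, with the string "inf" standing for the symbol ∞) is my own; the sketch builds each line as the set of points incident with it and checks the projective-plane axiom that two distinct points lie on exactly one line.

```python
from itertools import combinations, product

q = 3
R = range(q)
INF = "inf"                     # my encoding of the extra symbol ∞

def T(a, b, c):
    return (a * b + c) % q      # the linear PTR of the field GF(3)

points = [(x, y) for x in R for y in R] + [(m,) for m in R] + [INF]

lines = []
for m, k in product(R, R):      # lines [m, k]: y = T(x, m, k), plus the point (m)
    lines.append(frozenset([(x, T(x, m, k)) for x in R] + [(m,)]))
for k in R:                     # vertical lines [k], plus (∞)
    lines.append(frozenset([(k, y) for y in R] + [INF]))
lines.append(frozenset([(m,) for m in R] + [INF]))   # the line [∞]

# A projective plane of order q: q^2 + q + 1 points and lines, q + 1 points
# per line, and every pair of distinct points on exactly one line.
assert len(points) == len(lines) == q * q + q + 1    # 13 for q = 3
assert all(len(l) == q + 1 for l in lines)
assert all(sum(1 for l in lines if p in l and r in l) == 1
           for p, r in combinations(points, 2))
```

Replacing T with a nonlinear PTR of the same order would produce a (possibly non-Desarguesian) plane by exactly the same code path.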
Conversely, given any projective plane π, by choosing four points, labelled o, e, u, and v, no three of which lie on the same line, coordinates can be introduced in π so that these special points are given the coordinates: o = (0, 0), e = (1, 1), v = (∞) and u = (0).[7] The ternary operation is now defined on the coordinate symbols (except ∞) by y = T(x, a, b) if and only if the point (x, y) lies on the line which joins (a) with (0, b). The axioms defining a projective plane are used to show that this gives a planar ternary ring.
Linearity of the PTR is equivalent to a geometric condition holding in the associated projective plane.[8]
The connection between planar ternary rings (PTRs) and two-dimensional geometries, specifically projective and affine geometries, is best described by examining the affine case first. In affine geometry, points on a plane are described using Cartesian coordinates, a method that is applicable even in non-Desarguesian geometries; there, coordinate components can always be shown to obey the structure of a PTR. By contrast, homogeneous coordinates, typically used in projective geometry, are unavailable in non-Desarguesian contexts. Thus, the simplest analytic way to construct a projective plane is to start with an affine plane and extend it by adding a "line at infinity"; this bypasses homogeneous coordinates.
In an affine plane, when the plane is Desarguesian, lines can be represented in slope-intercept form y = mx + c. This representation extends to non-Desarguesian planes through the ternary operation of a PTR, allowing a line to be expressed as y = T(x, m, c). Lines parallel to the y-axis are expressed by x = c.
We now show how to derive the analytic representation of a general projective plane given at the start of this section. To do so, we pass from the affine plane, represented as R^2, to a representation of the projective plane RP^2, by adding a line at infinity. Formally, the projective plane is described as RP^2 := R^2 ∪ RP^1, where R^2 represents an affine plane in Cartesian coordinates and includes all finite points, while RP^1 denotes the line at infinity. Similarly, RP^1 is expressed as RP^1 := R^1 ∪ RP^0. Here, R^1 is an affine line which we give its own Cartesian coordinate system, and RP^0 consists of a single point not lying on that affine line, which we represent using the symbol ∞.
PTRs which satisfy additional algebraic conditions are given other names. These names are not uniformly applied in the literature. The following listing of names and properties is taken from Dembowski (1968, p. 129).
A linear PTR whose additive loop is associative (and thus a group) is called a cartesian group. In a cartesian group, the mappings
x → −x ⊗ a + x ⊗ b, and x → a ⊗ x − b ⊗ x,
must be permutations whenever a ≠ b. Since cartesian groups are groups under addition, we revert to using a simple "+" for the additive operation.
A quasifield is a cartesian group satisfying the right distributive law: (x + y) ⊗ z = x ⊗ z + y ⊗ z.
Addition in any quasifield is commutative.
A semifield is a quasifield which also satisfies the left distributive law: x ⊗ (y + z) = x ⊗ y + x ⊗ z.
A planar nearfield is a quasifield whose multiplicative loop is associative (and hence a group). Not all nearfields are planar nearfields.
|
https://en.wikipedia.org/wiki/Planar_ternary_ring
|
In mathematics, especially abstract algebra, loop theory and quasigroup theory are active research areas with many open problems. As in other areas of mathematics, such problems are often made public at professional conferences and meetings. Many of the problems posed here first appeared in the Loops (Prague) conferences and the Mile High (Denver) conferences.
Let L be a Moufang loop with normal abelian subgroup (associative subloop) M of odd order such that L/M is a cyclic group of order bigger than 3. (i) Is L a group? (ii) If the orders of M and L/M are relatively prime, is L a group?
Conjecture: Any finite commutative Moufang loop of period 3 can be embedded into a commutative alternative algebra.
Conjecture: Let L be a finite Moufang loop and Φ(L) the intersection of all maximal subloops of L. Then Φ(L) is a normal nilpotent subloop of L.
For a group G, define M(G, 2) on G × C₂ by (g, 0)(h, 0) = (gh, 0), (g, 0)(h, 1) = (hg, 1), (g, 1)(h, 0) = (gh^{−1}, 1), (g, 1)(h, 1) = (h^{−1}g, 0). Find a minimal presentation for the Moufang loop M(G, 2) with respect to a presentation for G.
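The M(G, 2) construction (due to Chein) is small enough to verify by brute force. The sketch below, with my own encoding of S3 as permutation tuples, implements the four multiplication rules just quoted for G = S3 and checks that the result satisfies a Moufang identity without being associative, giving the smallest nonassociative Moufang loop, of order 12.

```python
from itertools import permutations, product

G = list(permutations(range(3)))            # S3 as permutation tuples

def gmul(g, h):
    return tuple(g[h[i]] for i in range(3)) # composition g ∘ h

def ginv(g):
    inv = [0, 0, 0]
    for i, gi in enumerate(g):
        inv[gi] = i
    return tuple(inv)

# The four multiplication rules of M(G, 2) on pairs (g, s) with s in {0, 1}.
def mul(x, y):
    (g, s), (h, t) = x, y
    if (s, t) == (0, 0): return (gmul(g, h), 0)
    if (s, t) == (0, 1): return (gmul(h, g), 1)
    if (s, t) == (1, 0): return (gmul(g, ginv(h)), 1)
    return (gmul(ginv(h), g), 0)

L = [(g, s) for g in G for s in (0, 1)]     # 12 elements

# One of the Moufang identities: ((x*y)*x)*z = x*(y*(x*z))
moufang = all(mul(mul(mul(x, y), x), z) == mul(x, mul(y, mul(x, z)))
              for x, y, z in product(L, repeat=3))
associative = all(mul(mul(x, y), z) == mul(x, mul(y, z))
                  for x, y, z in product(L, repeat=3))
print(moufang, associative)   # True False: a Moufang loop, not a group
```

Running the same check with an abelian G (e.g. C₃) makes both flags come out true, reflecting that M(G, 2) is associative exactly when G is abelian.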
Let p and q be distinct odd primes. If q is not congruent to 1 modulo p, are all Moufang loops of order p²q³ groups? What about pq⁴?
Is there a Moufang loop of odd order with trivial nucleus?
Find presentations for all nonassociative finite simple Moufang loops in the variety of Moufang loops.
Conjecture: Let M be a finite Moufang loop of exponent n with m generators. Then there exists a function f(n,m) such that |M| < f(n,m).
Conjecture: Let L be a finitely generated Moufang loop of exponent 4 or 6. Then L is finite.
Let MFn be the free Moufang loop with n generators.
Conjecture: MF3 is torsion free but MFn with n > 4 is not.
For a left Bol loop Q, find some relation between the nilpotency degree of the left multiplication group of Q and the structure of Q.
Let (Q,∗), (Q,+) be two quasigroups defined on the same underlying set Q. The distance d(∗,+) is the number of pairs (a,b) in Q × Q such that a∗b ≠ a+b. Call a class of finite quasigroups quadratic if there is a positive real number α such that any two quasigroups (Q,∗), (Q,+) of order n from the class satisfying d(∗,+) < αn² are isomorphic. Are Moufang loops quadratic? Are Bol loops quadratic?
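The distance d(∗,+) is straightforward to compute for small quasigroups. The sketch below (illustrative only, using Z5 under addition and under subtraction as the two quasigroup operations) counts the disagreeing pairs:

```python
# d(*,+): the number of pairs (a, b) on which the two operations disagree
def quasigroup_distance(n, op1, op2):
    return sum(op1(a, b) != op2(a, b) for a in range(n) for b in range(n))

# example: Z5 under addition vs. Z5 under subtraction (both are quasigroups)
d = quasigroup_distance(5, lambda a, b: (a + b) % 5, lambda a, b: (a - b) % 5)
# a+b ≡ a−b (mod 5) exactly when 2b ≡ 0, i.e. b = 0, so d = 5·4 = 20
```

Here d = 20 = (4/5)·n², so for any α < 4/5 this pair does not violate a quadratic bound, even though the two quasigroups are isomorphic.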
Determine the Campbell–Hausdorff series for analytic Bol loops.
A loop is universally flexible if every one of its loop isotopes is flexible, that is, satisfies (xy)x = x(yx). A loop is middle Bol if every one of its loop isotopes has the antiautomorphic inverse property, that is, satisfies (xy)⁻¹ = y⁻¹x⁻¹. Is there a finite, universally flexible loop that is not middle Bol?
Is there a finite simple nonassociative Bol loop with nontrivial conjugacy classes?
Let Q be a loop whose inner mapping group is nilpotent. Is Q nilpotent? Is Q solvable?
Let Q be a loop with abelian inner mapping group. Is Q nilpotent? If so, is there a bound on the nilpotency class of Q? In particular, can the nilpotency class of Q be higher than 3?
Determine the number of nilpotent loops of order 24 up to isomorphism.
Construct a finite nilpotent loop with no finite basis for its laws.
Are there infinite simple paramedial quasigroups?
A variety V of quasigroups is isotopically universal if every quasigroup is isotopic to a member of V. Is the variety of loops a minimal isotopically universal variety? Does every isotopically universal variety contain the variety of loops or its parastrophes?
Does there exist a quasigroup Q of order q = 14, 18, 26 or 42 such that the operation * defined on Q by x*y = y − xy is a quasigroup operation?
Construct a latin square L of order n as follows: Let G = Kn,n be the complete bipartite graph with distinct weights on its n² edges. Let M1 be the cheapest matching in G, M2 the cheapest matching in G with M1 removed, and so on. Each matching Mi determines a permutation pi of 1, ..., n. Let L be obtained from G by placing the permutation pi into row i of L. Does this procedure result in a uniform distribution on the space of Latin squares of order n?
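For small n the procedure can be simulated directly, using brute force over permutations in place of a real minimum-matching algorithm (an illustrative sketch, not an efficient implementation):

```python
import random
from itertools import permutations

def greedy_latin_square(n, seed=0):
    """Build a Latin square by repeatedly taking the cheapest unused matching of K(n,n)."""
    rng = random.Random(seed)
    w = [[rng.random() for _ in range(n)] for _ in range(n)]  # distinct weights (w.h.p.)
    used = [[False] * n for _ in range(n)]
    L = []
    for _ in range(n):
        # cheapest perfect matching among edges not in any earlier matching
        best = min((p for p in permutations(range(n))
                    if not any(used[i][p[i]] for i in range(n))),
                   key=lambda p: sum(w[i][p[i]] for i in range(n)))
        L.append(list(best))          # row i of L is the permutation p_i
        for i in range(n):
            used[i][best[i]] = True   # remove the matching's edges
    return L

square = greedy_latin_square(4)
```

Disjointness of the matchings makes every column a permutation, so the result is always a Latin square; the open question is only about the distribution over all Latin squares.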
For a loop Q, let Mlt(Q) denote the multiplication group of Q, that is, the group generated by all left and right translations. Is |Mlt(Q)| < f(|Q|) for some variety of loops and for some polynomial f?
Does every finite alternative loop, that is, every loop satisfying x(xy) = (xx)y and x(yy) = (xy)y, have 2-sided inverses?
Find a nonassociative finite simple automorphic loop, if such a loop exists.
We say that a variety V of loops satisfies the Moufang theorem if for every loop Q in V the following implication holds: for every x, y, z in Q, if x(yz) = (xy)z then the subloop generated by x, y, z is a group. Is every variety that satisfies the Moufang theorem contained in the variety of Moufang loops?
A loop is Osborn if it satisfies the identity x((yz)x) = (xλ\y)(zx). Is every Osborn loop universal, that is, is every isotope of an Osborn loop Osborn? If not, is there a nice identity characterizing universal Osborn loops?
The following problems were posed as open at various conferences and have since been solved.
Is there a Buchsteiner loop that is not conjugacy closed? Is there a finite simple Buchsteiner loop that is not conjugacy closed?
Classify nonassociative Moufang loops of order 64.
Construct a conjugacy closed loop whose left multiplication group is not isomorphic to its right multiplication group.
Is there a finite simple Bol loop that is not Moufang?
Is there a finite non-Moufang left Bol loop with trivial right nucleus?
Does every finite Moufang loop have the strong Lagrange property?
Is there a Moufang loop whose commutant is not normal?
Is the class of cores of Bol loops a quasivariety?
Let I(n) be the number of isomorphism classes of quasigroups of order n. Is I(n) odd for every n?
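For very small orders the count I(n) can be verified by exhaustive enumeration. The sketch below (illustrative only) enumerates all 12 Latin squares of order 3 and confirms that I(3) = 5, which is odd:

```python
from itertools import permutations, product

n = 3
rows = list(permutations(range(n)))
# all Latin squares of order 3, i.e. all quasigroup multiplication tables on {0,1,2}
squares = [s for s in product(rows, repeat=n)
           if all(len({s[i][j] for i in range(n)}) == n for j in range(n))]

def relabel(s, p):
    # transport the operation along the bijection p, so that p(x)∘p(y) = p(x*y)
    t = [[0] * n for _ in range(n)]
    for x in range(n):
        for y in range(n):
            t[p[x]][p[y]] = p[s[x][y]]
    return tuple(tuple(r) for r in t)

seen, reps = set(), []
for s in squares:
    if s not in seen:               # s is not isomorphic to any earlier representative
        reps.append(s)
        seen |= {relabel(s, p) for p in permutations(range(n))}
```

The same enumeration works in principle for n = 4 (576 squares) but grows super-exponentially beyond that.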
Classify the finite simple paramedial quasigroups.
|
https://en.wikipedia.org/wiki/Problems_in_loop_theory_and_quasigroup_theory
|
PAdES (PDF Advanced Electronic Signatures) is a set of restrictions and extensions to PDF and ISO 32000-1[1] making it suitable for advanced electronic signatures (AdES). It is published by ETSI as EN 319 142.[2]
While PDF and ISO 32000-1 provide a framework for digitally signing documents, PAdES specifies precise profiles making such signatures compliant with ETSI standards for digital signatures (Advanced Electronic Signature, AES, and Qualified Electronic Signature, QES). ETSI (the European Telecommunications Standards Institute) issues technical standards by delegation under the EU eIDAS Regulation (the European Union Regulation on electronic identification and trust services for electronic transactions in the internal market). The eIDAS Regulation enhances and repeals the Electronic Signatures Directive 1999/93/EC.[3][4] eIDAS has been legally binding in all EU member states since July 2014 and, unlike the Directive it replaces, is as a Regulation directly applicable without implementing or interpreting legislation. Any electronic signature recognised under eIDAS (including ‘click accept’) cannot be denied validity and effectiveness by reason of being electronic. A digital signature, that is, an electronic signature implemented with digital certificates in compliance with the advanced or qualified levels described in eIDAS (and their technical implementations developed by ETSI), can support PAdES. AES and QES have a higher evidentiary value than simple or ‘standard’ electronic signatures, and a QES is recognised as having the same legal value as a handwritten signature.[3]
PAdES standards travel in the same direction and have the same aims as digital signatures (AES and QES). This means they can be easily verified in any PDF reader.[5]
PAdES has 4 levels of verification for the digital certificate, from the simplest and most basic (b-b, indicating that a signature was executed with a certificate that was valid on a given date) to the most complex (b-LTV), which allows electronically signed documents to remain valid for long periods (long-term validity) even if the underlying cryptographic algorithms or the other certificates have expired.
As PAdES recognizes that digitally signed documents may be used or archived for many years, and may need to be evidenced in court, expired signer certificates (after a few months at the b-b level) can be checked by contacting the certification authority. At the b-lt or b-LTV levels (the most complex), certificates remain verifiable for a very long term, so signatures can be validated even many decades in the future, in spite of technological and other advances. Embedding in the document itself the validation data needed to confirm that the signature was valid is a concept known as long-term validation (LTV).[2]
The PAdES standard, ETSI European Standard (EN) 319 142, introduces a number of adaptations and extensions to PDF to satisfy the Directive's requirements. ETSI will feed these European-specific elements back into ISO for inclusion in the next release of the PDF standard, ISO 32000-2.
An electronic signature is a paperless way to sign a document using a unique credential associated with a given person that is logically attached to or associated with the document, carrying an authority equivalent to a handwritten signature. It can be used to authenticate the signer as well as to detect any changes in the document made after it was signed. Electronic signatures are recognized as a catalyst to electronic commerce, notably Internet transactions. Availability of electronic signature techniques has already been a major stimulus to eBusiness and eGovernment.
Digital signatures are a secure and legally binding means to implement electronic signatures through three cryptographic algorithms: key generation, signing, and signature verification.[5]
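As a toy illustration of those three algorithms, here is a deliberately insecure textbook-RSA sketch in Python. The primes, exponent, and function names are illustrative only; nothing here is PAdES-specific, and real signatures use vetted cryptographic libraries:

```python
import hashlib

def keygen():
    """Key generation: a toy RSA key pair (NOT secure; tiny primes)."""
    p, q = 61, 53
    n, phi = p * q, (p - 1) * (q - 1)
    e = 17
    d = pow(e, -1, phi)            # modular inverse (Python 3.8+)
    return (n, e), (n, d)          # (public key, private key)

def sign(private, message):
    """Signing: encrypt a hash of the message with the private key."""
    n, d = private
    h = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(h, d, n)

def verify(public, message, signature):
    """Verification: recover the hash with the public key and compare."""
    n, e = public
    h = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(signature, e, n) == h

public, private = keygen()
sig = sign(private, b"signed document")
```

Because the hash is bound to the document, any later change to the content (or to the signature itself) makes verification fail, which is the tamper-detection property described above.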
For PDF documents, the signature data is incorporated directly within the signed PDF document, much as an ink signature becomes an integral part of a paper document, allowing the complete self-contained PDF file to be copied, stored and distributed as a simple electronic file. The signature can also have a visual representation as a form field, just as it might on a paper document. A significant advantage of PAdES is that it is being deployed by means of widely available PDF software: it does not require development or customization of specialized software.[2]
PAdES is complementary to two other standards for implementing electronic signatures through cryptographically secured digital signatures in compliance with the eIDAS regulation:[4] Cryptographic Message Syntax Advanced Electronic Signatures (CAdES) and XML Advanced Electronic Signatures (XAdES).[6] Like PAdES, they are legally binding in the European Union, and they are suited for applications that do not involve human-readable documents.
The EU recognizes three differenteIDAS-compliant implementations of advanced electronic signatures through digital signatures: PAdES,XAdESandCAdES.[7]
An electronic signature can carry legal effect and be used as evidence in legal proceedings. Aqualified electronic signatureshall have the equivalent legal effect of a handwritten signature. If the qualified certificate was issued in one member state it shall be recognized as aqualified electronic signaturein all other member states.[8]
The PAdES ETSI technical specification contains 2 parts:
The PAdES standards can be downloaded from the ETSI download page.
ETSI EN 319 142 replaced[9] previous standards such as ETSI TS 103 172 and ETSI TS 102 778.
|
https://en.wikipedia.org/wiki/Long-term_validation
|
The UNCITRAL Model Law on Electronic Transferable Records (“MLETR”) is a uniform model law adopted by the United Nations Commission on International Trade Law (UNCITRAL) in 2017.[1] Its scope is to allow the use of transferable documents and instruments in electronic form. Transferable documents and instruments typically include bills of lading, warehouse receipts, bills of exchange, promissory notes and cheques. National law qualifies a document or instrument as transferable.
Transferable documents and instruments allow the holder to request delivery of goods and payment of a sum of money based on possession of the document or instrument. However, it has been difficult to reproduce the notion of possession, which has to do with control over tangible goods, in an electronic environment. The MLETR addresses that legal gap.
Under the MLETR each dematerialised document does not need to be managed in a separate information system: the same system could manage multiple documents, or even all documents related to a business transaction. This may allow merging logistics and supply chain documents, or even commercial and regulatory documents, into a single electronic transferable record.[2]
A study on the impact of the adoption of a law aligned to the MLETR in the United Kingdom has quantified the benefits of such adoption. Besides economic benefits, which include up to £224 billion in efficiency savings, adoption of such legislation may reduce the number of days needed for processing trade documents by up to 75%.[3]
The impact assessment of the Electronic Trade Documents Bill (see below) prepared by the UK Government estimates economic benefits over the next 10 years ranging from a low estimate of £249.8 million to a high estimate of £2,049.7 million, with a best estimate of £1,137.0 million.[4]
At the micro-economic level, a study describing 16 case studies of application of the UK Electronic Trade Documents Act (which is aligned with MLETR) and associated economic benefits is available.[5]
The MLETR is divided into four chapters: general provisions; provisions on functional equivalence; use of electronic transferable records; and cross-border recognition of electronic transferable records.
The MLETR is built on the same fundamental principles of other UNCITRAL texts on electronic commerce, namely functional equivalence (articles 8-11 MLETR), technology neutrality and non-discrimination against the use of electronic means (article 7 MLETR).
The MLETR is also model-neutral and may be implemented by using registries, tokens or distributed ledgers.[6] The Explanatory Note to the MLETR provides some guidance on the use of distributed ledgers in implementing the MLETR and is therefore considered an early example of legislative text facilitating the use of blockchain.[7][8]
Article 2 MLETR defines the notion of electronic transferable record as an electronic record that complies with the requirements of article 10 MLETR. It also defines "transferable document or instrument" as a document that entitles its holder to the payment of a sum of money or the delivery of goods.
Article 6 MLETR legally recognizes the possibility of including metadata in electronic transferable records. It is therefore considered a smart contract enabler.[9]
Articles 8 and 9 MLETR provide functional equivalence rules, respectively, for the paper-based notions of "writing" and "signature". Those articles do not need to be enacted if national law, for instance an electronic transactions act, already contains those notions and they are made applicable by reference to electronic transferable records.
Article 10 MLETR establishes the conditions for functional equivalence between paper-based transferable documents and instruments, on the one hand, and electronic transferable records, on the other hand. Those conditions are:
1) the electronic transferable record shall contain all information required for the corresponding paper-based transferable document or instrument;
2) a reliable method shall be used:
a) to identify the electronic transferable record as such;
b) to render the electronic transferable record subject to control throughout its life-cycle;
c) to retain the integrity of the electronic transferable record throughout its life-cycle.
Article 11 MLETR establishes the functional equivalence rule for possession of a transferable document or instrument. The conditions to satisfy that requirement are the use of a reliable method to establish exclusive control of the electronic transferable record and the identification of the person in control.
Articles 10 and 11 MLETR are based on the notions of "control" and "singularity" of the electronic transferable record.[10]
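The MLETR is model-neutral and prescribes no data structures, but the control and singularity requirements of articles 10 and 11 can be illustrated with a minimal registry sketch. Everything below (class names, fields, methods) is a hypothetical design, not any real implementation:

```python
import hashlib
from dataclasses import dataclass, field

@dataclass
class ElectronicTransferableRecord:
    """Illustrative model of an electronic transferable record (hypothetical)."""
    record_id: str                 # identifies the record as such (art. 10(2)(a))
    content: str                   # required information of the paper document (art. 10(1))
    controller: str                # identified person in control (art. 11)
    history: list = field(default_factory=list)

    def integrity_hash(self):
        # integrity retained throughout the life-cycle (art. 10(2)(c))
        return hashlib.sha256(self.content.encode()).hexdigest()

class Registry:
    """A registry-model implementation sketch enforcing singularity and control."""
    def __init__(self):
        self.records = {}

    def issue(self, record_id, content, controller):
        if record_id in self.records:
            raise ValueError("singularity: the record must exist only once")
        r = ElectronicTransferableRecord(record_id, content, controller)
        r.history.append(("issued", controller, r.integrity_hash()))
        self.records[record_id] = r
        return r

    def transfer(self, record_id, current, new):
        r = self.records[record_id]
        if r.controller != current:     # exclusive control (art. 11)
            raise PermissionError("only the person in exclusive control may transfer")
        r.controller = new
        r.history.append(("transferred", new, r.integrity_hash()))

registry = Registry()
registry.issue("BL-001", "bill of lading for 100 crates", "seller")
registry.transfer("BL-001", "seller", "buyer")
```

A token or distributed-ledger implementation would enforce the same three properties by different technical means.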
In general, all events that may occur in relation to a transferable document or instrument may also occur in relation to an electronic transferable record.[11] Articles 15 and 16 MLETR reaffirm that general rule with respect to, respectively, endorsement and amendment of an electronic transferable record. An amendment should be identified as such, since otherwise the electronic form may not make the amendment easily recognisable.
Article 12 MLETR contains a non-exclusive list of elements relevant to assess the reliability of the method used. It also contains a safety clause indicating that a method is reliable in fact if it has fulfilled the function it pursued, alone or together with other evidence.
Article 19 MLETR contains a provision on geographic non-discrimination of the electronic transferable record. The provision does not affect private international law rules.
The MLETR has been enacted in Bahrain,[12]in Belize,[13]in France,[14]in Kiribati,[15]in Paraguay,[16]in Papua New Guinea,[17]in Singapore,[18]in Timor-Leste,[19]in the United Kingdom,[20]and in the Abu Dhabi Global Market (ADGM), an International Financial Centre located in Abu Dhabi, United Arab Emirates.[21]
The adoption of the MLETR in Bahrain has taken place in conjunction with a review of the Electronic Transactions Act, which was originally passed in 2002 and is based on the UNCITRAL Model Law on Electronic Commerce.[22]
Singapore had conducted two public consultations prior to enactment, the first in March 2017[23]and the second in summer 2019, in the broader framework of the review of the Electronic Transactions Act.[24]
In Thailand, the Cabinet has approved the inclusion of the MLETR in the Electronic Transactions Act.[25]Czechia has conducted a public consultation on MLETR adoption.[26]
The International Chamber of Commerce (ICC) has been actively promoting adoption of the MLETR. Initially, this was done to facilitate the use of electronic bills of lading, as recommended in a report by the law firm Clyde & Co and the ICC Banking Commission.[27] MLETR adoption is now being actively promoted by the ICC Digital Standards Initiative (DSI), including as a way to overcome the effects of the COVID-19 pandemic and to increase supply chain resilience. ICC DSI also offers guidance on MLETR implementation, including technical standards and business practices.[28]
On 28 April 2021 the UK, Canada, France, Germany, Italy, Japan, the US and the European Union adopted a G7 Digital and Technology Ministerial Declaration[29]to develop a framework for the use of electronic transferable records that promotes the adoption of legal frameworks compatible with the principles of the MLETR.
On 11 May 2022, the G7 Digital Ministers adopted a Ministerial Declaration[30]endorsing the “Principles for domestic legal frameworks to promote the use of electronic transferable records” contained in Annex 2 to the Declaration.[31]
The G7 declarations have prompted the consideration of MLETR adoption in G7 member States, with significant impact:
With respect to use in business practice, one provider has started offering issuance of electronic bills of lading based on Singapore law incorporating the MLETR, approved by the International Group of P&I Clubs, as of 1 July 2021.[36] These electronic bills of lading issued under the law of Singapore and the MLETR have been used for the first time to cover shipments from Australia to China.[37]
In Bahrain, an electronic check system has been launched based on MLETR provisions incorporated in Bahraini law. It allows issuing, endorsing and presenting electronic checks on mobile phones and other devices.[38]
|
https://en.wikipedia.org/wiki/UNCITRAL_Model_Law_on_Electronic_Signatures
|
In physical security and information security, access control (AC) is the action of deciding whether a subject should be granted or denied access to an object (for example, a place or a resource). The act of accessing may mean consuming, entering, or using. It is often used interchangeably with authorization, although the authorization may be granted well in advance of the access control decision.[1]
Access control on digital platforms is also termed admission control. The protection of external databases is essential to preserve digital security.[2]
Access control is considered to be a significant aspect of privacy that should be further studied. Access control policy (also access policy) is part of an organization's security policy. In order to verify the access control policy, organizations use an access control model.[3] General security policies require designing or selecting appropriate security controls to satisfy an organization's risk appetite; access policies similarly require the organization to design or select access controls.
Broken access control is often listed as the number one risk in web applications.[4]On the basis of the "principle of least privilege", consumers should only be authorized to access whatever they need to do their jobs, and nothing more.[5]
Geographical access control may be enforced by personnel (e.g. border guard, bouncer, ticket checker), or with a device such as a turnstile. There may be fences to avoid circumventing this access control. An alternative to access control in the strict sense (physically controlling access itself) is a system of checking authorized presence; see e.g. Ticket controller (transportation). A variant is exit control, e.g. of a shop (checkout) or a country.[6]
The term access control refers to the practice of restricting entrance to a property, a building, or a room to authorized persons. Physical access control can be achieved by a human (a guard, bouncer, or receptionist), through mechanical means such as locks and keys, or through technological means such as access control systems like the mantrap. Within these environments, physical key management may also be employed as a means of further managing and monitoring access to mechanically keyed areas or access to certain small assets.[6]
Physical access control is a matter of who, where, and when. An access control system determines who is allowed to enter or exit, where they are allowed to exit or enter, and when they are allowed to enter or exit. Historically, this was partially accomplished through keys and locks. When a door is locked, only someone with a key can enter through the door, depending on how the lock is configured. Mechanical locks and keys do not allow restriction of the key holder to specific times or dates. Mechanical locks and keys do not provide records of the key used on any specific door, and the keys can be easily copied or transferred to an unauthorized person. When a mechanical key is lost or the key holder is no longer authorized to use the protected area, the locks must be re-keyed.[7]
Electronic access control (EAC) uses computers to solve the limitations of mechanical locks and keys. It is particularly difficult to guarantee identification (a critical component of authentication) with mechanical locks and keys. A wide range of credentials can be used to replace mechanical keys, allowing for complete authentication, authorization, and accounting. The electronic access control system grants access based on the credential presented. When access is granted, the resource is unlocked for a predetermined time and the transaction is recorded. When access is refused, the resource remains locked and the attempted access is recorded. The system will also monitor the resource and alarm if the resource is forcefully unlocked or held open too long after being unlocked.[6]
When a credential is presented to a reader, the reader sends the credential's information, usually a number, to a control panel, a highly reliable processor. The control panel compares the credential's number to an access control list, grants or denies the presented request, and sends a transaction log to a database. When access is denied based on the access control list, the door remains locked. If there is a match between the credential and the access control list, the control panel operates a relay that in turn unlocks the resource. The control panel also ignores an opening signal to prevent an alarm. Often the reader provides feedback, such as a flashing red LED for an access denied and a flashing green LED for an access granted.[8]
The above description illustrates a single-factor transaction. Credentials can be passed around, thus subverting the access control list. For example, Alice has access rights to the server room, but Bob does not. Alice either gives Bob her credential, or Bob takes it; he now has access to the server room. To prevent this, two-factor authentication can be used. In a two-factor transaction, the presented credential and a second factor are needed for access to be granted; another factor can be a PIN, a second credential, operator intervention, or a biometric input.[8]
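A minimal sketch of such a control-panel decision, assuming a hypothetical access control list and PIN store (all names and data are illustrative, not any real product's API):

```python
import hashlib
import hmac

# hypothetical data: door -> authorized credential IDs, and hashed PINs
ACL = {"server_room": {"alice"}}
PIN_HASHES = {"alice": hashlib.sha256(b"4921").hexdigest()}

access_log = []  # the transaction log kept by the control panel

def check_access(door, credential_id, pin=None):
    """First factor: credential on the door's ACL; second factor: a PIN."""
    granted = credential_id in ACL.get(door, set())
    if granted and credential_id in PIN_HASHES:
        # two-factor: deny unless the correct PIN accompanies the credential
        granted = pin is not None and hmac.compare_digest(
            hashlib.sha256(pin.encode()).hexdigest(),
            PIN_HASHES[credential_id])
    access_log.append((door, credential_id, granted))  # record the attempt
    return granted

ok = check_access("server_room", "alice", pin="4921")  # both factors present
```

Requiring the second factor means a stolen or lent credential alone no longer opens the door, which is the point of the two-factor transaction described above.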
There are three types (factors) of authenticating information: something the user knows (such as a password or PIN), something the user has (such as an access card), and something the user is (such as a biometric characteristic).[9]
Passwords are a common means of verifying a user's identity before access is given to information systems. In addition, a fourth factor of authentication is now recognized: someone you know, whereby another person who knows you can provide a human element of authentication in situations where systems have been set up to allow for such scenarios. For example, a user may have their password, but have forgotten their smart card. In such a scenario, if the user is known to designated cohorts, the cohorts may provide their smart card and password, in combination with the extant factor of the user in question, and thus provide two factors for the user with the missing credential, giving three factors overall to allow access.[citation needed]
A credential is a physical/tangible object, a piece of knowledge, or a facet of a person's physical being that enables an individual access to a given physical facility or computer-based information system. Typically, credentials can be something a person knows (such as a number or PIN), something they have (such as anaccess badge), something they are (such as a biometric feature), something they do (measurable behavioural patterns), or some combination of these items. This is known asmulti-factor authentication. The typical credential is an access card or key-fob, and newer software can also turn users' smartphones into access devices.[10]
There are many card technologies including magnetic stripe, bar code,Wiegand, 125 kHz proximity, 26-bit card-swipe, contact smart cards, andcontactless smart cards. Also available are key-fobs, which are more compact than ID cards, and attach to a key ring.Biometric technologiesinclude fingerprint,facial recognition,iris recognition,retinal scan, voice, and hand geometry. The built-in biometric technologies found on newer smartphones can also be used as credentials in conjunction with access software running on mobile devices.[11]In addition to older more traditional card access technologies, newer technologies such asnear-field communication(NFC),Bluetooth low energyorUltra-wideband(UWB) can also communicate user credentials to readers for system or building access.[12][13][14]
Components of an access control system include:
Access control decisions are made by comparing the credentials to an access control list. This look-up can be done by a host or server, by an access control panel, or by a reader. The development of access control systems has observed a steady push of the look-up out from a central host to the edge of the system, or the reader. The predominant topology circa 2009 is hub and spoke with a control panel as the hub, and the readers as the spokes. The look-up and control functions are performed by the control panel. The spokes communicate through a serial connection, usually RS-485. Some manufacturers are pushing the decision making to the edge by placing a controller at the door. The controllers are IP-enabled, and connect to a host and database using standard networks.[16]
Access control readers may be classified by the functions they are able to perform:[17]
Some readers may have additional features such as an LCD and function buttons for data collection purposes (i.e. clock-in/clock-out events for attendance reports), camera/speaker/microphone for intercom, and smart card read/write support.
1. Serial controllers. Controllers are connected to a host PC via a serial RS-485 communication line (or via a 20 mA current loop in some older systems). External RS-232/485 converters or internal RS-485 cards have to be installed, as standard PCs do not have RS-485 communication ports.[citation needed]
Advantages:[citation needed]
Disadvantages:[citation needed]
2. Serial main and sub-controllers.All door hardware is connected to sub-controllers (a.k.a. door controllers or door interfaces). Sub-controllers usually do not make access decisions, and instead forward all requests to the main controllers. Main controllers usually support from 16 to 32 sub-controllers.
Advantages:[citation needed]
Disadvantages:[citation needed]
3. Serial main controllers & intelligent readers. All door hardware is connected directly to intelligent or semi-intelligent readers. Readers usually do not make access decisions, and forward all requests to the main controller. Only if the connection to the main controller is unavailable will the readers use their internal database to make access decisions and record events. Semi-intelligent readers, which have no database and cannot function without the main controller, should be used only in areas that do not require high security. Main controllers usually support from 16 to 64 readers. All advantages and disadvantages are the same as the ones listed in the second paragraph.
4. Serial controllers with terminal servers. In spite of the rapid development and increasing use of computer networks, access control manufacturers remained conservative, and did not rush to introduce network-enabled products. When pressed for solutions with network connectivity, many chose the option requiring less effort: the addition of a terminal server, a device that converts serial data for transmission via LAN or WAN.
Advantages:[citation needed]
Disadvantages:[citation needed]
All the RS-485-related advantages and disadvantages also apply.
5. Network-enabled main controllers.The topology is nearly the same as described in the second and third paragraphs. The same advantages and disadvantages apply, but the on-board network interface offers a couple of valuable improvements. Transmission of configuration and user data to the main controllers is faster, and may be done in parallel. This makes the system more responsive, and does not interrupt normal operations. No special hardware is required in order to achieve redundant host PC setup: in the case that the primary host PC fails, the secondary host PC may start polling network controllers. The disadvantages introduced by terminal servers (listed in the fourth paragraph) are also eliminated.
6.IP controllers.Controllers are connected to a host PC via Ethernet LAN or WAN.
Advantages:[citation needed]
Disadvantages:[citation needed]
7. IP readers.Readers are connected to a host PC via Ethernet LAN or WAN.
Advantages:[citation needed]
Disadvantages:[citation needed]
The advantages and disadvantages of IP controllers apply to the IP readers as well.
The most common security risk of intrusion through an access control system is by simply following a legitimate user through a door, and this is referred to astailgating. Often the legitimate user will hold the door for the intruder. This risk can be minimized through security awareness training of the user population or more active means such as turnstiles. In very high-security applications this risk is minimized by using asally port, sometimes called a security vestibule or mantrap, where operator intervention is required presumably to assure valid identification.[18]
The second most common risk is from levering a door open. This is relatively difficult on properly secured doors with strikes or high holding force magnetic locks. Fully implemented access control systems include forced door monitoring alarms. These vary in effectiveness, usually failing from high false positive alarms, poor database configuration, or lack of active intrusion monitoring. Most newer access control systems incorporate some type of door prop alarm to inform system administrators of a door left open longer than a specified length of time.[19][20][21]
The third most common security risk is natural disasters. In order to mitigate this risk, everything from the structure of the building down to the quality of the network and computer equipment is vital. From an organizational perspective, the leadership will need to adopt and implement an All Hazards Plan, or Incident Response Plan. The highlights of any incident plan determined by the National Incident Management System must include pre-incident planning, during-incident actions, disaster recovery, and after-action review.[22]
Similar to levering is crashing through cheap partition walls. In shared tenant spaces, the divisional wall is a vulnerability. A vulnerability along the same lines is the breaking of sidelights.[citation needed]
Spoofing locking hardware is fairly simple and more elegant than levering. A strong magnet can operate the solenoid controlling bolts in electric locking hardware. Motor locks, more prevalent in Europe than in the US, are also susceptible to this attack using a doughnut-shaped magnet. It is also possible to manipulate the power to the lock either by removing or adding current, although most Access Control systems incorporate battery back-up systems and the locks are almost always located on the secure side of the door.[citation needed]
Access cards themselves have proven vulnerable to sophisticated attacks. Enterprising hackers have built portable readers that capture the card number from a user's proximity card. The hacker simply walks by the user, reads the card, and then presents the number to a reader securing the door. This is possible because card numbers are sent in the clear, with no encryption being used. To counter this, dual authentication methods, such as a card plus a PIN, should always be used.
Many access control credentials' unique serial numbers are programmed in sequential order during manufacturing. In what is known as a sequential attack, an intruder who has a credential once used in the system can simply increment or decrement the serial number until they find a credential that is currently authorized in the system. Ordering credentials with random unique serial numbers is recommended to counter this threat.[23]
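The recommended mitigation — issuing credentials with random rather than sequential serial numbers — can be sketched as follows (the 32-bit serial width and function name are assumptions for illustration):

```python
import secrets

def issue_credential(existing):
    """Issue a random 32-bit credential serial, retrying on the (rare)
    collision, so an attacker cannot discover a valid serial simply by
    incrementing or decrementing one they already hold."""
    while True:
        serial = secrets.randbelow(2**32)
        if serial not in existing:
            existing.add(serial)
            return serial
```

Using a cryptographic source such as `secrets` (rather than a plain PRNG) keeps consecutive serials statistically unrelated.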
Finally, most electric locking hardware still has mechanical keys as a fail-over. Mechanical key locks are vulnerable tobumping.[24]
Incomputer security, general access control includesauthentication,authorization, and audit. A more narrow definition of access control would cover only access approval, whereby the system makes a decision to grant or reject an access request from an already authenticated subject, based on what the subject is authorized to access. Authentication and access control are often combined into a single operation, so that access is approved based on successful authentication, or based on an anonymous access token. Authentication methods and tokens include passwords, biometric analysis, physical keys, electronic keys and devices, hidden paths, social barriers, and monitoring by humans and automated systems.
In any access-control model, the entities that can perform actions on the system are calledsubjects, and the entities representing resources to which access may need to be controlled are calledobjects(see alsoAccess Control Matrix). Subjects and objects should both be considered as software entities, rather than as human users: any human users can only have an effect on the system via the software entities that they control.[citation needed]
Although some systems equate subjects withuser IDs, so that all processes started by a user by default have the same authority, this level of control is not fine-grained enough to satisfy theprinciple of least privilege, and arguably is responsible for the prevalence ofmalwarein such systems (seecomputer insecurity).[citation needed]
In some models, for example theobject-capability model, any software entity can potentially act as both subject and object.[citation needed]
As of 2014, access-control models tend to fall into one of two classes: those based on capabilities and those based on access control lists (ACLs).
Both capability-based and ACL-based models have mechanisms to allow access rights to be granted to all members of agroupof subjects (often the group is itself modeled as a subject).[citation needed]
Access control systems provide the essential services ofauthorization,identification and authentication(I&A),access approval, andaccountabilitywhere:[25]
Access to accounts can be enforced through many types of controls.[26]
Intelecommunications, the termaccess controlis defined in U.S.Federal Standard 1037C[33]with the following meanings:
This definition depends on several other technical terms from Federal Standard 1037C.
Special public member methods – accessors (also known as getters) and mutator methods (often called setters) – are used to control changes to class variables in order to prevent unauthorized access and data corruption.
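A minimal Python illustration of an accessor/mutator pair; the `Account` class and its validation rule are hypothetical:

```python
class Account:
    """Accessor (getter) and mutator (setter) guarding an internal field."""

    def __init__(self, balance=0.0):
        self._balance = balance          # conventionally private

    @property
    def balance(self):                   # accessor: read-only view of the field
        return self._balance

    @balance.setter
    def balance(self, value):            # mutator: validates before writing
        if value < 0:
            raise ValueError("balance cannot be negative")
        self._balance = value
```

The setter rejects invalid writes, which is exactly the "prevent data corruption" role described above.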
Inpublic policy, access control to restrict access to systems ("authorization") or to track or monitor behavior within systems ("accountability") is an implementation feature of usingtrusted systemsforsecurityorsocial control.
|
https://en.wikipedia.org/wiki/Access_control
|
Authorization hold (also card authorization, preauthorization, or preauth) is a service offered by credit and debit card providers whereby the provider places a hold on the amount approved by the cardholder, reducing the balance of available funds until the merchant clears the transaction (also called settlement), after the transaction is completed or aborted, or because the hold expires.
In the case of debit cards, authorization holds can fall off the account, thus rendering the balance available again, anywhere from one to eightbusiness daysafter the transaction date, depending on the bank's policy. In the case of credit cards, holds may last as long as thirty days, depending on the issuing bank. Transactions may be withdrawn but in most cases, especially with smaller banks, will not show up as a deposit on the cardholder's bank statement but will instead be directly added to the available balance automatically due to it only being a “temporary charge”.
The usual reason for authorization holds is where there is a two-step process in the payment, consisting of an authorization and a settlement with a time lag in between. These were common with signature-based (non-PIN-based) credit and debit card transactions where a transaction was authorised but not settled until a few days later. It is also common in hotel, rental car services or onpay at the pumpat filling stations where the company wants to confirm a valid method of payment has been received prior to providing services or goods and knowing the amount that will be charged.
The major consequence for the user is that they cannot access that part of their account until the authorization hold expires without being finalized or is settled and the banking system transfers the funds. If the account balance is low this could result in anunavailable funds fee. The actual balance will not be reduced until the merchant submits the batch of transactions, and the banking system transfers the funds.
When a merchant swipes a customer's credit card, thecredit card terminalconnects to the merchant'sacquirer, or credit card processor, which verifies that the customer's account is valid and that sufficient funds are available to cover the transaction's cost. At this step, the funds are "held" and deducted from the customer's credit limit (or available bank balance, in the case of a debit card), but are not yet transferred to the merchant. At the time of the merchant's choosing, the merchant instructs the credit card machine to submit the finalized transactions to the acquirer in a "batch transfer," which begins the settlement process, where the funds are transferred from the customers' accounts to the merchant's accounts.
This process is not instantaneous: the transaction may not appear on the customer's statement or online account activity for one to two days, and it can take up to three days for funds to be deposited in the merchant's account. The preauthorization will be terminated if it is allowed to expire before the settlement occurs.
For example, if an individual has a credit limit of $100 and uses a credit card to make a purchase at a retail store for $30, then the available credit will immediately decrease to $70, because the merchant has obtained an authorization from the individual's bank by swiping the card through its credit card terminal.
If the billing statement were then sent out immediately, the actual charges would still be $0, because the merchant has not actually collected the funds in question. The actual charge is not put through until the merchant submits their batch of transactions and the banking system transfers the funds.
A debit card works differently. Like in the previous example, if one has a balance of $100 in the bank and used a debit card to make a purchase at a retail store for $30, the available balance will immediately decrease to $70, as a hold on the $30 is enacted because the merchant has obtained an authorization from the bank by swiping the card through the credit card terminal. However, the actual balance with the bank is still $100, because the merchant has not actually collected the funds in question.
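The distinction between the actual balance and the available balance can be sketched as follows, mirroring the $100 balance / $30 hold example above (the class and method names are hypothetical):

```python
# Sketch of the actual-vs-available distinction during an authorization hold.
class DebitAccount:
    def __init__(self, balance):
        self.balance = balance           # actual (posted) balance
        self.holds = {}                  # hold id -> authorized amount

    @property
    def available(self):
        # Available funds are reduced by every outstanding hold.
        return self.balance - sum(self.holds.values())

    def authorize(self, hold_id, amount):
        self.holds[hold_id] = amount     # funds held, not yet transferred

    def settle(self, hold_id, final_amount):
        self.holds.pop(hold_id, None)    # release the hold...
        self.balance -= final_amount     # ...and post the real charge
```

Until `settle` runs, the actual balance is unchanged even though the available balance has dropped, which is the behaviour the $30 example describes.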
Some services, like hotels and rental cars, have the card as authorized at the beginning of service, but the settlement will not be placed until the completion of the service. Many times, there are additional deposits required in addition to the anticipated cost of the service to cover any additional unexpected charges for damages, excessive mileage or fuel, or if hotel guests charge room service, movies, or phone calls to the account.
Upon the completion of the service, the vendor will then process the settlement of any final charges and submit them to the processor. It is not until then that the vendor will receive any funds, even though the customer's account would have shown the pending transaction from the authorization at the beginning of service. The settled charges will be deducted from the authorized funds and are not in addition to the authorized funds. Both car rental and hotel incidental holds can be kept by the merchant for up to 30 days from the date of original charge. In most cases, the merchant determines the amount of the hold, and the card issuing company determines the length of time it takes before the hold is released.
On occasion, negligence or computer error may make a merchant attempt to authorize a card twice, creating adouble holdon the cardholder's bank account. That often happens when a processor requires additional security verification such as acard security code,ZIP code, or address, and incorrect information is provided or is mistyped. Gasoline pumps often impose a double hold, one for a standard amount (such as $75) and another for the amount of purchase. Though the merchant will settle the transaction only once, the hold will temporarily lower the customer's available balance, potentially causing declines or, for a debit card, even overdrafts.[1]
Another issue that occurs on a regular basis with authorization holds is the transaction amount changing between the time the hold is placed on the account and when the transaction is settled. It most commonly occurs when the final debit amount is uncertain when the authorization is actually obtained.
For example, if an individual makes a fuel purchase by swiping a check card or credit card at the pump without using the PIN, the pump has no way of knowing how much fuel will be used. The pump typically authorizes a fixed amount, usually $1 but sometimes up to $100, to verify that the card is legitimate and that the customer has funds available. When the transaction is settled, it will actually post for the value of the purchase.[2]
Another example can be seen with a restaurant transaction. If an individual spends $40 at a meal, the server does not know how large a tip they will leave if they choose to leave one on the card. The restaurant's credit card terminal is typically set to authorize a larger amount, such as 20% above the cost of the meal, but the transaction will settle for the actual total including the actual tip written on the receipt. Some restaurants will authorize just the amount of the bill, but the transaction will settle higher with the tip included. This type of settlement (for an amount higher than the authorized amount) is only possible when the original authorization was executed with a specific merchant category code (hospitality). It is not possible in eCommerce or retail environments.
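The padding arithmetic can be sketched as follows; the 20% cushion matches the example above, and the function name is hypothetical:

```python
def restaurant_preauth(bill, cushion=0.20):
    """Amount a terminal might authorize for a restaurant bill, padding
    the check by a cushion (20% by default) to leave room for a tip."""
    return round(bill * (1 + cushion), 2)
```

For the $40 meal above, the terminal would authorize $48, while the transaction later settles for the actual total including the written tip.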
Acquiring banks sometimes forbid the practice of preauthorizing an amount including a tip, but will guarantee settlement of the amount authorized, plus 15 or 20%.[citation needed]
Other businesses that may settle transactions for different amounts than those originally authorized include hotels and car rental agencies. The final cost of these transactions can be extremely unpredictable from unforeseen extras such as room service charges, refuelling charges, or longer stays. Those companies typically place a hold on the customer's credit card at the beginning of the transaction for the estimated total, plus a percentage or a fixed cash amount (such as the estimated rental charges, plus 15% or $250). The establishments usually do not settle the transactions until after the customer has checked out or returned the rental car. Some hotels and car rental agencies do not acceptVisaorMasterCard-brandeddebit cards, as the authorization holds can expire before the transaction is settled. Additionally, some agencies use the requirement of a credit card as a tool to screen high-risk customers, as credit cards usually require a good credit history, and all that is needed for a debit card is achecking account.
Another example of a transaction that may settle for an amount different from the amount authorized is a transaction incurred in acurrencydifferent from the currency in which the card is denominated. The final, settled, transaction amount will be based on the exchange rate in effect on the settlement date. Since that rate is generally not known at the time of authorization, the banks will use an estimated amount based on the exchange rate at the time of authorization.
|
https://en.wikipedia.org/wiki/Authorization_hold
|
Anopen service interface definition(OSID) is a programmatic interface specification describing a service. These interfaces are specified by theOpen Knowledge Initiative(OKI) to implement aservice-oriented architecture(SOA) to achieveinteroperabilityamong applications across a varied base of underlying and changing technologies.
To preserve the investment in software engineering, program logic is separated from underlying technologies through the use of software interfaces each of which defines a contract between a service consumer and a service provider. This separation is the basis of any valid SOA. While some methods define the service interface boundary at a protocol or server level, OSIDs place the boundary at the application level to effectively insulate the consumer fromprotocols, server identities, and utility libraries that are in the domain to a service provider resulting in software which is easier to develop, longer-lasting, and usable across a wider array of computing environments.
OSIDs assist insoftware designand development by breaking up the problem space across service interface boundaries. Because network communication issues are addressed within a service provider andbelowthe interface, there isn't an assumption that every service provider implement a remote communications protocol (though many do). OSIDs are also used for communication and coordination among the various components of complex software which provide a means of organizing design and development activities for simplifiedproject management.
OSID providers (implementations) are often reused across a varied set of applications. Once software is made to understand the interface contract for a service, other compliant implementations may be used in its place. This achievesreusabilityat a high level (a service level) and also serves to easily scale software written for smaller more dedicated purposes.
An OSID provider implementation may be composed of an arbitrary number of other OSID providers. This layering technique is an obvious means ofabstraction. When all the OSID providers implement the same service, this is called anadapterpattern. Adapter patterns are powerful techniques to federate, multiplex, or bridge different services contracting from the same interface without the modification to the application.
|
https://en.wikipedia.org/wiki/Authorization_OSID
|
Multi-party authorization(MPA) is a process to protect atelecommunications network,data centerorindustrial control systemfrom undesirable acts by a malicious insider or inexperienced technician acting alone. MPA requires that a second authorized user approve an action before it is allowed to take place. This pro-actively protects data or systems from an undesirable act.
Existing methods to protect data and systems from the malicious insider includeauditing,job rotationandseparation of duties. Auditing is a reactive method meant to discover who did what after the fact. Job rotation and separation of duties are limiting techniques meant to minimize prolonged access to sensitive data or systems in order to limit undesirable acts. In contrast, MPA is a pro-active solution.
An advantage MPA has over other methods to protect from undesirable acts by a malicious insider or inexperienced operator is that MPA is pro-active and prevents data or systems from compromise by a single entity acting alone. MPA prevents the initial undesirable act rather than dealing with a breach or compromise after the fact.
Multi-party authorization technology can secure the most vulnerable and sensitive activities and data sources from attack by a compromised insider acting alone. It is somewhat analogous to weapons systems that require two individuals to turn two different keys in order to enable the system. One person cannot do it alone. Another example is to consider access to a lock box in a bank. That access requires multiple parties, one the lock box owner and another a bank official. Both individuals act together to access the lock box, while neither could do so alone. MPA, in like manner, ensures that a second set of eyes reviews and approves of activity involving critical or sensitive data or systems before the action takes place.
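The "second set of eyes" requirement can be sketched as follows; the class, user names, and action are hypothetical:

```python
# Sketch of multi-party authorization: a sensitive action executes only
# after a second authorized user approves it.
class PendingAction:
    def __init__(self, requester, action):
        self.requester = requester
        self.action = action
        self.approved_by = None

    def approve(self, approver):
        # The requester cannot supply their own second approval.
        if approver == self.requester:
            raise PermissionError("requester cannot approve their own action")
        self.approved_by = approver

    def execute(self):
        if self.approved_by is None:
            raise PermissionError("second-party approval required")
        return "executed: " + self.action
```

Like the two-key weapons system or the bank lock box, neither party acting alone can complete the action.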
Multi-party authorization is suitable for a wide variety of applications. MPA can be implemented to protect any type of sensitive data in electronic form or any activity within a network infrastructure or computerized control system. Anelectronic health recordis an example of a data record that could be protected by MPA. Multi-party authorization provides pro-active protection from undesirable acts by the inexperienced technician or malicious insider.
|
https://en.wikipedia.org/wiki/Multi-party_authorization
|
OAuth(short foropen authorization[1][2]) is an open standard for accessdelegation, commonly used as a way for internet users to grant websites or applications access to their information on other websites but without giving them the passwords.[3][4]This mechanism is used by companies such asAmazon,[5]Google,Meta Platforms,Microsoft, andTwitterto permit users to share information about their accounts with third-party applications or websites.
Generally, the OAuth protocol provides a way for resource owners to provide aclient applicationwith secure delegated access to server resources. It specifies a process for resource owners to authorize third-party access to their server resources without providing credentials. Designed specifically to work withHypertext Transfer Protocol(HTTP), OAuth essentially allowsaccess tokensto be issued to third-party clients by an authorization server, with the approval of the resource owner. The third party then uses the access token to access the protected resources hosted by the resource server.[2]
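The first step of the authorization code grant — directing the resource owner to the authorization server — can be sketched as follows; the endpoint and client values are placeholders, not a real provider's:

```python
from urllib.parse import urlencode

def build_authorization_url(auth_endpoint, client_id, redirect_uri,
                            scope, state):
    """Assemble the user-facing authorization request of the OAuth 2.0
    authorization code grant (RFC 6749 §4.1.1)."""
    params = {
        "response_type": "code",     # ask for an authorization code
        "client_id": client_id,
        "redirect_uri": redirect_uri,
        "scope": scope,
        "state": state,              # opaque value to mitigate CSRF
    }
    return auth_endpoint + "?" + urlencode(params)
```

After the resource owner approves, the authorization server redirects back with a code that the client exchanges for the access token described above.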
OAuth began in November 2006 whenBlaine Cookwas developing anOpenIDimplementation forTwitter. Meanwhile,Ma.gnolianeeded a solution to allow its members with OpenIDs to authorizeMac OS X Dashboardwidgets to access their service. Cook,Chris Messinaand Larry Halff from Magnolia met withDavid Recordonto discuss using OpenID with the Twitter and MagnoliaAPIsto delegate authentication. They concluded that there were no open standards for API access delegation.[6]
The OAuthdiscussion groupwas created in April 2007, for a small group of implementers to write the draft proposal for an open protocol. DeWitt Clinton fromGooglelearned of the OAuth project, and expressed his interest in supporting the effort. In July 2007, the team drafted an initial specification. Eran Hammer joined and coordinated the many OAuth contributions creating a more formal specification. On 4 December 2007, the OAuth Core 1.0 final draft was released.[7]
At the 73rdInternet Engineering Task Force(IETF) meeting inMinneapolisin November 2008, an OAuthBoFwas held to discuss bringing the protocol into the IETF for further standardization work. The event was well attended and there was wide support for formally chartering an OAuth working group within the IETF.
The OAuth 1.0 protocol was published as RFC 5849, an informationalRequest for Comments, in April 2010. Since 31 August 2010, all third party Twitter applications have been required to use OAuth.[8]
The OAuth 2.0 framework was published considering additional use cases and extensibility requirements gathered from the wider IETF community. Although built on the OAuth 1.0 deployment experience, OAuth 2.0 is not backwards compatible with OAuth 1.0. OAuth 2.0 was published as RFC 6749 and the Bearer Token Usage specification as RFC 6750, both standards-track Requests for Comments, in October 2012.[2][9]
As of November 2024, the OAuth 2.1 Authorization Framework draft is a work in progress. It consolidates the functionality of OAuth 2.0, OAuth 2.0 for Native Apps, Proof Key for Code Exchange, OAuth 2.0 for Browser-Based Apps, the OAuth 2.0 Security Best Current Practice, and Bearer Token Usage.[10]
On 23 April 2009, asession fixationsecurity flaw in the 1.0 protocol was announced. It affects the OAuth authorization flow (also known as "3-legged OAuth") in OAuth Core 1.0 Section 6.[11]Version 1.0a of the OAuth Core protocol was issued to address this issue.[12]
In January 2013, the Internet Engineering Task Force published a threat model for OAuth 2.0.[13]Among the threats outlined is one called "Open Redirector"; in early 2014, a variant of this was described under the name "Covert Redirect" by Wang Jing.[14][15][16][17]
OAuth 2.0 has been analyzed using formal web protocol analysis. This analysis revealed that in setups with multiple authorization servers, one of which is behaving maliciously, clients can become confused about the authorization server to use and may forward secrets to the malicious authorization server (AS Mix-Up Attack).[18]This prompted the creation of a newbest current practiceinternet draft that sets out to define a new security standard for OAuth 2.0.[19]Assuming a fix against the AS Mix-Up Attack in place, the security of OAuth 2.0 has been proven under strong attacker models using formal analysis.[18]
One implementation of OAuth 2.0 with numerous security flaws has been exposed.[20]
In April and May 2017, about one million users ofGmail(less than 0.1% of users as of May 2017) were targeted by an OAuth-based phishing attack, receiving an email purporting to be from a colleague, employer or friend wanting to share a document on Google Docs.[21]Those who clicked on the link within the email were directed to sign in and allow a potentially malicious third-party program called "Google Apps" to access their "email account, contacts and online documents".[21]Within "approximately one hour",[21]the phishing attack was stopped by Google, who advised those who had given "Google Apps" access to their email to revoke such access and change their passwords.
In the draft of OAuth 2.1, the use of the PKCE (RFC 7636) extension, originally recommended for native apps, is recommended for all kinds of OAuth clients, including web applications and other confidential clients, in order to prevent malicious browser extensions from performing OAuth 2.0 code injection attacks.[10]
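The PKCE mechanism can be sketched briefly: the client generates a random `code_verifier`, derives a `code_challenge` by base64url-encoding the SHA-256 hash of the verifier (the "S256" method of RFC 7636), sends the challenge with the authorization request, and reveals the verifier only when redeeming the authorization code. A minimal Python sketch:

```python
import base64
import hashlib
import secrets

def make_pkce_pair(verifier=None):
    """Return a (code_verifier, code_challenge) pair per RFC 7636 "S256".

    If no verifier is supplied, one is derived from 32 random bytes, giving
    a 43-character URL-safe string (within the RFC's 43-128 char limits).
    """
    if verifier is None:
        verifier = base64.urlsafe_b64encode(
            secrets.token_bytes(32)).rstrip(b"=").decode("ascii")
    # code_challenge = BASE64URL(SHA256(ASCII(code_verifier))), no padding
    digest = hashlib.sha256(verifier.encode("ascii")).digest()
    challenge = base64.urlsafe_b64encode(digest).rstrip(b"=").decode("ascii")
    return verifier, challenge
```

The authorization server later recomputes the challenge from the presented verifier and rejects the code exchange on mismatch, so a stolen authorization code alone is useless to an attacker.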
The OAuth framework specifies several grant types for different use cases. Some of the most common OAuth grant types are the authorization code grant (optionally with PKCE), the client credentials grant, the device authorization grant, and the refresh token grant; the implicit and resource owner password credentials grants are deprecated.[22]
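As an illustrative sketch of the most common grant, the authorization code grant, a client redeems the authorization code with a form-encoded POST to the token endpoint; the endpoint URL and credentials below are placeholders, not from any real provider:

```python
import urllib.parse

def build_token_request(code, redirect_uri, client_id, client_secret):
    """Form-encoded body for an authorization_code token request (RFC 6749 sec. 4.1.3)."""
    return urllib.parse.urlencode({
        "grant_type": "authorization_code",
        "code": code,                  # one-time code returned via the redirect
        "redirect_uri": redirect_uri,  # must match the authorization request
        "client_id": client_id,
        "client_secret": client_secret,
    })

# The body is POSTed to e.g. https://auth.example.com/token (placeholder)
# with Content-Type: application/x-www-form-urlencoded.
```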
Facebook's Graph API only supports OAuth 2.0.[23] Google supports OAuth 2.0 as the recommended authorization mechanism for all of its APIs.[24] Microsoft also supports OAuth 2.0 for various APIs and its Azure Active Directory service,[25] which is used to secure many Microsoft and third-party APIs.
OAuth can be used as an authorizing mechanism to access secured RSS/Atom feeds. Access to RSS/Atom feeds that require authentication has always been an issue. For example, an RSS feed from a secured Google Site could not have been accessed using Google Reader. Instead, three-legged OAuth would have been used to authorize that RSS client to access the feed from the Google Site.
Free-software client implementations of the OAuth 2.0 protocol, such as the LibreOffice OAuth2OOo extension, allow access to remote resources (e.g., via the Google API or the Microsoft Graph API), including from the LibreOffice Basic language. This makes it straightforward to write and use HTTP requests supporting the OAuth 2.0 protocol in LibreOffice macros.
OAuth is a service that is complementary to and distinct from OpenID. OAuth is unrelated to OATH, which is a reference architecture for authentication, not a standard for authorization. However, OAuth is directly related to OpenID Connect (OIDC), since OIDC is an authentication layer built on top of OAuth 2.0. OAuth is also unrelated to XACML, which is an authorization policy standard. OAuth can be used in conjunction with XACML, where OAuth is used for ownership consent and access delegation whereas XACML is used to define the authorization policies (e.g., managers can view documents in their region).
OAuth is an authorization protocol, rather than an authentication protocol. Using OAuth on its own as an authentication method may be referred to as pseudo-authentication.[26] The following diagrams highlight the differences between using OpenID (specifically designed as an authentication protocol) and OAuth for authorization.
The communication flow in both processes is similar:
The crucial difference is that in the OpenID authentication use case, the response from the identity provider is an assertion of identity; while in the OAuth authorization use case, the identity provider is also an API provider, and the response from the identity provider is an access token that may grant the application ongoing access to some of the identity provider's APIs, on the user's behalf. The access token acts as a kind of "valet key" that the application can include with its requests to the identity provider, proving that it has permission from the user to access those APIs.
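The "valet key" metaphor corresponds concretely to a bearer token (RFC 6750): the application simply attaches the opaque token to each API request, and the server checks the token's scope before serving the call. A sketch with a placeholder URL and token:

```python
import urllib.request

def api_request(url, access_token):
    """Build an HTTP request carrying an OAuth access token as a Bearer
    credential (RFC 6750); the API server validates the token's scope."""
    req = urllib.request.Request(url)
    req.add_header("Authorization", "Bearer %s" % access_token)
    return req
```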
Because the identity provider typically (but not always) authenticates the user as part of the process of granting an OAuth access token, it is tempting to view a successful OAuth access token request as an authentication method itself. However, because OAuth was not designed with this use case in mind, making this assumption can lead to major security flaws.[27]
XACML is a policy-based, attribute-based access control authorization framework. It provides an architecture (with policy enforcement and decision points), a policy language, and a request/response scheme for authorization queries.
XACML and OAuth can be combined to deliver a more comprehensive approach to authorization. OAuth does not provide a policy language with which to define access control policies. XACML can be used for its policy language.
Where OAuth focuses on delegated access (I, the user, grant Twitter access to my Facebook wall) and identity-centric authorization, XACML takes an attribute-based approach that can consider attributes of the user, the action, the resource, and the context (who, what, where, when, how). With XACML it is possible to define policies such as "managers can view documents in their region".
XACML provides more fine-grained access control than OAuth does. OAuth is limited in granularity to the coarse functionality (the scopes) exposed by the target service. As a result, it often makes sense to combine OAuth and XACML, where OAuth provides the delegated access use case and consent management and XACML provides the authorization policies that work on the applications, processes, and data.
Lastly, XACML can work transparently across multiple stacks (APIs, web SSO, ESBs, home-grown apps, databases, and so on), whereas OAuth focuses exclusively on HTTP-based apps.
Eran Hammer resigned from his role of lead author for the OAuth 2.0 project, withdrew from the IETF working group, and removed his name from the specification in July 2012. Hammer cited a conflict between web and enterprise cultures as his reason for leaving, noting that IETF is a community that is "all about enterprise use cases" and "not capable of simple". "What is now offered is a blueprint for an authorization protocol", he noted, "that is the enterprise way", providing a "whole new frontier to sell consulting services and integration solutions".[28] In comparing OAuth 2.0 with OAuth 1.0, Hammer points out that it has become "more complex, less interoperable, less useful, more incomplete, and most importantly, less secure". He explains how architectural changes for 2.0 unbound tokens from clients, removed all signatures and cryptography at a protocol level and added expiring tokens (because tokens could not be revoked) while complicating the processing of authorization. Numerous items were left unspecified or unlimited in the specification because "as has been the nature of this working group, no issue is too small to get stuck on or leave open for each implementation to decide."[28]
David Recordon later also removed his name from the specifications for unspecified reasons.[citation needed] Dick Hardt took over the editor role, and the framework was published in October 2012.[2]
David Harris, author of the email client Pegasus Mail, has criticised OAuth 2.0 as "an absolute dog's breakfast", requiring developers to write custom modules specific to each service (Gmail, Microsoft Mail services, etc.), and to register specifically with them.[29]
https://en.wikipedia.org/wiki/OAuth
OpenID is an open standard and decentralized authentication protocol promoted by the non-profit OpenID Foundation. It allows users to be authenticated by co-operating sites (known as relying parties, or RP) using a third-party identity provider (IDP) service, eliminating the need for webmasters to provide their own ad hoc login systems, and allowing users to log in to multiple unrelated websites without having to have a separate identity and password for each.[1] Users create accounts by selecting an OpenID identity provider,[1] and then use those accounts to sign on to any website that accepts OpenID authentication. Several large organizations either issue or accept OpenIDs on their websites.[2]
The OpenID standard provides a framework for the communication that must take place between the identity provider and the OpenID acceptor (the "relying party").[3] An extension to the standard (the OpenID Attribute Exchange) facilitates the transfer of user attributes, such as name and gender, from the OpenID identity provider to the relying party (each relying party may request a different set of attributes, depending on its requirements).[4] The OpenID protocol does not rely on a central authority to authenticate a user's identity. Moreover, neither services nor the OpenID standard may mandate a specific means by which to authenticate users, allowing for approaches ranging from the common (such as passwords) to the novel (such as smart cards or biometrics).
The final version of OpenID is OpenID 2.0, finalized and published in December 2007.[5] The term OpenID may also refer to an identifier as specified in the OpenID standard; these identifiers take the form of a unique Uniform Resource Identifier (URI), and are managed by some "OpenID provider" that handles authentication.[1]
As of March 2016[update], there are over 1 billion OpenID-enabled accounts on the Internet (see below) and approximately 1,100,934 sites have integrated OpenID consumer support:[6] AOL, Flickr, Google, Amazon.com, Canonical (provider name Ubuntu One), LiveJournal, Microsoft (provider name Microsoft account), Mixi, Myspace, Novell, OpenStreetMap, Orange, Sears, Sun, Telecom Italia, Universal Music Group, VeriSign, WordPress, Yahoo!, the BBC,[7] IBM,[8] PayPal,[9] and Steam,[10] although some of those organizations also have their own authentication management.
Many if not all of the larger organizations require users to provide authentication in the form of an existing email account or mobile phone number in order to sign up for an account (which then can be used as an OpenID identity). There are several smaller entities that accept sign-ups with no extra identity details required.
Facebook did use OpenID in the past, but moved to Facebook Connect.[11] Blogger also used OpenID, but since May 2018 no longer supports it.[12]
OpenID is a decentralized authentication protocol that allows users to authenticate with multiple websites using a single set of credentials, eliminating the need for separate usernames and passwords for each website. OpenID authenticates a user with an identity provider (IDP), who then provides the user with a unique identifier (called an OpenID). This identifier can then be used to authenticate the user with any website that supports OpenID.
When a user visits a website that supports OpenID authentication, the website will redirect the user to their chosen IDP. The IDP will then prompt the user to authenticate themselves (e.g., by entering a username and password). Once the user is authenticated, the IDP will generate an OpenID and send it back to the website. The website can then use this OpenID to authenticate the user without needing to know their actual credentials.
OpenID is built on top of several existing standards, including HTTP, HTML, and XML. OpenID relies on a number of technologies, including a discovery mechanism that allows websites to find the IDP associated with a particular OpenID, as well as security mechanisms to protect against phishing and other attacks.[13]
One of the key benefits of OpenID is that it allows users to control their own identity information, rather than relying on individual websites to store and manage their login credentials. This can be particularly important in cases where websites are vulnerable to security breaches or where users are concerned about the privacy of their personal information.
OpenID has been widely adopted by a number of large websites and service providers, including Google, Yahoo!, and PayPal. The protocol is also used by a number of open source projects and frameworks, including Ruby on Rails and Django.
The end user interacts with a relying party (such as a website) that provides an option to specify an OpenID for the purposes of authentication; an end user typically has previously registered an OpenID (e.g. alice.openid.example.org) with an OpenID provider (e.g. openid.example.org).[1]
The relying party typically transforms the OpenID into a canonical URL form (e.g. http://alice.openid.example.org/).
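This canonicalization step can be sketched as follows; the snippet is a deliberate simplification, as the full OpenID 2.0 normalization rules also cover XRIs and fragment removal:

```python
def normalize_openid(user_input):
    """Simplified OpenID identifier normalization: add an http scheme when
    missing and give a bare authority a trailing "/" path."""
    s = user_input.strip()
    if not s.startswith(("http://", "https://")):
        s = "http://" + s
    _, rest = s.split("://", 1)
    if "/" not in rest:  # no path component -> canonical form ends in "/"
        s += "/"
    return s
```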
There are two modes in which the relying party may communicate with the OpenID provider:
The checkid_immediate mode can fall back to the checkid_setup mode if the operation cannot be automated.
First, the relying party and the OpenID provider (optionally) establish a shared secret, referenced by an associate handle, which the relying party then stores. If using the checkid_setup mode, the relying party redirects the end user's user-agent to the OpenID provider so the end user can authenticate directly with the OpenID provider.
The method of authentication may vary, but typically, an OpenID provider prompts the end user for a password or some cryptographic token, and then asks whether the end user trusts the relying party to receive the necessary identity details.
If the end user declines the OpenID provider's request to trust the relying party, then the user-agent is redirected back to the relying party with a message indicating that authentication was rejected; the relying party in turn refuses to authenticate the end user.
If the end user accepts the OpenID provider's request to trust the relying party, then the user-agent is redirected back to the relying party along with the end user's credentials. That relying party must then confirm that the credentials really came from the OpenID provider. If the relying party and OpenID provider had previously established a shared secret, then the relying party can validate the identity of the OpenID provider by comparing its copy of the shared secret against the one received along with the end user's credentials; such a relying party is called stateful because it stores the shared secret between sessions. In contrast, a stateless or dumb relying party must make one more background request (check_authentication) to ensure that the data indeed came from the OpenID provider.
After the OpenID has been verified, authentication is considered successful and the end user is considered logged into the relying party under the identity specified by the given OpenID (e.g. alice.openid.example.org). The relying party typically then stores the end user's OpenID along with the end user's other session information.
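The stateful relying party's check described above amounts to recomputing an HMAC over the signed response fields using the shared secret. The sketch below follows OpenID 2.0's key-value convention (one "field:value" line per name listed in openid.signed) and assumes an HMAC-SHA256 association; it is illustrative, not a complete implementation:

```python
import base64
import hashlib
import hmac

def verify_openid_signature(params, mac_key):
    """Stateful relying-party check: recompute the HMAC over the signed
    fields and compare against openid.sig (sketch, HMAC-SHA256 association)."""
    signed_fields = params["openid.signed"].split(",")
    # Key-value form: one "field:value\n" line per signed field, in order
    kv = "".join("%s:%s\n" % (f, params["openid." + f]) for f in signed_fields)
    expected = base64.b64encode(
        hmac.new(mac_key, kv.encode("utf-8"), hashlib.sha256).digest()
    ).decode("ascii")
    return hmac.compare_digest(expected, params["openid.sig"])
```

A stateless relying party would instead forward the whole response to the OpenID provider in a check_authentication request and let the provider perform this comparison.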
To obtain an OpenID-enabled URL that can be used to log into OpenID-enabled websites, a user registers an OpenID identifier with an identity provider. Identity providers offer the ability to register a URL (typically a third-level domain, e.g. username.example.com) that will automatically be configured with OpenID authentication service.
Once they have registered an OpenID, a user can also use an existing URL under their own control (such as a blog or home page) as an alias or "delegated identity". They simply insert the appropriate OpenID tags in the HTML[14] or serve a Yadis document.[15]
Starting with OpenID Authentication 2.0 (and some 1.1 implementations), there are two types of identifiers that can be used with OpenID: URLs and XRIs.
XRIs are a new form of Internet identifier designed specifically for cross-domain digital identity. For example, XRIs come in two forms—i-names and i-numbers—that are usually registered simultaneously as synonyms. I-names are reassignable (like domain names), while i-numbers are never reassigned. When an XRI i-name is used as an OpenID identifier, it is immediately resolved to the synonymous i-number (the CanonicalID element of the XRDS document). This i-number is the OpenID identifier stored by the relying party. In this way, both the user and the relying party are protected from the end user's OpenID identity ever being taken over by another party as can happen with a URL based on a reassignable DNS name.
The OpenID Foundation (OIDF) promotes and enhances the OpenID community and technologies. The OIDF is a non-profit international standards development organization of individual developers, government agencies and companies who wish to promote and protect OpenID. The OpenID Foundation was formed in June 2007 and serves as a public trust organization representing an open community of developers, vendors and users. OIDF assists the community by providing needed infrastructure and help in promoting and supporting adoption of OpenID. This includes managing intellectual property and trademarks as well as fostering viral growth and global participation in OpenID.
The OpenID Foundation's board of directors has six community board members and eight corporate board members:[16]
Community board members
Corporate board members
The OIDF is a global organization that promotes digital identity and encourages the further adoption of OpenID; to that end, it has encouraged the creation of member chapters. Member chapters are officially part of the Foundation and work within their own constituency to support the development and adoption of OpenID as a framework for user-centric identity on the internet.
To ensure that OpenID specifications are freely implementable, the OIDF requires all contributors to sign a contribution agreement. This agreement both grants a copyright license to the Foundation to publish the collective specifications and includes a patent non-assertion agreement, which states that the contributor will not sue someone for implementing OpenID specifications.
The OpenID trademark in the United States was assigned to the OpenID Foundation in March 2008.[17]It had been registered by NetMesh Inc. before the OpenID Foundation was operational.[18][19]In Europe, as of August 31, 2007, the OpenID trademark is registered to the OpenID Europe Foundation.[20]
The OpenID logo was designed by Randy "ydnar" Reddig, who in 2005 had expressed plans to transfer the rights to an OpenID organization.[21]
Since the original announcement of OpenID, the official site has stated:[22]
Nobody should own this. Nobody's planning on making any money from this. The goal is to release every part of this under the most liberal licenses possible, so there's no money or licensing or registering required to play. It benefits the community as a whole if something like this exists, and we're all a part of the community.
Sun Microsystems, VeriSign and a number of smaller companies involved in OpenID have issued patent non-assertion covenants covering OpenID 1.1 specifications. The covenants state that the companies will not assert any of their patents against OpenID implementations and will revoke their promises from anyone who threatens, or asserts, patents against OpenID implementors.[23][24]
In March 2012, a research paper[25] reported two generic security issues in OpenID. Both issues allow an attacker to sign in to a victim's relying party accounts. For the first issue, OpenID and Google (an OpenID identity provider) both published security advisories to address it.[26][27] Google's advisory says "An attacker could forge an OpenID request that doesn't ask for the user's email address, and then insert an unsigned email address into the IDPs response. If the attacker relays this response to a website that doesn't notice that this attribute is unsigned, the website may be tricked into logging the attacker in to any local account." The research paper claims that many popular websites have been confirmed vulnerable, including Yahoo! Mail, smartsheet.com, Zoho, manymoon.com, and diigo.com. The researchers notified the affected parties, who then fixed their vulnerable code.
The paper called the second issue the "Data Type Confusion Logic Flaw", which also allows attackers to sign in to victims' RP accounts. Google and PayPal were initially confirmed vulnerable. OpenID published a vulnerability report[28] on the flaw. The report says Google and PayPal have applied fixes, and suggests that other OpenID vendors check their implementations.
Some observers have suggested that OpenID has security weaknesses and may prove vulnerable to phishing attacks.[29][30][31] For example, a malicious relying party may forward the end user to a bogus identity provider authentication page asking that end user to input their credentials. On completion of this, the malicious party (who in this case also controls the bogus authentication page) could then have access to the end user's account with the identity provider, and subsequently use that end user's OpenID to log into other services.
In an attempt to combat possible phishing attacks, some OpenID providers mandate that the end user be authenticated with them prior to any attempt to authenticate with the relying party.[32] This relies on the end user knowing the policy of the identity provider. In December 2008, the OpenID Foundation approved version 1.0 of the Provider Authentication Policy Extension (PAPE), which "enables Relying Parties to request that OpenID Providers employ specified authentication policies when authenticating users and for OpenID Providers to inform the Relying Parties which policies were actually used."[33]
Other security issues identified with OpenID involve lack of privacy and failure to address the trust problem.[34] However, this problem is not unique to OpenID and is simply the state of the Internet as commonly used.[citation needed]
The identity provider does, however, get a log of the user's OpenID logins; it knows when the user logged into which website, making cross-site tracking much easier. A compromised OpenID account is also likely to be a more serious breach of privacy than a compromised account on a single site.
Another important vulnerability is present in the last step of the authentication scheme when TLS/SSL is not used: the redirect URL from the identity provider to the relying party. The problem with this redirect is that anyone who can obtain this URL (e.g. by sniffing the wire) can replay it and get logged into the site as the victim user. Some identity providers use nonces (a number used just once) to allow a user to log into the site once and fail all consecutive attempts. The nonce solution works if the user is the first one to use the URL. However, a fast attacker who is sniffing the wire can obtain the URL and immediately reset the user's TCP connection (as the attacker is sniffing the wire and knows the required TCP sequence numbers) and then execute the replay attack as described above. Thus nonces only protect against passive attackers, but cannot prevent active attackers from executing the replay attack.[35] Use of TLS/SSL in the authentication process can significantly reduce this risk.
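The nonce check described above amounts to a one-time-use store on the relying party's side; a minimal in-memory sketch (a real deployment would also expire entries and validate the nonce's embedded timestamp):

```python
class NonceStore:
    """Accept each response nonce at most once, to block replayed assertions."""

    def __init__(self):
        self._seen = set()

    def check_and_store(self, nonce):
        if nonce in self._seen:
            return False  # replay: this assertion was already consumed
        self._seen.add(nonce)
        return True
```

As the article notes, this only defeats a passive eavesdropper who replays the URL after the victim; an active attacker who uses the URL first still wins, which is why TLS/SSL remains necessary.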
This can be restated as: a nonce stops a passive eavesdropper from replaying the redirect URL after the victim has used it, but an active attacker who can cut the victim's connection and use the URL first defeats the nonce.
On May 1, 2014, a bug dubbed "Covert Redirect related to OAuth 2.0 and OpenID" was disclosed.[36][37] It was discovered by mathematics doctoral student Wang Jing at the School of Physical and Mathematical Sciences, Nanyang Technological University, Singapore.[38][39][40]
OpenID's announcement of the issue reads:
"'Covert Redirect', publicized in May 2014, is an instance of attackers using open redirectors – a well-known threat, with well-known means of prevention. The OpenID Connect protocol mandates strict measures that preclude open redirectors to prevent this vulnerability."[41]
"The general consensus, so far, is that Covert Redirect is not as bad, but still a threat. Understanding what makes it dangerous requires a basic understanding of Open Redirect, and how it can be exploited."[42]
A patch was not immediately made available. Ori Eisen, founder, chairman and chief innovation officer at 41st Parameter, told Sue Marquette Poremba: "In any distributed system, we are counting on the good nature of the participants to do the right thing. In cases like OAuth and OpenID, the distribution is so vast that it is unreasonable to expect each and every website to patch up in the near future".[43]
The original OpenID authentication protocol was developed in May 2005[44] by Brad Fitzpatrick, creator of popular community website LiveJournal, while working at Six Apart.[45] Initially referred to as Yadis (an acronym for "Yet another distributed identity system"),[46] it was named OpenID after the openid.net domain name was given to Six Apart to use for the project.[47] OpenID support was soon implemented on LiveJournal and fellow LiveJournal-engine community DeadJournal for blog post comments and quickly gained attention in the digital identity community.[48][49] Web developer JanRain was an early supporter of OpenID, providing OpenID software libraries and expanding its business around OpenID-based services.
In late June, discussions started between OpenID users and developers from enterprise software company NetMesh, leading to collaboration on interoperability between OpenID and NetMesh's similar Light-weight Identity (LID) protocol. The direct result of the collaboration was the Yadis discovery protocol, adopting the name originally used for OpenID. The new Yadis was announced on October 24, 2005.[50] After a discussion at the 2005 Internet Identity Workshop a few days later, XRI/i-names developers joined the Yadis project,[51] contributing their Extensible Resource Descriptor Sequence (XRDS) format for utilization in the protocol.[52]
In December, developers at Sxip Identity began discussions with the OpenID/Yadis community[53] after announcing a shift in the development of version 2.0 of its Simple Extensible Identity Protocol (SXIP) to URL-based identities like LID and OpenID.[54] In March 2006, JanRain developed a Simple Registration (SREG) extension for OpenID enabling primitive profile-exchange[55] and in April submitted a proposal to formalize extensions to OpenID. The same month, work had also begun on incorporating full XRI support into OpenID.[56] Around early May, key OpenID developer David Recordon left Six Apart, joining VeriSign to focus more on digital identity and guidance for the OpenID spec.[49][57] By early June, the major differences between the SXIP 2.0 and OpenID projects were resolved with the agreement to support multiple personas in OpenID by submission of an identity provider URL rather than a full identity URL. With this, as well as the addition of extensions and XRI support underway, OpenID was evolving into a full-fledged digital identity framework, with Recordon proclaiming "We see OpenID as being an umbrella for the framework that encompasses the layers for identifiers, discovery, authentication and a messaging services layer that sits atop and this entire thing has sort of been dubbed 'OpenID 2.0'."[58] In late July, Sxip began to merge its Digital Identity Exchange (DIX) protocol into OpenID, submitting initial drafts of the OpenID Attribute Exchange (AX) extension in August. Late in 2006, a ZDNet opinion piece made the case for OpenID to users, web site operators and entrepreneurs.[59]
On January 31, 2007, Symantec announced support for OpenID in its Identity Initiative products and services.[60] A week later, on February 6, Microsoft made a joint announcement with JanRain, Sxip, and VeriSign to collaborate on interoperability between OpenID and Microsoft's Windows CardSpace digital identity platform, with particular focus on developing a phishing-resistant authentication solution for OpenID. As part of the collaboration, Microsoft pledged to support OpenID in its future identity server products and JanRain, Sxip, and VeriSign pledged to add support for Microsoft's Information Card profile to their future identity solutions.[61] In mid-February, AOL announced that an experimental OpenID provider service was functional for all AOL and AOL Instant Messenger (AIM) accounts.[62]
In May, Sun Microsystems began working with the OpenID community, announcing an OpenID program,[63] as well as entering a non-assertion covenant with the OpenID community, pledging not to assert any of its patents against implementations of OpenID.[23] In June, OpenID leadership formed the OpenID Foundation, an Oregon-based public benefit corporation for managing the OpenID brand and property.[64] The same month, an independent OpenID Europe Foundation was formed in Belgium[65] by Snorri Giorgetti. By early December, non-assertion agreements were collected from the major contributors to the protocol and the final OpenID Authentication 2.0 and OpenID Attribute Exchange 1.0 specifications were ratified on December 5.[66]
In mid-January 2008, Yahoo! announced initial OpenID 2.0 support, both as a provider and as a relying party, releasing the provider service by the end of the month.[67] In early February, Google, IBM, Microsoft, VeriSign and Yahoo! joined the OpenID Foundation as corporate board members.[68] Around early May, SourceForge, Inc. introduced OpenID provider and relying party support to leading open source software development website SourceForge.net.[69] In late July, popular social network service MySpace announced support for OpenID as a provider.[70] In late October, Google launched support as an OpenID provider and Microsoft announced that Windows Live ID would support OpenID.[71] In November, JanRain announced a free hosted service, RPX Basic, that allows websites to begin accepting OpenIDs for registration and login without having to install, integrate and configure the OpenID open source libraries.[72]
In January 2009, PayPal joined the OpenID Foundation as a corporate member, followed shortly by Facebook in February. The OpenID Foundation formed an executive committee and appointed Don Thibeau as executive director. In March, MySpace launched their previously announced OpenID provider service, enabling all MySpace users to use their MySpace URL as an OpenID. In May, Facebook launched their relying party functionality,[73][74]letting users use an automatic login-enabled OpenID account (e.g. Google) to log into Facebook.[75]
In September 2013, Janrain announced that MyOpenID.com would be shut down on February 1, 2014; a pie chart showed Facebook and Google dominating the social login space as of Q2 2013.[76] Facebook has since left OpenID; it is no longer a sponsor, represented on the board, or permitting OpenID logins.[16][77]
In May 2016, Symantec announced that they would be discontinuing their pip.verisignlabs.com OpenID personal identity portal service.[78][79]
In March 2018, Stack Overflow announced an end to OpenID support, citing insufficient usage to justify the cost. In the announcement, it was stated that based on activity, users strongly preferred Facebook, Google, and e-mail/password based account authentication.[80]
OpenID is a way to use a single set of user credentials to access multiple sites, while OAuth facilitates the authorization of one site to access and use information related to the user's account on another site. Although OAuth is not an authentication protocol, it can be used as part of one.
Authentication in the context of a user accessing an application tells an application who the current user is and whether or not they're present. [...] Authentication is all about the user and their presence with the application, and an internet-scale authentication protocol needs to be able to do this across network and security boundaries.
However, OAuth tells the application none of that. OAuth says absolutely nothing about the user, nor does it say how the user proved their presence or even if they're still there. As far as an OAuth client is concerned, it asked for a token, got a token, and eventually used that token to access some API. It doesn't know anything about who authorized the application or if there was even a user there at all. In fact, much of the point of OAuth is about giving this delegated access for use in situations where the user is not present on the connection between the client and the resource being accessed. This is great for client authorization, but it's really bad for authentication where the whole point is figuring out if the user is there or not (and who they are).[81]
The following drawing highlights the differences between using OpenID versus OAuth for authentication. Note that with OpenID, the process starts with the application asking the user for their identity (typically an OpenID URI), whereas in the case of OAuth, the application directly requests a limited-access OAuth token (valet key) to access the APIs (enter the house) on the user's behalf. If the user can grant that access, the application can retrieve the unique identifier for establishing the profile (identity) using the APIs.
OpenID provides a cryptographic verification mechanism that prevents the attack below against users who misuse OAuth for authentication.
Note that the valet key does not describe the user in any way; it only provides limited access rights to some house (which is not even necessarily the user's; they just had a key). Therefore, if the key becomes compromised (the user is malicious and managed to steal the key to someone else's house), then the user can impersonate the house owner to the application that requested proof of their authenticity. If the key is compromised at any point in the chain of trust, a malicious user may intercept it and use it to impersonate user X for any application relying on OAuth 2.0 for pseudo-authentication against the same OAuth authorization server. Conversely, the notarized letter contains the user's signature, which can be checked by the requesting application against the user, so this attack is not viable.[82]
The letter can usepublic-key cryptographyto be authenticated.
Published in February 2014[83] by the OpenID Foundation, OpenID Connect (OIDC) is the third generation of OpenID technology. It is an authentication layer on top of the OAuth 2.0 authorization framework.[84] It allows computing clients to verify the identity of an end user based on the authentication performed by an authorization server, as well as to obtain the basic profile information about the end user in an interoperable and REST-like manner. In technical terms, OpenID Connect specifies a RESTful HTTP API, using JSON as a data format.
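In OpenID Connect, the identity information is delivered as an ID Token, a signed JSON Web Token (JWT) whose payload is a JSON object of claims. The following is a minimal sketch of extracting claims from such a token; the token is built locally for illustration (no real provider issued it), and a real client must also verify the signature against the provider's published keys before trusting any claim.

```python
import base64
import json

def decode_segment(segment: str) -> dict:
    """Base64url-decode one JWT segment (header or payload) into a dict."""
    padded = segment + "=" * (-len(segment) % 4)  # restore stripped padding
    return json.loads(base64.urlsafe_b64decode(padded))

def id_token_claims(token: str) -> dict:
    """Split a JWT into header.payload.signature and return the payload claims.
    NOTE: this sketch does NOT verify the signature; production code must."""
    header_b64, payload_b64, _signature = token.split(".")
    return decode_segment(payload_b64)

# Illustrative, locally-built token (issuer, subject and audience are made up).
header = base64.urlsafe_b64encode(
    json.dumps({"alg": "RS256"}).encode()).rstrip(b"=").decode()
payload = base64.urlsafe_b64encode(json.dumps(
    {"iss": "https://op.example", "sub": "248289761001", "aud": "client-app"}
).encode()).rstrip(b"=").decode()
token = f"{header}.{payload}.fake-signature"

print(id_token_claims(token)["sub"])  # → 248289761001
```

The `iss` (issuer) and `sub` (subject) claims together give the client exactly what plain OAuth withholds: a stable, provider-scoped identifier for the authenticated user.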
OpenID Connect allows a range of parties, including web-based, mobile and JavaScript clients, to request and receive information about authenticated sessions and end users. The OpenID Connect specification is extensible, supporting optional features such as encryption of identity data, discovery of OpenID providers, and session management.
|
https://en.wikipedia.org/wiki/OpenID_Connect
|
WebFinger is a protocol specified by the Internet Engineering Task Force (IETF) in RFC 7033 that allows for discovery of information about people and things identified by a URI.[1] Information about a person might be discovered via an acct: URI, for example, which is a URI that looks like an email address.
WebFinger is specified as the discovery protocol for OpenID Connect,[2] which is a protocol that allows one to more easily log into various sites on the Internet.[3]
The WebFinger protocol is used by federated software, such as GNU social,[citation needed] Diaspora,[4] or Mastodon,[5] to discover users on federated nodes and pods, as well as the remoteStorage protocol.[6]
As a historical note, the name "WebFinger" is derived from the old ARPANET Finger protocol, but it is a very different protocol designed for HTTP.[7]
The protocol payload is represented inJSONformat.
Client request:
Server response:
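As a hedged illustration of such an exchange, following the shape defined in RFC 7033 (the account carol@example.com and the link target are illustrative, not taken from a real server):

```
GET /.well-known/webfinger?resource=acct:carol@example.com HTTP/1.1
Host: example.com

HTTP/1.1 200 OK
Content-Type: application/jrd+json

{
  "subject": "acct:carol@example.com",
  "links": [
    {
      "rel": "http://webfinger.net/rel/profile-page",
      "type": "text/html",
      "href": "https://www.example.com/~carol/"
    }
  ]
}
```

The JSON response body is a JSON Resource Descriptor (JRD): a `subject` naming the queried resource plus an array of typed `links` pointing to related resources.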
On Mastodon, any federated servers can look up users by sending a request to the WebFinger endpoint on other servers.[5] Here is an example for the user @Mastodon@mastodon.social:
Client request:
Server response:
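The lookup URL for such a request can be derived mechanically from the handle: the part after the second @ names the host to query, and the full acct: URI goes into the resource query parameter. A minimal sketch (the handle is the one from the example above; the /.well-known/webfinger path is fixed by RFC 7033):

```python
from urllib.parse import urlencode

def webfinger_url(handle: str) -> str:
    """Build the WebFinger lookup URL for a Mastodon-style @user@host handle."""
    user, host = handle.lstrip("@").split("@", 1)
    query = urlencode({"resource": f"acct:{user}@{host}"})
    return f"https://{host}/.well-known/webfinger?{query}"

print(webfinger_url("@Mastodon@mastodon.social"))
# → https://mastodon.social/.well-known/webfinger?resource=acct%3AMastodon%40mastodon.social
```

The server's JRD response would then typically contain, among other links, a link with rel "self" and type application/activity+json pointing at the user's ActivityPub actor document.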
|
https://en.wikipedia.org/wiki/WebFinger
|
WebID is a method for internet services and members to know who they are communicating with. The WebID specifications define a set of editor's drafts to prepare the process of standardization for identity, identification and authentication on HTTP-based networks. WebID-based protocols (Solid OIDC, WebID-TLS, WebID-TLS+Delegation) offer a new way to log into internet services. Instead of using a password, for example, the member refers to another web address which can vouch for it. WebID is not a specific service or product.
Technically speaking, a WebID is an HTTP URI that denotes ("refers to" or "names") an agent on an HTTP-based network such as the Web or an enterprise intranet. In line with linked data principles, when a WebID is de-referenced ("looked up"), it resolves to a profile document (a WebID-Profile) that describes its referent (what it denotes). This profile document consists of RDF model-based structured data, originally constructed primarily using terms from the FOAF vocabulary, but now often including terms from other vocabularies.
Profile documents can be stored on the agent's own Web server, and access thereto may be partially or wholly constrained to specific agent identities via the use of access controls, to preserve the privacy of the profile document's subject.
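As a sketch of what such a profile document might contain, here is a minimal WebID-Profile in Turtle notation (the URIs and the name are hypothetical; real profiles typically use additional vocabulary terms beyond FOAF, as noted above):

```turtle
@prefix foaf: <http://xmlns.com/foaf/0.1/> .

# The fragment URI denotes the person; the fragment-less URI denotes the document.
<https://alice.example/profile#me>
    a foaf:Person ;
    foaf:name "Alice" ;
    foaf:knows <https://bob.example/profile#me> .
```

The hash fragment (#me) is what lets one URI distinguish the agent being described from the document describing it.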
The WebID-TLS protocol (formerly known as FOAF+SSL[1]) is a decentralized and secure authentication protocol built upon the profile information as well as the Transport Layer Security (TLS) client certificates available in virtually all modern web browsers. It was first presented[2][3] for the W3C Workshop on the Future of Social Networking in 2009.
Unlike the usual SSL usage patterns, WebID-TLS does not require[4] a dedicated Certificate Authority to perform the user authorization. Users can easily mint useful identities for themselves using any TLS certificate (even self-signed ones). Using TLS client certificates for Web site user authentication doesn't usually require that the user input a password, unlike many other single sign-on mechanisms, which can make WebID-TLS quite convenient. However, the client certificate selection dialogues in popular Web browsers are not yet as user-friendly as they might be, negating some of this convenience.
A web of trust based on WebID-TLS could be gradually constructed, connecting users and their activities on the World Wide Web, without formal key signing parties, making the identification harder for anyone (even the original issuing authority) to falsify.[5]
WebID-OIDC was an authentication delegation protocol used by Solid. It is based on decentralized OAuth2/OpenID Connect.[6] It has been superseded by Solid OIDC.[7]
Solid OIDC is an authentication delegation protocol used by Solid. It is based on OAuth2/OpenID Connect.[8]
WebID-TLS+Delegation extends WebID-TLS through "On Behalf Of" headers and relationships, enabling one Agent to act with all permissions/privileges of another Agent.[9]
|
https://en.wikipedia.org/wiki/WebID
|