In computerized business management, single version of the truth (SVOT) is a technical concept describing the data warehousing ideal of having either a single centralised database, or at least a distributed, synchronised database, which stores all of an organisation's data in a consistent and non-redundant form. This contrasts with the related concept of single source of truth (SSOT), a data storage principle of always sourcing a particular piece of information from one place.

In message processing systems (often real-time systems), the term also refers to the goal of establishing a single agreed sequence of messages within a database formed by a particular but arbitrary ordering of records. The key idea is that data combined in a certain sequence is a "truth" which may be analysed and processed to give particular results. Although the sequence is arbitrary (another correct but equally arbitrary ordering would ultimately yield different results in any analysis), it is desirable to agree that the sequence enshrined in the "single version of the truth" is the version that will be considered "the truth", so that any conclusions drawn from analysis of the database are treated as valid and unarguable. In a technical context, the database may be duplicated to a backup environment to ensure that a persistent record of the "single version of the truth" is kept. The key point is that when the database is created from an external data source (such as a sequence of trading messages from a stock exchange), an arbitrary selection is made of one possibility from two or more equally valid representations of the input data, but the decision henceforth sets "in stone" one and only one version of the truth. Critics of SVOT as applied to message sequencing argue that this concept is not scalable.
As the world moves towards systems spread over many processing nodes, the effort involved in negotiating a single agreed-upon sequence becomes prohibitive. But as pointed out by Owen Rubel at his APIWorld talk 'The New API Pattern', the SVOT is always going to be the service layer in a distributed architecture where input/output (I/O) meet; this is also where the endpoint binding belongs, allowing for modularization and better abstraction of the I/O data across the architecture and avoiding an architectural cross-cutting concern.[1]
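The claim that equally valid orderings yield different analytical results can be shown with a toy example (the messages and amounts are hypothetical, not drawn from any real exchange):

```python
# Two messages arrive "simultaneously": a deposit and a proportional fee.
# Each ordering is a correct history, but they produce different final
# balances, so one ordering must be enshrined as the agreed "truth".
def replay(balance, messages):
    for kind, amount in messages:
        if kind == "deposit":
            balance += amount
        else:  # "fee": remove a fraction of the current balance
            balance *= (1 - amount)
    return balance

print(replay(100.0, [("deposit", 50.0), ("fee", 0.5)]))  # 75.0
print(replay(100.0, [("fee", 0.5), ("deposit", 50.0)]))  # 100.0
```

Whichever ordering is chosen, it is the enshrined sequence (not the raw unordered feed) that gets duplicated to backup environments, which is what keeps every copy reporting the same "truth".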
https://en.wikipedia.org/wiki/Single_version_of_the_truth
The Apple SIM is a proprietary subscriber identity module (SIM) produced by Apple Inc. It is included in GPS + Cellular versions of the iPad Air 2 and later, iPad mini 3 and later, and iPad Pro.[1] The Apple SIM supports wireless services across multiple supported carriers, which can be selected from a user interface within iOS and iPadOS, removing the need to install a SIM provided by the carrier itself. While Apple did not acknowledge the feature when presenting the iPad models that ship with Apple SIM, promotional materials on its website describe the feature as geared toward users of short-term mobile Internet contracts across multiple carriers.[2]

Carriers supported by Apple SIM include AT&T, Verizon, T-Mobile US, EE, au, GigSky, Truphone, Three, and AlwaysOnline Wireless.[3] Altogether these carriers provide coverage in more than 100 countries.[3] However, activating mobile services on AT&T permanently locks the Apple SIM to AT&T, requiring the purchase of a new Apple SIM in order to use a different carrier.[4]

The Apple SIM is known as a removable SIM with remote provisioning[5]: a special SIM card that may be configured with different operator profiles. This is in contrast to an embedded SIM, which is not removable and may also be remotely provisioned. It appears that Apple has begun to include both types of SIM in its newer devices.[6] As of October 1, 2022, Apple SIM technology is no longer available for activating new cellular data plans on iPad.[7] The following devices have an embedded Apple SIM (except in China), and the following devices support physical Apple SIM cards.[8]
https://en.wikipedia.org/wiki/Apple_SIM
An eSIM (embedded SIM) is a form of SIM card that is embedded directly into a device as software installed onto an eUICC chip. First released in March 2016, eSIM is a global specification by the GSMA that enables remote SIM provisioning; end-users can change mobile network operators without the need to physically swap a SIM in the device.[1] eSIM technology has been referred to as a disruptive innovation for the mobile telephony industry.[2][3] Most flagship devices manufactured since 2018 that are not SIM-locked support eSIM technology;[4] as of October 2023, there were 134 models of mobile phones that supported eSIMs. In addition to mobile phones, tablet computers, and smartwatches, eSIM technology is used for Internet of things applications such as connected cars (smart rearview mirrors, on-board diagnostics, vehicle Wi-Fi hotspots), artificial intelligence translators, MiFi devices, smart earphones, smart metering, GPS tracking units, database transaction units, bicycle-sharing systems, advertising players, and closed-circuit television cameras. A report stated that by 2025, 98% of mobile network operators were expected to offer eSIMs.[5]

The eUICC chip used to host the eSIM is installed via surface-mount technology at the factory and uses the same electrical interface as a physical SIM, as defined in ISO/IEC 7816, but in a small 6 mm × 5 mm format. Once an eSIM carrier profile has been installed on an eUICC, it operates in the same way as a physical SIM, complete with a unique ICCID and network authentication key generated by the carrier.[6] If the eSIM is eUICC-compatible, it can be re-programmed with new SIM information. Otherwise, the eSIM is programmed with its ICCID/IMSI and other information at the time it is manufactured, and cannot be changed.
One common physical form factor of an eUICC chip is designated MFF2.[7] All eUICCs are programmed with a permanent eUICC ID (EID) at the factory, which is used by the provisioning service to associate the device with an existing carrier subscription as well as to negotiate a secure channel for programming.[8] The GSMA maintains two different versions of the eSIM standard: one for consumer and Internet of things devices[9] and another for machine-to-machine (M2M) devices.[10]

In November 2010, the GSMA began discussing the possibility of a software-based SIM.[11] In March 2012, at a meeting of the European Telecommunications Standards Institute, Motorola noted that the eUICC was geared at industrial devices, while Apple foresaw eSIMs in consumer products.[12] In February 2016, Samsung released the Samsung Gear S2 Classic 3G smartwatch, the first device to implement an eSIM.[13] In March 2017, during Mobile World Congress, Qualcomm introduced a technical solution, with a live demonstration, within its Snapdragon hardware chip together with related software (secured Java applications).[14]

In September 2017, Apple first introduced eSIM support with the Apple Watch Series 3.[15] In 2018, it introduced it to iPhone, with the iPhone XS[16] and iPhone XR,[17] and iPad, with the iPad Pro (3rd generation).[18] The first iPhone models to have no SIM card tray and work exclusively with eSIM were the iPhone 14 and iPhone 14 Pro, announced in 2022.[19] Outside the United States, all iPhone models continue to be sold with support for physical SIM cards, but the iPad Air (6th generation), iPad Pro (7th generation), and iPad Mini (7th generation), announced in 2024, work exclusively with eSIM.[20]

In October 2017, Google unveiled the Pixel 2, the first mobile phone to use an eSIM, available via its Google Fi Wireless service.[21] In 2018, Google released the Pixel 3 and Pixel 3 XL, and in May 2019 the Pixel 3a and Pixel 3a XL, with eSIM support for carriers other than Google Fi.[22][23][24] In October 2019, Google released the Pixel 4 and Pixel 4 XL with eSIM support.[25]

In December 2017, Microsoft launched its first eSIM-enabled device, the Microsoft Surface Pro LTE.[26] In 2018, Microsoft also introduced eSIM support to the Windows 10 operating system.[27] In June 2018, Singapore sought public consultation on introducing eSIM as a new standard.[28] In July 2018, Plintron implemented the eSIM4Things Internet of things product.[29] Motorola released the 2020 version of the Motorola Razr, a foldable smartphone that has no physical SIM slot since it only supports eSIM.[30] Samsung shipped the Samsung Galaxy S21 and S20 in North America with eSIM hardware on board but no software support out of the box; the feature was enabled with the One UI version 4 update in November 2021.[31] In 2023, there were 650 million installed devices with eSIM capability.[32]
https://en.wikipedia.org/wiki/ESIM
An IP Multimedia Services Identity Module (ISIM) is an application residing on the UICC, an IC card specified in TS 31.101. This module may be used on a UMTS 3G or IMS VoLTE network. It contains parameters for identifying and authenticating the user to the IMS. The ISIM application can co-exist with SIM and USIM on the same UICC, making it possible to use the same smartcard both in GSM networks and in earlier releases of UMTS. Among the data present on an ISIM are an IP Multimedia Private Identity (IMPI), the home operator domain name, one or more IP Multimedia Public Identities (IMPU), and a long-term secret used to authenticate and calculate cipher keys. The first IMPU stored in the ISIM is used in emergency registration requests.
https://en.wikipedia.org/wiki/IP_Multimedia_Services_Identity_Module
A mobile equipment identifier (MEID) is a globally unique number identifying a physical piece of CDMA2000 mobile station equipment. The number format is defined by the 3GPP2 report S.R0048; in practical terms, it can be seen as an IMEI with hexadecimal digits. An MEID is 56 bits long (14 hexadecimal digits). It consists of three fields: an 8-bit regional code (RR), a 24-bit manufacturer code, and a 24-bit manufacturer-assigned serial number. The check digit (CD) is not considered part of the MEID.

The MEID was created to replace electronic serial numbers (ESNs), whose virgin form was exhausted in November 2008.[1] As of TIA/EIA/IS-41 Revision D and TIA/EIA/IS-2000 Rev C, the ESN is still a required field in many messages; for compatibility, devices with an MEID can use a pseudo-ESN (pESN), which is a manufacturer code of 0x80 (formerly reserved) followed by the least significant 24 bits of the SHA-1 hash of the MEID.[2] MEIDs are used on CDMA mobile phones. GSM phones do not have an ESN or MIN, only an International Mobile Station Equipment Identity (IMEI) number. Commonly, opening the phone's dialler and typing *#06# will display its MEID.[3]

The separation between international mobile equipment identifiers (IMEIs) used by GSM/UMTS and MEIDs is based on number ranges. There are two administrators: the global decimal administrator (GDA) for IMEIs and the global hexadecimal administrator (GHA) for MEIDs. As of August 2006, the TIA acts as the GHA to assign MEID code prefixes (0xA0 and up), and the GSM Association acts as the global decimal administrator. The TIA also allocates IMEI codes, specifically destined for dual-technology phones, out of the RR=99 range. This range is commonly (but not exclusively) used for LTE-capable handsets with CDMA support. Other administrators working under the GSMA may also allocate any IMEI for use in dual-technology phones.
For instance, Apple and LG normally use RR=35, which is allocated by BABT, while Chinese brands such as Huawei use two RR=86 IMEIs allocated by TAF for 3GPP networks alongside a distinct RR=99 decimal or RR=A0 hexadecimal MEID for 3GPP2 networks. Every IMEI can also be used as an MEID in CDMA devices (as well as in single-mode devices designed with GSM or other 3GPP protocols), but MEIDs can also contain hexadecimal digits, and such an MEID cannot be used as an IMEI.

There are two standard formats for MEIDs, and both can include an optional check digit, as defined by 3GPP2 standard X.S0008.

The hexadecimal form is specified to be 14 digits grouped together and applies whether all digits are in the decimal range or some are in the range 'A'–'F'. If all digits are in the range '0'–'9', the check digit is calculated using the normal base-10 Luhn algorithm; if at least one digit is in the range 'A'–'F', the same algorithm is applied using base-16 arithmetic. The check digit is never transmitted or stored. It is intended to detect most (but not all) input errors; it is not a checksum or CRC for detecting transmission errors. Consequently, it may be printed on phones or their packaging in case of manual entry of an MEID (e.g., because there is no bar code or the bar code is unreadable).

The decimal form is specified to be 18 digits grouped in a 5-5-4-4 pattern. It is calculated by converting the RR and manufacturer code portion (32 bits) to decimal, padding on the left with '0' digits to 10 digits, and separately converting the serial number portion to decimal, padding on the left to 8 digits. A check digit can be calculated from the 18-digit result using the standard base-10 Luhn algorithm and appended to the end. Note that to produce this form, the MEID digits are treated as base-16 numbers even if all of them are in the range '0'–'9'.

Because the pESN is formed by a hash on the MEID, there is the potential for hash collisions.
These will cause an extremely rare condition known as a 'collision' on a pure ESN-only network, since the ESN is used for the calculation of the Public Long Code Mask (PLCM) used for communication with the base station. Two mobiles using the same pESN within the same base station area (operating on the same frequency) can cause call setup and paging failures. The probability of a collision has been carefully examined;[4] roughly, it is estimated that even on a heavily loaded network the frequency of this situation is closer to 1 in 1,000,000 calls than to 1 in 100,000. 3GPP2 specification C.S0072 provides a solution to this problem by allowing the PLCM to be established by the base station, which can easily ensure that all PLCM codes in use are unique. This specification also allows the PLCM to be based on the MEID or IMSI.

A different problem occurs when ESN codes are stored in a database (such as for OTASP). In this situation, the risk of at least two phones having the same pseudo-ESN can be calculated using the birthday paradox, and works out to about a 50 per cent probability in a database with 4,800 pseudo-ESN entries. 3GPP2 specifications C.S0016 (Revision C or higher) and C.S0066 have been modified to allow the replacement MEID identifier to be transmitted, resolving this problem.

Another problem is that messages delivered on the forward paging channel using the pESN as an address could be delivered to multiple mobiles seemingly at random. This problem can be avoided by using mobile identification number (MIN) or IMSI based addressing instead.

A short Python script can convert an MEID to a pESN, and the CDG provides a JavaScript calculator with more conversion options; a C# method can likewise convert an MEID from hexadecimal to decimal form (or return empty for an invalid MEID hex value).
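The conversion rules above can be sketched in Python. This is an illustrative sketch following the rules stated in the text, not a reproduction of any official code; the function names are mine, and the example MEID is used only for illustration.

```python
import hashlib


def meid_to_pesn(meid_hex: str) -> str:
    """pESN = reserved manufacturer code 0x80 followed by the least
    significant 24 bits (3 bytes) of the SHA-1 hash of the raw MEID."""
    digest = hashlib.sha1(bytes.fromhex(meid_hex)).digest()
    return "80" + digest[-3:].hex().upper()


def meid_hex_to_dec(meid_hex: str) -> str:
    """Decimal form: RR + manufacturer code (32 bits) as 10 decimal
    digits, then the 24-bit serial number as 8 decimal digits."""
    manufacturer = int(meid_hex[:8], 16)
    serial = int(meid_hex[8:14], 16)
    return f"{manufacturer:010d}{serial:08d}"


def luhn_check_digit(payload: str, base: int) -> str:
    """Luhn check digit in the given base (10 for all-decimal MEIDs,
    16 if any digit is 'A'-'F'); never transmitted or stored."""
    total = 0
    for i, ch in enumerate(reversed(payload)):
        d = int(ch, base)
        if i % 2 == 0:              # rightmost payload digit is doubled
            d *= 2
            d = d // base + d % base
        total += d
    return format((base - total % base) % base, "X")


meid = "A0000000002329"             # illustrative MEID in the 0xA0 range
print(meid_to_pesn(meid))           # '80' followed by 6 hex digits
print(meid_hex_to_dec(meid))        # '268435456000009001'
print(luhn_check_digit(meid, 16))
```

Note the base-16 check digit is computed over the hexadecimal form, while the decimal form gets its own base-10 Luhn digit over the 18-digit result.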
https://en.wikipedia.org/wiki/Mobile_equipment_identifier
A mobile signature is a digital signature generated either on a mobile phone or on a SIM card in a mobile phone.

The term first appeared in articles introducing mSign (short for Mobile Electronic Signature Consortium), which was founded in 1999 and comprised 35 member companies. In October 2000, the consortium published an XML interface defining a protocol allowing service providers to obtain a mobile (digital) signature from a mobile phone subscriber. In 2001, mSign gained industry-wide coverage when it became apparent that Brokat (one of the founding companies) had also obtained a process patent in Germany for using the mobile phone to generate digital signatures.

The term was then used by Paul Gibson (G&D) and Romary Dupuis (France Telecom) in their standardisation work at the European Telecommunications Standards Institute (ETSI) and published in ETSI Technical Report TR 102 203. The ETSI-MSS specifications, ETSI TS 102 204 and ETSI TS 102 207, define a SOAP interface and mobile signature roaming for systems implementing mobile signature services.

The mobile signature can have the legal equivalence of one's own wet signature, hence the term "Mobile Ink", a commercial term coined by Swiss Sicap. Other terms include "Mobile ID" and "Mobile Certificate", the latter used by a circle of trust of three Finnish mobile network operators implementing the roaming mobile signature framework Mobiilivarmenne. According to the EU directives for electronic signatures,[1] the mobile signature can have the same level of protection as the handwritten signature if all components in the signature creation chain are appropriately certified.
The governing standard for mobile signature creation devices producing the equivalent of a handwritten signature is described in Commission Decision 2003/511/EC of 14 July 2003 on the publication of reference numbers of generally recognised standards for electronic signature products in accordance with the Electronic Signatures Directive.[2] If the signature solution is Common Criteria evaluated by an independent party and given the EAL4+ designation, it can produce what the EU directive and consequent clarifications call a qualified electronic signature. The current standard dates back to 2002/2003 and was in the process of being renewed and published by the end of 2012.[3] Most, if not all, mobile signature implementations to date generate what the EU Directive calls an advanced electronic signature. The most successful mobile signature solutions can be found in Turkey,[4] Lithuania,[5] Estonia[6] and Finland,[7][8] with millions of users.

Technically, the mobile signature is created by a security module when a request reaches the device (SIM card): after introducing the request to the user with a few explanation prompts, the device asks for a secret code that only the correct user should know, usually in the form of a PIN. If the access control secret is entered correctly, the device is granted access to secret data containing, for example, an RSA private key, which is then used to create the signature or perform the other operations requested. The PKI system associates the public key counterpart of the secret key held in the secure device with a set of attributes contained in a structure called a digital certificate. The choice of registration procedure details during the definition of the attributes included in this digital certificate can be used to produce different levels of identity assurance, anything from anonymous-but-specific to high-standard real-world identity.
By creating a signature, the secure device owner can claim that identity. Thus, the mobile signature is a unique feature for a number of applications.[1]

ParsMSS in Iran: the Pars Mobile Signature Services Project (ParsMSS) has been designed and produced in Iran since 2011. ParsMSS can be provided in two ways, SIM-based and SIM-less. A Registration Authority (RA) connects to this service and issues the electronic certificate in person or remotely. With this service, financial transactions and documents can be signed digitally.

Mobile Ink[9] unites high security and user-friendly access to digital services which require strong authentication and authorization; subscribers can get mobile signature access to m-banking or corporate applications, for example. Mobile Ink is a commercial term associated with the mobile signature solution of Sicap, building on the Kiuru MSSP platform[10] by Methics Oy.[11][12] The platform allows the simultaneous existence of multiple keys and associated identities with distinct registration procedures. This is used, for example, as a replacement for RSA SecurID dongles with an anonymous-but-specific identity in corporate access applications.

Mobile Certificate, i.e. Mobiilivarmenne[13] in Finnish, is a term used in the Finnish market to describe the roaming mobile signature solution deployed by the three mobile network operators Elisa, Sonera, and DNA. This setup was developed in co-operation among all three operators under the national telecom technology coordination group FiCom, and it is the world's first system in which a fully functional co-operating ETSI TS 102 207 roaming service mesh was established in a multi-vendor software environment. Another national feature is that mobile phone numbers are portable across the operators, so the phone number prefix does not identify the operator.
To make things easy for Application Providers (see ETSI TS 102 204), they can purchase service from any one of the Acquiring Entity service providers (the mobile network operators) and reach all users. Part of the background was an update of national laws allowing digital person identity certificates (for Mobiilivarmenne use) to be issued by parties other than the official registration authorities at police offices. Another part was a co-operation agreement between the operators on the form of the certificates and on certification procedures and practices, producing similar certificate contents with similar identity-issuance traceability. All of these were reviewed and approved by the Finnish Communications Regulatory Authority, whose tasks include oversight of identity registration services, including those at government registries.

In Ukraine, the Mobile ID project started in 2015 and was later declared one of the Government of Ukraine's priorities supported by the EU. At the beginning of 2018, Ukrainian cell operators were evaluating proposals and testing platforms from different local and foreign developers, with platform selection to be followed by a comprehensive certification process. A list of cryptographic information protection tools[14] (and manufacturers) is legally allowed for use in Ukraine (as of February 19, 2018).

MPass

Austria started mobile signatures by 2003, as a technology of the Bürgerkarte (which includes electronic signing with smartcards). It was provided by mobilkom Austria but ended in 2007. After a relaunch in 2009, named Handy-Signatur, it became well used: by 2014 over 300,000 people, 5% of the adult population, owned a registered mobile signature. It is controlled by the Austrian government, the National Bank and Graz University of Technology. It is based on a TAN sent by SMS on request and confirmed with a private PIN.[15] According to 1999/93/EC, signing by Handy-Signatur is completely equivalent to a handwritten autograph.
Valimo Wireless, a Gemalto company, was the first company in the world to introduce mobile signature solutions to the market, creating the term Mobile ID. The initial mobile signature solution in Turkey, by Turkcell, used Valimo technology to implement a very successful deployment.[16][17] Valimo Mobile ID is currently in use in several countries.

Methics Oy is a privately held Finnish technology company with strong expertise in PKI and MSSP services. The Kiuru MSSP product line is used directly and as an OEM product by several service and solution providers.

The Mobile ID platform by Innovation Development HUB LLC is the only electronic identification and mobile signature solution to have passed state certification in Ukraine. It uses both post-Soviet and European cryptography algorithms, which makes the platform suitable for both CIS and EU PKI.

G&D SmartTrust is the original supplier of SIM card embedded WAP browsers with encryption plugins, developed in the late 1990s and called WIB (Wireless Internet Browser). The WIB technology is licensed by SmartTrust to many SIM card manufacturers, and mobile network operators can choose to use cards with WIB capabilities in their normal user base, immediately enabling them for use of MSSP services. SmartTrust's MSSP offering is called SmartLicentio.

Authentication may still be vulnerable to man-in-the-middle attacks and trojan horses, depending on the scheme employed.[18] Schemes like one-time-password generators and two-factor authentication do not completely solve man-in-the-middle attacks on open networks like the Internet.[19] However, supporting the authentication on the Internet with a parallel closed network like mobile/GSM and a digital-signature-enabled SIM card is the most secure method today against this type of attack.
If the application provider provides a detailed explanation of the transaction to be signed both on its Internet site and in the signing request sent via the mobile operator, the attack can easily be recognized by the individual when comparing the two screens. Since mobile operators do not let applications send signing requests for free, the cost and technical difficulty of intruding between the application provider and the mobile operator make it an improbable attack target. Nonetheless, there has been evidence in multiple places of such attacks occurring.

When a mobile user creates the sPIN (signing PIN) and secret key online within the secure SIM card during the registration process, this is known as "on-board key generation".[20] This requires a bit more interaction on the user's behalf while registering, but on the other hand it makes the security-mode interaction process familiar and lets users practise using the service. Also, when the user forgets or locks the PIN associated with a generated key, it is simple to generate a new key and assign it a new sPIN, destroying the previous versions, using the same process as the original registration, and most importantly without needing to replace the SIM card. In these systems there is commonly no secondary signing-PIN unblocking code (sPUK) at all, because revealing such a code has identical requirements for verifying the requesting person's identity as the original identity registration did.[21]

Compare this with the older "factory generated keys" model for older-technology SIM cards that had insufficient processing power for on-board key generation. The SIM card factory ran key generation with a special hardware accelerator and stored the key material on the card along with initial sPIN and sPUK codes. Sometimes the actual generation happened within the SIM card running in a special manufacturing mode; after generation, the capability was usually disabled by blowing a special control fuse.
Delivery of the sPUK codes in particular creates considerable security-information logistics problems, which can be entirely avoided with the use of on-board key generation. Turkcell was the first provider to roll out a mobile signature service with on-board key generation, which enables customers to create their signing and validation key pair after they get the SIM card. This way, GSM operators do not need to distribute signing PINs to customers; customers can create their sPIN anew, on their own.[22]

At the introduction of the Finnish Mobiilivarmenne[23] service in 2010, only one of the three operators chose to use this on-board key generation capability with user interaction; the cited reason was that it was claimed to be too hard for the user. Actual experience showed that the operators without it easily produced non-functional registrations with no online indication of status, while use of on-board key generation always resulted in a positive indication of success once the service became fully functional for the user. Also, if a mobile phone model had issues with the SIM Application Toolkit protocol, that became evident immediately during a registration process using on-board key generation.
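The signing flow described above (a request reaches the SIM, the user is prompted for the sPIN, and only then is the on-card key used) can be modelled roughly as follows. This is a minimal sketch: the class and its behaviour are illustrative, and an HMAC stands in for the RSA (or ECC) private-key operation a real SIM would perform.

```python
import hashlib
import hmac
import secrets


class SimSigningModule:
    """Toy model of a SIM-resident signing application: the secret key
    never leaves the module, and signing is gated by an sPIN with a
    limited number of attempts before the key is blocked."""

    MAX_TRIES = 3

    def __init__(self, spin: str):
        self._pin_hash = hashlib.sha256(spin.encode()).digest()
        self._key = secrets.token_bytes(32)    # on-board generated secret
        self._tries_left = self.MAX_TRIES

    def sign(self, request: bytes, spin: str) -> bytes:
        if self._tries_left == 0:
            raise PermissionError("sPIN blocked; re-registration required")
        candidate = hashlib.sha256(spin.encode()).digest()
        if not hmac.compare_digest(candidate, self._pin_hash):
            self._tries_left -= 1
            raise PermissionError("wrong sPIN")
        self._tries_left = self.MAX_TRIES      # correct entry resets counter
        return hmac.new(self._key, request, hashlib.sha256).digest()
```

In a real deployment the signature would be produced inside the tamper-resistant card and verified against the public key bound to the user's digital certificate by the PKI system.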
https://en.wikipedia.org/wiki/Mobile_signature
Multi-SIM technology allows cloning of up to 12 GSM SIM cards (of format COMP128v1) onto one card. The subscriber can leave the original cards in a secure place and use only the multi-SIM card in day-to-day life. For telecom operator-provided cards, only the group MSISDN number is known to multi-SIM subscribers; member-SIM MSISDNs are transparent to the subscriber, and messages sent to a member SIM are delivered to the group MSISDN. Multi-SIM allows switching among the (up to) 12 stored numbers from the phone's main menu: a new menu entry appears automatically after inserting the multi-SIM card into the phone. Only one of the member cards may be active at a time.

Modern SIM cards from many mobile operators are not compatible with multi-SIM technology and may not be cloned.[1] Multi-SIM technology is a result of the poor security algorithms used in the encryption of the first generation of GSM SIM cards, commonly called COMP128v1. SIM cloning is now more difficult to perform, as more and more mobile operators move to newer encryption methods such as COMP128v2 or COMP128v3. SIM cloning is still possible in some countries, such as Russia, Iran and China. SIM cards issued before June 2002 are most likely COMP128v1 cards, and thus clonable.[2]
https://en.wikipedia.org/wiki/Multi-SIM_card
A regional lockout (or region coding) is a class of digital rights management preventing the use of a certain product or service, such as multimedia or a hardware device, outside a certain region or territory. A regional lockout may be enforced through physical means, through technological means such as detecting the user's IP address or using an identifying code, or through unintentional means introduced by devices that only support certain regional technologies (such as video formats, i.e., NTSC and PAL).

A regional lockout may be enforced for several reasons: to stagger the release of a certain product, to avoid losing sales to the product's foreign publisher,[1] to maximize the product's impact in a certain region through localization,[1] to hinder grey market imports by enforcing price discrimination, or to prevent users from accessing certain content in their territory for legal reasons (either due to censorship laws, or because a distributor does not have the rights to certain intellectual property outside its specified region).

The DVD, Blu-ray Disc, and UMD media formats all support region coding. DVDs use eight region codes (Region 7 is reserved for future use; Region 8 is used for "international venues" such as airplanes and cruise ships), and Blu-ray Discs use three region codes corresponding to different areas of the world. Most Blu-rays, however, are region-free, and Ultra HD Blu-ray discs are also region-free.[2]

On computers, the DVD region can usually be changed five times. Windows uses three region counters: its own, that of the DVD drive, and that of the player software (occasionally, the player software has no region counter of its own but uses that of Windows). After the fifth region change, the system is locked to that region. In modern DVD drives (type RPC-2), the region lock is saved in the drive's hardware, so even reinstalling Windows or using the drive with a different computer will not unlock it again.
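The RPC-2 counter logic described above can be sketched as follows. This is a toy model under stated assumptions (the class is hypothetical; real drives implement the counter in firmware, and details such as whether the initial region assignment consumes a change vary by vendor):

```python
class Rpc2Drive:
    """Toy model of an RPC-2 DVD drive's region counter: the region can
    be changed a fixed number of times, after which the drive stays
    locked to the last region, even across OS reinstalls or new hosts."""

    def __init__(self, changes_allowed: int = 5):
        self.region = None
        self.changes_left = changes_allowed  # persisted in drive hardware

    def set_region(self, region: int) -> None:
        if region == self.region:
            return                           # no change, no cost
        if self.changes_left == 0:
            raise RuntimeError(f"drive permanently locked to region {self.region}")
        self.changes_left -= 1
        self.region = region
```

Because the counter lives in the drive rather than in Windows or the player software, resetting either of the other two counters leaves the hardware lock in place.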
Unlike DVD regions, Blu-ray regions are verified only by the player software, not by the computer system or the drive. The region code is stored in a file or the registry, and there are hacks to reset the region counter of the player software. In stand-alone players, the region code is part of the firmware. For bypassing region codes, there are software and multi-regional players available.

A newer form of Blu-ray region coding tests not only the region of the player or player software but also its country code, repurposing a user setting intended for localization (PSR19) as a new form of regional lockout. This means, for example, that although both the U.S. and Japan are Region A, some American discs will not play on devices or software configured for Japan, or vice versa, since the two countries have different country codes. (The United States is "US" (21843, or hex 0x5553), Japan is "JP" (19024, or hex 0x4A50), and Canada is "CA" (17217, or hex 0x4341).) Although there are only three Blu-ray regions, the country code allows much more precise control of the regional distribution of Blu-ray Discs than the six (or eight) DVD regions. Since Blu-ray discs are cheaper in America than in Japan, American releases of Japanese anime series are often protected in this way to prevent reverse importation. Some discs check whether the country code is U.S. or Canada (sometimes also Mexico) and play only in those countries; others allow all country codes (even those of non-Region-A countries) except the Japanese one.

AnyDVD HD (7.5.9.0 and higher) has an option to enforce the U.S. country code. The software developers say users can also change the enforced country code themselves in the registry value "bdCountryCode", or set "no country code" by using the value 4294967295 (hex 0xFFFFFFFF). (Before the value is changed, AnyDVD must be closed, and after the change it must be restarted.)
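The PSR19 country-code values quoted above are simply the two ISO 3166-1 alpha-2 letters packed into a 16-bit integer, which a short helper (hypothetical name) makes explicit:

```python
def psr19_country_value(code: str) -> int:
    """Pack a two-letter country code into the 16-bit value used for
    the player's PSR19 setting: first letter in the high byte, second
    letter in the low byte (ASCII codes)."""
    first, second = (ord(c) for c in code.upper())
    return (first << 8) | second


print(psr19_country_value("US"))   # 21843 (0x5553)
print(psr19_country_value("JP"))   # 19024 (0x4A50)
print(psr19_country_value("CA"))   # 17217 (0x4341)
```

This also shows why 0xFFFFFFFF works as a "no country code" sentinel: it is outside the range any two-letter ASCII pair can produce.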
Some stand-alone Blu-ray players and player programs allow the user to change the country code (e.g., via the parental lock) and can thus bypass this protection. Some features of certain programs are/were disabled if the software is/was installed on a computer in a certain region. In older versions of the copy software CloneCD, the features "Amplify Weak Sectors", "Protected PC Games", and "Hide CDR Media" were disabled in the United States and Japan. Changing the region and language settings in Windows (e.g., to Canadian English) or applying patches could unlock these features in the two countries. SlySoft decided to leave these options disabled for the U.S. for legal reasons, although, strangely enough, no features were disabled in the program AnyDVD, which is also illegal under U.S. law. The current version of CloneCD (5.3.1.4) is no longer region-restricted. The newer versions of the copy software DVDFab (9.1.5.0 and higher) come in a U.S. version (with no Blu-ray–ripping feature), which is downloaded if the homepage dvdfab.cn identifies a U.S. IP address, and a non-U.S. version (with a working Blu-ray–ripping feature). Some webpages allow the download of the non-U.S. version also from the U.S. (they store the non-U.S. version directly and do not use download links to the developer's homepage). The software CCleaner v5.45.6611 added a check to prevent its use in embargoed countries. Some programs (e.g., games) are distributed in different versions for NTSC and PAL computers. In some cases, to avoid grey market imports or international software piracy, they are designed not to run on a computer with the wrong TV system. Other programs can run on computers with both TV systems. Kaspersky Lab sells its anti-virus products at different prices in different regions and uses regionalized activation codes. A program bought in a country of a region can be activated in another country of the same region.
Once activated, the software can also be used in, and download updates from, other regions as long as the license is valid. Problems may arise when the license must be renewed, or if the software must be reinstalled, in a region other than the one where it was bought. The region is identified by the IP address (no activation is possible without an Internet connection), so the use of a VPN or a proxy is recommended to circumvent the restriction. The desktop versions of HP Pavilion and Compaq Presario are region-locked (for example, build 91UKV6PRA1 for the A6740uk released in 2009). WildTangent EMEA and Magic Desktop will not work on models in the U.S. On the internet, geo-blocking is used primarily to control access to online media content that is only licensed for playback in a certain region due to territorial licensing arrangements.[3] Regional lockouts in video games have been achieved by several methods, such as hardware/software authentication, slot pin-out changes, differences in cartridge cases, IP blocking and online software patching. Most console video games have region encoding. The Atari 2600 does not have regional locking; however, NTSC games can display wrong colors and run at the wrong speed with wrong sound on PAL systems, and vice versa. The Atari 7800 has regional locking on NTSC systems, making PAL games unplayable on them. However, the PAL versions of the Atari 7800 can run NTSC games, though still suffering from the same problems the Atari 2600 had. The Atari 5200, Lynx and Jaguar are region-free. Nintendo was the first console maker to introduce regional locks to its consoles, using them for every one of its home consoles until the Nintendo Switch.[4] Nintendo has mostly abstained from using them for its handheld consoles.
Games for the Nintendo Entertainment System (NES) were locked through both physical and technical means; the design of cartridges for the NES differed between Japan and other markets, using a different number of pins. As the Famicom (the Japanese model) used slightly smaller cartridges, Japanese games could not fit into NES consoles without an adapter (and even with one, they could still not use the extra sound functionalities of the Famicom due to the differing hardware). Official adapters existed inside early copies of Gyromite; other Famicom games could be played by disassembling the cartridge and swapping the game's original board with that of a different Famicom game. Additionally, the NES also contained the 10NES authentication chip, which was coded for one of four regions. A game's region is recognized by the console using the 10NES chip; if the chip inside the cartridge conflicts with the chip inside the console, the game will not boot. The 10NES chip also doubled as a form of digital rights management to prevent loading unlicensed or bootleg games. The redesigned Nintendo Entertainment System (Model NES-101) released in 1993/1994 lacks the 10NES chip, and can play PAL and unlicensed games, although Famicom games still need a converter. The Famicom does not include a 10NES chip, but is still unable to play imports unless an adapter is used, due to the different size of the media. The American Super Nintendo Entertainment System (SNES) and the Super Famicom use differences in cartridge cases. A Super NES cartridge will not fit in a Super Famicom/PAL SNES slot due to its different shape, and two pieces of plastic in the SNES slot prevent Super Famicom cartridges from being inserted into the SNES. PAL SNES carts can be fully inserted into Japanese consoles, but a chip similar to the 10NES, called the CIC, prevents PAL games from being played on NTSC consoles and vice versa.
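The 10NES handshake is a lock-and-key scheme: a chip in the console (the lock) and a chip in the cartridge (the key) run the same keyed computation, and if their outputs ever disagree the console is held in reset. A toy model of that idea (the stream function here is made up; the real 10NES algorithm differs):

```python
# Simplified lock-and-key model in the style of the 10NES: console and
# cartridge chips compute the same region-seeded bit stream and compare.
# Region variants of the chip are simply seeded differently.
def chip_stream(region_seed, n=8):
    # Illustrative stand-in for the real 10NES computation.
    state = region_seed
    out = []
    for _ in range(n):
        state = (state * 37 + 11) % 251
        out.append(state & 1)
    return out

def console_boots(console_region, cart_region):
    # Mismatched streams -> the lock chip holds the console in reset.
    return chip_stream(console_region) == chip_stream(cart_region)
```

Cutting the lock chip's pin 4, a well-known console modification, effectively disables this comparison.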
While physical modification (of either the console or the cartridges) is needed to play games from different regions, a hardware modification is also needed to play games made for a different TV system. Region locks could be bypassed using special unlicensed cartridge adapters such as the Game Genie. Swapping cartridge shells also bypasses the physical regional lockout. The Nintendo 64 uses similar lockout methods to the Super NES. Both the GameCube and Wii are region-locked, and the Wii Shop Channel is region-locked as well. On the Wii, channels from other regions will refuse to load with the message "This channel can't be used." The GameCube and Wii's regional lockout can be bypassed either by console modification or simply by third-party software; Datel's FreeLoader and Action Replay discs are the most notable. The Wii U and its GamePad are also region-locked.[5][6] The Nintendo Switch is region-free, and therefore allows games from any region to be played, whether through physical cartridges or digital downloads. For instance, games from the Nintendo eShop can be purchased and downloaded regardless of region.[7] The only exception is the Chinese version of the Nintendo Switch distributed by Tencent in mainland China. This version of the console can still play cartridge-based games from any region; however, it can only connect to Chinese servers. Thus, it cannot access any game updates, DLC or online modes from games in other regions, or download said games digitally.
Conversely, all other versions of the Nintendo Switch are unable to play cartridge-based games made by Tencent specifically for the Chinese Nintendo Switch.[8] The Nintendo Switch 2 will also be region-free outside of Japan, like the Nintendo Switch; however, a cheaper Japan-exclusive model will be released alongside the regular worldwide version, which prevents the use of non-Japanese accounts and is locked to the Japanese language only.[10] All Nintendo handheld consoles except the Nintendo DSi and Nintendo 3DS models are fully region-free.[11] In the case of the former, only the physical and digital games that cannot be played on earlier DS models are region-locked. The latter's region lock strictly applies to all software designed for it, with the only exception being the application Nintendo 3DS Guide: Louvre,[12] which is not a game in and of itself but rather an application that serves as a guide for visitors of the Louvre Museum. Like the Wii, the 3DS's regional lockout can be bypassed by third-party software or custom firmware such as Luma3DS. The PlayStation and PlayStation 2 are region-locked into three regions: NTSC-U/C, NTSC-J, and PAL. It is possible to disable region locking on these systems via a modchip or by performing a disc swap when the console starts. In the case of the PlayStation 2, a special disc known as Swap Magic can be used to bypass regional locks, allowing imported games and game backups to run successfully on the system.[13] DVD movies on the PlayStation 2 are also region-locked.
All PlayStation 3 games except for Persona 4 Arena and Way of the Samurai 3 are region-free.[14][15] Although publishers could choose to region-lock specific games based on a mechanism that allows a game to query the model of the PS3, none did so during the first three years after the PS3's launch.[16] In the case of Persona 4 Arena,[17] publisher Atlus declined to reverse its decision despite substantial outcry from some of its fanbase.[18] The decision was made to avoid excessive importing, because all versions of the game share the same features and language support, but have differing price points and release dates in each region.[18] Atlus did, however, decide not to apply region-locking to its sequel (Persona 4 Arena Ultimax).[19] Region locking is present for backwards-compatible PlayStation and PlayStation 2 games, as well as DVD and Blu-ray movies. Additionally, some games separate online players by region, such as Metal Gear Solid 4: Guns of the Patriots. The PlayStation Store only contains content for its own country. For example, the EU store will not supply usable map packs for an imported U.S. copy of Call of Duty 4: Modern Warfare. In addition, downloadable content for PlayStation 3 systems is region-matched with the game itself. For example, DLC from the U.S. PlayStation Store must be used in conjunction with a U.S.–region game. More specifically, the PS3's file system includes region of origin, so DLC cannot be shared between different-region games, much as save files cannot. Also, the PSN Store is tied to each user's PSN account, and payment methods for PSN are also region-locked. For example, a user with a Japanese PSN account will only be able to access the Japanese PSN store despite owning a U.S. PS3, and can only pay for a game with a Japanese PSN gift card or Japanese credit card. However, with a few rare exceptions, notably Joysound Dive,[20] downloaded content from each PSN store is otherwise region-free, as are the PSOne and PS2 classics offered on the store.
Although the PlayStation Portable has no region locking for UMD games,[25] UMD movies are locked by region.[26] However, Sony has confirmed that it is possible to implement region-locking on the PSP, and the firmware will disable features based on region. For example, Asian-region PSPs will not display the "Extras" option on the XMB despite having been upgraded to the U.S. version of Firmware 6.20, preventing owners of such PSPs from installing the Comic Book Viewer and TV Streaming applications. As the applications are installed through a PC, and users from the region are not blocked from downloading them, it is possible to install them on non-Asian PSPs that have been imported into the region. While PlayStation Vita games had the potential to be region-locked, all games released for the system are region-free. As with the PlayStation 3, the PlayStation 4 and PlayStation 5 are not region-locked, although it is still possible to develop region-locked games. Sony's official stance is that it discourages developers from region-locking and will only relent in special cases (as with the PS3 and Persona 4 Arena).[27][28] However, as with the PlayStation 3, digital content such as downloadable content for games still requires a PSN account from the region the content was made for. That said, PSN accounts themselves are not region-locked, and an account for one region can be made on a console from another.[29] In the case of the PlayStation 5, both physical and digital versions of games no longer use region codes, instead using rating labels and SKU title IDs to determine the regional variant of a game.[30][31] Nevertheless, both distribution methods of PlayStation 5 games are region-free. The SG-1000 does not have region lockout between Japanese and Australian systems. The same applies to SC-3000 games on cartridge and cassette, as well as SF-7000 disks.
Western Sega Master Systems have a different cartridge shape and pinout from the Japanese cartridge connector, meaning that Sega Mark III and SG-1000 games are incompatible with them. A BIOS included with Western systems prevents Japanese cartridges (both Mark III and SG-1000) from being played on those systems, even with adapters. The Sega Card slot on these systems has the same pinout as its Japanese counterpart, but it cannot run Japanese and SG-1000 cards due to the lack of a certain code in the ROM header. This can be circumvented by removing the BIOS IC from the console. However, some European-only games such as Back to the Future Part III will refuse to boot on NTSC systems. Japanese games can be run on the Power Base Converter with the use of adapters, but it will not run SG-1000 games, regardless of region. Japanese Sega Mega Drive cartridges use the same pinout as Genesis and PAL Mega Drive cartridges but have a different shape and will not fit in the Genesis or PAL Mega Drive slot, which share the same shape (although the Genesis 3 in the U.S. will accept Japanese titles due to its wider slot). Japanese Mega Drive systems also have a piece of plastic that slides into place in the cartridge slot when the power switch is turned on, preventing the cartridge from being removed while the console is running; thus, an American or European cart inserted in a Japanese Mega Drive cannot be used (though minor modifications to the plastic locks will bypass this). The console's main board, which was designed with language and frequency jumper sets that originally activated features in the same ROM for different regions, was later used to enable software-based regional locks that display warning messages and prevent the game from being played. Switches, instead of the jumpers, were used to bypass the locks.
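The jumper sets mentioned above are exposed to software through the console's hardware version register, which is how later region-locked games detect where they are running: on real hardware this is the byte at $A10001, where bit 7 is commonly documented as indicating an overseas (non-Japanese) console and bit 6 as indicating PAL video. A simplified sketch of such a check (the register layout follows commonly documented behaviour; the game-side logic is illustrative):

```python
# Decode a Mega Drive/Genesis style version byte into a region, and
# model a region-locked game comparing it against its allowed regions.
def console_region(version_byte):
    overseas = bool(version_byte & 0x80)  # bit 7: 0 = Japan, 1 = export
    pal = bool(version_byte & 0x40)       # bit 6: 0 = NTSC, 1 = PAL
    if not overseas:
        return "Japan (NTSC)"
    return "Europe (PAL)" if pal else "Americas (NTSC)"

def game_allows(version_byte, allowed=("Americas (NTSC)",)):
    # A region-locked game refuses to start (showing a warning screen)
    # when the detected region is not one it was built for.
    return console_region(version_byte) in allowed
```

Region-switch modifications work precisely because they change what this register reports, without touching the game code at all.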
In region-locked games, if there is a multiple-language feature, the language can be changed with the switches after the game has booted up (as is the case with NTSC versions of Cyber Brawl/Cosmic Carnage for the 32X). Despite the console itself being region-locked, most of the games, especially ones made by Sega, were region-free and could be played in any region, provided the cartridge fits the console. Swapping cartridge shells between the two designs will also bypass the physical lockouts of the console. The Game Gear is region-free, and some games have a dual-language feature depending on the region of the system used. Puyo Puyo (whose title changes to Puzlow Kids on Western systems) and Donald no Magical World (whose title changes to Ronald McDonald in Magical World on Western systems) are both Japan-exclusive games, but if run on Western units, they will be fully translated. Sega Mega-CD games are region-locked. The region can be changed when making CD-R copies; however, this method is not completely foolproof (e.g., Sengoku Denshou on American consoles will freeze on the Sega license screen with a region-swapped CD-R copy). Furthermore, third-party accessories exist that can bypass the regional lockouts of Mega-CD games; these are inserted into the main console's cartridge slot. Flashcarts such as the Everdrive can also be used to boot any regional Mega-CD BIOS from the SD card.[32] Different-region Mega-CD consoles will also work with different-region Mega Drive/Genesis systems; however, this often carries the risk of damaging the add-on if the proper voltage converters are not used. The power voltage for the add-on varies depending on the region, therefore requiring voltage converters to prevent damage. For instance, the Japanese Mega-CD will work with the Genesis; however, it does not use the same AC voltage as the Genesis or the American Sega CD.
Likewise, the European Mega-CD will work with a Japanese Mega Drive or Genesis, but the differences in AC voltage are even greater than for a Japanese Mega-CD on a Genesis. In such cases, using a step-up (or step-down) converter is recommended. Most American Sega Saturn discs can be played in Japanese consoles, but most Japanese games are locked out for American and European consoles. Like the Mega Drive/Genesis, the use of a switch will circumvent the region lock, but won't change the language. In addition, the use of certain unlicensed backup/RAM cartridges will also allow a console to play games from different regions, except for games that use proprietary ROM-RAM carts. Games from different television systems may have graphical problems. Dreamcast GD-ROM discs are region-locked; however, this could be circumvented with the use of boot discs. MIL-CDs and backup CDs are region-free. The Xbox and the Xbox 360 are region-locked, but some games are region-free and will play in any region. Digital content through Xbox Live on the Xbox 360 and original Xbox, such as DLC, movies, and apps, is also region-locked. The Xbox One was initially planned to have a region-blocking policy that would have prevented its use outside its region in an effort to curb parallel importing. Microsoft later reversed the policy, and the final retail version of the console was not region-locked.[33] It was reported, though, that the console would be region-locked in China;[34] however, this decision has since been reverted as of April 2015.[35][36] The Xbox Series X is region-free. The TurboGrafx-16 and PC Engine (the Japanese model of the TurboGrafx-16) HuCards are region-locked; however, this can be circumvented with adapters. The Philips CD-i and the 3DO Interactive Multiplayer are region-free. Japanese 3DO units, however, have a kanji font in ROM, which is required by a few games. When such games can't find the font, they can lock up or be rendered unplayable.
The Neo Geo, Neo Geo CD, and Neo Geo Pocket line are also region-free. Amongst PC games, regional lockout is more difficult to enforce because both the game application and the operating system can be easily modified. Subscription-based online games often enforce a regional lock by blocking IP addresses (which can often be circumvented through an open proxy) or by requiring the user to enter a national ID number (which may be impossible to verify). Other games using regional lockout are rare but do exist. One example is the Windows version of The Orange Box, which uses Steam to enforce the regional lockout.[37] Steam also enforces a form of regional lockout in adherence to German law by offering German users special versions of some games with banned content – most notably swastikas – replaced.[38] Steam is also used to restrict the release of the PC port of Metal Gear Rising: Revengeance to the US and Europe only, due to Sony having an exclusivity deal with Konami in Asia, and to restrict the Asian release of the Final Fantasy XIII trilogy to the Japanese-only versions of the games. Besides law and licensing issues, there is also a financial reason for Steam to region-lock games, since in Russia and other CIS countries prices of games on Steam are much lower than in the EU or North America.[39] Hewlett-Packard printer cartridges have been regionalised since 2004.[40] Thus, they do not work in printers with a different region code, unless the user calls technical support for the device to be reassigned to the appropriate region.[41] HP printers have four regions. The region can be changed three times; after that, the printer is locked to a region.
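The IP-based lockouts described above boil down to mapping the client's address to a region and gating content on the result. A toy illustration (the address blocks and build names here are made up; real services use commercial geolocation databases):

```python
import ipaddress

# Map example address blocks (documentation ranges, not real allocations)
# to a region, then gate which build of a product gets served.
REGION_BLOCKS = {
    "US": ipaddress.ip_network("198.51.100.0/24"),
    "DE": ipaddress.ip_network("203.0.113.0/24"),
}

def region_of(ip):
    addr = ipaddress.ip_address(ip)
    for region, block in REGION_BLOCKS.items():
        if addr in block:
            return region
    return "UNKNOWN"

def serve_build(ip):
    # e.g. a vendor shipping a feature-reduced build to one region only
    return "us-build" if region_of(ip) == "US" else "intl-build"
```

The open-proxy and VPN circumventions mentioned in the text work because the service only ever sees the proxy's address, not the user's.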
Lexmark printers use different region-coding systems: one for models such as the OfficeEdge Pro4000, OfficeEdge Pro4000c, OfficeEdge Pro5500, OfficeEdge Pro5500t, and the CS310 and CS410 color laser printers, and another for models such as the MS710 and MS810 monochrome laser printers. Canon print cartridges for the Pixma MP 480 will not work in printers of that type with a different region code either (even when listed on the packaging of the Canon printer cartridges in question). Epson ink cartridges are also region-coded. Xerox also uses region codes. Their printers are shipped with neutral "factory" ink sticks with no region coding. Upon the installation of the first new ink stick after these factory sticks, the machine will set a region code based on the installed ink stick and will only accept ink sticks for that region from that point forward. Officially, only three starter ink sticks per color can be used; then the printer will no longer accept them and will require region-coded ink sticks, but there are workarounds for that problem. One method to bypass printer region coding is to store empty cartridges from the old region and refill them with the ink of cartridges from the new region, but many modern ink cartridges have chips and sensors to prevent refilling, which makes the process more difficult. On the Internet, one can find refilling instructions for various cartridge models. Another method is to have ink cartridges from the old region shipped to the new region. Some manufacturers of regionalized printers also offer region-free printers specially designed for travelers. Starting with the Samsung Galaxy Note 3, Samsung phones and tablets contained a warning label stating that the device would only operate with SIM cards from the region in which it was sold.
A spokesperson clarified the policy, stating that it was intended to prevent grey-market reselling, and that it only applied to the first SIM card inserted.[42] For a device to use a SIM card from another region, a call (or similar action) totaling five minutes or longer must first be performed with a SIM card from the local region. Samsung has changed the region-locking policy completely since it was first introduced in 2013. Previously, the CSC was generally decided based on which region the phone was meant for, and it could be changed by user input. The SIM card now plays a central role in deciding which CSC is active.[43] If the phone/tablet was purchased in one of the affected countries and the SIM card is from a different region, they cannot be used together. Although the restriction can be removed after using a local SIM card to make a call of at least five minutes, this only removes the restrictions for that particular country. Furthermore, the device may not work properly in some circumstances, such as with Samsung Pay. An unlocked device purchased in these markets can only work with service providers within the market where the device was purchased, despite having the bands to work on other networks. For example, if one buys an XAA/OYM device in the United States but is now living in China and using a Chinese SIM, the active CSC will not be changed to CHC/OYM, because the firmware does not include CHC in the multi-CSC file. As a result, the new SIM card cannot be used in a device made for another market. The same rules still apply as usual; however, not all countries will be unlocked for use. For example, if one buys a DBT/OXM device in Germany but is now living in Portugal and using a Portuguese SIM, the active CSC is potentially TPH/OXM, which makes the new SIM card usable in the new region. All countries except for South Korea, the USA, Canada, and mainland China fall in the B section.
As of 2022, Samsung has simplified most CSCs into larger ones, with all service providers supported within each region. For example, if one buys an XAR/OXM tablet in the United States but is now living in Taiwan and using a Chinese phone with a Taiwanese SIM card paired to the XAR tablet, the active CSC on both devices is potentially BRI/OXM, which makes the paired phone usable in the new region. However, if the SIM card is from mainland China, it cannot be paired with the tablet, since mainland China (CHN) is not part of the OXM multi-CSC file. With the release of the Galaxy Tab S8 series, Southeast Asia is now part of Asia Pacific (APAC does not include Indonesia, South Korea, Japan, Taiwan or China (mainland China and Hong Kong)). As of the Galaxy Tab S6, Wi-Fi–only devices are no longer region-locked, and the Call and Text on Other Devices features on those devices work with SIM cards from most regions (excluding mainland China and Indonesia). All regions (except for mainland China and Indonesia) share the same CSC in their firmware version. Starting with Windows Phone 7 for mobile devices[44] and Windows 8 for computers, not all display languages are preinstalled or available for download on all devices with an OEM license of Windows. Users may not see all the display languages listed as options on the device or as options available for download as separate language packs. The exact options depend on the device manufacturer and country/region of purchase (and the mobile operator, if the device features cellular connectivity). Microsoft believes region locking is necessary because these display languages (which contain additional features like text suggestions, speech languages, and supplementary fonts) can take up a significant amount of storage, which leaves less space for data and media and impacts device performance. For cellular-connected devices, some mobile operators may choose not to support particular languages.
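The SIM-driven CSC behaviour described above can be pictured as a lookup into the firmware's multi-CSC file: if the SIM's country has an entry, the device switches to that CSC; otherwise it keeps its current one and the SIM is unusable. A hypothetical sketch (the codes and mapping are illustrative, not Samsung's actual tables):

```python
# Hypothetical model of multi-CSC selection: each firmware ships one
# multi-CSC file listing the country->CSC entries it contains.
MULTI_CSC = {
    "OXM": {"DE": "DBT", "PT": "TPH", "TW": "BRI"},  # note: no "CN" entry
    "OYM": {"US": "XAA"},
}

def active_csc(multi_csc_file, sim_country, current_csc):
    table = MULTI_CSC[multi_csc_file]
    # SIM country present in the file -> switch CSC; absent -> the
    # device keeps its current CSC and the foreign SIM is rejected.
    return table.get(sim_country, current_csc)
```

This matches the examples in the text: a Portuguese SIM in a German DBT/OXM device can activate TPH, while a mainland Chinese SIM cannot activate CHC because OXM firmware has no entry for it.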
For example, a wireless carrier in North America may not feel comfortable supporting European languages. Starting in June 2021, Google began enforcing region-locking by preventing users from redeeming Google Play gift cards of other countries, despite user attempts to alter their location with the use of VPNs. Detection is based on location-history tracking and device fingerprint activity data associated with a user's Google account usage history.[45][46] There are provisions in place to restrict iOS features in specific locations, which include both hard-coding techniques (e.g., device model variants for different countries) and soft-coding techniques. For example, FaceTime is not available in the UAE, Oman, Qatar, and Saudi Arabia due to government restrictions on VoIP services.[47] An inspection of iOS 16.2's code showed that a new system, internally called "countryd", was added to precisely determine the user's location and enforce restrictions determined by government regulators. The system gathers multiple data sources, such as the current GPS location, the country code from the Wi-Fi router, and information obtained from the SIM card, to determine the user's current country.[48] In March 2024, Apple released iOS 17.4, introducing a distinct set of additional features for European Union residents to comply with the Digital Markets Act.[49][50] The expanded features include installation of apps from alternative marketplaces, installation of web browsers with alternative engines, management of default web browser settings, and utilization of alternative payment methods in the App Store. To prevent non-EU users from accessing the additional features introduced in iOS 17.4, Apple has put in place measures that include requiring a user's Apple ID region to be set to one of the EU's 27 member states, and performing routine on-device geolocation checks to ensure a user is physically present within the EU's borders.
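Combining several location signals, as the article describes for "countryd", can be sketched as a simple vote: take the country suggested by each available source and pick the most common answer. How Apple actually weighs these signals is not public; this is only an illustration of the idea:

```python
from collections import Counter

# Hedged sketch: pick the majority country among available signals
# (GPS fix, Wi-Fi router country code, SIM card country).
def determine_country(gps=None, wifi_cc=None, sim_cc=None):
    signals = [s for s in (gps, wifi_cc, sim_cc) if s]
    if not signals:
        return None  # no signal available; make no determination
    return Counter(signals).most_common(1)[0][0]
```

Using multiple independent signals makes spoofing harder: a VPN or fake GPS changes one input, but the SIM and Wi-Fi signals still disagree with it.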
Users who travel out of the EU can continue accessing alternative app marketplaces for a grace period of 30 days. Users who are away from the EU for too long lose access to these features; they can continue using previously installed apps but are unable to update or install apps from third-party app stores.[51][52]
https://en.wikipedia.org/wiki/Regional_lockout
Phone cloning is the copying of a cellular device's identity to another. Analogue mobile telephones were notorious for their lack of security.[1] Casual listeners easily heard conversations as plain narrowband FM; eavesdroppers with specialized equipment readily intercepted handset Electronic Serial Numbers (ESN) and Mobile Directory Numbers (MDN or CTN, the Cellular Telephone Number) over the air. The intercepted ESN/MDN pairs would be cloned onto another handset and used in other regions for making calls. Due to widespread fraud, some carriers required a PIN before making calls or used a system of radio fingerprinting to detect the clones. Code-Division Multiple Access (CDMA) mobile telephone cloning involves gaining access to the device's embedded file system /nvm/num directory via specialized software, or placing a modified EEPROM into the target mobile telephone, allowing the Electronic Serial Number (ESN) and/or Mobile Equipment Identifier (MEID) of the mobile phone to be changed. A phone's MEID can be obtained by opening the phone's dialler and typing *#06#.[2] The ESN or MEID is typically transmitted to the cellular company's Mobile Telephone Switching Office (MTSO) in order to authenticate a device onto the mobile network. Modifying these, as well as the phone's Preferred Roaming List (PRL) and the mobile identification number (MIN), can pave the way for fraudulent calls, as the target telephone is now a clone of the telephone from which the original ESN and MIN data were obtained. GSM cloning occurs by copying a secret key from the victim's SIM card,[3] typically not requiring any internal data from the handset (the phone itself). GSM handsets do not have an ESN or MIN, only an International Mobile Equipment Identity (IMEI) number. There are various methods used to obtain the IMEI. The most common method is to eavesdrop on a cellular network.
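The ESN and MEID are related: for networks that still expect a 32-bit ESN, 3GPP2 derives a pseudo-ESN from the 56-bit MEID by hashing it with SHA-1, keeping the 24 least-significant bits of the digest, and prefixing the reserved manufacturer code 0x80. A sketch of that derivation (the exact input formatting should be checked against the 3GPP2 specification):

```python
import hashlib

# Derive a pseudo-ESN from a 14-hex-digit MEID per the commonly
# documented 3GPP2 rule: pESN = 0x80 || low 24 bits of SHA-1(MEID).
def pseudo_esn(meid_hex: str) -> int:
    digest = hashlib.sha1(bytes.fromhex(meid_hex)).digest()
    low24 = int.from_bytes(digest[-3:], "big")
    return (0x80 << 24) | low24
```

Because many MEIDs map to the same pseudo-ESN, pseudo-ESNs are not unique identifiers, which is one reason networks moved to authenticating on the MEID itself.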
Older GSM SIM cards can be cloned by performing a cryptographic attack against the COMP128 authentication algorithm used by these older SIM cards.[4] By connecting the SIM card to a computer, the authentication procedure can be repeated many times in order to slowly leak information about the secret key. If this procedure is repeated enough times, it is possible to derive the Ki key.[5][6] Later GSM SIMs have various mitigations built in, either limiting the number of authentications performed in a power-on session, or using manufacturer-chosen resistant Ki keys. However, if it is known that a resistant key was used, it is possible to speed up the attack by eliminating weak Ki keys from the pool of possible keys. Phone cloning is outlawed in the United States by the Wireless Telephone Protection Act of 1998, which prohibits "knowingly using, producing, trafficking in, having control or custody of, or possessing hardware or software knowing that it has been configured to insert or modify telecommunication identifying information associated with or contained in a telecommunications instrument so that such instrument may be used to obtain telecommunications service without authorization."[7] The effectiveness of phone cloning is limited. Every mobile phone contains a radio fingerprint in its transmission signal which remains unique to that mobile despite changes to the phone's ESN, IMEI, or MIN. Thus, cellular companies are often able to catch cloned phones when there are discrepancies between the fingerprint and the ESN, IMEI, or MIN.[citation needed]
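The per-session query limit mentioned above can be modelled as a counter the card resets only on power-up; once the limit is hit, the card refuses further authentication commands, which frustrates attacks that need thousands of chosen challenges. A simplified model (real cards vary in limit and behaviour):

```python
# Simplified model of a SIM that caps authentication queries per
# power-on session; the attack needs many RANDs, so capping queries
# per session makes key extraction impractically slow.
class RateLimitedSim:
    def __init__(self, limit=10):
        self.limit = limit
        self.used = 0

    def power_cycle(self):
        self.used = 0  # counter resets only on power-up

    def run_gsm_algorithm(self, rand):
        if self.used >= self.limit:
            raise PermissionError("authentication limit reached")
        self.used += 1
        return b"SRES"  # placeholder for the real signed response
```

Power-cycling the card between batches still works, but it slows the attack by orders of magnitude compared with an unlimited query rate.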
https://en.wikipedia.org/wiki/Phone_cloning
A Subscriber Identity Module (SIM) card connector includes a connector body; the connector body defines a receptacle channel that extends inwardly from the front, and the receptacle channel further defines a first hole and a second hole. A plurality of terminals mount in the middle of the connector body; a switch terminal mounts in the connector body. The switch terminal has a fixing portion received in the first hole and a contacting portion received in the second hole; the contacting portion forms an arced surface, the top of which is inserted into the second hole and protrudes above the top surface of the housing base in the receiving cavity. The SIM card connector comprises a body having an accommodating space for disposing a SIM card and multiple connected-through receptacles for receiving conducting terminals. Through the conducting terminals, electrical signaling contact with the SIM card can be made. The connector further includes a guide arm having a first salient block and a second salient block, the first salient block and the second salient block being disposed on the respective sides of the guide arm, and a cover connected to the body for covering the accommodating space. Furthermore, the cover may connect with the guide arm through a pivot; the cover further comprises a groove. When the second salient block is moved, the first salient block shifts inside the groove and pushes the SIM card out from the accommodating space. Through the rapid development in wireless transmission technologies, all kinds of portable electronic products are produced. One of the most common and versatile electronic products is the mobile communication device. In a mobile telephone communication system, a mobile phone number generally corresponds to a SIM card.
As soon as a mobile phone user combines the SIM card with the mobile phone, the system is immediately able to identify the user, providing the kind of transmission services and data gathering services required for billing purposes.
https://en.wikipedia.org/wiki/SIM_connector
The Single Wire Protocol (SWP) is a specification for a single-wire connection between the SIM card and a near field communication (NFC) chip in a cell phone. It was under final review by the European Telecommunications Standards Institute (ETSI).[1][2] SWP is an interface between the contactless frontend (CLF) and the universal integrated circuit card (UICC/SIM card chip). It is a contact-based protocol that is used for contactless communication. The C6 pin of the UICC is connected to the CLF for SWP support. It is a bit-oriented full-duplex protocol, i.e. transmission and reception are possible at the same time. The CLF acts as the master and the UICC as the slave. The CLF provides the UICC with energy, a transmission clock, data, and a signal for bus management. The data to be transmitted are represented by the binary states of voltage and current on the single wire.
https://en.wikipedia.org/wiki/Single_Wire_Protocol
In telecommunications, a transponder is a device that, upon receiving a signal, emits a different signal in response.[1] The term is a blend of transmitter and responder.[2][3] In air navigation or radio frequency identification, a flight transponder is an automated transceiver in an aircraft that emits a coded identifying signal in response to an interrogating received signal. In a communications satellite, a satellite transponder receives signals over a range of uplink frequencies, usually from a satellite ground station; the transponder amplifies them and re-transmits them on a different set of downlink frequencies to receivers on Earth, often without changing the content of the received signal or signals. A communications satellite's channels are called transponders because each is a separate transceiver or repeater. With digital video data compression and multiplexing, several video and audio channels may travel through a single transponder on a single wideband carrier. Original analog video has only one channel per transponder, with subcarriers for audio and automatic transmission identification service (ATIS). Non-multiplexed radio stations can also travel in single channel per carrier (SCPC) mode, with multiple carriers (analog or digital) per transponder. This allows each station to transmit directly to the satellite, rather than paying for a whole transponder or using landlines to send it to an earth station for multiplexing with other stations. In fiber-optic communications, a transponder is the element that sends and receives the optical signal from a fiber. A transponder is typically characterized by its data rate and the maximum distance the signal can travel. The term "transponder" can apply to different items with important functional differences mentioned across academic and commercial literature; as a result, differences in transponder functionality also might influence the functional description of related optical modules such as transceivers and muxponders.
Another type of transponder occurs in identification friend or foe (IFF) systems in military aviation and in air traffic control secondary surveillance radar (beacon radar) systems for general aviation and commercial aviation.[7] Primary radar works best with large all-metal aircraft, but not so well on small, composite aircraft. Its range is also limited by terrain and rain or snow, and it also detects unwanted objects such as automobiles, hills, and trees. Furthermore, it cannot always estimate the altitude of an aircraft. Secondary radar overcomes these limitations, but it depends on a transponder in the aircraft to respond to interrogations from the ground station to make the plane more visible. Depending on the type of interrogation, the transponder sends back a transponder code (or "squawk code", Mode A) or altitude information (Mode C) to help air traffic controllers identify the aircraft and maintain separation between planes. Another mode, called Mode S (Mode Select), is designed to help avoid over-interrogation of the transponder (from many radars in busy areas) and to allow automatic collision avoidance. Mode S transponders are backward compatible with Modes A and C. Mode S is mandatory in controlled airspace in many countries. Some countries have also required, or are moving toward requiring, that all aircraft be equipped with Mode S, even in uncontrolled airspace. However, in the field of general aviation there have been objections to these moves, because of the cost, size, limited benefit to users in uncontrolled airspace, and, in the case of balloons and gliders, the power requirements during long flights. Transponders are used on some military aircraft to ensure ground personnel can verify the functionality of a missile's flight termination system prior to launch.
Such radar-enhancing transponders are needed because the enclosed weapon bays on modern aircraft interfere with the prelaunch flight termination system verification performed by range safety personnel during training test launches. The transponders re-radiate the signals, allowing for much longer communication distances.[8] The International Maritime Organization's International Convention for the Safety of Life at Sea (SOLAS) requires the Automatic Identification System (AIS) to be fitted aboard international voyaging ships of 300 or more gross tonnage (GT), and all passenger ships regardless of size.[9] AIS transmitters/receivers are generally called transponders, but they generally transmit autonomously, although coast stations can interrogate class B transponders on smaller vessels for additional information. In addition, navigational aids often have transponders called RACONs (radar beacons) designed to make them stand out on a ship's radar screen. Sonar transponders operate under water and are used to measure distance; they form the basis of underwater location marking, position tracking, and navigation. Electronic toll collection systems such as E-ZPass in the eastern United States use RFID transponders to identify vehicles.[10] Transponders are used in races for lap timing. A cable loop is dug into the race circuit near the start/finish line. Each individual runner or car has an active transponder with a unique ID code. When the individual passes the start/finish line, the lap time and the racing position are shown on the scoreboard. Passive and active RFID systems are used in motor sports and in off-road events such as Enduro and Hare and Hounds racing, where riders have a transponder on their person, normally on their arm.
When they complete a lap they swipe or touch the receiver, which is connected to a computer, and log their lap time. NASCAR uses transponders and cable loops placed at numerous points around the track to determine the lineup during a caution period. This system replaced a dangerous race back to the start-finish line. Many modern automobiles have keys with transponders hidden inside the plastic head of the key. The user of the car may not even be aware that the transponder is there, because there are no buttons to press. When a key is inserted into the ignition lock cylinder and turned, the car's computer sends a signal to the transponder. Unless the transponder replies with a valid code, the computer will not allow the engine to be started. Transponder keys have no battery; they are energized by the signal itself.[11][12] Transponders may also be used by residents to enter their gated communities. However, having more than one transponder causes problems. If a resident's car with a simple transponder is parked in the vicinity, any vehicle can come up to the automated gate and trigger the gate interrogation signal, which may get an acceptable response from the resident's car. Properly installed systems might involve beamforming, unique transponders for each vehicle, or simply obliging vehicles to be stored away from the gate.
https://en.wikipedia.org/wiki/Transponder
W-SIM (Willcom-SIM) is a SIM card developed by Willcom which, in addition to standard SIM functions, also has the core components of a cellular telephone (PHS), such as the radio receiver/transmitter, built inside.[1] It is currently used in some terminals (listed below), which do not have radio modules. The W-SIM core module is an extended version of a SIM card, containing not only user data but also all the transmission technology needed for a mobile device to work. A terminal device developer can build a customized terminal without designing and developing a radio module, and users can change to a different W-SIM radio module without changing the terminal itself. Like traditional handsets made by Willcom, W-SIM terminals can be used in Japan, including global roaming with Taiwan and Thailand. For Mainland China, China Netcom had W-SIMs available, though its PHS network is due to be phased out this year. In addition, a GSM version called the CM-G100 had been developed for use in other countries.
https://en.wikipedia.org/wiki/W-SIM
In quantum computing, the Brassard–Høyer–Tapp algorithm or BHT algorithm is a quantum algorithm that solves the collision problem. In this problem, one is given n and an r-to-1 function f : {1, …, n} → {1, …, n} and needs to find two inputs that f maps to the same output. The BHT algorithm only makes O(n^{1/3}) queries to f, which matches the lower bound of Ω(n^{1/3}) in the black box model.[1][2] The algorithm was discovered by Gilles Brassard, Peter Høyer, and Alain Tapp in 1997.[3] It uses Grover's algorithm, which was discovered the year before. Intuitively, the algorithm combines the square-root speedup from the birthday paradox using (classical) randomness with the square-root speedup from Grover's (quantum) algorithm. First, n^{1/3} inputs to f are selected at random and f is queried at all of them. If there is a collision among these inputs, then we return the colliding pair of inputs. Otherwise, all these inputs map to distinct values by f. Then Grover's algorithm is used to find a new input to f that collides. Since there are n inputs to f and n^{1/3} of these could form a collision with the already queried values, Grover's algorithm can find a collision with O(√(n / n^{1/3})) = O(n^{1/3}) extra queries to f.[3]
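The two-stage structure can be illustrated classically. In this sketch, the second stage is a plain classical scan standing in for Grover search, so it shows only the bookkeeping of the algorithm, not the quantum speedup:

```python
import random

def find_collision(f, n, rng=random):
    """Classical illustration of the BHT structure: query f on about
    n**(1/3) random inputs, then search the rest for a colliding input.
    In BHT the second stage is Grover search; here it is a plain scan."""
    k = max(1, round(n ** (1 / 3)))
    sample = rng.sample(range(1, n + 1), k)
    table = {}
    for x in sample:                      # stage 1: build the query table
        y = f(x)
        if y in table:
            return table[y], x            # collision inside the sample
        table[y] = x
    for x in range(1, n + 1):             # stage 2: Grover stands in here
        if x not in table.values():
            y = f(x)
            if y in table:
                return table[y], x
    return None

# Example: a 2-to-1 function on {1, ..., 8}
pair = find_collision(lambda x: (x - 1) % 4, 8, random.Random(0))
```

Stage 1 costs k = n^{1/3} queries; in the quantum algorithm, Grover search over the remaining inputs costs another O(√(n/k)) = O(n^{1/3}) queries, which is where the overall bound comes from.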
https://en.wikipedia.org/wiki/BHT_algorithm
A standard Sudoku contains 81 cells, in a 9×9 grid, and has 9 boxes, each box being the intersection of the first, middle, or last 3 rows and the first, middle, or last 3 columns. Each cell may contain a number from one to nine, and each number can only occur once in each row, column, and box. A Sudoku starts with some cells containing numbers (clues), and the goal is to solve the remaining cells. Proper Sudokus have one solution.[1] Players and investigators use a wide range of computer algorithms to solve Sudokus, study their properties, and make new puzzles, including Sudokus with interesting symmetries and other properties. There are several computer algorithms that will solve 9×9 puzzles (n = 9) in fractions of a second, but combinatorial explosion occurs as n increases, creating limits to the properties of Sudokus that can be constructed, analyzed, and solved as n increases. Some hobbyists have developed computer programs that will solve Sudoku puzzles using a backtracking algorithm, which is a type of brute force search.[3] Backtracking is a depth-first search (in contrast to a breadth-first search), because it will completely explore one branch to a possible solution before moving to another branch. Although it has been established that approximately 5.96 × 10^26 final grids exist, a brute force algorithm can be a practical method to solve Sudoku puzzles. A brute force algorithm visits the empty cells in some order, filling in digits sequentially, or backtracking when the number is found to be not valid.[4][5][6][7] Briefly, a program would solve a puzzle by placing the digit "1" in the first cell and checking if it is allowed to be there. If there are no violations (checking row, column, and box constraints) then the algorithm advances to the next cell and places a "1" in that cell. When checking for violations, if it is discovered that the "1" is not allowed, the value is advanced to "2".
If a cell is discovered where none of the 9 digits is allowed, then the algorithm leaves that cell blank and moves back to the previous cell. The value in that cell is then incremented by one. This is repeated until the allowed value in the last (81st) cell is discovered. The animation shows how a Sudoku is solved with this method. The puzzle's clues (red numbers) remain fixed while the algorithm tests each unsolved cell with a possible solution. Notice that the algorithm may discard all the previously tested values if it finds the existing set does not fulfil the constraints of the Sudoku. This method is guaranteed to find a solution if one exists; its disadvantage is that the solving time may be slow compared to algorithms modeled after deductive methods. One programmer reported that such an algorithm may typically require as few as 15,000 cycles, or as many as 900,000 cycles, to solve a Sudoku, each cycle being the change in position of a "pointer" as it moves through the cells of a Sudoku.[8][9] A different approach, which also uses backtracking, draws on the fact that in the solution to a standard Sudoku the distribution of every individual symbol (value) must be one of only 46656 patterns. In manual Sudoku solving this technique is referred to as pattern overlay or using templates, and is confined to filling in the last values only. A library with all the possible patterns may be loaded or created at program start. Every given symbol is then assigned a filtered set of those patterns which are in accordance with the given clues. In the last step, the actual backtracking part, patterns from these sets are combined or overlaid in a non-conflicting way until the one permissible combination is hit upon. The implementation is exceptionally easy when using bit vectors, because all the tests need only bit-wise logical operations, instead of nested iterations across rows and columns.
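The brute-force backtracking procedure described earlier can be sketched in a few lines. This is a minimal illustration, not an optimized solver:

```python
def solve(grid):
    """Brute-force backtracking: grid is a 9x9 list of lists, 0 = empty.
    Fills the grid in place; returns True if a solution was found."""
    for r in range(9):
        for c in range(9):
            if grid[r][c] == 0:
                for d in range(1, 10):
                    if allowed(grid, r, c, d):
                        grid[r][c] = d
                        if solve(grid):
                            return True
                grid[r][c] = 0            # dead end: reset and backtrack
                return False
    return True                           # no empty cells left: solved

def allowed(grid, r, c, d):
    """Check the row, column, and 3x3 box constraints for digit d at (r, c)."""
    if d in grid[r] or any(grid[i][c] == d for i in range(9)):
        return False
    br, bc = 3 * (r // 3), 3 * (c // 3)
    return all(grid[br + i][bc + j] != d for i in range(3) for j in range(3))
```

The recursion mirrors the "pointer" description above: advancing to the next cell is the recursive call, and returning False is the move back to the previous cell.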
Significant optimization can be achieved by reducing the sets of patterns even further during filtering. By testing every questionable pattern against all the reduced sets already accepted for the other symbols, the total number of patterns left for backtracking is greatly diminished. And as with all Sudoku brute-force techniques, run time can be vastly reduced by first applying some of the simplest solving practices, which may fill in some "easy" values. A Sudoku can be constructed to work against backtracking. Assuming the solver works from top to bottom (as in the animation), a puzzle with few clues (17), no clues in the top row, and "987654321" as the solution's first row works in opposition to the algorithm. Thus the program would spend significant time "counting" upward before it arrives at the grid which satisfies the puzzle. In one case, a programmer found that a brute force program required six hours to arrive at the solution for such a Sudoku (albeit using a 2008-era computer). Such a Sudoku can be solved nowadays in less than 1 second using an exhaustive search routine and faster processors.[10]: 25  Sudoku can be solved using stochastic (random-based) algorithms.[11][12] An example of this method is to randomly assign numbers to the blank cells, count the constraint violations, and then "shuffle" the inserted numbers until the number of violations drops to zero, at which point a solution to the puzzle has been found. Approaches for shuffling the numbers include simulated annealing, genetic algorithms and tabu search. Stochastic-based algorithms are known to be fast, though perhaps not as fast as deductive techniques. Unlike the latter, however, optimisation algorithms do not necessarily require problems to be logic-solvable, giving them the potential to solve a wider range of problems. Algorithms designed for graph colouring are also known to perform well with Sudokus.[13] It is also possible to express a Sudoku as an integer linear programming problem. Such approaches get close to a solution quickly, and can then use branching towards the end.
The simplex algorithm is able to solve proper Sudokus, indicating if the Sudoku is not valid (no solution). If there is more than one solution (non-proper Sudokus) the simplex algorithm will generally yield a solution with fractional amounts of more than one digit in some squares. However, for proper Sudokus, linear programming presolve techniques alone will deduce the solution without any need for simplex iterations. The logical rules used by presolve techniques for the reduction of LP problems include the set of logical rules used by humans to solve Sudokus. A Sudoku may also be modelled as a constraint satisfaction problem. In his paper Sudoku as a Constraint Problem,[14] Helmut Simonis describes many reasoning algorithms based on constraints which can be applied to model and solve problems. Some constraint solvers include a method to model and solve Sudokus, and a program may require fewer than 100 lines of code to solve a simple Sudoku.[15][16] If the code employs a strong reasoning algorithm, incorporating backtracking is only needed for the most difficult Sudokus. An algorithm combining a constraint-model-based algorithm with backtracking would have the advantage of fast solving time – of the order of a few milliseconds[17] – and the ability to solve all Sudokus.[5] Sudoku puzzles may be described as an exact cover problem, or more precisely, an exact hitting set problem. This allows for an elegant description of the problem and an efficient solution. Modelling Sudoku as an exact cover problem and using an algorithm such as Knuth's Algorithm X and his Dancing Links technique "is the method of choice for rapid finding [measured in microseconds] of all possible solutions to Sudoku puzzles."[18] An alternative approach is the use of Gauss elimination in combination with column and row striking. Let Q be the 9×9 Sudoku matrix, N = {1, 2, 3, 4, 5, 6, 7, 8, 9}, and X represent a generic row, column, or block. N supplies symbols for filling Q as well as the index set for the 9 elements of any X.
The given elements q in Q represent a univalent relation from Q to N. The solution R is a total relation and hence a function. Sudoku rules require that the restriction of R to X is a bijection, so any partial solution C, restricted to an X, is a partial permutation of N. Let T = {X : X is a row, column, or block of Q}, so T has 27 elements. An arrangement is either a partial permutation or a permutation on N. Let Z be the set of all arrangements on N. A partial solution C can be reformulated to include the rules as a composition of relations A (one-to-three) and B requiring compatible arrangements. Solutions of the puzzle, suggestions for new q to enter Q, come from the prohibited arrangements C̄, the complement of C in Q × Z; useful tools in this calculus of relations are residuals.
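The presolve-style logical rules mentioned earlier can be as simple as repeatedly filling any cell that has exactly one remaining candidate ("naked singles"). A minimal sketch, enough for easy puzzles, with harder ones still requiring search:

```python
def candidates(grid, r, c):
    """Digits allowed at empty cell (r, c) under row/column/box constraints.
    grid is a 9x9 list of lists with 0 marking empty cells."""
    used = set(grid[r]) | {grid[i][c] for i in range(9)}
    br, bc = 3 * (r // 3), 3 * (c // 3)
    used |= {grid[br + i][bc + j] for i in range(3) for j in range(3)}
    return set(range(1, 10)) - used

def propagate(grid):
    """Repeatedly fill cells that have a single candidate ('naked singles')."""
    progress = True
    while progress:
        progress = False
        for r in range(9):
            for c in range(9):
                if grid[r][c] == 0:
                    cand = candidates(grid, r, c)
                    if len(cand) == 1:
                        grid[r][c] = cand.pop()
                        progress = True
    return grid
```

Combining such propagation with the backtracking search discussed earlier gives the constraint-plus-backtracking hybrid the text describes: propagation prunes, and search handles whatever deduction cannot resolve.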
https://en.wikipedia.org/wiki/Sudoku_solving_algorithms#Sudoku_brute_force
Iteration is the repetition of a process in order to generate a (possibly unbounded) sequence of outcomes. Each repetition of the process is a single iteration, and the outcome of each iteration is then the starting point of the next iteration. In mathematics and computer science, iteration (along with the related technique of recursion) is a standard element of algorithms. In mathematics, iteration may refer to the process of iterating a function, i.e. applying a function repeatedly, using the output from one iteration as the input to the next. Iteration of apparently simple functions can produce complex behaviors and difficult problems – for examples, see the Collatz conjecture and juggler sequences. Another use of iteration in mathematics is in iterative methods, which are used to produce approximate numerical solutions to certain mathematical problems. Newton's method is an example of an iterative method. Manual calculation of a number's square root is a common use and a well-known example. In computing, iteration is the technique of marking out a block of statements within a computer program for a defined number of repetitions. That block of statements is said to be iterated; a computer scientist might also refer to that block of statements as an "iteration". Loops constitute the most common language constructs for performing iterations. The following pseudocode "iterates" three times the line of code between begin and end through a for loop, and uses the values of i as increments. It is permissible, and often necessary, to use values from other parts of the program outside the bracketed block of statements to perform the desired function. Iterators constitute alternative language constructs to loops, which ensure consistent iterations over specific data structures. They can eventually save time and effort in later coding attempts. In particular, an iterator allows one to repeat the same kind of operation at each node of such a data structure, often in some pre-defined order.
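A loop of the kind described (three iterations, using the values of i as increments) can be rendered in Python, assuming the block simply accumulates i:

```python
a = 0
for i in range(1, 4):    # i takes the values 1, 2, 3
    a = a + i            # use the value of i as the increment
print(a)                 # prints 6
```

The variable a lives outside the loop body, illustrating the point that the iterated block may use values from other parts of the program.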
Iteratees are purely functional language constructs which accept or reject data during the iterations. Recursion and iteration have different algorithmic definitions, even though they can generate identical effects/results. The primary difference is that recursion can be employed as a solution without prior knowledge as to how many times the action will have to repeat, while a successful iteration requires that foreknowledge. Some types of programming languages, known as functional programming languages, are designed such that they do not set up a block of statements for explicit repetition, as with the for loop. Instead, those programming languages exclusively use recursion. Rather than call out a block of code to be repeated a pre-defined number of times, the executing code block instead "divides" the work to be done into a number of separate pieces, after which the code block executes itself on each individual piece. Each piece of work will be divided repeatedly until the "amount" of work is as small as it can possibly be, at which point the algorithm will do that work very quickly. The algorithm then "reverses" and reassembles the pieces into a complete whole. The classic example of recursion is in list-sorting algorithms such as merge sort. The merge sort recursive algorithm will first repeatedly divide the list into consecutive pairs; each pair is then ordered, then each consecutive pair of pairs, and so forth until the elements of the list are in the desired order. The code below is an example of a recursive algorithm in the Scheme programming language that will output the same result as the pseudocode under the previous heading. In some schools of pedagogy, iterations are used to describe the process of teaching or guiding students to repeat experiments, assessments, or projects until more accurate results are found, or the student has mastered the technical skill. This idea is found in the old adage, "Practice makes perfect."
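The Scheme code referred to above does not survive in this extract; an equivalent recursive sketch in Python (the same idea: divide off one piece of work, recurse on the rest, then reassemble) that produces the same result as the three-iteration loop:

```python
def sum_to(n):
    """Recursively compute 1 + 2 + ... + n."""
    if n == 0:
        return 0                  # smallest piece of work: nothing to add
    return n + sum_to(n - 1)      # handle one piece, recurse on the rest

print(sum_to(3))                  # prints 6
```

Note that no loop counter is needed: the recursion bottoms out at the base case rather than running a predetermined number of times.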
In particular, "iterative" is defined as the "process of learning and development that involves cyclical inquiry, enabling multiple opportunities for people to revisit ideas and critically reflect on their implication."[1] Unlike computing and math, educational iterations are not predetermined; instead, the task is repeated until success according to some external criteria (often a test) is achieved.
https://en.wikipedia.org/wiki/Iteration#Computing
In number theory, an aurifeuillean factorization, named after Léon-François-Antoine Aurifeuille, is a factorization of certain integer values of the cyclotomic polynomials.[1] Because cyclotomic polynomials are irreducible polynomials over the integers, such a factorization cannot come from an algebraic factorization of the polynomial. Nevertheless, certain families of integers coming from cyclotomic polynomials have factorizations given by formulas applying to the whole family, as in the examples below. In 1869, before the discovery of aurifeuillean factorizations, Landry, through a tremendous manual effort,[8][9][10] obtained the following factorization into primes: 2^58 + 1 = 5 × 107367629 × 536903681. Three years later, in 1871, Aurifeuille discovered the nature of this factorization; the number 2^{4k+2} + 1 for k = 14, with the formula from the previous section, factors as:[2][8] 2^58 + 1 = (2^29 − 2^15 + 1)(2^29 + 2^15 + 1) = 536838145 × 536903681. Of course, Landry's full factorization follows from this (taking out the obvious factor of 5). The general form of the factorization was later discovered by Lucas.[2] 536903681 is an example of a Gaussian Mersenne norm.[9][10]
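The identity behind this split, 2^{4k+2} + 1 = (2^{2k+1} − 2^{k+1} + 1)(2^{2k+1} + 2^{k+1} + 1), follows from (a + 1)² − b² with a = 2^{2k+1} and b = 2^{k+1}, and can be checked numerically for Landry's case k = 14:

```python
k = 14
n = 2 ** (4 * k + 2) + 1                  # Landry's number, 2**58 + 1

# Aurifeuillean factors for base 2: 2**(2k+1) -/+ 2**(k+1) + 1
left = 2 ** (2 * k + 1) - 2 ** (k + 1) + 1
right = 2 ** (2 * k + 1) + 2 ** (k + 1) + 1

assert left * right == n                  # the aurifeuillean split
assert left == 5 * 107367629              # taking out the factor of 5
assert right == 536903681                 # the Gaussian Mersenne norm
```

What took Landry a tremendous manual effort reduces, once the algebraic form is known, to two subtractions and a multiplication.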
https://en.wikipedia.org/wiki/Aurifeuillean_factorization
Bach's algorithm is a probabilistic polynomial-time algorithm for generating random numbers along with their factorizations. It was published by Eric Bach in 1988. No algorithm is known that efficiently factors random numbers, so the straightforward method, namely generating a random number and then factoring it, is impractical.[1] The algorithm performs, in expectation, O(log n) primality tests. A simpler but less efficient algorithm (performing, in expectation, O((log n)^2) primality tests) is due to Adam Kalai.[2][3] Bach's algorithm may be used as part of certain methods for key generation in cryptography.[4] Bach's algorithm produces a number x uniformly at random in the range N/2 < x ≤ N (for a given input N), along with its factorization. It does this by picking a prime number p and an exponent a such that p^a ≤ N, according to a certain distribution. The algorithm then recursively generates a number y in the range M/2 < y ≤ M, where M = N/p^a, along with the factorization of y. It then sets x = p^a·y, and appends p^a to the factorization of y to produce the factorization of x. This gives x with logarithmic distribution over the desired range; rejection sampling is then used to get a uniform distribution.[1][5]
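Kalai's simpler algorithm can be sketched directly: generate a random non-increasing sequence N ≥ s₁ ≥ s₂ ≥ … ≥ 1, keep the prime terms, and accept their product x with probability x/N. The sketch below follows that outline; the naive trial-division primality test is for illustration only (a real implementation would use a fast probabilistic test such as Miller–Rabin):

```python
import random

def is_prime(n):
    """Naive trial-division primality test, for illustration only."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def kalai_random_factored(N, rng):
    """Return (x, prime factors of x) with x uniform in {1, ..., N},
    following Kalai's algorithm; retries internally until accepted."""
    while True:
        seq, s = [], N
        while s > 1:                  # random non-increasing sequence
            s = rng.randint(1, s)
            seq.append(s)
        primes = [t for t in seq if is_prime(t)]
        x = 1
        for p in primes:
            x *= p
        if x <= N and rng.random() < x / N:
            return x, primes          # accept with probability x/N

x, factors = kalai_random_factored(10**6, random.Random(1))
```

Repeated primes in the sequence are kept, so the output can contain prime powers; the factorization comes for free because x is assembled from primes rather than factored after the fact.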
https://en.wikipedia.org/wiki/Bach%27s_algorithm
In mathematics, the fundamental theorem of arithmetic, also called the unique factorization theorem and prime factorization theorem, states that every integer greater than 1 can be represented uniquely as a product of prime numbers, up to the order of the factors.[3][4][5] For example, 1200 = 2^4 · 3 · 5^2 = 2 · 2 · 2 · 2 · 3 · 5 · 5. The theorem says two things about this example: first, that 1200 can be represented as a product of primes, and second, that no matter how this is done, there will always be exactly four 2s, one 3, two 5s, and no other primes in the product. The requirement that the factors be prime is necessary: factorizations containing composite numbers may not be unique (for example, 12 = 2 · 6 = 3 · 4). This theorem is one of the main reasons why 1 is not considered a prime number: if 1 were prime, then factorization into primes would not be unique; for example, 2 = 2 · 1 = 2 · 1 · 1 = … The theorem generalizes to other algebraic structures that are called unique factorization domains and include principal ideal domains, Euclidean domains, and polynomial rings over a field. However, the theorem does not hold for algebraic integers.[6] This failure of unique factorization is one of the reasons for the difficulty of the proof of Fermat's Last Theorem. The implicit use of unique factorization in rings of algebraic integers is behind the error of many of the numerous false proofs that have been written during the 358 years between Fermat's statement and Wiles's proof. The fundamental theorem can be derived from Book VII, propositions 30, 31 and 32, and Book IX, proposition 14 of Euclid's Elements. If two numbers by multiplying one another make some number, and any prime number measure the product, it will also measure one of the original numbers. (In modern terminology: if a prime p divides the product ab, then p divides either a or b or both.) Proposition 30 is referred to as Euclid's lemma, and it is the key in the proof of the fundamental theorem of arithmetic.
Any composite number is measured by some prime number. (In modern terminology: every integer greater than one is divided evenly by some prime number.) Proposition 31 is proved directly by infinite descent. Any number either is prime or is measured by some prime number. Proposition 32 is derived from proposition 31, and proves that the decomposition is possible. If a number be the least that is measured by prime numbers, it will not be measured by any other prime number except those originally measuring it. (In modern terminology: a least common multiple of several prime numbers is not a multiple of any other prime number.) Book IX, proposition 14 is derived from Book VII, proposition 30, and proves partially that the decomposition is unique – a point critically noted by André Weil.[7] Indeed, in this proposition the exponents are all equal to one, so nothing is said for the general case. While Euclid took the first step on the way to the existence of prime factorization, Kamāl al-Dīn al-Fārisī took the final step[8] and stated for the first time the fundamental theorem of arithmetic.[9] Article 16 of Gauss's Disquisitiones Arithmeticae is an early modern statement and proof employing modular arithmetic.[1] Every positive integer n > 1 can be represented in exactly one way as a product of prime powers n = p1^{n1} · p2^{n2} ⋯ pk^{nk}, where p1 < p2 < … < pk are primes and the ni are positive integers. This representation is commonly extended to all positive integers, including 1, by the convention that the empty product is equal to 1 (the empty product corresponds to k = 0). This representation is called the canonical representation[10] of n, or the standard form[11][12] of n. For example, the canonical representation of 1200 is 2^4 · 3^1 · 5^2. Factors p^0 = 1 may be inserted without changing the value of n (for example, 1000 = 2^3 · 3^0 · 5^3). In fact, any positive integer can be uniquely represented as an infinite product taken over all the positive prime numbers, n = 2^{n1} · 3^{n2} · 5^{n3} · 7^{n4} ⋯, where a finite number of the ni are positive integers, and the others are zero.
Allowing negative exponents provides a canonical form for positive rational numbers. The canonical representations of the product, greatest common divisor (GCD), and least common multiple (LCM) of two numbers a and b can be expressed simply in terms of the canonical representations of a and b themselves: writing a = ∏ pi^{ai} and b = ∏ pi^{bi} (allowing zero exponents), one has a·b = ∏ pi^{ai+bi}, gcd(a, b) = ∏ pi^{min(ai, bi)}, and lcm(a, b) = ∏ pi^{max(ai, bi)}. However, integer factorization, especially of large numbers, is much more difficult than computing products, GCDs, or LCMs, so these formulas have limited use in practice. Many arithmetic functions are defined using the canonical representation. In particular, the values of additive and multiplicative functions are determined by their values on the powers of prime numbers. The proof uses Euclid's lemma (Elements VII, 30): if a prime divides the product of two integers, then it must divide at least one of these integers. It must be shown that every integer greater than 1 is either prime or a product of primes. First, 2 is prime. Then, by strong induction, assume this is true for all numbers greater than 1 and less than n. If n is prime, there is nothing more to prove. Otherwise, there are integers a and b, where n = ab, and 1 < a ≤ b < n. By the induction hypothesis, a = p1p2⋯pj and b = q1q2⋯qk are products of primes. But then n = ab = p1p2⋯pj·q1q2⋯qk is a product of primes. Suppose, to the contrary, there is an integer that has two distinct prime factorizations. Let n be the least such integer and write n = p1p2…pj = q1q2…qk, where each pi and qi is prime. We see that p1 divides q1q2…qk, so p1 divides some qi by Euclid's lemma. Without loss of generality, say p1 divides q1. Since p1 and q1 are both prime, it follows that p1 = q1. Returning to our factorizations of n, we may cancel these two factors to conclude that p2…pj = q2…qk. We now have two distinct prime factorizations of some integer strictly smaller than n, which contradicts the minimality of n. The fundamental theorem of arithmetic can also be proved without using Euclid's lemma.[13] The proof that follows is inspired by Euclid's original version of the Euclidean algorithm.
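The GCD and LCM formulas above can be made concrete with a small sketch (trial-division factorization, for illustration only):

```python
import math
from collections import Counter

def factorize(n):
    """Canonical representation of n as {prime: exponent}, by trial division."""
    exps = Counter()
    d = 2
    while d * d <= n:
        while n % d == 0:
            exps[d] += 1
            n //= d
        d += 1
    if n > 1:
        exps[n] += 1      # whatever remains is a prime factor
    return exps

def gcd_lcm(a, b):
    """GCD and LCM from canonical representations: min and max exponents."""
    fa, fb = factorize(a), factorize(b)
    g = math.prod(p ** min(fa[p], fb[p]) for p in fa.keys() & fb.keys())
    l = math.prod(p ** max(fa[p], fb[p]) for p in fa.keys() | fb.keys())
    return g, l
```

For example, gcd_lcm(1200, 90) yields 30 and 3600, matching the min/max-exponent formulas; as the text notes, for large numbers the factorization step makes this far slower than the Euclidean algorithm behind math.gcd.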
Assume that s is the smallest positive integer which is the product of prime numbers in two different ways. Incidentally, this implies that s, if it exists, must be a composite number greater than 1. Now, say

    s = p1 p2 ··· pm = q1 q2 ··· qn.

Every pi must be distinct from every qj. Otherwise, if say pi = qj, then there would exist some positive integer t = s/pi = s/qj that is smaller than s and has two distinct prime factorizations. One may also suppose that p1 < q1, by exchanging the two factorizations, if needed.

Setting P = p2···pm and Q = q2···qn, one has s = p1 P = q1 Q. Also, since p1 < q1, one has Q < P. It then follows that

    (q1 − p1) Q = p1 (P − Q) = s − p1 Q < s.

As the positive integers less than s have been supposed to have a unique prime factorization, p1 must occur in the factorization of either q1 − p1 or Q. The latter case is impossible, as Q, being smaller than s, must have a unique prime factorization, and p1 differs from every qj. The former case is also impossible, as, if p1 is a divisor of q1 − p1, it must be also a divisor of q1, which is impossible as p1 and q1 are distinct primes.

Therefore, there cannot exist a smallest integer with more than a single distinct prime factorization. Every positive integer must either be a prime number itself, which would factor uniquely, or a composite that also factors uniquely into primes, or in the case of the integer 1, not factor into any prime.

The first generalization of the theorem is found in Gauss's second monograph (1832) on biquadratic reciprocity.
This paper introduced what is now called the ring of Gaussian integers, the set of all complex numbers a + bi where a and b are integers. It is now denoted by Z[i]. He showed that this ring has the four units ±1 and ±i, that the non-zero, non-unit numbers fall into two classes, primes and composites, and that the composites have unique factorization as a product of primes (up to the order and multiplication by units).[14]

Similarly, in 1844, while working on cubic reciprocity, Eisenstein introduced the ring Z[ω], where ω = (−1 + √−3)/2, ω^3 = 1, is a cube root of unity. This is the ring of Eisenstein integers, and he proved it has the six units ±1, ±ω, ±ω^2 and that it has unique factorization.

However, it was also discovered that unique factorization does not always hold. An example is given by Z[√−5]. In this ring one has[15]

    6 = 2 · 3 = (1 + √−5)(1 − √−5).

Examples like this caused the notion of "prime" to be modified. In Z[√−5] it can be proven that if any of the factors above can be represented as a product, for example, 2 = ab, then one of a or b must be a unit. This is the traditional definition of "prime". It can also be proven that none of these factors obeys Euclid's lemma; for example, 2 divides neither (1 + √−5) nor (1 − √−5) even though it divides their product 6. In algebraic number theory 2 is called irreducible in Z[√−5] (only divisible by itself or a unit) but not prime in Z[√−5] (if it divides a product it must divide one of the factors).
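The two factorizations of 6 in Z[√−5] can be verified by direct computation, and the multiplicative norm N(a + b√−5) = a^2 + 5b^2 shows why 2 is irreducible there; a small sketch (the pair representation and helper names are our choice):

```python
# Elements of Z[sqrt(-5)] represented as pairs (a, b) meaning a + b*sqrt(-5).
def mul(x, y):
    a, b = x
    c, d = y
    # (a + b*sqrt(-5))(c + d*sqrt(-5)) = (ac - 5bd) + (ad + bc)*sqrt(-5)
    return (a * c - 5 * b * d, a * d + b * c)

def norm(x):
    """Multiplicative norm N(a + b*sqrt(-5)) = a^2 + 5b^2."""
    a, b = x
    return a * a + 5 * b * b

# 6 = 2 * 3 = (1 + sqrt(-5)) * (1 - sqrt(-5))
assert mul((2, 0), (3, 0)) == (6, 0)
assert mul((1, 1), (1, -1)) == (6, 0)

# A proper factor of 2 would need norm 2, since N(2) = 4 and norms multiply.
# But a^2 + 5b^2 = 2 has no integer solution (larger |a| or |b| only increase
# the norm), so 2 is irreducible in Z[sqrt(-5)].
assert all(norm((a, b)) != 2 for a in range(-2, 3) for b in range(-2, 3))
```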
The mention of Z[√−5] is required because 2 is prime and irreducible in Z. Using these definitions it can be proven that in any integral domain a prime must be irreducible. Euclid's classical lemma can be rephrased as "in the ring of integers Z every irreducible is prime". This is also true in Z[i] and Z[ω], but not in Z[√−5].

The rings in which factorization into irreducibles is essentially unique are called unique factorization domains. Important examples are polynomial rings over the integers or over a field, Euclidean domains and principal ideal domains.

In 1843 Kummer introduced the concept of ideal number, which was developed further by Dedekind (1876) into the modern theory of ideals, special subsets of rings. Multiplication is defined for ideals, and the rings in which they have unique factorization are called Dedekind domains.

There is a version of unique factorization for ordinals, though it requires some additional conditions to ensure uniqueness.

Any commutative Möbius monoid satisfies a unique factorization theorem and thus possesses arithmetical properties similar to those of the multiplicative semigroup of positive integers. The fundamental theorem of arithmetic is, in fact, a special case of the unique factorization theorem in commutative Möbius monoids.

The Disquisitiones Arithmeticae has been translated from Latin into English and German. The German edition includes all of his papers on number theory: all the proofs of quadratic reciprocity, the determination of the sign of the Gauss sum, the investigations into biquadratic reciprocity, and unpublished notes.

The two monographs Gauss published on biquadratic reciprocity have consecutively numbered sections: the first contains §§ 1–23 and the second §§ 24–76.
Footnotes referencing these are of the form "Gauss, BQ, § n". Footnotes referencing the Disquisitiones Arithmeticae are of the form "Gauss, DA, Art. n". These are in Gauss's Werke, Vol II, pp. 65–92 and 93–148; German translations are pp. 511–533 and 534–586 of the German edition of the Disquisitiones.
In mathematics, factorization (or factorisation, see English spelling differences) or factoring consists of writing a number or another mathematical object as a product of several factors, usually smaller or simpler objects of the same kind. For example, 3 × 5 is an integer factorization of 15, and (x − 2)(x + 2) is a polynomial factorization of x^2 − 4.

Factorization is not usually considered meaningful within number systems possessing division, such as the real or complex numbers, since any x can be trivially written as (xy) × (1/y) whenever y is not zero. However, a meaningful factorization for a rational number or a rational function can be obtained by writing it in lowest terms and separately factoring its numerator and denominator.

Factorization was first considered by ancient Greek mathematicians in the case of integers. They proved the fundamental theorem of arithmetic, which asserts that every positive integer may be factored into a product of prime numbers, which cannot be further factored into integers greater than 1. Moreover, this factorization is unique up to the order of the factors. Although integer factorization is a sort of inverse to multiplication, it is much more difficult algorithmically, a fact which is exploited in the RSA cryptosystem to implement public-key cryptography.

Polynomial factorization has also been studied for centuries. In elementary algebra, factoring a polynomial reduces the problem of finding its roots to finding the roots of the factors. Polynomials with coefficients in the integers or in a field possess the unique factorization property, a version of the fundamental theorem of arithmetic with prime numbers replaced by irreducible polynomials. In particular, a univariate polynomial with complex coefficients admits a unique (up to ordering) factorization into linear polynomials: this is a version of the fundamental theorem of algebra. In this case, the factorization can be done with root-finding algorithms.
The case of polynomials with integer coefficients is fundamental for computer algebra. There are efficient computer algorithms for computing (complete) factorizations within the ring of polynomials with rational number coefficients (see factorization of polynomials).

A commutative ring possessing the unique factorization property is called a unique factorization domain. There are number systems, such as certain rings of algebraic integers, which are not unique factorization domains. However, rings of algebraic integers satisfy the weaker property of Dedekind domains: ideals factor uniquely into prime ideals.

Factorization may also refer to more general decompositions of a mathematical object into the product of smaller or simpler objects. For example, every function may be factored into the composition of a surjective function with an injective function. Matrices possess many kinds of matrix factorizations. For example, every matrix has a unique LUP factorization as a product of a lower triangular matrix L with all diagonal entries equal to one, an upper triangular matrix U, and a permutation matrix P; this is a matrix formulation of Gaussian elimination.

By the fundamental theorem of arithmetic, every integer greater than 1 has a unique (up to the order of the factors) factorization into prime numbers, which are those integers which cannot be further factorized into the product of integers greater than one.

For computing the factorization of an integer n, one needs an algorithm for finding a divisor q of n or deciding that n is prime. When such a divisor is found, the repeated application of this algorithm to the factors q and n/q gives eventually the complete factorization of n.[1]

For finding a divisor q of n, if any, it suffices to test all values of q such that 1 < q and q^2 ≤ n. In fact, if r is a divisor of n such that r^2 > n, then q = n/r is a divisor of n such that q^2 ≤ n.
If one tests the values of q in increasing order, the first divisor that is found is necessarily a prime number, and the cofactor r = n/q cannot have any divisor smaller than q. For getting the complete factorization, it suffices thus to continue the algorithm by searching a divisor of r that is not smaller than q and not greater than √r.

There is no need to test all values of q for applying the method. In principle, it suffices to test only prime divisors. This requires a table of prime numbers that may be generated for example with the sieve of Eratosthenes. As the method of factorization does essentially the same work as the sieve of Eratosthenes, it is generally more efficient to test for a divisor only those numbers for which it is not immediately clear whether they are prime or not. Typically, one may proceed by testing 2, 3, 5, and the numbers > 5 whose last digit is 1, 3, 7, or 9 and whose digit sum is not a multiple of 3.

This method works well for factoring small integers, but is inefficient for larger integers. For example, Pierre de Fermat was unable to discover that the 6th Fermat number is not a prime number. In fact, applying the above method would require more than 10,000 divisions, for a number that has 10 decimal digits.

There are more efficient factoring algorithms. However, they remain relatively inefficient, as, with the present state of the art, one cannot factorize, even with the most powerful computers, a number of 500 decimal digits that is the product of two randomly chosen prime numbers. This ensures the security of the RSA cryptosystem, which is widely used for secure internet communication.

For factoring n = 1386 into primes: 1386 = 2 · 693 = 2 · 3 · 231 = 2 · 3 · 3 · 77 = 2 · 3^2 · 7 · 11.

Manipulating expressions is the basis of algebra. Factorization is one of the most important methods for expression manipulation for several reasons. If one can put an equation in a factored form E · F = 0, then the problem of solving the equation splits into two independent (and generally easier) problems E = 0 and F = 0.
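The trial-division procedure described above might be sketched as follows (a straightforward, unoptimized version that tests every q with q^2 ≤ n rather than only candidate primes; the function name is ours):

```python
def factor(n: int) -> list[int]:
    """Complete factorization by trial division, testing divisors q with q*q <= n."""
    primes = []
    q = 2
    while q * q <= n:
        while n % q == 0:     # the first divisor found is necessarily prime
            primes.append(q)
            n //= q           # continue with the cofactor
        q += 1
    if n > 1:
        primes.append(n)      # the remaining cofactor has no divisor <= sqrt(n), so it is prime
    return primes

print(factor(1386))  # [2, 3, 3, 7, 11], i.e. 1386 = 2 * 3^2 * 7 * 11
```

Because the divisors are tested in increasing order, each one found is prime, exactly as the text argues.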
When an expression can be factored, the factors are often much simpler, and may thus offer some insight on the problem. For example, an expression having 16 multiplications, 4 subtractions and 3 additions may be factored into a much simpler expression with only two multiplications and three subtractions. Moreover, the factored form immediately gives the roots x = a, b, c of the polynomial.

On the other hand, factorization is not always possible, and when it is possible, the factors are not always simpler. For example, x^10 − 1 can be factored into the two factors x − 1 and x^9 + x^8 + ⋯ + x^2 + x + 1, and the second factor is hardly simpler than the original polynomial.

Various methods have been developed for finding factorizations; some are described below.

Solving algebraic equations may be viewed as a problem of polynomial factorization. In fact, the fundamental theorem of algebra can be stated as follows: every polynomial in x of degree n with complex coefficients may be factorized into n linear factors x − a_i, for i = 1, ..., n, where the a_i are the roots of the polynomial.[2] Even though the structure of the factorization is known in these cases, the a_i generally cannot be computed in terms of radicals (nth roots), by the Abel–Ruffini theorem. In most cases, the best that can be done is computing approximate values of the roots with a root-finding algorithm.

The systematic use of algebraic manipulations for simplifying expressions (more specifically equations) may be dated to the 9th century, with al-Khwarizmi's book The Compendious Book on Calculation by Completion and Balancing, which is titled with two such types of manipulation. However, even for solving quadratic equations, the factoring method was not used before Harriot's work published in 1631, ten years after his death.[3] In his book Artis Analyticae Praxis ad Aequationes Algebraicas Resolvendas, Harriot drew tables for addition, subtraction, multiplication and division of monomials, binomials, and trinomials.
Then, in a second section, he set up the equation aa − ba + ca = +bc, and showed that this matches the form of multiplication he had previously provided, giving the factorization (a − b)(a + c).[4]

The following methods apply to any expression that is a sum, or that may be transformed into a sum. Therefore, they are most often applied to polynomials, though they also may be applied when the terms of the sum are not monomials, that is, the terms of the sum are a product of variables and constants.

It may occur that all terms of a sum are products and that some factors are common to all terms. In this case, the distributive law allows factoring out this common factor. If there are several such common factors, it is preferable to divide out the greatest such common factor. Also, if there are integer coefficients, one may factor out the greatest common divisor of these coefficients. For example,[5]

    6x^3y^2 + 8x^4y^3 − 10x^5y^3 = 2x^3y^2 (3 + 4xy − 5x^2y),

since 2 is the greatest common divisor of 6, 8, and 10, and x^3y^2 divides all terms.

Grouping terms may allow using other methods for getting a factorization. For example, to factor 4x^2 + 20x + 3xy + 15y, one may remark that the first two terms have a common factor x, and the last two terms have the common factor y. Thus

    4x^2 + 20x + 3xy + 15y = (4x^2 + 20x) + (3xy + 15y) = 4x(x + 5) + 3y(x + 5).

Then a simple inspection shows the common factor x + 5, leading to the factorization

    4x^2 + 20x + 3xy + 15y = (4x + 3y)(x + 5).

In general, this works for sums of 4 terms that have been obtained as the product of two binomials. Although not frequently, this may work also for more complicated examples.

Sometimes, some term grouping reveals part of a recognizable pattern. It is then useful to add and subtract terms to complete the pattern.
A typical use of this is the completing-the-square method for getting the quadratic formula.

Another example is the factorization of x^4 + 1. If one introduces the non-real square root of −1, commonly denoted i, then one has a difference of squares

    x^4 + 1 = (x^2 + i)(x^2 − i).

However, one may also want a factorization with real number coefficients. By adding and subtracting 2x^2, and grouping three terms together, one may recognize the square of a binomial:

    x^4 + 1 = (x^4 + 2x^2 + 1) − 2x^2 = (x^2 + 1)^2 − (x√2)^2 = (x^2 + x√2 + 1)(x^2 − x√2 + 1).

Subtracting and adding 2x^2 also yields the factorization

    x^4 + 1 = (x^4 − 2x^2 + 1) + 2x^2 = (x^2 − 1)^2 + (x√2)^2 = (x^2 + x√−2 − 1)(x^2 − x√−2 − 1).

These factorizations work not only over the complex numbers, but also over any field where either −1, 2 or −2 is a square. In a finite field, the product of two non-squares is a square; this implies that the polynomial x^4 + 1, which is irreducible over the integers, is reducible modulo every prime number. For example,

    x^4 + 1 ≡ (x + 1)^4 (mod 2);
    x^4 + 1 ≡ (x^2 + x − 1)(x^2 − x − 1) (mod 3), since 1^2 ≡ −2 (mod 3);
    x^4 + 1 ≡ (x^2 + 2)(x^2 − 2) (mod 5), since 2^2 ≡ −1 (mod 5);
    x^4 + 1 ≡ (x^2 + 3x + 1)(x^2 − 3x + 1) (mod 7), since 3^2 ≡ 2 (mod 7).

Many identities provide an equality between a sum and a product.
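The factorizations of x^4 + 1 modulo small primes can be verified by multiplying the stated factors with coefficients reduced mod p; a quick check (the helper `polymul_mod` is ours):

```python
def polymul_mod(f, g, p):
    """Multiply polynomials given as coefficient lists (lowest degree first), mod p."""
    out = [0] * (len(f) + len(g) - 1)
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            out[i + j] = (out[i + j] + a * b) % p
    return out

x4_plus_1 = [1, 0, 0, 0, 1]  # 1 + x^4 (symmetric, so coefficient order is harmless)

# (x + 1)^4 mod 2
sq = polymul_mod([1, 1], [1, 1], 2)
assert polymul_mod(sq, sq, 2) == x4_plus_1
# (x^2 + x - 1)(x^2 - x - 1) mod 3
assert polymul_mod([-1, 1, 1], [-1, -1, 1], 3) == x4_plus_1
# (x^2 + 2)(x^2 - 2) mod 5
assert polymul_mod([2, 0, 1], [-2, 0, 1], 5) == x4_plus_1
# (x^2 + 3x + 1)(x^2 - 3x + 1) mod 7
assert polymul_mod([1, 3, 1], [1, -3, 1], 7) == x4_plus_1
```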
The above methods may be used for letting the sum side of some identity appear in an expression, which may therefore be replaced by a product. Below are identities whose left-hand sides are commonly used as patterns (this means that the variables E and F that appear in these identities may represent any subexpression of the expression that has to be factorized).[6]

The nth roots of unity are the complex numbers each of which is a root of the polynomial x^n − 1. They are thus the numbers

    e^(2ikπ/n) = cos(2πk/n) + i sin(2πk/n)   for k = 0, …, n − 1.

It follows that for any two expressions E and F, one has:

    E^n − F^n = (E − F) ∏_{k=1}^{n−1} (E − F e^(2ikπ/n)),
    E^n + F^n = ∏_{k=0}^{n−1} (E − F e^((2k+1)iπ/n))   if n is even,
    E^n + F^n = (E + F) ∏_{k=1}^{n−1} (E + F e^(2ikπ/n))   if n is odd.

If E and F are real expressions, and one wants real factors, one has to replace every pair of complex conjugate factors by its product.
As the complex conjugate of e^(iα) is e^(−iα), and

    (a − b e^(iα))(a − b e^(−iα)) = a^2 − ab(e^(iα) + e^(−iα)) + b^2 e^(iα) e^(−iα) = a^2 − 2ab cos α + b^2,

one has the following real factorizations (one passes from one to the other by changing k into n − k or n + 1 − k, and applying the usual trigonometric formulas):

    E^(2n) − F^(2n) = (E − F)(E + F) ∏_{k=1}^{n−1} (E^2 − 2EF cos(kπ/n) + F^2)
                    = (E − F)(E + F) ∏_{k=1}^{n−1} (E^2 + 2EF cos(kπ/n) + F^2),

    E^(2n) + F^(2n) = ∏_{k=1}^{n} (E^2 + 2EF cos((2k−1)π/2n) + F^2)
                    = ∏_{k=1}^{n} (E^2 − 2EF cos((2k−1)π/2n) + F^2).

The cosines that appear in these factorizations are algebraic numbers, and may be expressed in terms of radicals (this is possible because their Galois group is cyclic); however, these radical expressions are too complicated to be used, except for low values of n. For example,

    a^4 + b^4 = (a^2 − √2 ab + b^2)(a^2 + √2 ab + b^2),
    a^5 − b^5 = (a − b)(a^2 + ((1 − √5)/2) ab + b^2)(a^2 + ((1 + √5)/2) ab + b^2),
    a^5 + b^5 = (a + b)(a^2 − ((1 − √5)/2) ab + b^2)(a^2 − ((1 + √5)/2) ab + b^2).

Often one wants a factorization with rational coefficients. Such a factorization involves cyclotomic polynomials.
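These real factorizations are easy to spot-check numerically, for instance a^4 + b^4 and a^5 − b^5 at sample values (the sample values a = 2, b = 3 are our own choice):

```python
import math

a, b = 2.0, 3.0

# a^4 + b^4 = (a^2 - sqrt(2)*ab + b^2)(a^2 + sqrt(2)*ab + b^2)
assert math.isclose(
    a**4 + b**4,
    (a*a - math.sqrt(2)*a*b + b*b) * (a*a + math.sqrt(2)*a*b + b*b))

# a^5 - b^5 = (a - b)(a^2 + ((1-sqrt(5))/2)ab + b^2)(a^2 + ((1+sqrt(5))/2)ab + b^2)
c_minus = (1 - math.sqrt(5)) / 2
c_plus = (1 + math.sqrt(5)) / 2
assert math.isclose(
    a**5 - b**5,
    (a - b) * (a*a + c_minus*a*b + b*b) * (a*a + c_plus*a*b + b*b))
```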
To express rational factorizations of sums and differences of powers, we need a notation for the homogenization of a polynomial: if P(x) = a0 x^n + a1 x^(n−1) + ⋯ + an, its homogenization is the bivariate polynomial P̄(x, y) = a0 x^n + a1 x^(n−1) y + ⋯ + an y^n. Then, one has

    E^n − F^n = ∏_{k | n} Q̄_k(E, F),
    E^n + F^n = ∏_{k | 2n, k ∤ n} Q̄_k(E, F),

where the products are taken over all divisors k of n, or all divisors k of 2n that do not divide n, and Q_k(x) is the kth cyclotomic polynomial. For example,

    a^6 − b^6 = Q̄_1(a,b) Q̄_2(a,b) Q̄_3(a,b) Q̄_6(a,b) = (a − b)(a + b)(a^2 − ab + b^2)(a^2 + ab + b^2),
    a^6 + b^6 = Q̄_4(a,b) Q̄_12(a,b) = (a^2 + b^2)(a^4 − a^2b^2 + b^4),

since the divisors of 6 are 1, 2, 3, 6, and the divisors of 12 that do not divide 6 are 4 and 12.

For polynomials, factorization is strongly related with the problem of solving algebraic equations. An algebraic equation has the form

    P(x) = 0,

where P(x) is a polynomial in x with leading coefficient a0 ≠ 0. A solution of this equation (also called a root of the polynomial) is a value r of x such that P(r) = 0.

If P(x) = Q(x) R(x) is a factorization of P(x), then the roots of P(x) are the union of the roots of Q(x) and the roots of R(x). Thus solving P(x) = 0 is reduced to the simpler problems of solving Q(x) = 0 and R(x) = 0.

Conversely, the factor theorem asserts that, if r is a root of P(x) = 0, then P(x) may be factored as

    P(x) = (x − r) Q(x),

where Q(x) is the quotient of the Euclidean division of P(x) by the linear (degree one) factor x − r.
If the coefficients of P(x) are real or complex numbers, the fundamental theorem of algebra asserts that P(x) has a real or complex root. Using the factor theorem recursively, it results that

    P(x) = a0 (x − r1) ⋯ (x − rn),

where r1, …, rn are the real or complex roots of P, with some of them possibly repeated. This complete factorization is unique up to the order of the factors.

If the coefficients of P(x) are real, one generally wants a factorization where factors have real coefficients. In this case, the complete factorization may have some quadratic (degree two) factors. This factorization may easily be deduced from the above complete factorization. In fact, if r = a + ib is a non-real root of P(x), then its complex conjugate s = a − ib is also a root of P(x). So, the product

    (x − r)(x − s) = x^2 − 2ax + a^2 + b^2

is a factor of P(x) with real coefficients. Repeating this for all non-real factors gives a factorization with linear or quadratic real factors.

For computing these real or complex factorizations, one needs the roots of the polynomial, which may not be computed exactly, and only approximated using root-finding algorithms.

In practice, most algebraic equations of interest have integer or rational coefficients, and one may want a factorization with factors of the same kind. The fundamental theorem of arithmetic may be generalized to this case, stating that polynomials with integer or rational coefficients have the unique factorization property. More precisely, every polynomial with rational coefficients may be factorized in a product

    P(x) = q P1(x) ⋯ Pk(x),

where q is a rational number and P1(x), …, Pk(x) are non-constant polynomials with integer coefficients that are irreducible and primitive; this means that none of the Pi(x) may be written as the product of two polynomials (with integer coefficients) that are neither 1 nor −1 (integers are considered as polynomials of degree zero). Moreover, this factorization is unique up to the order of the factors and the signs of the factors.
There are efficient algorithms for computing this factorization, which are implemented in most computer algebra systems. See Factorization of polynomials. Unfortunately, these algorithms are too complicated to use for paper-and-pencil computations. Besides the heuristics above, only a few methods are suitable for hand computations, which generally work only for polynomials of low degree, with few nonzero coefficients. The main such methods are described in the next subsections.

Every polynomial with rational coefficients may be factorized, in a unique way, as the product of a rational number and a polynomial with integer coefficients, which is primitive (that is, the greatest common divisor of the coefficients is 1), and has a positive leading coefficient (coefficient of the term of the highest degree).

In this factorization, the rational number is called the content, and the primitive polynomial is the primitive part. The computation of this factorization may be done as follows: firstly, reduce all coefficients to a common denominator, for getting the quotient by an integer q of a polynomial with integer coefficients. Then one divides out the greatest common divisor p of the coefficients of this polynomial for getting the primitive part, the content being p/q. Finally, if needed, one changes the signs of p and all coefficients of the primitive part.

This factorization may produce a result that is larger than the original polynomial (typically when there are many coprime denominators), but, even when this is the case, the primitive part is generally easier to manipulate for further factorization.

The factor theorem states that, if r is a root of a polynomial P(x) = a0 x^n + ⋯ + an, meaning P(r) = 0, then there is a factorization

    P(x) = (x − r) Q(x),

where Q(x) = b0 x^(n−1) + ⋯ + b_(n−1), with a0 = b0. Then polynomial long division or synthetic division give the remaining coefficients of Q(x). This may be useful when one knows or can guess a root of the polynomial.
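The content/primitive-part computation described above might be sketched as follows (coefficients are given highest degree first; the function name is ours, and a nonzero leading coefficient is assumed):

```python
from fractions import Fraction
from math import gcd, lcm

def content_and_primitive(coeffs: list[Fraction]):
    """Split a rational polynomial (coefficients highest degree first, leading
    coefficient nonzero) into content * primitive part, with the primitive
    part having integer coefficients, gcd 1, and a positive leading coefficient."""
    q = lcm(*(c.denominator for c in coeffs))    # common denominator
    ints = [int(c * q) for c in coeffs]          # integer coefficients
    p = gcd(*(abs(c) for c in ints))             # gcd of the integer coefficients
    if ints[0] < 0:
        p = -p                                   # fix the sign of the leading coefficient
    # division is exact here, since p divides every coefficient
    return Fraction(p, q), [c // p for c in ints]

# (1/2)x^2 + (5/3)x + 1  =  (1/6) * (3x^2 + 10x + 6)
content, primitive = content_and_primitive([Fraction(1, 2), Fraction(5, 3), Fraction(1)])
print(content, primitive)  # 1/6 [3, 10, 6]
```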
For example, for P(x) = x^3 − 3x + 2, one may easily see that the sum of its coefficients is 0, so r = 1 is a root. As r + 0 = 1, and r^2 + 0r − 3 = −2, one has

    x^3 − 3x + 2 = (x − 1)(x^2 + x − 2) = (x − 1)^2 (x + 2).

For polynomials with rational number coefficients, one may search for roots which are rational numbers. Primitive part-content factorization (see above) reduces the problem of searching for rational roots to the case of polynomials with integer coefficients having no non-trivial common divisor.

If x = p/q is a rational root of such a polynomial P(x) = a0 x^n + ⋯ + an, the factor theorem shows that one has a factorization

    P(x) = (qx − p) Q(x),

where both factors have integer coefficients (the fact that Q has integer coefficients results from the above formula for the quotient of P(x) by x − p/q). Comparing the coefficients of degree n and the constant coefficients in the above equality shows that, if p/q is a rational root in reduced form, then q is a divisor of a0, and p is a divisor of an. Therefore, there is a finite number of possibilities for p and q, which can be systematically examined.[7]

For example, if the polynomial

    P(x) = 2x^3 − 7x^2 + 10x − 6

has a rational root p/q with q > 0, then p must divide 6; that is p ∈ {±1, ±2, ±3, ±6}, and q must divide 2, that is q ∈ {1, 2}. Moreover, if x < 0, all terms of the polynomial are negative, and, therefore, a root cannot be negative. That is, one must have

    p/q ∈ {1, 2, 3, 6, 1/2, 3/2}.

A direct computation shows that only 3/2 is a root, so there can be no other rational root. Applying the factor theorem leads finally to the factorization 2x^3 − 7x^2 + 10x − 6 = (2x − 3)(x^2 − 2x + 2).

The above method may be adapted for quadratic polynomials, leading to the ac method of factorization.[8]

Consider the quadratic polynomial with integer coefficients

    P(x) = ax^2 + bx + c.
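The rational-root test might be sketched as a brute-force search over candidates p/q, run here on 2x^3 − 7x^2 + 10x − 6 from the example above (the function name is ours; a nonzero constant term is assumed, otherwise factor out x first):

```python
from fractions import Fraction
from itertools import product

def rational_roots(coeffs: list[int]) -> set[Fraction]:
    """All rational roots of a polynomial with integer coefficients.

    coeffs is highest degree first, e.g. [2, -7, 10, -6] for 2x^3 - 7x^2 + 10x - 6.
    Candidates p/q must have p dividing the constant term and q dividing the
    leading coefficient; assumes a nonzero constant term."""
    lead, const = coeffs[0], coeffs[-1]

    def divisors(n: int) -> list[int]:
        n = abs(n)
        return [d for d in range(1, n + 1) if n % d == 0]

    roots = set()
    for p, q in product(divisors(const), divisors(lead)):
        for cand in (Fraction(p, q), Fraction(-p, q)):
            # evaluate the polynomial exactly at the candidate
            value = sum(c * cand ** k for k, c in enumerate(reversed(coeffs)))
            if value == 0:
                roots.add(cand)
    return roots

print(rational_roots([2, -7, 10, -6]))  # {Fraction(3, 2)}
```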
If it has a rational root, its denominator must divide a evenly, and the root may be written as a possibly reducible fraction r1 = r/a. By Vieta's formulas, the other root r2 is

    r2 = −(b/a) − r1 = −(b + r)/a = s/a,

with s = −(b + r). Thus the second root is also rational, and Vieta's second formula r1 r2 = c/a gives

    (r/a)(s/a) = c/a,

that is

    rs = ac.

Checking all pairs of integers whose product is ac gives the rational roots, if any.

In summary, if ax^2 + bx + c has rational roots, there are integers r and s such that rs = ac and r + s = −b (a finite number of cases to test), and the roots are r/a and s/a. In other words, one has the factorization

    a(ax^2 + bx + c) = (ax − r)(ax − s).

For example, consider a quadratic polynomial with b = 13 and ac = 36. Inspection of the factor pairs of 36 leads to 4 + 9 = 13 = b, giving the two roots and the factorization.

Any univariate quadratic polynomial ax^2 + bx + c can be factored using the quadratic formula:

    ax^2 + bx + c = a(x − α)(x − β),   α, β = (−b ± √(b^2 − 4ac)) / (2a),

where α and β are the two roots of the polynomial. If a, b, c are all real, the factors are real if and only if the discriminant b^2 − 4ac is non-negative. Otherwise, the quadratic polynomial cannot be factorized into non-constant real factors.

The quadratic formula is valid when the coefficients belong to any field of characteristic different from two, and, in particular, for coefficients in a finite field with an odd number of elements.[9]

There are also formulas for roots of cubic and quartic polynomials, which are, in general, too complicated for practical use. The Abel–Ruffini theorem shows that there are no general root formulas in terms of radicals for polynomials of degree five or higher.

It may occur that one knows some relationship between the roots of a polynomial and its coefficients.
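The ac method reduces to a finite search for the pair r, s; a sketch (the example polynomial 6x^2 + 13x + 6 is our own illustration, chosen so that ac = 36 and b = 13 as in the text; ac ≠ 0 is assumed):

```python
from fractions import Fraction

def ac_method(a: int, b: int, c: int):
    """Rational roots of a*x^2 + b*x + c via the ac method: find integers
    r, s with r*s == a*c and r + s == -b; the roots are then r/a and s/a.
    Assumes a*c != 0; returns None when there are no rational roots."""
    ac = a * c
    for r in range(-abs(ac), abs(ac) + 1):
        if r != 0 and ac % r == 0:
            s = ac // r
            if r + s == -b:
                return Fraction(r, a), Fraction(s, a)
    return None

# 6x^2 + 13x + 6: ac = 36, and (-4) + (-9) = -13 = -b, so the roots are -4/6 and -9/6
roots = ac_method(6, 13, 6)
assert set(roots) == {Fraction(-2, 3), Fraction(-3, 2)}
```

The roots −2/3 and −3/2 correspond to the factorization 6x^2 + 13x + 6 = (3x + 2)(2x + 3).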
Using this knowledge may help factoring the polynomial and finding its roots. Galois theory is based on a systematic study of the relations between roots and coefficients, that include Vieta's formulas.

Here, we consider the simpler case where two roots x1 and x2 of a polynomial P(x) satisfy the relation

    x2 = Q(x1),

where Q is a polynomial. This implies that x1 is a common root of P(Q(x)) and P(x). It is therefore a root of the greatest common divisor of these two polynomials. It follows that this greatest common divisor is a non-constant factor of P(x). The Euclidean algorithm for polynomials allows computing this greatest common factor.

For example,[10] if one knows or guesses that P(x) = x^3 − 5x^2 − 16x + 80 has two roots that sum to zero, one may apply the Euclidean algorithm to P(x) and P(−x). The first division step consists in adding P(x) to P(−x), giving the remainder

    −10(x^2 − 16).

Then, dividing P(x) by x^2 − 16 gives zero as a new remainder, and x − 5 as a quotient, leading to the complete factorization

    x^3 − 5x^2 − 16x + 80 = (x − 5)(x − 4)(x + 4).

The integers and the polynomials over a field share the property of unique factorization, that is, every nonzero element may be factored into a product of an invertible element (a unit, ±1 in the case of integers) and a product of irreducible elements (prime numbers, in the case of integers), and this factorization is unique up to rearranging the factors and shifting units among the factors. Integral domains which share this property are called unique factorization domains (UFDs). Greatest common divisors exist in UFDs, but not every integral domain in which greatest common divisors exist (known as a GCD domain) is a UFD. Every principal ideal domain is a UFD.
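The polynomial Euclidean algorithm used in this example might be sketched with exact rational arithmetic (coefficient lists are highest degree first; the function names are ours):

```python
from fractions import Fraction

def poly_divmod(f, g):
    """Euclidean division of polynomials over Q.
    Coefficient lists, highest degree first; returns (quotient, remainder)."""
    f = [Fraction(c) for c in f]
    g = [Fraction(c) for c in g]
    q = []
    while len(f) >= len(g):
        k = f[0] / g[0]                              # leading-term quotient
        q.append(k)
        pad = g + [Fraction(0)] * (len(f) - len(g))  # g shifted up to deg(f)
        f = [a - k * b for a, b in zip(f, pad)][1:]  # subtract; leading term cancels
    while f and f[0] == 0:                           # strip leading zeros of the remainder
        f = f[1:]
    return q, f

def poly_gcd(f, g):
    """Monic greatest common divisor via the Euclidean algorithm."""
    f = [Fraction(c) for c in f]
    g = [Fraction(c) for c in g]
    while g:
        f, g = g, poly_divmod(f, g)[1]
    return [c / f[0] for c in f]   # normalize to leading coefficient 1

# P(x) = x^3 - 5x^2 - 16x + 80 and P(-x); their GCD is x^2 - 16
print(poly_gcd([1, -5, -16, 80], [-1, -5, 16, 80]))  # [1, 0, -16] as Fractions
```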
A Euclidean domain is an integral domain on which is defined a Euclidean division similar to that of integers. Every Euclidean domain is a principal ideal domain, and thus a UFD. In a Euclidean domain, Euclidean division allows defining a Euclidean algorithm for computing greatest common divisors. However, this does not imply the existence of a factorization algorithm: there is an explicit example of a field $F$ such that there cannot exist any factorization algorithm in the Euclidean domain $F[x]$ of the univariate polynomials over $F$. In algebraic number theory, the study of Diophantine equations led mathematicians, during the 19th century, to introduce generalizations of the integers called algebraic integers. The first rings of algebraic integers that were considered were the Gaussian integers and the Eisenstein integers, which share with the usual integers the property of being principal ideal domains, and thus have the unique factorization property. Unfortunately, it soon appeared that most rings of algebraic integers are not principal and do not have unique factorization. The simplest example is $\mathbb{Z}[{\sqrt{-5}}]$, in which $6 = 2 \cdot 3 = (1 + \sqrt{-5})(1 - \sqrt{-5})$, and all these factors are irreducible. This lack of unique factorization is a major difficulty for solving Diophantine equations. For example, many wrong proofs of Fermat's Last Theorem (probably including Fermat's "truly marvelous proof of this, which this margin is too narrow to contain") were based on the implicit supposition of unique factorization. This difficulty was resolved by Dedekind, who proved that the rings of algebraic integers have unique factorization of ideals: in these rings, every ideal is a product of prime ideals, and this factorization is unique up to the order of the factors. The integral domains that have this unique factorization property are now called Dedekind domains. They have many nice properties that make them fundamental in algebraic number theory.
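The failure of unique factorization in $\mathbb{Z}[\sqrt{-5}]$ can be checked mechanically using the norm $N(a + b\sqrt{-5}) = a^2 + 5b^2$, which is multiplicative. A small Python sketch (the pair representation and helper names are my own):

```python
def mul(x, y):
    """Multiply a + b*sqrt(-5) and c + d*sqrt(-5), represented as pairs (a, b)."""
    a, b = x
    c, d = y
    return (a * c - 5 * b * d, a * d + b * c)

def norm(x):
    """N(a + b*sqrt(-5)) = a^2 + 5*b^2; multiplicative, so a proper factor
    of an element must have a norm that properly divides its norm."""
    a, b = x
    return a * a + 5 * b * b

# Two genuinely different factorizations of 6 in Z[sqrt(-5)]:
assert mul((2, 0), (3, 0)) == (6, 0)      # 2 * 3
assert mul((1, 1), (1, -1)) == (6, 0)     # (1 + sqrt(-5)) * (1 - sqrt(-5))

# No element has norm 2 or 3 (a^2 + 5b^2 = 2 or 3 has no integer solution),
# so the elements of norm 4, 9 and 6 above are all irreducible.
assert not any(a * a + 5 * b * b in (2, 3)
               for a in range(-3, 4) for b in range(-3, 4))
print([norm(x) for x in [(2, 0), (3, 0), (1, 1), (1, -1)]])  # [4, 9, 6, 6]
```

Since any factor of an element of norm 4, 9, or 6 would need norm 2 or 3, and no such element exists, the two factorizations of 6 cannot be refined to a common one.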
Matrix rings are non-commutative and have no unique factorization: there are, in general, many ways of writing a matrix as a product of matrices. Thus, the factorization problem consists of finding factors of specified types. For example, the LU decomposition gives a matrix as the product of a lower triangular matrix by an upper triangular matrix. As this is not always possible, one generally considers the "LUP decomposition", which has a permutation matrix as its third factor. See Matrix decomposition for the most common types of matrix factorizations. A logical matrix represents a binary relation, and matrix multiplication corresponds to composition of relations. Decomposition of a relation through factorization serves to profile the nature of the relation, such as a difunctional relation.
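As an illustration of the LUP decomposition mentioned above, here is a minimal pure-Python sketch of Gaussian elimination with partial pivoting (one standard way to compute it; the function names are my own):

```python
def lup_decompose(A):
    """LU decomposition with partial pivoting: returns (L, U, P) with
    P @ A = L @ U, L unit lower triangular, U upper triangular."""
    n = len(A)
    U = [row[:] for row in A]
    L = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    P = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    for k in range(n):
        # pivot: bring the row with the largest entry in column k to the top
        p = max(range(k, n), key=lambda i: abs(U[i][k]))
        U[k], U[p] = U[p], U[k]
        P[k], P[p] = P[p], P[k]
        for j in range(k):          # keep L consistent with the row swap
            L[k][j], L[p][j] = L[p][j], L[k][j]
        for i in range(k + 1, n):   # eliminate below the pivot
            L[i][k] = U[i][k] / U[k][k]
            U[i] = [U[i][j] - L[i][k] * U[k][j] for j in range(n)]
    return L, U, P

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

# A plain LU fails here (the top-left entry is zero), but LUP succeeds:
A = [[0.0, 2.0], [4.0, 3.0]]
L, U, P = lup_decompose(A)
assert matmul(P, A) == matmul(L, U)   # exact for this small example
```

The example matrix shows why the permutation factor is needed: without a row swap, the first elimination step would divide by zero.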
https://en.wikipedia.org/wiki/Factorization
Innumber theory, amultiplicative partitionorunordered factorizationof an integern{\displaystyle n}is a way of writingn{\displaystyle n}as a product of integers greater than 1, treating two products as equivalent if they differ only in the ordering of the factors. The numbern{\displaystyle n}is itself considered one of these products. Multiplicative partitions closely parallel the study ofmultipartite partitions,[1]which are additivepartitionsof finite sequences of positive integers, with the addition madepointwise. Although the study of multiplicative partitions has been ongoing since at least 1923, the name "multiplicative partition" appears to have been introduced byHughes & Shallit (1983).[2]The Latin name "factorisatio numerorum" had been used previously.MathWorlduses the termunordered factorization. Hughes & Shallit (1983)describe an application of multiplicative partitions in classifying integers with a given number of divisors. For example, the integers with exactly 12 divisors take the formsp11{\displaystyle p^{11}},p⋅q5{\displaystyle p\cdot q^{5}},p2⋅q3{\displaystyle p^{2}\cdot q^{3}}, andp⋅q⋅r2{\displaystyle p\cdot q\cdot r^{2}}, wherep{\displaystyle p},q{\displaystyle q}, andr{\displaystyle r}are distinctprime numbers; these forms correspond to the multiplicative partitions12{\displaystyle 12},2⋅6{\displaystyle 2\cdot 6},3⋅4{\displaystyle 3\cdot 4}, and2⋅2⋅3{\displaystyle 2\cdot 2\cdot 3}respectively. More generally, for each multiplicative partitionk=∏ti{\displaystyle k=\prod t_{i}}of the integerk{\displaystyle k}, there corresponds a class of integers having exactlyk{\displaystyle k}divisors, of the form where eachpi{\displaystyle p_{i}}is a distinct prime. 
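The multiplicative partitions of a small number can be enumerated directly by recursive search. A Python sketch (the function name and the non-increasing tuple convention are my own choices):

```python
def multiplicative_partitions(n, max_factor=None):
    """All unordered factorizations of n into factors > 1, as non-increasing
    tuples. The number n itself counts as the one-factor product."""
    if max_factor is None:
        max_factor = n
    if n == 1:
        return [()]
    result = []
    for d in range(min(n, max_factor), 1, -1):
        if n % d == 0:
            for rest in multiplicative_partitions(n // d, d):
                result.append((d,) + rest)
    return result

print(multiplicative_partitions(12))
# [(12,), (6, 2), (4, 3), (3, 2, 2)] -- matching the four forms p^11,
# p*q^5, p^2*q^3 and p*q*r^2 for integers with exactly 12 divisors
```

Capping each recursive call's factors at the previous factor `d` is what makes the factorizations unordered: each one is generated exactly once, in non-increasing order.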
This correspondence follows from the multiplicative property of the divisor function.[2] Oppenheim (1926) credits MacMahon (1923) with the problem of counting the number of multiplicative partitions of $n$;[3][4] this problem has since been studied by others under the Latin name of factorisatio numerorum. If the number of multiplicative partitions of $n$ is $a_n$, MacMahon and Oppenheim observed that its Dirichlet series generating function $f(s)$ has the product representation[3][4] $f(s) = \sum_{n=1}^{\infty} \frac{a_n}{n^s} = \prod_{k=2}^{\infty} \frac{1}{1 - k^{-s}}.$ The sequence of numbers $a_n$ begins 1, 1, 1, 2, 1, 2, 1, 3, 2, 2, 1, 4, … Oppenheim also claimed an upper bound on $a_n$, of the form[3] $a_n \leq n \left( \exp \frac{\log n \log \log \log n}{\log \log n} \right)^{-2 + o(1)},$ but as Canfield, Erdős & Pomerance (1983) showed, this bound is erroneous and the true bound is[5] $a_n \leq n \left( \exp \frac{\log n \log \log \log n}{\log \log n} \right)^{-1 + o(1)}.$ Both of these bounds are not far from linear in $n$: they are of the form $n^{1 - o(1)}$.
However, the typical value ofan{\displaystyle a_{n}}is much smaller: the average value ofan{\displaystyle a_{n}}, averaged over an intervalx≤n≤x+N{\displaystyle x\leq n\leq x+N}, isa¯=exp⁡(4log⁡N2elog⁡log⁡N(1+o(1))),{\displaystyle {\bar {a}}=\exp \left({\frac {4{\sqrt {\log N}}}{{\sqrt {2e}}\log \log N}}{\bigl (}1+o(1){\bigr )}\right),}a bound that is of the formno(1){\displaystyle n^{o(1)}}.[6] Canfield, Erdős & Pomerance (1983)observe, andLuca, Mukhopadhyay & Srinivas (2010)prove, that most numbers cannot arise as the numberan{\displaystyle a_{n}}of multiplicative partitions of somen{\displaystyle n}: the number of values less thanN{\displaystyle N}which arise in this way isNO(log⁡log⁡log⁡N/log⁡log⁡N){\displaystyle N^{O(\log \log \log N/\log \log N)}}.[5][6]Additionally, Luca et al. show that most values ofn{\displaystyle n}are not multiples ofan{\displaystyle a_{n}}: the number of valuesn≤N{\displaystyle n\leq N}such thatan{\displaystyle a_{n}}dividesn{\displaystyle n}isO(N/log1+o(1)⁡N){\displaystyle O(N/\log ^{1+o(1)}N)}.[6]
https://en.wikipedia.org/wiki/Multiplicative_partition
Innumber theory, thep-adicvaluationorp-adic orderof anintegernis theexponentof the highest power of theprime numberpthatdividesn. It is denotedνp(n){\displaystyle \nu _{p}(n)}. Equivalently,νp(n){\displaystyle \nu _{p}(n)}is the exponent to whichp{\displaystyle p}appears in the prime factorization ofn{\displaystyle n}. Thep-adicvaluation is avaluationand gives rise to an analogue of the usualabsolute value. Whereas thecompletionof the rational numbers with respect to the usual absolute value results in thereal numbersR{\displaystyle \mathbb {R} }, the completion of the rational numbers with respect to thep{\displaystyle p}-adic absolute value results in thep-adicnumbersQp{\displaystyle \mathbb {Q} _{p}}.[1] Letpbe aprime number. Thep-adic valuationof an integern{\displaystyle n}is defined to be whereN0{\displaystyle \mathbb {N} _{0}}denotes the set ofnatural numbers(including zero) andm∣n{\displaystyle m\mid n}denotesdivisibilityofn{\displaystyle n}bym{\displaystyle m}. In particular,νp{\displaystyle \nu _{p}}is a functionνp:Z→N0∪{∞}{\displaystyle \nu _{p}\colon \mathbb {Z} \to \mathbb {N} _{0}\cup \{\infty \}}.[2] For example,ν2(−12)=2{\displaystyle \nu _{2}(-12)=2},ν3(−12)=1{\displaystyle \nu _{3}(-12)=1}, andν5(−12)=0{\displaystyle \nu _{5}(-12)=0}since|−12|=12=22⋅31⋅50{\displaystyle |{-12}|=12=2^{2}\cdot 3^{1}\cdot 5^{0}}. The notationpk∥n{\displaystyle p^{k}\parallel n}is sometimes used to meank=νp(n){\displaystyle k=\nu _{p}(n)}.[3] Ifn{\displaystyle n}is a positive integer, then this follows directly fromn≥pνp(n){\displaystyle n\geq p^{\nu _{p}(n)}}. Thep-adic valuation can be extended to therational numbersas the function defined by For example,ν2(98)=−3{\displaystyle \nu _{2}{\bigl (}{\tfrac {9}{8}}{\bigr )}=-3}andν3(98)=2{\displaystyle \nu _{3}{\bigl (}{\tfrac {9}{8}}{\bigr )}=2}since98=2−3⋅32{\displaystyle {\tfrac {9}{8}}=2^{-3}\cdot 3^{2}}. 
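The definition translates directly into code: strip factors of $p$ and count them, and extend to $\mathbb{Q}$ via $\nu_p(r/s) = \nu_p(r) - \nu_p(s)$. A Python sketch (function names are my own):

```python
from fractions import Fraction

def vp(n, p):
    """p-adic valuation of an integer n: the exponent of the highest
    power of the prime p dividing n (infinity for n = 0)."""
    if n == 0:
        return float("inf")
    n = abs(n)
    k = 0
    while n % p == 0:
        n //= p
        k += 1
    return k

def vp_rational(q, p):
    """Extension to the rationals via vp(r/s) = vp(r) - vp(s)."""
    q = Fraction(q)
    return vp(q.numerator, p) - vp(q.denominator, p)

# Examples from the text: |-12| = 2^2 * 3^1 * 5^0 and 9/8 = 2^-3 * 3^2.
assert (vp(-12, 2), vp(-12, 3), vp(-12, 5)) == (2, 1, 0)
assert (vp_rational(Fraction(9, 8), 2), vp_rational(Fraction(9, 8), 3)) == (-3, 2)
```

Because `Fraction` keeps numerator and denominator coprime, at most one of the two valuations in `vp_rational` is nonzero for any given prime.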
Some properties are $\nu_p(m \cdot n) = \nu_p(m) + \nu_p(n)$ and $\nu_p(m + n) \geq \min\bigl(\nu_p(m), \nu_p(n)\bigr)$. Moreover, if $\nu_p(r) \neq \nu_p(s)$, then $\nu_p(r + s) = \min\bigl(\nu_p(r), \nu_p(s)\bigr)$, where $\min$ is the minimum (i.e. the smaller of the two). Legendre's formula shows that $\nu_p(n!) = \sum_{i=1}^{\infty} \left\lfloor \frac{n}{p^i} \right\rfloor$. For any positive integer $n$, $n = \frac{n!}{(n-1)!}$ and so $\nu_p(n) = \nu_p(n!) - \nu_p((n-1)!)$. Therefore, $\nu_p(n) = \sum_{i=1}^{\infty} \left( \left\lfloor \frac{n}{p^i} \right\rfloor - \left\lfloor \frac{n-1}{p^i} \right\rfloor \right)$. This infinite sum can be reduced to $\sum_{i=1}^{\lfloor \log_p n \rfloor} \left( \left\lfloor \frac{n}{p^i} \right\rfloor - \left\lfloor \frac{n-1}{p^i} \right\rfloor \right)$. This formula can be extended to negative integer values to give: $\nu_p(n) = \sum_{i=1}^{\lfloor \log_p |n| \rfloor} \left( \left\lfloor \frac{|n|}{p^i} \right\rfloor - \left\lfloor \frac{|n|-1}{p^i} \right\rfloor \right).$ The p-adic absolute value (or p-adic norm,[6] though not a norm in the sense of analysis) on $\mathbb{Q}$ is the function defined by $|r|_p = p^{-\nu_p(r)}.$ Thereby, $|0|_p = p^{-\infty} = 0$ for all $p$, and for example, $|-12|_2 = 2^{-2} = \tfrac{1}{4}$ and $\bigl|\tfrac{9}{8}\bigr|_2 = 2^{-(-3)} = 8$. The p-adic absolute value satisfies the following properties.
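Legendre's formula is easy to verify numerically against the direct valuation of $n!$; a short Python check (helper names are my own):

```python
from math import factorial

def vp(n, p):
    """p-adic valuation of a positive integer n."""
    k = 0
    while n % p == 0:
        n //= p
        k += 1
    return k

def legendre(n, p):
    """Legendre's formula: vp(n!) = sum over i >= 1 of floor(n / p^i).
    The sum is finite in practice since the terms vanish once p^i > n."""
    total, power = 0, p
    while power <= n:
        total += n // power
        power *= p
    return total

for n in range(1, 50):
    for p in (2, 3, 5, 7):
        assert legendre(n, p) == vp(factorial(n), p)
        # ... and the telescoping identity vp(n) = vp(n!) - vp((n-1)!):
        assert vp(n, p) == legendre(n, p) - legendre(n - 1, p)
print(legendre(10, 2))  # 5 + 2 + 1 = 8, i.e. 2^8 exactly divides 10!
```

Truncating the loop once $p^i > n$ is exactly the reduction of the infinite sum to $i \leq \lfloor \log_p n \rfloor$ described in the text.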
From themultiplicativity|rs|p=|r|p|s|p{\displaystyle |rs|_{p}=|r|_{p}|s|_{p}}it follows that|1|p=1=|−1|p{\displaystyle |1|_{p}=1=|-1|_{p}}for theroots of unity1{\displaystyle 1}and−1{\displaystyle -1}and consequently also|−r|p=|r|p.{\displaystyle |{-r}|_{p}=|r|_{p}.}Thesubadditivity|r+s|p≤|r|p+|s|p{\displaystyle |r+s|_{p}\leq |r|_{p}+|s|_{p}}follows from thenon-Archimedeantriangle inequality|r+s|p≤max(|r|p,|s|p){\displaystyle |r+s|_{p}\leq \max \left(|r|_{p},|s|_{p}\right)}. The choice of basepin theexponentiationp−νp(r){\displaystyle p^{-\nu _{p}(r)}}makes no difference for most of the properties, but supports the product formula: where the product is taken over all primespand the usual absolute value, denoted|r|0{\displaystyle |r|_{0}}. This follows from simply taking theprime factorization: each prime power factorpk{\displaystyle p^{k}}contributes its reciprocal to itsp-adic absolute value, and then the usualArchimedeanabsolute value cancels all of them. Ametric spacecan be formed on the setQ{\displaystyle \mathbb {Q} }with a (non-Archimedean,translation-invariant) metric defined by ThecompletionofQ{\displaystyle \mathbb {Q} }with respect to this metric leads to the setQp{\displaystyle \mathbb {Q} _{p}}ofp-adic numbers.
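The product formula can be checked exactly with rational arithmetic: for a nonzero rational $q$, the ordinary absolute value times the product of $|q|_p$ over all primes $p$ dividing numerator or denominator equals 1. A Python sketch (helper names are my own):

```python
from fractions import Fraction

def vp(n, p):
    """p-adic valuation of a nonzero integer."""
    n, k = abs(n), 0
    while n % p == 0:
        n //= p
        k += 1
    return k

def abs_p(q, p):
    """p-adic absolute value |q|_p = p^(-vp(q)) for nonzero rational q."""
    q = Fraction(q)
    return Fraction(p) ** -(vp(q.numerator, p) - vp(q.denominator, p))

# Product formula: |q| * prod_p |q|_p == 1.
q = Fraction(-63, 550)               # = -(3^2 * 7) / (2 * 5^2 * 11)
prod = abs(q)
for p in (2, 3, 5, 7, 11):           # every prime appearing in q
    prod *= abs_p(q, p)
assert prod == 1
assert abs_p(Fraction(9, 8), 2) == 8  # example from the text
```

Each prime power $p^k$ in the factorization contributes $p^{-k}$ to the product, so the p-adic factors jointly cancel the ordinary absolute value, exactly as the text describes.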
https://en.wikipedia.org/wiki/P-adic_valuation
Innumber theoryandcombinatorics, apartitionof a non-negativeintegern, also called aninteger partition, is a way of writingnas asumofpositive integers. Two sums that differ only in the order of theirsummandsare considered the same partition. (If order matters, the sum becomes acomposition.) For example,4can be partitioned in five distinct ways: The only partition of zero is the empty sum, having no parts. The order-dependent composition1 + 3is the same partition as3 + 1, and the two distinct compositions1 + 2 + 1and1 + 1 + 2represent the same partition as2 + 1 + 1. An individual summand in a partition is called apart. The number of partitions ofnis given by thepartition functionp(n). Sop(4) = 5. The notationλ⊢nmeans thatλis a partition ofn. Partitions can be graphically visualized withYoung diagramsorFerrers diagrams. They occur in a number of branches ofmathematicsandphysics, including the study ofsymmetric polynomialsand of thesymmetric groupand ingroup representation theoryin general. The seven partitions of 5 are Some authors treat a partition as a non-increasing sequence of summands, rather than an expression with plus signs. For example, the partition 2 + 2 + 1 might instead be written as thetuple(2, 2, 1)or in the even more compact form(22, 1)where the superscript indicates the number of repetitions of a part. This multiplicity notation for a partition can be written alternatively as1m12m23m3⋯{\displaystyle 1^{m_{1}}2^{m_{2}}3^{m_{3}}\cdots }, wherem1is the number of 1's,m2is the number of 2's, etc. (Components withmi= 0may be omitted.) For example, in this notation, the partitions of 5 are written51,1141,2131,1231,1122,1321{\displaystyle 5^{1},1^{1}4^{1},2^{1}3^{1},1^{2}3^{1},1^{1}2^{2},1^{3}2^{1}}, and15{\displaystyle 1^{5}}. There are two common diagrammatic methods to represent partitions: as Ferrers diagrams, named afterNorman Macleod Ferrers, and as Young diagrams, named afterAlfred Young. 
Both have several possible conventions; here, we useEnglish notation, with diagrams aligned in the upper-left corner. The partition 6 + 4 + 3 + 1 of the number 14 can be represented by the following diagram: The 14 circles are lined up in 4 rows, each having the size of a part of the partition. The diagrams for the 5 partitions of the number 4 are shown below: An alternative visual representation of an integer partition is itsYoung diagram(often also called a Ferrers diagram). Rather than representing a partition with dots, as in the Ferrers diagram, the Young diagram uses boxes or squares. Thus, the Young diagram for the partition 5 + 4 + 1 is while the Ferrers diagram for the same partition is While this seemingly trivial variation does not appear worthy of separate mention, Young diagrams turn out to be extremely useful in the study ofsymmetric functionsandgroup representation theory: filling the boxes of Young diagrams with numbers (or sometimes more complicated objects) obeying various rules leads to a family of objects calledYoung tableaux, and these tableaux have combinatorial and representation-theoretic significance.[1]As a type of shape made by adjacent squares joined together, Young diagrams are a special kind ofpolyomino.[2] Thepartition functionp(n){\displaystyle p(n)}counts the partitions of a non-negative integern{\displaystyle n}. For instance,p(4)=5{\displaystyle p(4)=5}because the integer4{\displaystyle 4}has the five partitions1+1+1+1{\displaystyle 1+1+1+1},1+1+2{\displaystyle 1+1+2},1+3{\displaystyle 1+3},2+2{\displaystyle 2+2}, and4{\displaystyle 4}. The values of this function forn=0,1,2,…{\displaystyle n=0,1,2,\dots }are: Thegenerating functionofp{\displaystyle p}is Noclosed-form expressionfor the partition function is known, but it has bothasymptotic expansionsthat accurately approximate it andrecurrence relationsby which it can be calculated exactly. 
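For small $n$, the partitions (and hence $p(n)$) can be generated directly by recursive enumeration; a brute-force Python sketch, with each partition as a non-increasing tuple (my own convention):

```python
def partitions(n, max_part=None):
    """All partitions of n as non-increasing tuples of positive integers."""
    if max_part is None:
        max_part = n
    if n == 0:
        return [()]   # the empty sum is the only partition of zero
    result = []
    for k in range(min(n, max_part), 0, -1):
        for rest in partitions(n - k, k):
            result.append((k,) + rest)
    return result

print(partitions(4))
# [(4,), (3, 1), (2, 2), (2, 1, 1), (1, 1, 1, 1)] -- so p(4) = 5
assert len(partitions(5)) == 7
```

Bounding each recursive call by the previous part `max_part` guarantees non-increasing order, so every partition appears exactly once regardless of summand order.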
It grows as an exponential function of the square root of its argument:[3] $p(n) \sim \frac{1}{4n\sqrt{3}} \exp\left( \pi \sqrt{\frac{2n}{3}} \right)$ as $n \to \infty$. In 1937, Hans Rademacher found a way to represent the partition function $p(n)$ by the convergent series $p(n) = \frac{1}{\pi\sqrt{2}} \sum_{k=1}^{\infty} A_k(n) \sqrt{k} \cdot \frac{d}{dn} \left( \frac{1}{\sqrt{n - \frac{1}{24}}} \sinh\left[ \frac{\pi}{k} \sqrt{\frac{2}{3}\left(n - \frac{1}{24}\right)} \right] \right),$ where $A_k(n) = \sum_{0 \leq m < k,\; (m,k) = 1} e^{\pi i \left( s(m,k) - 2nm/k \right)}$ and $s(m,k)$ is the Dedekind sum. The multiplicative inverse of its generating function is the Euler function; by Euler's pentagonal number theorem this function is an alternating sum of pentagonal number powers of its argument. Srinivasa Ramanujan discovered that the partition function has nontrivial patterns in modular arithmetic, now known as Ramanujan's congruences. For instance, whenever the decimal representation of $n$ ends in the digit 4 or 9, the number of partitions of $n$ will be divisible by 5.[4] In both combinatorics and number theory, families of partitions subject to various restrictions are often studied.[5] This section surveys a few such restrictions. If we flip the diagram of the partition 6 + 4 + 3 + 1 along its main diagonal, we obtain another partition of 14: By turning the rows into columns, we obtain the partition 4 + 3 + 3 + 2 + 1 + 1 of the number 14. Such partitions are said to be conjugate of one another.[6] In the case of the number 4, partitions 4 and 1 + 1 + 1 + 1 are conjugate pairs, and partitions 3 + 1 and 2 + 1 + 1 are conjugate of each other. Of particular interest are partitions, such as 2 + 2, which have themselves as conjugate.
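The pentagonal number theorem mentioned above yields Euler's recurrence, one of the exact computation methods the text alludes to: $p(n) = \sum_{k \geq 1} (-1)^{k+1} \bigl[ p\bigl(n - \tfrac{k(3k-1)}{2}\bigr) + p\bigl(n - \tfrac{k(3k+1)}{2}\bigr) \bigr]$. A memoized Python sketch, which also checks the Ramanujan congruence modulo 5:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def p(n):
    """Partition function via Euler's pentagonal number recurrence."""
    if n < 0:
        return 0
    if n == 0:
        return 1
    total, k = 0, 1
    # generalized pentagonal numbers k(3k-1)/2 and k(3k+1)/2, alternating signs
    while k * (3 * k - 1) // 2 <= n:
        sign = -1 if k % 2 == 0 else 1
        total += sign * p(n - k * (3 * k - 1) // 2)
        total += sign * p(n - k * (3 * k + 1) // 2)
        k += 1
    return total

assert [p(n) for n in range(10)] == [1, 1, 2, 3, 5, 7, 11, 15, 22, 30]
# Ramanujan's congruence: p(n) is divisible by 5 whenever n ends in 4 or 9,
# i.e. p(5k + 4) == 0 (mod 5):
assert all(p(5 * k + 4) % 5 == 0 for k in range(30))
```

Only $O(\sqrt{n})$ terms contribute for each $n$, so with memoization the whole table $p(0), \dots, p(n)$ costs about $O(n^{3/2})$ additions.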
Such partitions are said to beself-conjugate.[7] Claim: The number of self-conjugate partitions is the same as the number of partitions with distinct odd parts. Proof (outline): The crucial observation is that every odd part can be "folded" in the middle to form a self-conjugate diagram: One can then obtain abijectionbetween the set of partitions with distinct odd parts and the set of self-conjugate partitions, as illustrated by the following example: Among the 22 partitions of the number 8, there are 6 that contain onlyodd parts: Alternatively, we could count partitions in which no number occurs more than once. Such a partition is called apartition with distinct parts. If we count the partitions of 8 with distinct parts, we also obtain 6: This is a general property. For each positive number, the number of partitions with odd parts equals the number of partitions with distinct parts, denoted byq(n).[8][9]This result was proved byLeonhard Eulerin 1748[10]and later was generalized asGlaisher's theorem. For every type of restricted partition there is a corresponding function for the number of partitions satisfying the given restriction. An important example isq(n) (partitions into distinct parts). The first few values ofq(n) are (starting withq(0)=1): Thegenerating functionforq(n) is given by[11] Thepentagonal number theoremgives a recurrence forq:[12] whereakis (−1)mifk= 3m2−mfor some integermand is 0 otherwise. By taking conjugates, the numberpk(n)of partitions ofninto exactlykparts is equal to the number of partitions ofnin which the largest part has sizek. 
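Euler's theorem that partitions into odd parts and partitions into distinct parts are equinumerous is easy to confirm by exhaustive counting. A brute-force Python sketch (the enumerator and counter names are my own):

```python
def partitions(n, max_part=None):
    """All partitions of n as non-increasing tuples."""
    if max_part is None:
        max_part = n
    if n == 0:
        return [()]
    out = []
    for k in range(min(n, max_part), 0, -1):
        out += [(k,) + rest for rest in partitions(n - k, k)]
    return out

def count_odd_parts(n):
    """Partitions of n all of whose parts are odd."""
    return sum(1 for lam in partitions(n) if all(part % 2 == 1 for part in lam))

def count_distinct_parts(n):
    """Partitions of n in which no part repeats: q(n)."""
    return sum(1 for lam in partitions(n) if len(set(lam)) == len(lam))

# Euler's theorem: the two counts agree; for n = 8 both equal 6,
# as in the example from the text.
assert all(count_odd_parts(n) == count_distinct_parts(n) for n in range(1, 20))
assert count_odd_parts(8) == count_distinct_parts(8) == 6
```

For $n = 8$ the six odd-part partitions are 7+1, 5+3, 5+1+1+1, 3+3+1+1, 3+1+1+1+1+1 and 1+⋯+1, matching the six distinct-part partitions 8, 7+1, 6+2, 5+3, 5+2+1 and 4+3+1.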
The functionpk(n)satisfies the recurrence with initial valuesp0(0) = 1andpk(n) = 0ifn≤ 0 ork≤ 0andnandkare not both zero.[13] One recovers the functionp(n) by One possible generating function for such partitions, takingkfixed andnvariable, is More generally, ifTis a set of positive integers then the number of partitions ofn, all of whose parts belong toT, hasgenerating function This can be used to solvechange-making problems(where the setTspecifies the available coins). As two particular cases, one has that the number of partitions ofnin which all parts are 1 or 2 (or, equivalently, the number of partitions ofninto 1 or 2 parts) is and the number of partitions ofnin which all parts are 1, 2 or 3 (or, equivalently, the number of partitions ofninto at most three parts) is the nearest integer to (n+ 3)2/ 12.[14] One may also simultaneously limit the number and size of the parts. Letp(N,M;n)denote the number of partitions ofnwith at mostMparts, each of size at mostN. Equivalently, these are the partitions whose Young diagram fits inside anM×Nrectangle. There is a recurrence relationp(N,M;n)=p(N,M−1;n)+p(N−1,M;n−M){\displaystyle p(N,M;n)=p(N,M-1;n)+p(N-1,M;n-M)}obtained by observing thatp(N,M;n)−p(N,M−1;n){\displaystyle p(N,M;n)-p(N,M-1;n)}counts the partitions ofninto exactlyMparts of size at mostN, and subtracting 1 from each part of such a partition yields a partition ofn−Minto at mostMparts.[15] The Gaussian binomial coefficient is defined as:(k+ℓℓ)q=(k+ℓk)q=∏j=1k+ℓ(1−qj)∏j=1k(1−qj)∏j=1ℓ(1−qj).{\displaystyle {k+\ell \choose \ell }_{q}={k+\ell \choose k}_{q}={\frac {\prod _{j=1}^{k+\ell }(1-q^{j})}{\prod _{j=1}^{k}(1-q^{j})\prod _{j=1}^{\ell }(1-q^{j})}}.}The Gaussian binomial coefficient is related to thegenerating functionofp(N,M;n)by the equality∑n=0MNp(N,M;n)qn=(M+NM)q.{\displaystyle \sum _{n=0}^{MN}p(N,M;n)q^{n}={M+N \choose M}_{q}.} Therankof a partition is the largest numberksuch that the partition contains at leastkparts of size at leastk. 
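The recurrence $p(N, M; n) = p(N, M-1; n) + p(N-1, M; n-M)$ and its link to the Gaussian binomial coefficient can be checked directly. A memoized Python sketch (function names are my own):

```python
from functools import lru_cache
from math import comb

@lru_cache(maxsize=None)
def p_box(N, M, n):
    """Partitions of n with at most M parts, each of size at most N
    (Young diagram inside an M x N box), via the recurrence from the text:
    either there are fewer than M parts, or subtracting 1 from each of the
    M parts leaves a partition of n - M inside an M x (N-1) box."""
    if n == 0:
        return 1
    if n < 0 or M == 0 or N == 0:
        return 0
    return p_box(N, M - 1, n) + p_box(N - 1, M, n - M)

# Coefficients of the Gaussian binomial (M+N choose M)_q for M = N = 2:
coeffs = [p_box(2, 2, n) for n in range(0, 2 * 2 + 1)]
assert coeffs == [1, 1, 2, 1, 1]   # i.e. 1 + q + 2q^2 + q^3 + q^4
# Setting q = 1 recovers the ordinary binomial coefficient:
assert sum(p_box(3, 4, n) for n in range(0, 3 * 4 + 1)) == comb(7, 3)
```

The $q = 1$ specialization works because the total number of Young diagrams fitting in an $M \times N$ box is $\binom{M+N}{M}$, the ordinary binomial coefficient.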
For example, the partition 4 + 3 + 3 + 2 + 1 + 1 has rank 3 because it contains 3 parts that are ≥ 3, but does not contain 4 parts that are ≥ 4. In the Ferrers diagram or Young diagram of a partition of rankr, ther×rsquare of entries in the upper-left is known as theDurfee square: The Durfee square has applications within combinatorics in the proofs of various partition identities.[16]It also has some practical significance in the form of theh-index. A different statistic is also sometimes called therank of a partition(or Dyson rank), namely, the differenceλk−k{\displaystyle \lambda _{k}-k}for a partition ofkparts with largest partλk{\displaystyle \lambda _{k}}. This statistic (which is unrelated to the one described above) appears in the study ofRamanujan congruences. There is a naturalpartial orderon partitions given by inclusion of Young diagrams. This partially ordered set is known asYoung's lattice. The lattice was originally defined in the context ofrepresentation theory, where it is used to describe theirreducible representationsofsymmetric groupsSnfor alln, together with their branching properties, in characteristic zero. It also has received significant study for its purely combinatorial properties; notably, it is the motivating example of adifferential poset. There is a deep theory of random partitions chosen according to the uniform probability distribution on thesymmetric groupvia theRobinson–Schensted correspondence. In 1977, Logan and Shepp, as well as Vershik and Kerov, showed that the Young diagram of a typical large partition becomes asymptotically close to the graph of a certain analytic function minimizing a certain functional. In 1988, Baik, Deift and Johansson extended these results to determine the distribution of the longest increasing subsequence of a random permutation in terms of theTracy–Widom distribution.[17]Okounkovrelated these results to the combinatorics ofRiemann surfacesand representation theory.[18][19]
https://en.wikipedia.org/wiki/Integer_partition
TheInternational Certificate of Vaccination or Prophylaxis(ICVP), also known as theCarte JauneorYellow Card, is an officialvaccinationreport created by theWorld Health Organization(WHO).[1]As atravel document, it is a kind ofmedicalpassportthat is recognised internationally and may be required for entry to certain countries where there are increased health risks for travellers.[1] The ICVP is not animmunity passport; the primary difference is that vaccination certificates such as the ICVP incentivise individuals to obtain vaccination against a disease, while immunity passports incentivise individuals to get infected with and recover from a disease.[2] Various schemes forhealth passportsorvaccination certificateshave been proposed for people who have beenvaccinated against COVID-19. The ICVP'snicknameYellow Cardor itsFrenchequivalentCarte Jaunederives from theyellowcolour of the document. The fact thatyellow feveris a commonly required vaccination for travel has contributed to the document's association with the colour yellow, even though the ICVP can cover a wide range of vaccinations and booster shots, not just yellow fever.[1] The International Certificate of Inoculation and Vaccination was established by theInternational Sanitary Convention for Aerial Navigation (1933)inThe Hague, which came into force on 1 August 1935 and was amended in 1944.[3]Afterthe 1944 amendment, in addition to Personal, Aircraft and Maritime Declarations of Health, the Convention covered five certificates:[4][5] TheWorld Health Organization(WHO) was formed by its constitution on 22 July 1946, effective on 7 April 1948. 
The WHO Constitution included stipulationsto stimulate and advance work to eradicate epidemic, endemic and other diseases(Article 2.g) and that theWorld Health Assemblywouldhave authority to adopt regulations concerning sanitary and quarantine requirements and other procedures designed to prevent the international spread of disease(Article 21.a).[6]The Fourth World Health Assembly adopted the International Sanitary Regulations (alias WHO Regulations No. 2) on 25 May 1951,replacing and completingthe earlier International Sanitary Conventions. It confirmed the validity and use of international certificates of vaccination (Article 115), and updated the old model with a new version (Appendices 2, 3, 4).[7]The certificates mentioned were used for proof of vaccination against diseases such as cholera, yellow fever and smallpox; the terminoculationwas no longer used.[7][8]The old International Certificates of Inoculation and Vaccination remained valid until they expired, after which they were replaced by the new ICV.[8]On 23 May 1956, the Ninth World Health Assembly amended the form of the International Certificate of Vaccination or Revaccination against Smallpox per 1 October 1956.[9] The WHO'sWorld Health Assemblyadopted theInternational Health Regulations(IHR) in 1969, succeeding the previous International Sanitary Conventions/Regulations.[10]IHR Article 79 introduced a model International Certificate of Vaccination, and Appendix 2 and Annex VI stipulated a number of conditions that had to be fulfilled in order for it to be considered valid, such as being printed and filled out in English and French (a third language, relevant to the territory in which it is issued, could be added).[10]The 1969 IHR focused on four diseases: cholera,plague, smallpox, and yellow fever; however, Article 51 specified that vaccination against plague wouldnot be required as a condition of admission of any person to a territory.[10]The World Health Assembly determined in 1973 thatvaccination 
against cholerawas unable to prevent the introduction of cholera from one country to another,[11]and removed this requirement from the 1973 revision of the IHR;[10][11]it was also removed from the ICV.[11] The ICV was most successful in the case of smallpox. The mandatory possession of vaccination certificates significantly increased the number of travellers who were vaccinated, and thus contributed to preventing the spread of smallpox, especially when therapid expansion of air travelin the 1960s and 1970s reduced the travelling time from endemic countries to all other countries to just a few hours.[12]After smallpox was successfully eradicated in 1980, the International Certificate of Vaccination against Smallpox was cancelled in 1981, and the new 1983 form lacked any provision for smallpox vaccination.[10][12]Thus, only yellow fever remained as vaccination requirement for international travel for which the ICV was used.[citation needed] By 1994,Saudi Arabialegally required pilgrims going toMeccafor the annualHajjtovaccinate againstmeningococcal meningitis, while theCenter for Disease Controlalso advisedAmericanstravelling to theAfrican meningitis beltorKenya,TanzaniaandBurundito take the vaccine, especially when visiting during thedry season(November–April).[11] The2002–2004 SARS outbreakwas the driving force behind the 23 May 2005 revision of the International Health Regulations, which entered into force on 15 June 2007.[13]: 1On that day, the model International Certificate of Vaccination or Prophylaxis contained in Annex 6 of the International Health Regulations (as amended in 2005) replaced the International Certificate of Vaccination or Revaccination against Yellow Fever contained in appendix 2 of the International Health Regulations (1969).[14] The main portion of the ICVP is a form for physicians to fill out when administering a vaccine. 
This section is mandated by the WHO's 2005International Health Regulations, in which they provide a model of the document. It includes places for the traveller's name, date of birth, sex, nationality, national identification document, and signature. Below that is a row for each vaccine administered, in which the physician must include the prophylaxis or vaccine administered, date, signature, manufacturer and batch number, dates valid, and an official stamp from the administering centre.[15][13][16] Below this, the document outlines requirements for validity. The ICVP is only valid for vaccines approved by the WHO.[citation needed]The form must be fully completed in English or French by a medical practitioner or authorized health worker and must include the official stamp of the administering centre. The certificate is valid for as long as the vaccines included are valid.[15][13] The form may include additional information. In 2007, the WHO prepared a booklet that included the following additional sections.[17] The notes section includes information aboutyellow fever, since it is the only disease included in the International Health Regulations. It also specifies that the same certificate can be used if any future regulations require vaccination for another disease.[15] The information for travellers section recommends that travellers consult their physicians to determine appropriate vaccinations before international travel and inform their physician of international travel if they fall ill after their trip.[15] Malariais a serious disease with no vaccine available. The ICVP recommends that travellers protect against mosquitos through mosquito nets or repellent, as mosquitos can transmit malaria. Travellers can also consult their physician forantimalarial medication, which must be taken regularly for the full duration of the prescription.[15] The ICVP gives instructions for filling out the certificate. 
It also gives physicians guidelines for documentingcontraindicationsin cases where a traveller has a medical reason that prevents them from getting a particular vaccine. This section also reminds physicians to consider travel-associated illnesses when treating a patient who has fallen ill after traveling.[15] Yellow fever is the most common vaccine required for international travel. Many countries require the vaccine for all travellers or only for travellers coming from countries with risk of yellow fever transmission.[19]Exceptions are typically made for newborns until 9 months or one year of age, depending on the country.[20]The ICVP form is valid for yellow fever starting 10 days after vaccination. As of 2016, the vaccine is valid for the life of the traveller. No changes need to be made for those who received their vaccine or ICVP prior to 2016.[21] In the event that a traveller cannot be vaccinated for a particular disease for medical reasons, their physician can provide them with documentation indicating their condition. They may be subject to additional requirements, such as isolation, quarantine, or observation. A traveller who refuses a vaccine or prophylaxis that is required may be subject to similar requirements or denied entry. In some cases, equivalent military-issued forms are accepted in place of the ICVP, provided the forms include the same information.[13] Due to the prevalence of counterfeit certificates in some places, several countries, including Zimbabwe, Zambia, and Nigeria, are developing digital certificates that can authenticate an ICVP.[22][23]As of July 2019, Nigeria requires its citizens to have its digital "e-Yellow Card" for travel outside the country. The card has a QR code that can be scanned to verify its validity. 
This requirement does not affect travellers from other countries with valid ICVPs, but those arriving in Nigeria who have not been vaccinated for yellow fever may receive the vaccine and the e-Yellow Card upon arrival.[24][25][26] As of September 2023, Ecuador has also started issuing digital certificates, and will no longer issue yellow booklets once existing stock runs out.[27] Similar schemes have been proposed for travellers who have been vaccinated against COVID-19.[citation needed] Multiple agencies and countries were creating different forms of documentation for people who have been vaccinated against COVID-19.[28] Agencies attempting this include non-profit organisations such as the World Economic Forum and the Commons Project Foundation, technology companies such as IBM, travel companies such as VeriFly, and the International Air Transport Association.[28] As of March 2021[update], standards for digital documentation, such as an app on a smartphone, had not been established.[28] On 12 March 2021, Ecma International announced its intention to create international standards which guard against counterfeiting and protect private data as much as possible, in a "Call for Participation on Vaccine Passports International Standardization".[29] With COVID-19 vaccines showing promising results, several industry organizations, including the global airline lobby IATA and the World Economic Forum, have announced pilots.[30] IATA's solution, "Travel Pass", is a mobile app that can display test results and proof of inoculation, and will be integrated with the existing TIMATIC system.[31] Israel employed a digital "green pass" to allow individuals fully vaccinated against COVID-19 to dine out, attend concerts, and travel to other nations.[32] It has been the subject of several privacy and data security concerns.
Shortly after the scheme was rolled out, the Knesset passed a law allowing local authorities to compile data on citizens who have refused to get vaccinated.[33] Standardization work has begun at Ecma International to allow for an open interoperability ecosystem in which multiple COVID-19 immunity verification systems can work together effectively across borders.[34]
https://en.wikipedia.org/wiki/International_Certificate_of_Vaccination_or_Prophylaxis
The International Civil Aviation Organization Public Key Directory (ICAO PKD) is a database maintained by the International Civil Aviation Organization holding national cryptographic keys related to the authentication of e-passport information. The ICAO PKD content is open to the public, and can be downloaded for free at https://download.pkd.icao.int/.[1] The United Nations became the first non-state participant in October 2012, enabling issuing of the e-UNLP, the electronic form of the United Nations laissez-passer.[2] In December 2014, ICAO reported the PKD as having 45 participants.[3] In 2015 the German Bundesdruckerei (German Federal Printing Office) won the request for tender of the ICAO to provide the ICAO PKD.[4] In July 2017, ICAO reported the PKD as having 58 participants.[3] As of November 2017, 60 participants were part of the ICAO PKD, with the European Union being the 60th member and at the same time the second non-state participant.[3] As of December 2024, 101 participants were part of the ICAO PKD.[3] Each ICAO PKD participant needs to pay a one-time registration fee and annual fees. The registration fee was USD 56,000 at first, and was reduced to USD 15,900 in March 2015.[5] The annual fee comprises two components: the ICAO portion and the PKD Operator portion. Every participant pays the same amount of annual fees.
https://en.wikipedia.org/wiki/International_Civil_Aviation_Organization_Public_Key_Directory
A polycarbonate e-passport is a type of travel document that features a biometric data page made from polycarbonate, a durable plastic material, rather than a laminated paper sheet. This construction offers enhanced protection for the passport's electronic components and personal data. By laser-engraving information into the inner layers of the plastic, polycarbonate e-passports significantly improve resistance to counterfeiting and offer greater durability and reliability compared to traditional laminated pages.[1][2][3] Finland was the first country to introduce a passport with a polycarbonate data page, in 1997.[4] Sweden followed shortly after, becoming the first to implement a biometric polycarbonate data page during the early adoption of e-passports. Since then, the design has gradually been adopted around the world.[5] As of 2019, over 40 countries have transitioned from laminated paper biometric data pages to polycarbonate alternatives in their passports.[6]
https://en.wikipedia.org/wiki/Polycarbonate_e-passport
An electronic identification ("eID") is a digital solution for proof of identity of citizens or organizations. They can be used to access benefits or services provided by government authorities, banks or other companies, for mobile payments, etc. Apart from online authentication and login, many electronic identity services also give users the option to sign electronic documents with a digital signature. One form of eID is an electronic identification card (eIC), which is a physical identity card that can be used for online and offline personal identification or authentication. The eIC is a smart card in the ID-1 format of a regular bank card, with identity information printed on the surface (such as personal details and a photograph) and in an embedded RFID microchip, similar to that in biometric passports. The chip stores the information printed on the card (such as the holder's name and date of birth) and the holder's photo(s). Several photos may be taken from different angles along with different facial expressions, thus allowing biometric facial recognition systems to measure and analyze the overall structure, shape and proportions of the face.[1] It may also store the holder's fingerprints. The card may be used for online authentication, such as for age verification or for e-government applications. An electronic signature, provided by a private company, may also be stored on the chip. Countries which currently issue government-issued eIDs include Afghanistan, Bangladesh, Belgium, Bulgaria, Chile, Estonia, Finland, Guatemala, Germany, Iceland, India, Indonesia, Israel, Italy, Latvia,[2] Lithuania,[3] Luxembourg, the Netherlands, Nigeria, Morocco, Pakistan, Peru, Portugal, Poland, Romania, Saudi Arabia, Spain, Slovakia,[4] Malta, and Mauritius. Germany, Uruguay and previously Finland have accepted government-issued physical eICs. Norway, Sweden and Finland accept bank-issued eIDs (also known as BankID) for identification by government authorities.
There are also an increasing number of countries applying electronic identification for voting (enrollment, issuing voter ID cards, voter identification and authentication, etc.), including countries using biometric voter registration. According to the EU electronic identification and trust services (eIDAS) Regulation, described as a pan-European login system, all organizations delivering public digital services in an EU member state must accept electronic identification from all EU member states from 29 September 2018.[5][6][7] Austria initially issued eIDs ("Bürgerkarte") via its national health insurance card (eCard), but later introduced an app-based solution ("Handy-Signatur"). Electronic signatures have been deemed equivalent to handwritten signatures since January 2000.[8] As of 5 December 2023, the Handy-Signatur and the Bürgerkarte (Citizen Card) have been upgraded and replaced by "ID Austria", which offers enhanced digital identification and authentication capabilities.[9] More than two million people are enrolled in ID Austria. It interconnects with eIDAS systems from other EU member states.[10] It is widely used for government online services, but also increasingly by the private sector. Belgium has been issuing eIDs since 2003, and all identity cards issued since 2004 have been electronic, replacing the previous plastic card.[11] The eID card contains a chip containing:[12] At home, users can use their electronic IDs to log into specific websites (such as Tax-on-web, allowing them to fill in their tax form online).[13] When other software (such as an Internet browser) attempts to read the eID, users are asked to confirm the action, and potentially to enter their PIN.[13] Other applications include signing emails with the private key of the user's eID certificate. Giving the public key to recipients allows them to verify the sender's identity.
Although legally Belgian citizens only have to carry an ID from the age of 12, as of March 2009[14] a "Kids ID" has been introduced for children below this age, on a strictly voluntary basis. This ID, besides containing the usual information, also holds a contact number that people, or the child themselves, can call when they, for example, are in danger or have had an accident. The card can be used for electronic identification after the age of six, and it does not contain a signing certificate, as minors cannot sign a legally binding document. An important goal of the Kids-ID card is to allow children to join "youth-only" chat sites, using their eID to gain entrance. These sites would essentially block any users above a certain age from gaining access to the chat sessions, effectively blocking out potential pedophiles. Bulgaria introduced a limited-scale proof of concept of electronic identity cards, called ЕИК (Електронна карта за идентичност), in 2013. Croatia introduced its electronic identity cards, called e-osobna iskaznica, on 8 June 2015. The Danish electronic identity, issued jointly by banks and the government, is named MitID. The former eID, NemID, was deprecated as of October 2023. MitID authentication allows larger payments in MobilePay, a service used by more than half of the population as of 2017. The Estonian ID card, issued since 2002, is also used for authentication in Estonia's Internet-based voting system. In February 2007, Estonia was the first country to allow electronic voting for parliamentary elections. Over 30,000 voters participated in the country's e-election.[15] At the end of 2014 Estonia extended the Estonian ID card to non-residents. The target of the project is to reach 10 million e-residents by 2025, eight times the Estonian population of 1.3 million. The Finnish electronic ID[16] was first issued to citizens on 1 December 1999. Electronic identities in Finland are issued by banks.
They make it possible to log in to Finnish authorities, universities and banks, and to make larger payments using the MobilePay mobile payment service. The mobiilivarmenne uses the mobile phone SIM card for authentication, and is financed by a fee to the mobile network operator for each authentication. Germany introduced its electronic identity cards, called Personalausweis, in 2010. In Iceland, electronic IDs (Icelandic: Rafræn skilríki) were first introduced in 2008 and are extensively used by the public and private sector today. The most widely used version today is on a mobile phone, with the authentication key held on a SIM card. In Iceland 95% of the eligible population (13 years or older) has an active eID, including 75% of those over 75. Icelandic eID holders used their eID more than 20 times a month in 2021.[17] During enrollment, users create a PIN. Each time they need to identify, verify or sign something online, a prompt via flash SMS is initiated and the PIN code is validated. Today this system is used by all banks, government services (the island.is portal), healthcare, education, document signing, and over 300 private companies that use it for customer-page logins (linked to the Icelandic ID number). Since the only things to remember are one's PIN code and phone, it is very prevalent, and works as a sort of single sign-on service. The eIDs are administered by Auðkenni hf., which was initially created by a consortium of banks but is now owned by the government.[18] The first form of the system in 2008 was a special smartcard with an EMV chip, paired with a smartcard reader on the client's computer. The smartcard was first introduced in late 2008 for employees of government departments, large companies and the healthcare system. It was rolled out to all departments and companies handling sensitive data.[19] It was also possible to store one's eID on a debit card.
In November 2013 the SIM card implementation for mobile phones was introduced, which led to a much quicker take-up of eIDs due to its ease of use.[20] By 2014, 40% of Icelanders were using eIDs.[21] Italy introduced its electronic identity cards, called Carta d'Identità Elettronica (identified in Italy by the acronym CIE), to replace the paper-based ID card in Italy. Since 4 July 2016, Italy has been in the process of renewing all ID cards to electronic ID cards. The eID and eSignature service provider in Latvia is called eParaksts. Since 12 February 2014, Malta has been in the process of renewing all ID cards to electronic ID cards. The electronic identity in the Netherlands is called DigiD, and the Netherlands is currently developing an eID scheme. Electronic identities in Norway issued by banks are called BankID (different from Sweden's BankID). They make it possible to log in to Norwegian authorities, universities and banks, and to make larger payments using the Vipps mobile payment service, used by more than half of the population as of 2017. The Norwegian BankID på mobil service uses the mobile phone SIM card for authentication, and is financed by a fee to the mobile network operator for each authentication. Since 25 May 2023, Romanians are able to use their national ID to sign up to the RoEID application, which allows them to access public services.[22] Electronic identity cards in Spain are called DNIe and have been issued since 2006. SwissID,[23] developed by SwissSign,[24] is a certified digital ID in Switzerland offered since 2017 (2010–17 as SuisseID). As a basis for a new Federal Act on Electronic Identification Services (e-ID Act),[25] an eID concept had been developed by the authorities, though experts criticized its technical aspects.[26] The law was accepted by the Swiss parliament on 29 September 2019. It would have updated current legislation and would have continued to allow private companies or public organizations to issue eIDs if certified by a new federal authority.
However, an optional referendum called for a public vote on the issue, which was held on Sunday, 7 March 2021.[27][28] The vote resulted in 35.6% Yes and 64.4% No, rejecting the proposed new law.[29] SwissSign might develop the SwissID further, to make it compatible with future e-ID regulations.[30] The most widespread electronic identification in Sweden is issued by banks and called BankID (different from the Norwegian BankID). The BankID may be in the form of a certificate file on disk, on a card, or on a smartphone. The latter (the Swedish Mobile BankID service) was used by 84 percent of the Swedish population in 2019.[31] A Mobile BankID login does not require a fee, since the service is provided by banks rather than mobile operators. It can be used both for authentication within various apps and web services on the same smartphone, and also for web pages on other devices. It also supports fingerprint and face recognition authentication on compatible iOS and Android devices. Electronic IDs are used for secure web login to Swedish authorities, banks, health centres (allowing people to see their medical records and prescriptions and book doctor's visits), and companies such as pharmacies. Mobile BankID also enables the Swish mobile payment service, used by 78 percent of the Swedish population in 2019, at first mainly for payments between individuals.[32] BankID was previously used for university applications and admissions, but this was prohibited by Swedbank because universities used the system to distribute their own student logins. Increasingly, BankID is used as added security for signing contracts.[33] The adoption of eIDs varies per EU member state. While some countries mainly rely on government eID systems, others have dominant private or public–private systems, such as eIDs managed by banks. The table below provides an overview of eID adoption in various countries. Afghanistan issued its first electronic ID (e-ID) card on 3 May 2018.
Afghan President Ashraf Ghani was the first to receive the card. He was accompanied by First Lady Rula Ghani, his vice president, the head of the Afghan Senate, the head of the Afghan Parliament, the Chief Justice and other senior government officials, who also received their cards.[39] As of January 2021, approximately 1.7 million Afghan citizens have obtained their e-ID cards.[40] Costa Rica plans to introduce facial recognition data into its national identification card.[41] Guatemala introduced its electronic identity card, called DPI (Documento Personal de Identificación), in August 2010. Indonesia introduced its electronic identity cards, called e-KTP. The Indonesian electronic ID was trialed in six areas in 2009 and launched nationwide in 2011.[42][43] Electronic identity cards in Israel have been issued since July 2013. Kazakhstan introduced its electronic identity cards in 2009. Mauritius has had electronic identity cards since 2013. Mexico intended to develop an official electronic biometric ID card for all minors under the age of 18, called the Personal Identity Card (Record of Minors), which included the data verified on the birth certificate, including the names of the legal ascendant(s), a unique key of the Population Registry (CURP), a biometric facial recognition photograph, a scan of all 10 fingerprints, and an iris scan registration.[44] In Nigeria, general multi-purpose electronic identity cards are issued by the National Identity Management Commission (NIMC), a federal government agency under the Presidency. The NeID card complies with ICAO standard 9303 and ISO standard 7816-4, as well as GVCP for the MasterCard-supported payment applet. NIMC plans to issue 50 million multilayer polycarbonate cards, the first set being contact-only, but also dual-interface with DESFire emulation in the near future.
Pakistan officially began its nationwide Computerized National Identity Card (CNIC) distribution in 2002, with over 89.5 million CNICs issued by 2012.[45] In October 2012, the National Database and Registration Authority (NADRA) introduced the smart national identity card (SNIC), which contains a data chip and 36 security features. The SNIC complies with ICAO standard 9303 and ISO standard 7816-4. The SNIC can be used for both offline and online identification, voting, pension disbursement, social and financial inclusion programmes and other services.[46][47] NADRA aims to replace all 89.5 million CNICs with SNICs by 2020. Serbia has had a trusted and reliable electronic identity since June 2019. The first reliable service provider is the Office for IT and eGovernment, through which citizens and residents of Serbia can access services on the eGovernment portal and the eHealth portal.[48][49][50] The electronic identification offers two levels of security: a basic level with authentication by user name and password only, and a medium level with two-factor authentication.[51] Since 1 January 2016, Sri Lanka has been in the process of developing a smart-card-based RFID e-National Identity Card which will replace the obsolete laminated-type cards, storing the holder's information on a chip that can be read by banks, offices, etc., thereby reducing the need for physical documentation by storing the information in the cloud. In Turkey, the e-Government (e-Devlet) Gateway is a large-scale Internet portal that provides access to all public services from a single point. The purpose of the Gateway is to present public services to citizens, enterprises and public institutions effectively and efficiently with information and communication technologies.[52] Uruguay has had electronic identity cards since 2015. The Uruguayan eID contains a private key that allows documents to be digitally signed, and stores the user's fingerprint so that identity can be verified.
It is also a valid travel document in some South American countries. As of 2017 the old laminated ID coexists with the new eID. Electronic identification is also applied in the manufacturing sector, where individual parts or components within a manufacturing facility are electronically tagged so that they can be tracked and identified, enhancing manufacturing efficiency. This is also referred to as location detection technology within the Fourth Industrial Revolution.
https://en.wikipedia.org/wiki/Electronic_identification
Replayable CCA security (RCCA security) is a security notion in cryptography that relaxes the older notion of security against chosen-ciphertext attack (CCA, more precisely the adaptive security notion CCA2): all CCA-secure systems are RCCA-secure, but the converse is not true. The claim is that for many use cases CCA is too strong, and RCCA suffices.[1] A number of cryptographic schemes are now proved RCCA-secure rather than CCA-secure. The notion was introduced in 2003 in a research publication by Ran Canetti, Hugo Krawczyk and Jesper B. Nielsen.
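The distinction can be made concrete with a toy example. Textbook ElGamal-style encryption is rerandomizable: anyone can transform a ciphertext into a different ciphertext of the same plaintext. This immediately defeats CCA2 security (the modified ciphertext is a legal decryption query), yet it is exactly the kind of "replay" that RCCA security tolerates. The sketch below is illustrative only; the modulus, base and parameter choices are placeholders, not a secure instantiation.

```python
# Sketch: why RCCA relaxes CCA. A rerandomized ciphertext is a NEW
# ciphertext of the SAME plaintext, so it breaks CCA2 but counts as a
# harmless "replay" under RCCA.
import random

p = 2 ** 127 - 1          # a Mersenne prime; group = nonzero residues mod p (toy choice)
g = 3                     # base chosen for illustration only

def keygen():
    x = random.randrange(2, p - 1)          # secret key
    return x, pow(g, x, p)                  # (sk, pk)

def encrypt(pk, m):
    r = random.randrange(2, p - 1)
    return pow(g, r, p), (m * pow(pk, r, p)) % p

def rerandomize(pk, ct):
    # Multiply in a fresh encryption of 1: same plaintext, new ciphertext.
    c1, c2 = ct
    s = random.randrange(2, p - 1)
    return (c1 * pow(g, s, p)) % p, (c2 * pow(pk, s, p)) % p

def decrypt(sk, ct):
    c1, c2 = ct
    return (c2 * pow(c1, p - 1 - sk, p)) % p   # c2 / c1^sk via Fermat

sk, pk = keygen()
m = 42
ct = encrypt(pk, m)
ct2 = rerandomize(pk, ct)
assert ct2 != ct                 # a CCA2 adversary wins by querying ct2
assert decrypt(sk, ct2) == m     # but an RCCA oracle treats it as a replay
```

An RCCA decryption oracle, unlike a CCA2 oracle, is allowed to answer "replay" (or the challenge plaintext itself) when handed such a mauled ciphertext, which is why rerandomizable schemes can still meet the relaxed notion.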
https://en.wikipedia.org/wiki/RCCA_security
In mathematics, the additive polynomials are an important topic in classical algebraic number theory. Let k be a field of prime characteristic p. A polynomial P(x) with coefficients in k is called an additive polynomial, or a Frobenius polynomial, if P(a + b) = P(a) + P(b) as polynomials in a and b. It is equivalent to assume that this equality holds for all a and b in some infinite field containing k, such as its algebraic closure. Occasionally absolutely additive is used for the condition above, and additive is used for the weaker condition that P(a + b) = P(a) + P(b) for all a and b in the field. For infinite fields the conditions are equivalent, but for finite fields they are not, and the weaker condition is the "wrong" one, as it does not behave well. For example, over a field of order q, any multiple P of x^q − x will satisfy P(a + b) = P(a) + P(b) for all a and b in the field, but will usually not be (absolutely) additive. The polynomial x^p is additive. Indeed, for any a and b in the algebraic closure of k one has, by the binomial theorem, (a+b)^p = \sum_{n=0}^{p} \binom{p}{n} a^n b^{p-n}. Since p is prime, for all n = 1, ..., p − 1 the binomial coefficient \binom{p}{n} is divisible by p, which implies that (a+b)^p = a^p + b^p as polynomials in a and b. Similarly all the polynomials of the form \tau_p^n(x) = x^{p^n} are additive, where n is a non-negative integer. The definition makes sense even if k is a field of characteristic zero, but in this case the only additive polynomials are those of the form ax for some a in k.[citation needed] It is quite easy to prove that any linear combination of the polynomials \tau_p^n(x) with coefficients in k is also an additive polynomial. An interesting question is whether there are other additive polynomials except these linear combinations. The answer is that these are the only ones. One can check that if P(x) and M(x) are additive polynomials, then so are P(x) + M(x) and P(M(x)). These imply that the additive polynomials form a ring under polynomial addition and composition.
This ring is denoted k\{\tau_p\}. This ring is not commutative unless k is the field \mathbb{F}_p = \mathbf{Z}/p\mathbf{Z} (see modular arithmetic). Indeed, consider the additive polynomials ax and x^p for a coefficient a in k. For them to commute under composition, we must have (ax)^p = a^p x^p = a x^p, and hence a^p − a = 0. This is false for a not a root of this equation, that is, for a outside \mathbb{F}_p. Let P(x) be a polynomial with coefficients in k, and \{w_1, \dots, w_m\} \subset k be the set of its roots. Assuming that the roots of P(x) are distinct (that is, P(x) is separable), then P(x) is additive if and only if the set \{w_1, \dots, w_m\} forms a group under the field addition.
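The two computational claims above, that \binom{p}{n} vanishes mod p for 0 < n < p, and that ax commutes with the Frobenius x^p exactly when a^p = a, are easy to spot-check numerically. A small sketch for one prime (the prime is chosen arbitrarily):

```python
from math import comb

p = 7  # any prime; small so the checks are fast

# Binomial-theorem argument: (a+b)^p = a^p + b^p in characteristic p
# because C(p, n) is divisible by p for 0 < n < p.
assert all(comb(p, n) % p == 0 for n in range(1, p))

# tau(x) = x^p commutes with x -> a*x exactly when a^p = a.  Over the
# prime field F_p this holds for every a (Fermat's little theorem),
# matching the claim that k{tau_p} is commutative only for k = F_p.
assert all(pow(a, p, p) == a for a in range(p))
```

Over a proper extension of F_p the second check would fail for some a, which is exactly the non-commutativity argument in the text.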
https://en.wikipedia.org/wiki/Additive_polynomial
In mathematics, a Laurent polynomial (named after Pierre Alphonse Laurent) in one variable over a field F is a linear combination of positive and negative powers of the variable with coefficients in F. Laurent polynomials in X form a ring denoted F[X, X^{-1}].[1] They differ from ordinary polynomials in that they may have terms of negative degree. The construction of Laurent polynomials may be iterated, leading to the ring of Laurent polynomials in several variables. Laurent polynomials are of particular importance in the study of complex variables. A Laurent polynomial with coefficients in a field F is an expression of the form p = \sum_k p_k X^k, where X is a formal variable, the summation index k is an integer (not necessarily positive) and only finitely many coefficients p_k are non-zero. Two Laurent polynomials are equal if their coefficients are equal. Such expressions can be added, multiplied, and brought back to the same form by reducing similar terms. Formulas for addition and multiplication are exactly the same as for ordinary polynomials, with the only difference that both positive and negative powers of X can be present: \left(\sum_i a_i X^i\right) + \left(\sum_i b_i X^i\right) = \sum_i (a_i + b_i) X^i and \left(\sum_i a_i X^i\right) \cdot \left(\sum_j b_j X^j\right) = \sum_k \Big(\sum_{i+j=k} a_i b_j\Big) X^k. Since only finitely many coefficients a_i and b_j are non-zero, all sums in effect have only finitely many terms, and hence represent Laurent polynomials.
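A minimal sketch of this arithmetic represents a Laurent polynomial as a dictionary mapping (possibly negative) integer exponents to coefficients; the function names are ad hoc.

```python
# Laurent polynomials as {exponent: coefficient} dicts; exponents may
# be negative, and zero coefficients are dropped after each operation.

def add(f, g):
    h = dict(f)
    for k, c in g.items():
        h[k] = h.get(k, 0) + c
    return {k: c for k, c in h.items() if c != 0}

def mul(f, g):
    # Convolution of coefficients: X^i * X^j = X^(i+j).
    h = {}
    for i, a in f.items():
        for j, b in g.items():
            h[i + j] = h.get(i + j, 0) + a * b
    return {k: c for k, c in h.items() if c != 0}

# (X + X^-1)^2 = X^2 + 2 + X^-2
f = {1: 1, -1: 1}
assert mul(f, f) == {2: 1, 0: 2, -2: 1}
```

Because only finitely many keys are ever present, the dictionaries directly mirror the finiteness condition on the coefficients p_k.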
https://en.wikipedia.org/wiki/Laurent_polynomial
In mathematics, informally speaking, Euclid's orchard is an array of one-dimensional "trees" of unit height planted at the lattice points in one quadrant of a square lattice.[1] More formally, Euclid's orchard is the set of line segments from (x, y, 0) to (x, y, 1), where x and y are positive integers. The trees visible from the origin are those at lattice points (x, y, 0) where x and y are coprime, i.e., where the fraction x/y is in reduced form. The name Euclid's orchard is derived from the Euclidean algorithm. If the orchard is projected relative to the origin onto the plane x + y = 1 (or, equivalently, drawn in perspective from a viewpoint at the origin), the tops of the trees form a graph of Thomae's function. The point (x, y, 1) projects to \left(\tfrac{x}{x+y}, \tfrac{y}{x+y}, \tfrac{1}{x+y}\right). The solution to the Basel problem can be used to show that the proportion of points in the n \times n grid that have trees on them is approximately \tfrac{6}{\pi^2}, and that the error of this approximation goes to zero in the limit as n goes to infinity.[2]
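The density claim is easy to check numerically: count the lattice points (x, y) with gcd(x, y) = 1 in an n-by-n grid and compare with 6/π². A small sketch (the function name is ad hoc):

```python
from math import gcd, pi

def visible_fraction(n):
    # Proportion of lattice points (x, y) in the n-by-n grid whose tree
    # is visible from the origin, i.e. with gcd(x, y) == 1.
    hits = sum(1 for x in range(1, n + 1)
                 for y in range(1, n + 1) if gcd(x, y) == 1)
    return hits / n ** 2

# Approaches 6/pi^2 ≈ 0.6079 as n grows (Basel problem).
assert abs(visible_fraction(500) - 6 / pi ** 2) < 0.01
```

The brute-force count is quadratic in n, which is fine for a sanity check; faster counts use the Möbius function.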
https://en.wikipedia.org/wiki/Euclid%27s_orchard
In mathematics, a superpartient ratio, also called a superpartient number or epimeric ratio, is a rational number that is greater than one and is not superparticular. The term has fallen out of use in modern pure mathematics, but continues to be used in music theory and in the historical study of mathematics. Superpartient ratios were written about by Nicomachus in his treatise Introduction to Arithmetic. Mathematically, a superpartient number is a ratio of the form \tfrac{n+a}{n}, where a is greater than 1 (a > 1) and is also coprime to n. Ratios of the form \tfrac{n+1}{n} are also greater than one and fully reduced, but are called superparticular ratios and are not superpartient. "Superpartient" comes from Greek ἐπιμερής (epimeres), "containing a whole and a fraction," literally "superpartient".
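The classification is mechanical: reduce the ratio, then test whether the excess of the numerator over the denominator is 1 (superparticular) or greater (superpartient). A small sketch, with an ad hoc function name and return strings:

```python
from math import gcd

def classify(num, den):
    # Classify a ratio num/den as superparticular ((n+1)/n) or
    # superpartient ((n+a)/n with a > 1, gcd(a, n) = 1 after reduction).
    g = gcd(num, den)
    num, den = num // g, den // g
    if num <= den:
        return "not greater than one"
    a = num - den
    return "superparticular" if a == 1 else "superpartient"

assert classify(3, 2) == "superparticular"   # 3:2, the perfect fifth
assert classify(5, 3) == "superpartient"     # 5:3, excess a = 2
assert classify(9, 8) == "superparticular"
```

Reducing first guarantees the excess a is automatically coprime to the denominator, matching the definition above.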
https://en.wikipedia.org/wiki/Superpartient_number
In computer networking, ARP spoofing (also ARP cache poisoning or ARP poison routing) is a technique by which an attacker sends (spoofed) Address Resolution Protocol (ARP) messages onto a local area network. Generally, the aim is to associate the attacker's MAC address with the IP address of another host, such as the default gateway, causing any traffic meant for that IP address to be sent to the attacker instead. ARP spoofing may allow an attacker to intercept data frames on a network, modify the traffic, or stop all traffic. Often the attack is used as an opening for other attacks, such as denial of service, man in the middle, or session hijacking attacks.[1] The attack can only be used on networks that use ARP, and requires that the attacker has direct access to the local network segment to be attacked.[2] The Address Resolution Protocol (ARP) is a widely used communications protocol for resolving Internet layer addresses into link layer addresses. When an Internet Protocol (IP) datagram is sent from one host to another in a local area network, the destination IP address must be resolved to a MAC address for transmission via the data link layer. When another host's IP address is known, and its MAC address is needed, a broadcast packet is sent out on the local network. This packet is known as an ARP request. The destination machine with the IP in the ARP request then responds with an ARP reply that contains the MAC address for that IP.[2] ARP is a stateless protocol. Network hosts will automatically cache any ARP replies they receive, regardless of whether network hosts requested them. Even ARP entries that have not yet expired will be overwritten when a new ARP reply packet is received. There is no method in the ARP protocol by which a host can authenticate the peer from which the packet originated.
This behavior is the vulnerability that allows ARP spoofing to occur.[1][2][3] The basic principle behind ARP spoofing is to exploit the lack of authentication in the ARP protocol by sending spoofed ARP messages onto the LAN. ARP spoofing attacks can be run from a compromised host on the LAN, or from an attacker's machine that is connected directly to the target LAN. An attacker using ARP spoofing masquerades as a legitimate host in order to intercept data transmitted between users on the network,[4] and those users will not know that the attacker is not the real host.[4] Generally, the goal of the attack is to associate the attacker's host MAC address with the IP address of a target host, so that any traffic meant for the target host will be sent to the attacker's host. The attacker may choose to inspect the packets (spying), while forwarding the traffic to the actual default destination to avoid discovery, modify the data before forwarding it (man-in-the-middle attack), or launch a denial-of-service attack by causing some or all of the packets on the network to be dropped. The simplest form of certification is the use of static, read-only entries for critical services in the ARP cache of a host. IP-address-to-MAC-address mappings in the local ARP cache may be statically entered. Hosts don't need to transmit ARP requests where such entries exist.[5] While static entries provide some security against spoofing, they require significant maintenance effort, as address mappings for all systems in the network must be generated and distributed. This does not scale on a large network, since the mapping has to be set for each pair of machines, resulting in n² − n ARP entries to configure when n machines are present: each machine needs an ARP entry for every other machine on the network, i.e. n − 1 entries on each of the n machines. Software that detects ARP spoofing generally relies on some form of certification or cross-checking of ARP responses.
Uncertified ARP responses are then blocked. These techniques may be integrated with the DHCP server so that both dynamic and static IP addresses are certified. This capability may be implemented in individual hosts or may be integrated into Ethernet switches or other network equipment. The existence of multiple IP addresses associated with a single MAC address may indicate an ARP spoof attack, although there are legitimate uses of such a configuration. In a more passive approach, a device listens for ARP replies on a network, and sends a notification via email when an ARP entry changes.[6] AntiARP[7] also provides Windows-based spoofing prevention at the kernel level. ArpStar is a Linux module for kernel 2.6 and Linksys routers that drops invalid packets that violate mapping, and contains an option to repoison or heal. Some virtualized environments such as KVM also provide security mechanisms to prevent MAC spoofing between guests running on the same host.[8] Additionally, some Ethernet adapters provide MAC and VLAN anti-spoofing features.[9] OpenBSD watches passively for hosts impersonating the local host and notifies in case of any attempt to overwrite a permanent entry.[10] Operating systems react differently. Linux ignores unsolicited replies, but, on the other hand, uses responses to requests from other machines to update its cache. Solaris accepts updates on entries only after a timeout. In Microsoft Windows, the behavior of the ARP cache can be configured through several registry entries under HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters: ArpCacheLife, ArpCacheMinReferenceLife, ArpUseEtherSNAP, ArpTRSingleRoute, ArpAlwaysSourceRoute, ArpRetryCount.[11] The techniques that are used in ARP spoofing can also be used to implement redundancy of network services.
For example, some software allows a backup server to issue a gratuitous ARP request in order to take over for a defective server and transparently offer redundancy.[12][13] Circle[14] and CUJO are two companies that have commercialized products centered around this strategy.

ARP spoofing is often used by developers to debug IP traffic between two hosts when a switch is in use: if host A and host B are communicating through an Ethernet switch, their traffic would normally be invisible to a third monitoring host M. The developer configures A to have M's MAC address for B, and B to have M's MAC address for A, and also configures M to forward packets. M can now monitor the traffic, exactly as in a man-in-the-middle attack.

Various tools exist that can be used to carry out ARP spoofing attacks.
https://en.wikipedia.org/wiki/ARP_spoofing
Aspidistra was a British medium-wave radio transmitter used for black propaganda and military deception purposes against Nazi Germany during World War II. At times in its history it was the most powerful broadcast transmitter in the world. Its name – after the popular foliage houseplant – was inspired by the 1938 comic song "The Biggest Aspidistra in the World", best known as sung by Gracie Fields.

The transmitter was installed in 1942 at a purpose-built site near Crowborough on Ashdown Forest in southeast England. This was equipped with other medium-wave and short-wave transmitters, which also used the Aspidistra name, being known as ASPI 2, ASPI 3, ASPI 4, etc. However, when the name Aspidistra was used on its own it always referred to the original medium-wave transmitter (ASPI 1). The Crowborough station was run during the war by the British government's Political Warfare Executive (PWE). After the war, the station was run by the Diplomatic Wireless Service (DWS) and used for BBC External Service broadcasts to Europe. It closed in 1982.

Aspidistra broadcast on medium wave (AM) with 600 kW of power. The transmitter (originally 500 kW) had been built by RCA for WJZ radio in Newark, New Jersey, United States. But at the prompting of the United States Congress, spurred on by competition,[1] the Federal Communications Commission later imposed a 50 kW power limit on all US stations. RCA was therefore glad to sell it overseas, and the United Kingdom's Secret Intelligence Service bought it for £165,000.

In addition to its high power, Aspidistra could be re-tuned quickly to a new frequency. This was of great use in its secret wartime work and was unusual for a medium-wave transmitter, as they generally operated on a fixed frequency throughout their working life. Its antenna was three guyed masts, each 110 m (360 ft) tall, directing the signal broadly eastwards. The 1940s Art Deco style transmitter building was in an underground shelter which had been excavated by a Canadian army construction unit.
Power for the transmitter was supplied from two flat-eight diesels with blown superchargers made by Crossley-Premier, a heavy oil engine maker.[2] A single flywheel-type alternator, rated at 3,190 HP, was used to supply the power, as the local electricity supply did not have the necessary capacity. The facility was located in an elevated part of Ashdown Forest, about 195 metres (640 ft) above sea level, at King's Standing near Crowborough, East Sussex.

The RCA transmitter used three class B modulators in parallel feeding three parallel 170 kW (nominal) power amplifiers (PAs), run at 200 kW each to produce 600 kW in total. The PA units were operated in class C using four GL898 valves in each PA; four were also used in each modulator. The output of these PAs used a series-combining scheme to feed the RF power into a 100 ohm co-ax line. This fed the main mast, the other two masts being parasitic and providing the directional element necessary, as the purpose was to get the maximum signal eastwards into Europe. The GL898 valves were water/air-cooled triodes utilising a three-phase heater supply and having an anode dissipation of 40 kW.

Alongside the original Aspidistra, other medium-wave and short-wave transmitters were installed over the years. In the later period of the station's life these included two Doherty 250 kW medium-wave units, whose outputs could be combined to give 500 kW on a single frequency.[3] Two 100 kW short-wave transmitters made by General Electric (USA) operated at the Crowborough site from 1943 until the 1980s.[4]

At times in Aspidistra's history it was the most powerful broadcast transmitter in the world, though for most of its wartime service it was less powerful than a BBC longwave transmitter at Ottringham, near Hull, which was also used for broadcasting to continental Europe and continued in service until 1953.[5] Aspidistra was also less powerful than Germany's Goliath transmitter, though this was used not for broadcasting but for radiotelegraphy communications with U-boats.
Aspidistra went into service on 8 November 1942, and was in operation throughout the remainder of the war. Starting in 1943, Aspidistra was used to disrupt German night fighter operations against Allied bombers over Germany. German radar stations broadcast the movements of the bomber streams en route to targets during RAF Bomber Command's Battle of Berlin. As part of their strategies to misdirect the German fighters, German-speaking RAF operators impersonated these German ground control operators, sending fake instructions to the night fighters. They directed the night fighters to land or to move to the wrong sectors. This interference with enemy radiotelephony (RT) and wireless telegraphy (WT) was known as "Dartboard".[6] As German operational procedures changed to prevent impersonation, so the British copied them, bringing in WAAFs when the Germans used female operators.[7]

Aspidistra was also used for black propaganda operations, in which the propaganda material is issued from a disguised source. These activities came under the Political Warfare Executive, and were directed by Sefton Delmer. In particular, Aspidistra aired the broadcasts of Atlantiksender and Soldatensender Calais, which posed as official German military radio stations in France.[8]

During Allied air raids, German radio transmitters in target areas were switched off to prevent their use as navigational aids by the enemy. However, such transmitters were very often connected into a network, and broadcast the same content as other transmitters which were not switched off. When a targeted transmitter switched off, Aspidistra began transmitting on its frequency, initially retransmitting the German network broadcast as received from an active station. This would cause German listeners to believe the original station was still broadcasting. Aspidistra operators would then insert demoralizing false content and pro-Allied propaganda into the broadcast. This content was considered especially effective, as it appeared to be coming from official German sources.
These intrusion operations were an early example of a "man-in-the-middle" attack. The first such intrusion was carried out on 25 March 1945. On 30 March 1945 Aspidistra intruded on the Berlin and Hamburg stations, warning that the Allies were trying to spread confusion by sending false telephone messages from occupied towns to unoccupied towns. On 8 April 1945 Aspidistra intruded on the Hamburg and Leipzig stations to warn of forged banknotes in circulation. On 9 April 1945 there were announcements encouraging people to evacuate to seven bomb-free zones in central and southern Germany. All these announcements were false. German radio stations tried announcing "The enemy is broadcasting counterfeit instructions on our frequencies. Do not be misled by them. Here is an official announcement of the Reich authority." However, Aspidistra broadcasts included similar announcements, leaving listeners confused.[9]

Although mainly intended for the military and propaganda transmissions described above, Aspidistra was also used during the war for BBC European Service broadcasts on 804 kHz. After 1945, Aspidistra continued to be used by the BBC. Various frequencies were used over the years by the original transmitter and, later, by the Doherty transmitters at the site mentioned above. After a Europe-wide reorganisation of the medium wave band in 1978, Aspidistra used only 648 kHz, though it was able to do so without the previous restriction on its hours, as the channel was no longer shared with Radio 3.[10][11][12]

After the November 1978 reorganisation, the other medium wave frequency (1296 kHz) used by the BBC to broadcast to Europe was carried by the Doherty transmitters, which had been moved to a new Foreign Office transmitting station at Orfordness on the Suffolk coast, as it was better placed than Crowborough, which is inland. In September 1982, Orfordness also took over responsibility for transmissions on 648 kHz.
In 1970, under a swap agreement between the BBC External Service and the Voice of America, there was a daily one-hour exchange of airtime at Crowborough. From 2100 to 2200 GMT/UTC, 1295 kHz carried VOA English, while the BBC's Italian Service was carried by the VOA transmitter in Munich, Germany, on 1196 kHz.[citation needed] The Crowborough station was also used to a limited extent to relay broadcasts by Radio Canada International.[citation needed]

Despite its almost exclusive post-war use by the BBC, the Crowborough station remained formally in the hands of the Foreign Office (from 1968, the Foreign and Commonwealth Office, FCO), and its staff were members of the Diplomatic Wireless Service (later known as the FCO's Communications Department and then the Communications Engineering Department) rather than the BBC.

Aspidistra made its final transmission (on 648 kHz) on 28 September 1982, the honour of pressing the "off" key for the last time going to Harold Robin, the Foreign Office engineer who had been responsible 40 years earlier for purchasing the transmitter in the US and setting up the station at Crowborough. The station was dismantled in 1984. Two years later, following extensive modifications, the bunker that housed the Aspidistra transmitter was commissioned by the Home Office as one of the 17 sites in England and Wales to be used as seats of regional government in the event of a nuclear attack.[13] From 1988, Sussex Police used parts of the site, purchasing all of it in 1996 for use as a training facility.[14]

In 2007, Building No. 3 (known as "the cinema" because of its design similarities with pre-war Art Deco cinemas), which had once housed ASPI 3 and ASPI 6, was designated a Grade II listed structure because of its historic and architectural interest.
The designation notes that it is "a remarkably intact and unaltered building through which one can understand its function as an early 1940s transmitter hall".[15]

A reported offer to donate the Aspidistra transmitter to London's Science Museum was not taken up, and it was scrapped. A number of valves (tubes) and a large tuning coil were saved by FCO engineers and are now on display in the foyer of the Orfordness station.[16] A notice there says:

One of three RF output coupling coils from the Aspidistra 1 transmitter at Crowborough in Sussex. ASP1 was a 600 kW medium wave transmitter which was installed and commissioned with great urgency by Harold Robin during the spring and summer of 1942 and which commenced broadcasting on 8 November 1942. The transmitter was in continuous service for the next 40 years, carrying "black" propaganda to the enemy in wartime and BBC External Services to Europe in peacetime. It ceased regular transmissions on 28 September 1982 and its services were transferred to Orfordness. It was dismantled in May 1984. This coil is preserved as a memento of a transmitter which played a large part in the wartime activities of the Political Warfare Executive. In peacetime it became part of the expanding broadcast transmission facilities provided for the External Services of the BBC by the Diplomatic Wireless Service – now the Communications Engineering Department of the Foreign and Commonwealth Office. October 1984

The site is located at 51°2′33.70″N 0°6′15.15″E.
https://en.wikipedia.org/wiki/Aspidistra_(transmitter)
The Babington Plot was a plan in 1586 to assassinate Queen Elizabeth I, a Protestant, and put Mary, Queen of Scots, her Catholic cousin, on the English throne. It led to Mary's execution, a result of a letter sent by Mary (who had been imprisoned for 19 years, since 1568, in England at the behest of Elizabeth) in which she consented to the assassination of Elizabeth.[1]

The long-term goal of the plot was the invasion of England by the Spanish forces of King Philip II and the Catholic League in France, leading to the restoration of the old religion. The plot was discovered by Elizabeth's spymaster Sir Francis Walsingham and used to entrap Mary for the purpose of removing her as a claimant to the English throne.

The chief conspirators were Anthony Babington and John Ballard. Babington, a young recusant, was recruited by Ballard, a Jesuit priest who hoped to rescue the Scottish queen. Working for Walsingham were double agents Robert Poley and Gilbert Gifford, as well as Thomas Phelippes, a spy agent and cryptanalyst, and the Puritan spy Maliverey Catilyn. The turbulent Catholic deacon Gifford had been in Walsingham's service since the end of 1585 or the beginning of 1586. Gifford obtained a letter of introduction to Queen Mary from a confidant and spy for her, Thomas Morgan. Walsingham then placed double agent Gifford and spy decipherer Phelippes inside Chartley Castle, where Queen Mary was imprisoned. Gifford carried out Walsingham's plan of placing Babington's and Queen Mary's encrypted communications into a beer barrel cork, from where they were intercepted by Phelippes, decoded and sent to Walsingham.[2]

On 7 July 1586, the only Babington letter that was sent to Mary was decoded by Phelippes. Mary responded in code on 17 July 1586, ordering the would-be rescuers to assassinate Queen Elizabeth. The response letter also included deciphered phrases indicating her desire to be rescued: "The affairs being thus prepared" and "I may suddenly be transported out of this place".
At the Fotheringhay trial in October 1586, Elizabeth's Lord High Treasurer William Cecil – Lord Burghley – and Walsingham used the letter against Mary, who refused to admit that she was guilty. However, Mary was betrayed by her secretaries Nau and Curle, who confessed under pressure that the letter was mainly truthful.[3]

Mary, Queen of Scots, a Roman Catholic, was regarded by Roman Catholics as the legitimate heir to the throne of England. In 1568, she escaped imprisonment by Scottish rebels and sought the aid of her first cousin once removed, Queen Elizabeth I, a year after her forced abdication from the throne of Scotland. The issuance of the papal bull Regnans in Excelsis by Pope Pius V on 25 February 1570 granted English Catholics authority to overthrow the English queen. Queen Mary became the focal point of numerous plots and intrigues to restore England to its former religion, Catholicism, to depose Elizabeth, and even to take her life. Rather than rendering the expected aid, Elizabeth imprisoned Mary for nineteen years in the charge of a succession of jailers, principally the Earl of Shrewsbury.

In 1584, Elizabeth's Privy Council signed a "Bond of Association", designed by Cecil and Walsingham, which stated that anyone within the line of succession to the throne on whose behalf anyone plotted against the Queen would be excluded from the line and executed. This was agreed upon by hundreds of Englishmen, who likewise signed the Bond. Mary also agreed to sign the Bond. The following year, Parliament passed the Act of Association, which provided for the execution of anyone who would benefit from the death of the Queen if a plot against her was discovered.
Because of the bond, Mary could be executed if a plot initiated by others could lead to her accession to England's throne.[4]

In 1585, Elizabeth ordered Mary to be transferred in a coach, under heavy guard, and placed under the strictest confinement at Chartley Hall in Staffordshire, under the control of Sir Amias Paulet. She was prohibited any correspondence with the outside world. The Puritan Paulet was chosen by Queen Elizabeth in part because he abhorred Queen Mary's Catholic faith.

Reacting to the growing threat posed by Catholics, urged on by the pope and other Catholic monarchs in Europe, Francis Walsingham, Queen Elizabeth's Secretary of State and spymaster, together with William Cecil, Elizabeth's chief advisor, realised that if Mary could be implicated in a plot to assassinate Elizabeth, she could be executed and the papist threat diminished. As he wrote to the Earl of Leicester: "So long as that devilish woman lives, neither Her Majesty must make account to continue in quiet possession of her crown, nor her faithful servants assure themselves of safety of their lives."[5]

Walsingham used Babington to ensnare Queen Mary by sending his double agent, Gilbert Gifford, to Paris to obtain the confidence of Morgan, then locked in the Bastille. Morgan had previously worked for George Talbot, 6th Earl of Shrewsbury, an earlier jailer of Queen Mary, and through Shrewsbury Queen Mary became acquainted with Morgan. Queen Mary sent Morgan to Paris to deliver letters to the French court. While in Paris, Morgan became involved in an earlier plot designed by William Parry, which resulted in his incarceration in the Bastille. In 1585, Gifford was arrested on returning to England through Rye in Sussex with letters of introduction from Morgan to Queen Mary. Walsingham released Gifford to work as a double agent in the Babington Plot. Gifford used the alias "No. 4", just as he had used other aliases such as Colerdin, Pietro and Cornelys.
Walsingham had Gifford function as a courier in the entrapment plot against Queen Mary. The Babington Plot was related to several separate plans.

At the behest of Mary's French supporters, John Ballard, a Jesuit priest and agent of the Roman Church, went to England on various occasions in 1585 to secure promises of aid from the northern Catholic gentry on behalf of Mary. In March 1586, he met with John Savage, an ex-soldier who was involved in a separate plot against Elizabeth and who had sworn an oath to assassinate the queen. Savage was resolved in this plot after consulting with three friends: Dr. William Gifford, Christopher Hodgson and Gilbert Gifford. Gilbert Gifford had been arrested by Walsingham and agreed to be a double agent; according to Conyers Read, Gifford was already in Walsingham's employ by the time Savage was going ahead with the plot.[7]

Later that same year, Ballard reported to Charles Paget and the Spanish diplomat Don Bernardino de Mendoza, and told them that English Catholics were prepared to mount an insurrection against Elizabeth, provided that they would be assured of foreign support. While it was uncertain whether Ballard's report of the extent of Catholic opposition was accurate, what was certain is that he was able to secure assurances that support would be forthcoming. He then returned to England, where he persuaded a member of the Catholic gentry, Anthony Babington, to lead and organise the English Catholics against Elizabeth. Ballard informed Babington about the plans that had been so far proposed. Babington's later confession made it clear that Ballard was sure of the support of the Catholic League:

He told me he was retorned from Fraunce uppon this occasion.
Being with Mendoza at Paris, he was informed that in regarde of the iniuries don by our state unto the greatest Christian princes, by the nourishinge of sedition and divisions in their provinces, by withholding violently the lawful possessions of some, by invasion of the Indies and by piracy, robbing the treasure and the wealthe of others, and sondry intolerable wronges for so great and mighty princes to indure, it was resolved by the Catholique league to seeke redresse and satisfaction, which they had vowed to performe this sommer without farther delay, havinge in readiness suche forces and all warlike preparations as the like was never scene in these partes of Christendome ... The Pope was chief disposer, the most Christian king and the king Catholic with all other princes of the league concurred as instruments for the righting of these wronges, and reformation of religion. The conductors of this enterprise for the French nation, the D. of Guise, or his brother the D. de Main; for the Italian and Hispanishe forces, the P. of Parma; the whole number about 60,000.[8]

Despite this assurance of foreign support, Babington was hesitant, as he thought that no foreign invasion would succeed for as long as Elizabeth remained, to which Ballard answered that the plans of John Savage would take care of that. After a lengthy discussion with friends and soon-to-be fellow conspirators, Babington consented to join and to lead the conspiracy.[9]

Unfortunately for the conspirators, Walsingham was certainly aware of some aspects of the plot, based on reports by his spies, most notably Gilbert Gifford, who kept tabs on all the major participants. While he could have shut down some part of the plot and arrested some of those involved within reach, he still lacked any piece of evidence that would prove Queen Mary's active participation in the plot, and he feared to commit any mistake which might cost Elizabeth her life.
After the Throckmorton Plot, Queen Elizabeth had issued a decree in July 1584 which prevented all communication to and from Mary. However, Walsingham and Cecil realised that the decree also impaired their ability to entrap Mary: they needed evidence on which she could be executed under the tenets of their Bond of Association. Thus Walsingham established a new line of communication, one which he could carefully control without incurring any suspicion from Mary.

Gifford approached the French ambassador to England, Guillaume de l'Aubespine, Baron de Châteauneuf-sur-Cher, and described the new correspondence arrangement that had been designed by Walsingham. Gifford and the jailer Paulet had arranged for a local brewer to facilitate the movement of messages between Queen Mary and her supporters by placing them in a watertight box inside a beer barrel.[10][11] Thomas Phelippes, a cipher and language expert in Walsingham's employ, was then quartered at Chartley Hall to receive the messages, decode them and send them to Walsingham. Gifford submitted a code table (supplied by Walsingham) to Chateauneuf and requested that the first message be sent to Mary.[12]

All subsequent messages to Mary would be sent via diplomatic packets to Chateauneuf, who then passed them on to Gifford. Gifford would pass them on to Walsingham, who would confide them to Phelippes. The cipher used was a nomenclator cipher.[13] Phelippes would decode and make a copy of each letter. The letter was then resealed and given back to Gifford, who would pass it on to the brewer. The brewer would then smuggle the letter to Mary. If Mary sent a letter to her supporters, it would go through the reverse process. In short order, every message coming to and from Chartley was intercepted and read by Walsingham.[14]

Babington wrote to Mary:[15]

Myself with ten gentlemen and a hundred of our followers will undertake the delivery of your royal person from the hands of your enemies.
For the dispatch of the usurper, from the obedience of whom we are by the excommunication of her made free, there be six noble gentlemen, all my private friends, who for the zeal they bear to the Catholic cause and your Majesty's service will undertake that tragical execution.[16][17]

This letter was received by Mary on 14 July 1586. She was in a dark mood, knowing that her son had betrayed her in favour of Elizabeth,[18] and three days later she replied to Babington in a long letter in which she outlined the components of a successful rescue and the need to assassinate Elizabeth. She also stressed the necessity of foreign aid if the rescue attempt was to succeed:

For I have long ago shown unto the foreign Catholic princes, what they have done against the King of Spain, and in the time the Catholics here remaining, exposed to all persecutions and cruelty, do daily diminish in number, forces, means and power. So as, if remedy be not thereunto speedily provided, I fear not a little but they shall become altogether unable for ever to rise again and to receive any aid at all, whensoever it were offered. Then for mine own part, I pray you to assure our principal friends that, albeit I had not in this cause any particular interest in this case ... I shall be always ready and most willing to employ therein my life and all that I have, or may ever look for, in this world.[19][20][21]

Mary, in her response letter, advised the would-be rescuers to confront the Puritans and to link her case to the Queen of England as her heir.
These pretexts may serve to found and establish among all associations, or considerations general, as done only for your preservation and defence, as well in religions as lands, lives and goods, against the oppressions and attempts of said Puritans, without directly writing, or giving out anything against the Queen, but rather showing yourselves willing to maintain her, and her lawful heirs after her, not naming me.[22]

Mary was clear in her support for the murder of Elizabeth if that would lead to her liberty and Catholic domination of England. In addition, Queen Mary supported in that letter, and in another one to Ambassador Bernardino de Mendoza, a Spanish invasion of England.

The letter was again intercepted and deciphered by Phelippes. But this time, Phelippes, on the direction of Walsingham, kept the original and made a copy, adding a request for the names of the conspirators:[23][24]

I would be glad to know the names and quelityes of the sixe gentlemen which are to accomplish the dessignement, for that it may be, I shall be able uppon knowledge of the parties to give you some further advise necessarye to be followed therein; and even so do I wish to be made acquainted with the names of all such principall persons ... as also from time to time particularlye how you proceede and as sone as you may for the same purpose who bee alredye and how farr every one privye hereunto.[25][26][27]

Then a letter was sent that would destroy Mary's life: "Let the great plot commence. Signed, Mary."

John Ballard was arrested on 4 August 1586, and under torture he confessed and implicated Babington. Although Babington was able to receive the letter with the postscript, he was not able to reply with the names of the conspirators, as he was arrested. The other conspirators were taken prisoner by 15 August 1586.
Mary's two secretaries, Claude Nau and Gilbert Curle, and a clerk, Jérôme Pasquier, were likewise taken into custody and interrogated.[28] A large chest filled with Mary's papers, seized at Chartley, was taken to London.[29]

The conspirators were sentenced to death for treason and conspiracy against the crown, and were to be hanged, drawn, and quartered. The first group included Babington, Ballard, Chidiock Tichborne, Thomas Salisbury, Henry Donn, Robert Barnewell and John Savage. A further group of seven men, including Edward Habington, Charles Tilney, Edward Jones, John Charnock, John Travers, Jerome Bellamy, and Robert Gage, were tried and convicted shortly afterward. Ballard and Babington were executed on 20 September 1586, along with the other men who had been tried with them. Such was the public outcry at the horror of their execution that Elizabeth changed the order for the second group, allowing them to hang until "quite dead" before disembowelling and quartering.

In October 1586, Mary was sent to be tried at Fotheringhay Castle in Northamptonshire by 46 English lords, bishops and earls. She was not permitted legal counsel, not permitted to review the evidence against her, nor to call witnesses. Portions of Phelippes' letter translations were read at the trial. Mary denied knowing Babington and Ballard,[30] but it was insisted that she had sent a reply to Babington using the same cipher code, entrusting the letter to a servant in a blue coat.[31] Mary was convicted of treason against England; one English lord voted not guilty. Elizabeth signed her cousin's death warrant,[32] and on 8 February 1587, in front of 300 witnesses, Mary, Queen of Scots, was executed by beheading.[33]

Mary Stuart (German: Maria Stuart), a dramatised version of the last days of Mary, Queen of Scots, including the Babington Plot, was written by Friedrich Schiller and performed in Weimar, Germany, in 1800. This in turn formed the basis for Maria Stuarda, an opera by Donizetti, in 1835.
The Babington Plot occurs before the events of the opera and is only referenced twice during it, the second occasion being Mary admitting her own part in it in private to her confessor (a role taken by Lord Talbot in the opera, although not in real life).

The story of the Babington Plot is dramatised in the novel Conies in the Hay by Jane Lane (ISBN 0-7551-0835-3), and features prominently in Anthony Burgess's A Dead Man in Deptford. A fictional account is given in The Queen's Spies (retitled To Kill A Queen in 2008) in the My Story book series, told in diary format by a fictional Elizabethan girl, Kitty. The Babington Plot forms the historical background – and provides much of the intrigue – for Holy Spy, the seventh in the historical detective series by Rory Clements, featuring John Shakespeare, an intelligencer for Walsingham and elder brother of the more famous Will.

A simplified version of the Babington Plot is also the subject of the children's and young adult novel A Traveller in Time (1939) by Alison Uttley, who grew up near the Babington family home in Derbyshire. A young modern girl finds that she slips back to the time shortly before the Plot is about to be implemented. This was later made into a BBC TV mini-series in 1978, with small changes to the original novel.

The Babington Plot is also dramatized in the 2017 Ken Follett novel A Column of Fire, in Jacopo della Quercia's 2015 novel License to Quill, and in SJ Parris's 2020 novel Execution, the latest of her novels featuring Giordano Bruno as protagonist. It is likewise dramatized in the 2024 T. S. Milbourne novel Gilbert Gifford, which focuses on Gilbert Gifford, the double agent in the plot, and portrays his complicated position through a dramatization of the people involved. The plot figures prominently in the first chapter of The Code Book, a survey of the history of cryptography written by Simon Singh and published in 1999.
Episode four of the 1971 television miniseries Elizabeth R (titled "Horrible Conspiracies") is devoted to the Babington Plot. It is also depicted in the miniseries Elizabeth I (2005) and the films Mary, Queen of Scots (1971), Elizabeth: The Golden Age (2007) and Mary Queen of Scots (2018).

A 45-minute drama entitled The Babington Plot, written by Michael Butt and directed by Sasha Yevtushenko, was broadcast on BBC Radio 4 on 2 December 2008 as part of the Afternoon Drama.[34] This drama took the form of a documentary on the first anniversary of the executions, with the story being told from the perspectives of Thomas Salisbury, Robert Poley, Gilbert Gifford and others who, while not conspirators, are in some way connected with the events, all of whom are interviewed by the Presenter (played by Stephen Greif). The cast also included Samuel Barnett as Thomas Salisbury, Burn Gorman as Robert Poley, Jonathan Taffler as Thomas Phelippes and Inam Mirza as Gilbert Gifford.

Episode one of the 2017 BBC miniseries Elizabeth I's Secret Agents[35] (broadcast in the U.S. on PBS in 2018 as Queen Elizabeth's Secret Agents[36]) deals in part with the Babington Plot.
https://en.wikipedia.org/wiki/Babington_Plot
Computer security (also cybersecurity, digital security, or information technology (IT) security) is a subdiscipline within the field of information security. It consists of the protection of computer software, systems and networks from threats that can lead to unauthorized information disclosure, theft of or damage to hardware, software, or data, as well as from the disruption or misdirection of the services they provide.[1][2]

The significance of the field stems from the expanded reliance on computer systems, the Internet,[3] and wireless network standards. Its importance is further amplified by the growth of smart devices, including smartphones, televisions, and the various devices that constitute the Internet of things (IoT). Cybersecurity has emerged as one of the most significant new challenges facing the contemporary world, due to both the complexity of information systems and the societies they support. Security is particularly crucial for systems that govern large-scale operations with far-reaching physical effects, such as power distribution, elections, and finance.[4][5]

Although many aspects of computer security involve digital measures such as electronic passwords and encryption, physical security measures such as metal locks are still used to prevent unauthorized tampering. IT security is not a perfect subset of information security, and therefore does not align completely with the security convergence schema.

A vulnerability is a flaw in the structure, execution, functioning, or internal oversight of a computer or system that compromises its security. Most of the vulnerabilities that have been discovered are documented in the Common Vulnerabilities and Exposures (CVE) database.[6] An exploitable vulnerability is one for which at least one working attack or exploit exists.[7] Actors maliciously seeking vulnerabilities are known as threats.
Vulnerabilities can be researched, reverse-engineered, hunted, or exploited using automated tools or customized scripts.[8][9] Various people or parties are vulnerable to cyber attacks; however, different groups are likely to experience different types of attacks.[10] In April 2023, the United Kingdom Department for Science, Innovation & Technology released a report on cyber attacks over the previous 12 months.[11] It surveyed 2,263 UK businesses, 1,174 UK registered charities, and 554 education institutions. The research found that "32% of businesses and 24% of charities overall recall any breaches or attacks from the last 12 months." These figures were much higher for "medium businesses (59%), large businesses (69%), and high-income charities with £500,000 or more in annual income (56%)."[11] Yet, although medium and large businesses are more often the victims, since larger companies have generally improved their security over the last decade, small and midsize businesses (SMBs) have become increasingly vulnerable as they often "do not have advanced tools to defend the business."[10] SMBs are most likely to be affected by malware, ransomware, phishing, man-in-the-middle attacks, and denial-of-service (DoS) attacks.[10] Ordinary internet users are most likely to be affected by untargeted cyberattacks,[12] in which attackers indiscriminately target as many devices, services, or users as possible, using techniques that take advantage of the openness of the Internet. These strategies mostly include phishing, ransomware, water holing and scanning.[12] To secure a computer system, it is important to understand the attacks that can be made against it, and these threats can typically be classified into one of the following categories: A backdoor in a computer system, a cryptosystem, or an algorithm is any secret method of bypassing normal authentication or security controls.
These weaknesses may exist for many reasons, including original design flaws or poor configuration.[13] Due to the nature of backdoors, they are of greater concern to companies and databases than to individuals. Backdoors may be added by an authorized party to allow some legitimate access, or by an attacker for malicious reasons. Criminals often use malware to install backdoors, giving them remote administrative access to a system.[14] Once they have access, cybercriminals can "modify files, steal personal information, install unwanted software, and even take control of the entire computer."[14] Backdoors can be difficult to detect, as they often remain hidden within the source code or system firmware, and discovering them typically requires intimate knowledge of the operating system of the computer. Denial-of-service attacks (DoS) are designed to make a machine or network resource unavailable to its intended users.[15] Attackers can deny service to individual victims, such as by deliberately entering a wrong password enough consecutive times to cause the victim's account to be locked, or they may overload the capabilities of a machine or network and block all users at once. While a network attack from a single IP address can be blocked by adding a new firewall rule, many forms of distributed denial-of-service (DDoS) attacks are possible, where the attack comes from a large number of points, and defending against them is much more difficult. Such attacks can originate from the zombie computers of a botnet or from a range of other possible techniques, including distributed reflective denial-of-service (DRDoS), where innocent systems are fooled into sending traffic to the victim.[15] With such attacks, the amplification factor makes the attack easier for the attacker, who needs to use little bandwidth of their own. To understand why attackers may carry out these attacks, see the attacker motivation section.
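The account-lockout policy described above (and the way that very policy can be abused to deny service to a victim) can be sketched in a few lines of Python. This is a minimal illustration; the class name, thresholds, and timeout are invented for the example, not taken from any particular product.

```python
import time

# Illustrative thresholds: lock an account after 5 consecutive failed
# logins, for 5 minutes. Note the trade-off described above: this same
# mechanism lets an attacker lock a victim out by deliberately entering
# wrong passwords.
MAX_FAILURES = 5
LOCKOUT_SECONDS = 300

class LoginTracker:
    def __init__(self):
        self.failures = {}      # username -> consecutive failure count
        self.locked_until = {}  # username -> unlock timestamp

    def is_locked(self, user, now=None):
        now = time.time() if now is None else now
        return self.locked_until.get(user, 0) > now

    def record_failure(self, user, now=None):
        now = time.time() if now is None else now
        self.failures[user] = self.failures.get(user, 0) + 1
        if self.failures[user] >= MAX_FAILURES:
            self.locked_until[user] = now + LOCKOUT_SECONDS
            self.failures[user] = 0  # reset the counter once locked

    def record_success(self, user):
        # A successful login clears the consecutive-failure count.
        self.failures.pop(user, None)
```

Real systems typically combine this with rate limiting per IP address and exponential back-off, so that a targeted lockout attack is harder to mount.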
A direct-access attack is when an unauthorized user (an attacker) gains physical access to a computer, most likely to directly copy data from it or steal information.[16] Attackers may also compromise security by making operating system modifications, installing software worms, keyloggers or covert listening devices, or using wireless microphones. Even when the system is protected by standard security measures, these may be bypassed by booting another operating system or tool from a CD-ROM or other bootable media. Disk encryption and the Trusted Platform Module standard are designed to prevent these attacks. Direct-access attacks are related in concept to direct memory attacks, which allow an attacker to gain direct access to a computer's memory.[17] These attacks "take advantage of a feature of modern computers that allows certain devices, such as external hard drives, graphics cards, or network cards, to access the computer's memory directly."[17] Eavesdropping is the act of surreptitiously listening to a private computer conversation (communication), usually between hosts on a network. It typically occurs when a user connects to a network where traffic is not secured or encrypted and sends sensitive business data to a colleague, which, when listened to by an attacker, could be exploited.[18] Data transmitted across an open network allows an attacker to exploit a vulnerability and intercept it via various methods. Unlike malware, direct-access attacks, or other forms of cyber attacks, eavesdropping attacks are unlikely to negatively affect the performance of networks or devices, making them difficult to notice.[18] In fact, "the attacker does not need to have any ongoing connection to the software at all.
The attacker can insert the software onto a compromised device, perhaps by direct insertion or perhaps by a virus or other malware, and then come back some time later to retrieve any data that is found or trigger the software to send the data at some determined time."[19] Using a virtual private network (VPN), which encrypts data between two points, is one of the most common forms of protection against eavesdropping. Using the strongest form of encryption possible for wireless networks is best practice, as is using HTTPS instead of unencrypted HTTP.[20] Programs such as Carnivore and NarusInSight have been used by the Federal Bureau of Investigation (FBI) and NSA to eavesdrop on the systems of internet service providers. Even machines that operate as a closed system (i.e., with no contact with the outside world) can be eavesdropped upon by monitoring the faint electromagnetic transmissions generated by the hardware. TEMPEST is a specification by the NSA referring to these attacks. Malicious software (malware) is any software code or computer program "intentionally written to harm a computer system or its users."[21] Once present on a computer, it can leak sensitive details such as personal information, business information and passwords, can give control of the system to the attacker, and can corrupt or delete data permanently.[22][23] Man-in-the-middle attacks (MITM) involve a malicious attacker trying to intercept, surveil or modify communications between two parties by spoofing one or both parties' identities and injecting themselves in between.[24] Types of MITM attacks include: Surfacing in 2017, a new class of multi-vector,[25] polymorphic[26] cyber threats combine several types of attacks and change form to avoid cybersecurity controls as they spread. Multi-vector polymorphic attacks, as the name describes, are both multi-vectored and polymorphic.[27] Firstly, they are a singular attack that involves multiple methods of attack. In this sense, they are "multi-vectored (i.e.
the attack can use multiple means of propagation such as via the Web, email and applications." However, they are also multi-staged, meaning that "they can infiltrate networks and move laterally inside the network."[27] The attacks can be polymorphic, meaning that the cyberattacks used, such as viruses, worms or trojans, "constantly change ("morph"), making it nearly impossible to detect them using signature-based defences."[27] Phishing is the attempt to acquire sensitive information such as usernames, passwords, and credit card details directly from users by deceiving them.[28] Phishing is typically carried out by email spoofing, instant messaging, text message, or on a phone call. Attackers often direct users to enter details at a fake website whose look and feel are almost identical to the legitimate one.[29] The fake website often asks for personal information, such as login details and passwords. This information can then be used to gain access to the individual's real account on the real website. Preying on a victim's trust, phishing can be classified as a form of social engineering. Attackers can use creative ways to gain access to real accounts. A common scam is for attackers to send fake electronic invoices[30] to individuals showing that they recently purchased music, apps, or other items, and instructing them to click on a link if the purchases were not authorized. A more strategic type of phishing is spear-phishing, which leverages personal or organization-specific details to make the attacker appear like a trusted source.
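One trick behind the fake websites described above is a near-miss hostname (for example, a digit "1" standing in for a letter "l"). A crude heuristic for flagging such lookalike domains can be sketched in Python using the standard library's string-similarity matcher; the trusted-domain list, threshold, and function name here are invented for illustration, and real anti-phishing systems use far more sophisticated signals.

```python
from difflib import SequenceMatcher
from urllib.parse import urlparse

# Hypothetical list of domains the user is known to visit.
TRUSTED = {"paypal.com", "google.com", "amazon.com"}

def looks_like_phish(url, trusted=TRUSTED, threshold=0.8):
    """Flag a URL whose hostname is suspiciously similar to, but not
    exactly, a trusted domain (e.g. "paypa1.com" vs "paypal.com")."""
    host = (urlparse(url).hostname or "").lower().removeprefix("www.")
    if host in trusted:
        return False  # exact match with a trusted domain is fine
    # Near-miss: very similar to a trusted domain without being identical.
    return any(SequenceMatcher(None, host, t).ratio() >= threshold
               for t in trusted)
```

For example, `looks_like_phish("https://paypa1.com/login")` returns True, while the legitimate `https://www.paypal.com/` does not trigger the check.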
Spear-phishing attacks target specific individuals, rather than the broad net cast by phishing attempts.[31] Privilege escalation describes a situation where an attacker with some level of restricted access is able to, without authorization, elevate their privileges or access level.[32] For example, a standard computer user may be able to exploit a vulnerability in the system to gain access to restricted data, or even become root and have full unrestricted access to a system. The severity of attacks can range from simply sending an unsolicited email to a ransomware attack on large amounts of data. Privilege escalation usually starts with social engineering techniques, often phishing.[32] Privilege escalation can be separated into two strategies, horizontal and vertical privilege escalation: Any computational system affects its environment in some form. This effect can range from electromagnetic radiation, to residual effects on RAM cells which as a consequence make a cold boot attack possible, to hardware implementation faults that allow for access to, or guessing of, values that should normally be inaccessible. In side-channel attack scenarios, the attacker gathers such information about a system or network to infer its internal state and, as a result, access information which the victim assumes to be secure. The target information in a side channel can be challenging to detect due to its low amplitude when combined with other signals.[33] Social engineering, in the context of computer security, aims to convince a user to disclose secrets such as passwords or card numbers, or to grant physical access by, for example, impersonating a senior executive, bank, contractor, or customer.[34] This generally involves exploiting people's trust and relying on their cognitive biases. A common scam involves emails sent to accounting and finance department personnel, impersonating their CEO and urgently requesting some action.
One of the main techniques of social engineering is the phishing attack. In early 2016, the FBI reported that such business email compromise (BEC) scams had cost US businesses more than $2 billion in about two years.[35] In May 2016, the Milwaukee Bucks NBA team was the victim of this type of cyber scam, with a perpetrator impersonating the team's president Peter Feigin, resulting in the handover of all the team's employees' 2015 W-2 tax forms.[36] Spoofing is an act of pretending to be a valid entity through the falsification of data (such as an IP address or username), in order to gain access to information or resources that one is otherwise unauthorized to obtain. Spoofing is closely related to phishing.[37][38] There are several types of spoofing, including: In 2018, the cybersecurity firm Trellix published research on the life-threatening risk of spoofing in the healthcare industry.[40] Tampering describes a malicious modification or alteration of data. It is an intentional but unauthorized act resulting in the modification of a system, components of systems, their intended behavior, or data. So-called Evil Maid attacks and security services' planting of surveillance capability into routers are examples.[41] HTML smuggling allows an attacker to smuggle malicious code inside a particular HTML or web page.[42] HTML files can carry payloads concealed as benign, inert data in order to defeat content filters. These payloads can be reconstructed on the other side of the filter.[43] When a target user opens the HTML, the malicious code is activated; the web browser then decodes the script, which then unleashes the malware onto the target's device.[42] Employee behavior can have a big impact on information security in organizations. Cultural concepts can help different segments of the organization work effectively toward information security, or work against it.
Information security culture is the "...totality of patterns of behavior in an organization that contributes to the protection of information of all kinds."[44] Andersson and Reimers (2014) found that employees often do not see themselves as part of their organization's information security effort and often take actions that impede organizational changes.[45] Indeed, the Verizon Data Breach Investigations Report 2020, which examined 3,950 security breaches, found that 30% of cybersecurity incidents involved internal actors within a company.[46] Research shows that information security culture needs to be improved continuously. In "Information Security Culture from Analysis to Change", the authors commented, "It's a never-ending process, a cycle of evaluation and change or maintenance." To manage the information security culture, five steps should be taken: pre-evaluation, strategic planning, operative planning, implementation, and post-evaluation.[47] In computer security, a countermeasure is an action, device, procedure or technique that reduces a threat, a vulnerability, or an attack by eliminating or preventing it, by minimizing the harm it can cause, or by discovering and reporting it so that corrective action can be taken.[48][49][50] Some common countermeasures are listed in the following sections: Security by design, or alternately secure by design, means that the software has been designed from the ground up to be secure. In this case, security is considered a main feature.
The UK government's National Cyber Security Centre separates secure cyber design principles into five sections:[51] These design principles of security by design can include some of the following techniques: Security architecture can be defined as the "practice of designing computer systems to achieve security goals."[52] These goals overlap with the principles of security by design explored above, including to "make initial compromise of the system difficult," and to "limit the impact of any compromise."[52] In practice, the role of a security architect is to ensure that the structure of a system reinforces its security, and that new changes are safe and meet the security requirements of the organization.[53][54] Similarly, Techopedia defines security architecture as "a unified security design that addresses the necessities and potential risks involved in a certain scenario or environment. It also specifies when and where to apply security controls. The design process is generally reproducible." The key attributes of security architecture are:[55] Practicing security architecture provides the right foundation to systematically address business, IT and security concerns in an organization. A state of computer security is the conceptual ideal, attained by the use of three processes: threat prevention, detection, and response. These processes are based on various policies and system components, which include the following: Today, computer security consists mainly of preventive measures, like firewalls or an exit procedure. A firewall can be defined as a way of filtering network data between a host or a network and another network, such as the Internet.
They can be implemented as software running on the machine, hooking into the network stack (or, in the case of most UNIX-based operating systems such as Linux, built into the operating system kernel) to provide real-time filtering and blocking.[56] Another implementation is a so-called physical firewall, which consists of a separate machine filtering network traffic. Firewalls are common among machines that are permanently connected to the Internet. Some organizations are turning to big data platforms, such as Apache Hadoop, to extend data accessibility and machine learning to detect advanced persistent threats.[58] In order to ensure adequate security, the confidentiality, integrity and availability of a network, better known as the CIA triad, must be protected; this is considered the foundation of information security.[59] To achieve those objectives, administrative, physical and technical security measures should be employed. The amount of security afforded to an asset can only be determined when its value is known.[60] Vulnerability management is the cycle of identifying, fixing or mitigating vulnerabilities,[61] especially in software and firmware. Vulnerability management is integral to computer security and network security. Vulnerabilities can be discovered with a vulnerability scanner, which analyzes a computer system in search of known vulnerabilities,[62] such as open ports, insecure software configuration, and susceptibility to malware. For these tools to be effective, they must be kept up to date with every new update the vendor releases. Typically, these updates scan for the new vulnerabilities that were introduced recently. Beyond vulnerability scanning, many organizations contract outside security auditors to run regular penetration tests against their systems to identify vulnerabilities.
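The "open ports" check that vulnerability scanners perform, as described above, can be illustrated with a toy TCP connect scan in Python. This is only a sketch of the basic idea; real scanners such as Nmap additionally do service fingerprinting, version detection, and lookups against vulnerability databases.

```python
import socket

def open_ports(host, ports, timeout=0.5):
    """Return the subset of `ports` on `host` that accept a TCP connection."""
    found = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            # connect_ex returns 0 on success instead of raising,
            # which makes it convenient for a scan loop.
            if s.connect_ex((host, port)) == 0:
                found.append(port)
    return found
```

For example, `open_ports("127.0.0.1", range(1, 1025))` lists listening services on the local machine; scanning hosts you do not own is generally unlawful.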
In some sectors, this is a contractual requirement.[63] The act of assessing and reducing vulnerabilities to cyber attacks is commonly referred to as information technology security assessment. Such assessments aim to evaluate systems for risk and to predict and test for their vulnerabilities. While formal verification of the correctness of computer systems is possible,[64][65] it is not yet common. Operating systems formally verified include seL4[66] and SYSGO's PikeOS,[67][68] but these make up a very small percentage of the market. It is possible to reduce an attacker's chances by keeping systems up to date with security patches and updates and by hiring people with expertise in security. Large companies with significant threats can hire Security Operations Centre (SOC) analysts. These are specialists in cyber defences, with their role ranging from "conducting threat analysis to investigating reports of any new issues and preparing and testing disaster recovery plans."[69] While no measures can completely guarantee the prevention of an attack, these measures can help mitigate the damage of possible attacks. The effects of data loss or damage can also be reduced by careful backing up and insurance. Outside of formal assessments, there are various methods of reducing vulnerabilities. Two-factor authentication is a method for mitigating unauthorized access to a system or sensitive information.[70] It requires something you know (a password or PIN) and something you have (a card, dongle, cellphone, or another piece of hardware). This increases security, as an unauthorized person needs both of these to gain access. Protecting against social engineering and direct computer access (physical) attacks can only happen by non-computer means, which can be difficult to enforce, relative to the sensitivity of the information.
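The "something you have" factor described above is often a time-based one-time password (TOTP, RFC 6238), as generated by authenticator apps. A minimal sketch using only the Python standard library is shown below; the secret used in the usage note is the published RFC 6238 test key, and real deployments provision a fresh secret per user.

```python
import base64
import hmac
import struct
import time

def totp(secret_b32, at=None, digits=6, step=30):
    """Compute an RFC 6238 time-based one-time password (SHA-1 variant)."""
    key = base64.b32decode(secret_b32)
    # The moving factor is the number of `step`-second intervals since epoch.
    counter = int((time.time() if at is None else at) // step)
    msg = struct.pack(">Q", counter)
    digest = hmac.new(key, msg, "sha1").digest()
    # Dynamic truncation per RFC 4226: the low nibble of the last byte
    # selects a 4-byte window; mask the sign bit.
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)
```

With the RFC 6238 test secret (`"12345678901234567890"` base32-encoded as `"GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ"`) and time 59, the 8-digit code is the specification's published vector 94287082, which is a handy sanity check for any implementation.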
Training is often involved to help mitigate this risk by improving people's knowledge of how to protect themselves and by increasing people's awareness of threats.[71] However, even in highly disciplined environments (e.g. military organizations), social engineering attacks can still be difficult to foresee and prevent. Inoculation, derived from inoculation theory, seeks to prevent social engineering and other fraudulent tricks and traps by instilling a resistance to persuasion attempts through exposure to similar or related attempts.[72] Hardware-based or assisted computer security also offers an alternative to software-only computer security. Using devices and methods such as dongles, trusted platform modules, intrusion-aware cases, drive locks, disabling of USB ports, and mobile-enabled access may be considered more secure due to the physical access (or sophisticated backdoor access) required in order to be compromised. Each of these is covered in more detail below. One use of the term computer security refers to technology that is used to implement secure operating systems. Using secure operating systems is a good way of ensuring computer security. These are systems that have achieved certification from an external security-auditing organization; the most popular such evaluation is the Common Criteria (CC).[86] In software engineering, secure coding aims to guard against the accidental introduction of security vulnerabilities. It is also possible to create software designed from the ground up to be secure. Such systems are secure by design. Beyond this, formal verification aims to prove the correctness of the algorithms underlying a system,[87] which is important for cryptographic protocols, for example. Within computer systems, two of the main security models capable of enforcing privilege separation are access control lists (ACLs) and role-based access control (RBAC). An access-control list (ACL), with respect to a computer file system, is a list of permissions associated with an object.
An ACL specifies which users or system processes are granted access to objects, as well as what operations are allowed on given objects. Role-based access control is an approach to restricting system access to authorized users,[88][89][90] used by the majority of enterprises with more than 500 employees,[91] and can implement mandatory access control (MAC) or discretionary access control (DAC). A further approach, capability-based security, has been mostly restricted to research operating systems. Capabilities can, however, also be implemented at the language level, leading to a style of programming that is essentially a refinement of standard object-oriented design. An open-source project in this area is the E language. The end user is widely recognized as the weakest link in the security chain,[92] and it is estimated that more than 90% of security incidents and breaches involve some kind of human error.[93][94] Among the most commonly recorded forms of error and misjudgment are poor password management, sending emails containing sensitive data and attachments to the wrong recipient, the inability to recognize misleading URLs, and failure to identify fake websites and dangerous email attachments. A common mistake that users make is saving their user ID and password in their browsers to make it easier to log in to banking sites. This is a gift to attackers who have obtained access to a machine by some means. The risk may be mitigated by the use of two-factor authentication.[95] As the human component of cyber risk is particularly relevant in determining the global cyber risk[96] an organization is facing, security awareness training at all levels not only provides formal compliance with regulatory and industry mandates but is considered essential[97] in reducing cyber risk and protecting individuals and companies from the great majority of cyber threats.
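The role-based access control model described earlier, in which permissions attach to roles and users acquire permissions only through role membership, can be sketched in a few lines of Python. The role names, permission names, and users below are invented purely for illustration.

```python
# Permissions are granted to roles, never directly to users (the core
# idea of RBAC). These role and permission names are hypothetical.
ROLE_PERMISSIONS = {
    "viewer": {"read"},
    "editor": {"read", "write"},
    "admin":  {"read", "write", "delete", "manage_users"},
}

# Users are assigned one or more roles.
USER_ROLES = {
    "alice": {"admin"},
    "bob":   {"viewer", "editor"},
}

def is_authorized(user, permission):
    """A user holds a permission iff some role of theirs grants it."""
    return any(permission in ROLE_PERMISSIONS.get(role, set())
               for role in USER_ROLES.get(user, set()))
```

The indirection through roles is the point of the model: revoking a role, or changing what a role may do, updates every affected user at once, which is why the approach scales to large enterprises.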
The focus on the end user represents a profound cultural change for many security practitioners, who have traditionally approached cybersecurity exclusively from a technical perspective, and moves along the lines suggested by major security centers[98] to develop a culture of cyber awareness within the organization, recognizing that a security-aware user provides an important line of defense against cyber attacks. Related to end-user training, digital hygiene or cyber hygiene is a fundamental principle relating to information security and, as the analogy with personal hygiene shows, is the equivalent of establishing simple routine measures to minimize the risks from cyber threats. The assumption is that good cyber hygiene practices can give networked users another layer of protection, reducing the risk that one vulnerable node will be used to either mount attacks or compromise another node or network, especially from common cyberattacks.[99] Cyber hygiene should also not be mistaken for proactive cyber defence, a military term.[100] The most common acts of digital hygiene include updating malware protection, cloud back-ups and passwords, and ensuring restricted admin rights and network firewalls.[101] As opposed to a purely technology-based defense against threats, cyber hygiene mostly concerns routine measures that are technically simple to implement and mostly dependent on discipline[102] or education.[103] It can be thought of as an abstract list of tips or measures that have been demonstrated to have a positive effect on personal or collective digital security. As such, these measures can be performed by laypeople, not just security experts. Cyber hygiene relates to personal hygiene as computer viruses relate to biological viruses (or pathogens).
However, while the term computer virus was coined almost simultaneously with the creation of the first working computer viruses,[104] the term cyber hygiene is a much later invention, perhaps as late as 2000,[105] by Internet pioneer Vint Cerf. It has since been adopted by the Congress[106] and Senate of the United States,[107] the FBI,[108] EU institutions[99] and heads of state.[100] Responding to attempted security breaches is often very difficult for a variety of reasons, including: Where an attack succeeds and a breach occurs, many jurisdictions now have in place mandatory security breach notification laws. The growth in the number of computer systems and the increasing reliance upon them by individuals, businesses, industries, and governments means that there is an increasing number of systems at risk. The computer systems of financial regulators and financial institutions like the U.S. Securities and Exchange Commission, SWIFT, investment banks, and commercial banks are prominent hacking targets for cybercriminals interested in manipulating markets and making illicit gains.[109] Websites and apps that accept or store credit card numbers, brokerage accounts, and bank account information are also prominent hacking targets, because of the potential for immediate financial gain from transferring money, making purchases, or selling the information on the black market.[110] In-store payment systems and ATMs have also been tampered with in order to gather customer account data and PINs.
The UCLA Internet Report: Surveying the Digital Future (2000) found that the privacy of personal data created barriers to online sales and that more than nine out of 10 internet users were somewhat or very concerned about credit card security.[111] The most common web technologies for improving security between browsers and websites are SSL (Secure Sockets Layer) and its successor TLS (Transport Layer Security); together with identity management and authentication services and domain name services, they allow companies and consumers to engage in secure communications and commerce. Several versions of SSL and TLS are commonly used today in applications such as web browsing, e-mail, internet faxing, instant messaging, and VoIP (voice-over-IP). There are various interoperable implementations of these technologies, including at least one implementation that is open source. Open source allows anyone to view the application's source code, and look for and report vulnerabilities. The credit card companies Visa and MasterCard cooperated to develop the secure EMV chip which is embedded in credit cards. Further developments include the Chip Authentication Program, where banks give customers hand-held card readers to perform online secure transactions. Other developments in this arena include technology such as Instant Issuance, which has enabled shopping mall kiosks acting on behalf of banks to issue on-the-spot credit cards to interested customers. Computers control functions at many utilities, including coordination of telecommunications, the power grid, nuclear power plants, and valve opening and closing in water and gas networks. The Internet is a potential attack vector for such machines if connected, but the Stuxnet worm demonstrated that even equipment controlled by computers not connected to the Internet can be vulnerable.
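Returning to the TLS protections discussed above: what makes the browser-to-website channel trustworthy is certificate verification against a trusted CA plus hostname checking. A sketch of a hardened TLS client using Python's standard `ssl` module is shown below; the helper function name is invented for the example.

```python
import socket
import ssl

# The default context already enables certificate verification
# (CERT_REQUIRED) and hostname checking; we additionally refuse
# protocol versions older than TLS 1.2.
context = ssl.create_default_context()
context.minimum_version = ssl.TLSVersion.TLSv1_2

def open_tls(host, port=443):
    """Return a TLS-wrapped socket to `host`; the handshake fails if the
    server's certificate does not chain to a trusted CA or does not
    match the hostname."""
    raw = socket.create_connection((host, port), timeout=5)
    return context.wrap_socket(raw, server_hostname=host)
```

A call such as `open_tls("example.com")` (which requires network access) performs the full handshake; passing the hostname via `server_hostname` is what enables both SNI and the hostname check.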
In 2014, the Computer Emergency Readiness Team, a division of the Department of Homeland Security, investigated 79 hacking incidents at energy companies.[112] The aviation industry is very reliant on a series of complex systems which could be attacked.[113] A simple power outage at one airport can cause repercussions worldwide,[114] much of the system relies on radio transmissions which could be disrupted,[115] and controlling aircraft over oceans is especially dangerous because radar surveillance only extends 175 to 225 miles offshore.[116] There is also potential for attack from within an aircraft.[117] Implementing fixes in aerospace systems poses a unique challenge because efficient air transportation is heavily affected by weight and volume. Improving security by adding physical devices to airplanes could increase their unloaded weight, and could potentially reduce cargo or passenger capacity.[118] In Europe, with the Pan-European Network Service[119] and NewPENS,[120] and in the US with the NextGen program,[121] air navigation service providers are moving to create their own dedicated networks. Many modern passports are now biometric passports, containing an embedded microchip that stores a digitized photograph and personal information such as name, gender, and date of birth. In addition, more countries[which?] are introducing facial recognition technology to reduce identity-related fraud. The introduction of the ePassport has assisted border officials in verifying the identity of the passport holder, thus allowing for quick passenger processing.[122] Plans are under way in the US, the UK, and Australia to introduce SmartGate kiosks with both retina and fingerprint recognition technology.[123] The airline industry is moving from the use of traditional paper tickets towards the use of electronic tickets (e-tickets). These have been made possible by advances in online credit card transactions in partnership with the airlines.
Long-distance bus companies[which?] are also switching over to e-ticketing transactions today. The consequences of a successful attack range from loss of confidentiality to loss of system integrity, air traffic control outages, loss of aircraft, and even loss of life. Desktop computers and laptops are commonly targeted to gather passwords or financial account information, or to construct a botnet to attack another target. Smartphones, tablet computers, smart watches, and other mobile devices such as quantified-self devices like activity trackers have sensors such as cameras, microphones, GPS receivers, compasses, and accelerometers which could be exploited, and may collect personal information, including sensitive health information. WiFi, Bluetooth, and cell phone networks on any of these devices could be used as attack vectors, and sensors might be remotely activated after a successful breach.[124] The increasing number of home automation devices such as the Nest thermostat are also potential targets.[124] Today many healthcare providers and health insurance companies use the internet to provide enhanced products and services. Examples are the use of tele-health to potentially offer better quality and access to healthcare, or fitness trackers to lower insurance premiums.[citation needed] Patient records are increasingly being placed on secure in-house networks, alleviating the need for extra storage space.[125] Large corporations are common targets. In many cases attacks are aimed at financial gain through identity theft and involve data breaches.
Examples include the loss of millions of clients' credit card and financial details byHome Depot,[126]Staples,[127]Target Corporation,[128]andEquifax.[129] Medical records have been targeted for use in general identity theft, health insurance fraud, and the impersonation of patients to obtain prescription drugs for recreational purposes or resale.[130]Although cyber threats continue to increase, 62% of all organizations did not increase security training for their business in 2015.[131] Not all attacks are financially motivated, however: security firmHBGary Federalsuffered a series of attacks in 2011 fromhacktivistgroupAnonymousin retaliation for the firm's CEO claiming to have infiltrated their group,[132][133]andSony Pictureswashacked in 2014with the apparent dual motive of embarrassing the company through data leaks and crippling the company by wiping workstations and servers.[134][135] Vehicles are increasingly computerized, with engine timing,cruise control,anti-lock brakes, seat belt tensioners, door locks,airbagsandadvanced driver-assistance systemson many models. Additionally,connected carsmay use WiFi and Bluetooth to communicate with onboard consumer devices and the cell phone network.[136]Self-driving carsare expected to be even more complex. All of these systems carry some security risks, and such issues have gained wide attention.[137][138][139] Simple examples of risk include a maliciouscompact discbeing used as an attack vector,[140]and the car's onboard microphones being used for eavesdropping.
However, if access is gained to a car's internalcontroller area network, the danger is much greater[136]– and in a widely publicized 2015 test, hackers remotely carjacked a vehicle from 10 miles away and drove it into a ditch.[141][142] Manufacturers are reacting in numerous ways, withTeslain 2016 pushing out some security fixesover the airinto its cars' computer systems.[143]In the area of autonomous vehicles, in September 2016 theUnited States Department of Transportationannounced some initial safety standards, and called for states to come up with uniform policies.[144][145][146] Additionally, e-Drivers' licenses are being developed using the same technology. For example, Mexico's licensing authority (ICV) has used a smart card platform to issue the first e-Drivers' licenses to the city ofMonterrey, in the state ofNuevo León.[147] Shipping companies[148]have adoptedRFID(Radio Frequency Identification) technology as an efficient, digitally secure tracking device. Unlike abarcode, RFID can be read up to 20 feet away. RFID is used byFedEx[149]andUPS.[150] Government andmilitarycomputer systems are commonly attacked by activists[151][152][153]and foreign powers.[154][155][156][157]Targets also include local and regional government infrastructure such astraffic lightcontrols, police and intelligence agency communications,personnel records, and student records.[158] TheFBI,CIA, andPentagonall utilize secure controlled access technology for their buildings. However, the use of this form of technology is spreading into the entrepreneurial world. More and more companies are taking advantage of the development of digitally secure controlled access technology.
GE's ACUVision, for example, offers a single panel platform for access control, alarm monitoring and digital recording.[159] TheInternet of things(IoT) is the network of physical objects such as devices, vehicles, and buildings that areembeddedwithelectronics,software,sensors, andnetwork connectivitythat enables them to collect and exchange data.[160]Concerns have been raised that this is being developed without appropriate consideration of the security challenges involved.[161][162] While the IoT creates opportunities for more direct integration of the physical world into computer-based systems,[163][164]it also provides opportunities for misuse. In particular, as the Internet of Things spreads widely, cyberattacks are likely to become an increasingly physical (rather than simply virtual) threat.[165]If a front door's lock is connected to the Internet, and can be locked/unlocked from a phone, then a criminal could enter the home at the press of a button from a stolen or hacked phone. People could stand to lose much more than their credit card numbers in a world controlled by IoT-enabled devices. Thieves have also used electronic means to circumvent non-Internet-connected hotel door locks.[166] An attack aimed at physical infrastructure or human lives is often called a cyber-kinetic attack. As IoT devices and appliances become more widespread, the prevalence and potential damage of cyber-kinetic attacks can increase substantially. 
Medical deviceshave either been successfully attacked or had potentially deadly vulnerabilities demonstrated, including both in-hospital diagnostic equipment[167]and implanted devices includingpacemakers[168]andinsulin pumps.[169]There are many reports of hospitals and hospital organizations getting hacked, includingransomwareattacks,[170][171][172][173]Windows XPexploits,[174][175]viruses,[176][177]and data breaches of sensitive data stored on hospital servers.[178][171][179][180]On 28 December 2016 the USFood and Drug Administrationreleased its recommendations for how medicaldevice manufacturersshould maintain the security of Internet-connected devices – but no structure for enforcement.[181][182] In distributed generation systems, the risk of a cyber attack is real, according toDaily Energy Insider. An attack could cause a loss of power in a large area for a long period of time, and such an attack could have just as severe consequences as a natural disaster. The District of Columbia is considering creating a Distributed Energy Resources (DER) Authority within the city, with the goal being for customers to have more insight into their own energy use and giving the local electric utility,Pepco, the chance to better estimate energy demand. The D.C. proposal, however, would "allow third-party vendors to create numerous points of energy distribution, which could potentially create more opportunities for cyber attackers to threaten the electric grid."[183] Perhaps the most widely known digitally secure telecommunication device is theSIM(Subscriber Identity Module) card, a device that is embedded in most of the world's cellular devices before any service can be obtained. The SIM card is just the beginning of this digitally secure environment. The Smart Card Web Servers draft standard (SCWS) defines the interfaces to anHTTP serverin asmart card.[184]Tests are being conducted to secure OTA ("over-the-air") payment and credit card information from and to a mobile phone. 
Combination SIM/DVD devices are being developed through Smart Video Card technology which embeds aDVD-compliantoptical discinto the card body of a regular SIM card. Other telecommunication developments involving digital security includemobile signatures, which use the embedded SIM card to generate a legally bindingelectronic signature. Serious financial damage has been caused bysecurity breaches, but because there is no standard model for estimating the cost of an incident, the only data available is that which is made public by the organizations involved. "Several computer security consulting firms produce estimates of total worldwide losses attributable tovirusand worm attacks and to hostile digital acts in general. The 2003 loss estimates by these firms range from $13 billion (worms and viruses only) to $226 billion (for all forms of covert attacks). The reliability of these estimates is often challenged; the underlying methodology is basically anecdotal."[185] However, reasonable estimates of the financial cost of security breaches can actually help organizations make rational investment decisions. According to the classicGordon-Loeb Modelanalyzing the optimal investment level in information security, one can conclude that the amount a firm spends to protect information should generally be only a small fraction of the expected loss (i.e., theexpected valueof the loss resulting from a cyber/informationsecurity breach).[186] As withphysical security, the motivations for breaches of computer security vary between attackers. Some are thrill-seekers orvandals, some are activists, others are criminals looking for financial gain. State-sponsored attackers are now common and well resourced but started with amateurs such as Markus Hess who hacked for theKGB, as recounted byClifford StollinThe Cuckoo's Egg. 
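The Gordon-Loeb result mentioned above can be illustrated numerically: under the model, rational security spending should never exceed 1/e (roughly 37%) of the expected loss from a breach. A minimal sketch in Python, with a hypothetical function name and purely illustrative figures:

```python
import math

def gordon_loeb_cap(vulnerability: float, potential_loss: float) -> float:
    """Upper bound on rational security investment under the Gordon-Loeb
    model: at most 1/e (~36.8%) of the expected loss v * L."""
    expected_loss = vulnerability * potential_loss
    return expected_loss / math.e

# Example: a 50% breach probability against a $1,000,000 potential loss
# gives an expected loss of $500,000; spending should stay below ~$183,940.
cap = gordon_loeb_cap(0.5, 1_000_000)
print(round(cap, 2))  # 183939.72
```

Gordon and Loeb derive this 1/e bound for broad classes of security breach probability functions; the exact optimal investment depends on the specific function assumed.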
Attackers' motivations vary across all types of attacks, from pleasure to political goals.[15]For example, hacktivists may target a company or organization that carries out activities they disagree with, aiming to create bad publicity for the company by crashing its website. Highly capable hackers, often with larger backing or state sponsorship, may attack based on the demands of their financial backers. Such attackers are more likely to mount serious attacks; an example was the2015 Ukraine power grid hack, which reportedly used spear-phishing, destruction of files, and denial-of-service attacks to carry out the full attack.[187][188] Additionally, recent attacker motivations can be traced back to extremist organizations seeking to gain political advantage or disrupt social agendas.[189]The growth of the internet, mobile technologies, and inexpensive computing devices has led to a rise in capabilities, but also to increased risk for environments deemed vital to operations. All critical targeted environments are susceptible to compromise, and this has led to a series of proactive studies on how to mitigate the risk by taking into consideration the motivations of these types of actors. Several stark differences exist between the motivation of hackers and that ofnation stateactors seeking to attack based on an ideological preference.[190] A key aspect of threat modeling for any system is identifying the motivations behind potential attacks and the individuals or groups likely to carry them out. The level and detail of security measures will differ based on the specific system being protected.
For instance, a home personal computer, a bank, and a classified military network each face distinct threats, despite using similar underlying technologies.[191] Computer security incident managementis an organized approach to addressing and managing the aftermath of a computer security incident or compromise with the goal of preventing a breach or thwarting a cyberattack. An incident that is not identified and managed at the time of intrusion typically escalates to a more damaging event such as adata breachor system failure. The intended outcome of a computer security incident response plan is to contain the incident, limit damage and assist recovery to business as usual. Responding to compromises quickly can mitigate exploited vulnerabilities, restore services and processes and minimize losses.[192]Incident response planning allows an organization to establish a series of best practices to stop an intrusion before it causes damage. Typical incident response plans contain a set of written instructions that outline the organization's response to a cyberattack. Without a documented plan in place, an organization may not successfully detect an intrusion or compromise and stakeholders may not understand their roles, processes and procedures during an escalation, slowing the organization's response and resolution. There are four key components of a computer security incident response plan: preparation; detection and analysis; containment, eradication, and recovery; and post-incident activity. Some illustrative examples of different types of computer security breaches are given below. In 1988, 60,000 computers were connected to the Internet, and most were mainframes, minicomputers and professional workstations.
On 2 November 1988, many started to slow down, because they were running a malicious code that demanded processor time and that spread itself to other computers – the first internetcomputer worm.[194]The software was traced back to 23-year-oldCornell Universitygraduate studentRobert Tappan Morriswho said "he wanted to count how many machines were connected to the Internet".[194] In 1994, over a hundred intrusions were made by unidentified crackers into theRome Laboratory, the US Air Force's main command and research facility. Usingtrojan horses, hackers were able to obtain unrestricted access to Rome's networking systems and remove traces of their activities. The intruders were able to obtain classified files, such as air tasking order systems data and furthermore able to penetrate connected networks ofNational Aeronautics and Space Administration's Goddard Space Flight Center, Wright-Patterson Air Force Base, some Defense contractors, and other private sector organizations, by posing as a trusted Rome center user.[195] In early 2007, American apparel and home goods companyTJXannounced that it was the victim of anunauthorized computer systems intrusion[196]and that the hackers had accessed a system that stored data oncredit card,debit card,check, and merchandise return transactions.[197] In 2010, the computer worm known asStuxnetreportedly ruined almost one-fifth of Iran'snuclear centrifuges.[198]It did so by disrupting industrialprogrammable logic controllers(PLCs) in a targeted attack. This is generally believed to have been launched by Israel and the United States to disrupt Iran's nuclear program[199][200][201][202]– although neither has publicly admitted this. In early 2013, documents provided byEdward Snowdenwere published byThe Washington PostandThe Guardian[203][204]exposing the massive scale ofNSAglobal surveillance. 
There were also indications that the NSA may have inserted a backdoor in aNISTstandard for encryption.[205]This standard was later withdrawn due to widespread criticism.[206]The NSA was additionally revealed to have tapped the links betweenGoogle's data centers.[207] A Ukrainian hacker known asRescatorbroke intoTarget Corporationcomputers in 2013, stealing roughly 40 million credit cards,[208]and thenHome Depotcomputers in 2014, stealing between 53 and 56 million credit card numbers.[209]Warnings were delivered to both corporations but were ignored; physical security breaches usingself checkout machinesare believed to have played a large role. "The malware utilized is absolutely unsophisticated and uninteresting," says Jim Walter, director of threat intelligence operations at security technology company McAfee – meaning that the heists could have easily been stopped by existingantivirus softwarehad administrators responded to the warnings. The size of the thefts has resulted in major attention from state and federal United States authorities, and the investigation is ongoing. In April 2015, theOffice of Personnel Managementdiscovered it had been hackedmore than a year earlier in a data breach, resulting in the theft of approximately 21.5 million personnel records handled by the office.[210]The Office of Personnel Management hack has been described by federal officials as among the largest breaches of government data in the history of the United States.[211]Data targeted in the breach includedpersonally identifiable informationsuch asSocial Security numbers, names, dates and places of birth, addresses, and fingerprints of current and former government employees as well as anyone who had undergone a government background check.[212][213]It is believed the hack was perpetrated by Chinese hackers.[214] In July 2015, a hacker group known as The Impact Team successfully breached the extramarital relationship website Ashley Madison, created by Avid Life Media.
The group claimed that they had taken not only company data but user data as well. After the breach, The Impact Team dumped emails from the company's CEO to prove their point, and threatened to dump customer data unless the website was taken down permanently.[215]When Avid Life Media did not take the site offline, the group released two more compressed files, one 9.7GB and the second 20GB. After the second data dump, Avid Life Media CEO Noel Biderman resigned, but the website remained in operation. In June 2021, a cyberattack took down the largest fuel pipeline in the U.S. and led to shortages across the East Coast.[216] International legal issues of cyber attacks are complicated in nature. There is no global base of common rules to judge, and eventually punish, cybercrimes and cybercriminals – and where security firms or agencies do locate the cybercriminal behind the creation of a particular piece ofmalwareor form ofcyber attack, often the local authorities cannot take action due to lack of laws under which to prosecute.[217][218]Provingattribution for cybercrimes and cyberattacksis also a major problem for all law enforcement agencies. "Computer virusesswitch from one country to another, from one jurisdiction to another – moving around the world, using the fact that we don't have the capability to globally police operations like this. So the Internet is as if someone [had] given free plane tickets to all the online criminals of the world."[217]The use of techniques such asdynamic DNS,fast fluxandbullet proof serversadds to the difficulty of investigation and enforcement. The role of the government is to makeregulationsto force companies and organizations to protect their systems, infrastructure and information from any cyberattacks, but also to protect its own national infrastructure such as the nationalpower-grid.[219] The government's regulatory role incyberspaceis complicated.
For some, cyberspace was seen as avirtual spacethat was to remain free of government intervention, as can be seen in many of today's libertarianblockchainandbitcoindiscussions.[220] Many government officials and experts think that the government should do more and that there is a crucial need for improved regulation, mainly due to the failure of the private sector to solve the cybersecurity problem efficiently.R. Clarkesaid during a panel discussion at theRSA Security ConferenceinSan Franciscothat he believes the "industry only responds when you threaten regulation. If the industry doesn't respond (to the threat), you have to follow through."[221]On the other hand, executives from the private sector agree that improvements are necessary, but think that government intervention would affect their ability to innovate efficiently. Daniel R. McCarthy analyzed this public-private partnership in cybersecurity and reflected on the role of cybersecurity in the broader constitution of political order.[222] On 22 May 2020, the UN Security Council held its second ever informal meeting on cybersecurity to focus on cyber challenges tointernational peace. According to UN Secretary-GeneralAntónio Guterres, new technologies are too often used to violate rights.[223] Many different teams and organizations exist, including: On 14 April 2016, theEuropean Parliamentand theCouncil of the European Unionadopted theGeneral Data Protection Regulation(GDPR). The GDPR, which came into force on 25 May 2018, grants individuals within the European Union (EU) and the European Economic Area (EEA) the right to theprotection of personal data. The regulation requires that any entity that processes personal data incorporate data protection by design and by default. It also requires that certain organizations appoint a Data Protection Officer (DPO). The IT security associationTeleTrusT, an international competence network for IT security, has existed inGermanysince June 1986.
Most countries have their own computer emergency response team to protect network security. Since 2010, Canada has had a cybersecurity strategy.[229][230]This functions as a counterpart document to the National Strategy and Action Plan for Critical Infrastructure.[231]The strategy has three main pillars: securing government systems, securing vital private cyber systems, and helping Canadians to be secure online.[230][231]There is also a Cyber Incident Management Framework to provide a coordinated response in the event of a cyber incident.[232][233] TheCanadian Cyber Incident Response Centre(CCIRC) is responsible for mitigating and responding to threats to Canada's critical infrastructure and cyber systems. It provides support to mitigate cyber threats, technical support to respond to and recover from targeted cyber attacks, and provides online tools for members of Canada's critical infrastructure sectors.[234]It posts regular cybersecurity bulletins[235]and operates an online reporting tool where individuals and organizations can report a cyber incident.[236] To inform the general public on how to protect themselves online, Public Safety Canada has partnered with STOP.THINK.CONNECT, a coalition of non-profit, private sector, and government organizations,[237]and launched the Cyber Security Cooperation Program.[238][239]They also run the GetCyberSafe portal for Canadian citizens, and Cyber Security Awareness Month during October.[240] Public Safety Canada aims to begin an evaluation of Canada's cybersecurity strategy in early 2015.[231] TheAustralian federal governmentannounced an $18.2 million investment to fortify thecybersecurityresilience of small and medium enterprises (SMEs) and enhance their capabilities in responding to cyber threats. This financial backing is an integral component of the2023-2030 Australian Cyber Security Strategy, slated for release the same week.
A substantial allocation of $7.2 million is earmarked for the establishment of a voluntary cyber health check program, enabling businesses to conduct a comprehensive, tailored self-assessment of their cybersecurity maturity. The health check serves as a diagnostic tool, allowing enterprises to gauge their alignment withAustralia's cyber security regulations. It also gives them access to a repository of educational resources and materials, helping them build the skills necessary for an improved cybersecurity posture. The initiative was jointly announced by Minister for Cyber SecurityClare O'Neiland Minister for Small BusinessJulie Collins.[241] Some provisions for cybersecurity have been incorporated into rules framed under the Information Technology Act 2000.[242] TheNational Cyber Security Policy 2013is a policy framework of the Ministry of Electronics and Information Technology (MeitY) which aims to protect the public and private infrastructure from cyberattacks, and safeguard "information, such as personal information (of web users), financial and banking information and sovereign data".CERT-Inis the nodal agency which monitors cyber threats in the country. The post ofNational Cyber Security Coordinatorhas also been created in thePrime Minister's Office (PMO). The Indian Companies Act 2013 has also introduced cyber law and cybersecurity obligations for Indian directors. Some provisions for cybersecurity have been incorporated into rules framed under the Information Technology Act 2000 Update in 2013.[243] Following cyberattacks in the first half of 2013, when the government, news media, television stations, and bank websites were compromised, the national government committed to the training of 5,000 new cybersecurity experts by 2017.
The South Korean government blamed its northern counterpart for these attacks, as well as incidents that occurred in 2009, 2011,[244]and 2012, but Pyongyang denies the accusations.[245] With the release of its National Cyber Strategy, theUnited Stateshas its first fully formed cyber plan in 15 years.[246]In this policy, the US says it will: protect the country by keeping networks, systems, functions, and data safe; promote American wealth by building a strong digital economy and encouraging strong domestic innovation; preserve peace and safety by making it easier for the US, working with friends and partners, to stop malicious uses of computer tools; and increase the United States' impact around the world to support the main ideas behind an open, safe, reliable, and compatible Internet.[247] The new U.S. cyber strategy[248]seeks to allay some of those concerns by promoting responsible behavior incyberspace, urging nations to adhere to a set of norms, both through international law and voluntary standards. It also calls for specific measures to harden U.S. government networks from attacks, like the June 2015 intrusion into theU.S. Office of Personnel Management(OPM), which compromised the records of about 4.2 million current and former government employees. And the strategy calls for the U.S. to continue to name and shame bad cyber actors, calling them out publicly for attacks when possible, along with the use of economic sanctions and diplomatic pressure.[249] The key legislation is the 1986Computer Fraud and Abuse Act, codified at18 U.S.C.§ 1030. It prohibits unauthorized access or damage toprotected computersas defined in18 U.S.C.§ 1030(e)(2). Various other measures have been proposed,[250][251]but none have succeeded. In 2013,executive order13636Improving Critical Infrastructure Cybersecuritywas signed, which prompted the creation of theNIST Cybersecurity Framework.
In response to theColonial Pipeline ransomware attack[252]PresidentJoe Bidensigned Executive Order 14028[253]on May 12, 2021, to increase software security standards for sales to the government, tighten detection and security on existing systems, improve information sharing and training, establish a Cyber Safety Review Board, and improve incident response. TheGeneral Services Administration(GSA) has[when?]standardized thepenetration testservice as a pre-vetted support service, to rapidly address potential vulnerabilities, and stop adversaries before they impact US federal, state and local governments. These services are commonly referred to as Highly Adaptive Cybersecurity Services (HACS). TheDepartment of Homeland Securityhas a dedicated division responsible for the response system,risk managementprogram and requirements for cybersecurity in the United States called theNational Cyber Security Division.[254][255]The division is home to US-CERT operations and the National Cyber Alert System.[255]The National Cybersecurity and Communications Integration Center brings together government organizations responsible for protecting computer networks and networked infrastructure.[256] The third priority of the FBI is to: "Protect the United States against cyber-based attacks and high-technology crimes",[257]and they, along with theNational White Collar Crime Center(NW3C), and theBureau of Justice Assistance(BJA) are part of the multi-agency task force, TheInternet Crime Complaint Center, also known as IC3.[258] In addition to its own specific duties, the FBI participates alongside non-profit organizations such asInfraGard.[259][260] TheComputer Crime and Intellectual Property Section(CCIPS) operates in theUnited States Department of Justice Criminal Division. 
The CCIPS is in charge of investigatingcomputer crimeandintellectual propertycrime and is specialized in the search and seizure ofdigital evidencein computers andnetworks.[261]In 2017, CCIPS published A Framework for a Vulnerability Disclosure Program for Online Systems to help organizations "clearly describe authorized vulnerability disclosure and discovery conduct, thereby substantially reducing the likelihood that such described activities will result in a civil or criminal violation of law under the Computer Fraud and Abuse Act (18 U.S.C. § 1030)."[262] TheUnited States Cyber Command, also known as USCYBERCOM, "has the mission to direct, synchronize, and coordinate cyberspace planning and operations to defend and advance national interests in collaboration with domestic and international partners."[263]It has no role in the protection of civilian networks.[264][265] The U.S.Federal Communications Commission's role in cybersecurity is to strengthen the protection of critical communications infrastructure, to assist in maintaining the reliability of networks during disasters, to aid in swift recovery after, and to ensure that first responders have access to effective communications services.[266] TheFood and Drug Administrationhas issued guidance for medical devices,[267]and theNational Highway Traffic Safety Administration[268]is concerned with automotive cybersecurity. 
After being criticized by theGovernment Accountability Office,[269]and following successful attacks on airports and claimed attacks on airplanes, theFederal Aviation Administrationhas devoted funding to securing systems on board the planes of private manufacturers, and theAircraft Communications Addressing and Reporting System.[270]Concerns have also been raised about the futureNext Generation Air Transportation System.[271] The US Department of Defense (DoD) issued DoD Directive 8570 in 2004, supplemented by DoD Directive 8140, requiring all DoD employees and all DoD contract personnel involved in information assurance roles and activities to earn and maintain various industry Information Technology (IT) certifications in an effort to ensure that all DoD personnel involved in network infrastructure defense have minimum levels of IT industry recognized knowledge, skills and abilities (KSA). Andersson and Reimers (2019) report these certifications range from CompTIA's A+ and Security+ through (ISC)²'s CISSP, among others.[272] Computer emergency response teamis a name given to expert groups that handle computer security incidents. In the US, two distinct organizations exist, although they do work closely together. In the context ofU.S. nuclear power plants, theU.S. Nuclear Regulatory Commission (NRC)outlines cybersecurity requirements under10 CFR Part 73, specifically in §73.54.[274] TheNuclear Energy Institute's NEI 08-09 document,Cyber Security Plan for Nuclear Power Reactors,[275]outlines a comprehensive framework forcybersecurityin thenuclear power industry. Drafted with input from theU.S. NRC, this guideline is instrumental in aidinglicenseesto comply with theCode of Federal Regulations (CFR), which mandates robust protection of digital computers and equipment and communications systems at nuclear power plants against cyber threats.[276] There is growing concern that cyberspace will become the next theater of warfare.
As Mark Clayton fromThe Christian Science Monitorwrote in a 2015 article titled "The New Cyber Arms Race": In the future, wars will not just be fought by soldiers with guns or with planes that drop bombs. They will also be fought with the click of a mouse a half a world away that unleashes carefully weaponized computer programs that disrupt or destroy critical industries like utilities, transportation, communications, and energy. Such attacks could also disable military networks that control the movement of troops, the path of jet fighters, the command and control of warships.[277] This has led to new terms such ascyberwarfareandcyberterrorism. TheUnited States Cyber Commandwas created in 2009[278]and many other countrieshave similar forces. There are a few critical voices that question whether cybersecurity is as significant a threat as it is made out to be.[279][280][281] Cybersecurity is a fast-growing field ofITconcerned with reducing organizations' risk of hack or data breaches.[282]According to research from the Enterprise Strategy Group, 46% of organizations say that they have a "problematic shortage" of cybersecurity skills in 2016, up from 28% in 2015.[283]Commercial, government and non-governmental organizations all employ cybersecurity professionals. 
The fastest increases in demand for cybersecurity workers are in industries managing increasing volumes of consumer data such as finance, health care, and retail.[284]However, the use of the termcybersecurityis more prevalent in government job descriptions.[285] Typical cybersecurity job titles and descriptions include:[286] Student programs are also available for people interested in beginning a career in cybersecurity.[290][291]Meanwhile, a flexible and effective option for information security professionals of all experience levels to keep studying is online security training, including webcasts.[292][293]A wide range of certified courses are also available.[294] In the United Kingdom, a nationwide set of cybersecurity forums, known as theU.K Cyber Security Forum, was established, supported by the Government's cybersecurity strategy[295]in order to encourage start-ups and innovation and to address the skills gap[296]identified by theU.K Government. In Singapore, theCyber Security Agencyhas issued a Singapore Operational Technology (OT) Cybersecurity Competency Framework (OTCCF). The framework defines emerging cybersecurity roles in Operational Technology. The OTCCF was endorsed by theInfocomm Media Development Authority(IMDA). It outlines the different OT cybersecurity job positions as well as the technical skills and core competencies necessary. It also depicts the many career paths available, including vertical and lateral advancement opportunities.[297] The following terms used with regard to computer security are explained below: Since theInternet's arrival and with the digital transformation initiated in recent years, the notion of cybersecurity has become a familiar subject in both our professional and personal lives. Cybersecurity and cyber threats have been consistently present for the last 60 years of technological change.
In the 1970s and 1980s, computer security was mainly limited to academia until the conception of the Internet, where, with increased connectivity, computer viruses and network intrusions began to take off. After the spread of viruses in the 1990s, the 2000s marked the institutionalization of organized attacks such as distributed denial of service.[301] This led to the formalization of cybersecurity as a professional discipline.[302] The April 1967 session organized by Willis Ware at the Spring Joint Computer Conference, and the later publication of the Ware Report, were foundational moments in the history of the field of computer security.[303] Ware's work straddled the intersection of material, cultural, political, and social concerns.[303] A 1977 NIST publication[304] introduced the CIA triad of confidentiality, integrity, and availability as a clear and simple way to describe key security goals.[305] While still relevant, many more elaborate frameworks have since been proposed.[306][307] However, in the 1970s and 1980s, there were no grave computer threats because computers and the internet were still developing, and security threats were easily identifiable. More often, threats came from malicious insiders who gained unauthorized access to sensitive documents and files. Although malware and network breaches existed during the early years, attackers did not use them for financial gain. By the second half of the 1970s, established computer firms like IBM started offering commercial access control systems and computer security software products.[308] One of the earliest examples of an attack on a computer network was the computer worm Creeper written by Bob Thomas at BBN, which propagated through the ARPANET in 1971.[309] The program was purely experimental in nature and carried no malicious payload. 
A later program, Reaper, was created by Ray Tomlinson in 1972 and used to destroy Creeper.[citation needed] Between September 1986 and June 1987, a group of German hackers performed the first documented case of cyber espionage.[310] The group hacked into American defense contractors, universities, and military base networks and sold gathered information to the Soviet KGB. The group was led by Markus Hess, who was arrested on 29 June 1987. He was convicted of espionage (along with two co-conspirators) on 15 Feb 1990. In 1988, one of the first computer worms, called the Morris worm, was distributed via the Internet. It gained significant mainstream media attention.[311] Netscape started developing the protocol SSL, shortly after the National Center for Supercomputing Applications (NCSA) launched Mosaic 1.0, the first web browser, in 1993.[312][313] Netscape had SSL version 1.0 ready in 1994, but it was never released to the public due to many serious security vulnerabilities.[312] However, in 1995, Netscape launched Version 2.0.[314] The National Security Agency (NSA) is responsible for the protection of U.S. information systems and also for collecting foreign intelligence.[315] The agency analyzes commonly used software and system configurations to find security flaws, which it can use for offensive purposes against competitors of the United States.[316] NSA contractors created and sold click-and-shoot attack tools to US agencies and close allies, but eventually, the tools made their way to foreign adversaries.[317] In 2016, the NSA's own hacking tools were hacked, and they have been used by Russia and North Korea.[citation needed] NSA employees and contractors have been recruited at high salaries by adversaries, anxious to compete in cyberwarfare.[citation needed] In 2007, the United States and Israel began exploiting security flaws in the Microsoft Windows operating system to attack and damage equipment used in Iran to refine nuclear materials. 
Iran responded by heavily investing in its own cyberwarfare capability, which it began using against the United States.[316]
https://en.wikipedia.org/wiki/Computer_security
In computer science, session hijacking, sometimes also known as cookie hijacking, is the exploitation of a valid computer session – sometimes also called a session key – to gain unauthorized access to information or services in a computer system. In particular, it is used to refer to the theft of a magic cookie used to authenticate a user to a remote server. It has particular relevance to web developers, as the HTTP cookies used to maintain a session on many websites can be easily stolen by an attacker using an intermediary computer or with access to the saved cookies on the victim's computer (see HTTP cookie theft). After successfully stealing appropriate session cookies an adversary might use the Pass the Cookie technique to perform session hijacking. Cookie hijacking is commonly used against client authentication on the internet. Modern web browsers use cookie protection mechanisms to protect the web from being attacked.[1] A popular method is using source-routed IP packets. This allows an attacker at point B on the network to participate in a conversation between A and C by encouraging the IP packets to pass through B's machine. If source-routing is turned off, the attacker can use "blind" hijacking, whereby it guesses the responses of the two machines. Thus, the attacker can send a command, but can never see the response. However, a common command would be to set a password allowing access from elsewhere on the net. An attacker can also be "inline" between A and C using a sniffing program to watch the conversation. This is known as a "man-in-the-middle attack". HTTP protocol versions 0.8 and 0.9 lacked cookies and other features necessary for session hijacking. Version 0.9beta of Mosaic Netscape, released on October 13, 1994, supported cookies. Early versions of HTTP 1.0 did have some security weaknesses relating to session hijacking, but they were difficult to exploit due to the vagaries of most early HTTP 1.0 servers and browsers. 
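The core weakness described above can be sketched in a few lines: a server that authenticates requests solely by a session identifier cannot distinguish the legitimate user from an attacker who has obtained that identifier. The session store and function names below are hypothetical, chosen only to illustrate the mechanism.

```python
import secrets

# Hypothetical in-memory session store: session id -> username.
sessions = {}

def log_in(username: str) -> str:
    """Issue a fresh, unguessable session id after authentication."""
    session_id = secrets.token_hex(16)
    sessions[session_id] = username
    return session_id

def handle_request(session_id: str) -> str:
    """Authenticate a request purely by its session id."""
    user = sessions.get(session_id)
    if user is None:
        return "401 Unauthorized"
    return f"200 OK: acting as {user}"

# The victim logs in and receives a session cookie.
victim_cookie = log_in("alice")

# An attacker who obtains the cookie (via sniffing, XSS, or malware)
# is indistinguishable from the victim to this server.
stolen_cookie = victim_cookie
print(handle_request(stolen_cookie))  # 200 OK: acting as alice
```

This is why session identifiers must be unguessable, transmitted only over encrypted channels, and ideally bound to additional context (such as re-authentication for sensitive actions).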
As HTTP 1.0 has been designated a fallback for HTTP 1.1 since the early 2000s, and as HTTP 1.0 servers are all essentially HTTP 1.1 servers, the session hijacking problem has evolved into a nearly permanent security risk.[2][failed verification] The introduction of supercookies and other features with the modernized HTTP 1.1 has allowed the hijacking problem to become an ongoing security problem. Webserver and browser state machine standardization has contributed to this ongoing security problem. There are four main methods used to perpetrate a session hijack: session fixation, session side jacking (sniffing), cross-site scripting, and malware that steals session cookies. After successfully acquiring appropriate session cookies, an adversary would inject the session cookie into their browser to impersonate the victim user on the website from which the session cookie was stolen.[5] Attackers often rely on specialized tools to execute session hijacking attacks. One widely used tool is Wireshark, a network protocol analyzer that allows attackers to monitor and intercept data packets on unsecured networks. If a website does not encrypt its session cookies or authentication tokens, attackers can extract them and use them to gain unauthorized access to a victim's account.[7] Another such tool is Firesheep, a Firefox extension introduced in October 2010, which demonstrated session hijacking vulnerabilities in unsecured networks by capturing unencrypted cookies from popular websites, allowing users to take over active sessions of others on the same network. The tool worked by displaying potential targets in a sidebar, enabling session access without password theft.[6] 
The websites supported includedFacebook,Twitter,Flickr,Amazon,Windows LiveandGoogle, with the ability to use scripts to add other websites.[8]Only months later, Facebook and Twitter responded by offering (and later requiring)HTTP Securethroughout.[9][10] DroidSheep is a simple Android tool for web session hijacking (sidejacking). It listens for HTTP packets sent via a wireless (802.11) network connection and extracts the session id from these packets in order to reuse them. DroidSheep can capture sessions using the libpcap library and supports: open (unencrypted) networks, WEP encrypted networks, and WPA/WPA2 encrypted networks (PSK only). This software uses libpcap and arpspoof.[11][12]The apk was made available onGoogle Playbut it has been taken down by Google. CookieCadger is a graphical Java app that automates sidejacking and replay of HTTP requests, to help identify information leakage from applications that use unencrypted GET requests. It is a cross-platform open-source utility based on theWiresharksuite which can monitor wired Ethernet, insecure Wi-Fi, or load a packet capture file for offline analysis. Cookie Cadger has been used to highlight the weaknesses of youth team sharing sites such as Shutterfly (used by AYSO soccer league) and TeamSnap.[13] CookieMonster is aman-in-the-middleexploit where a third party can gainHTTPS cookiedata when the "Encrypted Sessions Only" property is not properly set. This could allow access to sites with sensitive personal or financial information. In 2008, this could affect major websites, including Gmail, Google Docs, eBay, Netflix, CapitalOne, Expedia.[14] It is aPythonbased tool, developed by security researcher Mike Perry. Perry originally announced the vulnerability exploited by CookieMonster onBugTraqin 2007. A year later, he demonstrated CookieMonster as a proof of concept tool at Defcon 16.[15][16][17][18][19][20][21][22] Methods to prevent session hijacking include:
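Chief among them are using HTTPS for the entire session, regenerating the session id after login, and setting restrictive cookie attributes so session cookies are neither sent over plaintext HTTP nor readable by page scripts. The attribute-based hardening can be sketched with Python's standard library (the cookie name and value are assumptions for illustration):

```python
from http.cookies import SimpleCookie

# Build a session cookie with hardening attributes:
#   Secure   - only sent over HTTPS, defeating plaintext sniffing
#   HttpOnly - not readable by JavaScript, blunting XSS cookie theft
#   SameSite - withheld from most cross-site requests
cookie = SimpleCookie()
cookie["sessionid"] = "opaque-random-token"
cookie["sessionid"]["secure"] = True
cookie["sessionid"]["httponly"] = True
cookie["sessionid"]["samesite"] = "Strict"

# The string a server would emit in its Set-Cookie response header.
header = cookie["sessionid"].OutputString()
print(header)
```

Tools like Firesheep worked precisely because popular sites of the era omitted the Secure flag and served most pages over plain HTTP.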
https://en.wikipedia.org/wiki/Cookiemonster_attack
Cryptanalysis (from the Greek kryptós, "hidden", and analýein, "to analyze") refers to the process of analyzing information systems in order to understand hidden aspects of the systems.[1] Cryptanalysis is used to breach cryptographic security systems and gain access to the contents of encrypted messages, even if the cryptographic key is unknown. In addition to mathematical analysis of cryptographic algorithms, cryptanalysis includes the study of side-channel attacks that do not target weaknesses in the cryptographic algorithms themselves, but instead exploit weaknesses in their implementation. Even though the goal has been the same, the methods and techniques of cryptanalysis have changed drastically through the history of cryptography, adapting to increasing cryptographic complexity, ranging from the pen-and-paper methods of the past, through machines like the British Bombes and Colossus computers at Bletchley Park in World War II, to the mathematically advanced computerized schemes of the present. Methods for breaking modern cryptosystems often involve solving carefully constructed problems in pure mathematics, the best-known being integer factorization. In encryption, confidential information (called the "plaintext") is sent securely to a recipient by the sender first converting it into an unreadable form ("ciphertext") using an encryption algorithm. The ciphertext is sent through an insecure channel to the recipient. The recipient decrypts the ciphertext by applying an inverse decryption algorithm, recovering the plaintext. To decrypt the ciphertext, the recipient requires secret knowledge from the sender, usually a string of letters, numbers, or bits, called a cryptographic key. The concept is that even if an unauthorized person gets access to the ciphertext during transmission, without the secret key they cannot convert it back to plaintext. 
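The plaintext/ciphertext/key relationship described above can be illustrated with a toy XOR cipher. This is for illustration only: XOR with a short repeating key is trivially breakable, which is exactly what makes it a convenient target for cryptanalysis examples.

```python
def xor_cipher(data: bytes, key: bytes) -> bytes:
    # XOR each byte with the repeating key. The same function both
    # encrypts and decrypts, since (p ^ k) ^ k == p.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

plaintext = b"ATTACK AT DAWN"
key = b"SECRET"

ciphertext = xor_cipher(plaintext, key)   # sender encrypts
recovered = xor_cipher(ciphertext, key)   # recipient decrypts
assert recovered == plaintext
# Without the key, an eavesdropper holds only the unreadable ciphertext.
```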
Encryption has been used throughout history to send important military, diplomatic and commercial messages, and today is very widely used in computer networking to protect email and internet communication. The goal of cryptanalysis is for a third party, a cryptanalyst, to gain as much information as possible about the original ("plaintext"), attempting to "break" the encryption to read the ciphertext and learning the secret key so future messages can be decrypted and read.[2] A mathematical technique to do this is called a cryptographic attack. Cryptographic attacks can be characterized in a number of ways: Cryptanalytical attacks can be classified based on what type of information the attacker has available. As a basic starting point it is normally assumed that, for the purposes of analysis, the general algorithm is known; this is Shannon's Maxim "the enemy knows the system"[3] – in its turn, equivalent to Kerckhoffs's principle.[4] This is a reasonable assumption in practice – throughout history, there are countless examples of secret algorithms falling into wider knowledge, variously through espionage, betrayal and reverse engineering. (And on occasion, ciphers have been broken through pure deduction; for example, the German Lorenz cipher and the Japanese Purple code, and a variety of classical schemes):[5] Attacks can also be characterised by the resources they require. Those resources include:[6] It is sometimes difficult to predict these quantities precisely, especially when the attack is not practical to actually implement for testing. But academic cryptanalysts tend to provide at least the estimated order of magnitude of their attacks' difficulty, saying, for example, "SHA-1 collisions now 2⁵²."[7] Bruce Schneier notes that even computationally impractical attacks can be considered breaks: "Breaking a cipher simply means finding a weakness in the cipher that can be exploited with a complexity less than brute force. 
Never mind that brute-force might require 2¹²⁸ encryptions; an attack requiring 2¹¹⁰ encryptions would be considered a break...simply put, a break can just be a certificational weakness: evidence that the cipher does not perform as advertised."[8] The results of cryptanalysis can also vary in usefulness. Cryptographer Lars Knudsen (1998) classified various types of attack on block ciphers according to the amount and quality of secret information that was discovered: Academic attacks are often against weakened versions of a cryptosystem, such as a block cipher or hash function with some rounds removed. Many, but not all, attacks become exponentially more difficult to execute as rounds are added to a cryptosystem,[9] so it is possible for the full cryptosystem to be strong even though reduced-round variants are weak. Nonetheless, partial breaks that come close to breaking the original cryptosystem may mean that a full break will follow; the successful attacks on DES, MD5, and SHA-1 were all preceded by attacks on weakened versions. In academic cryptography, a weakness or a break in a scheme is usually defined quite conservatively: it might require impractical amounts of time, memory, or known plaintexts. It also might require the attacker be able to do things many real-world attackers can't: for example, the attacker may need to choose particular plaintexts to be encrypted or even to ask for plaintexts to be encrypted using several keys related to the secret key. Furthermore, it might only reveal a small amount of information, enough to prove the cryptosystem imperfect but too little to be useful to real-world attackers. 
Finally, an attack might only apply to a weakened version of cryptographic tools, like a reduced-round block cipher, as a step towards breaking the full system.[8] Cryptanalysis hascoevolvedtogether with cryptography, and the contest can be traced through thehistory of cryptography—newciphersbeing designed to replace old broken designs, and new cryptanalytic techniques invented to crack the improved schemes. In practice, they are viewed as two sides of the same coin: secure cryptography requires design against possible cryptanalysis.[citation needed] Although the actual word "cryptanalysis" is relatively recent (it was coined byWilliam Friedmanin 1920), methods for breakingcodesandciphersare much older.David Kahnnotes inThe CodebreakersthatArab scholarswere the first people to systematically document cryptanalytic methods.[10] The first known recorded explanation of cryptanalysis was given byAl-Kindi(c. 801–873, also known as "Alkindus" in Europe), a 9th-century Arabpolymath,[11][12]inRisalah fi Istikhraj al-Mu'amma(A Manuscript on Deciphering Cryptographic Messages). This treatise contains the first description of the method offrequency analysis.[13]Al-Kindi is thus regarded as the first codebreaker in history.[14]His breakthrough work was influenced byAl-Khalil(717–786), who wrote theBook of Cryptographic Messages, which contains the first use ofpermutations and combinationsto list all possibleArabicwords with and without vowels.[15] Frequency analysis is the basic tool for breaking mostclassical ciphers. In natural languages, certain letters of thealphabetappear more often than others; inEnglish, "E" is likely to be the most common letter in any sample ofplaintext. Similarly, thedigraph"TH" is the most likely pair of letters in English, and so on. Frequency analysis relies on a cipher failing to hide thesestatistics. 
For example, in asimple substitution cipher(where each letter is simply replaced with another), the most frequent letter in theciphertextwould be a likely candidate for "E". Frequency analysis of such a cipher is therefore relatively easy, provided that the ciphertext is long enough to give a reasonably representative count of the letters of the alphabet that it contains.[16] Al-Kindi's invention of the frequency analysis technique for breaking monoalphabeticsubstitution ciphers[17][18]was the most significant cryptanalytic advance until World War II. Al-Kindi'sRisalah fi Istikhraj al-Mu'ammadescribed the first cryptanalytic techniques, including some forpolyalphabetic ciphers, cipher classification, Arabic phonetics and syntax, and most importantly, gave the first descriptions on frequency analysis.[19]He also covered methods of encipherments, cryptanalysis of certain encipherments, andstatistical analysisof letters and letter combinations in Arabic.[20][13]An important contribution ofIbn Adlan(1187–1268) was onsample sizefor use of frequency analysis.[15] In Europe,ItalianscholarGiambattista della Porta(1535–1615) was the author of a seminal work on cryptanalysis,De Furtivis Literarum Notis.[21] Successful cryptanalysis has undoubtedly influenced history; the ability to read the presumed-secret thoughts and plans of others can be a decisive advantage. For example, in England in 1587,Mary, Queen of Scotswas tried and executed fortreasonas a result of her involvement in three plots to assassinateElizabeth I of England. The plans came to light after her coded correspondence with fellow conspirators was deciphered byThomas Phelippes. 
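The frequency-analysis attack on a simple substitution cipher can be demonstrated concretely. The sketch below uses a Caesar shift (a special case of monoalphabetic substitution) and guesses that the most frequent ciphertext letter stands for plaintext "E"; the sample sentence is illustrative.

```python
from collections import Counter

def caesar_encrypt(text: str, shift: int = 3) -> str:
    # Toy monoalphabetic substitution: shift each letter by a fixed amount.
    return "".join(
        chr((ord(c) - 65 + shift) % 26 + 65) if c.isalpha() else c
        for c in text.upper()
    )

ciphertext = caesar_encrypt(
    "THE QUICK BROWN FOX JUMPS OVER THE LAZY DOG WHILE THE OTHER "
    "FOXES SLEEP BESIDE THE HEDGE NEAR THE RIVER"
)

# Frequency analysis: assume the most common ciphertext letter maps to 'E',
# the most common letter in English plaintext.
counts = Counter(c for c in ciphertext if c.isalpha())
most_common = counts.most_common(1)[0][0]
guessed_shift = (ord(most_common) - ord("E")) % 26
print(guessed_shift)  # 3 - the guess succeeds because 'E' dominates the sample
```

With a long enough ciphertext the guess is almost always correct, which is precisely why the cipher's failure to hide letter statistics is fatal.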
In Europe during the 15th and 16th centuries, the idea of a polyalphabetic substitution cipher was developed, among others by the French diplomat Blaise de Vigenère (1523–96).[22] For some three centuries, the Vigenère cipher, which uses a repeating key to select different encryption alphabets in rotation, was considered to be completely secure (le chiffre indéchiffrable – "the indecipherable cipher"). Nevertheless, Charles Babbage (1791–1871) and later, independently, Friedrich Kasiski (1805–81) succeeded in breaking this cipher.[23] During World War I, inventors in several countries developed rotor cipher machines such as Arthur Scherbius' Enigma, in an attempt to minimise the repetition that had been exploited to break the Vigenère system.[24] In World War I, the breaking of the Zimmermann Telegram was instrumental in bringing the United States into the war. In World War II, the Allies benefitted enormously from their joint success in cryptanalysis of the German ciphers – including the Enigma machine and the Lorenz cipher – and Japanese ciphers, particularly 'Purple' and JN-25. 'Ultra' intelligence has been credited with everything between shortening the end of the European war by up to two years, to determining the eventual result. The war in the Pacific was similarly helped by 'Magic' intelligence.[25] Cryptanalysis of enemy messages played a significant part in the Allied victory in World War II. F. W. Winterbotham quoted the western Supreme Allied Commander, Dwight D. 
Eisenhower, at the war's end as describingUltraintelligence as having been "decisive" to Allied victory.[26]Sir Harry Hinsley, official historian of British Intelligence in World War II, made a similar assessment about Ultra, saying that it shortened the war "by not less than two years and probably by four years"; moreover, he said that in the absence of Ultra, it is uncertain how the war would have ended.[27] In practice, frequency analysis relies as much onlinguisticknowledge as it does on statistics, but as ciphers became more complex,mathematicsbecame more important in cryptanalysis. This change was particularly evident before and duringWorld War II, where efforts to crackAxisciphers required new levels of mathematical sophistication. Moreover, automation was first applied to cryptanalysis in that era with the PolishBombadevice, the BritishBombe, the use ofpunched cardequipment, and in theColossus computers– the first electronic digital computers to be controlled by a program.[28][29] With reciprocal machine ciphers such as theLorenz cipherand theEnigma machineused byNazi GermanyduringWorld War II, each message had its own key. Usually, the transmitting operator informed the receiving operator of this message key by transmitting some plaintext and/or ciphertext before the enciphered message. This is termed theindicator, as it indicates to the receiving operator how to set his machine to decipher the message.[30] Poorly designed and implemented indicator systems allowed firstPolish cryptographers[31]and then the British cryptographers atBletchley Park[32]to break the Enigma cipher system. Similar poor indicator systems allowed the British to identifydepthsthat led to the diagnosis of theLorenz SZ40/42cipher system, and the comprehensive breaking of its messages without the cryptanalysts seeing the cipher machine.[33] Sending two or more messages with the same key is an insecure process. 
To a cryptanalyst the messages are then said to be"in depth."[34][35]This may be detected by the messages having the sameindicatorby which the sending operator informs the receiving operator about thekey generator initial settingsfor the message.[36] Generally, the cryptanalyst may benefit from lining up identical enciphering operations among a set of messages. For example, theVernam cipherenciphers by bit-for-bit combining plaintext with a long key using the "exclusive or" operator, which is also known as "modulo-2 addition" (symbolized by ⊕ ): Deciphering combines the same key bits with the ciphertext to reconstruct the plaintext: (In modulo-2 arithmetic, addition is the same as subtraction.) When two such ciphertexts are aligned in depth, combining them eliminates the common key, leaving just a combination of the two plaintexts: The individual plaintexts can then be worked out linguistically by tryingprobable words(or phrases), also known as"cribs,"at various locations; a correct guess, when combined with the merged plaintext stream, produces intelligible text from the other plaintext component: The recovered fragment of the second plaintext can often be extended in one or both directions, and the extra characters can be combined with the merged plaintext stream to extend the first plaintext. Working back and forth between the two plaintexts, using the intelligibility criterion to check guesses, the analyst may recover much or all of the original plaintexts. (With only two plaintexts in depth, the analyst may not know which one corresponds to which ciphertext, but in practice this is not a large problem.) 
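The depth attack described above is easy to demonstrate: XORing two ciphertexts that share a key cancels the key entirely, and a correctly guessed crib in one plaintext exposes the corresponding fragment of the other. The messages and key below are illustrative.

```python
def xor(a: bytes, b: bytes) -> bytes:
    # Bitwise exclusive-or of two equal-length byte strings.
    return bytes(x ^ y for x, y in zip(a, b))

key = b"\x13\x37\x42\x99\xab\xcd\xef\x01\x23\x45\x67\x89\x10\x20"
p1 = b"ATTACK AT DAWN"
p2 = b"RETREAT TO SEA"

c1, c2 = xor(p1, key), xor(p2, key)

# XOR of the two in-depth ciphertexts eliminates the common key:
assert xor(c1, c2) == xor(p1, p2)

# Crib dragging: a correct guess of p1 recovers the other plaintext...
recovered_p2 = xor(xor(c1, c2), p1)
assert recovered_p2 == p2

# ...and combining a recovered plaintext with its ciphertext reveals the key.
assert xor(c2, recovered_p2) == key
```

In practice the analyst does not know the full plaintext in advance, so probable words are slid along the merged stream until intelligible text emerges on the other side, exactly as the text describes.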
When a recovered plaintext is then combined with its ciphertext, the key is revealed: Knowledge of a key then allows the analyst to read other messages encrypted with the same key, and knowledge of a set of related keys may allow cryptanalysts to diagnose the system used for constructing them.[33] Governments have long recognized the potential benefits of cryptanalysis forintelligence, both military and diplomatic, and established dedicated organizations devoted to breaking the codes and ciphers of other nations, for example,GCHQand theNSA, organizations which are still very active today. Even though computation was used to great effect in thecryptanalysis of the Lorenz cipherand other systems during World War II, it also made possible new methods of cryptographyorders of magnitudemore complex than ever before. Taken as a whole, modern cryptography has become much more impervious to cryptanalysis than the pen-and-paper systems of the past, and now seems to have the upper hand against pure cryptanalysis.[citation needed]The historianDavid Kahnnotes:[37] Many are the cryptosystems offered by the hundreds of commercial vendors today that cannot be broken by any known methods of cryptanalysis. Indeed, in such systems even achosen plaintext attack, in which a selected plaintext is matched against its ciphertext, cannot yield the key that unlock[s] other messages. In a sense, then, cryptanalysis is dead. But that is not the end of the story. Cryptanalysis may be dead, but there is – to mix my metaphors – more than one way to skin a cat. Kahn goes on to mention increased opportunities for interception,bugging,side channel attacks, andquantum computersas replacements for the traditional means of cryptanalysis. In 2010, former NSA technical director Brian Snow said that both academic and government cryptographers are "moving very slowly forward in a mature field."[38] However, any postmortems for cryptanalysis may be premature. 
While the effectiveness of cryptanalytic methods employed by intelligence agencies remains unknown, many serious attacks against both academic and practical cryptographic primitives have been published in the modern era of computer cryptography:[39] Thus, while the best modern ciphers may be far more resistant to cryptanalysis than the Enigma, cryptanalysis and the broader field of information security remain quite active.[40] Asymmetric cryptography (or public-key cryptography) is cryptography that relies on using two (mathematically related) keys; one private, and one public. Such ciphers invariably rely on "hard" mathematical problems as the basis of their security, so an obvious point of attack is to develop methods for solving the problem. The security of two-key cryptography depends on mathematical questions in a way that single-key cryptography generally does not, and conversely links cryptanalysis to wider mathematical research in a new way.[41] Asymmetric schemes are designed around the (conjectured) difficulty of solving various mathematical problems. If an improved algorithm can be found to solve the problem, then the system is weakened. For example, the security of the Diffie–Hellman key exchange scheme depends on the difficulty of calculating the discrete logarithm. In 1983, Don Coppersmith found a faster way to find discrete logarithms (in certain groups), thereby requiring cryptographers to use larger groups (or different types of groups). RSA's security depends (in part) upon the difficulty of integer factorization – a breakthrough in factoring would impact the security of RSA.[42] In 1980, one could factor a difficult 50-digit number at an expense of 10¹² elementary computer operations. By 1984 the state of the art in factoring algorithms had advanced to a point where a 75-digit number could be factored in 10¹² operations. 
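The dependence of Diffie–Hellman on the discrete logarithm can be seen in a toy exchange: each party publishes g^x mod p, yet recovering x from that value is exactly the discrete-log problem. The tiny prime below is purely for illustration; real deployments use groups of 2048 bits or more, where the brute-force search shown at the end is utterly infeasible.

```python
# Public parameters (toy-sized; assumed values for illustration only).
p = 2039          # a small prime modulus
g = 7             # a group element used as the base

# Each party picks a private exponent and publishes g^x mod p.
a_secret, b_secret = 123, 456
A = pow(g, a_secret, p)
B = pow(g, b_secret, p)

# Both sides derive the same shared secret without ever transmitting it.
shared_a = pow(B, a_secret, p)
shared_b = pow(A, b_secret, p)
assert shared_a == shared_b

# An eavesdropper sees p, g, A, and B; recovering a private exponent
# means solving the discrete logarithm. Exhaustive search succeeds here
# only because the group is tiny.
log = next(x for x in range(p) if pow(g, x, p) == A)
assert pow(g, log, p) == A
```

An improved discrete-log algorithm shrinks the cost of that final search, which is why advances such as Coppersmith's force larger (or structurally different) groups.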
Advances in computing technology also meant that the operations could be performed much faster.Moore's lawpredicts that computer speeds will continue to increase. Factoring techniques may continue to do so as well, but will most likely depend on mathematical insight and creativity, neither of which has ever been successfully predictable. 150-digit numbers of the kind once used in RSA have been factored. The effort was greater than above, but was not unreasonable on fast modern computers. By the start of the 21st century, 150-digit numbers were no longer considered a large enoughkey sizefor RSA. Numbers with several hundred digits were still considered too hard to factor in 2005, though methods will probably continue to improve over time, requiring key size to keep pace or other methods such aselliptic curve cryptographyto be used.[citation needed] Another distinguishing feature of asymmetric schemes is that, unlike attacks on symmetric cryptosystems, any cryptanalysis has the opportunity to make use of knowledge gained from thepublic key.[43] Quantum computers, which are still in the early phases of research, have potential use in cryptanalysis. For example,Shor's Algorithmcould factor large numbers inpolynomial time, in effect breaking some commonly used forms of public-key encryption.[44] By usingGrover's algorithmon a quantum computer, brute-force key search can be made quadratically faster. However, this could be countered by doubling the key length.[45]
https://en.wikipedia.org/wiki/Cryptanalysis
Anevil maid attackis an attack on an unattended device, in which an attacker with physical access alters it in some undetectable way so that they can later access the device, or the data on it. The name refers to the scenario where amaidcould subvert a device left unattended in a hotel room – but the concept itself also applies to situations such as a device being intercepted while in transit, or taken away temporarily by airport or law enforcement personnel. In a 2009 blog post, security analystJoanna Rutkowskacoined the term "Evil Maid Attack" due to hotel rooms being a common place where devices are left unattended.[1][2]The post detailed a method for compromising the firmware on an unattended computer via an external USB flash drive – and therefore bypassingTrueCryptdisk encryption.[2] D. Defreez, a computer security professional, first mentioned the possibility of an evil maid attack on Android smartphones in 2011.[1]He talked about the WhisperCore Android distribution and its ability to provide disk encryption for Androids.[1] In 2007, former U.S. Commerce SecretaryCarlos Gutierrezwas allegedly targeted by an evil maid attack during a business trip to China.[3]He left his computer unattended during a trade talk in Beijing, and he suspected that his device had been compromised.[3]Although the allegations have yet to be confirmed or denied, the incident caused the U.S. government to be more wary of physical attacks.[3] In 2009,SymantecCTO Mark Bregman was advised by several U.S. agencies to leave his devices in the U.S. before travelling to China.[4]He was instructed to buy new ones before leaving and dispose of them when he returned so that any physical attempts to retrieve data would be ineffective.[4] The attack begins when the victim leaves their device unattended.[5]The attacker can then proceed to tamper with the system. 
If the victim's device does not have password protection or authentication, an intruder can turn on the computer and immediately access the victim's information.[6]However, if the device is password protected, as with full diskencryption, the firmware of the device needs to be compromised, usually done with an external drive.[6]The compromised firmware then provides the victim with a fake password prompt identical to the original.[6]Once the password is input, the compromised firmware sends the password to the attacker and removes itself after a reboot.[6]In order to successfully complete the attack, the attacker must return to the device once it has been unattended a second time to steal the now-accessible data.[5][7] Another method of attack is through aDMA attackin which an attacker accesses the victim's information through hardware devices that connect directly to the physical address space.[6]The attacker simply needs to connect to the hardware device in order to access the information. 
An evil maid attack can also be carried out by replacing the victim's device with an identical one.[1] If the original device has a bootloader password, the attacker only needs to acquire a device with an identical bootloader password input screen.[1] If the device has a lock screen, however, the process becomes more difficult, as the attacker must also reproduce the background picture on the lock screen of the mimicking device.[1] In either case, when the victim enters their password on the false device, it sends the password to the attacker, who is in possession of the original device.[1] The attacker can then access the victim's data.[1] Legacy BIOS is considered insecure against evil maid attacks.[8] Its architecture is old, updates and option ROMs are unsigned, and its configuration is unprotected.[8] Additionally, it does not support secure boot.[8] These vulnerabilities allow an attacker to boot from an external drive and compromise the firmware.[8] The compromised firmware can then be configured to send keystrokes to the attacker remotely.[8] Unified Extensible Firmware Interface (UEFI) provides many features necessary for mitigating evil maid attacks.[8] For example, it offers a framework for secure boot, authenticated variables at boot time, and TPM initialization security.[8] Despite these available security measures, platform manufacturers are not obligated to use them.[8] Security issues may thus arise when these unused features allow an attacker to exploit the device.[8] Many full disk encryption systems, such as TrueCrypt and PGP Whole Disk Encryption, are susceptible to evil maid attacks because they cannot authenticate themselves to the user.[9] An attacker can still modify disk contents even though the device is powered off and encrypted.[9] The attacker can modify the encryption system's loader code to steal passwords from the victim.[9] The ability to create a communication channel between the bootloader and the operating system to remotely steal the password for
a disk protected by FileVault 2 has also been explored.[10] On a macOS system, this attack has additional implications due to "password forwarding" technology, in which a user's account password also serves as the FileVault password, enabling an additional attack surface through privilege escalation. In 2019, a vulnerability named "Thunderclap" in Intel Thunderbolt ports found on many PCs was announced, which could allow a rogue actor to gain access to the system via direct memory access (DMA). This is possible despite the use of an input/output memory management unit (IOMMU).[11][12] This vulnerability was largely patched by vendors. It was followed in 2020 by "Thunderspy", which is believed to be unpatchable and allows similar exploitation of DMA to gain total access to the system, bypassing all security features.[13] Any unattended device can be vulnerable to a network evil maid attack.[1] If the attacker knows the victim's device well enough, they can replace it with an identical model containing a password-stealing mechanism.[1] When the victim enters their password, the attacker is instantly notified of it and can access the stolen device's information.[1] One approach is to detect that someone is close to, or handling, the unattended device. Proximity alarms, motion detector alarms, and wireless cameras can be used to alert the victim when an attacker is near the device, thereby nullifying the surprise factor of an evil maid attack.[14] The Haven Android app was created in 2017 by Edward Snowden to do such monitoring and transmit the results to the user's smartphone.[15] In the absence of the above, tamper-evident technology of various kinds can be used to detect whether the device has been taken apart – including the low-cost solution of putting glitter nail polish over the screw holes.[16] After an attack has been suspected, the victim can have their device checked to see if any malware was installed, but this is challenging.
Suggested approaches include checking the hashes of selected disk sectors and partitions.[2] If the device is under surveillance at all times, an attacker cannot perform an evil maid attack.[14] If left unattended, the device may also be placed inside a lockbox so that an attacker cannot gain physical access to it.[14] However, there will be situations, such as a device being taken away temporarily by airport or law enforcement personnel, where this is not practical. Basic security measures, such as keeping the firmware up to date and shutting down the device before leaving it unattended, prevent an attack from exploiting vulnerabilities in legacy architecture and from allowing external devices into open ports, respectively.[5] CPU-based disk encryption systems, such as TRESOR and Loop-Amnesia, prevent data from being vulnerable to a DMA attack by ensuring it does not leak into system memory.[17] TPM-based secure boot has been shown to mitigate evil maid attacks by authenticating the device to the user.[18] It does this by unlocking itself only if the correct password is given by the user and if it measures that no unauthorized code has been executed on the device.[18] These measurements are done by root-of-trust systems, such as Microsoft's BitLocker and Intel's TXT technology.[9] The Anti Evil Maid program builds upon TPM-based secure boot and further attempts to authenticate the device to the user.[1]
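The sector-hashing check mentioned above can be sketched as follows. This is a minimal illustration: an ordinary file stands in for a raw disk device, and the sector size is an assumption.

```python
import hashlib

SECTOR_SIZE = 512  # assumed sector size for this sketch

def hash_sectors(path, sector_numbers):
    """Hash selected 'sectors' of a file (standing in for a raw disk)."""
    digests = {}
    with open(path, "rb") as disk:
        for n in sector_numbers:
            disk.seek(n * SECTOR_SIZE)
            digests[n] = hashlib.sha256(disk.read(SECTOR_SIZE)).hexdigest()
    return digests

def detect_tampering(path, sector_numbers, baseline):
    """Compare current sector hashes against a trusted baseline."""
    current = hash_sectors(path, sector_numbers)
    return [n for n in sector_numbers if current[n] != baseline[n]]
```

In use, the owner would record a baseline before leaving the device unattended and re-run the check afterwards; any sector whose hash changed (for example, a modified bootloader sector) is reported.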
https://en.wikipedia.org/wiki/Evil_maid_attack
In cryptography, the interlock protocol, as described by Ron Rivest and Adi Shamir, is a protocol designed to frustrate eavesdropper attacks against two parties that use an anonymous key exchange protocol to secure their conversation. A further paper proposed using it as an authentication protocol, which was subsequently broken. Most cryptographic protocols rely on the prior establishment of secret or public keys or passwords. However, the Diffie–Hellman key exchange protocol introduced the concept of two parties establishing a secure channel (that is, with at least some desirable security properties) without any such prior agreement. Unauthenticated Diffie–Hellman, as an anonymous key agreement protocol, has long been known to be subject to man-in-the-middle attacks. However, the dream of a "zipless" mutually authenticated secure channel remained. The interlock protocol was described[1] as a method to expose a middle-man who might try to compromise two parties that use anonymous key agreement to secure their conversation. The interlock protocol works roughly as follows: The strength of the protocol lies in the fact that half of an encrypted message cannot be decrypted. Thus, if Mallory begins her attack and intercepts Bob and Alice's keys, Mallory will be unable to decrypt Alice's half-message (encrypted using her key) and re-encrypt it using Bob's key. She must wait until both halves of the message have been received to read it, and can only succeed in duping one of the parties if she composes a completely new message. Davies and Price proposed the use of the interlock protocol for authentication in a book titled Security for Computer Networks.[2] But an attack on this was described by Steven M. Bellovin and Michael Merritt.[3] A subsequent refinement was proposed by Ellison.[4] The Bellovin/Merritt attack entails composing a fake message to send to the first party.
Passwords may be sent using the interlock protocol between A and B as follows: where Ea,b(M) is message M encrypted with the key derived from the Diffie–Hellman exchange between A and B, <1>/<2> denote first and second halves, and Pa/Pb are the passwords of A and B. An attacker, Z, could send half of a bogus message, P?, to elicit Pa from A: At this point, Z has compromised both Pa and Pb. The attack can be defeated by verifying the passwords in parts, so that when Ea,z(P?)<1> is sent, it is known to be invalid and Ea,z(Pa)<2> is never sent (suggested by Davies). However, this does not work when the passwords are hashed, since half of a hash is useless, according to Bellovin.[3] Several other methods have also been proposed,[5][6][7][8] including using a shared secret in addition to the password. The forced-latency enhancement can also prevent certain attacks. A modified interlock protocol can require B (the server) to delay all responses for a known duration: where "data" is the encrypted data that immediately follows the interlock protocol exchange (it could be anything), encoded using an all-or-nothing transform to prevent in-transit modification of the message. Ma<1> could contain an encrypted request and a copy of Ka. Ma<2> could contain the decryption key for Ma<1>. Mb<1> could contain an encrypted copy of Kb, and Mb<2> could contain the decryption key for Mb<1> and the response, such as OK or NOT FOUND, and the hash digest of the data. A MITM attack can be attempted using the attack described in the Bellovin paper (Z being the man-in-the-middle): In this case, A receives the data after approximately 3*T, since Z has to perform the interlocking exchange with B. Hence, the attempted MITM attack can be detected and the session aborted. Of course, Z could choose not to perform the interlock protocol with B (opting instead to send his own Mb), but then the session would be between A and Z, not A, Z, and B: Z wouldn't be in the middle.
For this reason, the interlock protocol cannot be effectively used to provide authentication, although it can ensure that no third party can modify the messages in transit without detection.
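The halved-message exchange at the heart of the protocol can be sketched in code. The "encryption" below is a throwaway hash-based XOR construction chosen only because it has the property the protocol needs (neither half of a package is decryptable on its own); it is not a real cipher, and all names are illustrative:

```python
# Toy sketch of the interlock protocol's halved-message exchange.
# "Encryption" here is a throwaway XOR/package construction, not a real
# cipher; it only models the property that half a ciphertext is useless.

import hashlib, os

def _stream(key, n):
    # Deterministic keystream from a key (toy stream cipher).
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:n]

def package(message, session_key):
    """All-or-nothing-style packaging: both halves are needed to decrypt."""
    inner = os.urandom(32)
    ct = bytes(m ^ k for m, k in zip(message, _stream(inner, len(message))))
    mask = hashlib.sha256(session_key + ct).digest()
    tail = bytes(i ^ m for i, m in zip(inner, mask))
    blob = ct + tail
    half = len(blob) // 2
    return blob[:half], blob[half:]

def unpackage(half1, half2, session_key):
    blob = half1 + half2
    ct, tail = blob[:-32], blob[-32:]
    mask = hashlib.sha256(session_key + ct).digest()
    inner = bytes(t ^ m for t, m in zip(tail, mask))
    return bytes(c ^ k for c, k in zip(ct, _stream(inner, len(ct))))

# Interlocked exchange: each party releases its second half only after
# receiving the other party's first half.
ka = kb = b"shared-session-key"        # after (possibly MITM'd) key agreement
a1, a2 = package(b"Hello from Alice", ka)
b1, b2 = package(b"Hello from Bob", kb)
# Wire order: a1 -> Bob, b1 -> Alice, a2 -> Bob, b2 -> Alice.
# A middle-man holding a1 alone cannot decrypt it and re-encrypt it in time.
print(unpackage(a1, a2, ka))   # b'Hello from Alice'
print(unpackage(b1, b2, kb))   # b'Hello from Bob'
```

The interlocking forces Mallory to commit to something before she has seen either complete message, which is exactly why she can only dupe a party by inventing a wholly new message.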
https://en.wikipedia.org/wiki/Interlock_protocol
Man-in-the-browser (MITB, MitB, MIB, MiB), a form of Internet threat related to man-in-the-middle (MITM), is a proxy Trojan horse[1] that infects a web browser by taking advantage of vulnerabilities in browser security to modify web pages, modify transaction content or insert additional transactions, all in a covert fashion invisible to both the user and the host web application. A MitB attack will be successful irrespective of whether security mechanisms such as SSL/PKI and/or two- or three-factor authentication solutions are in place. A MitB attack may be countered by using out-of-band transaction verification, although SMS verification can be defeated by man-in-the-mobile (MitMo) malware infection on the mobile phone. Trojans may be detected and removed by antivirus software,[2] but a 2011 report concluded that additional measures on top of antivirus software were needed.[3][needs update] A related, simpler attack is the boy-in-the-browser (BitB, BITB). The majority of financial service professionals in a 2014 survey considered MitB to be the greatest threat to online banking.[4] The MitB threat was demonstrated by Augusto Paes de Barros in his 2005 presentation about backdoor trends, "The future of backdoors - worst of all worlds".[5] The name "man-in-the-browser" was coined by Philipp Gühring on 27 January 2007.[6] A MitB Trojan works by using common facilities provided to enhance browser capabilities, such as Browser Helper Objects (a feature limited to Internet Explorer), browser extensions and user scripts (for example in JavaScript).[6] Antivirus software can detect some of these methods.[2] In a nutshell example exchange between user and host, such as an Internet banking funds transfer, the customer will always be shown, via confirmation screens, the exact payment information as keyed into the browser. The bank, however, will receive a transaction with materially altered instructions, i.e. a different destination account number and possibly a different amount.
The use of strong authentication tools simply creates an increased level of misplaced confidence on the part of both customer and bank that the transaction is secure. Authentication, by definition, is concerned with the validation of identity credentials; this should not be confused with transaction verification. Examples of MitB threats on different operating systems and web browsers: Known Trojans may be detected, blocked, and removed by antivirus software.[2] In a 2009 study, the effectiveness of antivirus against Zeus was 23%,[25] and again low success rates were reported in a separate test in 2011.[3] The 2011 report concluded that additional measures on top of antivirus were needed.[3] A theoretically effective method of combating any MitB attack is an out-of-band (OOB) transaction verification process. This overcomes the MitB Trojan by verifying the transaction details, as received by the host (bank), to the user (customer) over a channel other than the browser; for example, an automated telephone call, SMS, or a dedicated mobile app with a graphical cryptogram.[30] OOB transaction verification is ideal for mass-market use since it leverages devices already in the public domain (e.g. landline, mobile phone, etc.) and requires no additional hardware devices, yet enables three-factor authentication (using voice biometrics), transaction signing (to non-repudiation level), and transaction verification. The downside is that OOB transaction verification adds to the end-user's frustration with more and slower steps.
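The out-of-band check can be sketched as a toy comparison between what the user intended and what the bank actually received. All names and values are invented; a real deployment would deliver the bank's view over SMS, a phone call, or an app as described above:

```python
# Toy model of out-of-band (OOB) transaction verification.
# The browser channel may be tampered with by a MitB trojan; the OOB
# channel carries what the *bank* received, so the user can compare.

def mitb_tamper(transaction):
    # What a MitB trojan does in transit: alter the payee invisibly.
    altered = dict(transaction)
    altered["account"] = "ATTACKER-IBAN"   # hypothetical value
    return altered

def oob_verify(user_intended, bank_received):
    """User compares the OOB message against what they meant to send."""
    return user_intended == bank_received

intended = {"account": "DE00 1234", "amount": "100.00"}
received_by_bank = mitb_tamper(intended)       # browser channel compromised
# Bank reports the details it received over a second channel (SMS, call):
print(oob_verify(intended, received_by_bank))  # False -> user aborts transfer
```

The design point is that the comparison happens outside the browser, so the trojan cannot forge both the transaction and the confirmation the user sees.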
Mobile phone Trojan spyware man-in-the-mobile (MitMo)[31] can defeat OOB SMS transaction verification.[32] Web fraud detection can be implemented at the bank to automatically check for anomalous behaviour patterns in transactions.[34] Keyloggers are the most primitive form of proxy Trojans, followed by browser-session recorders that capture more data; MitBs are the most sophisticated type.[1] SSL/PKI etc. may offer protection against a man-in-the-middle attack, but offer no protection against a man-in-the-browser attack. A related attack that is simpler and quicker for malware authors to set up is termed boy-in-the-browser (BitB or BITB). Malware is used to change the client's computer network routing to perform a classic man-in-the-middle attack. Once the routing has been changed, the malware may completely remove itself, making detection more difficult.[35] Clickjacking tricks a web browser user into clicking on something different from what the user perceives, by means of malicious code in the webpage.
https://en.wikipedia.org/wiki/Man-in-the-browser
A man-on-the-side attack is a form of active attack in computer security, similar to a man-in-the-middle attack. Instead of completely controlling a network node as in a man-in-the-middle attack, the attacker only has regular access to the communication channel, which allows them to read the traffic and insert new messages, but not to modify or delete messages sent by other participants. The attacker relies on a timing advantage to make sure that the response they send to the request of a victim arrives before the legitimate response. In real-world attacks, the response packet sent by the attacker can be used to place malware on the victim's computer.[1] The need for a timing advantage makes the attack difficult to execute, as it requires a privileged position in the network, for example on the Internet backbone.[2] Potentially, this class of attack may also be performed within a local network (assuming a privileged position), and research has shown that it has been successful within critical infrastructure.[3] The 2013 global surveillance revelations revealed that the US National Security Agency (NSA) widely uses man-on-the-side attacks to infect targets with malware through its QUANTUM program.[1] GitHub suffered such an attack in 2015.[4] A similar attack in 2019 has been attributed to a Russian threat group. The term man-on-the-side became more widely known after Edward Snowden leaked information about the NSA's QUANTUM INSERT project. A man-on-the-side attack involves a cyber-attacker in a conversation between two people or two parties who are communicating online. The attacker is able to intercept and inject messages into the communication between the two parties,[5] but is not able to remove any signals from the communication channels. A man-on-the-side attack can be applied to websites while retrieving online file downloads. The attacker is able to receive signals and perform the attack through a satellite.
As long as they have a satellite dish where they are residing, they will be able to read transmissions and receive signals. Satellites tend to have high latency, which gives the attacker enough time to send their injected response to the victim before the actual response from one party reaches the other through the satellite link.[5] This is why an attacker relies on a timing advantage. The main difference between a man-in-the-middle attack and a man-on-the-side attack is that man-in-the-middle attackers are able to intercept and block messages and signals from being transmitted, whilst man-on-the-side attackers can only intercept and inject messages before the other party receives the legitimate response. In 2019, it was reported that a man-on-the-side attack might have been carried out by a Russian threat group to install malware. When a victim used the Internet and requested to download a file from a particular website, the man-on-the-side attackers were aware that the victim was attempting to download the file. Since the attackers were not able to prevent the victim from downloading the file, what they could do was intercept the server's reply and send a signal to the victim before the victim received the legitimate response, which was the requested download file.[6] The attackers then sent the victims a message directing them to a 302 error page, which led the victims to think that the file had been removed or simply could not be downloaded.
However, even though the victim would eventually receive a legitimate response from the website's file download, since their connection had already been compromised they would not see the legitimate website and file, having already accepted the attacker's seemingly proper response.[7] At the 302 error page, the attackers directed the victims to an alternative website to download the files they wanted, a site which the attackers controlled and ran. When the victims connected to the attackers' server, unbeknownst to them, they would start downloading the file, because on their screens it appeared that the site was working and the download was finally available.[8] However, the attackers had already taken the original file from the legitimate website, modified it to include malware, and served that file back to the victims. When a victim clicked on the link and started the download, they were downloading a file that now contained malware. In 2015, two GitHub repositories suffered a flooding attack resulting from a man-on-the-side attack. When a user outside of China attempts to browse a Chinese website, their traffic must pass through the Chinese Internet infrastructure before being directed to the website. The infrastructure allowed the request to reach the legitimate Chinese website the user wanted to browse without any modification. The response came back from the website, but as it passed through the Chinese Internet infrastructure, before it could get back to the user, it was modified.
The modification was malware that changed the Baidu analytics script so that, in addition to accessing Baidu, the user's browser also made requests to the two GitHub repositories as the user continued browsing.[9] The users, who could continue browsing the Chinese search engine Baidu, were unwitting participants, entirely unaware that their response contained an embedded malicious script making side requests to GitHub.[9] This happened to all users outside of China who tried to access a Chinese website, which resulted in extremely high volumes of requests to the two GitHub repositories. The enormous load flooded GitHub's servers, effecting the attack.
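The timing advantage the whole technique depends on can be modelled with a toy race: the victim's client accepts whichever response arrives first for its request. The names, payloads and latencies below are invented:

```python
# Toy model of the man-on-the-side timing race: the victim's client
# accepts the first response that arrives for its request.

def first_response(responses):
    """responses: list of (latency_ms, payload). The earliest one wins."""
    return min(responses, key=lambda r: r[0])[1]

legitimate = (300, "requested-file")        # slow path, e.g. via satellite
injected   = (40, "302 -> attacker-site")   # attacker is closer and faster

winner = first_response([legitimate, injected])
print(winner)   # the injected redirect arrives first
```

Because the attacker cannot delete the legitimate response, defences such as the forced-latency idea mentioned for the interlock protocol, or simply detecting duplicate answers to one request, exploit the fact that both responses eventually arrive.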
https://en.wikipedia.org/wiki/Man-on-the-side_attack
Mutual authentication or two-way authentication (not to be confused with two-factor authentication) refers to two parties authenticating each other at the same time in an authentication protocol. It is the default mode of authentication in some protocols (IKE, SSH) and optional in others (TLS). Mutual authentication is a desired characteristic in verification schemes that transmit sensitive data, in order to ensure data security.[1][2] Mutual authentication can be accomplished with two types of credentials: usernames and passwords, and public key certificates. Mutual authentication is often employed in the Internet of Things (IoT). Writing effective security schemes in IoT systems is challenging, especially when schemes are desired to be lightweight and have low computational costs. Mutual authentication is a crucial security step that can defend against many adversarial attacks,[3] which otherwise can have large consequences if IoT systems (such as e-Healthcare servers) are hacked. In scheme analyses of past works, a lack of mutual authentication has been considered a weakness in data transmission schemes.[4] Schemes that have a mutual authentication step may use different methods of encryption, communication, and verification, but they all share one thing in common: each entity involved in the communication is verified. If Alice wants to communicate with Bob, they will both authenticate the other and verify that it is who they are expecting to communicate with before any data or messages are transmitted. A mutual authentication process that exchanges user IDs may be implemented as follows:[citation needed] To verify that mutual authentication has occurred successfully, Burrows–Abadi–Needham logic (BAN logic) is a well-regarded and widely accepted method to use, because it verifies that a message came from a trustworthy entity.
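One common shape for such a mutual verification step is a nonce-based challenge-response over a pre-shared key. The sketch below is purely illustrative (it is not the article's elided step list, and all identifiers are invented): each side proves knowledge of the key by answering the other's fresh challenge, and both checks must pass.

```python
# Toy mutual challenge-response over a pre-shared key.
# Each side proves knowledge of the key by answering the other's nonce;
# this is an illustrative sketch, not a production protocol.

import hmac, hashlib, os

def prove(key, nonce, party_id):
    # Include the responder's identity to prevent reflection attacks.
    return hmac.new(key, nonce + party_id, hashlib.sha256).digest()

def mutual_authenticate(key_alice, key_bob):
    na, nb = os.urandom(16), os.urandom(16)        # fresh challenges
    # Alice answers Bob's challenge; Bob answers Alice's challenge.
    alice_resp = prove(key_alice, nb, b"alice")
    bob_resp   = prove(key_bob,   na, b"bob")
    bob_ok   = hmac.compare_digest(alice_resp, prove(key_bob,   nb, b"alice"))
    alice_ok = hmac.compare_digest(bob_resp,   prove(key_alice, na, b"bob"))
    return bob_ok and alice_ok                     # both sides must verify

print(mutual_authenticate(b"shared-secret", b"shared-secret"))  # True
print(mutual_authenticate(b"shared-secret", b"wrong-guess"))    # False
```

Note the symmetry: authentication of only one direction would leave the other party talking to an unverified peer, which is exactly what mutual authentication rules out.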
BAN logic first assumes an entity is not to be trusted, and then verifies its legality.[1][2][5][6] Mutual authentication supports zero trust networking because it can protect communications against adversarial attacks,[7] notably: Mutual authentication also ensures information integrity, because if the parties are verified to be the correct source, then the information received is reliable as well.[5] By default, the TLS protocol only proves the identity of the server to the client using X.509 certificates, and the authentication of the client to the server is left to the application layer. TLS also offers client-to-server authentication using client-side X.509 authentication.[13] As it requires provisioning of the certificates to the clients and involves a less user-friendly experience, it is rarely used in end-user applications. Mutual TLS authentication (mTLS) is more often used in business-to-business (B2B) applications, where a limited number of programmatic and homogeneous clients connect to specific web services, the operational burden is limited, and security requirements are usually much higher than in consumer environments. mTLS is also used in microservices-based applications based on runtimes such as Dapr, via systems like SPIFFE.[7] While lightweight schemes and secure schemes are not mutually exclusive, adding a mutual authentication step to data transmission protocols can often increase performance runtime and computational costs.[2] This can become an issue for network systems that cannot handle large amounts of data or those that constantly have to update with new real-time data (e.g. location tracking, real-time health data).[2][9] Thus, it becomes a desired characteristic of many mutual authentication schemes to have lightweight properties (e.g.
have a low memory footprint) in order to accommodate systems that store a lot of data.[3] Many systems implement cloud computing, which allows quick access to large amounts of data, but sometimes large amounts of data can slow down communication. Even with edge-based cloud computing, which is faster than general cloud computing due to the closer proximity between server and user,[5] lightweight schemes allow more speed when managing larger amounts of data. One solution to keep schemes lightweight during the mutual authentication process is to limit the number of bits used during communication.[3] Applications that rely solely on device-to-device (D2D) communication, where multiple devices can communicate locally in close proximity, remove the third-party network. This in turn can speed up communication time.[14] However, the authentication still occurs through insecure channels, so researchers believe it is still important to ensure mutual authentication occurs in order to keep the scheme secure.[14] Schemes may sacrifice better runtime or storage cost when ensuring mutual authentication, in order to prioritize protecting the sensitive data.[2][11] In mutual authentication schemes that require a user's input password as part of the verification process, there is a higher vulnerability to hackers, because the password is human-made rather than a computer-generated certificate. While applications could simply require users to use a computer-generated password, it is inconvenient for people to remember. User-made passwords and the ability to change one's password are important for making an application user-friendly,[15] so many schemes work to accommodate this characteristic.
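The client-certificate (mTLS) setup mentioned earlier can be sketched with Python's standard ssl module. This only shows the server-side configuration, not a full server, and the certificate file paths are placeholders:

```python
# Sketch of a server-side TLS context that *requires* client certificates
# (mutual TLS). File paths are placeholders for illustration.

import ssl

def make_mtls_server_context(certfile=None, keyfile=None, client_ca=None):
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    ctx.verify_mode = ssl.CERT_REQUIRED        # reject clients without a cert
    if certfile:
        ctx.load_cert_chain(certfile, keyfile)  # server's own identity
    if client_ca:
        ctx.load_verify_locations(client_ca)    # CA that signs client certs
    return ctx

ctx = make_mtls_server_context()  # in practice: pass real PEM file paths
print(ctx.verify_mode == ssl.CERT_REQUIRED)
```

Setting verify_mode to CERT_REQUIRED is what turns one-way TLS into mutual TLS: the handshake fails unless the client presents a certificate chaining to the trusted CA.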
Researchers note that a password-based protocol with mutual authentication is important because user identities and passwords are still protected, as the messages are only readable to the two parties involved.[16] A negative aspect of password-based authentication, however, is that password tables can take up a lot of memory space.[15] One way around using a lot of memory during a password-based authentication scheme is to implement one-time passwords (OTP), a password sent to the user via SMS or email. OTPs are time-sensitive, which means that they expire after a certain amount of time and do not need to be stored.[17] Recently, more schemes have higher-level authentication than password-based schemes. While password-based authentication is considered "single-factor authentication", schemes are beginning to implement smart card (two-factor)[15] or biometric-based (three-factor) authentication schemes. Smart cards are simpler to implement and easy to use for authentication, but still carry risks of being tampered with.[15] Biometrics have grown more popular than password-based schemes because it is more difficult to copy or guess session keys when using biometrics,[6] but it can be difficult to encrypt noisy data.[17] Due to these security risks and limitations, schemes can still employ mutual authentication regardless of how many authentication factors are added.[6] Mutual authentication is often found in schemes employed in the Internet of Things (IoT), where physical objects are incorporated into the Internet and can communicate via IP address.[10] Authentication schemes can be applied to many types of systems that involve data transmission.[14] As the Internet's presence in mechanical systems increases, writing effective security schemes for large numbers of users, objects, and servers can become challenging, especially when schemes need to be lightweight and have low computational costs.
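One-time passwords of the kind mentioned above are commonly generated with an HMAC-based construction. The sketch below follows the HOTP idea (in the style of RFC 4226), using only the standard library; a shared secret and a moving counter yield a short single-use code, so no password table needs to be stored:

```python
# HMAC-based one-time password (HOTP-style) sketch: a shared secret and a
# moving counter yield a short, single-use code.

import hmac, hashlib, struct

def hotp(secret, counter, digits=6):
    msg = struct.pack(">Q", counter)               # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                     # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 4226's published test secret gives well-known values:
print(hotp(b"12345678901234567890", 0))   # 755224
```

Time-based variants replace the counter with the current time interval, which is what makes the codes expire as the article describes.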
Instead of password-based authentication, devices will use certificates to verify each other's identities. Mutual authentication can be satisfied in radio network schemes, where data transmissions through radio frequencies are secure after verifying the sender and receiver.[11][18] Radio-frequency identification (RFID) tags are commonly used for object detection, and many manufacturers are implementing them into their warehouse systems for automation.[19] This allows for a faster way to keep up with inventory and track objects. However, keeping track of items in a system with RFID tags that transmit data to a cloud server increases the chances of security risks, as there are now more digital elements to keep track of.[19] A three-way mutual authentication can occur between the RFID tags, the tag readers, and the cloud network that stores this data, in order to keep RFID tag data secure and unable to be manipulated.[19] Similarly, an alternative RFID tag and reader system that assigns designated readers to tags has been proposed for extra security and low memory cost.[20] Instead of considering all tag readers as one entity, only certain readers can read specific tags. With this method, if a reader is breached, it will not affect the whole system. Individual readers will communicate with specific tags during mutual authentication, which runs in constant time as readers use the same private key for the authentication process. Many e-Healthcare systems that remotely monitor patient health data use wireless body area networks (WBAN) that transmit data through radio frequencies.[11] This is beneficial for patients who should not be disturbed while being monitored, and can reduce the workload for medical workers and allow them to focus on more hands-on jobs.
However, a large concern for healthcare providers and patients about remote health data tracking is that sensitive patient data is transmitted through unsecured channels,[8] so authentication occurs between the medical body area network user (the patient), the healthcare service provider (HSP) and a trusted third party. e-Healthcare clouds are another way to store patient data collected remotely.[2] Clouds are useful for storing large amounts of data, such as medical information, that can be accessed by many devices whenever needed. Telecare Medical Information Systems (TMIS), an important way for medical patients to receive healthcare remotely, can ensure secured data with mutual authentication verification schemes.[14] Blockchain is one way that has been proposed to mutually authenticate the user to the database, by authenticating with the main mediBchain node and keeping patient anonymity.[21] Fog-cloud computing is a networking system that can handle large amounts of data, but still has limitations regarding computational and memory cost.[22] Mobile edge computing (MEC) is considered to be an improved, more lightweight fog-cloud computing networking system,[22] and can be used for medical technology that also revolves around location-based data. Due to the large physical range required for location tracking, 5G networks can send data to the edge of the cloud to store data. An application like smart watches that track patient health data can be used to call the nearest hospital if the patient shows a negative change in vitals.[5] Fog node networks can be implemented in car automation, keeping data about the car and its surrounding states secure. By authenticating the fog nodes and the vehicle, vehicular handoff becomes a safe process and the car's system is safe from hackers.[9] Many systems that do not require a human user as part of the system also have protocols that mutually authenticate between parties.
In unmanned aerial vehicle (UAV) systems, a platform authentication occurs rather than user authentication.[2] Mutual authentication during vehicle communication prevents one vehicle's system from being breached, which could then affect the whole system negatively. For example, a system of drones can be employed for agricultural work and cargo delivery, but if one drone were breached, the whole system has the potential to collapse.[2]
https://en.wikipedia.org/wiki/Mutual_authentication
In cryptography, a password-authenticated key agreement (PAK) method is an interactive method for two or more parties to establish cryptographic keys based on one or more parties' knowledge of a password. An important property is that an eavesdropper or man-in-the-middle cannot obtain enough information to be able to brute-force guess a password without further interactions with the parties for each (few) guesses. This means that strong security can be obtained using weak passwords.[citation needed] Password-authenticated key agreement generally encompasses methods such as: In the most stringent password-only security models, there is no requirement for the user of the method to remember any secret or public data other than the password. Password-authenticated key exchange (PAKE) is a method in which two or more parties, based only on their knowledge of a shared password,[1] establish a cryptographic key using an exchange of messages, such that an unauthorized party (one who controls the communication channel but does not possess the password) cannot participate in the method and is constrained as much as possible from brute-force guessing the password. (The optimal case yields exactly one guess per protocol run.) Two forms of PAKE are balanced and augmented methods.[1] Balanced PAKE assumes the two parties, in either a client–client or client–server situation, use the same secret password to negotiate and authenticate a shared key.[1] Examples of these are: Augmented PAKE is a variation applicable to client/server scenarios, in which the server does not store password-equivalent data. This means that an attacker who stole the server data still cannot masquerade as the client unless they first perform a brute-force search for the password.
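The balanced idea can be illustrated with a toy EKE-style exchange: each side encrypts an ephemeral Diffie–Hellman value under a password-derived key, so an eavesdropper gets at most one password guess per observed run. The group parameters below are tiny and the "cipher" is a hash-XOR stream; this is purely illustrative and not a secure PAKE:

```python
# Toy EKE-style balanced PAKE: Diffie-Hellman values are encrypted under a
# password-derived key. Toy-sized group and hash-XOR "cipher"; NOT secure.

import hashlib, secrets

P = 0xFFFFFFFFFFFFFFC5   # 2**64 - 59, a prime modulus (illustrative only)
G = 5

def pw_key(password):
    return hashlib.sha256(b"pake-demo|" + password).digest()

def xor_encrypt(key, value_bytes):
    stream = hashlib.sha256(key + b"stream").digest()
    return bytes(a ^ b for a, b in zip(value_bytes, stream))

xor_decrypt = xor_encrypt    # XOR is its own inverse

def dh_pair():
    x = secrets.randbelow(P - 2) + 1
    return x, pow(G, x, P)

def run_pake(password_a, password_b):
    xa, ya = dh_pair()
    xb, yb = dh_pair()
    # Each side sends its DH value encrypted under its password-derived key.
    msg_a = xor_encrypt(pw_key(password_a), ya.to_bytes(8, "big"))
    msg_b = xor_encrypt(pw_key(password_b), yb.to_bytes(8, "big"))
    # Each side decrypts the peer's message with *its own* password.
    ya_seen = int.from_bytes(xor_decrypt(pw_key(password_b), msg_a), "big")
    yb_seen = int.from_bytes(xor_decrypt(pw_key(password_a), msg_b), "big")
    key_a = pow(yb_seen % P, xa, P)   # A's view of the shared key
    key_b = pow(ya_seen % P, xb, P)   # B's view of the shared key
    return key_a, key_b

ka, kb = run_pake(b"correct horse", b"correct horse")
print(ka == kb)          # True: same password -> same session key
ka, kb = run_pake(b"correct horse", b"wrong guess")
print(ka == kb)          # almost surely False: passwords differ
```

A wrong password decrypts the peer's message to garbage, so the derived keys disagree and the protocol fails without ever revealing the password itself; an eavesdropper testing a candidate password learns only whether that single guess was right.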
Some augmented PAKE systems use an oblivious pseudorandom function to mix the user's secret password with the server's secret salt value, so that the user never learns the server's secret salt value and the server never learns the user's password (or password-equivalent value) or the final key.[8] An example is OPAQUE. Password-authenticated key retrieval is a process in which a client obtains a static key in a password-based negotiation with a server that knows data associated with the password, such as in the Ford and Kaliski methods. In the most stringent setting, one party uses only a password in conjunction with N (two or more) servers to retrieve a static key. This is completed in a way that protects the password (and key) even if N − 1 of the servers are completely compromised. The first successful password-authenticated key agreement methods were the Encrypted Key Exchange (EKE) methods described by Steven M. Bellovin and Michael Merritt in 1992. Although several of the first methods were flawed, the surviving and enhanced forms of EKE effectively amplify a shared password into a shared key, which can then be used for encryption and/or message authentication. The first provably secure PAKE protocols were given in work by M. Bellare, D. Pointcheval, and P. Rogaway (Eurocrypt 2000) and by V. Boyko, P. MacKenzie, and S. Patel (Eurocrypt 2000). These protocols were proven secure in the so-called random oracle model (or even stronger variants); the first protocols proven secure under standard assumptions were those of O. Goldreich and Y. Lindell (Crypto 2001), which serves as a plausibility proof but is not efficient, and of J. Katz, R. Ostrovsky, and M. Yung (Eurocrypt 2001), which is practical. The first password-authenticated key retrieval methods were described by Ford and Kaliski in 2000. A considerable number of alternative, secure PAKE protocols were given in work by M. Bellare, D. Pointcheval, and P.
Rogaway, and variations and security proofs have been proposed in this growing class of password-authenticated key agreement methods. Current standards for these methods include IETF RFC 2945, RFC 5054, RFC 5931, RFC 5998, RFC 6124, RFC 6617, RFC 6628 and RFC 6631, IEEE Std 1363.2-2008, ITU-T X.1035, and ISO-IEC 11770-4:2006. At the request of the Internet Engineering Task Force (IETF), a PAKE selection process was carried out in 2018 and 2019 by the IRTF Crypto Forum Research Group (CFRG). The selection process was carried out in several rounds.[15]In the final round in 2019, four finalists prevailed: AuCPace and OPAQUE (augmented PAKE), and CPace and SPAKE2 (balanced PAKE). As a result of the CFRG selection process, two winning protocols were declared as "recommended by the CFRG for usage in IETF protocols": CPace and OPAQUE.[16]
https://en.wikipedia.org/wiki/Password-authenticated_key_agreement
In cryptography, a secure channel is a means of data transmission that is resistant to overhearing and tampering. A confidential channel is a means of data transmission that is resistant to overhearing, or eavesdropping (e.g., reading the content), but not necessarily resistant to tampering (i.e., manipulating the content). An authentic channel is a means of data transmission that is resistant to tampering but not necessarily resistant to overhearing. In contrast to a secure channel, an insecure channel is unencrypted and may be subject to eavesdropping and tampering. Secure communications are possible over an insecure channel if the content to be communicated is encrypted prior to transmission. There are no perfectly secure channels in the real world. There are, at best, only ways to make insecure channels (e.g., couriers, homing pigeons, diplomatic bags) less insecure: padlocks (between courier wrists and a briefcase), loyalty tests, security investigations, and guns for courier personnel, diplomatic immunity for diplomatic bags, and so forth. In 1976, two researchers proposed a key exchange technique, now named after them: Diffie–Hellman key exchange (D–H). This protocol allows two parties to generate a key known only to them, under the assumptions that a certain mathematical problem (e.g., the Diffie–Hellman problem in their proposal) is computationally infeasible (i.e., very hard) to solve, and that the two parties have access to an authentic channel. In short, an eavesdropper, conventionally termed 'Eve', who can listen to all messages exchanged by the two parties but cannot modify them, will not learn the exchanged key. Such a key exchange was impossible with any previously known cryptographic schemes based on symmetric ciphers, because with these schemes the two parties must exchange a secret key at some prior time, hence they require a confidential channel at that time, which is just what we are attempting to build.
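The Diffie–Hellman exchange can be sketched in a few lines. The 127-bit Mersenne prime below is a toy modulus chosen for readability; real deployments use standardized groups of 2048 bits or more:

```python
import secrets

P = 2**127 - 1   # toy prime modulus; illustrative only, NOT a standardized group
G = 3            # toy generator

def dh_keypair():
    # Private exponent stays local; only G^priv mod P travels over the channel.
    priv = secrets.randbelow(P - 2) + 1
    return priv, pow(G, priv, P)

a_priv, a_pub = dh_keypair()   # Alice
b_priv, b_pub = dh_keypair()   # Bob

# Each side combines its own private value with the peer's public value.
shared_a = pow(b_pub, a_priv, P)   # Alice computes (G^b)^a
shared_b = pow(a_pub, b_priv, P)   # Bob computes   (G^a)^b
assert shared_a == shared_b        # both hold G^(a*b) mod P
```

Eve sees only `a_pub` and `b_pub`; recovering the shared value from those alone is the (presumed hard) Diffie–Hellman problem. Note the sketch assumes the authentic channel the text describes: without it, a man-in-the-middle can substitute her own public values.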
Most cryptographic techniques are trivially breakable if keys are not exchanged securely or, if they actually were so exchanged, if those keys become known in some other way, such as burglary or extortion. An actually secure channel will not be required if an insecure channel can be used to securely exchange keys, and if burglary, bribery, or threats are not used. The eternal problem has been, and of course remains, even with modern key exchange protocols, how to know when an insecure channel worked securely (or, alternatively and perhaps more importantly, when it did not), and whether anyone has actually been bribed or threatened or simply lost a notebook (or a notebook computer) with key information in it. These are hard problems in the real world and no solutions are known, only expedients, jury rigs, and workarounds. Researchers[who?] have proposed and demonstrated quantum cryptography as a way to create a secure channel. It is not clear whether the special conditions under which it can be made to work are practical in the real world of noise, dirt, and imperfection in which almost everything is required to function. Thus far, actual implementation of the technique is exquisitely finicky and expensive, limiting it to very special-purpose applications. It may also be vulnerable to attacks specific to particular implementations and to imperfections in the optical components from which the quantum cryptographic equipment is built. While implementations of classical cryptographic algorithms have received worldwide scrutiny over the years, only a limited amount of public research has been done to assess the security of present-day implementations of quantum cryptosystems, mostly because they were not in widespread use as of 2014. Security definitions for a secure channel try to model its properties independently of its concrete instantiation.
A good understanding of these properties is needed before designing a secure channel, and before being able to assess its appropriateness for use in a cryptographic protocol. This is a topic of provable security. A definition of a secure channel that remains secure even when used in arbitrary cryptographic protocols is an important building block for universally composable cryptography.[citation needed] A universally composable authenticated channel can be built using digital signatures and a public key infrastructure.[1] Universally composable confidential channels are known to exist under computational hardness assumptions, based on hybrid encryption and a public key infrastructure.[2]
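The signature-based idea behind an authenticated channel can be sketched with textbook RSA. This toy uses conveniently chosen Mersenne primes, no padding, and no PKI, so it is for illustration only; real systems use vetted signature schemes and certificates:

```python
import hashlib

# Toy RSA key: two Mersenne primes chosen for convenience. NOT secure.
p, q = 2**89 - 1, 2**107 - 1
n, e = p * q, 65537
d = pow(e, -1, (p - 1) * (q - 1))    # private signing exponent

def digest(msg: bytes) -> int:
    return int.from_bytes(hashlib.sha256(msg).digest(), "big") % n

def sign(msg: bytes) -> int:
    # The sender authenticates the message with its private key.
    return pow(digest(msg), d, n)

def verify(msg: bytes, sig: int) -> bool:
    # Anyone holding the public key (n, e) can check integrity and origin.
    return pow(sig, e, n) == digest(msg)

m = b"meet at dawn"
s = sign(m)
assert verify(m, s)                    # authentic message accepted
assert not verify(b"meet at noon", s)  # tampered message rejected
```

A tampering adversary on the wire can still read the messages (the channel is authentic, not confidential), but any modification breaks signature verification. Binding the public key to an identity is the role of the public key infrastructure mentioned above.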
https://en.wikipedia.org/wiki/Secure_channel
In the context of information security, and especially network security, a spoofing attack is a situation in which a person or program successfully identifies as another by falsifying data, to gain an illegitimate advantage.[1] Many of the protocols in the TCP/IP suite do not provide mechanisms for authenticating the source or destination of a message,[2]leaving them vulnerable to spoofing attacks when extra precautions are not taken by applications to verify the identity of the sending or receiving host. IP spoofing and ARP spoofing in particular may be used to leverage man-in-the-middle attacks against hosts on a computer network. Spoofing attacks that take advantage of TCP/IP suite protocols may be mitigated with the use of firewalls capable of deep packet inspection or by taking measures to verify the identity of the sender or recipient of a message. The term 'domain name spoofing' (or simply, though less accurately, 'domain spoofing') is used generically to describe one or more of a class of phishing attacks that depend on falsifying or misrepresenting an internet domain name.[3][4]These are designed to persuade unsuspecting users into visiting a web site other than the one intended, or opening an email that is not in reality from the address shown (or apparently shown).[5]Although website and email spoofing attacks are more widely known, any service that relies on domain name resolution may be compromised. Some websites, especially pornographic paysites, allow access to their materials only from certain approved (login) pages. This is enforced by checking the referrer header of the HTTP request. This referrer header, however, can be changed (known as "referrer spoofing" or "Ref-tar spoofing"), allowing users to gain unauthorized access to the materials. "Spoofing" can also refer to copyright holders placing distorted or unlistenable versions of works on file-sharing networks. The sender information shown in e-mails (the From: field) can be spoofed easily.
This technique is commonly used by spammers to hide the origin of their e-mails and leads to problems such as misdirected bounces (i.e., e-mail spam backscatter). E-mail address spoofing is done in much the same way as writing a forged return address using snail mail. As long as the letter fits the protocol (i.e., stamp, postal code), the Simple Mail Transfer Protocol (SMTP) will send the message. It can be done using a mail server with telnet.[6] Geolocation spoofing occurs when a user applies technologies to make their device appear to be located somewhere other than where it is actually located.[7]The most common geolocation spoofing is through the use of a virtual private network (VPN) or DNS proxy so that the user appears to be located in a different country, state, or territory than where they are actually located. According to a study by GlobalWebIndex, 49% of global VPN users utilize VPNs primarily to access territorially restricted entertainment content.[8]This type of geolocation spoofing is also referred to as geo-piracy, since the user is illicitly accessing copyrighted materials via geolocation spoofing technology. Another example of geolocation spoofing occurred when an online poker player in California used geolocation spoofing techniques to play online poker in New Jersey, in contravention of both California and New Jersey state law.[9]Forensic geolocation evidence proved the geolocation spoofing, and the player forfeited more than $90,000 in winnings. Public telephone networks often provide caller ID information, which includes the caller's number and sometimes the caller's name, with each call. However, some technologies (especially in Voice over IP (VoIP) networks) allow callers to forge caller ID information and present false names and numbers. Gateways between networks that allow such spoofing and other public networks then forward that false information.
Since spoofed calls can originate from other countries, the laws in the receiver's country may not apply to the caller. This limits laws' effectiveness against the use of spoofed caller ID information to further a scam.[10][failed verification] A global navigation satellite system (GNSS) spoofing attack attempts to deceive a GNSS receiver by broadcasting fake GNSS signals, structured to resemble a set of normal GNSS signals, or by rebroadcasting genuine signals captured elsewhere or at a different time.[11]Spoofing attacks are generally hard to detect because adversaries generate counterfeit signals that are difficult to distinguish from legitimate ones, thus confusing ships' calculation of positioning, navigation, and timing (PNT).[12]This means that spoofed signals may be modified in such a way as to cause the receiver to estimate its position to be somewhere other than where it actually is, or to be located where it is but at a different time, as determined by the attacker. One common form of GNSS spoofing attack, commonly termed a carry-off attack, begins by broadcasting signals synchronized with the genuine signals observed by the target receiver. The power of the counterfeit signals is then gradually increased, and the receiver is drawn away from the genuine signals.[11] Even though GNSS is one of the most relied-upon navigational systems, it has demonstrated critical vulnerabilities to spoofing attacks.
GNSS satellite signals have been shown to be vulnerable because the signals are relatively weak at Earth's surface.[13]A reliance on GNSS could result in loss of life, environmental contamination, navigation accidents, and financial costs.[14][15][16]However, since 80% of global trade is moved by shipping companies, relying upon GNSS systems for navigation remains unavoidable.[17][18] All GNSS systems, such as the US GPS, Russia's GLONASS, China's BeiDou, and Europe's Galileo constellation, are vulnerable to this technique.[19]In order to mitigate some of the vulnerabilities that GNSS systems face concerning spoofing attacks, the use of more than one navigational system at once is recommended.[20] The December 2011 capture of a Lockheed RQ-170 Sentinel drone aircraft in northeastern Iran may have been the result of such an attack.[21]GNSS spoofing attacks had been predicted and discussed in the GNSS community as early as 2003.[22][23][24]A "proof-of-concept" attack was successfully performed in June 2013, when the luxury yacht White Rose of Drachs was misdirected with spoofed GPS signals by a group of aerospace engineering students from the Cockrell School of Engineering at the University of Texas at Austin. The students were aboard the yacht, allowing their spoofing equipment to gradually overpower the signal strengths of the actual GPS constellation satellites, altering the course of the yacht.[25][26][27] In 2019, the British oil tanker Stena Impero was the target of a spoofing attack that directed the ship into Iranian waters, where it was seized by Iranian forces. Consequently, the vessel, including its crew and cargo, was used as a pawn in a geopolitical conflict.
Several shipping companies with vessels navigating around Iranian waters are instructing vessels to transit dangerous areas at high speed and during daylight.[28] On October 15, 2023, the Israel Defense Forces (IDF) announced that GPS had been "restricted in active combat zones in accordance with various operational needs," but has not publicly commented on more advanced interference. In April 2024, however, researchers at the University of Texas at Austin detected false signals and traced their origin to a particular air base in Israel run by the IDF.[29] In June 2017, approximately twenty ships in the Black Sea complained of GPS anomalies showing vessels to be transpositioned miles from their actual location, in what Professor Todd Humphreys believed was most likely a spoofing attack.[27][30]GPS anomalies around Putin's Palace and the Moscow Kremlin, demonstrated in 2017 by a Norwegian journalist on air, have led researchers to believe that Russian authorities use GPS spoofing wherever Vladimir Putin is located.[27][31] The mobile systems named Borisoglebsk-2, Krasukha and Zhitel are reported to be able to spoof GPS.[32] Incidents involving Russian GPS spoofing include a November 2018 NATO exercise in Finland that led to a ship collision (unconfirmed by authorities)[33]and a 2019 incident of spoofing from Syria by the Russian military that affected the civil airport in Tel Aviv.[34][35] In December 2022, significant GPS interference in several Russian cities was reported by the GPSJam service; the interference was attributed to defensive measures taken by Russian authorities in the wake of the invasion of Ukraine.[19] Since the advent of software-defined radio (SDR), GPS simulator applications have been made available to the general public. This has made GPS spoofing much more accessible, meaning it can be performed at limited expense and with a modicum of technical knowledge.[36]Whether this technology applies to other GNSS systems remains to be demonstrated.
The Department of Homeland Security, in collaboration with the National Cybersecurity and Communications Integration Center (NCCIC) and the National Coordinating Center for Communications (NCC), released a paper that lists methods to prevent this type of spoofing.[37] These installation and operation strategies and development opportunities can significantly enhance the ability of GPS receivers and associated equipment to defend against a range of interference, jamming, and spoofing attacks. System- and receiver-agnostic detection software offers applicability as a cross-industry solution. Software implementation can be performed in different places within the system, depending on where the GNSS data is being used, for example as part of the device's firmware, operating system, or on the application level.[citation needed] A method proposed by researchers from the Department of Electrical and Computer Engineering at the University of Maryland, College Park and the School of Optical and Electronic Information at Huazhong University of Science and Technology aims to help mitigate the effects of GNSS spoofing attacks by using data from a vehicle's controller area network (CAN) bus. This information would be compared against the received GNSS data in order to detect the occurrence of a spoofing attack and to reconstruct the driving path of the vehicle. Properties such as the vehicle's speed and steering angle would be amalgamated and regression-modeled in order to achieve a minimum position error of 6.25 meters.[39]Similarly, a method outlined by researchers in a 2016 IEEE Intelligent Vehicles Symposium conference paper discusses the idea of using cooperative adaptive cruise control (CACC) and vehicle-to-vehicle (V2V) communications in order to achieve a similar goal.
In this method, the communication abilities of both cars and radar measurements are used: the distance between the two cars implied by their supplied GNSS positions is compared against the radar measurement and checked to make sure they match. If the two lengths match within a threshold value, then no spoofing has occurred; above this threshold, the user is notified so that they can take action.[40] Information technology plays an increasingly large role in today's world, and different authentication methods are used for restricting access to informational resources, including voice biometrics. Examples of using speaker recognition systems include internet banking systems, customer identification during a call to a call center, and passive identification of a possible criminal using a preset "blacklist".[41] Technologies related to the synthesis and modeling of speech are developing very quickly, allowing one to create voice recordings almost indistinguishable from real ones. Such services are called text-to-speech (TTS) or style transfer services. The first aims at creating a new voice; the second aims at identifying as another person in voice identification systems. A large number of scientists are busy developing algorithms that would be able to distinguish the synthesized voice of a machine from a real one. On the other hand, these algorithms need to be thoroughly tested to make sure that the system really works.[42]However, an early study has shown that feature design and masking augmentation have a significant impact on the ability to detect spoofed voice.[43] Facial recognition technology is widely employed in various areas, including immigration checks and phone security, as well as on popular platforms like Airbnb and Uber to verify individuals' identities. However, the increased usage has rendered the system more susceptible to attacks, given the widespread integration of facial recognition systems in society.
Some online sources and tutorials detail methods for tricking facial recognition systems through practices known as face spoofing or presentation attacks, which can pose risks in terms of unauthorized access. To mitigate these dangers, measures such as liveness checks (verifying blinking), deep learning, and specialized cameras like 3D cameras have been introduced to prevent facial recognition spoofing. It is important to implement comprehensive security procedures like these to protect against face spoofing attempts and uphold the overall security and integrity of systems relying on facial recognition authentication.[44]
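The position-consistency idea from the GNSS-mitigation discussion above, comparing the inter-vehicle distance implied by reported GNSS positions against an independent radar measurement, can be sketched as follows. The function names are illustrative, and the 6.25 m default threshold simply echoes the position error reported in the cited CAN-bus study:

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two (lat, lon) points in degrees."""
    R = 6371000.0  # mean Earth radius in meters
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2
    return 2 * R * math.asin(math.sqrt(a))

def spoofing_suspected(gnss_a, gnss_b, radar_distance_m, threshold_m=6.25):
    # Flag spoofing when the GNSS-implied distance disagrees with the
    # independently measured radar distance by more than the threshold.
    gnss_distance = haversine_m(*gnss_a, *gnss_b)
    return abs(gnss_distance - radar_distance_m) > threshold_m

# Two cars ~111 m apart: consistent radar reading passes, inconsistent one is flagged.
assert not spoofing_suspected((0.0, 0.0), (0.0, 0.001), radar_distance_m=111.2)
assert spoofing_suspected((0.0, 0.0), (0.0, 0.001), radar_distance_m=200.0)
```

The radar measurement serves as a trust anchor precisely because it is derived locally and cannot be forged by a remote GNSS signal broadcaster.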
https://en.wikipedia.org/wiki/Spoofing_attack
The Terrapin attack is a cryptographic attack on the commonly used SSH protocol, which is used for secure command-and-control throughout the Internet. The Terrapin attack can reduce the security of SSH by using a downgrade attack via man-in-the-middle interception.[1][2][3]The attack works by prefix truncation: the injection and deletion of messages during feature negotiation, manipulating sequence numbers in a way that causes other messages to be ignored without an error being detected by either client or server.[4] According to the attack's discoverers, the majority of SSH implementations were vulnerable at the time of the attack's discovery (2023).[4]As of January 3, 2024, an estimated 11 million publicly accessible SSH servers were still vulnerable.[5]However, the risk is mitigated by the requirement to intercept a genuine SSH session, and by the fact that the attack can only delete messages at the start of a negotiation, fortuitously resulting mostly in failed connections.[4][6]Additionally, the attack requires the use of either ChaCha20-Poly1305 or a CBC cipher in combination with Encrypt-then-MAC modes of encryption.[7]The SSH developers have stated that the major impact of the attack is the capability to degrade the keystroke timing obfuscation features of SSH.[6] The designers of SSH have implemented a fix for the Terrapin attack, but the fix is only fully effective when both client and server implementations have been upgraded to support it.[1]The researchers who discovered the attack have also created a vulnerability scanner to determine whether an SSH server or client is vulnerable.[8] The attack has been given the CVE ID CVE-2023-48795.[9][3]In addition to the main attack, two other vulnerabilities were found in AsyncSSH and assigned the CVE IDs CVE-2023-46445 and CVE-2023-46446.[3]
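A heavily simplified model can show why the sequence-number manipulation works. This is not the SSH binary packet protocol, just a toy illustrating the underlying counter problem: if the receiver's implicit counter also advances for ignored, unauthenticated packets, an attacker can inject one such packet and later delete a protected packet without desynchronizing the counters:

```python
import hmac, hashlib

KEY = b"toy-session-key"   # stand-in for keys negotiated during the handshake

def tag(seq: int, payload: bytes) -> bytes:
    # Integrity tag binds each packet to the sender's implicit counter value.
    return hmac.new(KEY, seq.to_bytes(4, "big") + payload, hashlib.sha256).digest()

class Receiver:
    def __init__(self):
        self.seq = 0          # implicit counter: counts *every* packet received
        self.accepted = []

    def plaintext_packet(self, payload: bytes):
        self.seq += 1         # pre-encryption packets are unauthenticated but counted

    def protected_packet(self, payload: bytes, mac: bytes) -> bool:
        ok = hmac.compare_digest(tag(self.seq, payload), mac)
        self.seq += 1
        if ok:
            self.accepted.append(payload)
        return ok

server, send_seq = Receiver(), 0

# Attacker injects an ignored packet while the stream is still unauthenticated...
server.plaintext_packet(b"SSH_MSG_IGNORE")     # server's counter silently inflated

# ...then drops the client's first protected packet (sent with seq 0) on the wire.
dropped_payload = b"EXT_INFO"                  # e.g., an extension-negotiation message
dropped_mac = tag(send_seq, dropped_payload)   # computed by the client...
send_seq += 1                                  # ...but the packet never arrives

# The next protected packet still verifies: the counters are back in sync.
payload = b"next-message"
assert server.protected_packet(payload, tag(send_seq, payload))
assert dropped_payload not in server.accepted  # the deletion went unnoticed
```

In the real attack the deleted prefix is the extension-negotiation traffic, which is how Terrapin downgrades security features; the cipher-mode requirements quoted above determine whether the real packet stream tolerates this splice.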
https://en.wikipedia.org/wiki/Terrapin_attack
The Secure Shell Protocol (SSH) is a cryptographic network protocol for operating network services securely over an unsecured network.[1]Its most notable applications are remote login and command-line execution. SSH was designed for Unix-like operating systems as a replacement for Telnet and for unsecured remote Unix shell protocols such as the Berkeley Remote Shell (rsh) and the related rlogin and rexec protocols, which all use insecure, plaintext methods of authentication, such as passwords. Since mechanisms like Telnet and Remote Shell are designed to access and operate remote computers, sending the authentication tokens (e.g., username and password) for this access across a public network in an unsecured way poses a great risk of third parties obtaining the password and achieving the same level of access to the remote system as the telnet user. Secure Shell mitigates this risk through the use of encryption mechanisms that are intended to hide the contents of the transmission from an observer, even if the observer has access to the entire data stream.[2] Finnish computer scientist Tatu Ylönen designed SSH in 1995 and provided an implementation in the form of two commands, ssh and slogin, as secure replacements for rsh and rlogin, respectively. Subsequent development of the protocol suite proceeded in several developer groups, producing several implementation variants. The protocol specification distinguishes two major versions, referred to as SSH-1 and SSH-2. The most commonly implemented software stack is OpenSSH, released in 1999 as open-source software by the OpenBSD developers. Implementations are distributed for all types of operating systems in common use, including embedded systems.
SSH applications are based on a client–server architecture, connecting an SSH client instance with an SSH server.[3]SSH operates as a layered protocol suite comprising three principal hierarchical components: the transport layer provides server authentication, confidentiality, and integrity; the user authentication protocol validates the user to the server; and the connection protocol multiplexes the encrypted tunnel into multiple logical communication channels.[1] SSH uses public-key cryptography to authenticate the remote computer and allow it to authenticate the user, if necessary.[3] SSH may be used in several methodologies. In the simplest manner, both ends of a communication channel use automatically generated public–private key pairs to encrypt a network connection, and then use a password to authenticate the user. When the public–private key pair is generated by the user manually, the authentication is essentially performed when the key pair is created, and a session may then be opened automatically without a password prompt. In this scenario, the public key is placed on all computers that must allow access to the owner of the matching private key, which the owner keeps private. While authentication is based on the private key, the key is never transferred through the network during authentication. SSH only verifies that the same person offering the public key also owns the matching private key. In all versions of SSH it is important to verify unknown public keys, i.e., associate the public keys with identities, before accepting them as valid. Accepting an attacker's public key without validation will authorize the attacker as a valid user. On Unix-like systems, the list of authorized public keys is typically stored in the home directory of the user that is allowed to log in remotely, in the file ~/.ssh/authorized_keys.[4]This file is respected by SSH only if it is not writable by anything apart from the owner and root.
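The writability requirement can be illustrated with a simplified, POSIX-only check in the spirit of sshd's StrictModes behavior. sshd's real check also inspects ownership and the parent directories; this sketch looks only at the file's group/other write bits:

```python
import os
import stat
import tempfile

def safe_authorized_keys(path: str) -> bool:
    """True if the file is not writable by group or other (owner-only writes).

    Simplified sketch: real sshd also verifies ownership and the permissions
    of ~/ and ~/.ssh before trusting authorized_keys.
    """
    mode = os.stat(path).st_mode
    return not (mode & (stat.S_IWGRP | stat.S_IWOTH))

# Demonstrate on a throwaway file standing in for ~/.ssh/authorized_keys.
with tempfile.NamedTemporaryFile(delete=False) as f:
    path = f.name

os.chmod(path, 0o600)                # owner read/write only: acceptable
assert safe_authorized_keys(path)

os.chmod(path, 0o666)                # group/world-writable: sshd would refuse it
assert not safe_authorized_keys(path)

os.unlink(path)
```

The rationale is that a group- or world-writable authorized_keys file would let any local user append their own public key and thereby log in as the file's owner.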
When the public key is present on the remote end and the matching private key is present on the local end, typing in the password is no longer required. However, for additional security the private key itself can be locked with a passphrase. The private key can also be looked for in standard places, and its full path can be specified as a command-line setting (the -i option for ssh). The ssh-keygen utility produces the public and private keys, always in pairs. SSH is typically used to log into a remote computer's shell or command-line interface (CLI) and to execute commands on a remote server. It also supports mechanisms for tunneling, forwarding of TCP ports and X11 connections, and it can be used to transfer files using the associated SSH File Transfer Protocol (SFTP) or Secure Copy Protocol (SCP).[3] SSH uses the client–server model. An SSH client program is typically used for establishing connections to an SSH daemon, such as sshd, accepting remote connections. Both are commonly present on most modern operating systems, including macOS, most distributions of Linux, OpenBSD, FreeBSD, NetBSD, Solaris and OpenVMS. Notably, versions of Windows prior to Windows 10 version 1709 do not include SSH by default, but proprietary, freeware and open-source versions of various levels of complexity and completeness did and do exist (see Comparison of SSH clients). In 2018 Microsoft began porting the OpenSSH source code to Windows,[5]and in Windows 10 version 1709, an official Win32 port of OpenSSH is now available. File managers for UNIX-like systems (e.g., Konqueror) can use the FISH protocol to provide a split-pane GUI with drag-and-drop. The open-source Windows program WinSCP[6]provides similar file management (synchronization, copy, remote delete) capability using PuTTY as a back-end. Both WinSCP[7]and PuTTY[8]are available packaged to run directly off a USB drive, without requiring installation on the client machine. Crostini on ChromeOS comes with OpenSSH by default.
Setting up an SSH server in Windows typically involves enabling a feature in the Settings app. SSH is important in cloud computing to solve connectivity problems, avoiding the security issues of exposing a cloud-based virtual machine directly on the Internet. An SSH tunnel can provide a secure path over the Internet, through a firewall, to a virtual machine.[9] The IANA has assigned TCP port 22, UDP port 22 and SCTP port 22 for this protocol.[10]IANA had listed the standard TCP port 22 for SSH servers as one of the well-known ports as early as 2001.[11]SSH can also be run using SCTP rather than TCP as the connection-oriented transport layer protocol.[12] In 1995, Tatu Ylönen, a researcher at Helsinki University of Technology in Finland, designed the first version of the protocol (now called SSH-1), prompted by a password-sniffing attack on his university network.[13]The goal of SSH was to replace the earlier rlogin, TELNET, FTP[14]and rsh protocols, which did not provide strong authentication nor guarantee confidentiality. He chose the port number 22 because it is between telnet (port 23) and ftp (port 21).[15] Ylönen released his implementation as freeware in July 1995, and the tool quickly gained in popularity. Towards the end of 1995, the SSH user base had grown to 20,000 users in fifty countries.[16] In December 1995, Ylönen founded SSH Communications Security to market and develop SSH. The original version of the SSH software used various pieces of free software, such as GNU libgmp, but later versions released by SSH Communications Security evolved into increasingly proprietary software. It was estimated that by 2000 the number of users had grown to 2 million.[17] In 2006, after being discussed in a working group named "secsh",[18]a revised version of the SSH protocol, SSH-2, was adopted as a standard.[19]This version offers improved security and new features, but is not compatible with SSH-1.
For example, it introduces new key-exchange mechanisms like Diffie–Hellman key exchange, and improved data integrity checking via message authentication codes like MD5 or SHA-1, which can be negotiated between client and server. SSH-2 also adds stronger encryption methods like AES, which eventually replaced weaker and compromised ciphers from the previous standard like 3DES.[20][21][19]New features of SSH-2 include the ability to run any number of shell sessions over a single SSH connection.[22]Due to SSH-2's superiority and popularity over SSH-1, some implementations such as libssh (v0.8.0+),[23]Lsh[24]and Dropbear[25]eventually supported only the SSH-2 protocol. In January 2006, well after version 2.1 was established, RFC 4253 specified that an SSH server supporting 2.0 as well as prior versions should identify its protocol version as 1.99.[26]This version number does not reflect a historical software revision, but a method to identify backward compatibility. In 1999, developers desiring a free software version restarted software development from the 1.2.12 release of the original SSH program, which was the last released under an open-source license.[27]This served as a code base for Björn Grönvall's OSSH software.[28]Shortly thereafter, OpenBSD developers forked Grönvall's code and created OpenSSH, which shipped with release 2.6 of OpenBSD. From this version, a "portability" branch was formed to port OpenSSH to other operating systems.[29] As of 2005, OpenSSH was the single most popular SSH implementation, being the default version in a large number of operating system distributions. OSSH meanwhile has become obsolete.[30]OpenSSH continues to be maintained and supports the SSH-2 protocol, having expunged SSH-1 support from the codebase in the OpenSSH 7.6 release.
In 2023, an alternative to traditional SSH was proposed under the name SSH3[31][32][33] by PhD student François Michel and Professor Olivier Bonaventure, and its code has been made open source.[34] This new version implements the original SSH Connection Protocol but operates on top of HTTP/3, which runs on QUIC, and offers several new features. However, the name SSH3 is under discussion, and the project aims to rename itself to a more suitable name.[35] The discussion stems from the fact that this new implementation significantly revises the SSH protocol, suggesting it should not be called SSH3. SSH is a protocol that can be used for many applications across many platforms including most Unix variants (Linux, the BSDs including Apple's macOS, and Solaris), as well as Microsoft Windows. Some applications may require features that are only available or compatible with specific SSH clients or servers. For example, using the SSH protocol to implement a VPN is possible, but presently only with the OpenSSH server and client implementation. The Secure Shell protocols are used in several file transfer mechanisms. The SSH protocol has a layered architecture with three separate components: the transport layer, the user-authentication layer, and the connection layer. This open architecture provides considerable flexibility, allowing the use of SSH for a variety of purposes beyond a secure shell. The functionality of the transport layer alone is comparable to Transport Layer Security (TLS); the user-authentication layer is highly extensible with custom authentication methods; and the connection layer provides the ability to multiplex many secondary sessions into a single SSH connection, a feature comparable to BEEP and not available in TLS.
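The connection layer's multiplexing of many logical sessions over one connection can be sketched with a toy framing scheme. The frame format below (channel id, length, payload) is hypothetical and much simpler than the actual SSH wire encoding defined in RFC 4254; it only illustrates the idea of interleaving channels on a single byte stream:

```python
import struct

# Toy multiplexing: several logical channels share one byte stream.
# Each frame is (channel id, payload length, payload) -- a hypothetical
# format for illustration, not the real SSH wire encoding.

def mux(frames):
    """Serialize (channel_id, payload) pairs into one shared stream."""
    out = b""
    for cid, payload in frames:
        out += struct.pack(">II", cid, len(payload)) + payload
    return out

def demux(stream):
    """Recover per-channel payloads from the shared stream."""
    channels = {}
    i = 0
    while i < len(stream):
        cid, n = struct.unpack_from(">II", stream, i)
        i += 8
        channels[cid] = channels.get(cid, b"") + stream[i:i + n]
        i += n
    return channels

# An interactive shell and a file transfer interleaved on one stream.
stream = mux([(0, b"shell: ls\n"), (1, b"scp: chunk1"), (0, b"shell: exit\n")])
print(demux(stream))
# {0: b'shell: ls\nshell: exit\n', 1: b'scp: chunk1'}
```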
In 1998, a vulnerability was described in SSH 1.5 which allowed the unauthorized insertion of content into an encrypted SSH stream due to insufficient data integrity protection from CRC-32 used in this version of the protocol.[42][43] A fix known as SSH Compensation Attack Detector[44] was introduced into most implementations. Many of these updated implementations contained a new integer overflow vulnerability[45] that allowed attackers to execute arbitrary code with the privileges of the SSH daemon, typically root. In January 2001, a vulnerability was discovered that allows attackers to modify the last block of an IDEA-encrypted session.[46] The same month, another vulnerability was discovered that allowed a malicious server to forward a client authentication to another server.[47] Since SSH-1 has inherent design flaws which make it vulnerable, it is now generally considered obsolete and should be avoided by explicitly disabling fallback to SSH-1.[47] Most modern servers and clients support SSH-2.[48] In November 2008, a theoretical vulnerability was discovered for all versions of SSH which allowed recovery of up to 32 bits of plaintext from a block of ciphertext that was encrypted using what was then the standard default encryption mode, CBC.[49] The most straightforward solution is to use CTR, counter mode, instead of CBC mode, since this renders SSH resistant to the attack.[49] On December 28, 2014, Der Spiegel published classified information[50] leaked by whistleblower Edward Snowden which suggests that the National Security Agency may be able to decrypt some SSH traffic. The technical details associated with such a process were not disclosed. A 2017 analysis of the CIA hacking tools BothanSpy and Gyrfalcon suggested that the SSH protocol was not compromised.[51] A novel man-in-the-middle attack against most current SSH implementations was discovered in 2023.
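The weakness of CRC-32 as an integrity check stems from its linearity: it is affine over XOR, so an attacker who can flip ciphertext bits can predict how the checksum changes. A short demonstration of the affine property using Python's standard zlib:

```python
import zlib

# CRC-32 is affine over XOR: for equal-length messages a, b, c,
#   crc(a) ^ crc(b) ^ crc(c) == crc(a ^ b ^ c).
# Because checksum changes are predictable under bit-flips,
# CRC-32 provides no cryptographic integrity protection.

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

a = b"pay alice $100"
b = b"pay mallory $1"   # same length as a
c = bytes(len(a))       # all-zero message of the same length

lhs = zlib.crc32(a) ^ zlib.crc32(b) ^ zlib.crc32(c)
rhs = zlib.crc32(xor(xor(a, b), c))
assert lhs == rhs       # the affine relation holds
print(hex(lhs))
```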
It was named the Terrapin attack by its discoverers.[52][53] However, the risk is mitigated by the requirement to intercept a genuine SSH session, and the attack is restricted in its scope, fortuitously resulting mostly in failed connections.[54][55] The SSH developers have stated that the major impact of the attack is to degrade the keystroke timing obfuscation features of SSH.[55] The vulnerability was fixed in OpenSSH 9.6, but the fix requires both client and server to be upgraded to be fully effective. SSH-2 is documented as a proposed Internet standard in a series of RFC publications by the IETF "secsh" working group; the protocol specifications were later updated by subsequent publications, and the OpenSSH project additionally includes several vendor protocol specifications/extensions.
https://en.wikipedia.org/wiki/Ssh
Modulo is the remainder operation, which carries out a division and yields the remainder as its result. Modulo may also refer to several other topics.
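A quick illustration of the remainder operation; note that languages differ in how they handle negative operands (Python's % takes the sign of the divisor, while C and Java follow the dividend):

```python
# Remainder operation: 17 = 5 * 3 + 2, so 17 mod 5 is 2.
print(17 % 5)    # 2

# Languages disagree on negative operands:
# Python's % takes the sign of the divisor...
print(-7 % 3)    # 2 in Python
# ...whereas C and Java would give -1 for -7 % 3.
```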
https://en.wikipedia.org/wiki/Modulo_(disambiguation)
Johann Carl Friedrich Gauss (/ɡaʊs/;[2] German: Gauß [kaʁl ˈfʁiːdʁɪç ˈɡaʊs];[3][4] Latin: Carolus Fridericus Gauss; 30 April 1777 – 23 February 1855) was a German mathematician, astronomer, geodesist, and physicist, who contributed to many fields in mathematics and science. He was director of the Göttingen Observatory and professor of astronomy from 1807 until his death in 1855. While studying at the University of Göttingen, he propounded several mathematical theorems. As an independent scholar, he wrote the masterpieces Disquisitiones Arithmeticae and Theoria motus corporum coelestium. Gauss produced the second and third complete proofs of the fundamental theorem of algebra. In number theory, he made numerous contributions, such as the composition law, the law of quadratic reciprocity and the Fermat polygonal number theorem. He also contributed to the theory of binary and ternary quadratic forms, the construction of the heptadecagon, and the theory of hypergeometric series. Due to Gauss's extensive and fundamental contributions to science and mathematics, more than 100 mathematical and scientific concepts are named after him. Gauss was instrumental in the identification of Ceres as a dwarf planet. His work on the motion of planetoids disturbed by large planets led to the introduction of the Gaussian gravitational constant and the method of least squares, which he had discovered before Adrien-Marie Legendre published it. Gauss led the geodetic survey of the Kingdom of Hanover together with an arc measurement project from 1820 to 1844; he was one of the founders of geophysics and formulated the fundamental principles of magnetism. His practical work led to the invention of the heliotrope in 1821, a magnetometer in 1833 and – with Wilhelm Eduard Weber – the first electromagnetic telegraph in 1833. Gauss was the first to discover and study non-Euclidean geometry, which he also named. He developed a fast Fourier transform some 160 years before John Tukey and James Cooley.
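The fast Fourier transform attributed to Gauss above is, in its modern form, a divide-and-conquer evaluation of the discrete Fourier transform. A minimal radix-2 sketch in Python (a teaching version, not an optimized implementation):

```python
import cmath

def fft(x):
    """Recursive radix-2 FFT; len(x) must be a power of 2."""
    n = len(x)
    if n == 1:
        return list(x)
    even = fft(x[0::2])   # transform of even-indexed samples
    odd = fft(x[1::2])    # transform of odd-indexed samples
    out = [0] * n
    for k in range(n // 2):
        # Combine halves with the "twiddle factor" e^(-2*pi*i*k/n).
        t = cmath.exp(-2j * cmath.pi * k / n) * odd[k]
        out[k] = even[k] + t
        out[k + n // 2] = even[k] - t
    return out

# The DFT of a constant signal concentrates in the zero-frequency bin.
result = fft([1, 1, 1, 1])
print([round(abs(v), 6) for v in result])  # [4.0, 0.0, 0.0, 0.0]
```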
Gauss refused to publish incomplete work and left several works to be edited posthumously. He believed that the act of learning, not possession of knowledge, provided the greatest enjoyment. Gauss was not a committed or enthusiastic teacher, generally preferring to focus on his own work. Nevertheless, some of his students, such as Dedekind and Riemann, became well-known and influential mathematicians in their own right. Gauss was born on 30 April 1777 in Brunswick in the Duchy of Brunswick-Wolfenbüttel (now in the German state of Lower Saxony). His family was of relatively low social status.[5] His father Gebhard Dietrich Gauss (1744–1808) worked variously as a butcher, bricklayer, gardener, and treasurer of a death-benefit fund. Gauss characterized his father as honourable and respected, but rough and dominating at home. He was experienced in writing and calculating, whereas his second wife Dorothea, Carl Friedrich's mother, was nearly illiterate.[6] He had one elder brother from his father's first marriage.[7] Gauss was a child prodigy in mathematics.
When the elementary teachers noticed his intellectual abilities, they brought him to the attention of the Duke of Brunswick, who sent him to the local Collegium Carolinum,[a] which he attended from 1792 to 1795 with Eberhard August Wilhelm von Zimmermann as one of his teachers.[9][10][11] Thereafter the Duke granted him the resources for studies of mathematics, sciences, and classical languages at the University of Göttingen until 1798.[12] His professor in mathematics was Abraham Gotthelf Kästner, whom Gauss called "the leading mathematician among poets, and the leading poet among mathematicians" because of his epigrams.[13][b] Astronomy was taught by Karl Felix Seyffer, with whom Gauss stayed in correspondence after graduation;[14] Olbers and Gauss mocked him in their correspondence.[15] On the other hand, he thought highly of Georg Christoph Lichtenberg, his teacher of physics, and of Christian Gottlob Heyne, whose lectures in classics Gauss attended with pleasure.[14] Fellow students of this time were Johann Friedrich Benzenberg, Farkas Bolyai, and Heinrich Wilhelm Brandes.[14] He was likely a self-taught student in mathematics since he independently rediscovered several theorems.[11] He solved a geometrical problem that had occupied mathematicians since the Ancient Greeks when he determined in 1796 which regular polygons can be constructed by compass and straightedge. This discovery ultimately led Gauss to choose mathematics instead of philology as a career.[16] Gauss's mathematical diary, a collection of short remarks about his results from the years 1796 until 1814, shows that many ideas for his mathematical magnum opus Disquisitiones Arithmeticae (1801) date from this time.[17] As an elementary student, Gauss and his class were tasked by their teacher, J.G. Büttner, to sum the numbers from 1 to 100.
Much to Büttner's surprise, Gauss replied with the correct answer of 5050 in a vastly faster time than expected.[18] Gauss had realised that the sum could be rearranged as 50 pairs of 101 (1+100=101, 2+99=101, etc.). Thus, he simply multiplied 50 by 101.[19] Other accounts state that he computed the sum as 100 sets of 101 and divided by 2.[20] Gauss graduated as a Doctor of Philosophy in 1799, not in Göttingen, as is sometimes stated,[c][21] but at the Duke of Brunswick's special request from the University of Helmstedt, the only state university of the duchy. Johann Friedrich Pfaff assessed his doctoral thesis, and Gauss got the degree in absentia without further oral examination.[11] The Duke then granted him the cost of living as a private scholar in Brunswick. Gauss subsequently refused calls from the Russian Academy of Sciences in St. Petersburg and Landshut University.[22][23] Later, the Duke promised him the foundation of an observatory in Brunswick in 1804. Architect Peter Joseph Krahe made preliminary designs, but one of Napoleon's wars cancelled those plans:[24] the Duke was killed in the battle of Jena in 1806. The duchy was abolished in the following year, and Gauss's financial support stopped. When Gauss was calculating asteroid orbits in the first years of the century, he established contact with the astronomical communities of Bremen and Lilienthal, especially Wilhelm Olbers, Karl Ludwig Harding, and Friedrich Wilhelm Bessel, forming part of the informal group of astronomers known as the Celestial police.[25] One of their aims was the discovery of further planets.
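The schoolroom pairing trick described above generalizes to the closed form n(n+1)/2 for the sum 1 + 2 + ... + n, which a few lines of Python can confirm:

```python
def gauss_sum(n):
    """Sum 1..n via Gauss's pairing: n/2 pairs, each summing to n + 1."""
    return n * (n + 1) // 2

# Büttner's exercise: 50 pairs of 101 (1+100, 2+99, ...) give 50 * 101.
print(gauss_sum(100))                        # 5050
assert gauss_sum(100) == sum(range(1, 101))  # matches the direct sum
```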
They assembled data on asteroids and comets as a basis for Gauss's research on their orbits, which he later published in his astronomical magnum opus Theoria motus corporum coelestium (1809).[26] In November 1807, Gauss was hired by the University of Göttingen, then an institution of the newly founded Kingdom of Westphalia under Jérôme Bonaparte, as full professor and director of the astronomical observatory,[27] and kept the chair until his death in 1855. He was soon confronted with the demand for two thousand francs from the Westphalian government as a war contribution, which he could not afford to pay. Both Olbers and Laplace wanted to help him with the payment, but Gauss refused their assistance. Finally, an anonymous person from Frankfurt, later discovered to be Prince-primate Dalberg,[28] paid the sum.[27] Gauss took on the directorship of the 60-year-old observatory, founded in 1748 by Prince-elector George II and built on a converted fortification tower,[29] with usable, but partly out-of-date instruments.[30] The construction of a new observatory had been approved by Prince-elector George III in principle since 1802, and the Westphalian government continued the planning,[31] but Gauss could not move to his new place of work until September 1816.[23] He got new up-to-date instruments, including two meridian circles from Repsold[32] and Reichenbach,[33] and a heliometer from Fraunhofer.[34] The scientific activity of Gauss, besides pure mathematics, can be roughly divided into three periods: astronomy was the main focus in the first two decades of the 19th century, geodesy in the third decade, and physics, mainly magnetism, in the fourth decade.[35] Gauss made no secret of his aversion to giving academic lectures.[22][23] But from the start of his academic career at Göttingen, he continuously gave lectures until 1854.[36] He often complained about the burdens of teaching, feeling that it was a waste of his time.
On the other hand, he occasionally described some students as talented.[22] Most of his lectures dealt with astronomy, geodesy, and applied mathematics,[37] and only three lectures dealt with subjects of pure mathematics.[22][d] Some of Gauss's students went on to become renowned mathematicians, physicists, and astronomers: Moritz Cantor, Dedekind, Dirksen, Encke, Gould,[e] Heine, Klinkerfues, Kupffer, Listing, Möbius, Nicolai, Riemann, Ritter, Schering, Scherk, Schumacher, von Staudt, Stern, Ursin; and, as geoscientists, Sartorius von Waltershausen and Wappäus.[22] Gauss did not write any textbook and disliked the popularization of scientific matters. His only attempts at popularization were his works on the date of Easter (1800/1802) and the essay Erdmagnetismus und Magnetometer of 1836.[39] Gauss published his papers and books exclusively in Latin or German.[f][g] He wrote Latin in a classical style but used some customary modifications set by contemporary mathematicians.[42] Gauss gave his inaugural lecture at Göttingen University in 1808. He described his approach to astronomy as based on reliable observations and accurate calculations, rather than on belief or empty hypothesizing.[37] At university, he was accompanied by a staff of other lecturers in his disciplines, who completed the educational program; these included the mathematician Thibaut with his lectures,[44] the physicist Mayer, known for his textbooks,[45] his successor Weber since 1831, and in the observatory Harding, who took the main part of lectures in practical astronomy.
When the observatory was completed, Gauss occupied the western wing of the new observatory, while Harding took the eastern.[23] They had once been on friendly terms, but over time they became alienated, possibly – as some biographers presume – because Gauss had wished the equal-ranked Harding to be no more than his assistant or observer.[23][h] Gauss used the new meridian circles nearly exclusively, and kept them away from Harding, except for some rare joint observations.[47] Brendel subdivides Gauss's astronomic activity chronologically into seven periods, of which the years since 1820 are taken as a "period of lower astronomical activity".[48] The new, well-equipped observatory did not work as effectively as other ones; Gauss's astronomical research had the character of a one-man enterprise without a long-time observation program, and the university established a place for an assistant only after Harding died in 1834.[46][47][i] Nevertheless, Gauss twice refused the opportunity to solve the problem, turning down offers from Berlin in 1810 and 1825 to become a full member of the Prussian Academy without burdening lecturing duties, as well as from Leipzig University in 1810 and from Vienna University in 1842, perhaps because of the family's difficult situation.[46] Gauss's salary was raised from 1000 Reichsthaler in 1810 to 2500 Reichsthaler in 1824,[23] and in his later years he was one of the best-paid professors of the university.[49] When Gauss was asked for help by his colleague and friend Friedrich Wilhelm Bessel in 1810, who was in trouble at Königsberg University because of his lack of an academic title, Gauss provided a doctorate honoris causa for Bessel from the Philosophy Faculty of Göttingen in March 1811.[j] Gauss gave another recommendation for an honorary degree for Sophie Germain but only shortly before her death, so she never received it.[52] He also gave successful support to the mathematician Gotthold Eisenstein in Berlin.[53] Gauss was loyal to the House of Hanover.
After King William IV died in 1837, the new Hanoverian King Ernest Augustus annulled the 1833 constitution. Seven professors, later known as the "Göttingen Seven", protested against this, among them his friend and collaborator Wilhelm Weber and Gauss's son-in-law Heinrich Ewald. All of them were dismissed, and three of them were expelled, but Ewald and Weber could stay in Göttingen. Gauss was deeply affected by this quarrel but saw no possibility to help them.[54] Gauss took part in academic administration: three times he was elected as dean of the Faculty of Philosophy.[55] Being entrusted with the widow's pension fund of the university, he dealt with actuarial science and wrote a report on the strategy for stabilizing the benefits. He was appointed director of the Royal Academy of Sciences in Göttingen for nine years.[55] Gauss remained mentally active into his old age, even while suffering from gout and general unhappiness. On 23 February 1855, he died of a heart attack in Göttingen,[13] and was interred in the Albani Cemetery there. Heinrich Ewald, Gauss's son-in-law, and Wolfgang Sartorius von Waltershausen, Gauss's close friend and biographer, gave eulogies at his funeral.[56] Gauss was a successful investor and accumulated considerable wealth with stocks and securities, amounting to a value of more than 150,000 Thaler; after his death, about 18,000 Thaler were found hidden in his rooms.[57] The day after Gauss's death his brain was removed, preserved, and studied by Rudolf Wagner, who found its mass to be slightly above average, at 1,492 grams (3.29 lb).[58][59] Wagner's son Hermann, a geographer, estimated the cerebral area to be 219,588 square millimetres (340.362 sq in) in his doctoral thesis.[60] In 2013, a neurobiologist at the Max Planck Institute for Biophysical Chemistry in Göttingen discovered that Gauss's brain had been mixed up soon after the first investigations, due to mislabelling, with that of the physician Conrad Heinrich Fuchs, who died in Göttingen a few months
after Gauss.[61] A further investigation showed no remarkable anomalies in the brains of either person. Thus, all investigations of Gauss's brain until 1998, except the first ones of Rudolf and Hermann Wagner, actually refer to the brain of Fuchs.[62] Gauss married Johanna Osthoff on 9 October 1805 in St. Catherine's church in Brunswick.[63] They had two sons and one daughter: Joseph (1806–1873), Wilhelmina (1808–1840), and Louis (1809–1810). Johanna died on 11 October 1809, one month after the birth of Louis, who himself died a few months later.[64] Gauss chose the first names of his children in honour of Giuseppe Piazzi, Wilhelm Olbers, and Karl Ludwig Harding, the discoverers of the first asteroids.[65] On 4 August 1810, Gauss married Wilhelmine (Minna) Waldeck, a friend of his first wife, with whom he had three more children: Eugen (later Eugene) (1811–1896), Wilhelm (later William) (1813–1879), and Therese (1816–1864). Minna Gauss died on 12 September 1831 after being seriously ill for more than a decade.[66] Therese then took over the household and cared for Gauss for the rest of his life; after her father's death, she married actor Constantin Staufenau.[67] Her sister Wilhelmina married the orientalist Heinrich Ewald.[68] Gauss's mother Dorothea lived in his house from 1817 until she died in 1839.[12] The eldest son Joseph, while still a schoolboy, helped his father as an assistant during the survey campaign in the summer of 1821. After a short time at university, in 1824 Joseph joined the Hanoverian army and assisted in surveying again in 1829. In the 1830s he was responsible for the enlargement of the survey network into the western parts of the kingdom. With his geodetical qualifications, he left the service and engaged in the construction of the railway network as director of the Royal Hanoverian State Railways.
In 1836 he studied the railroad system in the US for some months.[49][k] Eugen left Göttingen in September 1830 and emigrated to the United States, where he spent five years with the army. He then worked for the American Fur Company in the Midwest. He later moved to Missouri and became a successful businessman.[49] Wilhelm married a niece of the astronomer Bessel;[71] he then moved to Missouri, started as a farmer and became wealthy in the shoe business in St. Louis in later years.[72] Eugene and William have numerous descendants in America, but the Gauss descendants left in Germany all derive from Joseph, as the daughters had no children.[49] In the first two decades of the 19th century, Gauss was the only important mathematician in Germany comparable to the leading French mathematicians.[73] His Disquisitiones Arithmeticae was the first mathematical book from Germany to be translated into the French language.[74] Gauss was "in front of the new development" with documented research since 1799, his wealth of new ideas, and his rigour of demonstration.[75] In contrast to previous mathematicians like Leonhard Euler, who let their readers take part in their reasoning, including certain erroneous deviations from the correct path,[76] Gauss introduced a new style of direct and complete exposition that did not attempt to show the reader the author's train of thought.[77] Gauss was the first to restore that rigor of demonstration which we admire in the ancients and which had been forced unduly into the background by the exclusive interest of the preceding period in new developments. But for himself, he propagated a quite different ideal, given in a letter to Farkas Bolyai as follows:[78] It is not knowledge, but the act of learning, not possession but the act of getting there, which grants the greatest enjoyment. When I have clarified and exhausted a subject, then I turn away from it, in order to go into darkness again.
His posthumous papers, his scientific diary,[79] and short glosses in his own textbooks show that he worked to a great extent empirically.[80][81] He was a lifelong busy and enthusiastic calculator, working extraordinarily quickly and checking his results through estimation. Nevertheless, his calculations were not always free from mistakes.[82] He coped with the enormous workload by using skillful tools.[83] Gauss used numerous mathematical tables, examined their exactness, and constructed new tables on various matters for personal use.[84] He developed new tools for effective calculation, for example Gaussian elimination.[85] Gauss's calculations and the tables he prepared were often more precise than practically necessary.[86] Very likely, this method gave him additional material for his theoretical work.[83][87] Gauss was only willing to publish work when he considered it complete and above criticism. This perfectionism was in keeping with the motto of his personal seal Pauca sed Matura ("Few, but Ripe"). Many colleagues encouraged him to publicize new ideas and sometimes rebuked him if he hesitated too long, in their opinion. Gauss defended himself by claiming that the initial discovery of ideas was easy, but preparing a presentable elaboration was a demanding matter for him, for either lack of time or "serenity of mind".[39] Nevertheless, he published many short communications of urgent content in various journals, but left a considerable literary estate, too.[88][89] Gauss referred to mathematics as "the queen of sciences" and arithmetic as "the queen of mathematics",[90] and supposedly once espoused a belief in the necessity of immediately understanding Euler's identity as a benchmark pursuant to becoming a first-class mathematician.[91] On certain occasions, Gauss claimed that the ideas of another scholar had already been in his possession previously.
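The Gaussian elimination mentioned above reduces a linear system to triangular form by row operations and then back-substitutes. A minimal sketch with partial pivoting (a toy teaching version, not production numerical code):

```python
def solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    # Form the augmented matrix [A | b].
    M = [row[:] + [rhs] for row, rhs in zip(A, b)]
    for col in range(n):
        # Pivot: swap in the row with the largest entry in this column.
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        # Eliminate this column from all rows below the pivot.
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    # Back-substitution on the resulting triangular system.
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        s = sum(M[r][c] * x[c] for c in range(r + 1, n))
        x[r] = (M[r][n] - s) / M[r][r]
    return x

# 2x + y = 3 and x + 3y = 5 have the solution x = 0.8, y = 1.4.
x = solve([[2.0, 1.0], [1.0, 3.0]], [3.0, 5.0])
print(x)
```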
Thus his concept of priority as "the first to discover, not the first to publish" differed from that of his scientific contemporaries.[92] In contrast to his perfectionism in presenting mathematical ideas, his citations were criticized as negligent. He justified himself with an unusual view of correct citation practice: he would only give complete references, with respect to the previous authors of importance, which no one should ignore, but citing in this way would require knowledge of the history of science and more time than he wished to spend.[39] Soon after Gauss's death, his friend Sartorius published the first biography (1856), written in a rather enthusiastic style. Sartorius saw him as a serene and forward-striving man with childlike modesty,[93] but also of "iron character"[94] with an unshakeable strength of mind.[95] Apart from his closer circle, others regarded him as reserved and unapproachable, "like an Olympian sitting enthroned on the summit of science".[96] His close contemporaries agreed that Gauss was a man of difficult character. He often refused to accept compliments. His visitors were occasionally irritated by his grumpy behaviour, but a short time later his mood could change, and he would become a charming, open-minded host.[39] Gauss disliked polemic natures; together with his colleague Hausmann he opposed a call for Justus Liebig to a university chair in Göttingen, "because he was always involved in some polemic."[97] Gauss's life was overshadowed by severe problems in his family.
When his first wife Johanna suddenly died shortly after the birth of their third child, he revealed his grief in a last letter to his dead wife in the style of an ancient threnody, the most personal of his surviving documents.[98][99] His second wife and his two daughters suffered from tuberculosis.[100] In a letter to Bessel, dated December 1831, Gauss hinted at his distress, describing himself as "the victim of the worst domestic sufferings".[39] Because of his wife's illness, both younger sons were educated for some years in Celle, far from Göttingen. The military career of his elder son Joseph ended after more than two decades at the poorly paid rank of first lieutenant, although he had acquired a considerable knowledge of geodesy. He needed financial support from his father even after he was married.[49] The second son Eugen shared a good measure of his father's talent in computation and languages but had a lively and sometimes rebellious character. He wanted to study philology, whereas Gauss wanted him to become a lawyer. Having run up debts and caused a scandal in public,[101] Eugen suddenly left Göttingen under dramatic circumstances in September 1830 and emigrated via Bremen to the United States. He wasted the little money he had taken with him to start, after which his father refused further financial support.[49] The youngest son Wilhelm wanted to qualify for agricultural administration, but had difficulties getting an appropriate education, and eventually emigrated as well. Only Gauss's youngest daughter Therese accompanied him in his last years of life.[67] In his later years Gauss habitually collected various types of useful or useless numerical data, such as the number of paths from his home to certain places in Göttingen, or people's ages in days; he congratulated Humboldt in December 1851 for having reached the same age as Isaac Newton at his death, calculated in days.[102] Beyond his excellent knowledge of Latin, he was also acquainted with modern languages.
Gauss read both classical and modern literature, and English and French works in the original languages.[103][m] His favorite English author was Walter Scott, his favorite German Jean Paul. At the age of 62, he began to teach himself Russian, very likely to understand scientific writings from Russia, among them those of Lobachevsky on non-Euclidean geometry.[105][106] Gauss liked singing and went to concerts.[107] He was a busy newspaper reader; in his last years, he would visit an academic press salon of the university every noon.[108] Gauss did not care much for philosophy, and mocked the "splitting hairs of the so-called metaphysicians", by which he meant proponents of the contemporary school of Naturphilosophie.[109] Gauss had an "aristocratic and through and through conservative nature", with little respect for people's intelligence and morals, following the motto "mundus vult decipi".[108] He disliked Napoleon and his system and was horrified by violence and revolution of all kinds. Thus he condemned the methods of the Revolutions of 1848, though he agreed with some of their aims, such as that of a unified Germany.[94][n] He had a low estimation of the constitutional system and criticized parliamentarians of his time for their perceived ignorance and logical errors.[108] Some Gauss biographers have speculated on his religious beliefs.
He sometimes said "God arithmetizes"[110] and "I succeeded – not on account of my hard efforts, but by the grace of the Lord."[111] Gauss was a member of the Lutheran church, like most of the population in northern Germany, but it seems that he did not believe all Lutheran dogma or understand the Bible fully literally.[112] According to Sartorius, Gauss's religious tolerance, "insatiable thirst for truth" and sense of justice were motivated by his religious convictions.[113] In his doctoral thesis from 1799, Gauss proved the fundamental theorem of algebra, which states that every non-constant single-variable polynomial with complex coefficients has at least one complex root. Mathematicians including Jean le Rond d'Alembert had produced false proofs before him, and Gauss's dissertation contains a critique of d'Alembert's work. He subsequently produced three other proofs, the last one in 1849 being generally rigorous. His attempts led to considerable clarification of the concept of complex numbers.[114] In the preface to the Disquisitiones, Gauss dates the beginning of his work on number theory to 1795. By studying the works of previous mathematicians like Fermat, Euler, Lagrange, and Legendre, he realized that these scholars had already found much of what he had independently discovered.[115] The Disquisitiones Arithmeticae, written in 1798 and published in 1801, consolidated number theory as a discipline and covered both elementary and algebraic number theory. Therein he introduces the triple bar symbol (≡) for congruence and uses it for a clean presentation of modular arithmetic.[116] It deals with the unique factorization theorem and primitive roots modulo n.
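The primitive roots treated in the Disquisitiones can be explored computationally: a is a primitive root modulo a prime p when its powers run through every nonzero residue class mod p. A small sketch:

```python
# a is a primitive root modulo the prime p if a^1, a^2, ..., a^(p-1)
# hit every nonzero residue modulo p exactly once.

def is_primitive_root(a, p):
    return {pow(a, k, p) for k in range(1, p)} == set(range(1, p))

print(is_primitive_root(3, 7))   # True:  3^1..3^6 mod 7 = 3, 2, 6, 4, 5, 1
print(is_primitive_root(2, 7))   # False: powers of 2 mod 7 cycle 2, 4, 1
```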
In the main sections, Gauss presents the first two proofs of the law of quadratic reciprocity[117] and develops the theories of binary[118] and ternary quadratic forms.[119] The Disquisitiones include the Gauss composition law for binary quadratic forms, as well as the enumeration of the number of representations of an integer as the sum of three squares. As an almost immediate corollary of his theorem on three squares, he proves the triangular case of the Fermat polygonal number theorem for n = 3.[120] From several analytic results on class numbers that Gauss gives without proof towards the end of the fifth section,[121] it appears that Gauss already knew the class number formula in 1801.[122] In the last section, Gauss gives proof for the constructibility of a regular heptadecagon (17-sided polygon) with straightedge and compass by reducing this geometrical problem to an algebraic one.[123] He shows that a regular polygon is constructible if the number of its sides is either a power of 2 or the product of a power of 2 and any number of distinct Fermat primes. In the same section, he gives a result on the number of solutions of certain cubic polynomials with coefficients in finite fields, which amounts to counting integral points on an elliptic curve.[124] An unfinished chapter, consisting of work done during 1797–1799, was found among his papers after his death.[125][126] One of Gauss's first results was the empirically found conjecture of 1792 – the later-called prime number theorem – giving an estimation of the number of prime numbers by using the integral logarithm.[127][o] In 1816, Olbers encouraged Gauss to compete for a prize from the French Academy for a proof of Fermat's Last Theorem; he refused, considering the topic uninteresting.
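Gauss's constructibility criterion above can be checked mechanically: a regular n-gon is constructible exactly when the odd part of n is a product of distinct Fermat primes. A small sketch using the five known Fermat primes:

```python
# A regular n-gon is constructible with compass and straightedge iff
# n = 2^k * (a product of distinct Fermat primes). Only five Fermat
# primes are known: 3, 5, 17, 257, 65537.
FERMAT_PRIMES = [3, 5, 17, 257, 65537]

def constructible(n):
    if n < 3:
        return False
    while n % 2 == 0:          # strip the power-of-2 factor
        n //= 2
    for p in FERMAT_PRIMES:    # divide out each Fermat prime at most once
        if n % p == 0:
            n //= p
    return n == 1              # constructible iff nothing else remains

print([k for k in range(3, 21) if constructible(k)])
# [3, 4, 5, 6, 8, 10, 12, 15, 16, 17, 20]
```

Note that 9 = 3² and 18 = 2·3² are correctly rejected, since the Fermat prime 3 may appear only once.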
However, after his death a short undated paper was found with proofs of the theorem for the cases n = 3 and n = 5.[129] The particular case of n = 3 was proved much earlier by Leonhard Euler, but Gauss developed a more streamlined proof which made use of Eisenstein integers; though more general, the proof was simpler than in the real integers case.[130]

Gauss contributed to solving the Kepler conjecture in 1831 with the proof that the greatest packing density of spheres in three-dimensional space is achieved when the centres of the spheres form a face-centred cubic arrangement,[131] a result he obtained when reviewing a book by Ludwig August Seeber on the theory of reduction of positive ternary quadratic forms.[132] Having noticed some gaps in Seeber's proof, he simplified many of his arguments, proved the central conjecture, and remarked that this theorem is equivalent to the Kepler conjecture for regular arrangements.[133]

In two papers on biquadratic residues (1828, 1832) Gauss introduced the ring of Gaussian integers Z[i], showed that it is a unique factorization domain,[134] and generalized some key arithmetic concepts, such as Fermat's little theorem and Gauss's lemma. The main objective of introducing this ring was to formulate the law of biquadratic reciprocity[134] – as Gauss discovered, rings of complex integers are the natural setting for such higher reciprocity laws.[135] In the second paper, he stated the general law of biquadratic reciprocity and proved several special cases of it.
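One concrete payoff of unique factorization in Z[i] is Fermat's two-square theorem: a rational prime p ≡ 1 (mod 4) splits as p = (a + bi)(a − bi) = a² + b², while a prime p ≡ 3 (mod 4) remains prime in Z[i]. A minimal brute-force sketch (the function name is mine):

```python
def two_squares(p: int):
    """Return (a, b) with p = a*a + b*b if such a decomposition exists,
    else None.  For an odd prime p it exists exactly when p ≡ 1 (mod 4),
    reflecting how p factors in the Gaussian integers Z[i]."""
    for a in range(1, int(p ** 0.5) + 1):
        b2 = p - a * a
        b = int(round(b2 ** 0.5))
        if b * b == b2:
            return a, b
    return None
```

So 13 = 2² + 3² corresponds to the Gaussian factorization 13 = (2 + 3i)(2 − 3i), while 7 admits no such decomposition.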
In an earlier publication from 1818 containing his fifth and sixth proofs of quadratic reciprocity, he claimed that the techniques of these proofs (Gauss sums) can be applied to prove higher reciprocity laws.[136]

One of Gauss's first discoveries was the notion of the arithmetic-geometric mean (AGM) of two positive real numbers.[137] He discovered its relation to elliptic integrals in the years 1798–1799 through Landen's transformation, and a diary entry recorded the discovery of the connection of Gauss's constant to lemniscatic elliptic functions, a result that Gauss stated "will surely open an entirely new field of analysis".[138] He also made early inroads into the more formal issues of the foundations of complex analysis, and from a letter to Bessel in 1811 it is clear that he knew the "fundamental theorem of complex analysis" – Cauchy's integral theorem – and understood the notion of complex residues when integrating around poles.[124][139]

Euler's pentagonal numbers theorem, together with other researches on the AGM and lemniscatic functions, led him to plenty of results on Jacobi theta functions,[124] culminating in the discovery in 1808 of what was later called the Jacobi triple product identity, which includes Euler's theorem as a special case.[140] His works show that he had known modular transformations of order 3, 5, and 7 for elliptic functions since 1808.[141][p][q]

Several mathematical fragments in his Nachlass indicate that he knew parts of the modern theory of modular forms.[124] In his work on the multivalued AGM of two complex numbers, he discovered a deep connection between the infinitely many values of the AGM and its two "simplest values".[138] In his unpublished writings he recognized and made a sketch of the key concept of fundamental domain for the modular group.[143][144] One of Gauss's sketches of this kind was a drawing of a tessellation of the unit disk by "equilateral" hyperbolic triangles with all angles equal to π/4.[145]

An example of Gauss's insight in analysis is the
cryptic remark that the principles of circle division by compass and straightedge can also be applied to the division of the lemniscate curve, which inspired Abel's theorem on lemniscate division.[r] Another example is his publication "Summatio quarundam serierum singularium" (1811) on the determination of the sign of quadratic Gauss sums, in which he solved the main problem by introducing q-analogs of binomial coefficients and manipulating them by several original identities that seem to stem from his work on elliptic function theory; however, Gauss cast his argument in a formal way that does not reveal its origin in elliptic function theory, and only the later work of mathematicians such as Jacobi and Hermite has exposed the crux of his argument.[146]

In the "Disquisitiones generales circa seriem infinitam..." (1813), he provides the first systematic treatment of the general hypergeometric function F(α, β, γ, x), and shows that many of the functions known at the time are special cases of the hypergeometric function.[147] This work is the first exact inquiry into convergence of infinite series in the history of mathematics.[148] Furthermore, it deals with infinite continued fractions arising as ratios of hypergeometric functions, which are now called Gauss continued fractions.[149]

In 1823, Gauss won the prize of the Danish Society with an essay on conformal mappings, which contains several developments that pertain to the field of complex analysis.[150] Gauss stated that angle-preserving mappings in the complex plane must be complex analytic functions, and used the later-named Beltrami equation to prove the existence of isothermal coordinates on analytic surfaces.
The essay concludes with examples of conformal mappings into a sphere and an ellipsoid of revolution.[151]

Gauss often deduced theorems inductively from numerical data he had collected empirically.[81] As such, the use of efficient algorithms to facilitate calculations was vital to his research, and he made many contributions to numerical analysis, such as the method of Gaussian quadrature, published in 1816.[152] In a private letter to Gerling from 1823,[153] he described a solution of a 4×4 system of linear equations with the Gauss–Seidel method – an "indirect" iterative method for the solution of linear systems – and recommended it over the usual method of "direct elimination" for systems of more than two equations.[154]

Gauss invented an algorithm for calculating what are now called discrete Fourier transforms when calculating the orbits of Pallas and Juno in 1805, 160 years before Cooley and Tukey found their similar Cooley–Tukey algorithm.[155] He developed it as a trigonometric interpolation method, but the paper Theoria Interpolationis Methodo Nova Tractata was published only posthumously in 1876,[156] well after Joseph Fourier's introduction of the subject in 1807.[157]

The geodetic survey of Hanover fuelled Gauss's interest in differential geometry and topology, fields of mathematics dealing with curves and surfaces. This led him in 1828 to the publication of a work that marks the birth of the modern differential geometry of surfaces, as it departed from the traditional ways of treating surfaces as cartesian graphs of functions of two variables, and initiated the exploration of surfaces from the "inner" point of view of a two-dimensional being constrained to move on them. As a result, the Theorema Egregium (remarkable theorem) established a property of the notion of Gaussian curvature.
Informally, the theorem says that the curvature of a surface can be determined entirely by measuring angles and distances on the surface, regardless of the embedding of the surface in three-dimensional or two-dimensional space.[158]

The Theorema Egregium leads to the abstraction of surfaces as doubly-extended manifolds; it clarifies the distinction between the intrinsic properties of the manifold (the metric) and its physical realization in ambient space. A consequence is the impossibility of an isometric transformation between surfaces of different Gaussian curvature. This means, practically, that a sphere or an ellipsoid cannot be transformed to a plane without distortion, which causes a fundamental problem in designing projections for geographical maps.[158] A portion of this essay is dedicated to a profound study of geodesics. In particular, Gauss proves the local Gauss–Bonnet theorem on geodesic triangles, and generalizes Legendre's theorem on spherical triangles to geodesic triangles on arbitrary surfaces with continuous curvature; he found that the angles of a "sufficiently small" geodesic triangle deviate from those of a planar triangle with the same sides in a way that depends only on the values of the surface curvature at the vertices of the triangle, regardless of the behaviour of the surface in the triangle's interior.[159]

Gauss's memoir from 1828 lacks the conception of geodesic curvature. However, in a previously unpublished manuscript, very likely written in 1822–1825, he introduced the term "side curvature" (German: "Seitenkrümmung") and proved its invariance under isometric transformations, a result that was later obtained by Ferdinand Minding and published by him in 1830.
This Gauss paper contains the core of his lemma on total curvature, but also its generalization, found and proved by Pierre Ossian Bonnet in 1848 and known as the Gauss–Bonnet theorem.[160]

During Gauss's lifetime, the parallel postulate of Euclidean geometry was heavily discussed.[161] Numerous efforts were made to prove it within the frame of the Euclidean axioms, whereas some mathematicians discussed the possibility of geometrical systems without it.[162] Gauss thought about the basics of geometry from the 1790s on, but only realized in the 1810s that a non-Euclidean geometry without the parallel postulate could solve the problem.[163][161] In a letter to Franz Taurinus of 1824, he presented a short comprehensible outline of what he named a "non-Euclidean geometry",[164] but he strongly forbade Taurinus to make any use of it.[163] Gauss is credited with being the first to discover and study non-Euclidean geometry, even coining the term.[165][164][166]

The first publications on non-Euclidean geometry in the history of mathematics were authored by Nikolai Lobachevsky in 1829 and Janos Bolyai in 1832.[162] In the following years, Gauss wrote his ideas on the topic but did not publish them, thus avoiding influencing the contemporary scientific discussion.[163][167] Gauss commended the ideas of Janos Bolyai in a letter to his father and university friend Farkas Bolyai,[168] claiming that these were congruent to his own thoughts of some decades.[163][169] However, it is not quite clear to what extent he preceded Lobachevsky and Bolyai, as his written remarks are vague and obscure.[162]

Sartorius first mentioned Gauss's work on non-Euclidean geometry in 1856, but only the publication of Gauss's Nachlass in Volume VIII of the Collected Works (1900) showed Gauss's ideas on the matter, at a time when non-Euclidean geometry was still an object of some controversy.[163]

Gauss was also an early pioneer of topology, or Geometria Situs as it was called in his lifetime.
The first proof of the fundamental theorem of algebra in 1799 contained an essentially topological argument; fifty years later, he further developed the topological argument in his fourth proof of this theorem.[170]

Another encounter with topological notions occurred to him in the course of his astronomical work in 1804, when he determined the limits of the region on the celestial sphere in which comets and asteroids might appear, and which he termed the "Zodiacus". He discovered that if the Earth's and comet's orbits are linked, then for topological reasons the Zodiacus is the entire sphere. In 1848, in the context of the discovery of the asteroid 7 Iris, he published a further qualitative discussion of the Zodiacus.[171]

In Gauss's letters of 1820–1830, he thought intensively about topics with close affinity to Geometria Situs, and became gradually conscious of the semantic difficulties in this field. Fragments from this period reveal that he tried to classify "tract figures", which are closed plane curves with a finite number of transverse self-intersections, and which may also be planar projections of knots.[172] To do so he devised a symbolic scheme, the Gauss code, that in a sense captured the characteristic features of tract figures.[173][174]

In a fragment from 1833, Gauss defined the linking number of two space curves by a certain double integral, and in doing so provided for the first time an analytical formulation of a topological phenomenon. In the same note, he lamented the little progress made in Geometria Situs, and remarked that one of its central problems will be "to count the intertwinings of two closed or infinite curves".
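Gauss's double integral, Lk = (1/4π) ∮∮ (r₁ − r₂) · (dr₁ × dr₂) / |r₁ − r₂|³, can be approximated numerically. A rough sketch for two linked unit circles (a Hopf link); the discretization and names here are mine, not Gauss's:

```python
import math

def cross(u, v):
    """Cross product of 3-vectors given as tuples."""
    return (u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0])

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def linking_number(c1, c2, n=200):
    """Approximate Gauss's linking integral for closed curves
    c1, c2 : t in [0, 2*pi) -> R^3, discretized into n chords each."""
    ts = [2.0 * math.pi * k / n for k in range(n)]
    def chords(c):
        pts = [c(t) for t in ts]
        mids = [tuple((p + q) / 2.0 for p, q in zip(pts[i], pts[(i + 1) % n]))
                for i in range(n)]
        difs = [tuple(q - p for p, q in zip(pts[i], pts[(i + 1) % n]))
                for i in range(n)]
        return mids, difs
    m1, d1 = chords(c1)
    m2, d2 = chords(c2)
    total = 0.0
    for i in range(n):
        for j in range(n):
            r = tuple(a - b for a, b in zip(m1[i], m2[j]))
            total += dot(r, cross(d1[i], d2[j])) / dot(r, r) ** 1.5
    return total / (4.0 * math.pi)

# A Hopf link: the unit circle in the xy-plane and a unit circle in the
# xz-plane centred at (1, 0, 0); their linking number is +1 or -1,
# with the sign depending on the chosen orientations.
circle_a = lambda t: (math.cos(t), math.sin(t), 0.0)
circle_b = lambda t: (1.0 + math.cos(t), 0.0, math.sin(t))
```

For two unlinked curves the same sum comes out near zero, matching the topological invariance of the integral.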
His notebooks from that period reveal that he was also thinking about other topological objects such as braids and tangles.[171]

Gauss's influence in later years on the emerging field of topology, which he held in high esteem, came through occasional remarks and oral communications to Möbius and Listing.[175]

Gauss applied the concept of complex numbers to solve well-known problems in a new concise way. For example, in a short note from 1836 on geometric aspects of the ternary forms and their application to crystallography,[176] he stated the fundamental theorem of axonometry, which tells how to represent a 3D cube on a 2D plane with complete accuracy, via complex numbers.[177] He described rotations of the sphere as the action of certain linear fractional transformations on the extended complex plane,[178] and gave a proof for the geometric theorem that the altitudes of a triangle always meet in a single orthocenter.[179]

Gauss was concerned with John Napier's "Pentagramma mirificum" – a certain spherical pentagram – for several decades;[180] he approached it from various points of view, and gradually gained a full understanding of its geometric, algebraic, and analytic aspects.[181] In particular, in 1843 he stated and proved several theorems connecting elliptic functions, Napier spherical pentagons, and Poncelet pentagons in the plane.[182] Furthermore, he contributed a solution to the problem of constructing the largest-area ellipse inside a given quadrilateral,[183][184] and discovered a surprising result about the computation of the area of pentagons.[185][186]

On 1 January 1801, Italian astronomer Giuseppe Piazzi discovered a new celestial object, presumed it to be the long-sought planet between Mars and Jupiter according to the so-called Titius–Bode law, and named it Ceres.[187] He could track it only for a short time until it disappeared behind the glare of the Sun.
The mathematical tools of the time were not sufficient to predict the location of its reappearance from the few data available. Gauss tackled the problem and predicted a position for possible rediscovery in December 1801. This turned out to be accurate within half a degree when Franz Xaver von Zach, on 7 and 31 December at Gotha, and independently Heinrich Olbers, on 1 and 2 January in Bremen, identified the object near the predicted position.[188][t]

Gauss's method leads to an equation of the eighth degree, of which one solution, the Earth's orbit, is known. The solution sought is then separated from the remaining six based on physical conditions. In this work, Gauss used comprehensive approximation methods which he created for that purpose.[189]

The discovery of Ceres led Gauss to the theory of the motion of planetoids disturbed by large planets, eventually published in 1809 as Theoria motus corporum coelestium in sectionibus conicis solem ambientium.[190] It introduced the Gaussian gravitational constant.[37]

After the new asteroids had been discovered, Gauss occupied himself with the perturbations of their orbital elements. Firstly he examined Ceres with analytical methods similar to those of Laplace, but his favorite object was Pallas, because of its great eccentricity and orbital inclination, for which Laplace's method did not work. Gauss used his own tools: the arithmetic–geometric mean, the hypergeometric function, and his method of interpolation.[191] He found an orbital resonance with Jupiter in proportion 18:7 in 1812; Gauss gave this result as a cipher, and gave the explicit meaning only in letters to Olbers and Bessel.[192][193][u] After long years of work, he finished his study in 1816 without a result that seemed sufficient to him. This marked the end of his activities in theoretical astronomy.[195]

One fruit of Gauss's research on Pallas perturbations was the Determinatio Attractionis... (1818) on a method of theoretical astronomy that later became known as the "elliptic ring method".
It introduced an averaging conception in which a planet in orbit is replaced by a fictitious ring with mass density proportional to the time the planet takes to follow the corresponding orbital arcs.[196] Gauss presented a method of evaluating the gravitational attraction of such an elliptic ring that involves several steps; one of them is a direct application of the arithmetic-geometric mean (AGM) algorithm to calculate an elliptic integral.[197]

Even after Gauss's contributions to theoretical astronomy came to an end, more practical activities in observational astronomy continued and occupied him during his entire career. As early as 1799, Gauss dealt with the determination of longitude by use of the lunar parallax, for which he developed more convenient formulas than those in common use.[198] After his appointment as director of the observatory, he attached importance to the fundamental astronomical constants in correspondence with Bessel. Gauss himself provided tables of nutation and aberration, solar coordinates, and refraction.[199] He made many contributions to spherical geometry, and in this context solved some practical problems about navigation by stars.[200] He published a great number of observations, mainly on minor planets and comets; his last observation was the solar eclipse of 28 July 1851.[201]

Gauss's first publication following his doctoral thesis dealt with the determination of the date of Easter (1800), an elementary mathematical topic. Gauss aimed to present a convenient algorithm for people without any knowledge of ecclesiastical or even astronomical chronology, and thus avoided the usual terms of golden number, epact, solar cycle, dominical letter, and any religious connotations.[202] This choice of topic likely had historical grounds. The replacement of the Julian calendar by the Gregorian calendar had caused confusion in the Holy Roman Empire since the 16th century, and was not finished in Germany until 1700, when the difference of eleven days was removed.
Even after this, Easter fell on different dates in Protestant and Catholic territories, until this difference was abolished by agreement in 1776. In the Protestant states, such as the Duchy of Brunswick, the Easter of 1777, five weeks before Gauss's birth, was the first calculated in the new manner.[203]

Gauss likely used the method of least squares to minimize the impact of measurement error when calculating the orbit of Ceres.[92] The method was first published by Adrien-Marie Legendre in 1805, but Gauss claimed in Theoria motus (1809) that he had been using it since 1794 or 1795.[204][205][206] In the history of statistics, this disagreement is called the "priority dispute over the discovery of the method of least squares".[92] Gauss proved that the method has the lowest sampling variance within the class of linear unbiased estimators under the assumption of normally distributed errors (Gauss–Markov theorem), in the two-part paper Theoria combinationis observationum erroribus minimis obnoxiae (1823).[207] In the first part he proved Gauss's inequality (a Chebyshev-type inequality) for unimodal distributions, and stated without proof another inequality for moments of the fourth order (a special case of the Gauss–Winckler inequality).[208] He derived lower and upper bounds for the variance of the sample variance. In the second part, Gauss described recursive least squares methods. His work on the theory of errors was extended in several directions by the geodesist Friedrich Robert Helmert to the Gauss–Helmert model.[209]

Gauss also contributed to problems in probability theory that are not directly concerned with the theory of errors. One example appears as a diary note in which he tried to describe the asymptotic distribution of entries in the continued fraction expansion of a random number uniformly distributed in (0,1). He derived this distribution, now known as the Gauss–Kuzmin distribution, as a by-product of the discovery of the ergodicity of the Gauss map for continued fractions.
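The Gauss map is T(x) = 1/x − ⌊1/x⌋, and the invariant density Gauss found is ρ(x) = 1/(ln 2 · (1 + x)). Invariance means ρ(x) = Σ_{k≥1} ρ(1/(k + x))/(k + x)², a sum over all preimages of x weighted by the Jacobian, and the sum telescopes exactly to ρ(x). A small numerical check (the function names are mine):

```python
import math

def gauss_density(x: float) -> float:
    """Gauss's invariant density for the continued-fraction (Gauss) map."""
    return 1.0 / (math.log(2.0) * (1.0 + x))

def transfer(x: float, terms: int = 200000) -> float:
    """Density at x after one application of T(x) = 1/x mod 1: the
    preimages of x are 1/(k + x) for k = 1, 2, ..., each weighted by
    the Jacobian factor 1/(k + x)^2.  The tail of the truncated sum
    is O(1/terms)."""
    return sum(gauss_density(1.0 / (k + x)) / (k + x) ** 2
               for k in range(1, terms + 1))
```

Each summand equals (1/ln 2) · (1/(k + x) − 1/(k + x + 1)), so the series telescopes to (1/ln 2) · 1/(1 + x) = ρ(x), which the truncated sum reproduces to roughly the size of its tail.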
Gauss's solution is the first-ever result in the metrical theory of continued fractions.[210]

Gauss had been busy with geodetic problems since 1799, when he helped Karl Ludwig von Lecoq with calculations during his survey in Westphalia.[211] Beginning in 1804, he taught himself some practical geodesy in Brunswick[212] and Göttingen.[213]

From 1816, Gauss's former student Heinrich Christian Schumacher, then professor in Copenhagen but living in Altona (Holstein) near Hamburg as head of an observatory, carried out a triangulation of the Jutland peninsula from Skagen in the north to Lauenburg in the south.[v] This project was the basis for map production but also aimed at determining the geodetic arc between the terminal sites. Data from geodetic arcs were used to determine the dimensions of the Earth geoid, and long arc distances brought more precise results. Schumacher asked Gauss to continue this work further to the south in the Kingdom of Hanover; Gauss agreed after a short time of hesitation. Finally, in May 1820, King George IV gave the order to Gauss.[214]

An arc measurement needs a precise astronomical determination of at least two points in the network. Gauss and Schumacher used the coincidence that both observatories, in Göttingen and in the garden of Schumacher's house in Altona, lay nearly on the same longitude. The latitude was measured with both their instruments and a zenith sector of Ramsden that was transported to both observatories.[215][w]

Gauss and Schumacher had already determined some angles between Lüneburg, Hamburg, and Lauenburg for the geodetic connection in October 1818.[216] During the summers of 1821 to 1825, Gauss directed the triangulation work personally, from Thuringia in the south to the river Elbe in the north. The triangle between Hoher Hagen, Großer Inselsberg in the Thuringian Forest, and Brocken in the Harz mountains was the largest Gauss ever measured, with a longest side of 107 km (66.5 miles).
In the thinly populated Lüneburg Heath, without significant natural summits or artificial buildings, he had difficulties finding suitable triangulation points; sometimes cutting lanes through the vegetation was necessary.[203][217]

For pointing signals, Gauss invented a new instrument with movable mirrors and a small telescope that reflects the sunbeams to the triangulation points, and named it the heliotrope.[218] Another suitable construction for the same purpose was a sextant with an additional mirror, which he named a vice heliotrope.[219] Gauss was assisted by soldiers of the Hanoverian army, among them his eldest son Joseph. Gauss took part in the baseline measurement (Braak Base Line) of Schumacher in the village of Braak near Hamburg in 1820, and used the result for the evaluation of the Hanoverian triangulation.[220]

An additional result was a better value for the flattening of the approximate Earth ellipsoid.[221][x] Gauss developed the universal transverse Mercator projection of the ellipsoidally shaped Earth (what he named a conform projection)[223] for representing geodetical data in plane charts.

When the arc measurement was finished, Gauss began the enlargement of the triangulation to the west to get a survey of the whole Kingdom of Hanover, with a royal decree from 25 March 1828.[224] The practical work was directed by three army officers, among them Lieutenant Joseph Gauss. The complete data evaluation lay in the hands of Gauss, who applied his mathematical inventions such as the method of least squares and the elimination method to it.
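The least-squares principle can be illustrated with the simplest case, fitting a line y ≈ a + b·x via the normal equations; a minimal sketch (my own toy example, not Gauss's geodetic computation, which involved far larger over-determined systems):

```python
def fit_line(xs, ys):
    """Least-squares line y = a + b*x via the normal equations,
    minimizing the sum of squared residuals over the observations."""
    n = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    b = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    a = (sy - b * sx) / n
    return a, b
```

When the data lie exactly on a line the residuals vanish and the fit recovers it; with noisy observations the same formulas return the slope and intercept of minimal total squared error.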
The project was finished in 1844, and Gauss sent a final report of the project to the government; his method of projection was not edited until 1866.[225][226]

In 1828, when studying differences in latitude, Gauss first defined a physical approximation for the figure of the Earth as the surface everywhere perpendicular to the direction of gravity;[227] later his doctoral student Johann Benedict Listing called this the geoid.[228]

Gauss had been interested in magnetism since 1803.[229] After Alexander von Humboldt visited Göttingen in 1826, both scientists began intensive research on geomagnetism, partly independently, partly in productive cooperation.[230] In 1828, Gauss was Humboldt's guest during the conference of the Society of German Natural Scientists and Physicians in Berlin, where he became acquainted with the physicist Wilhelm Weber.[231]

When Weber got the chair of physics in Göttingen as successor of Johann Tobias Mayer on Gauss's recommendation in 1831, the two started a fruitful collaboration, leading to new knowledge of magnetism, including a representation of the unit of magnetism in terms of mass, length, and time.[232] They founded the Magnetic Association (German: Magnetischer Verein), an international working group of several observatories, which carried out measurements of Earth's magnetic field in many regions of the world using equivalent methods at arranged dates in the years 1836 to 1841.[233]

In 1836, Humboldt suggested the establishment of a worldwide net of geomagnetic stations in the British dominions in a letter to the Duke of Sussex, then president of the Royal Society; he proposed that magnetic measurements should be taken under standardized conditions using his methods.[234][235] Together with other instigators, this led to a global program known as the "Magnetical crusade", under the direction of Edward Sabine.
The dates, times, and intervals of observations were determined in advance, and Göttingen mean time was used as the standard.[236] Sixty-one stations on all five continents participated in this global program. Gauss and Weber founded a series for publication of the results; six volumes were edited between 1837 and 1843. Weber's departure to Leipzig in 1843, a late effect of the Göttingen Seven affair, marked the end of the Magnetic Association's activity.[233]

Following Humboldt's example, Gauss ordered a magnetic observatory to be built in the garden of the observatory, but the scientists differed over instrumental equipment; Gauss preferred stationary instruments, which he thought would give more precise results, whereas Humboldt was accustomed to movable instruments. Gauss was interested in the temporal and spatial variation of magnetic declination, inclination, and intensity, and differentiated, unlike Humboldt, between "horizontal" and "vertical" intensity. Together with Weber, he developed methods of measuring the components of the intensity of the magnetic field, and constructed a suitable magnetometer to measure absolute values of the strength of the Earth's magnetic field, rather than relative ones that depended on the apparatus.[233][237] The precision of the magnetometer was about ten times higher than that of previous instruments.
With this work, Gauss was the first to derive a non-mechanical quantity from basic mechanical quantities.[236]

Gauss carried out a General Theory of Terrestrial Magnetism (1839), in which he believed he had described the nature of magnetic force; according to Felix Klein, this work is a presentation of observations by use of spherical harmonics rather than a physical theory.[238] The theory predicted the existence of exactly two magnetic poles on the Earth, so Hansteen's idea of four magnetic poles became obsolete,[239] and the data allowed their locations to be determined with rather good precision.[240]

Gauss influenced the beginning of geophysics in Russia, when Adolph Theodor Kupffer, one of his former students, founded a magnetic observatory in St. Petersburg, following the example of the observatory in Göttingen, and similarly, Ivan Simonov in Kazan.[239]

The discoveries of Hans Christian Ørsted on electromagnetism and Michael Faraday on electromagnetic induction drew Gauss's attention to these matters.[241] Gauss and Weber found rules for branched electric circuits, which were later found independently and first published by Gustav Kirchhoff, and named after him as Kirchhoff's circuit laws,[242] and made inquiries into electromagnetism. They constructed the first electromechanical telegraph in 1833, and Weber himself connected the observatory with the institute for physics in the town centre of Göttingen,[y] but they made no further commercial use of this invention.[243][244]

Gauss's main theoretical interests in electromagnetism were reflected in his attempts to formulate quantitative laws governing electromagnetic induction.
In notebooks from these years, he recorded several innovative formulations; he discovered the vector potential function, independently rediscovered by Franz Ernst Neumann in 1845, and in January 1835 he wrote down an "induction law" equivalent to Faraday's law, which stated that the electromotive force at a given point in space is equal to the instantaneous rate of change (with respect to time) of this function.[245][246]

Gauss tried to find a unifying law for long-distance effects of electrostatics, electrodynamics, electromagnetism, and induction, comparable to Newton's law of gravitation,[247] but his attempt ended in a "tragic failure".[236]

Since Isaac Newton had shown theoretically that the Earth and rotating stars assume non-spherical shapes, the problem of the attraction of ellipsoids gained importance in mathematical astronomy. In his first publication on potential theory, the "Theoria attractionis..." (1813), Gauss provided a closed-form expression for the gravitational attraction of a homogeneous triaxial ellipsoid at every point in space.[248] In contrast to the previous research of Maclaurin, Laplace, and Lagrange, Gauss's new solution treated the attraction more directly in the form of an elliptic integral. In the process, he also proved and applied some special cases of the so-called Gauss's theorem in vector analysis.[249]

In the General theorems concerning the attractive and repulsive forces acting in reciprocal proportions of quadratic distances (1840), Gauss gave a basic theory of magnetic potential, based on Lagrange, Laplace, and Poisson;[238] it seems rather unlikely that he knew the previous works of George Green on this subject.[241] However, Gauss could never give any reasons for magnetism, nor a theory of magnetism similar to Newton's work on gravitation, that would enable scientists to predict geomagnetic effects in the future.[236]

Gauss's calculations enabled instrument maker Johann Georg Repsold in Hamburg to construct a new achromatic lens system in 1810.
A main problem, among other difficulties, was that the refractive index and dispersion of the glass used were not precisely known.[250] In a short article from 1817, Gauss dealt with the problem of the removal of chromatic aberration in double lenses, and computed adjustments of the shape and coefficients of refraction required to minimize it. His work was noted by the optician Carl August von Steinheil, who in 1860 introduced the achromatic Steinheil doublet, partly based on Gauss's calculations.[251] Many results in geometrical optics are scattered in Gauss's correspondence and hand notes.[252]

In the Dioptrical Investigations (1840), Gauss gave the first systematic analysis of the formation of images under a paraxial approximation (Gaussian optics).[253] He characterized optical systems under a paraxial approximation only by their cardinal points,[254] and he derived the Gaussian lens formula, applicable without restriction with respect to the thickness of the lenses.[255][256]

Gauss's first work in mechanics concerned the Earth's rotation. When his university friend Benzenberg carried out experiments in 1802 to determine the deviation of falling masses from the perpendicular, what today is known as an effect of the Coriolis force, he asked Gauss for a theory-based calculation of the values for comparison with the experimental ones. Gauss elaborated a system of fundamental equations for the motion, and his results corresponded sufficiently with Benzenberg's data; Benzenberg added Gauss's considerations as an appendix to his book on falling experiments.[257]

After Foucault had publicly demonstrated the Earth's rotation with his pendulum experiment in 1851, Gerling asked Gauss for further explanations. This instigated Gauss to design a new apparatus for demonstration with a much shorter pendulum than Foucault's. The oscillations were observed with a reading telescope, with a vertical scale and a mirror fastened at the pendulum.
It is described in the Gauss–Gerling correspondence and Weber made some experiments with this apparatus in 1853, but no data were published.[258][259] Gauss's principle of least constraintof 1829 was established as a general concept to overcome the division of mechanics into statics and dynamics, combiningD'Alembert's principlewithLagrange'sprinciple of virtual work, and showing analogies to the method ofleast squares.[260] In 1828, Gauss was appointed as head of the board for weights and measures of the Kingdom of Hanover. He createdstandardsfor length and measure. Gauss himself took care of the time-consuming measures and gave detailed orders for the mechanical construction.[203]In the correspondence with Schumacher, who was also working on this matter, he described new ideas for high-precision scales.[261]He submitted the final reports on the Hanoverianfootandpoundto the government in 1841. This work achieved international importance due to an 1836 law that connected the Hanoverian measures with the English ones.[203] Gauss first became member of a scientific society, theRussian Academy of Sciences, in 1802.[262]Further memberships (corresponding, foreign or full) were awarded by theAcademy of Sciencesin Göttingen (1802/ 1807),[263]theFrench Academy of Sciences(1804/ 1820),[264]theRoyal Societyof London (1804),[265]theRoyal Prussian Academyin Berlin (1810),[266]theNational Academy of Sciencein Verona (1810),[267]theRoyal Society of Edinburgh(1820),[268]theBavarian Academy of Sciencesof Munich (1820),[269]theRoyal Danish Academyin Copenhagen (1821),[270]theRoyal Astronomical Societyin London (1821),[271]theRoyal Swedish Academy of Sciences(1821),[270]theAmerican Academy of Arts and Sciencesin Boston (1822),[272]theRoyal Bohemian Society of Sciencesin Prague (1833),[273]theRoyal Academy of Science, Letters and Fine Arts of Belgium(1841/1845),[274]theRoyal Society of Sciences in Uppsala(1843),[273]theRoyal Irish Academyin Dublin (1843),[273]theRoyal Institute of 
the Netherlands (1845/1851),[275] the Spanish Royal Academy of Sciences in Madrid (1850),[276] the Russian Geographical Society (1851),[277] the Imperial Academy of Sciences in Vienna (1848),[277] the American Philosophical Society (1853),[278] the Cambridge Philosophical Society,[277] and the Royal Hollandish Society of Sciences in Haarlem.[279][280] Both the University of Kazan and the Philosophy Faculty of the University of Prague appointed him honorary member in 1848.[279] Gauss received the Lalande Prize from the French Academy of Science in 1809 for the theory of planets and the means of determining their orbits from only three observations,[281] the Danish Academy of Science prize in 1823 for his memoir on conformal projection,[273] and the Copley Medal from the Royal Society in 1838 for "his inventions and mathematical researches in magnetism".[280][282][37] Gauss was appointed Knight of the French Legion of Honour in 1837,[283] and became one of the first members of the Prussian Order Pour le Mérite (Civil class) when it was established in 1842.[284] He received the Order of the Crown of Westphalia (1810),[280] the Danish Order of the Dannebrog (1817),[280] the Hanoverian Royal Guelphic Order (1815),[280] the Swedish Order of the Polar Star (1844),[285] the Order of Henry the Lion (1849),[285] and the Bavarian Maximilian Order for Science and Art (1853).[277] The Kings of Hanover appointed him the honorary titles "Hofrath" (1816)[55] and "Geheimer Hofrath"[z] (1845). In 1849, on the occasion of the golden jubilee of his doctorate, he received honorary citizenship of both Brunswick and Göttingen.[277] Soon after his death a medal was issued by order of King George V of Hanover with the back inscription dedicated "to the Prince of Mathematicians".[286] The "Gauss-Gesellschaft Göttingen" ("Göttingen Gauss Society") was founded in 1964 for research on the life and work of Carl Friedrich Gauss and related persons.
It publishes theMitteilungen der Gauss-Gesellschaft(Communications of the Gauss Society).[287] TheGöttingen Academy of Sciences and Humanitiesprovides a complete collection of the known letters from and to Carl Friedrich Gauss that is accessible online.[38]The literary estate is kept and provided by theGöttingen State and University Library.[288]Written materials from Carl Friedrich Gauss and family members can also be found in the municipal archive of Brunswick.[289]
https://en.wikipedia.org/wiki/Carl_F._Gauss
In mathematics, the term modulo ("with respect to a modulus of", the Latin ablative of modulus, which itself means "a small measure") is often used to assert that two distinct mathematical objects can be regarded as equivalent if their difference is accounted for by an additional factor. It was initially introduced into mathematics in the context of modular arithmetic by Carl Friedrich Gauss in 1801.[1] Since then, the term has gained many meanings, some exact and some imprecise (such as equating "modulo" with "except for").[2] For the most part, the term occurs in statements of the form "A is the same as B modulo C", which is often equivalent to "A is the same as B up to C". Modulo is a mathematical jargon that was introduced into mathematics in the book Disquisitiones Arithmeticae by Carl Friedrich Gauss in 1801.[3] Given the integers a, b and n, the expression "a ≡ b (mod n)", pronounced "a is congruent to b modulo n", means that a − b is an integer multiple of n, or equivalently, that a and b both share the same remainder when divided by n. It is the Latin ablative of modulus, which itself means "a small measure".[4] The term has gained many meanings over the years, some exact and some imprecise. The most general precise definition is simply in terms of an equivalence relation R, where a is equivalent (or congruent) to b modulo R if aRb. Gauss originally intended to use "modulo" as follows: given the integers a, b and n, the expression a ≡ b (mod n) (pronounced "a is congruent to b modulo n") means that a − b is an integer multiple of n, or equivalently, that a and b both leave the same remainder when divided by n. For example, 38 ≡ 14 (mod 12) means that 38 − 14 = 24 is an integer multiple of 12. In computing and computer science, the term can be used in several ways. The term "modulo" can also be used differently when referring to different mathematical structures. In general, modding out is a somewhat informal term that means declaring things equivalent that otherwise would be considered distinct.
For example, suppose the sequence 1 4 2 8 5 7 is to be regarded as the same as the sequence 7 1 4 2 8 5, because each is a cyclically shifted version of the other. In that case, one is "modding out by cyclic shifts".
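The congruence definition and the informal "modding out" idea above can both be sketched in a few lines of Python; the function names here are illustrative, not from any standard library:

```python
# a ≡ b (mod n) holds exactly when n divides a - b, i.e. when a and b
# leave the same remainder on division by n.
def congruent(a: int, b: int, n: int) -> bool:
    return (a - b) % n == 0

# "Modding out by cyclic shifts": treat two sequences as equivalent when one
# is a rotation of the other, by mapping each to a canonical representative
# (its lexicographically smallest rotation).
def canonical_rotation(seq):
    rotations = (tuple(seq[i:] + seq[:i]) for i in range(len(seq)))
    return min(rotations)
```

Here congruent(38, 14, 12) holds because 38 − 14 = 24 is a multiple of 12, and canonical_rotation([1, 4, 2, 8, 5, 7]) equals canonical_rotation([7, 1, 4, 2, 8, 5]), reflecting the cyclic-shift equivalence described above.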
https://en.wikipedia.org/wiki/Modulo_(mathematics)
Theturn(symboltrorpla) is a unit ofplane anglemeasurement that is the measure of acomplete angle—the anglesubtendedby a completecircleat its center. One turn is equal to2πradians, 360degreesor 400gradians. As anangular unit, one turn also corresponds to onecycle(symbolcycorc)[1]or to onerevolution(symbolrevorr).[2]Common relatedunits of frequencyarecycles per second(cps) andrevolutions per minute(rpm).[a]The angular unit of the turn is useful in connection with, among other things,electromagnetic coils(e.g.,transformers), rotating objects, and thewinding numberof curves. Divisions of a turn include the half-turn and quarter-turn, spanning astraight angleand aright angle, respectively;metric prefixescan also be used as in, e.g., centiturns (ctr), milliturns (mtr), etc. In theISQ, an arbitrary "number of turns" (also known as "number of revolutions" or "number of cycles") is formalized as adimensionless quantitycalledrotation, defined as theratioof a given angle and a full turn. It is represented by the symbolN.(Seebelowfor the formula.) Because one turn is2π{\displaystyle 2\pi }radians, some have proposed representing2π{\displaystyle 2\pi }with the single lettertau(τ{\displaystyle \tau }). There are several unit symbols for the turn. The German standardDIN 1315(March 1974) proposed the unit symbol "pla" (from Latin:plenus angulus'full angle') for turns.[3][4]Covered inDIN 1301-1[de](October 2010), the so-calledVollwinkel('full angle') is not anSI unit. However, it is alegal unit of measurementin the EU[5][6]and Switzerland.[7] The scientific calculatorsHP 39gIIandHP Primesupport the unit symbol "tr" for turns since 2011 and 2013, respectively. 
Support for "tr" was also added to newRPL for the HP 50g in 2016, and for the hp 39g+, HP 49g+, HP 39gs, and HP 40gs in 2017.[8][9] An angular mode TURN was suggested for the WP 43S as well,[10] but the calculator instead implements "MULπ" (multiples of π) as mode and unit since 2019.[11][12] Many angle units are defined as a division of the turn. For example, the degree is defined such that one turn is 360 degrees. Using metric prefixes, the turn can be divided in 100 centiturns or 1000 milliturns, with each milliturn corresponding to an angle of 0.36°, which can also be written as 21′ 36″.[13][14] A protractor divided in centiturns is normally called a "percentage protractor". While percentage protractors have existed since 1922,[15] the terms centiturns, milliturns and microturns were introduced much later by the British astronomer Fred Hoyle in 1962.[13][14] Some measurement devices for artillery and satellite watching carry milliturn scales.[16][17] Binary fractions of a turn are also used. Sailors have traditionally divided a turn into 32 compass points, which implicitly have an angular separation of 1/32 turn. The binary degree, also known as the binary radian (or brad), is 1/256 turn.[18] The binary degree is used in computing so that an angle can be represented to the maximum possible precision in a single byte. Other measures of angle used in computing may be based on dividing one whole turn into 2^n equal parts for other values of n.[19] One turn is equal to 2π = τ ≈ 6.283185307179586[20] radians, 360 degrees, or 400 gradians. In the International System of Quantities (ISQ), rotation (symbol N) is a physical quantity defined as number of revolutions:[21] N is the number (not necessarily an integer) of revolutions, for example, of a rotating body about a given axis. Its value is given by the ratio N = 𝜑 / (2π rad), where 𝜑 denotes the measure of rotational displacement.
The above definition is part of the ISQ, formalized in the international standard ISO 80000-3 (Space and time),[21] and adopted in the International System of Units (SI).[22][23] Rotation count or number of revolutions is a quantity of dimension one, resulting from a ratio of angular displacement. It can be negative and also greater than 1 in modulus. The relationship between the quantity rotation, N, and the unit turn, tr, can be expressed as N = {𝜑}_tr, where {𝜑}_tr is the numerical value of the angle 𝜑 in units of turns (see Physical quantity § Components). In the ISQ/SI, rotation is used to derive rotational frequency (the rate of change of rotation with respect to time), denoted by n: n = dN/dt. The SI unit of rotational frequency is the reciprocal second (s−1). Common related units of frequency are hertz (Hz), cycles per second (cps), and revolutions per minute (rpm). The superseded version ISO 80000-3:2006 defined "revolution" as a special name for the dimensionless unit "one",[b] which also received other special names, such as the radian.[c] Despite their dimensional homogeneity, these two specially named dimensionless units are applicable for non-comparable kinds of quantity: rotation and angle, respectively.[25] "Cycle" is also mentioned in ISO 80000-3, in the definition of period.[d]
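The unit conversions and the ISQ rotation quantity described above can be sketched in Python; the names are illustrative:

```python
import math

TURN = 2 * math.pi  # one turn in radians (the proposed constant tau)

def turns_to_degrees(turns: float) -> float:
    return turns * 360.0           # one turn = 360 degrees

def turns_to_gradians(turns: float) -> float:
    return turns * 400.0           # one turn = 400 gradians

def rotation_count(angle_rad: float) -> float:
    """ISQ rotation N: the ratio of an angle to a full turn.
    May be negative, and greater than 1 in modulus."""
    return angle_rad / TURN
```

One milliturn (0.001 turn) converts to 0.36°, matching the value quoted above, and two full revolutions (an angle of 4π rad) give a rotation count N = 2.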
https://en.wikipedia.org/wiki/Turn_(angle)
In mathematics, a Boolean ring R is a ring for which x² = x for all x in R, that is, a ring that consists of only idempotent elements.[1][2][3] An example is the ring of integers modulo 2. Every Boolean ring gives rise to a Boolean algebra, with ring multiplication corresponding to conjunction or meet ∧, and ring addition to exclusive disjunction or symmetric difference (not disjunction ∨,[4] which would constitute a semiring). Conversely, every Boolean algebra gives rise to a Boolean ring. Boolean rings are named after the founder of Boolean algebra, George Boole. There are at least four different and incompatible systems of notation for Boolean rings and algebras. Historically, the term "Boolean ring" has been used to mean a "Boolean ring possibly without an identity", and "Boolean algebra" has been used to mean a Boolean ring with an identity. The existence of the identity is necessary to consider the ring as an algebra over the field of two elements: otherwise there cannot be a (unital) ring homomorphism of the field of two elements into the Boolean ring. (This is the same as the old use of the terms "ring" and "algebra" in measure theory.[a]) One example of a Boolean ring is the power set of any set X, where the addition in the ring is symmetric difference, and the multiplication is intersection. As another example, we can also consider the set of all finite or cofinite subsets of X, again with symmetric difference and intersection as operations. More generally, with these operations any field of sets is a Boolean ring. By Stone's representation theorem every Boolean ring is isomorphic to a field of sets (treated as a ring with these operations). Since the join operation ∨ in a Boolean algebra is often written additively, it makes sense in this context to denote ring addition by ⊕, a symbol that is often used to denote exclusive or. Given a Boolean ring R, for x and y in R we can define x ∨ y = x ⊕ y ⊕ xy, x ∧ y = xy, and ¬x = 1 ⊕ x. These operations then satisfy all of the axioms for meets, joins, and complements in a Boolean algebra.
Thus every Boolean ring becomes a Boolean algebra. Similarly, every Boolean algebra becomes a Boolean ring thus: xy = x ∧ y and x ⊕ y = (x ∨ y) ∧ ¬(x ∧ y). If a Boolean ring is translated into a Boolean algebra in this way, and then the Boolean algebra is translated into a ring, the result is the original ring. The analogous result holds beginning with a Boolean algebra. A map between two Boolean rings is a ring homomorphism if and only if it is a homomorphism of the corresponding Boolean algebras. Furthermore, a subset of a Boolean ring is a ring ideal (prime ring ideal, maximal ring ideal) if and only if it is an order ideal (prime order ideal, maximal order ideal) of the Boolean algebra. The quotient ring of a Boolean ring modulo a ring ideal corresponds to the factor algebra of the corresponding Boolean algebra modulo the corresponding order ideal. Every Boolean ring R satisfies x ⊕ x = 0 for all x in R, because we know x ⊕ x = (x ⊕ x)² = x ⊕ x ⊕ x ⊕ x, and since (R, ⊕) is an abelian group, we can subtract x ⊕ x from both sides of this equation, which gives x ⊕ x = 0. A similar proof shows that every Boolean ring is commutative: x ⊕ y = (x ⊕ y)² = x ⊕ xy ⊕ yx ⊕ y, hence xy ⊕ yx = 0, which together with x ⊕ x = 0 gives xy = yx. The property x ⊕ x = 0 shows that any Boolean ring is an associative algebra over the field F2 with two elements, in precisely one way.[citation needed] In particular, any finite Boolean ring has as cardinality a power of two. Not every unital associative algebra over F2 is a Boolean ring: consider for instance the polynomial ring F2[X]. The quotient ring R/I of any Boolean ring R modulo any ideal I is again a Boolean ring. Likewise, any subring of a Boolean ring is a Boolean ring. Any localization RS−1 of a Boolean ring R by a set S ⊆ R is a Boolean ring, since every element in the localization is idempotent. The maximal ring of quotients Q(R) (in the sense of Utumi and Lambek) of a Boolean ring R is a Boolean ring, since every partial endomorphism is idempotent.[6] Every prime ideal P in a Boolean ring R is maximal: the quotient ring R/P is an integral domain and also a Boolean ring, so it is isomorphic to the field F2, which shows the maximality of P.
Since maximal ideals are always prime, prime ideals and maximal ideals coincide in Boolean rings. Every finitely generated ideal of a Boolean ring isprincipal(indeed,(x,y) = (x+y+xy)). Furthermore, as all elements are idempotents, Boolean rings are commutativevon Neumann regular ringsand hence absolutely flat, which means that every module over them isflat. Unificationin Boolean rings isdecidable,[7]that is, algorithms exist to solve arbitrary equations over Boolean rings. Both unification and matching infinitely generatedfree Boolean rings areNP-complete, and both areNP-hardinfinitely presentedBoolean rings.[8](In fact, as any unification problemf(X) =g(X)in a Boolean ring can be rewritten as the matching problemf(X) +g(X) = 0, the problems are equivalent.) Unification in Boolean rings is unitary if all the uninterpreted function symbols are nullary and finitary otherwise (i.e. if the function symbols not occurring in the signature of Boolean rings are all constants then there exists amost general unifier, and otherwise theminimal complete set of unifiersis finite).[9]
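The power-set example can be checked directly in Python, taking symmetric difference as ring addition and intersection as ring multiplication; this is a small sketch using Python's built-in set operators, not a general Boolean-ring library:

```python
# Boolean ring on subsets of X = {1, 2, 3}: ring addition is symmetric
# difference (^), ring multiplication is intersection (&), and the ring
# identity 1 is the whole set X.
X = frozenset({1, 2, 3})
x = frozenset({1, 2})
y = frozenset({2, 3})

assert (x & x) == x                     # idempotence: x * x = x
assert (x ^ x) == frozenset()           # x + x = 0 (the empty set)
assert (x & y) == (y & x)               # commutativity of multiplication

# Recovering the Boolean algebra: join x v y = x + y + xy, complement = 1 + x.
assert (x ^ y ^ (x & y)) == (x | y)     # the join is set union
assert (X ^ x) == frozenset({3})        # the complement of x in X
```

The assertions mirror the identities proved above: idempotence, characteristic 2, commutativity, and the translation between ring and Boolean-algebra operations.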
https://en.wikipedia.org/wiki/Boolean_ring
Incomputer science, acircular buffer,circular queue,cyclic bufferorring bufferis adata structurethat uses a single, fixed-sizebufferas if it were connected end-to-end. This structure lends itself easily to bufferingdata streams.[1]There were early circular buffer implementations in hardware.[2][3] A circular buffer first starts out empty and has a set length. In the diagram below is a 7-element buffer: Assume that 1 is written in the center of a circular buffer (the exact starting location is not important in a circular buffer): Then assume that two more elements are added to the circular buffer — 2 & 3 — which get put after 1: If two elements are removed, the two oldest values inside of the circular buffer would be removed. Circular buffers use FIFO (first in, first out) logic. In the example, 1 & 2 were the first to enter the circular buffer, they are the first to be removed, leaving 3 inside of the buffer. If the buffer has 7 elements, then it is completely full: A property of the circular buffer is that when it is full and a subsequent write is performed, then it starts overwriting the oldest data. In the current example, two more elements — A & B — are added and theyoverwritethe 3 & 4: Alternatively, the routines that manage the buffer could prevent overwriting the data and return an error or raise anexception. Whether or not data is overwritten is up to the semantics of the buffer routines or the application using the circular buffer. Finally, if two elements are now removed then what would be removed isnotA & B, but 5 & 6 because 5 & 6 are now the oldest elements, yielding the buffer with: The useful property of a circular buffer is that it does not need to have its elements shuffled around when one is consumed. (If a non-circular buffer were used then it would be necessary to shift all elements when one is consumed.) 
In other words, the circular buffer is well-suited as a FIFO (first in, first out) buffer, while a standard, non-circular buffer is well suited as a LIFO (last in, first out) buffer. Circular buffering makes a good implementation strategy for a queue that has a fixed maximum size. Should a maximum size be adopted for a queue, then a circular buffer is an ideal implementation; all queue operations are constant time. However, expanding a circular buffer requires shifting memory, which is comparatively costly. For arbitrarily expanding queues, a linked list approach may be preferred instead. In some situations, an overwriting circular buffer can be used, e.g. in multimedia. If the buffer is used as the bounded buffer in the producer–consumer problem, then it is probably desired for the producer (e.g., an audio generator) to overwrite old data if the consumer (e.g., the sound card) is momentarily unable to keep up. Also, the LZ77 family of lossless data compression algorithms operates on the assumption that strings seen more recently in a data stream are more likely to occur soon in the stream. Implementations store the most recent data in a circular buffer. A circular buffer can be implemented using a pointer and four integers.[4] Consider, for example, a partially full buffer with Length = 7, and a full buffer in which four elements (numbers 1 through 4) have been overwritten. In the beginning the indexes end and start are set to 0. The circular buffer write operation writes an element to the end index position, and the end index is incremented to the next buffer position. The circular buffer read operation reads an element from the start index position, and the start index is incremented to the next buffer position.
The start and end indexes alone are not enough to distinguish between the buffer-full and buffer-empty states while also utilizing all buffer slots,[5] but they can be if the buffer only has a maximum in-use size of Length − 1.[6] In this case, the buffer is empty if the start and end indexes are equal, and full when the in-use size is Length − 1. Another solution is to keep a separate integer count that is incremented at a write operation and decremented at a read operation. Then checking for emptiness means testing count equals 0, and checking for fullness means testing count equals Length.[7] A typical implementation provides a function put() that puts an item in the buffer and a function get() that gets an item from the buffer; both functions take care of the capacity of the buffer. A circular-buffer implementation may be optimized by mapping the underlying buffer to two contiguous regions of virtual memory.[8][disputed–discuss] (Naturally, the underlying buffer's length must then equal some multiple of the system's page size.) Reading from and writing to the circular buffer may then be carried out with greater efficiency by means of direct memory access; those accesses which fall beyond the end of the first virtual-memory region will automatically wrap around to the beginning of the underlying buffer. When the read offset is advanced into the second virtual-memory region, both offsets, read and write, are decremented by the length of the underlying buffer. Perhaps the most common version of the circular buffer uses 8-bit bytes as elements. Some implementations of the circular buffer use fixed-length elements that are bigger than 8-bit bytes: 16-bit integers for audio buffers, 53-byte ATM cells for telecom buffers, etc. Each item is contiguous and has the correct data alignment, so software reading and writing these values can be faster than software that handles non-contiguous and non-aligned values.
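As a sketch of the count-based scheme just described (not the original cited C listing), a circular buffer with put() and get() operations can be written as:

```python
class CircularBuffer:
    """Fixed-size FIFO ring buffer using a count to tell full from empty."""

    def __init__(self, length: int):
        self.buf = [None] * length
        self.start = 0   # read index
        self.end = 0     # write index
        self.count = 0   # number of items currently stored

    def put(self, item):
        if self.count == len(self.buf):
            raise OverflowError("buffer full")   # alternatively: overwrite oldest
        self.buf[self.end] = item
        self.end = (self.end + 1) % len(self.buf)
        self.count += 1

    def get(self):
        if self.count == 0:
            raise IndexError("buffer empty")
        item = self.buf[self.start]
        self.start = (self.start + 1) % len(self.buf)
        self.count -= 1
        return item
```

After putting 1, 2 and 3 into a buffer of length 3, get() returns 1 first, in FIFO order; a further put() then reuses the slot just freed, with the indexes wrapping around modulo the buffer length.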
Ping-pong bufferingcan be considered a very specialized circular buffer with exactly two large fixed-length elements. Thebip buffer(bipartite buffer) is very similar to a circular buffer, except it always returns contiguous blocks which can be variable length. This offers nearly all the efficiency advantages of a circular buffer while maintaining the ability for the buffer to be used in APIs that only accept contiguous blocks.[9] Fixed-sized compressed circular buffers use an alternative indexing strategy based on elementary number theory to maintain a fixed-sized compressed representation of the entire data sequence.[10]
https://en.wikipedia.org/wiki/Circular_buffer
Division is one of the four basic operations of arithmetic. The other operations are addition, subtraction, and multiplication. What is being divided is called the dividend, which is divided by the divisor, and the result is called the quotient. At an elementary level the division of two natural numbers is, among other possible interpretations, the process of calculating the number of times one number is contained within another.[1]: 7 For example, if 20 apples are divided evenly between 4 people, everyone receives 5 apples (see picture). However, this number of times or the number contained (divisor) need not be integers. The division with remainder or Euclidean division of two natural numbers provides an integer quotient, which is the number of times the second number is completely contained in the first number, and a remainder, which is the part of the first number that remains when, in the course of computing the quotient, no further full chunk of the size of the second number can be allocated. For example, if 21 apples are divided between 4 people, everyone again receives 5 apples, and 1 apple remains. For division to always yield one number rather than an integer quotient plus a remainder, the natural numbers must be extended to rational numbers or real numbers. In these enlarged number systems, division is the inverse operation to multiplication, that is a = c/b means a × b = c, as long as b is not zero. If b = 0, then this is a division by zero, which is not defined.[a][4]: 246 In the 21-apples example, everyone would receive 5 apples and a quarter of an apple, thus avoiding any leftover. Both forms of division appear in various algebraic structures, different ways of defining mathematical structure. Those in which a Euclidean division (with remainder) is defined are called Euclidean domains and include polynomial rings in one indeterminate (which define multiplication and addition over single-variabled formulas).
Those in which a division (with a single result) by all nonzero elements is defined are calledfieldsanddivision rings. In aringthe elements by which division is always possible are called theunits(for example, 1 and −1 in the ring of integers). Another generalization of division to algebraic structures is thequotient group, in which the result of "division" is a group rather than a number. The simplest way of viewing division is in terms ofquotition and partition: from the quotition perspective,20 / 5means the number of 5s that must be added to get 20. In terms of partition,20 / 5means the size of each of 5 parts into which a set of size 20 is divided. For example, 20 apples divide into five groups of four apples, meaning that "twenty divided by five is equal to four". This is denoted as20 / 5 = 4, or⁠20/5⁠= 4.[2]In the example, 20 is the dividend, 5 is the divisor, and 4 is the quotient. Unlike the other basic operations, when dividing natural numbers there is sometimes aremainderthat will not go evenly into the dividend; for example,10 / 3leaves a remainder of 1, as 10 is not a multiple of 3. Sometimes this remainder is added to the quotient as afractional part, so10 / 3is equal to⁠3+1/3⁠or3.33..., but in the context ofintegerdivision, where numbers have no fractional part, the remainder is kept separately (or exceptionally, discarded orrounded).[5]When the remainder is kept as a fraction, it leads to arational number. The set of all rational numbers is created by extending the integers with all possible results of divisions of integers. Unlike multiplication and addition, division is notcommutative, meaning thata/bis not always equal tob/a.[6]Division is also not, in general,associative, meaning that when dividing multiple times, the order of division can change the result.[7]For example,(24 / 6) / 2 = 2, but24 / (6 / 2) = 8(where the use of parentheses indicates that the operations inside parentheses are performed before the operations outside parentheses). 
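These algebraic properties of division, including the 24 / 6 / 2 example above, can be verified with exact rational arithmetic; a small Python sketch:

```python
from fractions import Fraction as F

a, b, c = F(24), F(6), F(2)

# Not associative: grouping changes the result.
assert (a / b) / c == F(2)
assert a / (b / c) == F(8)

# Right-distributive over addition: (a + b) / c == a/c + b/c.
assert (a + b) / c == a / c + b / c

# But not left-distributive: a / (b + c) differs from a/b + a/c in general.
assert a / (b + c) == F(3)        # 24 / 8
assert a / b + a / c == F(16)     # 4 + 12
```

Using fractions.Fraction rather than floats keeps every quotient exact, so the equalities and inequalities hold without rounding concerns.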
Division is traditionally considered as left-associative. That is, if there are multiple divisions in a row, the order of calculation goes from left to right:[8][9] a / b / c = (a / b) / c. Division is right-distributive over addition and subtraction, in the sense that (a ± b) / c = a/c ± b/c. This is the same for multiplication, as (a + b) × c = a × c + b × c. However, division is not left-distributive, as a / (b + c) ≠ a/b + a/c in general. This is unlike the case in multiplication, which is both left-distributive and right-distributive, and thus distributive. Division is often shown in algebra and science by placing the dividend over the divisor with a horizontal line, also called a fraction bar, between them. For example, "a divided by b" can be written as a fraction, which can also be read out loud as "divide a by b" or "a over b". A way to express division all on one line is to write the dividend (or numerator), then a slash, then the divisor (or denominator), as follows: a/b. This is the usual way of specifying division in most computer programming languages, since it can easily be typed as a simple sequence of ASCII characters. (It is also the only notation used for quotient objects in abstract algebra.) Some mathematical software, such as MATLAB and GNU Octave, allows the operands to be written in the reverse order by using the backslash as the division operator: b\a. A typographical variation halfway between these two forms uses a solidus (fraction slash) but elevates the dividend and lowers the divisor. Any of these forms can be used to display a fraction. A fraction is a division expression where both dividend and divisor are integers (typically called the numerator and denominator), and there is no implication that the division must be evaluated further. A second way to show division is to use the division sign (÷, also known as obelus, though the term has additional meanings), common in arithmetic, in this manner: a ÷ b. This form is infrequent except in elementary arithmetic. ISO 80000-2-10.6 states it should not be used.
This division sign is also used alone to represent the division operation itself, as for instance as a label on a key of a calculator. The obelus was introduced by the Swiss mathematician Johann Rahn in 1659 in Teutsche Algebra.[10]: 211 The ÷ symbol is used to indicate subtraction in some European countries, so its use may be misunderstood.[11] In some non-English-speaking countries, a colon is used to denote division: a : b.[12] This notation was introduced by Gottfried Wilhelm Leibniz in his 1684 Acta eruditorum.[10]: 295 Leibniz disliked having separate symbols for ratio and division. However, in English usage the colon is restricted to expressing the related concept of ratios. Since the 19th century, US textbooks have used b)a, or b)a with a vinculum drawn over the dividend, to denote a divided by b, especially when discussing long division. The history of this notation is not entirely clear because it evolved over time.[13] Division is often introduced through the notion of "sharing out" a set of objects, for example a pile of lollies, into a number of equal portions. Distributing the objects several at a time in each round of sharing to each portion leads to the idea of "chunking", a form of division where one repeatedly subtracts multiples of the divisor from the dividend itself. By allowing one to subtract more multiples than what the partial remainder allows at a given stage, more flexible methods, such as the bidirectional variant of chunking, can be developed as well. More systematically and more efficiently, two integers can be divided with pencil and paper with the method of short division, if the divisor is small, or long division, if the divisor is larger. If the dividend has a fractional part (expressed as a decimal fraction), one can continue the procedure past the ones place as far as desired.
If the divisor has a fractional part, one can restate the problem by moving the decimal to the right in both numbers until the divisor has no fraction, which can make the problem easier to solve (e.g., 10/2.5 = 100/25 = 4). Division can be calculated with anabacus.[14] Logarithm tablescan be used to divide two numbers, by subtracting the two numbers' logarithms, then looking up theantilogarithmof the result. Division can be calculated with aslide ruleby aligning the divisor on the C scale with the dividend on the D scale. The quotient can be found on the D scale where it is aligned with the left index on the C scale. The user is responsible, however, for mentally keeping track of the decimal point. Moderncalculatorsandcomputerscompute division either by methods similar to long division, or by faster methods; seeDivision algorithm. Inmodular arithmetic(modulo a prime number) and forreal numbers, nonzero numbers have amultiplicative inverse. In these cases, a division byxmay be computed as the product by the multiplicative inverse ofx. This approach is often associated with the faster methods in computer arithmetic. Euclidean division is the mathematical formulation of the outcome of the usual process of division of integers. It asserts that, given two integers,a, thedividend, andb, thedivisor, such thatb≠ 0, there areuniqueintegersq, thequotient, andr, the remainder, such thata=bq+rand 0 ≤r< |b|, where |b| denotes theabsolute valueofb. Integers are notclosedunder division. Apart from division by zero being undefined, the quotient is not an integer unless the dividend is an integer multiple of the divisor. For example, 26 cannot be divided by 11 to give an integer. Such a case uses one of five approaches: Dividing integers in acomputer programrequires special care. Someprogramming languagestreat integer division as in case 5 above, so the answer is an integer. 
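Euclidean division as stated above can be checked with Python's built-in divmod. Note that Python's floor division matches the condition 0 ≤ r < |b| only for a positive divisor (F-division), while languages that truncate toward zero (T-division) give a remainder with the sign of the dividend:

```python
import math

a, b = -7, 2

# F-division (rounding toward -infinity): Python's native // and %.
q_floor, r_floor = divmod(a, b)
assert (q_floor, r_floor) == (-4, 1)
assert a == b * q_floor + r_floor and 0 <= r_floor < abs(b)

# T-division (rounding toward zero), as in C and many other languages.
q_trunc = math.trunc(a / b)
r_trunc = a - b * q_trunc
assert (q_trunc, r_trunc) == (-3, -1)   # remainder takes the dividend's sign
assert a == b * q_trunc + r_trunc
```

Both conventions satisfy a = bq + r; they differ only in how the quotient is rounded when the operands have opposite signs, which is exactly the variation described above.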
Other languages, such as MATLAB and every computer algebra system, return a rational number as the answer, as in case 3 above. These languages also provide functions to get the results of the other cases, either directly or from the result of case 3. Names and symbols used for integer division include div, /, \, and %.[citation needed] Definitions vary regarding integer division when the dividend or the divisor is negative: rounding may be toward zero (so-called T-division) or toward −∞ (F-division); rarer styles can occur (see modulo operation for the details). Divisibility rules can sometimes be used to quickly determine whether one integer divides exactly into another. The result of dividing two rational numbers is another rational number when the divisor is not 0. The division of two rational numbers p/q and r/s can be computed as $\frac{p/q}{r/s} = \frac{p}{q} \times \frac{s}{r} = \frac{ps}{qr}.$ All four quantities are integers, and only p may be 0. This definition ensures that division is the inverse operation of multiplication. Division of two real numbers results in another real number (when the divisor is nonzero). It is defined such that a/b = c if and only if a = cb and b ≠ 0. Dividing two complex numbers (when the divisor is nonzero) results in another complex number, which is found using the conjugate of the denominator: $\frac{p+iq}{r+is} = \frac{(p+iq)(r-is)}{(r+is)(r-is)} = \frac{pr+qs+i(qr-ps)}{r^{2}+s^{2}} = \frac{pr+qs}{r^{2}+s^{2}} + i\,\frac{qr-ps}{r^{2}+s^{2}}.$ This process of multiplying and dividing by $r-is$ is called 'realisation' or (by analogy) rationalisation. All four quantities p, q, r, s are real numbers, and r and s may not both be 0.
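The difference between T-division (round toward zero) and F-division (round toward −∞) mentioned above only shows up when the signs of dividend and divisor differ. A small illustrative Python sketch:

```python
def t_division(a: int, b: int) -> int:
    """Integer division rounding the quotient toward zero (T-division,
    the convention used by C and many other languages)."""
    q = abs(a) // abs(b)
    return q if (a < 0) == (b < 0) else -q

def f_division(a: int, b: int) -> int:
    """Integer division rounding toward -infinity (F-division, which is
    what Python's built-in // operator does)."""
    return a // b
```

For −7 divided by 2, T-division gives −3 while F-division gives −4; for 7 divided by 2 both give 3.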
Division for complex numbers expressed in polar form is simpler than the definition above: $\frac{pe^{iq}}{re^{is}} = \frac{pe^{iq}e^{-is}}{re^{is}e^{-is}} = \frac{p}{r}e^{i(q-s)}.$ Again all four quantities p, q, r, s are real numbers, and r may not be 0. One can define the division operation for polynomials in one variable over a field. Then, as in the case of integers, one has a remainder. See Euclidean division of polynomials, and, for hand-written computation, polynomial long division or synthetic division. One can define a division operation for matrices. The usual way to do this is to define A/B = AB⁻¹, where B⁻¹ denotes the inverse of B, but it is far more common to write out AB⁻¹ explicitly to avoid confusion. An elementwise division can also be defined in terms of the Hadamard product. Because matrix multiplication is not commutative, one can also define a left division or so-called backslash-division as A \ B = A⁻¹B. For this to be well defined, B⁻¹ need not exist; however, A⁻¹ does need to exist. To avoid confusion, division as defined by A/B = AB⁻¹ is sometimes called right division or slash-division in this context. With left and right division defined this way, A / (BC) is in general not the same as (A / B) / C, nor is (AB) \ C the same as A \ (B \ C). However, it holds that A / (BC) = (A / C) / B and (AB) \ C = B \ (A \ C). To avoid problems when A⁻¹ and/or B⁻¹ do not exist, division can also be defined as multiplication by the pseudoinverse. That is, A/B = AB⁺ and A \ B = A⁺B, where A⁺ and B⁺ denote the pseudoinverses of A and B. In abstract algebra, given a magma with binary operation ∗ (which could nominally be termed multiplication), left division of b by a (written a \ b) is typically defined as the solution x to the equation a ∗ x = b, if this exists and is unique. Similarly, right division of b by a (written b / a) is the solution y to the equation y ∗ a = b. Division in this sense does not require ∗ to have any particular properties (such as commutativity, associativity, or an identity element).
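Both the conjugate ("realisation") formula and the polar-form formula for complex division can be checked numerically. The following is an illustrative Python sketch using the standard `cmath` module as a reference implementation:

```python
import cmath

def divide_conjugate(p: float, q: float, r: float, s: float):
    """(p + qi) / (r + si) via multiplying top and bottom by r - si."""
    d = r * r + s * s                # (r + si)(r - si) = r^2 + s^2, real
    return ((p * r + q * s) / d, (q * r - p * s) / d)

def divide_polar(p: float, q: float, r: float, s: float) -> complex:
    """(p e^{iq}) / (r e^{is}) = (p / r) e^{i(q - s)}, polar form."""
    return cmath.rect(p / r, q - s)  # convert back to rectangular form
```

For example, (1 + 2i)/(3 + 4i) has denominator 3² + 4² = 25, giving real part 11/25 = 0.44 and imaginary part 2/25 = 0.08.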
A magma for which both a \ b and b / a exist and are unique for all a and all b (the Latin square property) is a quasigroup. In a quasigroup, division in this sense is always possible, even without an identity element and hence without inverses. "Division" in the sense of "cancellation" can be done in any magma by an element with the cancellation property. Examples include matrix algebras, quaternion algebras, and quasigroups. In an integral domain, where not every element need have an inverse, division by a cancellative element a can still be performed on elements of the form ab or ca by left or right cancellation, respectively. If a ring is finite and every nonzero element is cancellative, then by an application of the pigeonhole principle, every nonzero element of the ring is invertible, and division by any nonzero element is possible. To learn about when algebras (in the technical sense) have a division operation, refer to the page on division algebras. In particular Bott periodicity can be used to show that any real normed division algebra must be isomorphic to either the real numbers R, the complex numbers C, the quaternions H, or the octonions O. The derivative of the quotient of two functions is given by the quotient rule: $\left(\frac{f}{g}\right)' = \frac{f'g - fg'}{g^{2}}.$ Division of any number by zero in most mathematical systems is undefined, because zero multiplied by any finite number always results in a product of zero.[15] Entry of such an expression into most calculators produces an error message. However, in certain higher-level mathematics division by zero is possible by the zero ring and algebras such as wheels.[16] In these algebras, the meaning of division is different from traditional definitions.
https://en.wikipedia.org/wiki/Division_(mathematics)
[Table of Legendre symbol values omitted in this extract. Only 0 ≤ a < p are shown, since due to the first property below any other a can be reduced modulo p. Quadratic residues are highlighted in yellow, and correspond precisely to the values 0 and 1.] In number theory, the Legendre symbol is a multiplicative function with values 1, −1, 0 that is a quadratic character modulo an odd prime number p: its value at a (nonzero) quadratic residue mod p is 1 and at a non-quadratic residue (non-residue) is −1. Its value at zero is 0. The Legendre symbol was introduced by Adrien-Marie Legendre in 1797 or 1798[1] in the course of his attempts at proving the law of quadratic reciprocity. Generalizations of the symbol include the Jacobi symbol and Dirichlet characters of higher order. The notational convenience of the Legendre symbol inspired introduction of several other "symbols" used in algebraic number theory, such as the Hilbert symbol and the Artin symbol. Let p be an odd prime number. An integer a is a quadratic residue modulo p if it is congruent to a perfect square modulo p and is a quadratic nonresidue modulo p otherwise. The Legendre symbol is a function of a and p defined as $\left(\frac{a}{p}\right) = \begin{cases} 1 & \text{if } a \text{ is a quadratic residue mod } p \text{ and } a \not\equiv 0 \pmod{p}, \\ -1 & \text{if } a \text{ is a quadratic non-residue mod } p, \\ 0 & \text{if } a \equiv 0 \pmod{p}. \end{cases}$ Legendre's original definition was by means of the explicit formula $\left(\frac{a}{p}\right) \equiv a^{\frac{p-1}{2}} \pmod{p}, \quad \left(\frac{a}{p}\right) \in \{-1, 0, 1\}.$ By Euler's criterion, which had been discovered earlier and was known to Legendre, these two definitions are equivalent.[2] Thus Legendre's contribution lay in introducing a convenient notation that recorded quadratic residuosity of a mod p. For the sake of comparison, Gauss used the notation aRp, aNp according to whether a is a residue or a non-residue modulo p. For typographical convenience, the Legendre symbol is sometimes written as (a|p) or (a/p). For fixed p, the sequence $\left(\tfrac{0}{p}\right), \left(\tfrac{1}{p}\right), \left(\tfrac{2}{p}\right), \ldots$ is periodic with period p and is sometimes called the Legendre sequence.
Each row in the following table exhibits periodicity, just as described. There are a number of useful properties of the Legendre symbol which, together with the law ofquadratic reciprocity, can be used to compute it efficiently. Sums of the form∑(f(a)p){\displaystyle \sum \left({\frac {f\left(a\right)}{p}}\right)}, typically taken over all integers in the range[0,p−1]{\displaystyle \left[0,p-1\right]}for some functionf{\displaystyle f}, are a special case ofcharacter sums. They are of interest in the distribution ofquadratic residuesmodulo a prime number. Letpandqbe distinct odd primes. Using the Legendre symbol, thequadratic reciprocitylaw can be stated concisely: Manyproofs of quadratic reciprocityare based on Euler's criterion In addition, several alternative expressions for the Legendre symbol were devised in order to produce various proofs of the quadratic reciprocity law. The above properties, including the law of quadratic reciprocity, can be used to evaluate any Legendre symbol. For example: Or using a more efficient computation: The articleJacobi symbolhas more examples of Legendre symbol manipulation. Since no efficientfactorizationalgorithm is known, but efficientmodular exponentiationalgorithms are, in general it is more efficient to use Legendre's original definition, e.g. usingrepeated squaringmodulo 331, reducing every value using the modulus after every operation to avoid computation with large integers. The following is a table of values of Legendre symbol(ap){\displaystyle \left({\frac {a}{p}}\right)}withp≤ 127,a≤ 30,podd prime.
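The computation via Legendre's original definition and repeated squaring described above fits in a few lines of Python, since the built-in three-argument `pow` performs fast modular exponentiation, reducing modulo p after every multiplication. An illustrative sketch:

```python
def legendre(a: int, p: int) -> int:
    """Legendre symbol (a|p) for an odd prime p, via Euler's criterion:
    a^((p-1)/2) mod p is 1 for residues, p-1 for non-residues, 0 if p | a."""
    t = pow(a, (p - 1) // 2, p)   # fast modular exponentiation
    return -1 if t == p - 1 else t
```

For p = 11 the nonzero quadratic residues are 1, 3, 4, 5, 9, so `legendre(5, 11)` is 1 while `legendre(2, 11)` is −1.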
https://en.wikipedia.org/wiki/Legendre_symbol
Innumber theory, thenthPisano period, written asπ(n), is theperiodwith which thesequenceofFibonacci numberstakenmodulonrepeats. Pisano periods are named after Leonardo Pisano, better known asFibonacci. The existence of periodic functions in Fibonacci numbers was noted byJoseph Louis Lagrangein 1774.[1][2] The Fibonacci numbers are the numbers in theinteger sequence: defined by therecurrence relation For anyintegern, the sequence of Fibonacci numbersFitakenmodulonis periodic. The Pisano period, denotedπ(n), is the length of the period of this sequence. For example, the sequence of Fibonacci numbersmodulo3 begins: This sequence has period 8, soπ(3) = 8. With the exception ofπ(2) = 3, the Pisano periodπ(n) is alwayseven. A proof of this can be given by observing thatπ(n) is equal to the order of theFibonacci matrix in thegeneral linear groupGL2(Zn){\displaystyle {\text{GL}}_{2}(\mathbb {Z} _{n})}ofinvertible2 by 2matricesin thefinite ringZn{\displaystyle \mathbb {Z} _{n}}ofintegers modulon. SinceQhas determinant −1, the determinant ofQπ(n)is (−1)π(n), and since this must equal 1 inZn{\displaystyle \mathbb {Z} _{n}}, eithern≤ 2 orπ(n) is even.[3] SinceF0=0{\displaystyle F_{0}=0}andF1=1{\displaystyle F_{1}=1}we have thatn{\displaystyle n}dividesFπ(n){\displaystyle F_{\pi (n)}}and(Fπ(n)+1−1){\displaystyle (F_{\pi (n)+1}-1)}. Ifmandnarecoprime, thenπ(mn) is theleast common multipleofπ(m) andπ(n), by theChinese remainder theorem. For example,π(3) = 8 andπ(4) = 6 implyπ(12) = 24. Thus the study of Pisano periods may be reduced to that of Pisano periods ofprime powersq=pk, fork≥ 1. Ifpisprime,π(pk) dividespk–1π(p). It is unknown ifπ(pk)=pk−1π(p){\displaystyle \pi (p^{k})=p^{k-1}\pi (p)}for every primepand integerk> 1. Any primepproviding acounterexamplewould necessarily be aWall–Sun–Sun prime, and conversely every Wall–Sun–Sun primepgives a counterexample (setk= 2). So the study of Pisano periods may be further reduced to that of Pisano periods of primes. 
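The definition of the Pisano period translates directly into a short search: iterate the Fibonacci recurrence modulo n until the initial pair (0, 1) recurs. An illustrative Python sketch (the loop always terminates, since there are only finitely many pairs modulo n):

```python
def pisano(n: int) -> int:
    """Pisano period pi(n): period of the Fibonacci sequence modulo n,
    for n >= 2."""
    a, b, k = 0, 1, 0
    while True:
        a, b = b, (a + b) % n   # one step of the Fibonacci recurrence mod n
        k += 1
        if (a, b) == (0, 1):    # the sequence repeats from here
            return k
```

This reproduces the values quoted above: π(3) = 8, π(2) = 3, and π(12) = lcm(π(3), π(4)) = 24.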
In this regard, two primes are anomalous. The prime 2 has anoddPisano period, and the prime 5 has period that is relatively much larger than the Pisano period of any other prime. The periods of powers of these primes are as follows: From these it follows that ifn= 2·5kthenπ(n) = 6n. The remaining primes all lie in the residue classesp≡±1(mod10){\displaystyle p\equiv \pm 1{\pmod {10}}}orp≡±3(mod10){\displaystyle p\equiv \pm 3{\pmod {10}}}. Ifpis a prime different from 2 and 5, then the modulopanalogue ofBinet's formulaimplies thatπ(p) is themultiplicative orderof arootofx2−x− 1modulop. Ifp≡±1(mod10){\displaystyle p\equiv \pm 1{\pmod {10}}}, these roots belong toFp=Z/pZ{\displaystyle \mathbb {F} _{p}=\mathbb {Z} /p\mathbb {Z} }(byquadratic reciprocity). Thus their order,π(p) is adivisorofp− 1. For example,π(11) = 11 − 1 = 10 andπ(29) = (29 − 1)/2 = 14. Ifp≡±3(mod10),{\displaystyle p\equiv \pm 3{\pmod {10}},}the roots modulopofx2−x− 1do not belong toFp{\displaystyle \mathbb {F} _{p}}(by quadratic reciprocity again), and belong to thefinite field As theFrobenius automorphismx↦xp{\displaystyle x\mapsto x^{p}}exchanges these roots, it follows that, denoting them byrands, we haverp=s, and thusrp+1= –1. That isr2(p+1)= 1, and the Pisano period, which is the order ofr, is the quotient of 2(p+1) by an odd divisor. This quotient is always a multiple of 4. The first examples of such ap, for whichπ(p) is smaller than 2(p+1), areπ(47) = 2(47 + 1)/3 = 32,π(107) = 2(107 + 1)/3 = 72 andπ(113) = 2(113 + 1)/3 = 76. (See the table below) It follows from above results, that ifn=pkis an odd prime power such thatπ(n) >n, thenπ(n)/4 is an integer that is not greater thann. The multiplicative property of Pisano periods imply thus that The first examples areπ(10) = 60 andπ(50) = 300. Ifnis not of the form 2 · 5r, thenπ(n) ≤ 4n. 
The first twelve Pisano periods (sequenceA001175in theOEIS) and their cycles (with spaces before the zeros for readability) are[5](using X and E for ten and eleven, respectively): The first 144 Pisano periods are shown in the following table: Ifn=F(2k) (k≥ 2), then π(n) = 4k; ifn=F(2k+ 1) (k≥ 2), then π(n) = 8k+ 4. That is, if the modulo base is a Fibonacci number (≥ 3) with an even index, the period is twice the index and the cycle has two zeros. If the base is a Fibonacci number (≥ 5) with an odd index, the period is four times the index and the cycle has four zeros. Ifn=L(2k) (k≥ 1), then π(n) = 8k; ifn=L(2k+ 1) (k≥ 1), then π(n) = 4k+ 2. That is, if the modulo base is a Lucas number (≥ 3) with an even index, the period is four times the index. If the base is a Lucas number (≥ 4) with an odd index, the period is twice the index. For evenk, the cycle has two zeros. For oddk, the cycle has only one zero, and the second half of the cycle, which is of course equal to the part on the left of 0, consists of alternatingly numbersF(2m+ 1) andn−F(2m), withmdecreasing. The number of occurrences of 0 per cycle is 1, 2, or 4. Letpbe the number after the first 0 after the combination 0, 1. Let the distance between the 0s beq. For generalized Fibonacci sequences (satisfying the same recurrence relation, but with other initial values, e.g. the Lucas numbers) the number of occurrences of 0 per cycle is 0, 1, 2, or 4. The ratio of the Pisano period ofnand the number of zeros modulonin the cycle gives therank of apparitionorFibonacci entry pointofn. That is, smallest indexksuch thatndividesF(k). They are: In Renault's paper the number of zeros is called the "order" ofFmodm, denotedω(m){\displaystyle \omega (m)}, and the "rank of apparition" is called the "rank" and denotedα(m){\displaystyle \alpha (m)}.[6] According to Wall's conjecture,α(pe)=pe−1α(p){\displaystyle \alpha (p^{e})=p^{e-1}\alpha (p)}. 
Ifm{\displaystyle m}hasprime factorizationm=p1e1p2e2…pnen{\displaystyle m=p_{1}^{e_{1}}p_{2}^{e_{2}}\dots p_{n}^{e_{n}}}thenα(m)=lcm⁡(α(p1e1),α(p2e2),…,α(pnen)){\displaystyle \alpha (m)=\operatorname {lcm} (\alpha (p_{1}^{e_{1}}),\alpha (p_{2}^{e_{2}}),\dots ,\alpha (p_{n}^{e_{n}}))}.[6] ThePisano periodsofLucas numbersare ThePisano periodsofPell numbers(or 2-Fibonacci numbers) are ThePisano periodsof 3-Fibonacci numbers are ThePisano periodsofJacobsthal numbers(or (1,2)-Fibonacci numbers) are ThePisano periodsof (1,3)-Fibonacci numbers are ThePisano periodsofTribonacci numbers(or 3-step Fibonacci numbers) are ThePisano periodsofTetranacci numbers(or 4-step Fibonacci numbers) are See alsogeneralizations of Fibonacci numbers. Pisano periods can be analyzed usingalgebraic number theory. Letπk(n){\displaystyle \pi _{k}(n)}be then-th Pisano period of thek-Fibonacci sequenceFk(n) (kcan be anynatural number, these sequences are defined asFk(0) = 0,Fk(1) = 1, and for any natural numbern> 1,Fk(n) =kFk(n−1) +Fk(n−2)). Ifmandnarecoprime, thenπk(m⋅n)=lcm(πk(m),πk(n)){\displaystyle \pi _{k}(m\cdot n)=\mathrm {lcm} (\pi _{k}(m),\pi _{k}(n))}, by theChinese remainder theorem: two numbers are congruent modulomnif and only if they are congruent modulomand modulon, assuming these latter are coprime. For example,π1(3)=8{\displaystyle \pi _{1}(3)=8}andπ1(4)=6,{\displaystyle \pi _{1}(4)=6,}soπ1(12=3⋅4)=lcm(π1(3),π1(4))=lcm(8,6)=24.{\displaystyle \pi _{1}(12=3\cdot 4)=\mathrm {lcm} (\pi _{1}(3),\pi _{1}(4))=\mathrm {lcm} (8,6)=24.}Thus it suffices to compute Pisano periods forprime powersq=pn.{\displaystyle q=p^{n}.}(Usually,πk(pn)=pn−1⋅πk(p){\displaystyle \pi _{k}(p^{n})=p^{n-1}\cdot \pi _{k}(p)}, unlesspisk-Wall–Sun–Sun prime, ork-Fibonacci–Wieferich prime, that is,p2dividesFk(p− 1) orFk(p+ 1), whereFkis thek-Fibonacci sequence, for example, 241 is a 3-Wall–Sun–Sun prime, since 2412dividesF3(242).) 
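The rank of apparition α(m) described above, together with its lcm multiplicative property, can be illustrated with a small Python sketch (not from the article; the search terminates because every m divides some Fibonacci number):

```python
from math import lcm

def alpha(n: int) -> int:
    """Rank of apparition (Fibonacci entry point): the smallest k >= 1
    such that n divides F(k)."""
    a, b, k = 0, 1, 0
    while True:
        a, b = b, (a + b) % n   # after k steps, a == F(k) mod n
        k += 1
        if a == 0:              # n divides F(k)
            return k
```

For example α(2) = 3 and α(5) = 5, so α(10) = lcm(3, 5) = 15, and indeed F(15) = 610 is the first Fibonacci number divisible by 10.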
For prime numbersp, these can be analyzed by usingBinet's formula: Ifk2+ 4 is aquadratic residuemodulop(wherep> 2 andpdoes not dividek2+ 4), thenk2+4,1/2,{\displaystyle {\sqrt {k^{2}+4}},1/2,}andk/k2+4{\displaystyle k/{\sqrt {k^{2}+4}}}can be expressed as integers modulop, and thus Binet's formula can be expressed over integers modulop, and thus the Pisano period divides thetotientϕ(p)=p−1{\displaystyle \phi (p)=p-1}, since any power (such asφkn{\displaystyle \varphi _{k}^{n}}) has period dividingϕ(p),{\displaystyle \phi (p),}as this is theorderof thegroup of unitsmodulop. Fork= 1, this first occurs forp= 11, where 42= 16 ≡ 5 (mod 11) and 2 · 6 = 12 ≡ 1 (mod 11) and 4 · 3 = 12 ≡ 1 (mod 11) so 4 =√5, 6 = 1/2 and 1/√5= 3, yieldingφ= (1 + 4) · 6 = 30 ≡ 8 (mod 11) and the congruence Another example, which shows that the period can properly dividep− 1, isπ1(29) = 14. Ifk2+ 4 is not a quadratic residue modulop, then Binet's formula is instead defined over thequadratic extensionfield(Z/p)[k2+4]{\displaystyle (\mathbb {Z} /p)[{\sqrt {k^{2}+4}}]}, which hasp2elements and whose group of units thus has orderp2− 1, and thus the Pisano period dividesp2− 1. For example, forp= 3 one hasπ1(3) = 8 which equals 32− 1 = 8; forp= 7, one hasπ1(7) = 16, which properly divides 72− 1 = 48. This analysis fails forp= 2 andpis a divisor of the squarefree part ofk2+ 4, since in these cases arezero divisors, so one must be careful in interpreting 1/2 ork2+4{\displaystyle {\sqrt {k^{2}+4}}}. Forp= 2,k2+ 4is congruent to 1 mod 2 (forkodd), but the Pisano period is notp− 1 = 1, but rather 3 (in fact, this is also 3 for evenk). Forpdivides the squarefree part ofk2+ 4, the Pisano period isπk(k2+ 4) =p2−p=p(p− 1), which does not dividep− 1 orp2− 1. One can considerFibonacci integer sequencesand take them modulon, or put differently, considerFibonacci sequencesin the ringZ/nZ. The period is a divisor of π(n). The number of occurrences of 0 per cycle is 0, 1, 2, or 4. 
Ifnis not a prime the cycles include those that are multiples of the cycles for the divisors. For example, forn= 10 the extra cycles include those forn= 2 multiplied by 5, and forn= 5 multiplied by 2. Table of the extra cycles: (the original Fibonacci cycles are excluded) (using X and E for ten and eleven, respectively) Number of Fibonacci integer cycles modnare:
https://en.wikipedia.org/wiki/Pisano_period
In modular arithmetic, a number g is a primitive root modulo n if every number a coprime to n is congruent to a power of g modulo n. That is, g is a primitive root modulo n if for every integer a coprime to n, there is some integer k for which g^k ≡ a (mod n). Such a value k is called the index or discrete logarithm of a to the base g modulo n. So g is a primitive root modulo n if and only if g is a generator of the multiplicative group of integers modulo n. Gauss defined primitive roots in Article 57 of the Disquisitiones Arithmeticae (1801), where he credited Euler with coining the term. In Article 56 he stated that Lambert and Euler knew of them, but he was the first to rigorously demonstrate that primitive roots exist for a prime n. In fact, the Disquisitiones contains two proofs: the one in Article 54 is a nonconstructive existence proof, while the proof in Article 55 is constructive. A primitive root exists if and only if n is 1, 2, 4, p^k or 2p^k, where p is an odd prime and k > 0. For all other values of n the multiplicative group of integers modulo n is not cyclic.[1][2][3] This was first proved by Gauss.[4] The number 3 is a primitive root modulo 7[5] because 3^1 = 3^0 × 3 ≡ 1 × 3 = 3 ≡ 3 (mod 7), 3^2 = 3^1 × 3 ≡ 3 × 3 = 9 ≡ 2 (mod 7), 3^3 = 3^2 × 3 ≡ 2 × 3 = 6 ≡ 6 (mod 7), 3^4 = 3^3 × 3 ≡ 6 × 3 = 18 ≡ 4 (mod 7), 3^5 = 3^4 × 3 ≡ 4 × 3 = 12 ≡ 5 (mod 7), 3^6 = 3^5 × 3 ≡ 5 × 3 = 15 ≡ 1 (mod 7). Here we see that the period of 3^k modulo 7 is 6. The remainders in the period, which are 3, 2, 6, 4, 5, 1, form a rearrangement of all nonzero remainders modulo 7, implying that 3 is indeed a primitive root modulo 7.
This derives from the fact that a sequence (gkmodulon) always repeats after some value ofk, since modulonproduces a finite number of values. Ifgis a primitive root modulonandnis prime, then the period of repetition isn− 1.Permutations created in this way (and their circular shifts) have been shown to beCostas arrays. Ifnis a positive integer, the integers from 1 ton− 1that arecoprimeton(or equivalently, thecongruence classescoprime ton) form agroup, with multiplicationmodulonas the operation; it is denoted byZ{\displaystyle \mathbb {Z} }×n, and is called thegroup of unitsmodulon, or the group of primitive classes modulon. As explained in the articlemultiplicative group of integers modulon, this multiplicative group (Z{\displaystyle \mathbb {Z} }×n) iscyclicif and only ifnis equal to 2, 4,pk, or 2pkwherepkis a power of an oddprime number.[6][2][7]When (and only when) this groupZ{\displaystyle \mathbb {Z} }×nis cyclic, ageneratorof this cyclic group is called aprimitive root modulon[8](or in fuller languageprimitive root of unity modulon, emphasizing its role as a fundamental solution of theroots of unitypolynomial equations Xm− 1 in the ringZ{\displaystyle \mathbb {Z} }n), or simply aprimitive element ofZ{\displaystyle \mathbb {Z} }×n. WhenZ{\displaystyle \mathbb {Z} }×nis non-cyclic, such primitive elements modndo not exist. Instead, each prime component ofnhas its own sub-primitive roots (see15in the examples below). For anyn(whether or notZ{\displaystyle \mathbb {Z} }×nis cyclic), the order ofZ{\displaystyle \mathbb {Z} }×nis given byEuler's totient functionφ(n) (sequenceA000010in theOEIS). And then,Euler's theoremsays thataφ(n)≡ 1 (modn)for everyacoprime ton; the lowest power ofathat is congruent to 1 modulonis called themultiplicative orderofamodulon. In particular, forato be a primitive root modulon,aφ(n)has to be the smallest power ofathat is congruent to 1 modulon. 
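The multiplicative order just defined, the smallest k ≥ 1 with a^k ≡ 1 (mod n), can be computed by a naive loop (an illustrative Python sketch, valid when gcd(a, n) = 1 so that the loop terminates by Euler's theorem):

```python
def multiplicative_order(a: int, n: int) -> int:
    """Smallest k >= 1 with a**k congruent to 1 (mod n).
    Assumes gcd(a, n) == 1, so Euler's theorem guarantees termination."""
    k, x = 1, a % n
    while x != 1:
        x = x * a % n   # next power of a, reduced modulo n
        k += 1
    return k
```

For n = 14 this gives order 6 for 3 and 5, order 3 for 9 and 11, and order 2 for 13, matching the worked example that follows.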
For example, ifn= 14then the elements ofZ{\displaystyle \mathbb {Z} }×nare the congruence classes {1, 3, 5, 9, 11, 13}; there areφ(14) = 6of them. Here is a table of their powers modulo 14: The order of 1 is 1, the orders of 3 and 5 are 6, the orders of 9 and 11 are 3, and the order of 13 is 2. Thus, 3 and 5 are the primitive roots modulo 14. For a second example letn= 15 .The elements ofZ{\displaystyle \mathbb {Z} }×15are the congruence classes {1, 2, 4, 7, 8, 11, 13, 14}; there areφ(15) = 8of them. Since there is no number whose order is 8, there are no primitive roots modulo 15. Indeed,λ(15) = 4, whereλis theCarmichael function. (sequenceA002322in theOEIS) Numbersn{\displaystyle n}that have a primitive root are of the shape These are the numbersn{\displaystyle n}withφ(n)=λ(n),{\displaystyle \varphi (n)=\lambda (n),}kept also in the sequenceA033948in theOEIS. The following table lists the primitive roots modulonup ton=31{\displaystyle n=31}: Gauss proved[10]that for any prime numberp(with the sole exception ofp= 3),the product of its primitive roots is congruent to 1 modulop. He also proved[11]that for any prime numberp, the sum of its primitive roots is congruent toμ(p− 1) modulop, whereμis theMöbius function. For example, E.g., the product of the latter primitive roots is26⋅34⋅7⋅112⋅13⋅17=970377408≡1(mod31){\displaystyle 2^{6}\cdot 3^{4}\cdot 7\cdot 11^{2}\cdot 13\cdot 17=970377408\equiv 1{\pmod {31}}}, and their sum is123≡−1≡μ(31−1)(mod31){\displaystyle 123\equiv -1\equiv \mu (31-1){\pmod {31}}}. Ifa{\displaystyle a}is a primitive root modulo the primep{\displaystyle p}, thenap−12≡−1(modp){\displaystyle a^{\frac {p-1}{2}}\equiv -1{\pmod {p}}}. Artin's conjecture on primitive rootsstates that a given integerathat is neither aperfect squarenor −1 is a primitive root modulo infinitely manyprimes. 
No simple general formula to compute primitive roots modulonis known.[a][b]There are however methods to locate a primitive root that are faster than simply trying out all candidates. If themultiplicative order(itsexponent) of a numbermmodulonis equal toφ(n){\displaystyle \varphi (n)}(the order ofZ{\displaystyle \mathbb {Z} }×n), then it is a primitive root. In fact the converse is true: Ifmis a primitive root modulon, then the multiplicative order ofmisφ(n)=λ(n).{\displaystyle \varphi (n)=\lambda (n)~.}We can use this to test a candidatemto see if it is primitive. Forn>1{\displaystyle n>1}first, computeφ(n).{\displaystyle \varphi (n)~.}Then determine the differentprime factorsofφ(n){\displaystyle \varphi (n)}, sayp1, ...,pk. Finally, compute using a fast algorithm formodular exponentiationsuch asexponentiation by squaring. A numbergfor which thesekresults are all different from 1 is a primitive root. The number of primitive roots modulon, if there are any, is equal to[12] since, in general, a cyclic group withrelements hasφ(r){\displaystyle \varphi (r)}generators. For primen, this equalsφ(n−1){\displaystyle \varphi (n-1)}, and sincen/φ(n−1)∈O(log⁡log⁡n){\displaystyle n/\varphi (n-1)\in O(\log \log n)}the generators are very common among {2, ...,n−1} and thus it is relatively easy to find one.[13] Ifgis a primitive root modulop, thengis also a primitive root modulo all powerspkunlessgp−1≡ 1 (modp2); in that case,g+pis.[14] Ifgis a primitive root modulopk, thengis also a primitive root modulo all smaller powers ofp. Ifgis a primitive root modulopk, then eithergorg+pk(whichever one is odd) is a primitive root modulo 2pk.[14] Finding primitive roots modulopis also equivalent to finding the roots of the (p− 1)stcyclotomic polynomialmodulop. The least primitive rootgpmodulop(in the range 1, 2, ...,p− 1)is generally small. 
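The testing procedure described above, checking g^(φ(n)/q) ≢ 1 (mod n) for each prime q dividing φ(n), can be sketched in Python for the case where n = p is prime (so φ(p) = p − 1). The trial-division factoring helper is an assumption for illustration; any factoring routine would do:

```python
def prime_factors(n: int) -> set[int]:
    """Distinct prime factors of n, by trial division (fine for small n)."""
    factors, d = set(), 2
    while d * d <= n:
        while n % d == 0:
            factors.add(d)
            n //= d
        d += 1
    if n > 1:
        factors.add(n)
    return factors

def is_primitive_root(g: int, p: int) -> bool:
    """For prime p: g is a primitive root iff g^((p-1)/q) != 1 (mod p)
    for every prime q dividing p - 1."""
    return all(pow(g, (p - 1) // q, p) != 1 for q in prime_factors(p - 1))
```

For p = 7 this confirms that 3 is a primitive root while 2 is not (since 2³ ≡ 1 mod 7), and that there are φ(6) = 2 primitive roots in total.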
Burgess (1962) proved[15][16] that for every ε > 0 there is a C such that $g_{p} \leq C\,p^{\frac{1}{4}+\varepsilon}$. Grosswald (1981) proved[15][17] that if $p > e^{e^{24}} \approx 10^{11504079571}$, then $g_{p} < p^{0.499}$. Shoup (1990, 1992) proved,[18] assuming the generalized Riemann hypothesis, that $g_{p} = O(\log^{6} p)$. Fridlander (1949) and Salié (1950) proved[15] that there is a positive constant C such that for infinitely many primes $g_{p} > C \log p$. It can be proved[15] in an elementary manner that for any positive integer M there are infinitely many primes such that $M < g_{p} < p - M$. A primitive root modulo n is often used in pseudorandom number generators[19] and cryptography, including the Diffie–Hellman key exchange scheme. Sound diffusers have been based on number-theoretic concepts such as primitive roots and quadratic residues.[20][21] The Disquisitiones Arithmeticae has been translated from Gauss's Ciceronian Latin into English and German. The German edition includes all of his papers on number theory: all the proofs of quadratic reciprocity, the determination of the sign of the Gauss sum, the investigations into biquadratic reciprocity, and unpublished notes.
https://en.wikipedia.org/wiki/Primitive_root_modulo_n
Innumber theory, thelaw of quadratic reciprocityis a theorem aboutmodular arithmeticthat gives conditions for the solvability ofquadratic equationsmoduloprime numbers. Due to its subtlety, it has many formulations, but the most standard statement is: Law of quadratic reciprocity—Letpandqbe distinct odd prime numbers, and define theLegendre symbolas Then This law, together with itssupplements, allows the easy calculation of any Legendre symbol, making it possible to determine whether there is an integer solution for any quadratic equation of the formx2≡amodp{\displaystyle x^{2}\equiv a{\bmod {p}}}for an odd primep{\displaystyle p}; that is, to determine the "perfect squares" modulop{\displaystyle p}. However, this is anon-constructiveresult: it gives no help at all for finding aspecificsolution; for this, other methods are required. For example, in the casep≡3mod4{\displaystyle p\equiv 3{\bmod {4}}}usingEuler's criterionone can give an explicit formula for the "square roots" modulop{\displaystyle p}of a quadratic residuea{\displaystyle a}, namely, indeed, This formula only works if it is known in advance thata{\displaystyle a}is aquadratic residue, which can be checked using the law of quadratic reciprocity. The quadratic reciprocity theorem was conjectured byLeonhard EulerandAdrien-Marie Legendreand first proved byCarl Friedrich Gauss,[1]who referred to it as the "fundamental theorem" in hisDisquisitiones Arithmeticaeand his papers, writing Privately, Gauss referred to it as the "golden theorem".[2]He published sixproofsfor it, and two more were found in his posthumous papers. There are now over 240 published proofs.[3]The shortest known proof is includedbelow, together with short proofs of the law's supplements (the Legendre symbols of −1 and 2). 
Generalizing thereciprocity lawto higher powers has been a leading problem in mathematics, and has been crucial to the development of much of the machinery ofmodern algebra, number theory, andalgebraic geometry, culminating inArtin reciprocity,class field theory, and theLanglands program. Quadratic reciprocity arises from certain subtle factorization patterns involving perfect square numbers. In this section, we give examples which lead to the general case. Consider the polynomialf(n)=n2−5{\displaystyle f(n)=n^{2}-5}and its values forn∈N.{\displaystyle n\in \mathbb {N} .}The prime factorizations of these values are given as follows: The prime factorsp{\displaystyle p}dividingf(n){\displaystyle f(n)}arep=2,5{\displaystyle p=2,5}, and every prime whose final digit is1{\displaystyle 1}or9{\displaystyle 9}; no primes ending in3{\displaystyle 3}or7{\displaystyle 7}ever appear. Now,p{\displaystyle p}is a prime factor of somen2−5{\displaystyle n^{2}-5}whenevern2−5≡0modp{\displaystyle n^{2}-5\equiv 0{\bmod {p}}}, i.e. whenevern2≡5modp,{\displaystyle n^{2}\equiv 5{\bmod {p}},}i.e. whenever 5 is a quadratic residue modulop{\displaystyle p}. This happens forp=2,5{\displaystyle p=2,5}and those primes withp≡1,4mod5,{\displaystyle p\equiv 1,4{\bmod {5}},}and the latter numbers1=(±1)2{\displaystyle 1=(\pm 1)^{2}}and4=(±2)2{\displaystyle 4=(\pm 2)^{2}}are precisely the quadratic residues modulo5{\displaystyle 5}. Therefore, except forp=2,5{\displaystyle p=2,5}, we have that5{\displaystyle 5}is a quadratic residue modulop{\displaystyle p}iffp{\displaystyle p}is a quadratic residue modulo5{\displaystyle 5}. The law of quadratic reciprocity gives a similar characterization of prime divisors off(n)=n2−q{\displaystyle f(n)=n^{2}-q}for any primeq, which leads to a characterization for any integerq{\displaystyle q}. Letpbe an odd prime. A number modulopis aquadratic residuewhenever it is congruent to a square (modp); otherwise it is a quadratic non-residue. 
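The factorization pattern for f(n) = n² − 5 described above can be checked by brute force: collecting the primes that divide some n² − 5, one finds only 2, 5, and primes ending in 1 or 9. An illustrative Python sketch (the trial-division helper is an assumption for illustration):

```python
def primes_dividing_n2_minus_5(limit: int = 60) -> set[int]:
    """Primes p dividing n^2 - 5 for some 3 <= n < limit, i.e. primes
    modulo which 5 is a quadratic residue (plus the special primes 2, 5)."""
    def prime_factors(m: int) -> set[int]:
        factors, d = set(), 2
        while d * d <= m:
            while m % d == 0:
                factors.add(d)
                m //= d
            d += 1
        if m > 1:
            factors.add(m)
        return factors

    found = set()
    for n in range(3, limit):
        found |= prime_factors(n * n - 5)
    return found
```

Running this, primes such as 11, 19, 29, 31 appear, while 3, 7, 13 never do, exactly as the quadratic-residue criterion predicts.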
("Quadratic" can be dropped if it is clear from the context.) Here we exclude zero as a special case. Then as a consequence of the fact that the multiplicative group of afinite fieldof orderpis cyclic of orderp-1, the following statements hold: For the avoidance of doubt, these statements donothold if the modulus is not prime. For example, there are only 3 quadratic residues (1, 4 and 9) in the multiplicative group modulo 15. Moreover, although 7 and 8 are quadratic non-residues, their product 7x8 = 11 is also a quadratic non-residue, in contrast to the prime case. Quadratic residues appear as entries in the following table, indexed by the row number as modulus and column number as root: This table is complete for odd primes less than 50. To check whether a numbermis a quadratic residue mod one of these primesp, finda≡m(modp) and 0 ≤a<p. Ifais in rowp, thenmis a residue (modp); ifais not in rowpof the table, thenmis a nonresidue (modp). The quadratic reciprocity law is the statement that certain patterns found in the table are true in general. Another way to organize the data is to see which primes are quadratic residues mod which other primes, as illustrated in the following table. The entry in rowpcolumnqisRifqis a quadratic residue (modp); if it is a nonresidue the entry isN. If the row, or the column, or both, are ≡ 1 (mod 4) the entry is blue or green; if both row and column are ≡ 3 (mod 4), it is yellow or orange. The blue and green entries are symmetric around the diagonal: The entry for rowp, columnqisR(respN) if and only if the entry at rowq, columnp, isR(respN). The yellow and orange ones, on the other hand, are antisymmetric: The entry for rowp, columnqisR(respN) if and only if the entry at rowq, columnp, isN(respR). The reciprocity law states that these patterns hold for allpandq. Ordering the rows and columns mod 4 makes the pattern clearer. The supplements provide solutions to specific cases of quadratic reciprocity. 
They are often quoted as partial results, without having to resort to the complete theorem. Trivially 1 is a quadratic residue for all primes. The question becomes more interesting for −1. Examining the table, we find −1 in rows 5, 13, 17, 29, 37, and 41 but not in rows 3, 7, 11, 19, 23, 31, 43 or 47. The former set of primes are all congruent to 1 modulo 4, and the latter are congruent to 3 modulo 4. Examining the table, we find 2 in rows 7, 17, 23, 31, 41, and 47, but not in rows 3, 5, 11, 13, 19, 29, 37, or 43. The former primes are all ≡ ±1 (mod 8), and the latter are all ≡ ±3 (mod 8). Combining the rules for −1 and 2 leads to the rule for −2: −2 is in rows 3, 11, 17, 19, 41, 43, but not in rows 5, 7, 13, 23, 29, 31, 37, or 47. The former are ≡ 1 or ≡ 3 (mod 8), and the latter are ≡ 5, 7 (mod 8). 3 is in rows 11, 13, 23, 37, and 47, but not in rows 5, 7, 17, 19, 29, 31, 41, or 43. The former are ≡ ±1 (mod 12) and the latter are all ≡ ±5 (mod 12). −3 is in rows 7, 13, 19, 31, 37, and 43 but not in rows 5, 11, 17, 23, 29, 41, or 47. The former are ≡ 1 (mod 3) and the latter ≡ 2 (mod 3). Since the only residue (mod 3) is 1, we see that −3 is a quadratic residue modulo every prime which is a residue modulo 3. 5 is in rows 11, 19, 29, 31, and 41 but not in rows 3, 7, 13, 17, 23, 37, 43, or 47. The former are ≡ ±1 (mod 5) and the latter are ≡ ±2 (mod 5). Since the only residues (mod 5) are ±1, we see that 5 is a quadratic residue modulo every prime which is a residue modulo 5. −5 is in rows 3, 7, 23, 29, 41, 43, and 47 but not in rows 11, 13, 17, 19, 31, or 37. The former are ≡ 1, 3, 7, 9 (mod 20) and the latter are ≡ 11, 13, 17, 19 (mod 20). The observations about −3 and 5 continue to hold: −7 is a residue modulo p if and only if p is a residue modulo 7, −11 is a residue modulo p if and only if p is a residue modulo 11, 13 is a residue (mod p) if and only if p is a residue modulo 13, etc.
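The two supplements (for −1 and for 2) can be checked numerically by brute force; a sketch (the bound 200 is arbitrary):

```python
def is_residue(a, p):
    """True if a is a quadratic residue modulo the odd prime p."""
    return any((x * x) % p == a % p for x in range(1, p))

def odd_primes(limit):
    """Odd primes below limit, by trial division."""
    return [p for p in range(3, limit)
            if all(p % d for d in range(2, int(p ** 0.5) + 1))]

# First supplement: -1 is a residue mod p exactly when p ≡ 1 (mod 4).
assert all(is_residue(-1, p) == (p % 4 == 1) for p in odd_primes(200))
# Second supplement: 2 is a residue mod p exactly when p ≡ ±1 (mod 8).
assert all(is_residue(2, p) == (p % 8 in (1, 7)) for p in odd_primes(200))
print("supplements verified for odd primes below 200")
```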
The more complicated-looking rules for the quadratic characters of 3 and −5, which depend upon congruences modulo 12 and 20 respectively, are simply the ones for −3 and 5 working with the first supplement. The generalization of the rules for −3 and 5 is Gauss's statement of quadratic reciprocity. Quadratic Reciprocity (Gauss's statement). If q ≡ 1 (mod 4), then the congruence x² ≡ p (mod q) is solvable if and only if x² ≡ q (mod p) is solvable. If q ≡ 3 (mod 4) and p ≡ 3 (mod 4), then the congruence x² ≡ p (mod q) is solvable if and only if x² ≡ −q (mod p) is solvable. Quadratic Reciprocity (combined statement). Define q* = (−1)^((q−1)/2) q. Then the congruence x² ≡ p (mod q) is solvable if and only if x² ≡ q* (mod p) is solvable. Quadratic Reciprocity (Legendre's statement). If p or q are congruent to 1 modulo 4, then x² ≡ q (mod p) is solvable if and only if x² ≡ p (mod q) is solvable. If p and q are congruent to 3 modulo 4, then x² ≡ q (mod p) is solvable if and only if x² ≡ p (mod q) is not solvable. The last is immediately equivalent to the modern form stated in the introduction above. It is a simple exercise to prove that Legendre's and Gauss's statements are equivalent – it requires no more than the first supplement and the facts about multiplying residues and nonresidues. Apparently, the shortest known proof yet was published by B.
Veklych in the American Mathematical Monthly.[4] The value of the Legendre symbol of −1 (used in the proof above) follows directly from Euler's criterion: by Euler's criterion, but both sides of this congruence are numbers of the form ±1, so they must be equal. Whether 2 is a quadratic residue can be concluded if we know the number of solutions of the equation x² + y² = 2 with x, y ∈ Z_p, which can be solved by standard methods. Namely, all its solutions where xy ≠ 0, x ≠ ±y can be grouped into octuplets of the form (±x, ±y), (±y, ±x), and what is left are four solutions of the form (±1, ±1) and possibly four additional solutions where x² = 2, y = 0 and x = 0, y² = 2, which exist precisely if 2 is a quadratic residue. That is, 2 is a quadratic residue precisely if the number of solutions of this equation is divisible by 8. And this equation can be solved in just the same way here as over the rational numbers: substitute x = a + 1, y = at + 1, where we demand that a ≠ 0 (leaving out the two solutions (1, ±1)); then the original equation transforms into a = −2(1 + t)/(1 + t²). Here t can have any value that does not make the denominator zero – for which there are 1 + (−1/p) possibilities (i.e. 2 if −1 is a residue, 0 if not) – and also does not make a zero, which excludes one more option, t = −1. Thus there are p − 2 − (−1/p) possibilities for t, and so together with the two excluded solutions there are overall p − (−1/p) solutions of the original equation.
Therefore, 2 is a residue modulo p if and only if 8 divides p − (−1)^((p−1)/2). This is a reformulation of the condition stated above. The theorem was formulated in many ways before its modern form: Euler and Legendre did not have Gauss's congruence notation, nor did Gauss have the Legendre symbol. In this article p and q always refer to distinct positive odd primes, and x and y to unspecified integers. Fermat proved[5] (or claimed to have proved)[6] a number of theorems about expressing a prime by a quadratic form: He did not state the law of quadratic reciprocity, although the cases −1, ±2, and ±3 are easy deductions from these and others of his theorems. He also claimed to have a proof that if the prime number p ends with 7 (in base 10) and the prime number q ends in 3, and p ≡ q ≡ 3 (mod 4), then Euler conjectured, and Lagrange proved, that[7] Proving these and other statements of Fermat was one of the things that led mathematicians to the reciprocity theorem. Translated into modern notation, Euler stated[8] that for distinct odd primes p and q: This is equivalent to quadratic reciprocity.
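The solution count used in the argument above, p − (−1)^((p−1)/2), and the resulting criterion for 2 can be verified directly for small primes; a brute-force sketch:

```python
def count_solutions(p):
    """Number of pairs (x, y) mod p with x^2 + y^2 ≡ 2 (mod p)."""
    return sum((x * x + y * y - 2) % p == 0
               for x in range(p) for y in range(p))

def is_residue(a, p):
    """True if a is a quadratic residue modulo the odd prime p."""
    return any((x * x) % p == a % p for x in range(1, p))

for p in [3, 5, 7, 11, 13, 17, 19, 23]:
    n = count_solutions(p)
    assert n == p - (-1) ** ((p - 1) // 2)   # the count derived above
    assert is_residue(2, p) == (n % 8 == 0)  # 2 is a residue iff 8 | n
print("solution count verified for small odd primes")
```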
He could not prove it, but he did prove the second supplement.[9] Fermat proved that if p is a prime number and a is an integer, a^p ≡ a (mod p). Thus, if p does not divide a, a^(p−1) ≡ 1 (mod p). Using the non-obvious fact (see for example Ireland and Rosen below) that the residues modulo p form a field, and that therefore in particular the multiplicative group is cyclic, there can be at most two solutions to a quadratic equation: Legendre[10] lets a and A represent positive primes ≡ 1 (mod 4) and b and B positive primes ≡ 3 (mod 4), and sets out a table of eight theorems that together are equivalent to quadratic reciprocity: He says that since expressions of the form will come up so often he will abbreviate them as: This is now known as the Legendre symbol, and an equivalent[11][12] definition is used today: for all integers a and all odd primes p He notes that these can be combined: A number of proofs, especially those based on Gauss's Lemma,[13] explicitly calculate this formula. From these two supplements, we can obtain a third reciprocity law for the quadratic character −2 as follows: for −2 to be a quadratic residue mod p, −1 and 2 must be either both quadratic residues or both non-residues mod p. So (p − 1)/2 and (p² − 1)/8 are either both even or both odd. The sum of these two expressions is Legendre's attempt to prove reciprocity is based on a theorem of his: Example. Theorem I is handled by letting a ≡ 1 and b ≡ 3 (mod 4) be primes and assuming that (b/a) = 1 and, contrary to the theorem, that (a/b) = −1. Then x² + ay² − bz² = 0 has a solution, and taking congruences (mod 4) leads to a contradiction. This technique doesn't work for Theorem VIII. Let b ≡ B ≡ 3 (mod 4), and assume Then if there is another prime p ≡ 1 (mod 4) such that the solvability of Bx² + by² − pz² = 0 leads to a contradiction (mod 4).
But Legendre was unable to prove there has to be such a prime p; he was later able to show that all that is required is: but he couldn't prove that either. Hilbert symbol (below) discusses how techniques based on the existence of solutions to ax² + by² + cz² = 0 can be made to work. Gauss first proves[14] the supplementary laws. He sets[15] the basis for induction by proving the theorem for ±3 and ±5. Noting[16] that it is easier to state for −3 and +5 than it is for +3 or −5, he states[17] the general theorem in the form: Introducing the notation a R b (resp. a N b) to mean a is a quadratic residue (resp. nonresidue) (mod b), and letting a, a′, etc. represent positive primes ≡ 1 (mod 4) and b, b′, etc. positive primes ≡ 3 (mod 4), he breaks it out into the same 8 cases as Legendre: In the next Article he generalizes this to what are basically the rules for the Jacobi symbol (below). Letting A, A′, etc. represent any (prime or composite) positive numbers ≡ 1 (mod 4) and B, B′, etc. positive numbers ≡ 3 (mod 4): All of these cases take the form "if a prime is a residue (mod a composite), then the composite is a residue or nonresidue (mod the prime), depending on the congruences (mod 4)". He proves that these follow from cases 1) – 8). Gauss needed, and was able to prove,[18] a lemma similar to the one Legendre needed: The proof of quadratic reciprocity uses complete induction. These can be combined: A number of proofs of the theorem, especially those based on Gauss sums[19] or the splitting of primes in algebraic number fields,[20][21] derive this formula. The statements in this section are equivalent to quadratic reciprocity: if, for example, Euler's version is assumed, the Legendre–Gauss version can be deduced from it, and vice versa. This can be proven using Gauss's lemma. Gauss's fourth proof consists of proving this theorem (by comparing two formulas for the value of Gauss sums) and then restricting it to two primes.
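The Legendre symbol discussed above can be computed with Euler's criterion, and the combined form of the reciprocity law checked by brute force over small primes; a sketch (function names are illustrative):

```python
def legendre(a, p):
    """Legendre symbol (a|p) via Euler's criterion, for an odd prime p."""
    a %= p
    if a == 0:
        return 0
    return 1 if pow(a, (p - 1) // 2, p) == 1 else -1

primes = [3, 5, 7, 11, 13, 17, 19, 23, 29, 31]
for p in primes:
    for q in primes:
        if p != q:
            # (p|q)(q|p) = (-1)^((p-1)/2 * (q-1)/2)
            assert legendre(p, q) * legendre(q, p) == \
                (-1) ** (((p - 1) // 2) * ((q - 1) // 2))
print("reciprocity formula verified for odd primes below 32")
```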
He then gives an example: Leta= 3,b= 5,c= 7, andd= 11. Three of these, 3, 7, and 11 ≡ 3 (mod 4), som≡ 3 (mod 4). 5×7×11 R 3; 3×7×11 R 5; 3×5×11 R 7;  and  3×5×7 N 11, so there are an odd number of nonresidues. TheJacobi symbolis a generalization of the Legendre symbol; the main difference is that the bottom number has to be positive and odd, but does not have to be prime. If it is prime, the two symbols agree. It obeys the same rules of manipulation as the Legendre symbol. In particular and if both numbers are positive and odd (this is sometimes called "Jacobi's reciprocity law"): However, if the Jacobi symbol is 1 but the denominator is not a prime, it does not necessarily follow that the numerator is a quadratic residue of the denominator. Gauss's cases 9) - 14) above can be expressed in terms of Jacobi symbols: and sincepis prime the left hand side is a Legendre symbol, and we know whetherMis a residue modulopor not. The formulas listed in the preceding section are true for Jacobi symbols as long as the symbols are defined. Euler's formula may be written Example. 2 is a residue modulo the primes 7, 23 and 31: But 2 is not a quadratic residue modulo 5, so it can't be one modulo 15. This is related to the problem Legendre had: if(am)=−1,{\displaystyle \left({\tfrac {a}{m}}\right)=-1,}thenais a non-residue modulo every prime in the arithmetic progressionm+ 4a,m+ 8a, ..., if thereareany primes in this series, but that wasn't proved until decades after Legendre.[26] Eisenstein's formula requires relative primality conditions (which are true if the numbers are prime) The quadratic reciprocity law can be formulated in terms of theHilbert symbol(a,b)v{\displaystyle (a,b)_{v}}whereaandbare any two nonzero rational numbers andvruns over all the non-trivial absolute values of the rationals (the Archimedean one and thep-adic absolute values for primesp). The Hilbert symbol(a,b)v{\displaystyle (a,b)_{v}}is 1 or −1. 
It is defined to be 1 if and only if the equationax2+by2=z2{\displaystyle ax^{2}+by^{2}=z^{2}}has a solution in thecompletionof the rationals atvother thanx=y=z=0{\displaystyle x=y=z=0}. The Hilbert reciprocity law states that(a,b)v{\displaystyle (a,b)_{v}}, for fixedaandband varyingv, is 1 for all but finitely manyvand the product of(a,b)v{\displaystyle (a,b)_{v}}over allvis 1. (This formally resembles the residue theorem from complex analysis.) The proof of Hilbert reciprocity reduces to checking a few special cases, and the non-trivial cases turn out to be equivalent to the main law and the two supplementary laws of quadratic reciprocity for the Legendre symbol. There is no kind of reciprocity in the Hilbert reciprocity law; its name simply indicates the historical source of the result in quadratic reciprocity. Unlike quadratic reciprocity, which requires sign conditions (namely positivity of the primes involved) and a special treatment of the prime 2, the Hilbert reciprocity law treats all absolute values of the rationals on an equal footing. Therefore, it is a more natural way of expressing quadratic reciprocity with a view towards generalization: the Hilbert reciprocity law extends with very few changes to allglobal fieldsand this extension can rightly be considered a generalization of quadratic reciprocity to all global fields. The early proofs of quadratic reciprocity are relatively unilluminating. The situation changed when Gauss usedGauss sumsto show thatquadratic fieldsare subfields ofcyclotomic fields, and implicitly deduced quadratic reciprocity from a reciprocity theorem for cyclotomic fields. His proof was cast in modern form by later algebraic number theorists. This proof served as a template forclass field theory, which can be viewed as a vast generalization of quadratic reciprocity. Robert Langlandsformulated theLanglands program, which gives a conjectural vast generalization of class field theory. 
He wrote:[27] There are also quadratic reciprocity laws inringsother than the integers. In his second monograph onquartic reciprocity[29]Gauss stated quadratic reciprocity for the ringZ[i]{\displaystyle \mathbb {Z} [i]}ofGaussian integers, saying that it is a corollary of thebiquadratic lawinZ[i],{\displaystyle \mathbb {Z} [i],}but did not provide a proof of either theorem.Dirichlet[30]showed that the law inZ[i]{\displaystyle \mathbb {Z} [i]}can be deduced from the law forZ{\displaystyle \mathbb {Z} }without using quartic reciprocity. For an odd Gaussian primeπ{\displaystyle \pi }and a Gaussian integerα{\displaystyle \alpha }relatively prime toπ,{\displaystyle \pi ,}define the quadratic character forZ[i]{\displaystyle \mathbb {Z} [i]}by: Letλ=a+bi,μ=c+di{\displaystyle \lambda =a+bi,\mu =c+di}be distinct Gaussian primes whereaandcare odd andbanddare even. Then[31] Consider the following third root of unity: The ring of Eisenstein integers isZ[ω].{\displaystyle \mathbb {Z} [\omega ].}[32]For an Eisenstein primeπ,Nπ≠3,{\displaystyle \pi ,\mathrm {N} \pi \neq 3,}and an Eisenstein integerα{\displaystyle \alpha }withgcd(α,π)=1,{\displaystyle \gcd(\alpha ,\pi )=1,}define the quadratic character forZ[ω]{\displaystyle \mathbb {Z} [\omega ]}by the formula Let λ =a+bωand μ =c+dωbe distinct Eisenstein primes whereaandcare not divisible by 3 andbanddare divisible by 3. Eisenstein proved[33] The above laws are special cases of more general laws that hold for thering of integersin anyimaginary quadratic number field. 
Letkbe an imaginary quadratic number field with ring of integersOk.{\displaystyle {\mathcal {O}}_{k}.}For aprime idealp⊂Ok{\displaystyle {\mathfrak {p}}\subset {\mathcal {O}}_{k}}with odd normNp{\displaystyle \mathrm {N} {\mathfrak {p}}}andα∈Ok,{\displaystyle \alpha \in {\mathcal {O}}_{k},}define the quadratic character forOk{\displaystyle {\mathcal {O}}_{k}}as for an arbitrary ideala⊂Ok{\displaystyle {\mathfrak {a}}\subset {\mathcal {O}}_{k}}factored into prime idealsa=p1⋯pn{\displaystyle {\mathfrak {a}}={\mathfrak {p}}_{1}\cdots {\mathfrak {p}}_{n}}define and forβ∈Ok{\displaystyle \beta \in {\mathcal {O}}_{k}}define LetOk=Zω1⊕Zω2,{\displaystyle {\mathcal {O}}_{k}=\mathbb {Z} \omega _{1}\oplus \mathbb {Z} \omega _{2},}i.e.{ω1,ω2}{\displaystyle \left\{\omega _{1},\omega _{2}\right\}}is anintegral basisforOk.{\displaystyle {\mathcal {O}}_{k}.}Forν∈Ok{\displaystyle \nu \in {\mathcal {O}}_{k}}with odd normNν,{\displaystyle \mathrm {N} \nu ,}define (ordinary) integersa,b,c,dby the equations, and a function Ifm=Nμandn=Nνare both odd, Herglotz proved[34] Also, if Then[35] LetFbe afinite fieldwithq=pnelements, wherepis an odd prime number andnis positive, and letF[x] be thering of polynomialsin one variable with coefficients inF. 
Iff,g∈F[x]{\displaystyle f,g\in F[x]}andfisirreducible,monic, and has positive degree, define the quadratic character forF[x] in the usual manner: Iff=f1⋯fn{\displaystyle f=f_{1}\cdots f_{n}}is a product of monic irreducibles let Dedekind proved that iff,g∈F[x]{\displaystyle f,g\in F[x]}are monic and have positive degrees,[36] The attempt to generalize quadratic reciprocity for powers higher than the second was one of the main goals that led 19th century mathematicians, includingCarl Friedrich Gauss,Peter Gustav Lejeune Dirichlet,Carl Gustav Jakob Jacobi,Gotthold Eisenstein,Richard Dedekind,Ernst Kummer, andDavid Hilbertto the study of general algebraic number fields and their rings of integers;[37]specifically Kummer invented ideals in order to state and prove higher reciprocity laws. Theninthin the list of23 unsolved problemswhich David Hilbert proposed to the Congress of Mathematicians in 1900 asked for the "Proof of the most general reciprocity law [f]or an arbitrary number field".[38]Building upon work byPhilipp Furtwängler,Teiji Takagi,Helmut Hasseand others, Emil Artin discoveredArtin reciprocityin 1923, a general theorem for which all known reciprocity laws are special cases, and proved it in 1927.[39] TheDisquisitiones Arithmeticaehas been translated (from Latin) into English and German. The German edition includes all of Gauss's papers on number theory: all the proofs of quadratic reciprocity, the determination of the sign of the Gauss sum, the investigations into biquadratic reciprocity, and unpublished notes. Footnotes referencing theDisquisitiones Arithmeticaeare of the form "Gauss, DA, Art.n". The two monographs Gauss published on biquadratic reciprocity have consecutively numbered sections: the first contains §§ 1–23 and the second §§ 24–76. Footnotes referencing these are of the form "Gauss, BQ, §n". These are in Gauss'sWerke, Vol II, pp. 65–92 and 93–148. German translations are in pp. 511–533 and 534–586 ofUntersuchungen über höhere Arithmetik. 
Every textbook on elementary number theory (and quite a few on algebraic number theory) has a proof of quadratic reciprocity. Two are especially noteworthy: Franz Lemmermeyer's Reciprocity Laws: From Euler to Eisenstein has many proofs (some in exercises) of both quadratic and higher-power reciprocity laws and a discussion of their history. Its immense bibliography includes literature citations for 196 different published proofs of the quadratic reciprocity law. Kenneth Ireland and Michael Rosen's A Classical Introduction to Modern Number Theory also has many proofs of quadratic reciprocity (and many exercises), and covers the cubic and biquadratic cases as well. Exercise 13.26 (p. 202) says it all: "Count the number of proofs to the law of quadratic reciprocity given thus far in this book and devise another one."
https://en.wikipedia.org/wiki/Quadratic_reciprocity
In mathematics, rational reconstruction is a method that allows one to recover a rational number from its value modulo a sufficiently large integer. In the rational reconstruction problem, one is given as input a value n ≡ r/s (mod m). That is, n is an integer with the property that ns ≡ r (mod m). The rational number r/s is unknown, and the goal of the problem is to recover it from the given information. In order for the problem to be solvable, it is necessary to assume that the modulus m is sufficiently large relative to r and s. Typically, it is assumed that a range for the possible values of r and s is known: |r| < N and 0 < s < D for two numerical parameters N and D. Whenever m > 2ND and a solution exists, the solution is unique and can be found efficiently. Using a method from Paul S. Wang, it is possible to recover r/s from n and m using the Euclidean algorithm, as follows.[1][2] One puts v = (m, 0) and w = (n, 1). One then repeats the following steps until the first component of w becomes ≤ N. Put q = ⌊v₁/w₁⌋ and z = v − qw. The new v and w are then obtained by putting v = w and w = z. Then, with w such that w₁ ≤ N, one makes the second component positive by putting w = −w if w₂ < 0. If w₂ < D and gcd(w₁, w₂) = 1, then the fraction r/s exists and r = w₁ and s = w₂; otherwise no such fraction exists.
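The procedure above translates directly into code; a sketch of Wang's method (variable names follow the description; the example modulus and bounds are arbitrary):

```python
from math import gcd

def rational_reconstruction(n, m, N, D):
    """Recover (r, s) with |r| < N, 0 < s < D from n ≡ r/s (mod m),
    via the extended Euclidean algorithm; None if no such fraction."""
    v = (m, 0)
    w = (n % m, 1)
    while w[0] > N:
        q = v[0] // w[0]
        v, w = w, (v[0] - q * w[0], v[1] - q * w[1])
    if w[1] < 0:                       # make the second component positive
        w = (-w[0], -w[1])
    if w[1] < D and gcd(w[0], w[1]) == 1:
        return w                       # the pair (r, s)
    return None

# Recover 2/3 from its value modulo 101 (note 101 > 2 * 6 * 8):
n = (2 * pow(3, -1, 101)) % 101
print(rational_reconstruction(n, 101, N=6, D=8))  # (2, 3)
```

Modular inversion via the three-argument `pow(3, -1, 101)` requires Python 3.8 or later.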
https://en.wikipedia.org/wiki/Rational_reconstruction_(mathematics)
In mathematics, a subset R of the integers is called a reduced residue system modulo n if: Here φ denotes Euler's totient function. A reduced residue system modulo n can be formed from a complete residue system modulo n by removing all integers not relatively prime to n. For example, a complete residue system modulo 12 is {0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11}. The so-called totatives 1, 5, 7 and 11 are the only integers in this set which are relatively prime to 12, and so the corresponding reduced residue system modulo 12 is {1, 5, 7, 11}. The cardinality of this set can be calculated with the totient function: φ(12) = 4. Some other reduced residue systems modulo 12 are:
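The construction described above (filter a complete residue system down to the totatives) can be sketched as:

```python
from math import gcd

def reduced_residue_system(n):
    """Totatives of n: the canonical reduced residue system modulo n."""
    return [k for k in range(n) if gcd(k, n) == 1]

rrs = reduced_residue_system(12)
print(rrs)       # [1, 5, 7, 11]
print(len(rrs))  # 4, i.e. φ(12)
```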
https://en.wikipedia.org/wiki/Reduced_residue_system
Many protocols and algorithms require the serialization or enumeration of related entities. For example, a communication protocol must know whether some packet comes "before" or "after" some other packet. The IETF (Internet Engineering Task Force) RFC 1982 attempts to define "serial number arithmetic" for the purposes of manipulating and comparing these sequence numbers. In short, when the absolute serial number value decreases by more than half of the maximum value (e.g. 128 in an 8-bit value), it is considered to be "after" the former, whereas other decreases are considered to be "before". This task is rather more complex than it might first appear, because most algorithms use fixed-size (binary) representations for sequence numbers. It is often important for the algorithm not to "break down" when the numbers become so large that they are incremented one last time and "wrap" around their maximum numeric ranges (go instantly from a large positive number to 0 or a large negative number). Some protocols choose to ignore these issues and simply use very large integers for their counters, in the hope that the program will be replaced (or they will retire) before the problem occurs (see Y2K). Many communication protocols apply serial number arithmetic to packet sequence numbers in their implementation of a sliding window protocol. Some versions of TCP use protection against wrapped sequence numbers (PAWS). PAWS applies the same serial number arithmetic to packet timestamps, using the timestamp as an extension of the high-order bits of the sequence number.[1] Only addition of a small positive integer to a sequence number and comparison of two sequence numbers are discussed. Only unsigned binary implementations are discussed, with an arbitrary size in bits noted throughout the RFC (and below) as "SERIAL_BITS".
Adding an integer to a sequence number is simple unsigned integer addition, followed by an unsigned modulo operation to bring the result back into range (usually implicit in the unsigned addition, on most architectures): Addition of a value below 0 or above 2^(SERIAL_BITS−1) − 1 is undefined. Basically, adding values beyond this range will cause the resultant sequence number to "wrap", and (often) result in a number that is considered "less than" the original sequence number. A means of comparing two sequence numbers i1 and i2 (the unsigned integer representations of sequence numbers s1 and s2) is presented. Equality is defined as simple numeric equality. The algorithm presented for comparison is complex, having to take into account whether the first sequence number is close to the "end" of its range of values, and thus a smaller "wrapped" number may actually be considered "greater" than the first sequence number. Thus i1 is considered less than i2 only if (i1 < i2 and i2 − i1 < 2^(SERIAL_BITS−1)) or (i1 > i2 and i1 − i2 > 2^(SERIAL_BITS−1)). The algorithms presented by the RFC have at least one significant shortcoming: there are sequence numbers for which comparison is undefined. Since many algorithms are implemented independently by multiple independent cooperating parties, it is often impossible to prevent all such situations from occurring. The authors of RFC 1982 acknowledge this without offering a general solution: While it would be possible to define the test in such a way that the inequality would not have this surprising property, while being defined for all pairs of values, such a definition would be unnecessarily burdensome to implement, and difficult to understand, and would still allow cases where which is just as non-intuitive. Thus the problem case is left undefined, implementations are free to return either result, or to flag an error, and users must take care not to depend on any particular outcome. Usually this will mean avoiding allowing those particular pairs of numbers to co-exist.
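The RFC 1982 addition and comparison rules described above can be sketched as follows (illustrative Python, with SERIAL_BITS = 8; the undefined pairs simply compare as False here):

```python
SERIAL_BITS = 8
MOD = 1 << SERIAL_BITS          # 256
HALF = 1 << (SERIAL_BITS - 1)   # 128

def add(s, n):
    """RFC 1982 addition; n must lie in [0, 2^(SERIAL_BITS-1) - 1]."""
    assert 0 <= n <= HALF - 1
    return (s + n) % MOD

def less_than(i1, i2):
    """RFC 1982 comparison: is s1 'before' s2?"""
    return (i1 < i2 and i2 - i1 < HALF) or (i1 > i2 and i1 - i2 > HALF)

print(less_than(0, 100))   # True
print(less_than(255, 0))   # True: 0 is "after" 255 (wrapped)
print(less_than(0, 128))   # False: distance exactly 2^7, undefined case
```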
Thus, it is often difficult or impossible to avoid all "undefined" comparisons of sequence numbers. However, a relatively simple solution is available. By mapping the unsigned sequence numbers onto signed two's complement arithmetic operations, every comparison of any sequence number is defined, and the comparison operation itself is dramatically simplified. All comparisons specified by the RFC retain their original truth values; only the formerly "undefined" comparisons are affected. The RFC 1982 algorithm specifies that, for N-bit sequence numbers, there are 2^(N−1) − 1 values considered "greater than" and 2^(N−1) − 1 considered "less than". Comparison against the remaining value (exactly 2^(N−1) distant) is deemed to be "undefined". Most modern hardware implements signed two's complement binary arithmetic operations. These operations are fully defined for the entire range of values for any operands they are given. Any N-bit binary number can contain 2^N distinct values, and since one of them is taken up by the value 0, there is an odd number of spots left for the non-zero positive and negative numbers; thus there is simply one more negative number representable than there are positive. For example, a 16-bit two's complement value may contain numbers ranging from −32768 to +32767. So, if we simply re-cast sequence numbers as two's complement integers, and allow there to be one more sequence number considered "less than" than there are sequence numbers considered "greater than", we can use simple signed arithmetic comparisons instead of the logically incomplete formula proposed by the RFC. Here are some examples (in 16 bits, again), comparing some random sequence numbers against the sequence number with the value 0: It is easy to see that the signed interpretations of the sequence numbers are in the correct order, so long as we "rotate" the sequence number in question so that its 0 matches up with the sequence number we are comparing it against.
It turns out that this is simply done using an unsigned subtraction and interpreting the result as a signed two's complement number. The result is the signed "distance" between the two sequence numbers. Once again, if i1 and i2 are the unsigned binary representations of the sequence numbers s1 and s2, the distance from s1 to s2 is i1 − i2, computed with unsigned subtraction and reinterpreted as a signed value. If the distance is 0, the numbers are equal. If it is < 0, then s1 is "less than" or "before" s2. Simple, clean and efficient, and fully defined. However, not without surprises. All sequence number arithmetic must deal with "wrapping" of sequence numbers; the number 2^(N−1) is equidistant in both directions, in RFC 1982 sequence number terms. In our math, two such numbers are both considered to be "less than" each other: this is true for any two sequence numbers with a distance of 0x8000 between them. Furthermore, implementing serial number arithmetic using two's complement arithmetic implies serial numbers of a bit-length matching the machine's integer sizes: usually 16-bit, 32-bit and 64-bit. Implementing 20-bit serial numbers needs shifts (assuming 32-bit ints):
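The passage above ends expecting a code sample; here is a hedged sketch of the shift idiom for 20-bit serial numbers, written in Python but emulating 32-bit C integer behaviour (in C this would be roughly `(int32_t)((i1 - i2) << 12) >> 12`):

```python
BITS = 20            # serial-number width
SHIFT = 32 - BITS    # unused high bits in a 32-bit int

def distance(i1, i2):
    """Signed distance from s2 to s1 for 20-bit serial numbers,
    emulating: (int32_t)((i1 - i2) << SHIFT) >> SHIFT."""
    d = ((i1 - i2) << SHIFT) & 0xFFFFFFFF   # shift up, wrap to 32 bits
    if d & 0x80000000:                      # reinterpret as signed
        d -= 1 << 32
    return d >> SHIFT                       # arithmetic shift back down

print(distance(5, 0xFFFFF))  # 6: 5 is "after" 0xFFFFF (wrapped)
print(distance(0, 1))        # -1: 0 is "before" 1
```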
https://en.wikipedia.org/wiki/Serial_number_arithmetic
In mathematics and abstract algebra, the two-element Boolean algebra is the Boolean algebra whose underlying set (or universe or carrier) B is the Boolean domain. The elements of the Boolean domain are 1 and 0 by convention, so that B = {0, 1}. Paul Halmos's name for this algebra, "2", has some following in the literature, and will be employed here. B is a partially ordered set and the elements of B are also its bounds. An operation of arity n is a mapping from B^n to B. Boolean algebra consists of two binary operations and unary complementation. The binary operations have been named and notated in various ways. Here they are called 'sum' and 'product', and notated by infix '+' and '∙', respectively. Sum and product commute and associate, as in the usual algebra of real numbers. As for the order of operations, brackets are decisive if present. Otherwise '∙' precedes '+'. Hence A∙B + C is parsed as (A∙B) + C and not as A∙(B + C). Complementation is denoted by writing an overbar over its argument. The numerical analog of the complement of X is 1 − X. In the language of universal algebra, a Boolean algebra is a ⟨B, +, ∙, ¯, 1, 0⟩ algebra of type ⟨2, 2, 1, 0, 0⟩. Either one-to-one correspondence between {0, 1} and {True, False} yields classical bivalent logic in equational form, with complementation read as NOT. If 1 is read as True, '+' is read as OR, and '∙' as AND, and vice versa if 1 is read as False. These two operations define a commutative semiring, known as the Boolean semiring. 2 can be seen as grounded in the following trivial "Boolean" arithmetic: Note that: This Boolean arithmetic suffices to verify any equation of 2, including the axioms, by examining every possible assignment of 0s and 1s to each variable (see decision procedure). The following equations may now be verified: Each of '+' and '∙' distributes over the other: That '∙' distributes over '+' agrees with elementary algebra, but not '+' over '∙'.
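The exhaustive-assignment check mentioned above (the decision procedure) is trivial to carry out; this sketch verifies both distributive laws and De Morgan's law over {0, 1}:

```python
B = (0, 1)
OR = lambda a, b: a | b    # '+'
AND = lambda a, b: a & b   # '∙'
NOT = lambda a: 1 - a      # complementation (overbar); analog of 1 - X

for a in B:
    for b in B:
        for c in B:
            # each of '+' and '∙' distributes over the other
            assert AND(a, OR(b, c)) == OR(AND(a, b), AND(a, c))
            assert OR(a, AND(b, c)) == AND(OR(a, b), OR(a, c))
        # De Morgan's law
        assert NOT(OR(a, b)) == AND(NOT(a), NOT(b))
print("identities verified over {0, 1}")
```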
For this and other reasons, a sum of products (leading to aNANDsynthesis) is more commonly employed than a product of sums (leading to aNORsynthesis). Each of '+' and '∙' can be defined in terms of the other and complementation: We only need one binary operation, andconcatenationsuffices to denote it. Hence concatenation and overbar suffice to notate2. This notation is also that ofQuine'sBoolean term schemata. Letting (X) denote the complement ofXand "()" denote either 0 or 1 yields thesyntaxof the primary algebra ofG. Spencer-Brown'sLaws of Form. Abasisfor2is a set of equations, calledaxioms, from which all of the above equations (and more) can be derived. There are many known bases for all Boolean algebras and hence for2. An elegant basis notated using only concatenation and overbar is: Where concatenation = OR, 1 = true, and 0 = false, or concatenation = AND, 1 = false, and 0 = true. (overbar is negation in both cases.) If 0=1, (1)–(3) are the axioms for anabelian group. (1) only serves to prove that concatenation commutes and associates. First assume that (1) associates from either the left or the right, then prove commutativity. Then prove association from the other direction. Associativity is simply association from the left and right combined. This basis makes for an easy approach to proof, called "calculation" inLaws of Form, that proceeds by simplifying expressions to 0 or 1, by invoking axioms (2)–(4), and the elementary identitiesAA=A,A¯¯=A,1+A=1{\displaystyle AA=A,{\overline {\overline {A}}}=A,1+A=1}, and the distributive law. De Morgan's theoremstates that if one does the following, in the given order, to anyBoolean function: the result islogically equivalentto what you started with. Repeated application of De Morgan's theorem to parts of a function can be used to drive all complements down to the individual variables. 
A powerful and nontrivial metatheorem states that any identity of 2 holds for all Boolean algebras.[1] Conversely, an identity that holds for an arbitrary nontrivial Boolean algebra also holds in 2. Hence all identities of Boolean algebra are captured by 2. This theorem is useful because any equation in 2 can be verified by a decision procedure. Logicians refer to this fact as "2 is decidable". All known decision procedures require a number of steps that is an exponential function of the number of variables N appearing in the equation to be verified. Whether there exists a decision procedure whose steps are a polynomial function of N falls under the P = NP conjecture.

The above metatheorem does not hold if we consider the validity of more general first-order logic formulas instead of only atomic positive equalities. As an example consider the formula (x = 0) ∨ (x = 1). This formula is always true in a two-element Boolean algebra. In a four-element Boolean algebra whose domain is the powerset of {0, 1}, this formula corresponds to the statement (x = ∅) ∨ (x = {0, 1}) and is false when x is {1}. The decidability of the first-order theory of many classes of Boolean algebras can still be shown, using quantifier elimination or the small model property (with the domain size computed as a function of the formula, and generally larger than 2).

Many elementary texts on Boolean algebra were published in the early years of the computer era. Perhaps the best of the lot, and one still in print, is:

The following items reveal how the two-element Boolean algebra is mathematically nontrivial.
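The counterexample can be checked mechanically; the sketch below (illustrative Python) models the four-element Boolean algebra as the powerset of {0, 1}, with union as '+', intersection as '∙', and set difference from the universe as complement:

```python
# The four-element Boolean algebra on the powerset of {0, 1}.
U = frozenset({0, 1})
carrier = [frozenset(), frozenset({0}), frozenset({1}), U]

# Every purely equational identity of 2 also holds here; e.g. De Morgan:
for x in carrier:
    for y in carrier:
        assert U - (x | y) == (U - x) & (U - y)

# But the first-order formula (x = 0) ∨ (x = 1), true in 2, fails here:
x = frozenset({1})
assert not (x == frozenset() or x == U)
```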
https://en.wikipedia.org/wiki/Two-element_Boolean_algebra
In the mathematical field of group theory, Lagrange's theorem states that if H is a subgroup of any finite group G, then |H| is a divisor of |G|; that is, the order (number of elements) of every subgroup H divides the order of the group G. The theorem is named after Joseph-Louis Lagrange. The following variant states that for a subgroup H of a finite group G, not only is |G|/|H| an integer, but its value is the index [G : H], defined as the number of left cosets of H in G.

Lagrange's theorem — If H is a subgroup of a group G, then |G| = [G : H] ⋅ |H|.

This variant holds even if G is infinite, provided that |G|, |H|, and [G : H] are interpreted as cardinal numbers.

The left cosets of H in G are the equivalence classes of a certain equivalence relation on G: specifically, call x and y in G equivalent if there exists h in H such that x = yh. Therefore, the set of left cosets forms a partition of G. Each left coset aH has the same cardinality as H because x ↦ ax defines a bijection H → aH (the inverse is y ↦ a⁻¹y). The number of left cosets is the index [G : H]. By the previous three sentences, Lagrange's theorem can be extended to an equation relating the indexes of three subgroups of G.[1]

Extension of Lagrange's theorem — If H is a subgroup of G and K is a subgroup of H, then [G : K] = [G : H] [H : K].

Let S be a set of coset representatives for K in H, so H = ⨆_{s∈S} sK (disjoint union) and |S| = [H : K]. For any a ∈ G, left multiplication by a is a bijection G → G, so aH = ⨆_{s∈S} asK. Thus each left coset of H decomposes into [H : K] left cosets of K.
Since G decomposes into [G : H] left cosets of H, each of which decomposes into [H : K] left cosets of K, the total number [G : K] of left cosets of K in G is [G : H][H : K]. If we take K = {e} (e is the identity element of G), then [G : {e}] = |G| and [H : {e}] = |H|. Therefore, we can recover the original equation |G| = [G : H] |H|.

A consequence of the theorem is that the order of any element a of a finite group (i.e. the smallest positive integer k with a^k = e, where e is the identity element of the group) divides the order of that group, since the order of a is equal to the order of the cyclic subgroup generated by a. If the group has n elements, it follows that

a^n = e.

This can be used to prove Fermat's little theorem and its generalization, Euler's theorem. These special cases were known long before the general theorem was proved.

The theorem also shows that any group of prime order is cyclic and simple, since the subgroup generated by any non-identity element must be the whole group itself.

Lagrange's theorem can also be used to show that there are infinitely many primes: suppose there were a largest prime p. Any prime divisor q of the Mersenne number 2^p − 1 satisfies 2^p ≡ 1 (mod q) (see modular arithmetic), meaning that the order of 2 in the multiplicative group (Z/qZ)* is p. By Lagrange's theorem, the order of 2 must divide the order of (Z/qZ)*, which is q − 1. So p divides q − 1, giving p < q, contradicting the assumption that p is the largest prime.[2]

Lagrange's theorem raises the converse question as to whether every divisor of the order of a group is the order of some subgroup.
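These facts can be illustrated concretely; the Python sketch below (helper names illustrative) uses the symmetric group S3, computing the left cosets of a subgroup and checking both |G| = [G : H] ⋅ |H| and the consequence a^|G| = e:

```python
from itertools import permutations

# S3 as tuples, with composition (p ∘ q)(i) = p[q[i]].
G = list(permutations(range(3)))
compose = lambda p, q: tuple(p[q[i]] for i in range(3))
e = (0, 1, 2)

# Left cosets of the subgroup H = {e, (0 1)} all have |H| elements
# and partition G, so |G| = [G:H] * |H|.
H = [e, (1, 0, 2)]
cosets = {frozenset(compose(a, h) for h in H) for a in G}
assert all(len(c) == len(H) for c in cosets)
assert len(cosets) * len(H) == len(G)

# Consequence: the order of any element divides |G|, hence a**|G| = e.
def power(p, k):
    r = e
    for _ in range(k):
        r = compose(r, p)
    return r

assert all(power(a, len(G)) == e for a in G)
```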
This does not hold in general: given a finite group G and a divisor d of |G|, there does not necessarily exist a subgroup of G with order d. The smallest example is A4 (the alternating group of degree 4), which has 12 elements but no subgroup of order 6.

A "Converse of Lagrange's Theorem" (CLT) group is a finite group with the property that for every divisor of the order of the group, there is a subgroup of that order. It is known that a CLT group must be solvable and that every supersolvable group is a CLT group. However, there exist solvable groups that are not CLT (for example, A4) and CLT groups that are not supersolvable (for example, S4, the symmetric group of degree 4).

There are partial converses to Lagrange's theorem. For general groups, Cauchy's theorem guarantees the existence of an element, and hence of a cyclic subgroup, of order any prime dividing the group order. Sylow's theorem extends this to the existence of a subgroup of order equal to the maximal power of any prime dividing the group order. For solvable groups, Hall's theorems assert the existence of a subgroup of order equal to any unitary divisor of the group order (that is, a divisor coprime to its cofactor).

The converse of Lagrange's theorem states that if d is a divisor of the order of a group G, then there exists a subgroup H where |H| = d. We will examine the alternating group A4, the set of even permutations, as a subgroup of the symmetric group S4. |A4| = 12, so the divisors are 1, 2, 3, 4, 6, 12. Assume to the contrary that there exists a subgroup H in A4 with |H| = 6.

Let V be the non-cyclic subgroup of A4 called the Klein four-group. Let K = H ∩ V. Since both H and V are subgroups of A4, K is also a subgroup of A4. From Lagrange's theorem, the order of K must divide both 6 and 4, the orders of H and V respectively. The only two positive integers that divide both 6 and 4 are 1 and 2. So |K| = 1 or 2.

Assume |K| = 1; then K = {e}.
If H does not share any elements with V, then the 5 elements in H besides the identity element e must be of the form (a b c), where a, b, c are distinct elements in {1, 2, 3, 4}. Since any element of the form (a b c) squared is (a c b), and (a b c)(a c b) = e, any element of H of the form (a b c) must be paired with its inverse. Specifically, the remaining 5 elements of H must come from distinct pairs of elements in A4 that are not in V. This is impossible, since such elements come in pairs and so cannot total 5 elements. Thus the assumption that |K| = 1 is wrong, so |K| = 2.

Then K = {e, v}, where v ∈ V; v must be of the form (a b)(c d), where a, b, c, d are distinct elements of {1, 2, 3, 4}. The other four elements in H are cycles of length 3.

Note that the cosets generated by a subgroup of a group form a partition of the group: the cosets generated by a specific subgroup are either identical to each other or disjoint. The index of a subgroup in a group, [A4 : H] = |A4|/|H|, is the number of cosets generated by that subgroup. Since |A4| = 12 and |H| = 6, H will generate two left cosets: one that is equal to H and another, gH, that is of length 6 and includes all the elements of A4 not in H. Since there are only 2 distinct cosets generated by H, H must be normal. Because of that, H = gHg⁻¹ (∀g ∈ A4). In particular, this is true for g = (a b c) ∈ A4. Since H = gHg⁻¹, we have gvg⁻¹ ∈ H.

Without loss of generality, assume that a = 1, b = 2, c = 3, d = 4. Then g = (1 2 3), v = (1 2)(3 4), g⁻¹ = (1 3 2), gv = (1 3 4), and gvg⁻¹ = (1 4)(2 3). Transforming back, we get gvg⁻¹ = (a d)(b c). Because V contains all products of two disjoint transpositions in A4, gvg⁻¹ ∈ V. Hence gvg⁻¹ ∈ H ∩ V = K.

Since gvg⁻¹ ≠ v, we have demonstrated that there is a third element in K. But earlier we assumed that |K| = 2, so we have a contradiction. Therefore, our original assumption that there is a subgroup of order 6 is not true; consequently there is no subgroup of order 6 in A4, and the converse of Lagrange's theorem is not necessarily true. Q.E.D.

Lagrange himself did not prove the theorem in its general form.
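The non-existence of an order-6 subgroup can also be confirmed by exhaustive search (illustrative Python; it relies on the fact that a nonempty finite subset closed under the group operation is automatically a subgroup):

```python
from itertools import permutations, combinations

# A4: the even permutations of {0, 1, 2, 3}, detected by inversion count.
def parity(p):
    return sum(p[i] > p[j] for i in range(4) for j in range(i + 1, 4)) % 2

A4 = [p for p in permutations(range(4)) if parity(p) == 0]
compose = lambda p, q: tuple(p[q[i]] for i in range(4))

# For a finite nonempty subset, closure under composition suffices
# to be a subgroup (identity and inverses follow automatically).
def is_subgroup(S):
    return all(compose(a, b) in S for a in S for b in S)

# Exhaustive check over all C(12, 6) = 924 six-element subsets.
assert len(A4) == 12
assert not any(is_subgroup(set(c)) for c in combinations(A4, 6))
```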
He stated, in his article Réflexions sur la résolution algébrique des équations,[3] that if a polynomial in n variables has its variables permuted in all n! ways, the number of different polynomials that are obtained is always a factor of n!. (For example, if the variables x, y, and z are permuted in all 6 possible ways in the polynomial x + y − z, then we get a total of 3 different polynomials: x + y − z, x + z − y, and y + z − x. Note that 3 is a factor of 6.) The number of such polynomials is the index in the symmetric group Sn of the subgroup H of permutations that preserve the polynomial. (For the example of x + y − z, the subgroup H in S3 contains the identity and the transposition (x y).) So the size of H divides n!. With the later development of abstract groups, this result of Lagrange on polynomials was recognized to extend to the general theorem about finite groups which now bears his name.

In his Disquisitiones Arithmeticae in 1801, Carl Friedrich Gauss proved Lagrange's theorem for the special case of (Z/pZ)*, the multiplicative group of nonzero integers modulo p, where p is a prime.[4] In 1844, Augustin-Louis Cauchy proved Lagrange's theorem for the symmetric group Sn.[5]

Camille Jordan finally proved Lagrange's theorem for the case of any permutation group in 1861.[6]
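Lagrange's polynomial count can be replayed mechanically (illustrative Python sketch, representing x + y − z by its coefficient vector over the ordered variables (x, y, z)):

```python
from itertools import permutations

# Permuting the variables of x + y - z in all 3! ways yields 3 distinct
# polynomials, and 3 divides 3! = 6, as Lagrange observed.
coeffs = (1, 1, -1)                      # x + y - z
variants = {tuple(coeffs[p[i]] for i in range(3))
            for p in permutations(range(3))}

assert len(variants) == 3                # x+y-z, x+z-y, y+z-x
assert 6 % len(variants) == 0            # the count divides n! = 6
```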
https://en.wikipedia.org/wiki/Lagrange%27s_theorem_(group_theory)
In modular arithmetic, Thue's lemma roughly states that every modular integer may be represented by a "modular fraction" such that the numerator and the denominator have absolute values not greater than the square root of the modulus.

More precisely, for every pair of integers (a, m) with m > 1, given two positive integers X and Y such that X ≤ m < XY, there are two integers x and y such that

x ≡ ay (mod m)

and

|x| < X, 0 < y < Y.

Usually, one takes X and Y equal to the smallest integer greater than the square root of m, but the general form is sometimes useful, and makes the uniqueness theorem (below) easier to state.[1]

The first known proof is attributed to Axel Thue (1902),[2] who used a pigeonhole argument.[3] It can be used to prove Fermat's theorem on sums of two squares by taking m to be a prime p that is congruent to 1 modulo 4 and taking a to satisfy a² + 1 ≡ 0 mod p. (Such an "a" is guaranteed for "p" by Wilson's theorem.[4])

In general, the solution whose existence is asserted by Thue's lemma is not unique. For example, when a = 1 there are usually several solutions (x, y) = (1, 1), (2, 2), (3, 3), ..., provided that X and Y are not too small. Therefore, one may only hope for uniqueness for the rational number x/y, to which a is congruent modulo m if y and m are coprime. Nevertheless, this rational number need not be unique; for example, if m = 5, a = 2 and X = Y = 3, one has the two solutions

(x, y) = (2, 1) and (x, y) = (−1, 2).

However, for X and Y small enough, if a solution exists, it is unique. More precisely, with the above notation, if 2XY < m, and

x₁ ≡ ay₁ (mod m) and x₂ ≡ ay₂ (mod m),

with

|x₁|, |x₂| < X

and

0 < y₁, y₂ < Y,

then x₁/y₁ = x₂/y₂ as rational numbers. This result is the basis for rational reconstruction, which allows using modular arithmetic for computing rational numbers for which one knows bounds for numerators and denominators.[5]

The proof is rather easy: by multiplying each congruence by the other yi and subtracting, one gets

y₂x₁ − y₁x₂ ≡ 0 (mod m).

The hypotheses imply that each term has an absolute value lower than XY < m/2, and thus that the absolute value of their difference is lower than m. This implies that y₂x₁ − y₁x₂ = 0, hence the result.
The original proof of Thue's lemma is not efficient, in the sense that it does not provide any fast method for computing the solution. The extended Euclidean algorithm allows us to provide a proof that leads to an efficient algorithm with the same computational complexity as the Euclidean algorithm.[6]

More precisely, given the two integers m and a appearing in Thue's lemma, the extended Euclidean algorithm computes three sequences of integers (ti), (xi) and (yi) such that

ti m + yi a = xi,

where the xi are non-negative and strictly decreasing (so that xi ≡ a yi (mod m)). The desired solution is, up to the sign, the first pair (xi, yi) such that xi < X.
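A minimal sketch of this computation in Python (`thue` is a hypothetical helper name, not from any library), together with the sum-of-two-squares application for a prime p ≡ 1 (mod 4):

```python
from math import isqrt

def thue(a, m):
    """Run the extended Euclidean algorithm on (m, a), stopping at the
    first remainder x below sqrt(m); returns (x, y) with x ≡ a*y (mod m)."""
    X = isqrt(m)
    x0, x1 = m, a % m          # remainder sequence
    y0, y1 = 0, 1              # Bezout coefficients of a
    while x1 > X:
        q = x0 // x1
        x0, x1 = x1, x0 - q * x1
        y0, y1 = y1, y0 - q * y1
    return x1, y1

# Fermat's two-squares theorem: for p ≡ 1 (mod 4), pick a with
# a² ≡ -1 (mod p); then x ≡ a*y (mod p) gives x² + y² ≡ 0 (mod p),
# and the size bounds force x² + y² = p.
p = 13
a = next(a for a in range(2, p) if (a * a + 1) % p == 0)
x, y = thue(a, p)
assert (x - a * y) % p == 0
assert x * x + y * y == p
```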
https://en.wikipedia.org/wiki/Thue%27s_lemma
Ethereum is a decentralized blockchain with smart contract functionality. Ether (abbreviation: ETH[a]) is the native cryptocurrency of the platform. Among cryptocurrencies, ether is second only to bitcoin in market capitalization.[2][3] It is open-source software.

Ethereum was conceived in 2013 by programmer Vitalik Buterin.[4] Other founders include Gavin Wood, Charles Hoskinson, Anthony Di Iorio, and Joseph Lubin.[5] In 2014, development work began and was crowdfunded, and the network went live on 30 July 2015.[6] Ethereum allows anyone to deploy decentralized applications onto it, with which users can interact.[7] Decentralized finance (DeFi) applications provide financial instruments that do not directly rely on financial intermediaries like brokerages, exchanges, or banks. This facilitates borrowing against cryptocurrency holdings or lending them out for interest.[8][9] Ethereum also allows users to create and exchange non-fungible tokens (NFTs), which are tokens that can be tied to unique digital assets, such as images. Additionally, many other cryptocurrencies utilize the ERC-20 token standard on top of the Ethereum blockchain and have utilized the platform for initial coin offerings.
On 15 September 2022, Ethereum transitioned its consensus mechanism from proof-of-work (PoW) to proof-of-stake (PoS) in an update known as "The Merge", which cut the blockchain's energy usage by 99%.[10]

Ethereum was initially described in late 2013 in a white paper by Vitalik Buterin,[4][11] a programmer and co-founder of Bitcoin Magazine, that described a way to build decentralized applications.[12][13] Buterin argued to the Bitcoin Core developers that blockchain technology could benefit from other applications besides money and that it needed a more robust language for application development[14]: 88 that could lead to attaching[clarification needed] real-world assets, such as stocks and property, to the blockchain.[15] In 2013, Buterin briefly worked with eToro CEO Yoni Assia on the Colored Coins project and drafted its white paper outlining additional use cases for blockchain technology.[16] However, after failing to gain agreement on how the project should proceed, he proposed the development of a new platform with a more robust scripting language (a Turing-complete programming language[17]) that would eventually become Ethereum.[14]

Ethereum was announced at the North American Bitcoin Conference in Miami in January 2014.[18] During the conference, Gavin Wood, Charles Hoskinson, and Anthony Di Iorio (who financed the project) rented a house in Miami with Buterin at which they could develop a fuller sense of what Ethereum might become.[18] Di Iorio invited friend Joseph Lubin, who invited reporter Morgen Peck, to bear witness.[18] Peck subsequently wrote about the experience in Wired.[19] Six months later the founders met again in Zug, Switzerland, where Buterin told the founders that the project would proceed as a non-profit.
Hoskinson left the project at that time and soon after founded IOHK, a blockchain company responsible for Cardano.[18]

Ethereum has an unusually long list of founders.[20] Anthony Di Iorio wrote: "Ethereum was founded by Vitalik Buterin, Myself, Charles Hoskinson, Mihai Alisie & Amir Chetrit (the initial 5) in December 2013. Joseph Lubin, Gavin Wood, & Jeffrey Wilcke were added in early 2014 as founders." Buterin chose the name Ethereum after browsing a list of elements from science fiction on Wikipedia. He stated, "I immediately realized that I liked it better than all of the other alternatives that I had seen; I suppose it was that [it] sounded nice and it had the word 'ether', referring to the hypothetical invisible medium that permeates the universe and allows light to travel."[18] Buterin wanted his platform to be the underlying and imperceptible medium for the applications running on top of it.[21]

Formal development of the software underlying Ethereum began in early 2014 through a Swiss company, Ethereum Switzerland GmbH (EthSuisse).[22] The idea of putting executable smart contracts in the blockchain needed to be specified before it could be implemented in software. This work was done by Gavin Wood, then the chief technology officer, in the Ethereum Yellow Paper, which specified the Ethereum Virtual Machine.[23][24] Subsequently, a Swiss non-profit foundation, the Ethereum Foundation (Stiftung Ethereum), was founded. Development was funded by an online public crowd sale from July to August 2014, in which participants bought the Ethereum value token (ether) with another digital currency, bitcoin.
While there was early praise for the technical innovations of Ethereum, questions were also raised about its security and scalability.[12]

The Foundation funded multiple teams in different cities building three separate implementations of the protocol: Geth (Go), Pyethereum (Python), and a C++ implementation, as well as Swarm (decentralized file storage) and Mist Browser (a wallet, browser and user interface for smart contracts, now defunct), among other projects. The intended purpose of the three distinct implementations was that if one of them had a bug, the other two could be used as a comparison.

Several codenamed prototypes of Ethereum were developed over 18 months in 2014 and 2015 by the Ethereum Foundation as part of their proof-of-concept series.[4] "Olympic" was the last prototype and public beta pre-release. The Olympic network gave users a bug bounty of 25,000 ether for stress-testing the Ethereum blockchain. On 30 July 2015, "Frontier" marked the official launch of the Ethereum platform, and Ethereum created its "genesis block".[4][25] The genesis block contained 8,893 transactions allocating various amounts of ether to different addresses, and a block reward of 5 ETH.[citation needed]

Since the initial launch, Ethereum has undergone a number of planned protocol upgrades, which are important changes affecting the underlying functionality and/or incentive structures of the platform.[26][27] Protocol upgrades are accomplished by means of a hard fork.[citation needed]

In 2016, a decentralized autonomous organization called The DAO, a set of smart contracts developed on the platform, raised a record US$150 million in a crowd sale to fund the project.[28] Emin Gün Sirer, Vlad Zamfir and Dino Mark had written a paper, A Call for a Temporary Moratorium on The DAO, in which they described ways in which The DAO might be vulnerable to attacks.[29] The DAO was exploited in June 2016 when US$50 million of DAO tokens were stolen by an unknown hacker.[30][31] The event sparked a debate in the
crypto-community about whether Ethereum should perform a contentious "hard fork" to reappropriate the affected funds.[32] The fork resulted in the network splitting into two blockchains: Ethereum, with the theft reversed, and Ethereum Classic, which continued on the original chain.[33]

In March 2017, various blockchain startups, research groups, and Fortune 500 companies announced the creation of the Enterprise Ethereum Alliance (EEA) with 30 founding members.[34] By May 2017, the nonprofit organization had 116 enterprise members, including ConsenSys, CME Group, Cornell University's research group, Toyota Research Institute, Samsung SDS, Microsoft, Intel, J. P. Morgan, Cooley LLP, Merck KGaA, DTCC, Deloitte, Accenture, Banco Santander, BNY Mellon, ING, and National Bank of Canada.[35][36] By July 2017, there were over 150 members in the alliance, including MasterCard, Cisco Systems, Sberbank, and Scotiabank.[37]

In 2024, Paul Brody, EEA board member for EY, was announced as the new chairperson, and Karen Scarbrough, board member for Microsoft, as the new executive director.
Vanessa Grellet from Arche Capital also joined as a new board member.[38]

In 2017, CryptoKitties, the blockchain game and decentralized application (dApp) featuring digital cat artwork as NFTs, was launched on the Ethereum network.[39] In cultivating popularity with users and collectors, it gained notable mainstream media attention, providing significant exposure to Ethereum in the process.[40] It was considered the most popular smart contract in use on the network,[41] but it also highlighted concerns over Ethereum's scalability due to the game's substantial consumption of network capacity at the time.[42]

In January 2018, a community-driven paper (an EIP, "Ethereum Improvement Proposal") under the leadership of civic hacker and lead author William Entriken was published, called ERC-721: Non-Fungible Token Standard.[43] It introduced ERC-721, the first official NFT standard on Ethereum.[44] This standardization allowed Ethereum to become central to a multi-billion-dollar digital collectibles market.[45]

By January 2018, ether was the second-largest cryptocurrency in terms of market capitalization, behind bitcoin.[46] As of 2021[update], it maintained that relative position.[2][3]

In 2019, Ethereum Foundation employee Virgil Griffith was arrested by the US government for presenting at a blockchain conference in North Korea.[47] He would later plead guilty to one count of conspiring to violate the International Emergency Economic Powers Act in 2021.[48]

In March 2021, Visa Inc. announced that it had begun settling stablecoin transactions using Ethereum.[49] In April 2021, JP Morgan Chase, UBS, and MasterCard announced that they were investing US$65 million into ConsenSys, a software development firm that builds Ethereum-related infrastructure.[50]

There were two network upgrades in 2021.
The first was "Berlin", implemented on 14 April 2021.[51] The second was "London", which took effect on 5 August.[52] The London upgrade included Ethereum Improvement Proposal ("EIP") 1559, a mechanism for reducing transaction fee volatility. The mechanism causes a portion of the ether paid in transaction fees for each block to be destroyed rather than given to the block proposer, reducing the inflation rate of ether and potentially resulting in periods of deflation.[53]

On 27 August 2021, the blockchain experienced a brief fork that was the result of clients running different incompatible software versions.[54]

"The Merge" (formerly called Ethereum 2.0) was a set of three updates (Bellatrix, Paris, and Shapella) that transitioned Ethereum's consensus protocol from proof-of-work to proof-of-stake. Bellatrix updated the proof-of-stake Beacon Chain and introduced economic slashing. Paris was the event at block 15537393 on 15 September 2022 that merged Ethereum into the Beacon Chain. Shapella (Shanghai-Capella) was the update that allowed for staking withdrawals.[55]

The switch from proof-of-work to proof-of-stake on 15 September 2022 cut Ethereum's energy usage by over 99%.[56] However, the impact this has on global energy consumption and climate change may be limited, since the computers previously used for mining ether may be used to mine other cryptocurrencies that are energy-intensive.[10]

On 13 March 2024, the "Dencun" (also "Deneb-Cancun") set of upgrades went live. These included EIP-4844 (proto-danksharding), which introduced "blobs" that can be used for general-purpose data availability. Because blob data is only stored temporarily, it is much cheaper than using normal calldata for storage. This allows L2 networks to greatly reduce the cost of posting compressed transaction data to L1.[57]

According to Ethereum.org, Prague-Electra ("Pectra") is expected to be launched in mid-2025.
It brings various updates, including EIP-7251, which raises the staking amount per validator from exactly 32 ETH to anywhere between 32 ETH and 2048 ETH, and EIP-7702, which allows EOA addresses to use functionality from a smart contract.[58]

Ethereum is a peer-to-peer network that maintains a database containing the storage values of all Ethereum accounts and processes state-altering transactions.[59] Approximately every 12 seconds, a batch of new transactions, known as a "block", is processed by the network. Each block contains a cryptographic hash identifying the series of blocks that must precede it if the block is to be considered valid. This series of blocks is known as the blockchain.[60]

Each "node" (network participant) connects with a relatively small subset of the network to offer blocks and unvalidated transactions (i.e. transactions not yet in the blockchain) to its peers for download, and it downloads any of these from its peers that it doesn't already have. Each node usually has a unique set of peers, so offering an item to its peers results in the propagation of that item throughout the entire network within seconds. A node's collection of unvalidated transactions is known as its "mempool".[61]

A node may choose to create a copy of the state for itself. It does this by starting with the genesis state and executing every transaction in the blockchain, in the proper order of blocks and in the order they are listed within each block.[62]

Any Ethereum account may "stake" (deposit) 32 ETH to become a validator. At the end of each "epoch" (32 block slots, each slot lasting 12 seconds), each validator is pseudorandomly assigned to one of the slots of the epoch after the next, either as the block proposer or as an attester. During a slot, the block proposer uses their mempool to create a block that is intended to become the new "head" (latest block) of the blockchain, and the attesters attest to which block is at the head of the chain.
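The role of the hash links between blocks can be illustrated with a toy chain (Python sketch only; real Ethereum blocks use Keccak-256 and a far richer block format — SHA-256 and JSON stand in here purely for illustration):

```python
import hashlib
import json

# Toy blockchain: each block commits to its predecessor via a hash.
def block_hash(block):
    return hashlib.sha256(
        json.dumps(block, sort_keys=True).encode()).hexdigest()

genesis = {"parent": None, "txs": ["allocate ether"]}
block1  = {"parent": block_hash(genesis), "txs": ["tx1", "tx2"]}
block2  = {"parent": block_hash(block1),  "txs": ["tx3"]}

# Tampering with an earlier block invalidates every later parent link.
genesis["txs"] = ["allocate more ether"]
assert block1["parent"] != block_hash(genesis)
```

This is why a block "identifies the series of blocks that must precede it": altering any ancestor changes its hash, breaking the chain of commitments.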
If a validator makes self-contradicting proposals or attestations, or if it is inactive, it loses a portion of its stake. It may add to its stake at any time. A validator's attestation is given a weight equal to its stake or 32, whichever is less. According to the Ethereum protocol, the blockchain with the highest accumulated weight of attestations at any given time is to be regarded as the canonical chain.

Validators are rewarded for making valid proposals and attestations. A validator's rewards are paid via transactions within the same chain that contains their proposal or attestation, and therefore would have little or no market value unless that chain becomes the canonical chain. This incentivizes validators to support the chain that they think other validators view as the canonical chain, which results in a high degree of consensus.[63]

Ether (ETH) is the cryptocurrency generated in accordance with the Ethereum protocol as a reward to validators in a proof-of-stake system for adding blocks to the blockchain. Ether is represented in the state as an unsigned integer associated with each account, this being the account's ETH balance denominated in wei (10^18 wei = 1 ether). At the end of each epoch, new ETH is generated by the addition of protocol-specified amounts to the balances of all validators that made valid block proposals or attestations in the epoch before the last (i.e. the epoch being finalized).

Additionally, ether is the only currency accepted by the protocol as payment for the transaction fee. The transaction fee is composed of two parts: the base fee and the tip. The base fee is "burned" (deleted from existence) and the tip goes to the block proposer. The validator reward together with the tips provides the incentive to validators to keep the blockchain growing (i.e. to keep processing new transactions). Therefore, ETH is fundamental to the operation of the network.
Ether may be "sent" from one account to another via a transaction, the execution of which simply entails subtracting the amount to be sent from the sender's balance and adding the same amount to the recipient's balance.[64]

Ether is often erroneously referred to as "Ethereum".[65]

The uppercase Greek letter Xi, Ξ, is sometimes used as a currency symbol for ether.[citation needed]

There are two types of accounts on Ethereum: user accounts (also known as externally-owned accounts) and contract accounts. Both types have an ETH balance, may transfer ETH to any account, may execute the code of another contract or create a new contract, and are identified on the blockchain and in the state by an account address.[66]

Contracts are the only type of account that has associated bytecode and storage (to store contract-specific state). The code of a contract is evaluated when a transaction is sent to it. The code of the contract may read user-specified data from the transaction, and may have a return value. In addition to control flow statements, the bytecode may include instructions to send ETH, read from and write to the contract's storage, create temporary storage (memory) that vanishes at the end of code evaluation, perform arithmetic and hashing operations, send transaction-like calls to other contracts (thus executing their code), create new contracts, and query information about the current transaction or the blockchain.[67][unreliable source?]

Ethereum addresses are composed of the prefix "0x" (a common identifier for hexadecimal) concatenated with the rightmost 20 bytes of the Keccak-256 hash of the ECDSA public key (the curve used is the so-called secp256k1). In hexadecimal, two digits represent a byte, and so addresses contain 40 hexadecimal digits after the "0x", e.g. 0xb794f5ea0ba39494ce839613fffba74279579268.
Contract addresses are in the same format; however, they are determined by the sender and the creation transaction nonce.[24][non-primary source needed]

The Ethereum Virtual Machine (EVM) is the runtime environment for transaction execution in Ethereum. The EVM is a stack-based virtual machine with an instruction set specifically designed for Ethereum. The instruction set includes, among other things, stack operations, memory operations, and operations to inspect the current execution context, such as remaining gas, information about the current block, and the current transaction. The EVM is designed to be deterministic on a wide variety of hardware and operating systems, so that given a pre-transaction state and a transaction, each node produces the same post-transaction state, thereby enabling network consensus. The formal definition of the EVM is specified in the Ethereum Yellow Paper.[24][68] EVMs have been implemented in C++, C#, Go, Haskell, Java, JavaScript, Python, Ruby, Rust, Elixir, Erlang, and soon[when?] WebAssembly.[citation needed]

Gas is a unit of account within the EVM used in the calculation of the transaction fee, which is the amount of ETH a transaction's sender must pay to the network to have the transaction included in the blockchain. Each type of operation which may be performed by the EVM is hardcoded with a certain gas cost, which is intended to be roughly proportional to the monetary value of the resources (e.g. computation and storage) a node must expend or dedicate to perform that operation.[citation needed]

When a sender creates a transaction, the sender must specify a gas limit and gas price. The gas limit is the maximum amount of gas the sender is willing to use in the transaction, and the gas price is the amount of ETH the sender wishes to pay to the network per unit of gas used. A transaction may only be included in the blockchain at a block slot that has a base gas price less than or equal to the transaction's gas price.
The portion of the gas price in excess of the base gas price is known as the tip and goes to the block proposer; the higher the tip, the more incentive a block proposer has to include the transaction in their block, and thus the quicker the transaction will be included in the blockchain. The sender buys the full amount of gas up-front (i.e. their ETH balance is debited gas limit × gas price and their gas balance is set to gas limit) at the start of the execution of the transaction, and is refunded at the end for any unused gas. If at any point the transaction does not have enough gas to perform the next operation, the transaction is reverted, but the sender is still refunded only for the unused gas. In user interfaces, gas prices are typically denominated in gigawei (Gwei), a subunit of ETH equal to 10⁻⁹ ETH.[69] The EVM's instruction set is Turing-complete.[24] Popular uses of Ethereum have included the creation of fungible (ERC-20) and non-fungible (ERC-721) tokens with a variety of properties, crowdfunding (e.g. initial coin offerings), decentralized finance, decentralized exchanges, decentralized autonomous organizations (DAOs), games, prediction markets, and gambling.[citation needed] Ethereum's smart contracts are written in high-level programming languages and then compiled down to EVM bytecode, which is then deployed to the Ethereum blockchain. They can be written in Solidity (a language with similarities to C and JavaScript), as well as others.
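The gas accounting described above — an up-front debit of gas limit × gas price, a refund for unused gas, and a tip above the base price going to the proposer — can be sketched as follows. The numbers are illustrative only, not real network values; the 21,000-gas figure for a plain ETH transfer is the cost stated later in this article:

```python
# Illustrative model of the fee flow described above (amounts in Gwei).
def transaction_fee(gas_limit: int, gas_price_gwei: int,
                    base_price_gwei: int, gas_used: int) -> dict:
    assert gas_price_gwei >= base_price_gwei, "cannot be included below base price"
    assert gas_used <= gas_limit, "execution beyond the limit reverts"
    upfront = gas_limit * gas_price_gwei                 # debited at start
    refund = (gas_limit - gas_used) * gas_price_gwei     # unused gas returned
    tip_to_proposer = gas_used * (gas_price_gwei - base_price_gwei)
    return {"upfront": upfront, "refund": refund,
            "net_fee": upfront - refund, "tip": tip_to_proposer}

# A plain ETH transfer costs 21,000 gas:
fee = transaction_fee(gas_limit=30_000, gas_price_gwei=25,
                      base_price_gwei=20, gas_used=21_000)
print(fee)  # net_fee = 21_000 * 25 = 525_000 Gwei; tip = 21_000 * 5 = 105_000
```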
Source code and compiler information are usually published on blockchain explorer websites soon after the launch of the contract, so that users can see the code and verify that it compiles to the bytecode that is on-chain.[citation needed] One issue related to using smart contracts on a public blockchain is that bugs, including security holes, are visible to all but cannot be fixed quickly.[70] One example of this is the 2016 attack on The DAO, which could not be quickly stopped or reversed.[30] The ERC-20 (Ethereum Request-for-Comments #20) Token Standard allows for fungible tokens on the Ethereum blockchain. The standard, proposed by Fabian Vogelsteller in November 2015, implements an API for tokens within smart contracts. The standard provides functions that include the transfer of tokens from one account to another, getting the current token balance of an account, and getting the total supply of the token available on the network. Smart contracts that correctly implement ERC-20 processes are called ERC-20 Token Contracts, and they keep track of created tokens on Ethereum.
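The three functions named above (transfer, balance query, total supply) can be modelled with a toy ledger. Real ERC-20 contracts are EVM bytecode, typically compiled from Solidity; this Python class is only an illustrative sketch of the bookkeeping, not the standard's actual interface:

```python
# Toy ledger mirroring the three ERC-20 functions named above.
# Not a real contract: just the balance bookkeeping, in Python.
class ToyToken:
    def __init__(self, initial_holder: str, supply: int):
        self._balances = {initial_holder: supply}
        self._total = supply

    def total_supply(self) -> int:
        return self._total

    def balance_of(self, account: str) -> int:
        return self._balances.get(account, 0)

    def transfer(self, sender: str, recipient: str, amount: int) -> bool:
        if self.balance_of(sender) < amount:
            return False  # transfers fail on insufficient balance
        self._balances[sender] -= amount
        self._balances[recipient] = self.balance_of(recipient) + amount
        return True

token = ToyToken("alice", 1_000)
token.transfer("alice", "bob", 250)
print(token.balance_of("bob"))   # 250
print(token.total_supply())      # supply is conserved: 1000
```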
Numerous cryptocurrencies have launched as ERC-20 tokens and have been distributed through initial coin offerings.[71] Ethereum also allows for the creation of unique and indivisible tokens, called non-fungible tokens (NFTs).[72] Since tokens of this type are unique, they have been used to represent such things as collectibles, digital art, sports memorabilia, virtual real estate, and items within games.[73] ERC-721 was the first official NFT standard for Ethereum and was followed by ERC-1155, which introduced semi-fungibility. Both are widely used,[74] though some fully fungible ERC-20 tokens have been used for NFTs, such as CryptoPunks.[75] The first NFT project, Etheria, a 3D map of tradable and customizable hexagonal tiles, was deployed to the network in October 2015 and demonstrated live at DEVCON1 in November of that year.[76] In 2021, Christie's sold a digital image with an NFT by Beeple for US$69.3 million, making him the third-most-valuable living artist in terms of auction prices at the time, although observers have noted that both the buyer and seller had a vested interest in driving demand for the artist's work.[77][78] Decentralized finance (DeFi) offers financial instruments in a decentralized architecture, outside of companies' and governments' control, such as money market funds which let users earn interest.[79] DeFi applications are typically accessed through a Web3-enabled browser extension or application, such as MetaMask, which allows users to directly interact with the Ethereum blockchain through a website.[80] Many of these DApps can connect and work together to create complex financial services.[81] Examples of DeFi platforms include MakerDAO.[82] Uniswap, a decentralized exchange for tokens on Ethereum, grew from US$20 million in liquidity to US$2.9 billion in 2020.[83] As of October 2020, over US$11 billion was invested in various DeFi protocols.[84] Additionally, through a process called "wrapping", certain DeFi protocols allow synthetic versions of various assets
(such as bitcoin, gold, and oil) to be tradeable on Ethereum and also compatible with all of Ethereum's major wallets and applications.[84] Ethereum-based software and networks, independent from the public Ethereum chain, have been tested by enterprise software companies.[85] Interested parties have included Microsoft, IBM, JPMorgan Chase,[64] Deloitte, R3, and Innovate UK (cross-border payments prototype).[86] Barclays, UBS, Credit Suisse, Amazon, Visa, and other companies have also experimented with Ethereum.[87][88] Ethereum-based permissioned blockchain variants are used and being investigated for various projects. As of March 2025[update], the maximum theoretical throughput for Ethereum is 142 TPS (given its 36M gas limit, 12-second blocks, and the 21,000-gas cost of an ETH transfer). Because large smart-contract and batch transactions consume more gas per block, the average transaction throughput is much lower. As of January 2016[update], Ethereum averaged about 25 transactions per second; this did not change after the move to proof-of-stake. In comparison, the Visa payment platform processes 45,000 payments per second. This has led some to question the scalability of Ethereum.[91] Ethereum's blockchain uses a Merkle–Patricia trie to store account state in each block.[92] The trie allows for storage savings, set-membership proofs (called "Merkle proofs"), and light-client synchronization. The network has faced congestion problems, such as in 2017 in relation to CryptoKitties.[93] The legal status of Ether (ETH), Ethereum's native token, remains subject to uncertainty and varies substantially from one jurisdiction to another. In the United States, regulatory authorities have increasingly signaled that Ether should be treated as a commodity under the jurisdiction of the Commodity Futures Trading Commission (CFTC).
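The 142 TPS figure quoted above follows directly from the stated parameters:

```python
# Theoretical peak throughput from the parameters quoted above.
gas_limit_per_block = 36_000_000   # 36M gas per block
block_time_s = 12                  # one block every 12 seconds
gas_per_eth_transfer = 21_000      # cheapest transaction type

transfers_per_block = gas_limit_per_block // gas_per_eth_transfer  # 1714
tps = transfers_per_block / block_time_s
print(int(tps))  # 142 transactions per second
```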
The CFTC has repeatedly asserted its regulatory authority over Ethereum, with former CFTC Chairman Heath Tarbert stating in 2019 that "ETH is a commodity" and subsequent leadership maintaining this position.[94][95] This classification is largely based on the decentralized nature of the Ethereum network, distinguishing it from securities that represent investments in a common enterprise. From a private law perspective, many jurisdictions have recognized Ether as a form of intangible personal property that can be owned, transferred, and used as collateral, similar to other forms of personal property.[96] In the United States specifically, the 2022 Amendments to the Uniform Commercial Code (UCC) introduced "controllable electronic records" (CERs) as a new category of personal property that includes digital assets like Ether. Under UCC Article 12, ETH is classified as a CER, which provides a comprehensive framework for its commercial circulation.[97]
https://en.wikipedia.org/wiki/Ethash
In computer science, algorithmic efficiency is a property of an algorithm which relates to the amount of computational resources used by the algorithm. Algorithmic efficiency can be thought of as analogous to engineering productivity for a repeating or continuous process. For maximum efficiency it is desirable to minimize resource usage. However, different resources such as time and space complexity cannot be compared directly, so which of two algorithms is considered more efficient often depends on which measure of efficiency is considered most important. For example, bubble sort and timsort are both algorithms to sort a list of items from smallest to largest. Bubble sort organizes the list in time proportional to the number of elements squared (O(n²), see Big O notation), but only requires a small amount of extra memory which is constant with respect to the length of the list (O(1)). Timsort sorts the list in time linearithmic (proportional to a quantity times its logarithm) in the list's length (O(n log n)), but has a space requirement linear in the length of the list (O(n)). If large lists must be sorted at high speed for a given application, timsort is a better choice; however, if minimizing the memory footprint of the sorting is more important, bubble sort is a better choice. The importance of efficiency with respect to time was emphasized by Ada Lovelace in 1843 as applied to Charles Babbage's mechanical analytical engine: "In almost every computation a great variety of arrangements for the succession of the processes is possible, and various considerations must influence the selections amongst them for the purposes of a calculating engine. One essential object is to choose that arrangement which shall tend to reduce to a minimum the time necessary for completing the calculation"[1] Early electronic computers had both limited speed and limited random access memory. Therefore, a space–time trade-off occurred.
A task could use a fast algorithm using a lot of memory, or it could use a slow algorithm using little memory. The engineering trade-off was therefore to use the fastest algorithm that could fit in the available memory. Modern computers are significantly faster than early computers and have a much larger amount of memory available (gigabytes instead of kilobytes). Nevertheless, Donald Knuth emphasized that efficiency is still an important consideration: "In established engineering disciplines a 12% improvement, easily obtained, is never considered marginal and I believe the same viewpoint should prevail in software engineering"[2] An algorithm is considered efficient if its resource consumption, also known as computational cost, is at or below some acceptable level. Roughly speaking, 'acceptable' means: it will run in a reasonable amount of time or space on an available computer, typically as a function of the size of the input. Since the 1950s computers have seen dramatic increases in both the available computational power and the available amount of memory, so current acceptable levels would have been unacceptable even 10 years ago. In fact, thanks to the approximate doubling of computer power every 2 years, tasks that are acceptably efficient on modern smartphones and embedded systems may have been unacceptably inefficient for industrial servers 10 years ago. Computer manufacturers frequently bring out new models, often with higher performance. Software costs can be quite high, so in some cases the simplest and cheapest way of getting higher performance might be to just buy a faster computer, provided it is compatible with an existing computer. There are many ways in which the resources used by an algorithm can be measured: the two most common measures are speed and memory usage; other measures could include transmission speed, temporary disk usage, long-term disk usage, power consumption, total cost of ownership, response time to external stimuli, etc.
Many of these measures depend on the size of the input to the algorithm, i.e. the amount of data to be processed. They might also depend on the way in which the data is arranged; for example, some sorting algorithms perform poorly on data which is already sorted, or which is sorted in reverse order. In practice, there are other factors which can affect the efficiency of an algorithm, such as requirements for accuracy and/or reliability. As detailed below, the way in which an algorithm is implemented can also have a significant effect on actual efficiency, though many aspects of this relate to optimization issues. In the theoretical analysis of algorithms, the normal practice is to estimate their complexity in the asymptotic sense. The most commonly used notation to describe resource consumption or "complexity" is Donald Knuth's Big O notation, representing the complexity of an algorithm as a function of the size of the input n. Big O notation is an asymptotic measure of function complexity, where f(n) = O(g(n)) roughly means the time requirement for an algorithm is proportional to g(n), omitting lower-order terms that contribute less than g(n) to the growth of the function as n grows arbitrarily large. This estimate may be misleading when n is small, but is generally sufficiently accurate when n is large, as the notation is asymptotic. For example, bubble sort may be faster than merge sort when only a few items are to be sorted; however, either implementation is likely to meet performance requirements for a small list. Typically, programmers are interested in algorithms that scale efficiently to large input sizes, and merge sort is preferred over bubble sort for lists of length encountered in most data-intensive programs.
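The quadratic scaling discussed above can be made concrete by counting basic operations rather than measuring wall-clock time. A minimal instrumented bubble sort (the comparison count quadruples each time the input size doubles, the signature of O(n²) growth):

```python
# Instrumented bubble sort: counting comparisons makes the O(n^2)
# growth visible without any timing noise.
def bubble_sort(items):
    items = list(items)          # the only extra storage used
    comparisons = 0
    n = len(items)
    for i in range(n - 1):
        for j in range(n - 1 - i):
            comparisons += 1
            if items[j] > items[j + 1]:
                items[j], items[j + 1] = items[j + 1], items[j]
    return items, comparisons

for n in (100, 200, 400):
    _, c = bubble_sort(range(n, 0, -1))  # reversed input: worst case
    print(n, c)  # comparisons = n*(n-1)/2, quadrupling as n doubles
```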
Some examples of Big O notation applied to algorithms' asymptotic time complexity include O(1) (constant), O(log n) (logarithmic), O(n) (linear), O(n log n) (linearithmic), O(n²) (quadratic), and O(2ⁿ) (exponential). For new versions of software, or to provide comparisons with competitive systems, benchmarks are sometimes used, which assist with gauging an algorithm's relative performance. If a new sort algorithm is produced, for example, it can be compared with its predecessors to ensure that it is at least as efficient as before with known data, taking into consideration any functional improvements. Benchmarks can be used by customers when comparing various products from alternative suppliers to estimate which product will best suit their specific requirements in terms of functionality and performance. For example, in the mainframe world certain proprietary sort products from independent software companies such as Syncsort compete with products from the major suppliers such as IBM for speed. Some benchmarks provide opportunities for producing an analysis comparing the relative speed of various compiled and interpreted languages, for example,[3][4] and The Computer Language Benchmarks Game compares the performance of implementations of typical programming problems in several programming languages. Even creating "do it yourself" benchmarks can demonstrate the relative performance of different programming languages, using a variety of user-specified criteria. This is quite simple, as a "Nine language performance roundup" by Christopher W. Cowell-Shah demonstrates by example.[5] Implementation issues can also have an effect on efficiency, such as the choice of programming language, the way in which the algorithm is actually coded,[6] the choice of a compiler for a particular language, the compilation options used, or even the operating system being used. In many cases a language implemented by an interpreter may be much slower than a language implemented by a compiler.[3] See the articles on just-in-time compilation and interpreted languages.
There are other factors which may affect time or space issues, but which may be outside of a programmer's control; these include data alignment, data granularity, cache locality, cache coherency, garbage collection, instruction-level parallelism, multi-threading (at either a hardware or software level), simultaneous multitasking, and subroutine calls.[7] Some processors have capabilities for vector processing, which allow a single instruction to operate on multiple operands; it may or may not be easy for a programmer or compiler to use these capabilities. Algorithms designed for sequential processing may need to be completely redesigned to make use of parallel processing, or they may be easily reconfigured. As parallel and distributed computing grew in importance in the late 2010s, more investments were made into efficient high-level APIs for parallel and distributed computing systems such as CUDA, TensorFlow, Hadoop, OpenMP, and MPI. Another problem which can arise in programming is that processors compatible with the same instruction set (such as x86-64 or ARM) may implement an instruction in different ways, so that instructions which are relatively fast on some models may be relatively slow on other models. This often presents challenges to optimizing compilers, which must have extensive knowledge of the specific CPU and other hardware available on the compilation target to best optimize a program for performance. In the extreme case, a compiler may be forced to emulate instructions not supported on a compilation target platform, forcing it to generate code or link an external library call to produce a result that is otherwise incomputable on that platform, even if it is natively supported and more efficient in hardware on other platforms.
This is often the case in embedded systems with respect to floating-point arithmetic, where small and low-power microcontrollers often lack hardware support for floating-point arithmetic and thus require computationally expensive software routines to produce floating-point calculations. Measures are normally expressed as a function of the size of the input n. The two most common measures are the time the algorithm takes to run and the memory it uses. For computers whose power is supplied by a battery (e.g. laptops and smartphones), or for very long/large calculations (e.g. supercomputers), other measures of interest include power consumption. As of 2018[update], power consumption is growing as an important metric for computational tasks of all types and at all scales ranging from embedded Internet of things devices to system-on-chip devices to server farms. This trend is often referred to as green computing. Less common measures of computational efficiency may also be relevant in some cases. Analysis of algorithms, typically using concepts like time complexity, can be used to get an estimate of the running time as a function of the size of the input data. The result is normally expressed using Big O notation. This is useful for comparing algorithms, especially when a large amount of data is to be processed. More detailed estimates are needed to compare algorithm performance when the amount of data is small, although this is likely to be of less importance. Parallel algorithms may be more difficult to analyze. A benchmark can be used to assess the performance of an algorithm in practice. Many programming languages have an available function which provides CPU time usage. For long-running algorithms the elapsed time could also be of interest. Results should generally be averaged over several tests. Run-based profiling can be very sensitive to hardware configuration and to the possibility of other programs or tasks running at the same time in a multi-processing and multi-programming environment.
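A minimal run-based measurement in Python, using the standard library's process-time clock; the repetition and averaging mentioned above are kept deliberately simplistic here:

```python
import time

def cpu_time(func, *args, repeats=5):
    """Average CPU time of func(*args) over several runs, in seconds."""
    samples = []
    for _ in range(repeats):
        start = time.process_time()  # CPU time, not wall-clock time
        func(*args)
        samples.append(time.process_time() - start)
    return sum(samples) / repeats

data = list(range(100_000, 0, -1))
print(f"sorted(): {cpu_time(sorted, data):.4f} s")
```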
This sort of test also depends heavily on the selection of a particular programming language, compiler, and compiler options, so algorithms being compared must all be implemented under the same conditions. This section is concerned with the use of memory resources (registers, cache, RAM, virtual memory, secondary memory) while the algorithm is being executed. As for the time analysis above, analyze the algorithm, typically using space complexity analysis to get an estimate of the run-time memory needed as a function of the size of the input data. The result is normally expressed using Big O notation. There are up to four aspects of memory usage to consider: the memory needed to hold the algorithm's code, the memory needed for its input data, the memory needed for any output data, and the memory needed as working space during the calculation. Early electronic computers, and early home computers, had relatively small amounts of working memory. For example, the 1949 Electronic Delay Storage Automatic Calculator (EDSAC) had a maximum working memory of 1024 17-bit words, while the 1980 Sinclair ZX80 came initially with 1024 8-bit bytes of working memory. In the late 2010s, it was typical for personal computers to have between 4 and 32 GB of RAM, an increase of over 300 million times as much memory. Modern computers can have relatively large amounts of memory (possibly gigabytes), so having to squeeze an algorithm into a confined amount of memory is not the kind of problem it used to be. However, the different types of memory and their relative access speeds can be significant: an algorithm whose memory needs fit in cache memory will be much faster than an algorithm which fits in main memory, which in turn will be very much faster than an algorithm which has to resort to paging. Because of this, cache replacement policies are extremely important to high-performance computing, as are cache-aware programming and data alignment. To further complicate the issue, some systems have up to three levels of cache memory, with varying effective speeds.
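Run-time memory can also be measured empirically rather than analyzed. A small sketch using Python's standard tracemalloc module to observe the peak working-space allocation of an O(n)-space operation:

```python
import tracemalloc

def peak_memory_bytes(func, *args):
    """Peak heap allocation of func(*args), measured with tracemalloc."""
    tracemalloc.start()
    func(*args)
    _, peak = tracemalloc.get_traced_memory()
    tracemalloc.stop()
    return peak

# Building a list of n integers needs working space linear in n:
print(peak_memory_bytes(lambda n: list(range(n)), 100_000))
```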
Different systems will have different amounts of these various types of memory, so the effect of algorithm memory needs can vary greatly from one system to another. In the early days of electronic computing, if an algorithm and its data would not fit in main memory then the algorithm could not be used. Nowadays the use of virtual memory appears to provide much more memory, but at the cost of performance. Much higher speed can be obtained if an algorithm and its data fit in cache memory; in this case minimizing space will also help minimize time. This is called the principle of locality, and can be subdivided into locality of reference, spatial locality, and temporal locality. An algorithm which will not fit completely in cache memory but which exhibits locality of reference may perform reasonably well.
https://en.wikipedia.org/wiki/Algorithmic_efficiency
In computational complexity theory, Blum's speedup theorem, first stated by Manuel Blum in 1967, is a fundamental theorem about the complexity of computable functions. Each computable function has an infinite number of different program representations in a given programming language. In the theory of algorithms one often strives to find a program with the smallest complexity for a given computable function and a given complexity measure (such a program could be called optimal). Blum's speedup theorem shows that for any complexity measure, there exists a computable function such that there is no optimal program computing it, because every program has a program of lower complexity. This also rules out the idea that there is a way to assign to arbitrary functions their computational complexity, meaning the assignment to any f of the complexity of an optimal program for f. This does of course not exclude the possibility of finding the complexity of an optimal program for certain specific functions. Given a Blum complexity measure (φ, Φ) and a total computable function f with two parameters, there exists a total computable predicate g (a boolean-valued computable function) so that for every program i for g, there exists a program j for g so that for almost all x, f(x, Φ_j(x)) ≤ Φ_i(x). Here f is called the speedup function. The fact that it may be as fast-growing as desired (as long as it is computable) means that the phenomenon of always having a program of smaller complexity remains even if by "smaller" we mean "significantly smaller" (for instance, quadratically smaller, exponentially smaller).
https://en.wikipedia.org/wiki/Blum%27s_speedup_theorem
In computational complexity theory, a computational resource is a resource used by some computational models in the solution of computational problems. The simplest computational resources are computation time, the number of steps necessary to solve a problem, and memory space, the amount of storage needed while solving the problem, but many more complicated resources have been defined.[citation needed] A computational problem is generally[citation needed] defined in terms of its action on any valid input. Examples of problems might be "given an integer n, determine whether n is prime", or "given two numbers x and y, calculate the product x*y". As the inputs get bigger, the amount of computational resources needed to solve a problem will increase. Thus, the resources needed to solve a problem are described in terms of asymptotic analysis, by identifying the resources as a function of the length or size of the input. Resource usage is often partially quantified using Big O notation. Computational resources are useful because we can study which problems can be computed in a certain amount of each computational resource. In this way, we can determine whether algorithms for solving the problem are optimal and we can make statements about an algorithm's efficiency. The set of all of the computational problems that can be solved using a certain amount of a certain computational resource is a complexity class, and relationships between different complexity classes are one of the most important topics in complexity theory. The term "computational resource" is also commonly used to describe accessible computing equipment and software. See utility computing. There has been some effort to formally quantify computing capability. A bounded Turing machine has been used to model specific computations using the number of state transitions and alphabet size to quantify the computational effort required to solve a particular problem.[1][2]
https://en.wikipedia.org/wiki/Computational_resource
In computational complexity theory, Savitch's theorem, proved by Walter Savitch in 1970,[1] gives a relationship between deterministic and non-deterministic space complexity. It states that for any space-constructible function f ∈ Ω(log n),[2] NSPACE(f(n)) ⊆ DSPACE((f(n))²). In other words, if a nondeterministic Turing machine can solve a problem using f(n) space, a deterministic Turing machine can solve the same problem in the square of that space bound.[3] Although it seems that nondeterminism may produce exponential gains in time (as formalized in the unproven exponential time hypothesis), Savitch's theorem shows that it has a markedly more limited effect on space requirements.[4] The theorem can be relativized. That is, for any oracle, replacing every "Turing machine" with "oracle Turing machine" would still result in a theorem.[5] The proof relies on an algorithm for STCON, the problem of determining whether there is a path between two vertices in a directed graph, which runs in O((log n)²) space for n vertices. The basic idea of the algorithm is to solve recursively a somewhat more general problem: testing the existence of a path from a vertex s to another vertex t that uses at most k edges, for a parameter k given as input. STCON is a special case of this problem where k is set large enough to impose no restriction on the paths (for instance, equal to the total number of vertices in the graph, or any larger value).
To test for a k-edge path from s to t, a deterministic algorithm can iterate through all vertices u, and recursively search for paths of half the length from s to u and from u to t.[6] This algorithm can be expressed in pseudocode (in Python syntax). Because each recursive call halves the parameter k, the number of levels of recursion is ⌈log₂ n⌉. Each level requires O(log n) bits of storage for its function arguments and local variables: k and the vertices s, t, and u require ⌈log₂ n⌉ bits each. The total auxiliary space complexity is thus O((log n)²).[6] The input graph is considered to be represented in a separate read-only memory and does not contribute to this auxiliary space bound. Alternatively, it may be represented as an implicit graph. Although described above in the form of a program in a high-level language, the same algorithm may be implemented with the same asymptotic space bound on a Turing machine. This algorithm can be applied to an implicit graph whose vertices represent the configurations of a nondeterministic Turing machine and its tape, running within a given space bound f(n). The edges of this graph represent the nondeterministic transitions of the machine, s is set to the initial configuration of the machine, and t is set to a special vertex representing all accepting halting states. In this case, the algorithm returns true when the machine has a nondeterministic accepting path, and false otherwise.
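The halving recursion described above is the standard algorithm from Savitch's proof; a sketch of it in Python (the graph is supplied as an adjacency predicate `edge(u, v)`, a representation chosen here for illustration):

```python
# Is there a path of at most k edges from s to t? Each call halves k,
# trying every vertex u as a midpoint, as described in the text above.
def k_edge_path(edge, vertices, s, t, k):
    if k == 0:
        return s == t
    if k == 1:
        return s == t or edge(s, t)
    return any(
        k_edge_path(edge, vertices, s, u, k // 2)          # first half
        and k_edge_path(edge, vertices, u, t, k - k // 2)  # second half
        for u in vertices
    )

def stcon(edge, vertices, s, t):
    # k = number of vertices imposes no restriction on simple paths
    return k_edge_path(edge, vertices, s, t, len(vertices))

# Directed path 0 -> 1 -> 2 -> 3:
adj = {(0, 1), (1, 2), (2, 3)}
print(stcon(lambda u, v: (u, v) in adj, range(4), 0, 3))  # True
print(stcon(lambda u, v: (u, v) in adj, range(4), 3, 0))  # False
```

Note that this sketch uses Python's call stack and so does not itself run in O((log n)²) bits; it only mirrors the recursion structure whose depth-times-frame-size product gives that bound.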
The number of configurations in this graph is O(2^f(n)), from which it follows that applying the algorithm to this implicit graph uses space O(f(n)²). Thus, by deciding connectivity in a graph representing nondeterministic Turing machine configurations, one can decide membership in the language recognized by that machine, in space proportional to the square of the space used by the Turing machine.[6] Some important corollaries of the theorem include PSPACE = NPSPACE (take f(n) polynomial) and NL ⊆ DSPACE((log n)²).
https://en.wikipedia.org/wiki/Savitch%27s_theorem
In algebra, a change of rings is an operation of changing a coefficient ring to another. Given a ring homomorphism f : R → S, there are three ways to change the coefficient ring of a module; namely, for a right R-module M and a right S-module N, one can form the induced module M ⊗_R S, the coinduced module Hom_R(S, M), and the restriction of scalars N_R. They are related as adjoint functors: extension of scalars is left adjoint to restriction of scalars, which in turn is left adjoint to coinduction. This is related to Shapiro's lemma. Throughout this section, let R and S be two rings (they may or may not be commutative, or contain an identity), and let f : R → S be a homomorphism. Restriction of scalars changes S-modules into R-modules. In algebraic geometry, the term "restriction of scalars" is often used as a synonym for Weil restriction. Suppose that M is a module over S. Then it can be regarded as a module over R, where the action of R is given via m · r = m · f(r), where m · f(r) denotes the action defined by the S-module structure on M.[1] Restriction of scalars can be viewed as a functor from S-modules to R-modules. An S-homomorphism u : M → N automatically becomes an R-homomorphism between the restrictions of M and N. Indeed, if m ∈ M and r ∈ R, then u(m · r) = u(m · f(r)) = u(m) · f(r) = u(m) · r. As a functor, restriction of scalars is the right adjoint of the extension of scalars functor. If R is the ring of integers, then this is just the forgetful functor from modules to abelian groups. Extension of scalars changes R-modules into S-modules. Let f : R → S be a homomorphism between two rings, and let M be a module over R. Consider the tensor product M^S = M ⊗_R S, where S is regarded as a left R-module via f.
Since S is also a right module over itself, and the two actions commute, that is r · (s · s′) = (r · s) · s′ for r ∈ R and s, s′ ∈ S (in a more formal language, S is an (R, S)-bimodule), M^S inherits a right action of S. It is given by (m ⊗ s) · s′ = m ⊗ ss′ for m ∈ M and s, s′ ∈ S. This module is said to be obtained from M through extension of scalars. Informally, extension of scalars is "the tensor product of a ring and a module"; more formally, it is a special case of a tensor product of a bimodule and a module – the tensor product of an R-module with an (R, S)-bimodule is an S-module. One of the simplest examples is complexification, which is extension of scalars from the real numbers to the complex numbers. More generally, given any field extension K < L, one can extend scalars from K to L. In the language of fields, a module over a field is called a vector space, and thus extension of scalars converts a vector space over K to a vector space over L. This can also be done for division algebras, as is done in quaternionification (extension from the reals to the quaternions). More generally, given a homomorphism from a field or commutative ring R to a ring S, the ring S can be thought of as an associative algebra over R, and thus when one extends scalars on an R-module, the resulting module can be thought of alternatively as an S-module, or as an R-module with an algebra representation of S (as an R-algebra). For example, the result of complexifying a real vector space (R = ℝ, S = ℂ) can be interpreted either as a complex vector space (S-module) or as a real vector space with a linear complex structure (algebra representation of S as an R-module).
This generalization is useful even for the study of fields – notably, many algebraic objects associated to a field are not themselves fields, but are instead rings, such as algebras over a field, as in representation theory. Just as one can extend scalars on vector spaces, one can also extend scalars on group algebras and also on modules over group algebras, i.e., group representations. Particularly useful is relating how irreducible representations change under extension of scalars – for example, the representation of the cyclic group of order 4, given by rotation of the plane by 90°, is an irreducible 2-dimensional real representation, but on extension of scalars to the complex numbers, it splits into 2 complex representations of dimension 1. This corresponds to the fact that the characteristic polynomial of this operator, x² + 1, is irreducible of degree 2 over the reals, but factors into 2 factors of degree 1 over the complex numbers – it has no real eigenvalues, but 2 complex eigenvalues.

Extension of scalars can be interpreted as a functor from R-modules to S-modules. It sends M to M^S, as above, and an R-homomorphism u : M → N to the S-homomorphism u^S : M^S → N^S defined by u^S = u ⊗_R id_S.

Consider an R-module M and an S-module N. Given a homomorphism u ∈ Hom_R(M, N_R), define Fu : M^S → N to be the composition

M^S = M ⊗_R S → N ⊗_R S → N,

where the first map is u ⊗ id_S and the last map is n ⊗ s ↦ n · s. This Fu is an S-homomorphism, and hence F : Hom_R(M, N_R) → Hom_S(M^S, N) is well-defined, and is a homomorphism (of abelian groups).
In case both R and S have an identity, there is an inverse homomorphism G : Hom_S(M^S, N) → Hom_R(M, N_R), which is defined as follows. Let v ∈ Hom_S(M^S, N). Then Gv is the composition

M → M ⊗_R S → N,

where the first map is the canonical map m ↦ m ⊗ 1 and the second map is v. This construction establishes a one-to-one correspondence between the sets Hom_S(M^S, N) and Hom_R(M, N_R). Actually, this correspondence depends only on the homomorphism f, and so is functorial. In the language of category theory, the extension of scalars functor is left adjoint to the restriction of scalars functor.
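As a concrete instance of this adjunction, take R = ℝ, S = ℂ, and f the inclusion of the reals into the complex numbers. The correspondence then specializes to the familiar fact that ℝ-linear maps from a real vector space M into (the underlying real vector space of) a complex vector space N correspond to ℂ-linear maps out of the complexification of M:

```latex
% Extension of scalars is left adjoint to restriction of scalars,
% specialized to complexification (R = \mathbb{R}, S = \mathbb{C}):
\operatorname{Hom}_{\mathbb{C}}\bigl(M \otimes_{\mathbb{R}} \mathbb{C},\; N\bigr)
\;\cong\;
\operatorname{Hom}_{\mathbb{R}}\bigl(M,\; N_{\mathbb{R}}\bigr)
```

Here N_ℝ denotes N regarded as a real vector space by restriction of scalars.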
https://en.wikipedia.org/wiki/Change_of_rings
In number theory, a natural number is called k-almost prime if it has k prime factors.[1][2][3] More formally, a number n is k-almost prime if and only if Ω(n) = k, where Ω(n) is the total number of primes in the prime factorization of n (which can also be seen as the sum of all the primes' exponents):

Ω(n) = a₁ + a₂ + ⋯ + a_r  if  n = p₁^{a₁} p₂^{a₂} ⋯ p_r^{a_r}.

A natural number is thus prime if and only if it is 1-almost prime, and semiprime if and only if it is 2-almost prime. The set of k-almost primes is usually denoted by P_k. The smallest k-almost prime is 2^k. The first few k-almost primes are:

k = 1: 2, 3, 5, 7, 11, 13, 17, 19, …
k = 2: 4, 6, 9, 10, 14, 15, 21, 22, …
k = 3: 8, 12, 18, 20, 27, 28, 30, 42, …

The number π_k(n) of positive integers less than or equal to n with exactly k prime divisors (not necessarily distinct) is asymptotic to:[4]

π_k(n) ∼ (n / log n) · (log log n)^{k−1} / (k − 1)!,

a result of Landau.[5] See also the Hardy–Ramanujan theorem.
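The definition above can be checked directly by trial division: count the prime factors of n with multiplicity and compare the count to k. A minimal sketch (the function names are illustrative, not from any particular library):

```python
def big_omega(n):
    """Omega(n): number of prime factors of n counted with multiplicity."""
    count = 0
    d = 2
    while d * d <= n:
        while n % d == 0:  # divide out each prime factor completely
            n //= d
            count += 1
        d += 1
    if n > 1:  # whatever remains is itself prime
        count += 1
    return count

def is_k_almost_prime(n, k):
    """n is k-almost prime iff Omega(n) == k."""
    return big_omega(n) == k
```

For example, `big_omega(12)` is 3 (12 = 2 · 2 · 3), and the smallest 3-almost prime found this way is 2³ = 8, consistent with the statement that the smallest k-almost prime is 2^k.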
https://en.wikipedia.org/wiki/Almost_prime
In number theory, a Fermi–Dirac prime is a prime power whose exponent is a power of two. These numbers are named from an analogy to Fermi–Dirac statistics in physics, based on the fact that each integer has a unique representation as a product of Fermi–Dirac primes without repetition. Each element of the sequence of Fermi–Dirac primes is the smallest number that does not divide the product of all previous elements. Srinivasa Ramanujan used the Fermi–Dirac primes to find the smallest number whose number of divisors is a given power of two.

The Fermi–Dirac primes are a sequence of numbers obtained by raising a prime number to an exponent that is a power of two. That is, these are the numbers of the form p^{2^k} where p is a prime number and k is a non-negative integer. These numbers form the sequence:[1]

2, 3, 4, 5, 7, 9, 11, 13, 16, 17, 19, 23, 25, 29, 31, 37, 41, 43, 47, 49, …

They can be obtained from the prime numbers by repeated squaring, and form the smallest set of numbers that includes all of the prime numbers and is closed under squaring.[1] Another way of defining this sequence is that each element is the smallest positive integer that does not divide the product of all of the previous elements of the sequence.[2]

Analogously to the way that every positive integer has a unique factorization, its representation as a product of prime numbers (with some of these numbers repeated), every positive integer also has a unique factorization as a product of Fermi–Dirac primes, with no repetitions allowed.[3][4] For example, 2400 = 2 · 3 · 16 · 25.

The Fermi–Dirac primes are named from an analogy to particle physics. In physics, bosons are particles that obey Bose–Einstein statistics, in which it is allowed for multiple particles to be in the same state at the same time. Fermions are particles that obey Fermi–Dirac statistics, which only allow a single particle in each state.
Similarly, for the usual prime numbers, multiple copies of the same prime number can appear in the same prime factorization, but factorizations into a product of Fermi–Dirac primes only allow each Fermi–Dirac prime to appear once within the product.[1][5]

The Fermi–Dirac primes can be used to find the smallest number that has exactly n divisors,[6] in the case that n is a power of two, n = 2^k. In this case, as Srinivasa Ramanujan proved,[1][7] the smallest number with n = 2^k divisors is the product of the k smallest Fermi–Dirac primes. Its divisors are the numbers obtained by multiplying together any subset of these k Fermi–Dirac primes.[7][8][9] For instance, the smallest number with 1024 divisors is obtained by multiplying together the first ten Fermi–Dirac primes:[8] 294053760 = 2 · 3 · 4 · 5 · 7 · 9 · 11 · 13 · 16 · 17.

In the theory of infinitary divisors of Cohen,[10] the Fermi–Dirac primes are exactly the numbers whose only infinitary divisors are 1 and the number itself.[1]
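The "repeated squaring" description translates directly into code: for each prime p, emit p, p², p⁴, … until the limit is exceeded. A minimal sketch (the function names are illustrative):

```python
def fermi_dirac_primes(limit):
    """Numbers of the form p**(2**k), p prime, k >= 0, up to `limit`, sorted."""
    def is_prime(n):
        if n < 2:
            return False
        d = 2
        while d * d <= n:
            if n % d == 0:
                return False
            d += 1
        return True

    out = []
    for p in range(2, limit + 1):
        if is_prime(p):
            v = p
            while v <= limit:
                out.append(v)
                v = v * v  # repeated squaring: p, p^2, p^4, p^8, ...
    return sorted(out)
```

The first ten elements are 2, 3, 4, 5, 7, 9, 11, 13, 16, 17, and their product is 294053760, matching Ramanujan's smallest number with 1024 divisors quoted above.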
https://en.wikipedia.org/wiki/Fermi%E2%80%93Dirac_prime
In mathematics, a perfect power is a natural number that is a product of equal natural factors, or, in other words, an integer that can be expressed as a square or a higher integer power of another integer greater than one. More formally, n is a perfect power if there exist natural numbers m > 1 and k > 1 such that m^k = n. In this case, n may be called a perfect kth power. If k = 2 or k = 3, then n is called a perfect square or perfect cube, respectively. Sometimes 0 and 1 are also considered perfect powers (0^k = 0 for any k > 0, and 1^k = 1 for any k).

A sequence of perfect powers can be generated by iterating through the possible values for m and k. The first few ascending perfect powers in numerical order (showing duplicate powers) are (sequence A072103 in the OEIS):

4, 8, 9, 16, 16, 25, 27, 32, 36, 49, 64, 64, 64, 81, 81, 100, …

The sum of the reciprocals of the perfect powers (including duplicates such as 3⁴ and 9², both of which equal 81) is 1:

∑_{m=2}^∞ ∑_{k=2}^∞ 1/m^k = 1,

which can be proved as follows:

∑_{m=2}^∞ ∑_{k=2}^∞ 1/m^k = ∑_{m=2}^∞ (1/m²) · 1/(1 − 1/m) = ∑_{m=2}^∞ 1/(m(m − 1)) = ∑_{m=2}^∞ (1/(m − 1) − 1/m) = 1.

The first perfect powers without duplicates are:

4, 8, 9, 16, 25, 27, 32, 36, 49, 64, 81, 100, …

The sum of the reciprocals of the perfect powers p without duplicates is:[1]

∑_p 1/p = ∑_{k=2}^∞ μ(k)(1 − ζ(k)) ≈ 0.8745,

where μ(k) is the Möbius function and ζ(k) is the Riemann zeta function.

According to Euler, Goldbach showed (in a now-lost letter) that the sum of 1/(p − 1) over the set of perfect powers p, excluding 1 and excluding duplicates, is 1:

∑_p 1/(p − 1) = 1/3 + 1/7 + 1/8 + 1/15 + 1/24 + 1/26 + ⋯ = 1.

This is sometimes known as the Goldbach–Euler theorem.

Detecting whether or not a given natural number n is a perfect power may be accomplished in many different ways, with varying levels of complexity. One of the simplest such methods is to consider all possible values for k across each of the divisors of n, up to k ≤ log₂ n. So if the divisors of n are n₁, n₂, …, n_j, then one of the values n₁², n₂², …, n_j², n₁³, n₂³, … must be equal to n if n is indeed a perfect power. This method can immediately be simplified by instead considering only prime values of k.
This is because if n = m^k for a composite k = ap where p is prime, then this can simply be rewritten as n = m^k = m^{ap} = (m^a)^p. Because of this result, the minimal value of k must necessarily be prime.

If the full factorization of n is known, say n = p₁^{α₁} p₂^{α₂} ⋯ p_r^{α_r} where the p_i are distinct primes, then n is a perfect power if and only if gcd(α₁, α₂, …, α_r) > 1, where gcd denotes the greatest common divisor. As an example, consider n = 2⁹⁶ · 3⁶⁰ · 7²⁴. Since gcd(96, 60, 24) = 12, n is a perfect 12th power (and a perfect 6th power, 4th power, cube and square, since 6, 4, 3 and 2 divide 12).

In 2002 Romanian mathematician Preda Mihăilescu proved that the only pair of consecutive perfect powers is 2³ = 8 and 3² = 9, thus proving Catalan's conjecture.

Pillai's conjecture states that for any given positive integer k there are only a finite number of pairs of perfect powers whose difference is k. This is an unsolved problem.[2]
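The gcd-of-exponents criterion can be implemented directly: factor n by trial division, collect the exponents, and test whether their gcd exceeds 1. A minimal sketch (function name is illustrative):

```python
from math import gcd

def is_perfect_power(n):
    """True iff n = m**k for integers m > 1, k > 1 (0 and 1 excluded here)."""
    if n < 4:
        return False
    # Trial-divide to collect the exponents of the prime factorization.
    exponents = []
    m, d = n, 2
    while d * d <= m:
        if m % d == 0:
            e = 0
            while m % d == 0:
                m //= d
                e += 1
            exponents.append(e)
        d += 1
    if m > 1:  # leftover prime factor has exponent 1
        exponents.append(1)
    # n is a perfect power iff gcd of all exponents exceeds 1.
    g = 0
    for e in exponents:
        g = gcd(g, e)
    return g > 1
```

Running this over 2..100 recovers the duplicate-free perfect powers 4, 8, 9, 16, 25, 27, 32, 36, 49, 64, 81, 100, and it confirms the worked example 2⁹⁶ · 3⁶⁰ · 7²⁴.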
https://en.wikipedia.org/wiki/Perfect_power
In mathematics, a semiprime is a natural number that is the product of exactly two prime numbers. The two primes in the product may equal each other, so the semiprimes include the squares of prime numbers. Because there are infinitely many prime numbers, there are also infinitely many semiprimes. Semiprimes are also called biprimes,[1] since they include two primes, or second numbers,[2] by analogy with how "prime" means "first". The semiprimes less than 100 are:

4, 6, 9, 10, 14, 15, 21, 22, 25, 26, 33, 34, 35, 38, 39, 46, 49, 51, 55, 57, 58, 62, 65, 69, 74, 77, 82, 85, 86, 87, 91, 93, 94, 95.

Semiprimes that are not square numbers are called discrete, distinct, or squarefree semiprimes:

6, 10, 14, 15, 21, 22, 26, 33, 34, 35, 38, 39, 46, 51, 55, 57, 58, 62, 65, 69, 74, 77, 82, 85, 86, 87, 91, 93, 94, 95, …

The semiprimes are the case k = 2 of the k-almost primes, numbers with exactly k prime factors. However, some sources use "semiprime" to refer to a larger set of numbers, the numbers with at most two prime factors (including the unit (1), primes, and semiprimes).[3] These are:

1, 2, 3, 4, 5, 6, 7, 9, 10, 11, 13, 14, 15, 17, 19, 21, 22, 23, 25, 26, …

A semiprime counting formula was discovered by E. Noel and G. Panos in 2005.[4] Let π₂(n) denote the number of semiprimes less than or equal to n. Then

π₂(n) = ∑_{k=1}^{π(√n)} [ π(n/p_k) − k + 1 ],

where π(x) is the prime-counting function and p_k denotes the kth prime.[5]

Semiprime numbers have no composite numbers as factors other than themselves.[6] For example, the number 26 is semiprime and its only factors are 1, 2, 13, and 26, of which only 26 is composite.
For a squarefree semiprime n = pq (with p ≠ q), the value of Euler's totient function φ(n) (the number of positive integers less than or equal to n that are relatively prime to n) takes the simple form φ(n) = (p − 1)(q − 1) = n − (p + q) + 1. This calculation is an important part of the application of semiprimes in the RSA cryptosystem.[7] For a square semiprime n = p², the formula is again simple:[7] φ(n) = p(p − 1) = n − p.

Semiprimes are highly useful in the area of cryptography and number theory, most notably in public key cryptography, where they are used by RSA and pseudorandom number generators such as Blum Blum Shub. These methods rely on the fact that finding two large primes and multiplying them together (resulting in a semiprime) is computationally simple, whereas finding the original factors appears to be difficult. In the RSA Factoring Challenge, RSA Security offered prizes for the factoring of specific large semiprimes, and several prizes were awarded. The original RSA Factoring Challenge was issued in 1991, and was replaced in 2001 by the New RSA Factoring Challenge, which was later withdrawn in 2007.[8]

In 1974 the Arecibo message was sent with a radio signal aimed at a star cluster. It consisted of 1679 binary digits intended to be interpreted as a 23 × 73 bitmap image. The number 1679 = 23 · 73 was chosen because it is a semiprime and therefore can be arranged into a rectangular image in only two distinct ways (23 rows and 73 columns, or 73 rows and 23 columns).[9]
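The Noel–Panos counting formula above can be evaluated with nothing more than a prime sieve and the prime-counting function. A minimal sketch (function names are illustrative):

```python
def primes_up_to(n):
    """Primes <= n via the sieve of Eratosthenes."""
    sieve = [True] * (n + 1)
    sieve[0:2] = [False, False]
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = [False] * len(sieve[i * i :: i])
    return [i for i, is_p in enumerate(sieve) if is_p]

def semiprime_count(n):
    """pi_2(n): number of semiprimes <= n, via the Noel-Panos formula."""
    primes = primes_up_to(n)
    pi = lambda x: sum(1 for p in primes if p <= x)  # prime-counting function
    total = 0
    for k, p in enumerate(primes, start=1):  # k runs up to pi(sqrt(n))
        if p * p > n:
            break
        total += pi(n // p) - k + 1
    return total
```

For n = 100 the formula gives π(50) + π(33) + π(20) + π(14) − (1 + 2 + 3 + 4) + 4 = 15 + 10 + 6 + 3 = 34, matching the 34 semiprimes listed above.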
https://en.wikipedia.org/wiki/Semiprime
In information technology, a backup, or data backup, is a copy of computer data taken and stored elsewhere so that it may be used to restore the original after a data loss event. The verb form, referring to the process of doing so, is "back up", whereas the noun and adjective form is "backup".[1] Backups can be used to recover data after its loss from data deletion or corruption, or to recover data from an earlier time.[2] Backups provide a simple form of IT disaster recovery; however, not all backup systems are able to reconstitute a computer system or other complex configuration such as a computer cluster, active directory server, or database server.[3]

A backup system contains at least one copy of all data considered worth saving. The data storage requirements can be large. An information repository model may be used to provide structure to this storage. There are different types of data storage devices used for copying backups of data that is already in secondary storage onto archive files.[note 1][4] There are also different ways these devices can be arranged to provide geographic dispersion,[5] data security, and portability.

Data is selected, extracted, and manipulated for storage. The process can include methods for dealing with live data, including open files, as well as compression, encryption, and de-duplication. Additional techniques apply to enterprise client-server backup. Backup schemes may include dry runs that validate the reliability of the data being backed up. There are limitations[6] and human factors involved in any backup scheme.

A backup strategy requires an information repository, "a secondary storage space for data"[7] that aggregates backups of data "sources". The repository could be as simple as a list of all backup media (DVDs, etc.) and the dates produced, or could include a computerized index, catalog, or relational database.
The backup data needs to be stored, requiring a backup rotation scheme,[4] which is a system of backing up data to computer media that limits the number of backups of different dates retained separately, by appropriate re-use of the data storage media by overwriting of backups no longer needed. The scheme determines how and when each piece of removable storage is used for a backup operation and how long it is retained once it has backup data stored on it.

The 3-2-1 rule can aid in the backup process. It states that there should be at least 3 copies of the data, stored on 2 different types of storage media, and one copy should be kept offsite, in a remote location (this can include cloud storage). 2 or more different media should be used to eliminate data loss due to similar causes (for example, optical discs may tolerate being underwater while LTO tapes may not, and SSDs cannot fail due to head crashes or damaged spindle motors since they do not have any moving parts, unlike hard drives). An offsite copy protects against fire, theft of physical media (such as tapes or discs), and natural disasters like floods and earthquakes. Physically protected hard drives are an alternative to an offsite copy, but they have limitations, such as only being able to resist fire for a limited period of time, so an offsite copy still remains the ideal choice. Because there is no perfect storage, many backup experts recommend maintaining a second copy on a local physical device, even if the data is also backed up offsite.[8][9][10][11]

An unstructured repository may simply be a stack of tapes, DVD-Rs or external HDDs with minimal information about what was backed up and when. This method is the easiest to implement, but unlikely to achieve a high level of recoverability as it lacks automation.

A repository using a full-only backup method contains complete source data copies taken at one or more specific points in time.
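The three conditions of the 3-2-1 rule are simple enough to check mechanically. A minimal sketch, using a hypothetical in-memory model of a backup plan (the `Copy` type and field names are assumptions for illustration, not any real tool's API):

```python
from dataclasses import dataclass

@dataclass
class Copy:
    """One stored copy of the data set (hypothetical model)."""
    medium: str      # e.g. "hdd", "lto_tape", "cloud", "optical"
    offsite: bool    # kept in a remote location?

def satisfies_3_2_1(copies):
    """Check the 3-2-1 rule: >= 3 copies, >= 2 media types, >= 1 offsite."""
    return (
        len(copies) >= 3
        and len({c.medium for c in copies}) >= 2
        and any(c.offsite for c in copies)
    )
```

For example, a local HDD copy plus an LTO tape plus a cloud copy satisfies the rule; two local HDD copies alone do not.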
Copying system images, this method is frequently used by computer technicians to record known good configurations. However, imaging[12] is generally more useful as a way of deploying a standard configuration to many systems rather than as a tool for making ongoing backups of diverse systems.

An incremental backup stores data changed since a reference point in time. Duplicate copies of unchanged data are not copied. Typically a full backup of all files is made once or at infrequent intervals, serving as the reference point for an incremental repository. Subsequently, a number of incremental backups are made after successive time periods. Restores begin with the last full backup and then apply the incrementals.[13] Some backup systems[14] can create a synthetic full backup from a series of incrementals, thus providing the equivalent of frequently doing a full backup. When done to modify a single archive file, this speeds restores of recent versions of files.

Continuous Data Protection (CDP) refers to a backup that instantly saves a copy of every change made to the data. This allows restoration of data to any point in time and is the most comprehensive and advanced data protection.[15] Near-CDP backup applications—often marketed as "CDP"—automatically take incremental backups at a specific interval, for example every 15 minutes, one hour, or 24 hours. They can therefore only allow restores to an interval boundary.[15] Near-CDP backup applications use journaling and are typically based on periodic "snapshots",[16] read-only copies of the data frozen at a particular point in time.

Near-CDP (except for Apple Time Machine)[17] intent-logs every change on the host system,[18] often by saving byte- or block-level differences rather than file-level differences. This backup method differs from simple disk mirroring in that it enables a roll-back of the log and thus a restoration of old images of data.
Intent-logging allows precautions for the consistency of live data, protecting self-consistent files but requiring applications "be quiesced and made ready for backup." Near-CDP is more practicable for ordinary personal backup applications, as opposed to true CDP, which must be run in conjunction with a virtual machine[19][20] or equivalent[21] and is therefore generally used in enterprise client-server backups.

Software may create copies of individual files, such as written documents, multimedia projects, or user preferences, to prevent failed write events caused by power outages, operating system crashes, or exhausted disk space from causing data loss. A common implementation is an appended ".bak" extension to the file name.

A reverse incremental backup method stores a recent archive file "mirror" of the source data and a series of differences between the "mirror" in its current state and its previous states. A reverse incremental backup method starts with a non-image full backup. After the full backup is performed, the system periodically synchronizes the full backup with the live copy, while storing the data necessary to reconstruct older versions. This can either be done using hard links—as Apple Time Machine does, or using binary diffs.

A differential backup saves only the data that has changed since the last full backup. This means a maximum of two backups from the repository are used to restore the data. However, as time from the last full backup (and thus the accumulated changes in data) increases, so does the time to perform the differential backup. Restoring an entire system requires starting from the most recent full backup and then applying just the last differential backup.
A differential backup copies files that have been created or changed since the last full backup, regardless of whether any other differential backups have been made since, whereas an incremental backup copies files that have been created or changed since the most recent backup of any type (full or incremental). Changes in files may be detected through a more recent date/time of last modification file attribute, and/or changes in file size. Other variations of incremental backup include multi-level incrementals and block-level incrementals that compare parts of files instead of just entire files.

Regardless of the repository model that is used, the data has to be copied onto an archive file data storage medium. The medium used is also referred to as the type of backup destination.

Magnetic tape was for a long time the most commonly used medium for bulk data storage, backup, archiving, and interchange. It was previously a less expensive option, but this is no longer the case for smaller amounts of data.[22] Tape is a sequential access medium, so the rate of continuously writing or reading data can be very fast. While tape media itself has a low cost per space, tape drives are typically dozens of times as expensive as hard disk drives and optical drives. Many tape formats have been proprietary or specific to certain markets like mainframes or a particular brand of personal computer. By 2014 LTO had become the primary tape technology.[23] The other remaining viable "super" format is the IBM 3592 (also referred to as the TS11xx series). The Oracle StorageTek T10000 was discontinued in 2016.[24]

The use of hard disk storage has increased over time as it has become progressively cheaper.
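The distinction between differential and incremental selection comes down to which reference time the modification-time comparison uses. A minimal sketch using last-modification timestamps (the function name is illustrative; real backup tools also consult archive bits, sizes, or checksums):

```python
import os

def files_to_back_up(root, reference_time):
    """Select files under `root` modified after `reference_time` (epoch seconds).

    For a differential backup, pass the time of the last *full* backup;
    for an incremental backup, pass the time of the most recent backup
    of any type (full or incremental).
    """
    changed = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            if os.path.getmtime(path) > reference_time:
                changed.append(path)
    return changed
```

Because the differential reference point never advances between full backups, each differential run re-copies everything the previous one did plus anything newer, which is why differential backups grow over time while a restore needs at most two of them.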
Hard disks are usually easy to use, widely available, and can be accessed quickly.[23] However, hard disk backups are close-tolerance mechanical devices and may be more easily damaged than tapes, especially while being transported.[25] In the mid-2000s, several drive manufacturers began to produce portable drives employing ramp loading and accelerometer technology (sometimes termed a "shock sensor"),[26][27] and by 2010 the industry average in drop tests for drives with that technology showed drives remaining intact and working after a 36-inch non-operating drop onto industrial carpeting.[28] Some manufacturers also offer 'ruggedized' portable hard drives, which include a shock-absorbing case around the hard disk, and claim a range of higher drop specifications.[28][29][30] Over a period of years the stability of hard disk backups is shorter than that of tape backups.[24][31][25]

External hard disks can be connected via local interfaces like SCSI, USB, FireWire, or eSATA, or via longer-distance technologies like Ethernet, iSCSI, or Fibre Channel. Some disk-based backup systems, via Virtual Tape Libraries or otherwise, support data deduplication, which can reduce the amount of disk storage capacity consumed by daily and weekly backup data.[32][33][34]

Optical storage uses lasers to store and retrieve data. Recordable CDs, DVDs, and Blu-ray Discs are commonly used with personal computers and are generally cheap. The capacities and speeds of these discs have typically been lower than hard disks or tapes. Advances in optical media may shrink that gap in the future.[35][36]

Potential future data losses caused by gradual media degradation can be predicted by measuring the rate of correctable minor data errors, since too many consecutive correctable errors increase the risk of uncorrectable sectors.
Support for error scanning varies among optical drive vendors.[37]

Many optical disc formats are WORM type, which makes them useful for archival purposes since the data cannot be changed in any way, including by user error and by malware such as ransomware. Moreover, optical discs are not vulnerable to head crashes, magnetism, imminent water ingress or power surges; and a fault of the drive typically just halts the spinning. Optical media is modular; the storage controller is not tied to the media itself, as it is with hard drives or flash storage (→ flash memory controller), allowing the media to be removed and accessed through a different drive. However, recordable media may degrade earlier under long-term exposure to light.[38]

Some optical storage systems allow for cataloged data backups without human contact with the discs, allowing for longer data integrity. A French study in 2008 indicated that the lifespan of typically-sold CD-Rs was 2–10 years,[39] but one manufacturer later estimated the longevity of its CD-Rs with a gold-sputtered layer to be as high as 100 years.[40] Sony's proprietary Optical Disc Archive[23] could in 2016 reach a read rate of 250 MB/s.[41]

Solid-state drives (SSDs) use integrated circuit assemblies to store data. Flash memory, thumb drives, USB flash drives, CompactFlash, SmartMedia, Memory Sticks, and Secure Digital card devices are relatively expensive for their low capacity, but convenient for backing up relatively low data volumes. A solid-state drive does not contain any movable parts, making it less susceptible to physical damage, and can have huge throughput of around 500 Mbit/s up to 6 Gbit/s. Available SSDs have become more capacious and cheaper.[42][29] Flash memory backups are stable for fewer years than hard disk backups.[24]

Remote backup services or cloud backups involve service providers storing data offsite.
This has been used to protect against events such as fires, floods, or earthquakes which could destroy locally stored backups.[43] Cloud-based backup (through services like or similar to Google Drive and Microsoft OneDrive) provides a layer of data protection.[25] However, the users must trust the provider to maintain the privacy and integrity of their data, with confidentiality enhanced by the use of encryption. Because speed and availability are limited by a user's online connection,[25] users with large amounts of data may need to use cloud seeding and large-scale recovery.

Various methods can be used to manage backup media, striking a balance between accessibility, security and cost. These media management methods are not mutually exclusive and are frequently combined to meet the user's needs. Using on-line disks for staging data before it is sent to a near-line tape library is a common example.[44][45]

Online backup storage is typically the most accessible type of data storage, and can begin a restore in milliseconds. An internal hard disk or a disk array (maybe connected to a SAN) is an example of an online backup. This type of storage is convenient and speedy, but is vulnerable to being deleted or overwritten, either by accident, by malevolent action, or in the wake of a data-deleting virus payload.

Nearline storage is typically less accessible and less expensive than online storage, but still useful for backup data storage. A mechanical device is usually used to move media units from storage into a drive where the data can be read or written. Generally it has safety properties similar to on-line storage. An example is a tape library with restore times ranging from seconds to a few minutes.

Off-line storage requires some direct action to provide access to the storage media: for example, inserting a tape into a tape drive or plugging in a cable.
Because the data is not accessible via any computer except during limited periods in which it is written or read back, it is largely immune to on-line backup failure modes. Access time varies depending on whether the media are on-site or off-site.

Backup media may be sent to an off-site vault to protect against a disaster or other site-specific problem. The vault can be as simple as a system administrator's home office or as sophisticated as a disaster-hardened, temperature-controlled, high-security bunker with facilities for backup media storage. A data replica can be off-site but also on-line (e.g., an off-site RAID mirror).

A backup site or disaster recovery center is used to store data that can enable computer systems and networks to be restored and properly configured in the event of a disaster. Some organisations have their own data recovery centres, while others contract this out to a third party. Due to high costs, backing up is rarely considered the preferred method of moving data to a DR site. A more typical way would be remote disk mirroring, which keeps the DR data as up to date as possible.

A backup operation starts with selecting and extracting coherent units of data. Most data on modern computer systems is stored in discrete units, known as files. These files are organized into filesystems. Deciding what to back up at any given time involves tradeoffs. By backing up too much redundant data, the information repository will fill up too quickly. Backing up an insufficient amount of data can eventually lead to the loss of critical information.[46]

Files that are actively being updated present a challenge to back up. One way to back up live data is to temporarily quiesce them (e.g., close all files), take a "snapshot", and then resume live operations.
At this point the snapshot can be backed up through normal methods.[50] A snapshot is an instantaneous function of some filesystems that presents a copy of the filesystem as if it were frozen at a specific point in time, often by a copy-on-write mechanism. Snapshotting a file while it is being changed results in a corrupted file that is unusable. This is also the case across interrelated files, as may be found in a conventional database or in applications such as Microsoft Exchange Server.[16] The term fuzzy backup can be used to describe a backup of live data that looks like it ran correctly, but does not represent the state of the data at a single point in time.[51]

Backup options for data files that cannot be or are not quiesced include:[52]

Not all information stored on the computer is stored in files. Accurately recovering a complete system from scratch requires keeping track of this non-file data too.[57]

It is frequently useful or required to manipulate the data being backed up to optimize the backup process. These manipulations can improve backup speed, restore speed, data security, media usage and/or reduce bandwidth requirements.

Out-of-date data can be automatically deleted, but for personal backup applications—as opposed to enterprise client-server backup applications, where automated data "grooming" can be customized—the deletion[note 2][58][59] can at most[60] be globally delayed or be disabled.[61]

Various schemes can be employed to shrink the size of the source data to be stored so that it uses less storage space. Compression is frequently a built-in feature of tape drive hardware.[62]

Redundancy due to backing up similarly configured workstations can be reduced, thus storing just one copy. This technique can be applied at the file or raw block level. This potentially large reduction[62] is called deduplication. It can occur on a server before any data moves to backup media, sometimes referred to as source/client-side deduplication.
This approach also reduces the bandwidth required to send backup data to its target media. The process can also occur at the target storage device, sometimes referred to as inline or back-end deduplication.

Sometimes backups are duplicated to a second set of storage media. This can be done to rearrange the archive files to optimize restore speed, or to have a second copy at a different location or on a different storage medium—as in the disk-to-disk-to-tape capability of enterprise client-server backup.

High-capacity removable storage media such as backup tapes present a data security risk if they are lost or stolen.[63] Encrypting the data on these media can mitigate this problem; however, encryption is a CPU-intensive process that can slow down backup speeds, and the security of the encrypted backups is only as effective as the security of the key management policy.[62]

When there are many more computers to be backed up than there are destination storage devices, the ability to use a single storage device with several simultaneous backups can be useful.[64] However, cramming the scheduled backup window via "multiplexed backup" is only used for tape destinations.[64]

The process of rearranging the sets of backups in an archive file is known as refactoring. For example, if a backup system uses a single tape each day to store the incremental backups for all the protected computers, restoring one of the computers could require many tapes. Refactoring could be used to consolidate all the backups for a single computer onto a single tape, creating a "synthetic full backup". This is especially useful for backup systems that do incrementals-forever-style backups.

Sometimes backups are copied to a staging disk before being copied to tape.[64] This process is sometimes referred to as D2D2T, an acronym for disk-to-disk-to-tape.
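Block-level deduplication amounts to keying each block by a content hash and storing identical blocks only once. A minimal sketch (in-memory, with illustrative names; real systems persist the store and index and handle hash collisions and chunking policy):

```python
import hashlib

def deduplicate(blocks):
    """Store each distinct block once, keyed by its SHA-256 digest.

    Returns (store, index): `store` maps digest -> block bytes, and
    `index` records, per input block, which digest to fetch at restore time.
    """
    store = {}
    index = []
    for block in blocks:
        digest = hashlib.sha256(block).hexdigest()
        store.setdefault(digest, block)  # identical blocks stored only once
        index.append(digest)
    return store, index

def restore(store, index):
    """Reassemble the original block sequence from the deduplicated store."""
    return [store[d] for d in index]
```

Computing the digests on the client before transmission is the source/client-side variant described above; computing them at the target device is the inline/back-end variant.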
It can be useful if there is a problem matching the speed of the final destination device with the source device, as is frequently faced in network-based backup systems. It can also serve as a centralized location for applying other data manipulation techniques.
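The block-level deduplication described earlier can be sketched as a toy hash-indexed block store. This is an illustrative sketch only, not any particular backup product's implementation; the function names are invented for the example:

```python
import hashlib

def dedup_store(data: bytes, block_size: int = 4096):
    """Toy block-level deduplication: store each unique block once,
    keyed by its SHA-256 digest, plus the ordered list of digests
    needed to reconstruct the original stream."""
    store = {}    # digest -> block bytes, stored only once
    recipe = []   # ordered digests used to rebuild the stream
    for i in range(0, len(data), block_size):
        block = data[i:i + block_size]
        digest = hashlib.sha256(block).hexdigest()
        store.setdefault(digest, block)  # duplicate blocks are skipped
        recipe.append(digest)
    return store, recipe

def restore(store, recipe) -> bytes:
    """Rebuild the original stream from the stored unique blocks."""
    return b"".join(store[d] for d in recipe)
```

Backing up two similarly configured workstations through such a store keeps only one copy of each shared block, which is exactly the reduction the passage calls deduplication.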
https://en.wikipedia.org/wiki/Backup
Capability-based security is a concept in the design of secure computing systems, one of the existing security models. A capability (known in some systems as a key) is a communicable, unforgeable token of authority. It refers to a value that references an object along with an associated set of access rights. A user program on a capability-based operating system must use a capability to access an object. Capability-based security refers to the principle of designing user programs such that they directly share capabilities with each other according to the principle of least privilege, and to the operating system infrastructure necessary to make such transactions efficient and secure. Capability-based security is to be contrasted with an approach that uses traditional UNIX permissions and access control lists. Although most operating systems implement a facility which resembles capabilities, they typically do not provide enough support to allow the exchange of capabilities among possibly mutually untrusting entities to be the primary means of granting and distributing access rights throughout the system. A capability-based system, in contrast, is designed with that goal in mind. Capabilities achieve their objective of improving system security by being used in place of forgeable references. A forgeable reference (for example, a path name) identifies an object, but does not specify which access rights are appropriate for that object and the user program which holds that reference. Consequently, any attempt to access the referenced object must be validated by the operating system, based on the ambient authority of the requesting program, typically via the use of an access-control list (ACL). In a system with capabilities, by contrast, the mere fact that a user program possesses a capability entitles it to use the referenced object in accordance with the rights that are specified by that capability.
In theory, a system with capabilities removes the need for any access control list or similar mechanism by giving all entities all and only the capabilities they will actually need. A capability is typically implemented as a privileged data structure that consists of a section that specifies access rights, and a section that uniquely identifies the object to be accessed. The user does not access the data structure or object directly, but instead via a handle. In practice, it is used much like a file descriptor in a traditional operating system (a traditional handle), but to access every object on the system. Capabilities are typically stored by the operating system in a list, with some mechanism in place to prevent the program from directly modifying the contents of the capability (so as to forge access rights or change the object it points to). Some systems have also been based on capability-based addressing (hardware support for capabilities), such as Plessey System 250. Programs possessing capabilities can perform functions on them, such as passing them on to other programs, converting them to a less-privileged version, or deleting them. The operating system must ensure that only specific operations can occur to the capabilities in the system, in order to maintain the integrity of the security policy. Capabilities as discussed in this article should not be confused with Portable Operating System Interface (POSIX) 1e/2c "capabilities". The latter are coarse-grained privileges that cannot be transferred between processes. A capability is defined to be a protected object reference which, by virtue of its possession by a user process, grants that process the capability (hence the name) to interact with an object in certain ways. Those ways might include reading data associated with an object, modifying the object, executing the data in the object as a process, and other conceivable access rights.
The capability logically consists of a reference that uniquely identifies a particular object and a set of one or more of these rights. Suppose that, in a user process's memory space, there exists the following string: Although this identifies a unique object on the system, it does not specify access rights and hence is not a capability. Suppose there is instead the following pair of values: This pair identifies an object along with a set of access rights. The pair, however, is still not a capability because the user process's possession of these values says nothing about whether that access would actually be legitimate. Now suppose that the user program successfully executes the following statement: The variable fd now contains the index of a file descriptor in the process's file descriptor table. This file descriptor is a capability. Its existence in the process's file descriptor table is sufficient to show that the process does indeed have legitimate access to the object. A key feature of this arrangement is that the file descriptor table is in kernel memory and cannot be directly manipulated by the user program. In traditional operating systems, programs often communicate with each other and with storage using references like those in the first two examples. Path names are often passed as command-line parameters, sent via sockets, and stored on disk. These references are not capabilities, and must be validated before they can be used. In these systems, a central question is "on whose authority is a given reference to be evaluated?" This becomes a critical issue especially for processes which must act on behalf of two different authority-bearing entities. They become susceptible to a programming error known as the confused deputy problem, very frequently resulting in a security hole.
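The three illustrations the passage alludes to (a bare path string, a path/rights pair, and a successful open yielding a file descriptor) can be sketched in Python. The file name and flags here are invented for the example, and a throwaway temp file stands in for a real system object:

```python
import os
import tempfile

# Create a throwaway object to reference.
tmpdir = tempfile.mkdtemp()
path = os.path.join(tmpdir, "example.txt")
with open(path, "w") as f:
    f.write("hello")

# 1. A bare path string identifies the object but is not a capability.
reference = path

# 2. A (path, rights) pair still proves nothing about legitimacy.
request = (path, os.O_RDONLY)

# 3. Only a successful open() yields a file descriptor: an index into a
#    kernel-maintained table that the process cannot forge.
fd = os.open(*request)
data = os.read(fd, 5)
os.close(fd)

# Clean up the throwaway object.
os.unlink(path)
os.rmdir(tmpdir)
```

The integer in fd is meaningful only because the kernel's per-process file descriptor table vouches for it, which is what makes it capability-like.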
In a capability-based system, the capabilities themselves are passed between processes and storage using a mechanism that is known by the operating system to maintain the integrity of those capabilities. One novel approach to solving this problem involves the use of an orthogonally persistent operating system. In such a system, there is no need for entities to be discarded and their capabilities invalidated, and hence no need for an ACL-like mechanism to restore those capabilities at a later time. The operating system maintains the integrity and security of the capabilities contained within all storage, both volatile and nonvolatile, at all times; in part by performing all serialization tasks by itself, rather than requiring user programs to do so, as is the case in most operating systems. Because user programs are relieved of this responsibility, there is no need to trust them to reproduce only legal capabilities, nor to validate requests for access using an access control mechanism. An example implementation is the Flex machine from the early 1980s. Portable Operating System Interface (POSIX) draft 1003.1e specifies a concept of permissions called "capabilities". However, POSIX capabilities differ from capabilities in this article. A POSIX capability is not associated with any object; a process having the CAP_NET_BIND_SERVICE capability can listen on any TCP port under 1024. This system is found in Linux.[1] In contrast, Capsicum hybridizes a true capability-system model with a Unix design and POSIX API. Capsicum capabilities are a refined form of file descriptor, a delegable right between processes, and additional object types beyond classic POSIX, such as processes, can be referenced via capabilities. In Capsicum capability mode, processes are unable to use global namespaces (such as the filesystem namespace) to look up objects, and must instead inherit or be delegated them.
This system is found natively in FreeBSD, but patches are available for other systems.[2] Notable research and commercial systems employing capability-based security include the following:
https://en.wikipedia.org/wiki/Capability-based_security
Data-centric security is an approach to security that emphasizes the dependability of the data itself rather than the security of networks, servers, or applications. Data-centric security is evolving rapidly as enterprises increasingly rely on digital information to run their business and big data projects become mainstream.[1][2][3] It involves the separation of data and digital rights management that assigns encrypted files to pre-defined access control lists, ensuring access rights to critical and confidential data are aligned with documented business needs and job requirements that are attached to user identities.[4] Data-centric security also allows organizations to overcome the disconnect between IT security technology and the objectives of business strategy by relating security services directly to the data they implicitly protect; a relationship that is often obscured by the presentation of security as an end in itself.[5] Common processes in a data-centric security model include:[6] From a technical point of view, information (data)-centric security relies on the implementation of the following:[7] Data access control is the selective restriction of access to data. Accessing may mean viewing, editing, or using. Defining proper access controls requires mapping out the information: where it resides, how important it is, who it is important to, and how sensitive the data is, and then designing appropriate controls.[8] Encryption is a proven data-centric technique to address the risk of data theft in smartphones, laptops, desktops and even servers, including the cloud. One limitation is that encryption is not always effective once a network intrusion has occurred and cybercriminals operate with stolen valid user credentials.[9] Data masking is the process of hiding specific data within a database table or cell to ensure that data security is maintained and that sensitive information is not exposed to unauthorized personnel.
This may include masking the data from users, developers, third-party and outsourcing vendors, etc. Data masking can be achieved in multiple ways: by duplicating data to eliminate the subset of the data that needs to be hidden, or by obscuring the data dynamically as users perform requests.[10] Monitoring all activity at the data layer is a key component of a data-centric security strategy. It provides visibility into the types of actions that users and tools have requested, and been authorized to perform, on specific data elements. Continuous monitoring at the data layer, combined with precise access control, can contribute significantly to the real-time detection of data breaches, limit the damage inflicted by a breach, and even stop the intrusion if proper controls are in place. A 2016 survey[11] shows that most organizations still do not assess database activity continuously and lack the capability to identify database breaches in a timely fashion. A privacy-enhancing technology (PET) is a method of protecting data. PETs allow online users to protect the privacy of their personally identifiable information (PII) provided to and handled by services or applications. PETs use techniques to minimize possession of personal data without losing the functionality of an information system. Cloud computing is an evolving paradigm with tremendous momentum, but its unique aspects exacerbate security and privacy challenges. Heterogeneity and diversity of cloud services and environments demand fine-grained access control policies and services that should be flexible enough to capture dynamic, context- or attribute-based access requirements and data protection.[12] Data-centric security measures can also help protect against data leakage and assist in the life cycle management of information.[13]
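The second masking approach above, obscuring data dynamically as requests are served, can be sketched as follows. The function names and the "last four characters visible" policy are illustrative assumptions, not a standard:

```python
def mask(value: str, visible: int = 4, fill: str = "*") -> str:
    """Obscure all but the last `visible` characters, a common
    dynamic-masking policy for card or account numbers."""
    if len(value) <= visible:
        return fill * len(value)
    return fill * (len(value) - visible) + value[-visible:]

def masked_view(row: dict, sensitive: set) -> dict:
    """Return the row as an unauthorized requester would see it,
    masking only the fields marked sensitive."""
    return {k: mask(v) if k in sensitive else v for k, v in row.items()}
```

An authorized user would receive the row unchanged, while everyone else sees only the masked view, so the underlying table never needs a second, scrubbed copy.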
https://en.wikipedia.org/wiki/Data-centric_security
Enterprise information security architecture is the practice of designing, constructing and maintaining information security strategies and policies in enterprise organisations. A subset of enterprise architecture, information security frameworks are often given their own dedicated resources in larger organisations and are therefore significantly more complex and robust than in small and medium-sized enterprises. Enterprise information security architecture is becoming a common practice within financial institutions around the globe. The primary purpose of creating an enterprise information security architecture is to ensure that business strategy and IT security are aligned.[1] Enterprise information security architecture was first formally positioned by Gartner in their whitepaper called "Incorporating Security into the Enterprise Architecture Process".[2] Whilst security architecture frameworks are often custom designed in enterprise organisations, several models are commonly used and adapted to the individual requirements of the organisation. Commonly used frameworks include:
https://en.wikipedia.org/wiki/Enterprise_information_security_architecture
The Gordon–Loeb model is an economic model that analyzes the optimal level of investment in information security. The benefits of investing in cybersecurity stem from reducing the costs associated with cyber breaches. The Gordon–Loeb model provides a framework for determining how much to invest in cybersecurity, using a cost-benefit approach. The model includes the following key components: Gordon and Loeb demonstrated that the optimal level of security investment, z*, does not exceed 37% of the expected loss from a breach. Specifically, z*(v) ≤ (1/e)vL. The model was first introduced by Lawrence A. Gordon and Martin P. Loeb in a 2002 paper published in ACM Transactions on Information and System Security, titled "The Economics of Information Security Investment".[1] It was reprinted in the 2004 book Economics of Information Security.[2] Both authors are professors at the University of Maryland's Robert H. Smith School of Business. The model is widely regarded as one of the leading analytical tools in cybersecurity economics.[3] It has been extensively referenced in academic and industry literature.[4][5][6] It has also been tested in various contexts by researchers such as Marc Lelarge[7] and Yuliy Baryshnikov.[8] The model has also been covered by mainstream media, including The Wall Street Journal[9] and The Financial Times.[10] Subsequent research has critiqued the model's assumptions, suggesting that some security breach functions may require fixing no less than 1/2 the expected loss, challenging the universality of the 1/e factor. Alternative formulations even propose that some loss functions may justify investment at the full estimated loss.[11]
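The z*(v) ≤ (1/e)vL bound can be checked numerically. The sketch below assumes one of the breach-probability function classes from the Gordon–Loeb paper, S(z, v) = v/(αz + 1)^β, and the parameter values are purely illustrative:

```python
import math

def optimal_investment(v: float, L: float,
                       alpha: float = 1e-5, beta: float = 1.0) -> float:
    """Optimal security investment z* for the breach function
    S(z, v) = v / (alpha*z + 1)**beta, obtained by maximizing the
    expected net benefit [v - S(z, v)] * L - z.  Setting the derivative
    to zero gives (alpha*z + 1)**(beta + 1) = v*alpha*beta*L."""
    z = ((v * alpha * beta * L) ** (1.0 / (beta + 1)) - 1.0) / alpha
    return max(z, 0.0)  # never invest a negative amount

v, L = 0.6, 1_000_000            # vulnerability and potential loss
z_star = optimal_investment(v, L)
bound = (1 / math.e) * v * L     # the 37% ceiling: z* <= (1/e) * v * L
assert z_star <= bound
```

With these numbers the optimal spend is well under the (1/e)vL ceiling, illustrating why investing the full expected loss is never optimal under the model's assumptions.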
https://en.wikipedia.org/wiki/Gordon%E2%80%93Loeb_model
Identity-based security is a type of security that focuses on access to digital information or services based on the authenticated identity of an entity.[1] It ensures that the users and services of these digital resources are entitled to what they receive. The most common form of identity-based security involves the login of an account with a username and password. However, recent technology has evolved into fingerprinting or facial recognition.[2] While most forms of identity-based security are secure and reliable, none of them are perfect and each contains its own flaws and issues.[3] The earliest forms of identity-based security were introduced in the 1960s by computer scientist Fernando Corbató.[4] During this time, Corbató invented computer passwords to prevent users from going through other people's files, a problem evident in his Compatible Time-Sharing System (C.T.S.S.), which allowed multiple users access to a computer concurrently.[5] Fingerprinting, however, although not digital when first introduced, dates back even further, to the 2nd and 3rd centuries, with King Hammurabi sealing contracts through his fingerprints in ancient Babylon.[6] Evidence of fingerprinting was also discovered in ancient China as a method of identification in official courts and documents. It was then introduced in the U.S. during the early 20th century through prison systems as a method of identification.[7] Facial recognition, on the other hand, was developed in the 1960s, funded by American intelligence agencies and the military.[8] The most common form of identity-based security is password authentication involving the login of an online account. Most of the largest digital corporations rely on this form of security, such as Facebook, Google, and Amazon. Account logins are easy to register, difficult to compromise, and offer a simple solution to identity-based digital services. Fingerprint biometric authentication is another type of identity-based security.
It is considered to be one of the most secure forms of identification due to its reliability and accessibility, in addition to being extremely hard to fake. Fingerprints are also unique for every person, lasting a lifetime without significant change. Currently, fingerprint biometric authentication is most commonly used in police stations, security industries, as well as smartphones. Facial recognition operates by first capturing an image of the face. Then, a computer algorithm determines the distinctiveness of the face, including but not limited to eye location, shape of chin, or distance from the nose. The algorithm then converts this information into a database, with each set of data having enough detail to distinguish one face from another.[9] A problem with password authentication is the tendency for consumers to forget their passwords. On average, an individual is registered to 25 online accounts requiring a password, and most individuals vary passwords for each account.[10] According to a study by Mastercard and the University of Oxford, "about a third of online purchases are abandoned at checkout because consumers cannot remember their passwords."[11] If the consumer does forget their password, they will usually have to request a password reset sent to their linked email account, further delaying the purchasing process. According to an article published by Phys Org, 18.75% of consumers abandon checkout due to password reset issues.[12] When individuals set a uniform password across all online platforms, the login process becomes much simpler and passwords are harder to forget.
However, doing so introduces another issue: a security breach in one account will lead to similar breaches in all remaining accounts, jeopardizing their online security.[13] This makes the problem of remembering all passwords much harder to solve.[citation needed] While fingerprinting is generally considered to be secure and reliable, the physical condition of one's finger during the scan can drastically affect its results. For example, physical injuries, differing displacement, and skin conditions can all lead to faulty and unreliable biometric information that may deny one's authorization.[citation needed] Another issue with fingerprinting is known as the biometric sensor attack. In such an attack, a fake finger or a print of the finger is used in place of the real one to fool the sensors and grant authentication to unauthorized personnel.[14] Facial recognition relies on the face of an individual to identify and grant access to products, services, or information. However, it can be unreliable due to limitations in technology (lighting, image resolution) as well as changes in facial structures over time. There are two types of failure for facial recognition tests.[15] The first is a false positive, where the database matches the image with a data set, but not the data set of the actual user's image. The other type of failure is a false negative, where the database fails to recognize the face of the correct user. Both types of failure involve trade-offs between accessibility and security, which make the percentage of each type of error significant. For instance, facial recognition on a smartphone would much rather produce false negatives than false positives, since it is preferable to take several tries logging in rather than to randomly grant a stranger access to your phone. While in ideal conditions, with perfect lighting, positioning, and camera placement, facial recognition technology can be as accurate as 99.97%.
However, such conditions are extremely rare and therefore unrealistic. In a study conducted by the National Institute of Standards and Technology (NIST), video-recorded facial recognition accuracy ranged from 94.4% to 36%, depending on camera placement as well as the nature of the setting.[16] Aside from the technical deficiencies of facial recognition, racial bias has also emerged as a controversial subject. A federal study in 2019 concluded that facial recognition systems falsely identified Black and Asian faces 10 to 100 times more often than White faces.[17]
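The trade-off between the two failure types can be sketched with a similarity threshold; every score below is an invented illustration, not measured data:

```python
def classify(similarity: float, threshold: float) -> bool:
    """Accept the face if its similarity to the enrolled template
    meets the threshold."""
    return similarity >= threshold

# Hypothetical match scores against one enrolled user's template.
genuine = [0.91, 0.88, 0.97, 0.70]   # same person, varied conditions
impostor = [0.30, 0.55, 0.62]        # different people

def error_rates(threshold: float):
    """Return (false-positive rate, false-negative rate) at a threshold."""
    fp = sum(classify(s, threshold) for s in impostor) / len(impostor)
    fn = sum(not classify(s, threshold) for s in genuine) / len(genuine)
    return fp, fn
```

Raising the threshold trades false positives for false negatives: at 0.8 no impostor is accepted but one genuine attempt is rejected, which is the direction a phone-unlock system prefers.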
https://en.wikipedia.org/wiki/Identity-based_security
An information infrastructure is defined by Ole Hanseth (2002) as "a shared, evolving, open, standardized, and heterogeneous installed base"[1] and by Pironti (2006) as all of the people, processes, procedures, tools, facilities, and technology which support the creation, use, transport, storage, and destruction of information.[2] The notion of information infrastructures, introduced in the 1990s and refined during the following decade, has proven quite fruitful to the information systems (IS) field. It changed the perspective from organizations to networks and from systems to infrastructure, allowing for a global and emergent perspective on information systems. Information infrastructure is a technical structure of an organizational form, an analytical perspective or a semantic network. The concept of information infrastructure (II) was introduced in the early 1990s, first as a political initiative (Gore, 1993 & Bangemann, 1994), later as a more specific concept in IS research. For the IS research community, an important inspiration was Hughes' (1983) accounts of large technical systems, analyzed as socio-technical power structures (Bygstad, 2008).[3] Information infrastructures are typically different from the previous generations of "large technological systems" because these digital sociotechnical systems are considered generative, meaning they allow new users to connect with or even appropriate the system.[4] Information infrastructure, as a theory, has been used to frame a number of extensive case studies (Star and Ruhleder 1996; Ciborra 2000; Hanseth and Ciborra 2007), and in particular to develop an alternative approach to IS design: "Infrastructures should rather be built by establishing working local solutions supporting local practices which subsequently are linked together rather than by defining universal standards and subsequently implementing them" (Ciborra and Hanseth 1998).
It has later been developed into a full design theory, focusing on the growth of an installed base (Hanseth and Lyytinen 2008). Information infrastructures include the Internet, health systems and corporate systems.[5] It is also consistent to include innovations such as Facebook, LinkedIn and MySpace as excellent examples (Bygstad, 2008). Bowker has described several key terms and concepts that are enormously helpful for analyzing information infrastructure: imbrication, bootstrapping, figure/ground, and a short discussion of infrastructural inversion. "Imbrication" is an analytic concept that helps to ask questions about historical data. "Bootstrapping" is the idea that infrastructure must already exist in order to exist (2011). "Technological and non-technological elements that are linked" (Hanseth and Monteiro 1996). "Information infrastructures can, as formative contexts, shape not only the work routines, but also the ways people look at practices, consider them 'natural' and give them their overarching character of necessity. Infrastructure becomes an essential factor shaping the taken-for-grantedness of organizational practices" (Ciborra and Hanseth 1998). "The technological and human components, networks, systems, and processes that contribute to the functioning of the health information system" (Braa et al. 2007). "The set of organizational practices, technical infrastructure and social norms that collectively provide for the smooth operation of scientific work at a distance" (Edwards et al. 2007). "A shared, evolving, heterogeneous installed base of IT capabilities developed on open and standardized interfaces" (Hanseth and Lyytinen 2008). According to Star and Ruhleder, there are 8 dimensions of information infrastructures. Presidential Chair and Professor of Information Studies at the University of California, Los Angeles, Christine L.
Borgman argues information infrastructures, like all infrastructures, are "subject to public policy".[7] In the United States, public policy defines information infrastructures as the "physical and cyber-based systems essential to the minimum operations of the economy and government" and connected by information technologies.[7] Borgman says governments, businesses, communities, and individuals can work together to create a global information infrastructure which links "the world's telecommunication and computer networks together" and would enable the transmission of "every conceivable information and communication application."[7] Currently, the Internet is the default global information infrastructure.[8] The Asia-Pacific Economic Cooperation Telecommunications and Information Working Group (TEL) Program of Action for Information and Communications Infrastructure.[9] Association of South East Asian Nations, e-ASEAN Framework Agreement of 2000.[9] National Information Infrastructure Act of 1993, National Information Infrastructure (NII).
The National Research Council established CA*net in 1989, and the network connecting "all provincial nodes" was operational in 1990.[10] The Canadian Network for the Advancement of Research, Industry and Education (CANARIE) was established in 1992, and CA*net was upgraded to a T1 connection in 1993 and T3 in 1995.[10] By 2000, "the commercial basis for Canada's information infrastructure" was established, and the government ended its role in the project.[10] In 1994, the European Union proposed the European Information Infrastructure.[7] The European Information Infrastructure has evolved further thanks to the Martin Bangemann report and the eEurope 2003+, eEurope 2005 and i2010 initiatives.[11] In 1995, American Vice President Al Gore asked USAID to help improve Africa's connection to the global information infrastructure.[12] The USAID Leland Initiative (LI) was designed from June to September 1995, and implemented on 29 September 1995.[12] The Initiative was "a five-year $15 million US Government effort to support sustainable development" by bringing "full Internet connectivity" to approximately 20 African nations.[13] The initiative had three strategic objectives:
https://en.wikipedia.org/wiki/Information_infrastructure
Information privacy is the relationship between the collection and dissemination of data, technology, the public expectation of privacy, contextual information norms, and the legal and political issues surrounding them.[1] It is also known as data privacy[2] or data protection. Various types of personal information often come under privacy concerns. This describes the ability to control what information one reveals about oneself over cable television, and who can access that information. For example, third parties can track IP TV programs someone has watched at any given time. "The addition of any information in a broadcasting stream is not required for an audience rating survey, additional devices are not requested to be installed in the houses of viewers or listeners, and without the necessity of their cooperations, audience ratings can be automatically performed in real-time."[3] In the United Kingdom in 2012, the Education Secretary Michael Gove described the National Pupil Database as a "rich dataset" whose value could be "maximised" by making it more openly accessible, including to private companies. Kelly Fiveash of The Register said that this could mean "a child's school life including exam results, attendance, teacher assessments and even characteristics" could be available, with third-party organizations being responsible for anonymizing any publications themselves, rather than the data being anonymized by the government before being handed over. An example of a data request that Gove indicated had been rejected in the past, but might be possible under an improved version of privacy regulations, was for "analysis on sexual exploitation".[4] Information about a person's financial transactions, including the amount of assets, positions held in stocks or funds, outstanding debts, and purchases can be sensitive. If criminals gain access to information such as a person's accounts or credit card numbers, that person could become the victim of fraud or identity theft.
Information about a person's purchases can reveal a great deal about that person's history, such as places they have visited, whom they have contact with, products they have used, their activities and habits, or medications they have used. In some cases, corporations may use this information to target individuals with marketing customized towards those individuals' personal preferences, of which that person may or may not approve.[4] As heterogeneous information systems with differing privacy rules are interconnected and information is shared, policy appliances will be required to reconcile, enforce, and monitor an increasing amount of privacy policy rules (and laws). There are two categories of technology to address privacy protection in commercial IT systems: communication and enforcement. Computer privacy can be improved through individualization. Currently security messages are designed for the "average user", i.e. the same message for everyone. Researchers have posited that individualized messages and security "nudges", crafted based on users' individual differences and personality traits, can be used for further improvements in each person's compliance with computer security and privacy.[5] Privacy can also be improved through data encryption: by converting data into a non-readable format, encryption prevents unauthorized access. At present, common encryption technologies include AES and RSA. Using data encryption ensures that only users with decryption keys can access the data.[6] The ability to control the information one reveals about oneself over the internet, and who can access that information, has become a growing concern. These concerns include whether email can be stored or read by third parties without consent, or whether third parties can continue to track the websites that someone visited. Another concern is whether websites one visits can collect, store, and possibly share personally identifiable information about users.
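The point made above about encryption, that only holders of the decryption key can read the data, can be illustrated with a deliberately toy stream cipher (XOR against a hash-derived keystream). This is not AES or RSA and must never be used for real data; it only demonstrates the key-dependence the passage describes:

```python
import hashlib

def keystream(key: bytes, length: int) -> bytes:
    """Derive a pseudo-random keystream from the key (toy counter mode)."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def toy_encrypt(key: bytes, plaintext: bytes) -> bytes:
    """XOR the plaintext with the keystream; XOR is its own inverse,
    so decryption is the same operation with the same key."""
    return bytes(a ^ b for a, b in
                 zip(plaintext, keystream(key, len(plaintext))))

toy_decrypt = toy_encrypt
```

Decrypting with the wrong key yields only garbage, which is the property that makes stolen ciphertext useless without the key (though, as the passage notes, stolen valid credentials bypass encryption entirely).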
The advent of various search engines and the use of data mining created a capability for data about individuals to be collected and combined from a wide variety of sources very easily.[7][8][9] AI facilitated creating inferential information about individuals and groups based on such enormous amounts of collected data, transforming the information economy.[10] The FTC has provided a set of guidelines that represent widely accepted concepts concerning fair information practices in an electronic marketplace, called the Fair Information Practice Principles. But these have been critiqued for their insufficiency in the context of AI-enabled inferential information.[10] On the internet many users give away a lot of information about themselves: unencrypted emails can be read by the administrators of an e-mail server if the connection is not encrypted (no HTTPS), and also the internet service provider and other parties sniffing the network traffic of that connection are able to know the contents. The same applies to any kind of traffic generated on the Internet, including web browsing, instant messaging, and others. In order not to give away too much personal information, emails can be encrypted, and browsing of webpages as well as other online activities can be done anonymously via anonymizers, or by open source distributed anonymizers, so-called mix networks. Nym[11] and I2P[12] are examples of well-known mix nets. Email is not the only internet content with privacy concerns. In an age where increasing amounts of information are online, social networking sites pose additional privacy challenges. People may be tagged in photos or have valuable information exposed about themselves either by choice or unexpectedly by others, referred to as participatory surveillance. Data about location can also be accidentally published, for example, when someone posts a picture with a store as a background. Caution should be exercised when posting information online.
Social networks vary in what they allow users to make private and what remains publicly accessible.[13] Without strong security settings in place and careful attention to what remains public, a person can be profiled by searching for and collecting disparate pieces of information, leading to cases of cyberstalking[14] or reputation damage.[15] Cookies are used on websites to let the site retrieve some information from the user's browser, but sites usually do not state what data is being retrieved.[16] In 2018, the General Data Protection Regulation (GDPR) came into force, requiring websites to visibly disclose their information privacy practices to consumers through what are referred to as cookie notices.[16] This was intended to give consumers the choice of what information about their behavior they consent to letting websites track; however, its effectiveness is controversial.[16] Some websites may engage in deceptive practices, such as placing cookie notices in places on the page that are not visible, or only notifying consumers that their information is being tracked without allowing them to change their privacy settings.[16] Apps like Instagram and Facebook collect user data for a personalized app experience; however, they also track user activity on other apps, which jeopardizes users' privacy and data.
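The notice-and-choice regime described above can be caricatured in a few lines of Python: tracking identifiers are set only when the user has opted in. The cookie names and values below are hypothetical, not any real site's scheme.

```python
# Hypothetical sketch: set tracking cookies only with recorded consent,
# in the spirit of the cookie-notice rules described above.

def cookies_for(consent):
    cookies = {"session": "abc123"}        # strictly necessary, always set
    if consent.get("analytics"):
        cookies["analytics_id"] = "u-789"  # behavioural tracking: opt-in only
    return cookies

print(cookies_for({"analytics": False}))   # {'session': 'abc123'}
print(cookies_for({"analytics": True}))
```

The deceptive patterns the text mentions amount to making the `consent` input hard to change while still collecting the data.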
By controlling how visible these cookie notices are, companies can discreetly collect data, giving them more power over consumers.[16] As location tracking capabilities of mobile devices are advancing (location-based services), problems related to user privacy arise.Location datais among the most sensitive data currently being collected.[17]A list of potentially sensitive professional and personal information that could be inferred about an individual knowing only their mobility trace was published in 2009 by theElectronic Frontier Foundation.[18]These include the movements of a competitor sales force, attendance of a particular church or an individual's presence in a motel, or at an abortion clinic. A recent MIT study[19][20]by de Montjoye et al. showed that four spatio-temporal points, approximate places and times, are enough to uniquely identify 95% of 1.5 million people in a mobility database. The study further shows that these constraints hold even when the resolution of the dataset is low. Therefore, even coarse or blurred datasets provide little anonymity to the person. People may not wish for their medical records to be revealed to others due to the confidentiality and sensitivity of what the information could reveal about their health. For example, they might be concerned that it might affect their insurance coverage or employment. Or, it may be because they would not wish for others to know about any medical or psychological conditions or treatments that would bring embarrassment upon themselves. 
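The de Montjoye result above — that a handful of spatio-temporal points can single a person out of a large database — can be illustrated with a small sketch over synthetic traces. All names and (place, hour) points below are hypothetical stand-ins for real mobility data.

```python
# Sketch: how often do a few (place, hour) points single out one person?
# Synthetic traces stand in for real mobility data (hypothetical values).

from itertools import combinations

traces = {
    "alice": {("cafe", 9), ("office", 10), ("gym", 18), ("home", 22)},
    "bob":   {("cafe", 9), ("office", 10), ("bar", 19), ("home", 23)},
    "carol": {("bakery", 8), ("office", 10), ("gym", 18), ("home", 22)},
}

def is_unique(person, points):
    """True if `points` match this person's trace and nobody else's."""
    matches = [p for p, t in traces.items() if points <= t]
    return matches == [person]

def fraction_identified(k):
    """Fraction of people uniquely identified by some k of their points."""
    hits = 0
    for person, trace in traces.items():
        if any(is_unique(person, set(c)) for c in combinations(trace, k)):
            hits += 1
    return hits / len(traces)

print(fraction_identified(2))
```

Even in this tiny population, two well-chosen points pin down each person, which is the intuition behind the study's finding that coarse datasets provide little anonymity.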
Revealing medical data could also reveal other details about one's personal life.[21]There are three major categories of medical privacy: informational (the degree of control over personal information), physical (the degree of physical inaccessibility to others), and psychological (the extent to which the doctor respects patients' cultural beliefs, inner thoughts, values, feelings, and religious practices and allows them to make personal decisions).[22]Physicians and psychiatrists in many cultures and countries have standards fordoctor–patient relationships, which include maintaining confidentiality. In some cases, thephysician–patient privilegeis legally protected. These practices are in place to protect the dignity of patients, and to ensure that patients feel free to reveal complete and accurate information required for them to receive the correct treatment.[23]To view the United States' laws on governing privacy of private health information, seeHIPAAand theHITECH Act. The Australian law is the Privacy Act 1988 Australia as well as state-based health records legislation. Political privacyhas been a concern sincevoting systemsemerged in ancient times. Thesecret ballotis the simplest and most widespread measure to ensure that political views are not known to anyone other than the voters themselves—it is nearly universal in moderndemocracyand considered to be a basic right ofcitizenship. In fact, even where other rights ofprivacydo not exist, this type of privacy very often does. 
There are several forms of voting fraud or privacy violations possible with the use of digital voting machines.[24] The legal protection of the right to privacy in general – and of data privacy in particular – varies greatly around the world.[25] Because laws and regulations related to privacy and data protection are constantly changing, it is seen as important to keep abreast of any changes in the law and to continually reassess compliance with data privacy and security regulations.[26] Within academia, Institutional Review Boards function to assure that adequate measures are taken to ensure both the privacy and confidentiality of human subjects in research.[27] Privacy concerns exist wherever personally identifiable information or other sensitive information is collected, stored, used, and finally destroyed or deleted – in digital form or otherwise. Improper or non-existent disclosure control can be the root cause of privacy issues. Informed consent mechanisms, including dynamic consent, are important in communicating to data subjects the different uses of their personally identifiable information. Data privacy issues may arise in response to information from a wide range of sources, such as:[28] Data protection laws across the globe aim to secure personal information and safeguard individual privacy in a digital era. The European Union's General Data Protection Regulation (GDPR) sets a high benchmark, emphasizing consent, transparency, and robust accountability, and imposing strict penalties. Many countries adopt similar principles, mandating that organizations implement effective security measures, respect user rights, and notify breaches. In regions such as North America, Asia, and Oceania, data protection frameworks vary from sector-specific regulations to comprehensive legislation. Globally, these laws balance innovation with privacy, ensuring that personal data remains appropriately accessible and ethically managed while mitigating misuse and cyber threats.
TheUnited States Department of Commercecreated theInternational Safe Harbor Privacy Principlescertification program in response to the1995 Directive on Data Protection(Directive 95/46/EC) of the European Commission.[29]Both the United States and the European Union officially state that they are committed to upholding information privacy of individuals, but the former has caused friction between the two by failing to meet the standards of the EU's stricter laws on personal data. The negotiation of the Safe Harbor program was, in part, to address this long-running issue.[30]Directive 95/46/EC declares in Chapter IV Article 25 that personal data may only be transferred from the countries in theEuropean Economic Areato countries which provideadequateprivacy protection. Historically, establishing adequacy required the creation of national laws broadly equivalent to those implemented by Directive 95/46/EU. Although there are exceptions to this blanket prohibition – for example where the disclosure to a country outside the EEA is made with the consent of the relevant individual (Article 26(1)(a)) – they are limited in practical scope. As a result, Article 25 created a legal risk to organizations which transfer personal data from Europe to the United States. The program regulates the exchange ofpassenger name recordinformation between the EU and the US. According to the EU directive, personal data may only be transferred to third countries if that country provides an adequate level of protection. Some exceptions to this rule are provided, for instance when the controller themself can guarantee that the recipient will comply with the data protection rules. TheEuropean Commissionhas set up the "Working party on the Protection of Individuals with regard to the Processing of Personal Data," commonly known as the "Article 29 Working Party". 
The Working Party gives advice about the level of protection in theEuropean Unionand third countries.[31] The Working Party negotiated with U.S. representatives about the protection of personal data, theSafe Harbor Principleswere the result. Notwithstanding that approval, the self-assessment approach of the Safe Harbor remains controversial with a number of European privacy regulators and commentators.[32] The Safe Harbor program addresses this issue in the following way: rather than a blanket law imposed on all organizations in theUnited States, a voluntary program is enforced by theFederal Trade Commission. U.S. organizations which register with this program, having self-assessed their compliance with a number of standards, are "deemed adequate" for the purposes of Article 25. Personal information can be sent to such organizations from the EEA without the sender being in breach of Article 25 or its EU national equivalents. The Safe Harbor was approved as providing adequate protection for personal data, for the purposes of Article 25(6), by the European Commission on 26 July 2000.[33] Under the Safe Harbor, adoptee organizations need to carefully consider their compliance with theonward transfer obligations, wherepersonal dataoriginating in the EU is transferred to the US Safe Harbor, and then onward to a third country. The alternative compliance approach of "binding corporate rules", recommended by many EU privacy regulators, resolves this issue. 
In addition, any dispute arising in relation to the transfer of HR data to the US Safe Harbor must be heard by a panel of EU privacy regulators.[34] In July 2007, a new, controversial,[35]Passenger Name Recordagreement between the US and the EU was made.[36]A short time afterwards, theBush administrationgave exemption for theDepartment of Homeland Security, for theArrival and Departure Information System(ADIS) and for theAutomated Target Systemfrom the1974 Privacy Act.[37] In February 2008,Jonathan Faull, the head of the EU's Commission of Home Affairs, complained about the US bilateral policy concerning PNR.[38]The US had signed in February 2008 a memorandum of understanding (MOU) with theCzech Republicin exchange of a visa waiver scheme, without concerting before with Brussels.[35]The tensions between Washington and Brussels are mainly caused by a lesser level of data protection in the US, especially since foreigners do not benefit from the USPrivacy Act of 1974. Other countries approached for bilateral MOU included the United Kingdom, Estonia, Germany and Greece.[39]
https://en.wikipedia.org/wiki/Information_privacy
In information technology, benchmarking of computer security requires measurements for comparing both different IT systems and single IT systems in dedicated situations. The technical approach is a pre-defined catalog of security events (security incidents and vulnerabilities) together with corresponding formulas for the calculation of security indicators that are accepted and comprehensive. Information security indicators have been standardized by the ETSI Industrial Specification Group (ISG) ISI. These indicators provide the basis for switching from a qualitative to a quantitative culture in IT security. The scope of measurements covers external and internal threats (attempts and successes), users' deviant behaviours, and nonconformities and/or vulnerabilities (software, configuration, behavioural, or general security framework). In 2019 the ISG ISI was terminated, and the related standards are now maintained via the ETSI TC CYBER. The list of information security indicators belongs to the ISI framework, which consists of the following eight closely linked work items: Preliminary work on information security indicators was done by the French Club R2GS. The first public set of the ISI standards (the security indicator list and event model) was released in April 2013.
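The "quantitative culture" the ISI indicators aim at boils down to computing agreed numbers from a catalog of security events. The following is a hypothetical sketch of such an indicator — a monthly incident count per event category — and not one of the actual ETSI ISI formulas, which are defined in the standard itself.

```python
# Hypothetical sketch of a quantitative security indicator:
# incidents per category per month, from a simple event log.
# The real ETSI ISI catalog defines its own events and formulas.

from collections import Counter

events = [  # (month, category) -- hypothetical log entries
    ("2024-01", "intrusion_attempt"),
    ("2024-01", "intrusion_attempt"),
    ("2024-01", "vulnerability"),
    ("2024-02", "intrusion_attempt"),
    ("2024-02", "nonconformity"),
]

def indicator(events, category):
    """Monthly count of events in one category."""
    return Counter(m for m, c in events if c == category)

print(indicator(events, "intrusion_attempt"))
```

A number like this can be tracked over time and compared across systems, which is exactly what a purely qualitative assessment does not allow.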
https://en.wikipedia.org/wiki/Information_security_indicators
Information technology(IT) is a set of related fields withininformation and communications technology(ICT), that encompass computer systems,software,programming languages,dataand information processing, and storage. Information technology is an application ofcomputer scienceandcomputer engineering. The term is commonly used as asynonymfor computers andcomputer networks, but it also encompasses other information distribution technologies such astelevisionandtelephones. Several products or services within an economy are associated with information technology, includingcomputer hardware,software, electronics, semiconductors,internet,telecom equipment, ande-commerce.[1][a] Aninformation technology system(IT system) is generally aninformation system, acommunications system, or, more specifically speaking, acomputer system— including allhardware,software, andperipheralequipment — operated by a limited group of IT users, and anIT projectusually refers to the commissioning and implementation of an IT system.[3]IT systems play a vital role in facilitating efficient data management, enhancing communication networks, and supporting organizational processes across various industries. Successful IT projects require meticulous planning and ongoing maintenance to ensure optimal functionality and alignment with organizational objectives.[4] Although humans have been storing, retrieving, manipulating, analysing and communicating information since the earliest writing systems were developed,[5]the terminformation technologyin its modern sense first appeared in a 1958 article published in theHarvard Business Review; authorsHarold J. Leavittand Thomas L. Whisler commented that "the new technology does not yet have a single established name. 
We shall call it information technology (IT)."[6] Their definition consists of three categories: techniques for processing, the application of statistical and mathematical methods to decision-making, and the simulation of higher-order thinking through computer programs.[6] Based on the storage and processing technologies employed, it is possible to distinguish four distinct phases of IT development: pre-mechanical (3000 BC – 1450 AD), mechanical (1450 – 1840), electromechanical (1840 – 1940), and electronic (1940 to present).[5] Ideas of computer science were first discussed before the 1950s at the Massachusetts Institute of Technology (MIT) and Harvard University, where researchers had begun to think about computer circuits and numerical calculations. As time went on, the fields of information technology and computer science became more complex and were able to handle the processing of more data. Scholarly articles began to be published by different organizations.[7] In the early days of computing, Alan Turing, J. Presper Eckert, and John Mauchly were considered some of the major pioneers of computer technology; most of their efforts were focused on designing the first digital computer.
Along with that, topics such as artificial intelligence began to be raised as Turing started to question the computing technology of the time period.[8] Devices have been used to aid computation for thousands of years, probably initially in the form of a tally stick.[9] The Antikythera mechanism, dating from about the beginning of the first century BC, is generally considered the earliest known mechanical analog computer, and the earliest known geared mechanism.[10] Comparable geared devices did not emerge in Europe until the 16th century, and it was not until 1645 that the first mechanical calculator capable of performing the four basic arithmetical operations was developed.[11] Electronic computers, using either relays or valves, began to appear in the early 1940s. The electromechanical Zuse Z3, completed in 1941, was the world's first programmable computer, and by modern standards one of the first machines that could be considered a complete computing machine. During the Second World War, Colossus, the first electronic digital computer, was developed to decrypt German messages. Although it was programmable, it was not general-purpose, being designed to perform only a single task. It also lacked the ability to store its program in memory; programming was carried out using plugs and switches to alter the internal wiring.[12] The first recognizably modern electronic digital stored-program computer was the Manchester Baby, which ran its first program on 21 June 1948.[13] The development of transistors in the late 1940s at Bell Laboratories allowed a new generation of computers to be designed with greatly reduced power consumption. The first commercially available stored-program computer, the Ferranti Mark I, contained 4050 valves and had a power consumption of 25 kilowatts.
By comparison, the first transistorized computer developed at theUniversity of Manchesterand operational by November 1953, consumed only 150 watts in its final version.[14] Several other breakthroughs insemiconductortechnology include theintegrated circuit(IC) invented byJack KilbyatTexas InstrumentsandRobert NoyceatFairchild Semiconductorin 1959, silicon dioxide surface passivation byCarl Froschand Lincoln Derick in 1955,[15]the first planar silicon dioxide transistors by Frosch and Derick in 1957,[16]theMOSFETdemonstration by a Bell Labs team,[17][18][19][20]theplanar processbyJean Hoerniin 1959,[21][22][23]and themicroprocessorinvented byTed Hoff,Federico Faggin,Masatoshi Shima, andStanley MazoratIntelin 1971. These important inventions led to the development of thepersonal computer(PC) in the 1970s, and the emergence ofinformation and communications technology(ICT).[24] By 1984, according to theNational Westminster Bank Quarterly Review, the terminformation technologyhad been redefined as "the convergence of telecommunications and computing technology (...generally known in Britain as information technology)." We then begin to see the appearance of the term in 1990 contained within documents for theInternational Organization for Standardization(ISO).[25] Innovations in technology have already revolutionized the world by the twenty-first century as people have gained access to different online services. This has changed the workforce drastically as thirty percent of U.S. workers were already in careers in this profession. 136.9 million people were personally connected to theInternet, which was equivalent to 51 million households.[26]Along with the Internet, new types of technology were also being introduced across the globe, which has improved efficiency and made things easier across the globe. As technology revolutionized society, millions of processes could be completed in seconds. 
Innovations in communication were crucial as people increasingly relied on computers to communicate via telephone lines and cable networks. The introduction of email was considered revolutionary, as "companies in one part of the world could communicate by e-mail with suppliers and buyers in another part of the world...".[27] Beyond personal use, computers and technology have also revolutionized the marketing industry, bringing products to more buyers. In 2002, Americans spent more than $28 billion on goods over the Internet alone, while e-commerce a decade later resulted in $289 billion in sales.[27] And as computers rapidly become more sophisticated by the day, people have come to rely on them ever more heavily in the twenty-first century. Early electronic computers such as Colossus made use of punched tape, a long strip of paper on which data was represented by a series of holes, a technology now obsolete.[28] Electronic data storage, which is used in modern computers, dates from World War II, when a form of delay-line memory was developed to remove the clutter from radar signals, the first practical application of which was the mercury delay line.[29] The first random-access digital storage device was the Williams tube, which was based on a standard cathode ray tube.[30] However, the information stored in it and in delay-line memory was volatile, in that it had to be continuously refreshed, and thus was lost once power was removed.
The earliest form of non-volatile computer storage was the magnetic drum, invented in 1932[31] and used in the Ferranti Mark 1, the world's first commercially available general-purpose electronic computer.[32] IBM introduced the first hard disk drive in 1956, as a component of their 305 RAMAC computer system.[33]: 6 Most digital data today is still stored magnetically on hard disks, or optically on media such as CD-ROMs.[34]: 4–5 Until 2002 most information was stored on analog devices, but that year digital storage capacity exceeded analog for the first time. As of 2007[update], almost 94% of the data stored worldwide was held digitally:[35] 52% on hard disks, 28% on optical devices, and 11% on digital magnetic tape. It has been estimated that the worldwide capacity to store information on electronic devices grew from less than 3 exabytes in 1986 to 295 exabytes in 2007,[36] doubling roughly every 3 years.[37] Database management systems (DMS) emerged in the 1960s to address the problem of storing and retrieving large amounts of data accurately and quickly. An early such system was IBM's Information Management System (IMS),[38] which is still widely deployed more than 50 years later.[39] IMS stores data hierarchically,[38] but in the 1970s Ted Codd proposed an alternative relational storage model based on set theory and predicate logic and the familiar concepts of tables, rows, and columns. In 1981, the first commercially available relational database management system (RDBMS) was released by Oracle.[40] All DMS consist of components that allow the data they store to be accessed simultaneously by many users while maintaining its integrity.[41] All databases have one point in common: the structure of the data they contain is defined and stored separately from the data itself, in a database schema.[38] In recent years, the extensible markup language (XML) has become a popular format for data representation.
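XML's machine- and human-readable text structure can be seen in a minimal example, parsed here with Python's standard library. The order/item schema below is made up purely for illustration.

```python
# A tiny XML document (hypothetical schema), readable by both
# humans and programs, parsed with the Python standard library.

import xml.etree.ElementTree as ET

doc = """<order id="42">
  <item sku="A1" qty="2"/>
  <item sku="B7" qty="1"/>
</order>"""

root = ET.fromstring(doc)
total = sum(int(item.get("qty")) for item in root.findall("item"))
print(root.get("id"), total)   # 42 3
```

The same text that a person can read as an order with two items is traversed by the program as a tree of elements and attributes.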
Although XML data can be stored in normal file systems, it is commonly held in relational databases to take advantage of their "robust implementation verified by years of both theoretical and practical effort."[42] As an evolution of the Standard Generalized Markup Language (SGML), XML's text-based structure offers the advantage of being both machine- and human-readable.[43] Data transmission has three aspects: transmission, propagation, and reception.[44] It can be broadly categorized as broadcasting, in which information is transmitted unidirectionally downstream, or telecommunications, with bidirectional upstream and downstream channels.[36] XML has been increasingly employed as a means of data interchange since the early 2000s,[45] particularly for machine-oriented interactions such as those involved in web-oriented protocols such as SOAP,[43] describing "data-in-transit rather than... data-at-rest".[45] Hilbert and Lopez identify the exponential pace of technological change (a kind of Moore's law): machines' application-specific capacity to compute information per capita roughly doubled every 14 months between 1986 and 2007; the per capita capacity of the world's general-purpose computers doubled every 18 months during the same two decades; the global telecommunication capacity per capita doubled every 34 months; the world's storage capacity per capita required roughly 40 months to double (every 3 years); and per capita broadcast information has doubled every 12.3 years.[36] Massive amounts of data are stored worldwide every day, but unless it can be analyzed and presented effectively it essentially resides in what have been called data tombs: "data archives that are seldom visited".[46] To address that issue, the field of data mining – "the process of discovering interesting patterns and knowledge from large amounts of data"[47] – emerged in the late 1980s.[48] Electronic mail (e-mail) is the technology and the services IT provides for sending and receiving electronic messages (called "letters" or "electronic
letters") over a distributed (including global) computer network. In terms of the composition of elements and the principle of operation, electronic mail practically repeats the system of regular (paper) mail, borrowing both terms (mail, letter, envelope, attachment, box, delivery, and others) and characteristic features — ease of use, message transmission delays, sufficient reliability and at the same time no guarantee of delivery. The advantages of e-mail are: easily perceived and remembered by a person addresses of the form user_name@domain_name (for example, somebody@example.com); the ability to transfer both plain text and formatted, as well as arbitrary files; independence of servers (in the general case, they address each other directly); sufficiently high reliability of message delivery; ease of use by humans and programs. The disadvantages of e-mail include: the presence of such a phenomenon as spam (massive advertising and viral mailings); the theoretical impossibility of guaranteed delivery of a particular letter; possible delays in message delivery (up to several days); limits on the size of one message and on the total size of messages in the mailbox (personal for users). A search system is software and hardware complex with a web interface that provides the ability to search for information on the Internet. A search engine usually means a site that hosts the interface (front-end) of the system. The software part of a search engine is a search engine (search engine) — a set of programs that provides the functionality of a search engine and is usually a trade secret of the search engine developer company. Most search engines look for information onWorld Wide Websites, but there are also systems that can look for files on FTP servers, items in online stores, and information on Usenet newsgroups. Improving search is one of the priorities of the modern Internet (see the Deep Web article about the main problems in the work of search engines). 
Companies in the information technology field are often discussed as a group as the "tech sector" or the "tech industry."[49][50][51]These titles can be misleading at times and should not be mistaken for "tech companies," which are generally large scale, for-profit corporations that sell consumer technology and software. It is also worth noting that from a business perspective, information technology departments are a "cost center" the majority of the time. A cost center is a department or staff which incurs expenses, or "costs," within a company rather than generating profits or revenue streams. Modern businesses rely heavily on technology for their day-to-day operations, so the expenses delegated to cover technology that facilitates business in a more efficient manner are usually seen as "just the cost of doing business." IT departments are allocated funds by senior leadership and must attempt to achieve the desired deliverables while staying within that budget. Government and the private sector might have different funding mechanisms, but the principles are more or less the same. This is an often overlooked reason for the rapid interest in automation andartificial intelligence, but the constant pressure to do more with less is opening the door for automation to take control of at least some minor operations in large companies. Many companies now have IT departments for managing thecomputers, networks, and other technical areas of their businesses. 
Companies have also sought to integrate IT with business outcomes and decision-making through a BizOps or business operations department.[52] In a business context, theInformation Technology Association of Americahas defined information technology as "the study, design, development, application, implementation, support, or management of computer-based information systems".[53][page needed]The responsibilities of those working in the field include network administration, software development and installation, and the planning and management of an organization's technology life cycle, by which hardware and software are maintained, upgraded, and replaced. Information services is a term somewhat loosely applied to a variety of IT-related services offered by commercial companies,[54][55][56]as well asdata brokers. The field of information ethics was established by mathematicianNorbert Wienerin the 1940s.[58]: 9Some of the ethical issues associated with the use of information technology include:[59]: 20–21 Research suggests that IT projects in business and public administration can easily become significant in scale. Work conducted byMcKinseyin collaboration with theUniversity of Oxfordsuggested that half of all large-scale IT projects (those with initialcost estimatesof $15 million or more) often failed to maintain costs within their initial budgets or to complete on time.[60]
https://en.wikipedia.org/wiki/Information_technology
Information technology risk, IT risk, IT-related risk, or cyber risk is any risk relating to information technology.[1] While information has long been appreciated as a valuable and important asset, the rise of the knowledge economy and the Digital Revolution has led to organizations becoming increasingly dependent on information, information processing, and especially IT. Various events or incidents that compromise IT in some way can therefore cause adverse impacts on the organization's business processes or mission, ranging from inconsequential to catastrophic in scale. Assessing the probability or likelihood of various types of events and incidents, with their predicted impacts or consequences should they occur, is a common way to assess and measure IT risks.[2] Alternative methods of measuring IT risk typically involve assessing other contributory factors such as the threats, vulnerabilities, exposures, and asset values.[3][4] IT risk can be defined as: the potential that a given threat will exploit vulnerabilities of an asset or group of assets and thereby cause harm to the organization, measured in terms of a combination of the probability of occurrence of an event and its consequence.[5] The Committee on National Security Systems of the United States of America has defined risk in several of its documents, and the National Information Assurance Training and Education Center gives its own definition of risk in the IT field.[8] Many NIST publications define risk in an IT context; FISMApedia[9] provides a list of such terms,[10] among them the definition in NIST SP 800-30.[11] According to one influential definition, IT risk is the probable frequency and probable magnitude of future loss.[13] ISACA published the Risk IT Framework in order to provide an end-to-end, comprehensive view of all risks related to the use of IT.
There,[14] IT risk is given a broader meaning: it encompasses not only the negative impact on operations and service delivery, which can bring destruction or reduction of the organization's value, but also the benefit- and value-enabling risk associated with missing opportunities to use technology to enable or enhance the business, and IT project management risks in aspects like overspending or late delivery with adverse business impact. Measuring IT risk (or cyber risk) can occur at many levels. At a business level, the risks are managed categorically. Front-line IT departments and NOCs tend to measure more discrete, individual risks. Managing the nexus between them is a key role for modern CISOs. When measuring risk of any kind, selecting the correct equation for a given threat, asset, and available data is an important step. Doing so is a subject unto itself, but there are common components of risk equations that are helpful to understand. There are four fundamental forces involved in risk management, which also apply to cybersecurity: assets, impact, threats, and likelihood. You have internal knowledge of, and a fair amount of control over, assets, which are tangible and intangible things that have value. You also have some control over impact, which refers to loss of, or damage to, an asset. However, threats, which represent adversaries and their methods of attack, are external to your control. Likelihood is the wild card in the bunch. Likelihoods determine if and when a threat will materialize, succeed, and do damage.
While never fully under your control, likelihoods can be shaped and influenced to manage the risk.[16] Mathematically, the forces can be represented in a formula such as Risk = p(Asset, Threat) × d(Asset, Threat), where p() is the likelihood that a threat will materialize/succeed against an asset, and d() is the likelihood of various levels of damage that may occur.[17] The field of IT risk management has spawned a number of terms and techniques which are unique to the industry. Some industry terms have yet to be reconciled; for example, the term vulnerability is often used interchangeably with likelihood of occurrence, which can be problematic. Frequently encountered IT risk management terms and techniques include the following. The risk R is the product of the likelihood L of a security incident occurring and the impact I that the incident will have on the organization, that is R = L × I.[22] The likelihood of a security incident occurring is a function of the likelihood that a threat appears and the likelihood that the threat can successfully exploit the relevant system vulnerabilities. The consequence of a security incident is a function of the likely impact the incident will have on the organization as a result of the harm the organization's assets will sustain. Harm is related to the value of the assets to the organization; the same asset can have different values to different organizations. So R can be a function of four factors. If numerical values are assigned (money for impact and probabilities for the other factors), the risk can be expressed in monetary terms and compared to the cost of countermeasures and to the residual risk after applying the security control. It is not always practical to express these values, so in the first step of risk evaluation, risks are graded dimensionlessly on three- or five-step scales.
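The quantitative relationship described above, risk as likelihood times impact, with likelihood split into the chance that a threat appears and the chance that it successfully exploits a vulnerability, can be illustrated with a short sketch. The probabilities and asset figures below are invented for illustration, not drawn from any cited standard.

```python
# Illustrative sketch of the risk equations above: R = L x I, where the
# likelihood L combines the chance a threat appears with the chance it
# successfully exploits a vulnerability. All figures are hypothetical.

def risk(p_threat: float, p_exploit: float, asset_value: float, damage_fraction: float) -> float:
    """Expected loss: likelihood of the incident times the monetary impact."""
    likelihood = p_threat * p_exploit
    impact = asset_value * damage_fraction
    return likelihood * impact

# A threat appearing with probability 0.30, succeeding 20% of the time,
# against a 500,000 asset that would lose 40% of its value per incident:
print(round(risk(0.30, 0.20, 500_000, 0.40), 2))  # → 12000.0
```

Expressed in monetary terms like this, the expected loss can be compared directly with the cost of a countermeasure, as the text describes.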
OWASP proposes a practical risk measurement guideline[22] based on estimating likelihood and impact. An IT risk management system (ITRMS) is a component of a broader enterprise risk management (ERM) system.[23] ITRMS are also integrated into broader information security management systems (ISMS). The continuous update and maintenance of an ISMS is in turn part of an organisation's systematic approach for identifying, assessing, and managing information security risks.[24] The Certified Information Systems Auditor Review Manual 2006 by ISACA provides this definition of risk management: "Risk management is the process of identifying vulnerabilities and threats to the information resources used by an organization in achieving business objectives, and deciding what countermeasures, if any, to take in reducing risk to an acceptable level, based on the value of the information resource to the organization."[25] The NIST Cybersecurity Framework encourages organizations to manage IT risk as part of the Identify (ID) function:[26][27] Risk Assessment (ID.RA): the organization understands the cybersecurity risk to organizational operations (including mission, functions, image, or reputation), organizational assets, and individuals. Risk Management Strategy (ID.RM): the organization's priorities, constraints, risk tolerances, and assumptions are established and used to support operational risk decisions. The following is a brief description of applicable rules, organized by source.[28] The OECD, the European Union (divided by topic), and the United States (divided by topic) have each issued relevant rules. As legislation evolves, there has been increasing focus on requiring 'reasonable security' for information management.
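As a sketch of the kind of dimensionless, stepped grading described above, the following toy scorer follows the general shape of the OWASP risk rating approach: likelihood and impact are each averaged from 0-9 factor scores, bucketed into LOW/MEDIUM/HIGH, and combined into an overall severity. The thresholds, matrix, and factor values here are simplified illustrations, not a reproduction of the OWASP methodology.

```python
# Toy qualitative risk grader in the style of a likelihood x impact
# severity matrix. Thresholds and matrix entries are illustrative.

def level(score: float) -> str:
    """Bucket a 0-9 score into a three-step scale."""
    if score < 3:
        return "LOW"
    if score < 6:
        return "MEDIUM"
    return "HIGH"

# Overall severity lookup, indexed by (likelihood level, impact level).
SEVERITY = {
    ("LOW", "LOW"): "Note",     ("LOW", "MEDIUM"): "Low",       ("LOW", "HIGH"): "Medium",
    ("MEDIUM", "LOW"): "Low",   ("MEDIUM", "MEDIUM"): "Medium", ("MEDIUM", "HIGH"): "High",
    ("HIGH", "LOW"): "Medium",  ("HIGH", "MEDIUM"): "High",     ("HIGH", "HIGH"): "Critical",
}

def rate(likelihood_factors, impact_factors) -> str:
    l = sum(likelihood_factors) / len(likelihood_factors)
    i = sum(impact_factors) / len(impact_factors)
    return SEVERITY[(level(l), level(i))]

# Hypothetical scoring: a skilled attacker with an easy exploit (high
# likelihood) against an asset of moderate business impact.
print(rate([8, 7, 6, 9], [4, 5, 3, 4]))  # → High
```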
CCPA requires "manufacturers of connected devices to equip the device with reasonable security."[32] New York's SHIELD Act requires that organizations that manage NY residents' information "develop, implement and maintain reasonable safeguards to protect the security, confidentiality and integrity of the private information including, but not limited to, disposal of data." This concept will influence how businesses manage their risk management plans as compliance requirements develop. The list is chiefly based on a single source.[28]
https://en.wikipedia.org/wiki/IT_risk
ITIL security management describes the structured fitting of security into an organization. ITIL security management is based on the ISO 27001 standard. "ISO/IEC 27001:2005 covers all types of organizations (e.g. commercial enterprises, government agencies, not-for profit organizations).[1] ISO/IEC 27001:2005 specifies the requirements for establishing, implementing, operating, monitoring, reviewing, maintaining and improving a documented Information Security Management System within the context of the organization's overall business risks. It specifies requirements for the implementation of security controls customized to the needs of individual organizations or parts thereof. ISO/IEC 27001:2005 is designed to ensure the selection of adequate and proportionate security controls that protect information assets and give confidence to interested parties." A basic concept of security management is information security. The primary goal of information security is to control access to information. The value of the information is what must be protected; these values include confidentiality, integrity and availability. Inferred aspects are privacy, anonymity and verifiability. The goal of security management comes in two parts: realizing the security requirements defined in the SLA and other external requirements, and realizing a basic level of security. SLAs define security requirements, along with legislation (if applicable) and other contracts. These requirements can act as key performance indicators (KPIs) that can be used for process management and for interpreting the results of the security management process. The security management process relates to other ITIL processes; the most obvious relations are those to the service level management, incident management and change management processes. Security management is a continuous process that can be compared to W. Edwards Deming's Quality Circle (Plan, Do, Check, Act). The inputs are requirements from clients. The requirements are translated into security services and security metrics.
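The Plan-Do-Check-Act loop just described can be sketched as code. In this toy Python sketch (the requirement names and the gap-finding rule are illustrative, not part of ITIL), the Check step compares SLA requirements against what was implemented, and the findings feed the next plan.

```python
# Toy sketch of one Plan-Do-Check-Act iteration: evaluation compares
# the SLA's security requirements with the implemented measures, and
# the gaps drive the maintained plan for the next cycle.

def one_cycle(sla_requirements: set[str], implemented: set[str]) -> dict:
    gaps = sla_requirements - implemented   # Check: evaluation finds unmet requirements
    next_plan = implemented | gaps          # Act/Plan: maintain the plan to cover the gaps
    return {"gaps": sorted(gaps), "next_plan": sorted(next_plan)}

# Hypothetical requirements and implementation status:
sla = {"access control", "encryption", "incident reporting"}
done = {"access control", "encryption"}
print(one_cycle(sla, done)["gaps"])  # → ['incident reporting']
```

As in the text, the report of unmet requirements is what lets both the client and the provider adjust requirements and plans on the next pass around the loop.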
Both the client and the plan sub-process affect the SLA. The SLA is an input for both the client and the process. The provider develops security plans for the organization. These plans contain policies and operational level agreements. The security plans (Plan) are then implemented (Do) and the implementation is then evaluated (Check). After the evaluation, the plans and the plan implementation are maintained (Act). The activities, results/products and the process are documented. External reports are written and sent to the clients. The clients are then able to adapt their requirements based on the information received through the reports. Furthermore, the service provider can adjust their plan or the implementation based on their findings in order to satisfy all the requirements stated in the SLA (including new requirements). The first activity in the security management process is the “Control” sub-process. The Control sub-process organizes and manages the security management process. The Control sub-process defines the processes, the allocation of responsibility for the policy statements and the management framework. The security management framework defines the sub-processes for development, implementation and evaluations into action plans. Furthermore, the management framework defines how results should be reported to clients. The meta-process model of the control sub-process is based on aUMLactivity diagramand gives an overview of the activities of the Control sub-process. The grey rectangle represents the control sub-process and the smaller beam shapes inside it represent activities that take place inside it. The meta-data model of the control sub-process is based on a UMLclass diagram. Figure 2.1.2 shows the metamodel of the control sub-process. Figure 2.1.2: Meta-process model control sub-process The CONTROL rectangle with a white shadow is an open complex concept. This means that the Control rectangle consists of a collection of (sub) concepts. 
Figure 2.1.3 is the process data model of the control sub-process. It shows the integration of the two models. The dotted arrows indicate the concepts that are created or adjusted in the corresponding activities. Figure 2.1.3: Process-data model control sub-process The Plan sub-process contains activities that in cooperation withservice level managementlead to the (information) Security section in the SLA. Furthermore, the Plan sub-process contains activities that are related to the underpinning contracts which are specific for (information) security. In the Plan sub-process the goals formulated in the SLA are specified in the form of operational level agreements (OLA). These OLA's can be defined as security plans for a specific internal organization entity of the service provider. Besides the input of the SLA, the Plan sub-process also works with the policy statements of the service provider itself. As said earlier these policy statements are defined in the control sub-process. The operational level agreements forinformation securityare set up and implemented based on the ITIL process. This requires cooperation with other ITIL processes. For example, if security management wishes to change theIT infrastructurein order to enhance security, these changes will be done through thechange managementprocess. Security management delivers the input (Request for change) for this change. The Change Manager is responsible for the change management process. Plan consists of a combination of unordered and ordered (sub) activities. The sub-process contains three complex activities that are all closed activities and one standard activity. Just as the Control sub-process the Plan sub-process is modeled using the meta-modeling technique. The left side of figure 2.2.1 is the meta-data model of the Plan sub-process. The Plan rectangle is an open (complex) concept which has an aggregation type of relationship with two closed (complex) concepts and one standard concept. 
The two closed concepts are not expanded in this particular context. The following picture (figure 2.2.1) is the process-data diagram of the Plan sub-process. This picture shows the integration of the two models. The dotted arrows indicate which concepts are created or adjusted in the corresponding activities of the Plan sub-process. Figure 2.2.1: Process-data model Plan sub-process The Implementation sub-process makes sure that all measures, as specified in the plans, are properly implemented. During the Implementation sub-process no measures are defined or changed; the definition or change of measures takes place in the Plan sub-process, in cooperation with the change management process. Changes are formally identified by type (e.g., project scope change request, validation change request, infrastructure change request), and this process leads to asset classification and control documents. The left side of figure 2.3.1 is the meta-process model of the Implementation phase. The four labels with a black shadow indicate that these activities are closed concepts that are not expanded in this context. No arrows connect these four activities, meaning that they are unordered and that reporting is carried out after the completion of all four activities. During the implementation phase concepts are created and/or adjusted; they are modeled using the meta-modeling technique. The right side of figure 2.3.1 is the meta-data model of the Implementation sub-process. Implementation documents are an open concept and are expanded upon in this context; they consist of four closed concepts that are not expanded because they are irrelevant in this particular context. In order to make the relations between the two models clearer, their integration is illustrated in figure 2.3.1. The dotted arrows running from the activities to the concepts illustrate which concepts are created/adjusted in the corresponding activities.
Figure 2.3.1: Process-data model Implementation sub-process Evaluation is necessary to measure the success of the implementation and security plans. The evaluation is important for clients (and possibly third parties). The results of the Evaluation sub-process are used to maintain the agreed measures and the implementation. Evaluation results can lead to new requirements and a corresponding Request for Change, which is then defined and sent to change management. The three sorts of evaluation are self-assessment, internal audit and external audit. Self-assessment is mainly carried out within the organization of the processes, internal audits are carried out by internal IT auditors, and external audits are carried out by external, independent IT auditors. Besides those already mentioned, an evaluation based on communicated security incidents also occurs. The most important activities for this evaluation are security monitoring of IT systems, verifying compliance with security legislation and the security plan implementation, and tracing and reacting to undesirable use of IT supplies. Figure 2.4.1: Process-data model Evaluation sub-process The process-data diagram illustrated in figure 2.4.1 consists of a meta-process model and a meta-data model. The Evaluation sub-process was modeled using the meta-modeling technique. The dotted arrows running from the meta-process diagram (left) to the meta-data diagram (right) indicate which concepts are created/adjusted in the corresponding activities. All of the activities in the evaluation phase are standard activities. For a short description of the Evaluation phase concepts see Table 2.4.2, where the concepts are listed and defined. Table 2.4.2: Concept and definition evaluation sub-process Security management Because of organizational and IT-infrastructure changes, security risks change over time, requiring revisions to the security section of service level agreements and security plans.
Maintenance is based on the results of the Evaluation sub-process and insight in the changing risks. These activities will produce proposals. The proposals either serve as inputs for the plan sub-process and travel through the cycle or can be adopted as part of maintaining service level agreements. In both cases the proposals could lead to activities in the action plan. The actual changes are made by the Change Management process. Figure 2.5.1 is the process-data diagram of the implementation sub-process. This picture shows the integration of the meta-process model (left) and the meta-data model (right). The dotted arrows indicate which concepts are created or adjusted in the activities of the implementation phase. Figure 2.5.1: Process-data model Maintenance sub-process The maintenance sub-process starts with the maintenance of the service level agreements and the maintenance of the operational level agreements. After these activities take place (in no particular order) and there is a request for a change the request for change activity will take place and after the request for change activity is concluded the reporting activity starts. If there is no request for a change then the reporting activity will start directly after the first two activities. The concepts in the meta-data model are created/ adjusted during the maintenance phase. For a list of the concepts and their definition take a look at table 2.5.2. Table 2.5.2: Concept and definition Plan sub-process Security management Figure 2.6.1: Complete process-data model Security Management process The Security Management Process, as stated in the introduction, has relations with almost all other ITIL-processes. These processes are: Within these processes activities concerning security are required. The concerning process and its process manager are responsible for these activities. However, Security Management gives indications to the concerning process on how to structure these activities. 
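The control flow of the Maintenance sub-process described above, SLA and OLA maintenance first (in no particular order), an optional Request for Change, and reporting last, can be sketched as follows. The step names are illustrative.

```python
# Control-flow sketch of the Maintenance sub-process: SLA and OLA
# maintenance form an unordered pair, a Request for Change is raised
# only when needed (and is then handled by Change Management), and
# reporting always concludes the sub-process.

def maintenance_subprocess(change_needed: bool) -> list[str]:
    steps = ["maintain SLA", "maintain OLA"]  # unordered pair of activities
    if change_needed:
        steps.append("raise Request for Change")  # actual change made by Change Management
    steps.append("report")
    return steps

print(maintenance_subprocess(True))
# → ['maintain SLA', 'maintain OLA', 'raise Request for Change', 'report']
```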
Internal e-mail is subject to multiple security risks, requiring a corresponding security plan and policies. In this example the ITIL security management approach is used to implement e-mail policies. The security management team is formed and process guidelines are formulated and communicated to all employees and providers; these actions are carried out in the Control phase. In the subsequent Plan phase, policies specific to e-mail security are formulated and added to service level agreements. At the end of this phase the entire plan is ready to be implemented, and implementation is done according to the plan. After implementation the policies are evaluated, either as self-assessments or via internal or external auditors. In the Maintenance phase the e-mail policies are adjusted based on the evaluation, with needed changes processed via Requests for Change.
https://en.wikipedia.org/wiki/ITIL_security_management
The cyber kill chain is the process by which perpetrators carry out cyberattacks.[2] Lockheed Martin adapted the concept of the kill chain from a military setting to information security, using it as a method for modeling intrusions on a computer network.[3] The cyber kill chain model has seen some adoption in the information security community.[4] However, acceptance is not universal, with critics pointing to what they believe are fundamental flaws in the model.[5] Computer scientists at the Lockheed Martin corporation described a new "intrusion kill chain" framework or model to defend computer networks in 2011.[6] They wrote that attacks may occur in phases and can be disrupted through controls established at each phase. Since then, the "cyber kill chain" has been adopted by data security organizations to define phases of cyberattacks.[7] A cyber kill chain reveals the phases of a cyberattack: from early reconnaissance to the goal of data exfiltration.[8] The kill chain can also be used as a management tool to help continuously improve network defense. According to Lockheed Martin, threats must progress through several phases in the model: reconnaissance, weaponization, delivery, exploitation, installation, command and control, and actions on objectives. Defensive courses of action (detect, deny, disrupt, degrade, deceive, and destroy) can be taken against these phases.[9] A U.S. Senate investigation of the 2013 Target Corporation data breach included analysis based on the Lockheed Martin kill chain framework. It identified several stages where controls did not prevent or detect progression of the attack.[1] Different organizations have constructed their own kill chains to try to model different threats. FireEye proposes a linear model similar to Lockheed Martin's. In FireEye's kill chain the persistence of threats is emphasized.
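The phase structure of the Lockheed Martin model lends itself to a simple sketch: defenders who break the chain at an early phase deny the attacker everything downstream. The pairing of the phase list with the courses of action below is a simplified illustration, not the full courses-of-action matrix from the original paper.

```python
# The Lockheed Martin kill-chain phases, in order, and the defensive
# courses of action the model associates with disrupting an intrusion.

KILL_CHAIN = [
    "Reconnaissance", "Weaponization", "Delivery", "Exploitation",
    "Installation", "Command and Control", "Actions on Objectives",
]
COURSES_OF_ACTION = ["Detect", "Deny", "Disrupt", "Degrade", "Deceive", "Destroy"]

def phases_remaining(detected_phase: str) -> list[str]:
    """Phases the attacker would still need to complete after being
    stopped at detected_phase; breaking the chain at any earlier phase
    prevents everything that follows."""
    i = KILL_CHAIN.index(detected_phase)
    return KILL_CHAIN[i + 1:]

print(phases_remaining("Delivery"))
# → ['Exploitation', 'Installation', 'Command and Control', 'Actions on Objectives']
```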
This model stresses that a threat does not end after one cycle.[10] Among the critiques of Lockheed Martin's cyber kill chain model as threat assessment and prevention tool is that the first phases happen outside the defended network, making it difficult to identify or defend against actions in these phases.[11]Similarly, this methodology is said to reinforce traditional perimeter-based and malware prevention-based defensive strategies.[12]Others have noted that the traditional cyber kill chain isn't suitable to model the insider threat.[13]This is particularly troublesome given the likelihood of successful attacks that breach the internal network perimeter, which is why organizations "need to develop a strategy for dealing with attackers inside the firewall. They need to think of every attacker as [a] potential insider".[14] The Unified Kill Chain was developed in 2017 by Paul Pols in collaboration with Fox-IT andLeiden Universityto overcome common critiques against the traditional cyber kill chain, by uniting and extendingLockheed Martin's kill chain andMITRE'sATT&CKframework (both of which are based on the "Get In, Stay In, and Act" model constructed by James Tubberville and Joe Vest). The unified version of the kill chain is an ordered arrangement of 18 unique attack phases that may occur in an end-to-endcyberattack, which covers activities that occur outside and within the defended network. As such, the unified kill chain improves over the scope limitations of the traditional kill chain and the time-agnostic nature of tactics in MITRE's ATT&CK. The unified model can be used to analyze, compare, and defend against end-to-end cyberattacks byadvanced persistent threats(APTs).[15]A subsequent whitepaper on the unified kill chain was published in 2021.[16]
https://en.wikipedia.org/wiki/Cyber_kill_chain
In the computer security or information security fields, there are a number of tracks a professional can take to demonstrate qualifications.[Notes 1] Four sources categorize these, and many other credentials, licenses, and certifications. Quality and acceptance vary worldwide for IT security credentials, from well-known and high-quality examples like a master's degree in the field from an accredited school, CISSP, and Microsoft certification, to a controversial list of many dozens of lesser-known credentials and organizations. In addition to certification obtained by taking courses and/or passing exams (and in the case of CISSP and others noted below, demonstrating experience and/or being recommended or given a reference from an existing credential holder), award certificates also are given for winning government, university or industry-sponsored competitions, including team competitions and contests.[Notes 2] Microsoft certifications are valid for one year: a free renewal exam must be completed within 180 days before expiration. If it is not, the certificate expires; otherwise it is extended by one year.
https://en.wikipedia.org/wiki/List_of_computer_security_certifications
Network Security Services(NSS) is a collection of cryptographiccomputer librariesdesigned to supportcross-platformdevelopment of security-enabled client and server applications with optional support for hardwareTLS/SSL accelerationon the server side and hardware smart cards on the client side. NSS provides a completeopen-sourceimplementation of cryptographic libraries supportingTransport Layer Security(TLS) /Secure Sockets Layer(SSL) andS/MIME. NSS releases prior to version 3.14 aretri-licensedunder theMozilla Public License1.1, theGNU General Public License, and theGNU Lesser General Public License. Since release 3.14, NSS releases are licensed under GPL-compatible Mozilla Public License 2.0.[2] NSS originated from the libraries developed whenNetscapeinvented the SSL security protocol. The NSS software crypto module has been validated five times (in 1997,[3]1999, 2002,[4]2007, and 2010[5]) for conformance toFIPS 140at Security Levels 1 and 2.[6]NSS was the first open source cryptographic library to receive FIPS 140 validation.[6]The NSS libraries passed theNISCCTLS/SSL and S/MIME test suites (1.6 million test cases of invalid input data).[6] AOL,Red Hat,Sun Microsystems/Oracle Corporation,Googleand other companies and individual contributors have co-developed NSS.Mozillaprovides the source code repository, bug tracking system, and infrastructure for mailing lists and discussion groups. They and others named below use NSS in a variety of products, including the following: NSS includes a framework to which developers andOEMscan contribute patches, such as assembly code, to optimize performance on their platforms. Mozilla has certified NSS 3.x on 18 platforms.[8][9]NSS makes use ofNetscape Portable Runtime(NSPR), a platform-neutral open-source API for system functions designed to facilitate cross-platform development. Like NSS, NSPR has been used heavily in multiple products. 
In addition to libraries and APIs, NSS provides security tools required for debugging, diagnostics, certificate and key management, cryptography-module management, and other development tasks. NSS comes with an extensive and growing set of documentation, including introductory material, API references,manpages for command-line tools, and sample code. Programmers can utilize NSS as source and as shared (dynamic) libraries. Every NSS release is backward-compatible with previous releases, allowing NSS users to upgrade to new NSS shared libraries without recompiling or relinking their applications. NSS supports a range of security standards, including the following:[10][11] NSS supports thePKCS #11interface for access to cryptographic hardware likeTLS/SSL accelerators,hardware security modulesandsmart cards. Since most hardware vendors such asSafeNet,AEPandThalesalso support this interface, NSS-enabled applications can work with high-speed crypto hardware and use private keys residing on various smart cards, if vendors provide the necessary middleware. NSS version 3.13 and above support theAdvanced Encryption Standard New Instructions(AES-NI).[12] Network Security Services forJava(JSS) consists of a Java interface to NSS. It supports most of the security standards and encryption technologies supported by NSS. JSS also provides a pure Java interface forASN.1types andBER/DERencoding.[13]
https://en.wikipedia.org/wiki/Network_Security_Services
Privacy engineeringis an emerging field of engineering which aims to provide methodologies, tools, and techniques to ensure systems provide acceptable levels ofprivacy. Its focus lies in organizing and assessing methods to identify and tackle privacy concerns within the engineering ofinformation systems.[1] In theUS, an acceptable level of privacy is defined in terms of compliance to the functional and non-functional requirements set out through aprivacy policy, which is a contractual artifact displaying the data controlling entities compliance to legislation such asFair Information Practices, health record security regulation and otherprivacy laws. In theEU, however, theGeneral Data Protection Regulation(GDPR) sets the requirements that need to be fulfilled. In the rest of the world, the requirements change depending on local implementations ofprivacyanddata protectionlaws. The definition of privacy engineering given byNational Institute of Standards and Technology (NIST)is:[2] Focuses on providing guidance that can be used to decrease privacy risks, and enable organizations to make purposeful decisions about resource allocation and effective implementation of controls in information systems. While privacy has been developing as a legal domain, privacy engineering has only really come to the fore in recent years as the necessity of implementing said privacy laws in information systems has become a definite requirement to the deployment of such information systems. For example, IPEN outlines their position in this respect as:[3] One reason for the lack of attention to privacy issues in development is the lack of appropriate tools and best practices. Developers have to deliver quickly in order to minimize time to market and effort, and often will re-use existing components, despite their privacy flaws. There are, unfortunately, few building blocks for privacy-friendly applications and services, and security can often be weak as well. 
Privacy engineering involves aspects such as process management,security,ontologyandsoftware engineering.[4]The actual application of these derives from necessary legal compliances, privacy policies and 'manifestos' such asPrivacy-by-Design.[5] Towards the more implementation levels, privacy engineering employsprivacy enhancing technologiesto enableanonymisationandde-identificationof data. Privacy engineering requires suitable security engineering practices to be deployed, and some privacy aspects can be implemented using security techniques. A privacy impact assessment is another tool within this context and its use does not imply that privacy engineering is being practiced. One area of concern is the proper definition and application of terms such as personal data, personally identifiable information, anonymisation andpseudo-anonymisationwhich lack sufficient and detailed enough meanings when applied to software, information systems and data sets. Another facet of information system privacy has been the ethical use of such systems with particular concern onsurveillance,big datacollection,artificial intelligenceetc. Some members of the privacy and privacy engineering community advocate for the idea ofethics engineeringor reject the possibility of engineering privacy into systems intended for surveillance. Software engineers often encounter problems when interpreting legal norms into current technology. Legal requirements are by nature neutral to technology and will in case of legal conflict be interpreted by a court in the context of the current status of both technology and privacy practice. 
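As a concrete illustration of the pseudonymisation techniques mentioned above, the sketch below replaces a direct identifier with a keyed hash, so records remain linkable while re-identification requires the secret key. The key, field names and truncation length are invented for illustration; real deployments also need key management and must account for quasi-identifiers that this simple scheme does not touch.

```python
# Minimal pseudonymisation sketch: a keyed (HMAC) hash replaces a direct
# identifier. Linkage across records is preserved, but reversing the
# mapping requires the secret key. Illustrative only.
import hashlib
import hmac

SECRET_KEY = b"example-key"  # in practice: a managed, rotatable secret

def pseudonymise(identifier: str) -> str:
    return hmac.new(SECRET_KEY, identifier.encode(), hashlib.sha256).hexdigest()[:16]

record = {"name": "Alice Example", "diagnosis": "..."}
safe_record = {"pid": pseudonymise(record["name"]), "diagnosis": record["diagnosis"]}
# The same identifier always maps to the same pseudonym, enabling linkage:
assert pseudonymise("Alice Example") == safe_record["pid"]
print(safe_record["pid"])
```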
As this particular field is still in its infancy and somewhat dominated by legal aspects, only the primary areas on which privacy engineering is based can be outlined. Despite the lack of cohesive development of these areas, courses already exist for training in privacy engineering.[8][9][10] The International Workshop on Privacy Engineering, co-located with the IEEE Symposium on Security and Privacy, provides a venue to address "the gap between research and practice in systematizing and evaluating approaches to capture and address privacy issues while engineering information systems".[11][12][13] A number of approaches to privacy engineering exist. The LINDDUN[14] methodology takes a risk-centric approach in which personal data flows at risk are identified and then secured with privacy controls.[15][16] Guidance for interpretation of the GDPR has been provided in the GDPR recitals,[17] which have been coded into a decision tool[18] that maps the GDPR onto software engineering forces[18] with the goal of identifying suitable privacy design patterns.[19][20] One further approach uses eight privacy design strategies, four technical and four administrative, to protect data and to implement data subject rights.[21] Privacy engineering is particularly concerned with the processing of information over various aspects or ontologies and their relations[22] to their implementation in software. Further to this, how the above affect the security classification and risk classification, and thus the levels of protection and flow within a system, can then be metricised or calculated. Privacy is an area dominated by legal aspects but requires implementation using, ostensibly, engineering techniques, disciplines and skills.
Privacy Engineering as an overall discipline takes its basis from considering privacy not just as a legal aspect or engineering aspect and their unification but also utilizing the following areas:[25] The impetus for technological progress in privacy engineering stems from generalprivacy lawsand various particular legal acts:
https://en.wikipedia.org/wiki/Privacy_engineering
Privacy-enhancing technologies(PET) are technologies that embody fundamental data protection principles by minimizing personal data use, maximizing data security, and empowering individuals. PETs allowonline usersto protect theprivacyof theirpersonally identifiable information(PII), which is often provided to and handled by services or applications. PETs use techniques to minimize an information system's possession ofpersonal datawithout losing functionality.[1]Generally speaking, PETs can be categorized as either hard or soft privacy technologies.[2] The objective of PETs is to protectpersonal dataand assure technology users of two key privacy points: their own information is kept confidential, and management ofdata protectionis a priority to the organizations who hold responsibility for anyPII. PETs allow users to take one or more of the following actions related to personal data that is sent to and used byonline service providers, merchants or other users (this control is known asself-determination). PETs aim to minimize personal data collected and used by service providers and merchants, usepseudonymsor anonymous data credentials to provide anonymity, and strive to achieve informed consent about giving personal data to online service providers and merchants.[3]In Privacy Negotiations, consumers and service providers establish, maintain, and refine privacy policies as individualized agreements through the ongoing choice among service alternatives, therefore providing the possibility to negotiate the terms and conditions of giving personal data to online service providers and merchants (data handling/privacy policy negotiation). 
Within private negotiations, the transaction partners may additionally bundle the personal information collection and processing schemes with monetary or non-monetary rewards.[4] PETs provide the possibility to remotely audit the enforcement of these terms and conditions at the online service providers and merchants (assurance), allow users to log, archive and look up past transfers of their personal data, including what data has been transferred, when, to whom and under what conditions, and facilitate the use of their legal rights of data inspection, correction and deletion. PETs also provide the opportunity for consumers or people who want privacy protection to hide their personal identities. The process involves masking one's personal information and replacing it with pseudo-data or an anonymous identity. Privacy-enhancing technologies can be distinguished based on their assumptions.[2] Soft privacy technologies are used where it can be assumed that a third party can be trusted for the processing of data. This model is based on compliance, consent, control and auditing.[2] Example technologies are access control, differential privacy, and tunnel encryption (SSL/TLS). One example of soft privacy technology is increased transparency and access. Transparency involves granting people sufficient details about the rationale used in automated decision-making processes. Additionally, the effort to grant users access is considered a soft privacy technology; individuals are usually unaware of their right of access, or they face difficulties in access, such as the lack of a clear automated process.[5] With hard privacy technologies, no single entity can violate the privacy of the user. The assumption here is that third parties cannot be trusted. Data protection goals include data minimization and the reduction of trust in third parties.[2] Examples of such technologies include onion routing, the secret ballot, and VPNs[6] used for democratic elections.
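Differential privacy, named above as a soft privacy technology, can be illustrated with a minimal sketch. The snippet below (all names are invented for illustration) perturbs a counting query with Laplace noise; since a counting query changes by at most 1 when one record is added or removed, Laplace(1/ε) noise yields ε-differential privacy.

```python
import math
import random

def laplace_noise(scale: float) -> float:
    # Inverse-CDF sampling of a Laplace(0, scale) distribution.
    u = random.random() - 0.5
    u = max(min(u, 0.4999999999), -0.4999999999)  # keep log() finite
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def dp_count(records, predicate, epsilon: float = 1.0) -> float:
    """Noisy count: a counting query has sensitivity 1, so adding
    Laplace(1/epsilon) noise gives epsilon-differential privacy."""
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_noise(1.0 / epsilon)

ages = [19, 24, 31, 27, 45, 22, 38]
noisy = dp_count(ages, lambda a: a >= 30, epsilon=0.5)
print(round(noisy, 2))  # close to the true count of 3, but perturbed
```

Smaller ε means more noise and stronger privacy; production systems also track a privacy budget across queries, which this sketch omits.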
PETs have evolved since their first appearance in the 1980s. At intervals, review articles have been published on the state of privacy technology: Examples of existing privacy-enhancing technologies are: General PET building blocks: PETs for privacy-preserving communication: PETs for privacy-preserving data processing are PETs that facilitate data processing or the production of statistics while preserving the privacy of the individuals providing the raw data, or of the specific raw data elements. Some examples include: PETs for privacy-preserving data analytics are a subset of the PETs used for data processing that are specifically designed for the publishing of statistical data. Some examples include: Examples of privacy-enhancing technologies that are being researched or developed include[20] limited disclosure technology, anonymous credentials, negotiation and enforcement of data handling conditions, and data transaction logs. Limited disclosure technology provides a way of protecting individuals' privacy by allowing them to share only enough personal information with service providers to complete an interaction or transaction. This technology is also designed to limit tracking and correlation of users' interactions with these third parties. Limited disclosure uses cryptographic techniques and allows users to retrieve data that is vetted by a provider, to transmit that data to a relying party, and to have these relying parties trust the authenticity and integrity of the data.[21] Anonymous credentials are asserted properties or rights of the credential holder that don't reveal the true identity of the holder; the only information revealed is what the holder of the credential is willing to disclose. The assertion can be issued by the user himself/herself, by the provider of the online service or by a third party (another service provider, a government agency, etc.). For example: online car rental.
The car rental agency doesn't need to know the true identity of the customer. It only needs to make sure that the customer is over 23 (as an example), that the customer has a driver's license, health insurance (e.g. for accidents), and that the customer is paying. Thus there is no real need to know the customer's name, address or any other personal information. Anonymous credentials allow both parties to be comfortable: they allow the customer to reveal only the data the car rental agency needs to provide its service (data minimization), and they allow the car rental agency to verify its requirements and get its money. When ordering a car online, the user, instead of providing the classical name, address and credit card number, provides the following credentials, all issued to pseudonyms (i.e. not to the real name of the customer): Negotiation and enforcement of data handling conditions. Before ordering a product or service online, the user and the online service provider or merchant negotiate the type of personal data that is to be transferred to the service provider. This includes the conditions that shall apply to the handling of the personal data, such as whether or not it may be sent to third parties (profile selling) and under what conditions (e.g. only while informing the user), or at what time in the future it shall be deleted (if at all). After the transfer of personal data has taken place, the agreed-upon data handling conditions are technically enforced by the infrastructure of the service provider, which is capable of managing and processing data handling obligations. Moreover, this enforcement can be remotely audited by the user, for example by verifying chains of certification based on Trusted Computing modules or by verifying privacy seals/labels that were issued by third-party auditing organizations (e.g. data protection agencies).
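The car-rental scenario can be sketched in miniature. This is only an illustration of selective disclosure under a pseudonym, not a real anonymous-credential scheme: it uses an HMAC as a stand-in for the issuer's attestation, so the verifier must share a key with the issuer, whereas real schemes use blind signatures or zero-knowledge proofs to keep presentations unlinkable. All names and the key are invented.

```python
import hashlib
import hmac
import json

ISSUER_KEY = b"demo-issuer-key"  # hypothetical shared key; real schemes
                                 # use blind signatures / ZK proofs instead

def attest(pseudonym: str, claim: str) -> str:
    """Issuer attests a single boolean claim for a pseudonym."""
    msg = json.dumps([pseudonym, claim]).encode()
    return hmac.new(ISSUER_KEY, msg, hashlib.sha256).hexdigest()

def verify(pseudonym: str, claim: str, token: str) -> bool:
    return hmac.compare_digest(token, attest(pseudonym, claim))

# The customer discloses only the three claims the agency needs,
# under a pseudonym - no name, address or card number is revealed.
needed = ["age_over_23", "has_drivers_license", "has_health_insurance"]
wallet = {claim: attest("pseudonym-7", claim) for claim in needed}
print(all(verify("pseudonym-7", c, t) for c, t in wallet.items()))  # True
```

The point of the sketch is data minimization: the verifier learns that each needed claim holds for the pseudonym, and nothing else about the holder.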
Thus, instead of having to rely on the mere promises of service providers not to abuse personal data, users will be more confident about the service provider adhering to the negotiated data handling conditions.[22] Lastly, the data transaction log allows users to log the personal data they send to service provider(s), the time at which they do it, and under what conditions. These logs are stored and allow users to determine what data they have sent to whom, or to establish the type of data that is in the possession of a specific service provider. This leads to more transparency, which is a prerequisite of being in control. PETs in general: Anonymous credentials: Privacy policy negotiation:
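A data transaction log as described above is straightforward to sketch: record each transfer with recipient, fields, conditions and timestamp, and query the log by recipient. Class and field names here are invented for illustration.

```python
import time

class DataTransactionLog:
    """Client-side record of personal data sent to service providers:
    what was sent, to whom, when, and under which conditions."""

    def __init__(self):
        self.entries = []

    def record(self, recipient, fields, conditions):
        self.entries.append({
            "recipient": recipient,
            "fields": sorted(fields),
            "conditions": conditions,
            "timestamp": time.time(),
        })

    def held_by(self, recipient):
        """All personal data items a given provider has received."""
        held = set()
        for entry in self.entries:
            if entry["recipient"] == recipient:
                held.update(entry["fields"])
        return sorted(held)

log = DataTransactionLog()
log.record("shop.example", ["name", "address"], {"delete_after_days": 30})
log.record("shop.example", ["email"], {"third_party_sharing": False})
print(log.held_by("shop.example"))  # ['address', 'email', 'name']
```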
https://en.wikipedia.org/wiki/Privacy-enhancing_technologies
Security convergence refers to the convergence of two historically distinct security functions – physical security and information security – within enterprises; both are integral parts of a coherent risk management program. Security convergence is motivated by the recognition that corporate assets are increasingly information-based. In the past, physical assets demanded the bulk of protection efforts, whereas information assets are demanding increasing attention. Although generally used in relation to cyber-physical convergence, security convergence can also refer to the convergence of security with related risk and resilience disciplines, including business continuity planning and emergency management. Security convergence is often referred to as 'converged security'. According to the United States Cybersecurity and Infrastructure Security Agency, security convergence is the "formal collaboration between previously disjointed security functions."[1] Survey participants in the ASIS Foundation study The State of Security Convergence in the United States, Europe, and India define security convergence as "getting security/risk management functions to work together seamlessly, closing the gaps and vulnerabilities that exist in the space between functions."[2] In his book Security Convergence: Managing Enterprise Security Risk, Dave Tyson defines security convergence as "the integration of the cumulative security resources of an organization in order to deliver enterprise-wide benefits through enhanced risk mitigation, increased operational effectiveness and efficiency, and cost savings."[3] The concept of security convergence has gained currency within the context of the Fourth Industrial Revolution, which, according to founder and Executive Chairman of the World Economic Forum (WEF) Klaus Schwab, "is characterised by a fusion of technologies that is blurring the lines between the physical, digital, and biological spheres."[4] Key results of this fusion include developments in cyber-physical systems (CPS) and the growth of the Internet of Things (IoT), which have seen a proliferation in the number and types of internet-connected physical objects. In 2017, Gartner predicted that there would be 20 billion internet-connected things by 2020.[5] Security convergence was endorsed as early as 2007 by three leading international organizations for security professionals – ASIS International, ISACA and ISSA – which together co-founded the Alliance for Enterprise Security Risk Management to, in part, promote the concept. In the context of the Internet of Things, cyber threats more readily translate into physical consequences, and physical security breaches can also extend an organisation's cyber threat surface. According to the United States Cybersecurity and Infrastructure Security Agency, "The adoption and integration of Internet of Things (IoT) and Industrial Internet of Things (IIoT) devices has led to an increasingly interconnected mesh of cyber-physical systems (CPS), which expands the attack surface and blurs the once clear functions of cybersecurity and physical security."[6] According to the WEF Global Risks Report 2020, "Operational technologies are at increased risk because cyberattacks could cause more traditional, kinetic impacts as technology is being extended into the physical world, creating a cyber-physical system".[7] According to the United States Department of Homeland Security, "The consequences of unintentional faults or malicious attacks [on cyber-physical systems] could have severe impact on human lives and the environment."[8] Notable examples of attacks on internet-connected facilities include the 2010 Stuxnet attack on Iran's Natanz nuclear facilities and the December 2015 Ukraine power grid cyberattack.
“Today’s threats are a result of hybrid and blended attacks utilizing Information Technology (IT), physical infrastructure, and Operational Technology (OT) as the enemy avenue of approach," notes former CISA Assistant Director for Infrastructure Security Brian Harrell. "Highlighting this future threat landscape will ensure better situational awareness and a more rapid response.”[9] Traditionally distinct, or 'siloed', approaches to physical security and cyber security are viewed by proponents of security convergence as unable to adequately protect an organisation from attacks involving both cyber and physical (cyber-physical) dimensions. The organisational aspect of security convergence focuses on the extent to which an organisation's internal structure is capable of adequately addressing converged security risks. According to the Cybersecurity and Infrastructure Security Agency, "physical security and cybersecurity divisions are often still treated as separate entities. When security leaders operate in these siloes, they lack a holistic view of security threats targeting their enterprise. As a result, attacks are more likely to occur".[1] "Many of the conventional physical and information security risks are viewed in isolation," states a PricewaterhouseCoopers document, Convergence of Security Risks.
"These risks may converge or overlap at specific points during the risk lifecycle, and as such, could become a blind spot to the organisation or individuals responsible for risk management."[10] In a survey of more than 1,000 senior physical security, cybersecurity, disaster management, and business continuity professionals, the ASIS Foundation study The State of Security Convergence in the United States, Europe, and India found that despite “years of predictions about the inevitability of security convergence, just 24 percent of respondents have converged their physical and cybersecurity functions.”[2] The survey also found that 96 percent of organisations that had converged two or more security functions reported positive results from convergence, with 72 percent reporting that convergence strengthened their overall security. Overall, 78 percent of those surveyed believed that convergence would strengthen their overall security function. Citing the work of Jay Wright Forrester on systems thinking, Optic Security Group CEO Jason Cherrington argues that a system of systems (SoS) approach provides a useful lens for understanding how security sub-groups within an organisation contribute to the organisation's overall security goals. "In an ideal SoS world, organisations would see their security as a collection of task-oriented or dedicated systems that pool their resources and capabilities together as part of an overall system offering more functionality and performance than the sum of its parts. Importantly, oversight of the overall system would ensure that any gaps between its component systems are identified and failures avoided."[11] The increasing prevalence of hybridised cyber-physical security threats has driven the parallel emergence of a range of converged security solutions that cover both cyber and physical domains.
According to Jason Cherrington, "in contemporary security threats we’re seeing a convergence of physical and digital vectors; and that protection against these hybridised threats requires a hybridised approach."[11] According to the United States Cybersecurity and Infrastructure Security Agency: "Organizations with converged cybersecurity and physical security functions are more resilient and better prepared to identify, prevent, mitigate, and respond to threats. Convergence also encourages information sharing and developing unified security policies across security divisions."[6]
https://en.wikipedia.org/wiki/Security_convergence
Security information management (SIM) is an information security industry term for the collection of data such as log files into a central repository for trend analysis.[1] SIM products generally are software agents running on the computer systems that are monitored. The recorded log information is then sent to a centralized server that acts as a "security console". The console typically displays reports, charts, and graphs of that information, often in real time. Some software agents can incorporate local filters to reduce and manipulate the data that they send to the server, although typically, from a forensic point of view, one would collect all audit and accounting logs to ensure a security incident can be recreated.[2] The security console is monitored by an administrator who reviews the consolidated information and takes action in response to any alerts issued.[3][4] The data sent to the server to be correlated and analyzed are normalized by the software agents into a common form, usually XML. Those data are then aggregated in order to reduce their overall size.[3][4] The terminology can easily be mistaken as a reference to the whole discipline of protecting one's infrastructure from any computer security breach. For historical reasons of terminology evolution, SIM refers to just the part of information security that consists of the discovery of 'bad behavior' or policy violations using data collection techniques.[5] The term commonly used to represent an entire security infrastructure that protects an environment is information security management (InfoSec). Security information management is also referred to as log management and is different from SEM (security event management), but makes up a portion of a SIEM (security information and event management) solution.[6]
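The normalization step described above can be sketched briefly: an agent parses a raw syslog-style line into a common XML record before shipping it to the security console. The regex and the XML field names below are illustrative assumptions, not any product's actual schema.

```python
import re
import xml.etree.ElementTree as ET

def normalize_auth_failure(raw: str) -> ET.Element:
    """Parse one syslog-style sshd line into a common XML event record.
    Field names are invented for illustration, not a product schema."""
    m = re.match(
        r"(\w{3} +\d+ [\d:]+) (\S+) sshd.*Failed password for (\S+) from (\S+)",
        raw,
    )
    event = ET.Element("event", type="auth_failure")
    for tag, value in zip(("timestamp", "host", "user", "source_ip"), m.groups()):
        ET.SubElement(event, tag).text = value
    return event

raw = "Mar  3 10:15:01 web01 sshd[812]: Failed password for root from 203.0.113.9 port 52211"
print(ET.tostring(normalize_auth_failure(raw), encoding="unicode"))
```

Once events from different vendors share one form, the server can correlate and aggregate them regardless of each product's native log format.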
https://en.wikipedia.org/wiki/Security_information_management
Security level management (SLM) comprises a quality assurance system for information system security. The aim of SLM is to display the information technology (IT) security status transparently across an organization at any time, and to make IT security a measurable quantity. Transparency and measurability are the prerequisites for improving IT security through continuous monitoring. SLM is oriented towards the phases of the Deming Cycle/Plan-Do-Check-Act (PDCA) cycle: within the scope of SLM, abstract security policies or compliance guidelines at a company are transposed into operative, measurable specifications for the IT security infrastructure. The operative aims form the security level to be reached. The security level is checked permanently against the current status of the security software used (malware scanner, update/patch management, vulnerability scanner, etc.). Deviations can be recognised at an early stage and adjustments made to the security software. In corporate contexts, SLM typically falls under the range of duties of the chief security officer (CSO), the chief information officer (CIO), or the chief information security officer (CISO), who report directly to an executive board on IT security and data availability. SLM is related to the disciplines of security information management (SIM) and security event management (SEM) (as well as their combined practice, security information and event management (SIEM)), which Gartner defines as follows: […] SIM provides reporting and analysis of data primarily from host systems and applications, and secondarily from security devices — to support security policy compliance management, internal threat management and regulatory compliance initiatives. SIM supports the monitoring and incident management activities of the IT security organization […]. SEM improves security incident response capabilities.
SEM processes near-real-time data from security devices, network devices and systems to provide real-time event management for security operations. […][1] SIM and SEM relate to the infrastructure for realising superordinate security aims, but do not describe a strategic management system with aims, measures, revisions and actions derived from them. SLM unites the requisite steps for realising a measurable, functioning IT security structure in a management control cycle. SLM can be categorised under the strategic panoply of IT governance, which, via suitable organisation structures and processes, ensures that IT supports corporate strategy and objectives. SLM allows CSOs, CIOs and CISOs to prove that SLM contributes towards adequately protecting electronic data relevant to processes, and therefore makes a contribution in part to IT governance. Defining the security level (Plan): Each company specifies security policies. It defines aims in relation to the integrity, confidentiality, availability and authority of classified data. In order to be able to verify compliance with these specifications, concrete objectives for the security software used in the company must be derived from the abstract security policies. A security level consists of a collection of measurable limiting and threshold values. Limits and thresholds must be defined separately for different system classes of the network, for example because the local IT infrastructure and other framework conditions must be taken into account. Overarching security policies therefore result in different operational objectives, such as: security-relevant software updates should be installed on all workstations in the network no later than 30 days after their release, and on certain server and host systems no later than 60 days after release.
The IT control manual Control Objectives for Information and Related Technologies (COBIT) provides companies with instructions on transposing subordinate, abstract aims into measurable objectives in a few steps. Collecting and analysing data (Do): Information on the current status of the systems in a network can be obtained from the log data and the status reports of the management consoles of the security software used. Monitoring solutions that analyse the security software of different vendors can simplify and accelerate data collection. Checking the security level (Check): SLM provides continual comparison of the defined security level with the actual values collected. Automated real-time comparison supplies companies with continuous monitoring of the security situation of the entire company network. Adjusting the security structure (Act): Efficient SLM allows trend analyses and long-term comparative assessments to be made. By continuously monitoring the security level, weak spots in the network can be identified at an early stage and proactive adjustments can be made to the security software to improve system protection. Besides defining the specifications for engineering, introducing, operating, monitoring, maintaining and improving a documented information security management system, ISO/IEC 27001 also defines the specifications for implementing suitable security mechanisms. ITIL, a collection of best practices for IT control processes, goes far beyond IT security. In relation to it, ITIL supplies criteria for how security officers can conceive of IT security as an independent, qualitatively measurable service and integrate it into the universe of business-process-oriented IT processes. ITIL also works from the top down with policies, processes, procedures and job-related instructions, and assumes that both superordinate and operative aims need to be planned, implemented, controlled, evaluated and adjusted.
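The Plan and Check phases above can be sketched with the patch-age objective given as an example (30 days for workstations, 60 for certain servers). The inventory data and names below are invented; a real implementation would pull this status from the security software's management consoles.

```python
# Security level (Plan): maximum patch age per system class, in days.
SECURITY_LEVEL = {"workstation": 30, "server": 60}

def check_security_level(inventory):
    """Check phase: compare collected patch status (Do) against the
    defined limits and report every deviation."""
    deviations = []
    for host, (system_class, patch_age_days) in sorted(inventory.items()):
        limit = SECURITY_LEVEL[system_class]
        if patch_age_days > limit:
            deviations.append((host, system_class, patch_age_days, limit))
    return deviations

inventory = {
    "ws-014": ("workstation", 12),
    "ws-102": ("workstation", 41),   # exceeds the 30-day workstation limit
    "srv-db1": ("server", 55),
}
print(check_security_level(inventory))  # [('ws-102', 'workstation', 41, 30)]
```

Each deviation then feeds the Act phase: adjusting the security software or processes for the affected system class.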
https://en.wikipedia.org/wiki/Security_level_management
The Security of Information Act (French: Loi sur la protection de l’information, R.S.C. 1985, c. O-5),[1] formerly known as the Official Secrets Act, is an Act of the Parliament of Canada that addresses national security concerns, including threats of espionage by foreign powers and terrorist organizations, and the intimidation or coercion of ethnocultural communities in and against Canada. Certain departments ('scheduled departments') and classes of people (past and current employees) are 'permanently bound to secrecy' under the Act. These are individuals who should be held to a higher level of accountability for unauthorized disclosures of information obtained in relation to their work, for example military intelligence, employees of the Canadian Security Intelligence Service (CSIS) and the Communications Security Establishment, and certain members of the Royal Canadian Mounted Police (RCMP). The Act applies to anyone who has been granted security clearance by the federal government, including those who have been granted Reliability Status for accessing designated information. Previously, only 'classified' information was protected under the Official Secrets Act 1981. On 13 January 2012, Jeffrey Delisle, a commissioned officer in the Royal Canadian Navy, was arrested and charged under the Security of Information Act. Delisle was accused of selling classified intelligence, including that of Canadian and other Five Eyes intelligence agencies, to the Russian Federation.[2] In October 2012, Delisle pleaded guilty to one count each of communicating safeguarded information and attempting to communicate safeguarded information, and one count of breach of trust under the Criminal Code.
Delisle was sentenced on 8 February 2013 to 20 years in prison and fined $111,817, the amount investigators claimed he received from Russia.[3] Delisle was granted day parole in August 2018 and subsequently released on full parole in March 2019, with the Parole Board of Canada considering Delisle at low risk to reoffend.[4] On 12 September 2019, Cameron Ortis, a senior intelligence official with the Royal Canadian Mounted Police, was arrested and subsequently charged with ten offences under the Security of Information Act and the Criminal Code.[5] Ortis was accused of leaking classified intelligence to individuals under investigation by Canadian and international authorities.[6] Following an eight-week trial, on 22 November 2023 Ortis was found guilty of four counts of intentionally and without authority communicating special operational information to unauthorized individuals. Ortis was also found guilty of one count each of fraudulently obtaining a computer service and breach of trust under the Criminal Code.[7] On 7 February 2024, Ortis was sentenced to 14 years in prison minus time served in custody.[8]
https://en.wikipedia.org/wiki/Security_of_Information_Act
Security service is a service, provided by a layer of communicating open systems, which ensures adequate security of the systems or of data transfers,[1] as defined by the ITU-T X.800 Recommendation. X.800 and ISO 7498-2 (Information processing systems – Open systems interconnection – Basic Reference Model – Part 2: Security architecture)[2] are technically aligned. This model is widely recognized.[3][4] A more general definition is in CNSS Instruction No. 4009, dated 26 April 2010, by the Committee on National Security Systems of the United States of America.[5] Another authoritative definition is in the W3C Web service Glossary,[6] adopted by NIST SP 800-95.[7] Information security and computer security are disciplines dealing with the requirements of confidentiality, integrity and availability (the so-called CIA triad) of the information assets of an organization (company or agency) or the information managed by computers, respectively. There are threats that can attack the resources (the information or the devices that manage it) by exploiting one or more vulnerabilities. The resources can be protected by one or more countermeasures, or security controls.[8] Security services thus implement part of the countermeasures, trying to achieve the security requirements of an organization.[3][9] In order to let different devices (computers, routers, cellular phones) communicate data in a standardized way, communication protocols have been defined. The ITU-T organization has published a large set of protocols. The general architecture of these protocols is defined in Recommendation X.200.[10] The different means (air, cables) and ways (protocols and protocol stacks) to communicate are called a communication network. Security requirements are applicable to the information sent over the network. The discipline dealing with security over a network is called network security.[11] The X.800 Recommendation[1] extends the field of application of Recommendation X.200 to cover secure communications between open systems.
According to Recommendation X.200, the so-called OSI Reference Model has 7 layers, each generically called the N layer. The N+1 entity asks for transmission services from the N entity.[10] At each level, two entities (N-entities) interact by means of the (N) protocol by transmitting protocol data units (PDU). A service data unit (SDU) is a specific unit of data that has been passed down from an OSI layer to a lower layer, and which has not yet been encapsulated into a PDU by the lower layer. It is a set of data that is sent by a user of the services of a given layer, and is transmitted semantically unchanged to a peer service user. The PDU at any given layer, layer 'n', is the SDU of the layer below, layer 'n-1'. In effect the SDU is the 'payload' of a given PDU. That is, the process of changing an SDU to a PDU consists of an encapsulation process, performed by the lower layer. All the data contained in the SDU becomes encapsulated within the PDU. Layer n-1 adds headers or footers, or both, to the SDU, transforming it into a PDU of layer n-1. The added headers or footers are part of the process used to make it possible to get data from a source to a destination.[10] The following are considered to be the security services which can be provided optionally within the framework of the OSI Reference Model. The authentication services require authentication information comprising locally stored information and data that is transferred (credentials) to facilitate the authentication:[1][4] The security services may be provided by means of security mechanisms:[1][3][4] Table 1/X.800 shows the relationships between services and mechanisms. Some of them can be applied to connection-oriented protocols, others to connectionless protocols, or both. Table 2/X.800 illustrates the relationship of security services and layers.[4] Managed security services (MSS) are network security services that have been outsourced to a service provider.
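The SDU-to-PDU encapsulation described above can be sketched as follows. The header format is invented purely for illustration; real protocols each define their own header and trailer layouts.

```python
def encapsulate(sdu: bytes, layer: int) -> bytes:
    """The lower layer wraps the SDU handed down to it in its own
    header and trailer, producing that layer's PDU. The textual
    header format here is invented for illustration only."""
    header = f"<L{layer}|len={len(sdu)}>".encode()
    trailer = f"</L{layer}>".encode()
    return header + sdu + trailer

# A payload passed down through three representative layers: each
# lower layer treats the PDU from above, unchanged, as its own SDU.
pdu = b"GET /index.html"
for layer in (7, 4, 3):
    pdu = encapsulate(pdu, layer)
print(pdu)
```

At each step the inner data is carried semantically unchanged; only the wrapping added by each layer makes delivery from source to destination possible.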
https://en.wikipedia.org/wiki/Security_service_(telecommunication)
Verification and validation (also abbreviated as V&V) are independent procedures that are used together for checking that a product, service, or system meets requirements and specifications and that it fulfills its intended purpose.[1] These are critical components of a quality management system such as ISO 9000. The words "verification" and "validation" are sometimes preceded with "independent", indicating that the verification and validation is to be performed by a disinterested third party. "Independent verification and validation" can be abbreviated as "IV&V". In practice, as quality management terms, the definitions of verification and validation can be inconsistent; sometimes they are even used interchangeably.[2][3][4] However, the PMBOK Guide, a standard adopted by the Institute of Electrical and Electronics Engineers (IEEE), defines them as follows in its 4th edition:[5] Similarly, for a medical device, the FDA (21 CFR) defines validation and verification as procedures that ensure that the device fulfils its intended purpose. ISO 9001:2015 (Quality management systems requirements) makes the following distinction between the two activities when describing design and development controls:[6] It also notes that verification and validation have distinct purposes but can be conducted separately or in any combination, as is suitable for the products and services of the organization. Verification is intended to check that a product, service, or system meets a set of design specifications.[7][8] In the development phase, verification procedures involve performing special tests to model or simulate a portion, or the entirety, of a product, service, or system, then performing a review or analysis of the modeling results.
In the post-development phase, verification procedures involve regularly repeating tests devised specifically to ensure that the product, service, or system continues to meet the initial design requirements, specifications, and regulations as time progresses.[8][9] It is a process that is used to evaluate whether a product, service, or system complies with regulations, specifications, or conditions imposed at the start of a development phase. Verification can be in development, scale-up, or production. This is often an internal process.[citation needed] Validation is intended to ensure that a product, service, or system (or portion thereof, or set thereof) meets the operational needs of the user.[8][10] For a new development flow or verification flow, validation procedures may involve modeling either flow and using simulations to predict faults or gaps that might lead to invalid or incomplete verification or development of a product, service, or system (or portion thereof, or set thereof).[11] A set of validation requirements (as defined by the user), specifications, and regulations may then be used as a basis for qualifying a development flow or verification flow for a product, service, or system (or portion thereof, or set thereof). Additional validation procedures also include those that are designed specifically to ensure that modifications made to an existing qualified development flow or verification flow will have the effect of producing a product, service, or system (or portion thereof, or set thereof) that meets the initial design requirements, specifications, and regulations; these validations help to keep the flow qualified.[citation needed] It is a process of establishing evidence that provides a high degree of assurance that a product, service, or system accomplishes its intended requirements.
This often involves acceptance of fitness for purpose with end users and other product stakeholders. This is often an external process.[citation needed] It is sometimes said that validation can be expressed by the query "Are you building the right thing?"[12] and verification by "Are you building it right?".[12] "Building the right thing" refers back to the user's needs, while "building it right" checks that the specifications are correctly implemented by the system. In some contexts, it is required to have written requirements for both, as well as formal procedures or protocols for determining compliance.[citation needed] It is entirely possible that a product passes when verified but fails when validated. This can happen when, say, a product is built as per the specifications but the specifications themselves fail to address the user's needs.[citation needed] Verification of machinery and equipment usually consists of design qualification (DQ), installation qualification (IQ), operational qualification (OQ), and performance qualification (PQ). DQ may be performed by a vendor or by the user, by confirming through review and testing that the equipment meets the written acquisition specification. If the relevant documents or manuals for the machinery/equipment are provided by the vendor, the latter three qualifications (IQ, OQ, and PQ) need to be performed thoroughly by users who work in an industrial regulatory environment. Otherwise, the process of IQ, OQ and PQ is the task of validation. A typical example of such a case is the loss or absence of vendor documentation for legacy equipment or do-it-yourself (DIY) assemblies (e.g., cars, computers, etc.); users should therefore endeavour to acquire the DQ document beforehand.
Templates for DQ, IQ, OQ and PQ can usually each be found on the internet, whereas DIY qualification of machinery/equipment can be assisted either by the vendor's training course materials and tutorials, or by published guidance books such as step-by-step series, if the acquisition of the machinery/equipment is not bundled with on-site qualification services. This kind of DIY approach is also applicable to the qualification of software, computer operating systems and manufacturing processes. The most important and critical task, as the last step of the activity, is to generate and archive machinery/equipment qualification reports for auditing purposes, if regulatory compliance is mandatory.[citation needed] Qualification of machinery/equipment is venue dependent, in particular for items that are shock sensitive and require balancing or calibration, and re-qualification needs to be conducted once the objects are relocated. The full scope of some equipment qualifications is even time dependent as consumables are used up (i.e. filters) or springs stretch out, requiring recalibration, and hence re-certification is necessary when a specified due time lapses.[13][14] Re-qualification of machinery/equipment should also be conducted when replacement of parts, coupling with another device, or installation of new application software and restructuring of the computer has been necessary, especially where this affects pre-settings such as the BIOS, registry, disk drive partition table, dynamically-linked (shared) libraries, or an ini file. In such a situation, the specifications of the parts/devices/software and the restructuring proposals should be appended to the qualification document, whether the parts/devices/software are genuine or not.
Torres and Hyman have discussed the suitability of non-genuine parts for clinical use and provided guidelines for equipment users to select appropriate substitutes that are capable of avoiding adverse effects.[15] In cases where genuine parts/devices/software are demanded by certain regulatory requirements, re-qualification does not need to be conducted on the non-genuine assemblies; instead, the asset has to be recycled for non-regulatory purposes.[citation needed] When machinery/equipment qualification is conducted by a standards-endorsed third party, such as an ISO-standard accredited company for a particular division, the process is called certification.[16][17] Currently, the coverage of ISO/IEC 15408 certification by ISO/IEC 27001 accredited organizations is limited; the scheme requires a fair amount of effort to become popular. Validation work can generally be categorized by the following functions. The attributes most often tested in validation tasks include, but are not limited to, the following. For example, in an HPLC purity analysis of a drug substance, a standard material of the highest purity would be run before the test samples. The parameters analyzed might be (for example) the % RSD of area counts for triplicate injections, or chromatographic parameters checked such as retention time. The HPLC run would be considered valid if the system suitability test passes, ensuring that the subsequent data collected for the unknown analytes are valid. For a longer HPLC run of over 20 samples, an additional system suitability standard (called a "check standard") might be run at the end of, or interspersed in, the HPLC run and would be included in the statistical analysis. If all system suitability standards pass, this ensures all samples yield acceptable data throughout the run, and not just at the beginning. All system suitability standards must be passed to accept the run.
In a broad sense, system suitability testing usually includes a test of ruggedness among inter-laboratory collaborators, or a test of robustness within an organization.[45][46][47] However, the U.S. Food and Drug Administration (FDA) has specifically defined it for its administration as follows: "System suitability testing is an integral part of many analytical procedures. The tests are based on the concept that the equipment, electronics, analytical operations and samples to be analyzed constitute an integral system that can be evaluated as such. System suitability test parameters to be established for a particular procedure depend on the type of procedure being validated".[48] In some areas of analytical chemistry, a system suitability test can be method-specific rather than universal; examples include chromatographic analyses, which are usually sensitive to the media used (column, paper or mobile solvent).[49][50][51] To date, however, this kind of approach has been limited to certain pharmaceutical compendial methods, in which the detection of impurities, or the quality of the substance analyzed, is critical (i.e., a matter of life and death). This is probably largely due to several practical factors. These terms generally apply broadly across industries and institutions. In addition, they may have very specific meanings and requirements for specific products, regulations, and industries. Some examples:
https://en.wikipedia.org/wiki/Verification_and_validation
Computation of a cyclic redundancy check is derived from the mathematics of polynomial division, modulo two. In practice, it resembles long division of the binary message string, with a fixed number of zeroes appended, by the "generator polynomial" string, except that exclusive-or operations replace subtractions. Division of this type is efficiently realised in hardware by a modified shift register,[1] and in software by a series of equivalent algorithms, starting with simple code close to the mathematics and becoming faster (and arguably more obfuscated[2]) through byte-wise parallelism and space–time tradeoffs. Various CRC standards extend the polynomial division algorithm by specifying an initial shift register value, a final exclusive-or step and, most critically, a bit ordering (endianness). As a result, the code seen in practice deviates confusingly from "pure" division,[2] and the register may shift left or right. As an example of implementing polynomial division in hardware, suppose that we are trying to compute an 8-bit CRC of an 8-bit message made of the ASCII character "W", which is binary 01010111, decimal 87, or hexadecimal 57. For illustration, we will use the CRC-8-ATM (HEC) polynomial x^8 + x^2 + x + 1. Writing the first bit transmitted (the coefficient of the highest power of x) on the left, this corresponds to the 9-bit string "100000111". The byte value 57 (hex) can be transmitted in two different orders, depending on the bit ordering convention used. Each one generates a different message polynomial M(x). Msbit-first, this is x^6 + x^4 + x^2 + x + 1 = 01010111, while lsbit-first, it is x^7 + x^6 + x^5 + x^3 + x = 11101010. These can then be multiplied by x^8 to produce two 16-bit message polynomials x^8 M(x). Computing the remainder then consists of subtracting multiples of the generator polynomial G(x).
This is just like decimal long division, but even simpler because the only possible multiples at each step are 0 and 1, and the subtractions borrow "from infinity" instead of reducing the upper digits. Because we do not care about the quotient, there is no need to record it. Observe that after each subtraction, the bits are divided into three groups: at the beginning, a group which is all zero; at the end, a group which is unchanged from the original; and a shaded group in the middle which is "interesting". The "interesting" group is 8 bits long, matching the degree of the polynomial. At every step, the appropriate multiple of the polynomial is subtracted to make the zero group one bit longer, and the unchanged group becomes one bit shorter, until only the final remainder is left. In the msbit-first example, the remainder polynomial is x^7 + x^5 + x. Converting to a hexadecimal number using the convention that the highest power of x is the msbit, this is A2 (hex). In the lsbit-first example, the remainder is x^7 + x^4 + x^3. Converting to hexadecimal using the convention that the highest power of x is the lsbit, this is 19 (hex). Writing out the full message at each step, as done in the example above, is very tedious. Efficient implementations use an n-bit shift register to hold only the interesting bits. Multiplying the polynomial by x is equivalent to shifting the register by one place, as the coefficients do not change in value but only move up to the next term of the polynomial. Here is a first draft of some pseudocode for computing an n-bit CRC. It uses a contrived composite data type for polynomials, where x is not an integer variable, but a constructor generating a Polynomial object that can be added, multiplied and exponentiated. To xor two polynomials is to add them, modulo two; that is, to exclusive-OR the coefficients of each matching term from both polynomials.
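Both remainders above can be checked with a short sketch of the division itself (Python chosen here for illustration; the helper name poly_mod is our own):

```python
def poly_mod(dividend: int, divisor: int) -> int:
    """Remainder of GF(2) polynomial division; polynomials are packed into
    integers with the coefficient of the highest power of x as the msbit."""
    while dividend.bit_length() >= divisor.bit_length():
        shift = dividend.bit_length() - divisor.bit_length()
        dividend ^= divisor << shift   # subtract (xor) a multiple of G(x)
    return dividend

G = 0b100000111                        # x^8 + x^2 + x + 1 (CRC-8-ATM)

print(hex(poly_mod(0b01010111 << 8, G)))  # 0xa2: the msbit-first remainder
# The lsbit-first message gives remainder x^7 + x^4 + x^3 = 10011000 in this
# packing; read with the highest power of x as the lsbit, that pattern is 0x19.
print(hex(poly_mod(0b11101010 << 8, G)))  # 0x98 (bit-reversal of 0x19)
```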
Note that this example code avoids the need to specify a bit-ordering convention by not using bytes; the input bitString is already in the form of a bit array, and the remainderPolynomial is manipulated in terms of polynomial operations; the multiplication by x could be a left or right shift, and the addition of bitString[i+n] is done to the x^0 coefficient, which could be the right or left end of the register. This code has two disadvantages. First, it actually requires an n+1-bit register to hold the remainderPolynomial so that the x^n coefficient can be tested. More significantly, it requires the bitString to be padded with n zero bits. The first problem can be solved by testing the x^(n−1) coefficient of the remainderPolynomial before it is multiplied by x. The second problem could be solved by doing the last n iterations differently, but there is a more subtle optimization which is used universally, in both hardware and software implementations. Because the XOR operation used to subtract the generator polynomial from the message is commutative and associative, it does not matter in what order the various inputs are combined into the remainderPolynomial. And specifically, a given bit of the bitString does not need to be added to the remainderPolynomial until the very last instant when it is tested to determine whether to xor with the generatorPolynomial. This eliminates the need to preload the remainderPolynomial with the first n bits of the message, as well: This is the standard bit-at-a-time hardware CRC implementation, and is well worthy of study; once you understand why this computes exactly the same result as the first version, the remaining optimizations are quite straightforward. If the remainderPolynomial is only n bits long, then the x^n coefficients of it and of the generatorPolynomial are simply discarded.
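A sketch of this bit-at-a-time form (Python standing in for the article's elided pseudocode; the name crc_remainder is our own) shows both fixes at once: the register stays n bits wide, and the message needs no zero-padding:

```python
def crc_remainder(bits, poly_low_bits):
    """Bit-at-a-time CRC. `bits` is the message, msbit-first; `poly_low_bits`
    holds the generator's coefficients below x^n, also msbit-first."""
    n = len(poly_low_bits)
    reg = [0] * n
    for b in bits:
        top = reg[0] ^ b               # message bit folded in at the last instant
        reg = reg[1:] + [0]            # multiply the remainder by x
        if top:                        # the x^n coefficient was 1: subtract G(x)
            reg = [r ^ p for r, p in zip(reg, poly_low_bits)]
    return reg

# "W" = 01010111 against CRC-8-ATM (x^8 + x^2 + x + 1, low coefficients 00000111)
w = [0, 1, 0, 1, 0, 1, 1, 1]
print(crc_remainder(w, [0, 0, 0, 0, 0, 1, 1, 1]))  # [1,0,1,0,0,0,1,0] = 0xA2
```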
This is the reason that you will usually see CRC polynomials written in binary with the leading coefficient omitted. In software, it is convenient to note that while onemaydelay thexorof each bit until the very last moment, it is also possible to do it earlier. It is usually convenient to perform thexorabyteat a time, even in a bit-at-a-time implementation. Here, we take the input in 8-bit bytes: This is usually the most compact software implementation, used inmicrocontrollerswhen space is at a premium over speed. When implemented inbit serialhardware, the generator polynomial uniquely describes the bit assignment; the first bit transmitted is always the coefficient of the highest power ofx{\displaystyle x}, and the lastn{\displaystyle n}bits transmitted are the CRC remainderR(x){\displaystyle R(x)}, starting with the coefficient ofxn−1{\displaystyle x^{n-1}}and ending with the coefficient ofx0{\displaystyle x^{0}}, a.k.a. the coefficient of 1. However, when bits are processed abyteat a time, such as when usingparallel transmission, byte framing as in8B/10B encodingorRS-232-styleasynchronous serial communication, or when implementing a CRC insoftware, it is necessary to specify the bit ordering (endianness) of the data; which bit in each byte is considered "first" and will be the coefficient of the higher power ofx{\displaystyle x}. If the data is destined forserial communication, it is best to use the bit ordering the data will ultimately be sent in. This is because a CRC's ability to detectburst errorsis based on proximity in the message polynomialM(x){\displaystyle M(x)}; if adjacent polynomial terms are not transmitted sequentially, a physical error burst of one length may be seen as a longer burst due to the rearrangement of bits. 
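A minimal sketch of this xor-a-byte-at-a-time variant (Python for illustration; crc8_atm is our own name), again using the CRC-8-ATM polynomial from the worked example:

```python
def crc8_atm(data: bytes) -> int:
    """Compact msbit-first software CRC: xor a whole input byte into the
    register, then run eight division steps (CRC-8-ATM, x^8 + x^2 + x + 1)."""
    reg = 0
    for byte in data:
        reg ^= byte                    # fold in 8 message bits at once
        for _ in range(8):
            if reg & 0x80:             # x^7 term about to overflow into x^8
                reg = ((reg << 1) ^ 0x07) & 0xFF   # shift and subtract G(x)
            else:
                reg = (reg << 1) & 0xFF
    return reg

print(hex(crc8_atm(b"W")))  # 0xa2, agreeing with the worked example
```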
For example, bothIEEE 802(ethernet) andRS-232(serial port) standards specify least-significant bit first (little-endian) transmission, so a software CRC implementation to protect data sent across such a link should map the least significant bits in each byte to coefficients of the highest powers ofx{\displaystyle x}. On the other hand,floppy disksand mosthard driveswrite the most significant bit of each byte first. The lsbit-first CRC is slightly simpler to implement in software, so is somewhat more commonly seen, but many programmers find the msbit-first bit ordering easier to follow. Thus, for example, theXMODEM-CRC extension, an early use of CRCs in software, uses an msbit-first CRC. So far, the pseudocode has avoided specifying the ordering of bits within bytes by describing shifts in the pseudocode as multiplications byx{\displaystyle x}and writing explicit conversions from binary to polynomial form. In practice, the CRC is held in a standard binary register using a particular bit-ordering convention. In msbit-first form, the most significant binary bits will be sent first and so contain the higher-order polynomial coefficients, while in lsbit-first form, the least-significant binary bits contain the higher-order coefficients. The above pseudocode can be written in both forms. For concreteness, this uses the 16-bit CRC-16-CCITTpolynomialx16+x12+x5+1{\displaystyle x^{16}+x^{12}+x^{5}+1}: Note that the lsbit-first form avoids the need to shiftstring[i]before thexor. In either case, be sure to transmit the bytes of the CRC in the order that matches your chosen bit-ordering convention. Faster software implementations process more than one bit of dividend per iteration usinglookup tables, indexed by highest order coefficients ofrem, tomemoizethe per-bit division steps. 
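As a concrete sketch (Python; function name ours), here is the msbit-first form for the CRC-16-CCITT polynomial, together with the transmit-order point: when the CRC is appended high byte first, matching the msbit-first convention, re-running the same division over message plus CRC yields zero:

```python
def crc16_ccitt(data: bytes) -> int:
    """msbit-first CRC-16-CCITT, polynomial x^16 + x^12 + x^5 + 1 (0x1021),
    zero preset and no final xor (the XMODEM variant of this polynomial)."""
    reg = 0
    for byte in data:
        reg ^= byte << 8               # fold the byte into the top of the register
        for _ in range(8):
            if reg & 0x8000:
                reg = ((reg << 1) ^ 0x1021) & 0xFFFF
            else:
                reg = (reg << 1) & 0xFFFF
    return reg

msg = b"123456789"
crc = crc16_ccitt(msg)
print(hex(crc))                        # 0x31c3, the XMODEM check value
# CRC appended high byte first, matching the msbit-first bit ordering:
print(crc16_ccitt(msg + bytes([crc >> 8, crc & 0xFF])))  # 0
```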
The most common technique uses a 256-entry lookup table, to process 8 bits of input per iteration.[3]This replaces the body of the outer loop (overi) with: Using a 256-entry table is usually most convenient, but other sizes can be used. In small microcontrollers, using a 16-entry table to process four bits at a time gives a useful speed improvement while keeping the table small. On computers with ample storage, a65536-entry table can be used to process 16 bits at a time. The software to generate the lookup table is so small and fast that it is usually faster to compute them on program startup than to load precomputed tables from storage. One popular technique is to use the bit-at-a-time code 256 times to generate the CRCs of the 256 possible 8-bit bytes.[4]However, this can be optimized significantly by taking advantage of the property thattable[ixorj] == table[i]xortable[j]. Only the table entries corresponding to powers of two need to be computed directly. In the following example code,crcholds the value oftable[i]: In these code samples, the table indexi + jis equivalent toixorj; you may use whichever form is more convenient. One of the most commonly encountered CRC polynomials is known asCRC-32, used by (among others)Ethernet,FDDI,ZIPand otherarchive formats, andPNGimage format. Its polynomial can be written msbit-first as 0x04C11DB7, or lsbit-first as 0xEDB88320. This is a practical example for the CRC-32 variant of CRC.[5] An alternate source is the W3C webpage on PNG, which includes an appendix with a short and simple table-driven implementation in C of CRC-32.[4]You will note that the code corresponds to the lsbit-first byte-at-a-time algorithm presented here, and the table is generated using the bit-at-a-time code. In C, the algorithm looks like: There exists a slice-by-n(typically slice-by-8 for CRC32) algorithm that usually doubles or triples the performance compared to the Sarwate algorithm. 
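The powers-of-two optimization can be sketched as follows (Python, lsbit-first CRC-32; the variable names are ours). Each pass of the outer loop derives one power-of-two entry with a single division step and fills in the remaining entries by XOR, using table[i ^ j] == table[i] ^ table[j]:

```python
POLY = 0xEDB88320  # CRC-32 polynomial, lsbit-first form

table = [0] * 256
crc = POLY           # table[0x80]: a single set bit after eight division steps
i = 0x80
while i:
    for j in range(0, 256, i << 1):      # j ranges over already-filled entries
        table[i ^ j] = crc ^ table[j]
    crc = (crc >> 1) ^ (POLY if crc & 1 else 0)   # next power of two downward
    i >>= 1

def crc32(data: bytes) -> int:
    """Byte-at-a-time table-driven CRC-32 with the standard conventions:
    all-ones preset and final inversion."""
    crc = 0xFFFFFFFF
    for byte in data:
        crc = (crc >> 8) ^ table[(crc ^ byte) & 0xFF]
    return crc ^ 0xFFFFFFFF

print(hex(crc32(b"123456789")))  # 0xcbf43926, the standard CRC-32 check value
```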
Instead of reading 8 bits at a time, the algorithm reads 8nbits at a time. Doing so maximizes performance onsuperscalarprocessors.[6][7][8][9] It is unclear who actually invented the algorithm.[10] To understand the advantages, start with the slice-by-2 case. We wish to compute a CRC two bytes (16 bits) at a time, but the standard table-based approach would require an inconveniently large 65536-entry table. As mentioned in§ Generating the lookup table, CRC tables have the property thattable[ixorj] = table[i]xortable[j]. We can use this identity to replace the large table by two 256-entry tables:table[i + 256 × j] = table_low[i]xortable_high[j]. So the large table is not stored explicitly, but each iteration computes the CRC value that would be there by combining the values in two smaller tables. That is, the 16-bit index is "sliced" into two 8-bit indexes. At first glance, this seems pointless; why do two lookups in separate tables, when the standard byte-at-a-time algorithm would do two lookups in thesametable? The difference isinstruction-level parallelism. In the standard algorithm, the index for each lookup depends on the value fetched in the previous one. Thus, the second lookup cannot begin until the first lookup is complete. When sliced tables are used, both lookups can begin at the same time. If the processor can perform two loads in parallel (2020s microprocessors can keep track of over 100 loads in progress), then this has the potential to double the speed of the inner loop. This technique can obviously be extended to as many slices as the processor can benefit from. When the slicing widthequalsthe CRC size, there is a minor speedup. In the part of the basic Sarwate algorithm where the previous CRC value is shifted by the size of the table lookup, the previous CRC value is shifted away entirely (what remains is all zero), so the XOR can be eliminated from the critical path. 
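A slice-by-2 sketch for the lsbit-first CRC-32 (Python; the table names t0 and t1 are ours) makes the two independent lookups explicit. t0 advances a byte by 8 bits, t1 by 16, and the two indexes in the inner loop depend only on the current register value, so they can be fetched in parallel:

```python
import zlib

POLY = 0xEDB88320

def step8(c: int) -> int:
    for _ in range(8):
        c = (c >> 1) ^ (POLY if c & 1 else 0)
    return c

t0 = [step8(i) for i in range(256)]                  # advance a byte by 8 bits
t1 = [(t0[i] >> 8) ^ t0[t0[i] & 0xFF] for i in range(256)]   # ... by 16 bits

def crc32_slice2(data: bytes) -> int:
    crc = 0xFFFFFFFF
    n = len(data) & ~1
    for k in range(0, n, 2):
        crc ^= data[k] | (data[k + 1] << 8)
        # Disjoint tables, independent indexes: both loads can issue together.
        crc = (crc >> 16) ^ t1[crc & 0xFF] ^ t0[(crc >> 8) & 0xFF]
    for byte in data[n:]:                            # odd tail, byte at a time
        crc = (crc >> 8) ^ t0[(crc ^ byte) & 0xFF]
    return crc ^ 0xFFFFFFFF

print(crc32_slice2(b"123456789") == zlib.crc32(b"123456789"))  # True
```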
The resultant slice-by-ninner loop consists of: This still has the property that all of the loads in the second step must be completed before the next iteration can commence, resulting in regular pauses during which the processor's memory subsystem (in particular, the data cache) is unused. However, when the slicing widthexceedsthe CRC size, a significant second speedup appears. This is because a portion of the results of the first stepno longer dependon any previous iteration. When XORing a 32-bit CRC with 64 bits of message, half of the result is simply a copy of the message. If coded carefully (to avoid creating a falsedata dependency), half of the slice table loads can beginbeforethe previous loop iteration has completed. The result is enough work to keep the processor's memory subsystemcontinuouslybusy, which achieves maximum performance. As mentioned, on post-2000 microprocessors, slice-by-8 is generally sufficient to reach this level. There is no particular need for the slices to be 8 bits wide. For example, it would be entirely possible to compute a CRC 64 bits at a time using a slice-by-9 algorithm, using 9 128-entry lookup tables to handle 63 bits, and the 64th bit handled by the bit-at-a-time algorithm (which is effectively a 1-bit, 2-entry lookup table). This would almost halve the table size (going from 8×256 = 2048 entries to 9×128 = 1152) at the expense of one more data-dependent load per iteration. The tables for slicing computation are a simple extension of the table for the basic Sarwate algorithm. The loop for the 256 entries of each table is identical, but beginning with thecrcvalue left over from the previous table. Parallel update for a byte or a word at a time can also be done explicitly, without a table.[11]For each bit an equation is solved after 8 bits have been shifted in. Multiple reduction steps are normally expressed as a matrix operation. 
One shift and reduction modulo a degree-n generator polynomial G(x) is equivalent to multiplication by the n×n companion matrix A = C(G); r steps are written as the matrix A^r = C(G)^r. This technique is normally used in high-speed hardware implementations, but is practical in software for small or sparse CRC polynomials.[12] For large, dense CRC polynomials, the code becomes impractically long. The following tables list the equations processing 8 bits at a time modulo some commonly used polynomials, using the following symbols: For dense polynomials, such as the CRC-32 polynomial, computing the remainder a byte at a time produces equations where each bit depends on up to 8 bits of the previous iteration. In byte-parallel hardware implementations this calls for either 8-input or cascaded XOR gates, which have substantial gate delay. To maximise computation speed, an intermediate remainder can be calculated by first computing the CRC of the message modulo a sparse polynomial which is a multiple of the CRC polynomial. For CRC-32, the polynomial x^123 + x^111 + x^92 + x^84 + x^64 + x^46 + x^23 + 1 has the property that its terms (feedback taps) are at least 8 positions apart. Thus, a 123-bit shift register can be advanced 8 bits per iteration using only two-input XOR gates, the fastest possible. Finally the intermediate remainder can be reduced modulo the standard polynomial in a second, slower shift register (once per CRC, rather than once per input byte) to yield the CRC-32 remainder.[13] If 3- or 4-input XOR gates are permitted, shorter intermediate polynomials of degree 71 or 53, respectively, can be used. The preceding technique works, but requires a large intermediate shift register. A more hardware-efficient technique, which has been used for high-speed networking since c. 2000, is state-space transformation.
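The companion-matrix view can be sketched in software (Python over GF(2), with matrices stored column-wise as integers; all names here are ours). One division step of the lsbit-first CRC-32 is the matrix A, and squaring it three times gives A^8, the 8-bits-at-a-time update, which must agree with the byte lookup table:

```python
POLY = 0xEDB88320  # CRC-32 polynomial, lsbit-first form
N = 32

# Column i of the matrix is the image of basis vector 2**i under one
# reflected division step c -> (c >> 1) ^ (POLY if c & 1 else 0).
A = [POLY] + [1 << (i - 1) for i in range(1, N)]   # companion matrix C(G)

def mat_vec(mat, vec):
    """Multiply a GF(2) matrix (list of column masks) by a bit vector."""
    out, i = 0, 0
    while vec:
        if vec & 1:
            out ^= mat[i]
        vec >>= 1
        i += 1
    return out

def mat_mat(a, b):
    """Matrix product: each column of the result is a applied to a column of b."""
    return [mat_vec(a, col) for col in b]

A2 = mat_mat(A, A)
A4 = mat_mat(A2, A2)
A8 = mat_mat(A4, A4)       # eight division steps as one matrix

def step8(c):
    for _ in range(8):
        c = (c >> 1) ^ (POLY if c & 1 else 0)
    return c

print(all(mat_vec(A8, b) == step8(b) for b in range(256)))  # True
```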
The inner loop of an r-bit-at-a-time CRC engine repeatedly updates the intermediate remainder y to reflect an r-bit portion of the message y_i using: The implementation challenge is that the matrix multiplication by A^r must be performed in r bit times. In general, as r increases, so does the complexity of this multiplication, resulting in a maximum speedup of about r/2.[14] To improve on this, first break up the equation using distributivity into: Then, find an invertible matrix T and perform a change of basis, multiplying the intermediate state by T^(−1). The iteration becomes: The final CRC is recovered as y = T(T^(−1)y). Importantly, the input multiplication by T^(−1)A^r and the output multiplication by T are not time-critical, as they can be pipelined to whatever depth is required to meet the performance target. Only the central multiplication by T^(−1)A^rT must be completed within r bit times. It is possible to find a transformation matrix T which gives T^(−1)A^rT the form of a companion matrix; in other words, the central multiplication can then be implemented using the same (fast) 2-input XOR gates as the bit-at-a-time algorithm.[15][16] This allows an r-bit parallel CRC to operate r times as fast as a 1-bit serial implementation.
There are many possiblen×n{\displaystyle n\times n}transformation matrices with this property, so it is possible to choose one which also minimizes the complexity of the input and output matricesT−1Ar{\displaystyle T^{-1}A^{r}}andT{\displaystyle T}.[16] Block-wise computation of the remainder can be performed in hardware for any CRC polynomial by factorizing the state space transformation matrix needed to compute the remainder into two simpler Toeplitz matrices.[17] When appending a CRC to a message, it is possible to detach the transmitted CRC, recompute it, and verify the recomputed value against the transmitted one. However, a simpler technique is commonly used in hardware. When the CRC is transmitted with the correct byte order (matching the chosen bit-ordering convention), a receiver can compute an overall CRC, over the messageandthe CRC, and if they are correct, the result will be zero.[18]This possibility is the reason that most network protocols which include a CRC do sobeforethe ending delimiter; it is not necessary to know whether the end of the packet is imminent to check the CRC. In fact, a few protocols use the CRCasthe message delimiter, a technique calledCRC-based framing. (This requires multiple frames to detect acquisition or loss of framing, so is limited to applications where the frames are a known length, and the frame contents are sufficiently random that valid CRCs in misaligned data are rare.) In practice, most standards specify presetting the register to all-ones and inverting the CRC before transmission. This has no effect on the ability of a CRC to detect changed bits, but gives it the ability to notice bits that are added to the message. The basic mathematics of a CRC accepts (considers as correctly transmitted) messages which, when interpreted as a polynomial, are a multiple of the CRC polynomial. If some leading 0 bits are prepended to such a message, they will not change its interpretation as a polynomial. 
This is equivalent to the fact that 0001 and 1 are the same number. But if the message being transmitted does care about leading 0 bits, the inability of the basic CRC algorithm to detect such a change is undesirable. If it is possible that a transmission error could add such bits, a simple solution is to start with theremshift register set to some non-zero value; for convenience, the all-ones value is typically used. This is mathematically equivalent to complementing (binary NOT) the firstnbits of the message, wherenis the number of bits in the CRC register. This does not affect CRC generation and checking in any way, as long as both generator and checker use the same initial value. Any non-zero initial value will do, and a few standards specify unusual values,[19]but the all-ones value (−1 in twos complement binary) is by far the most common. Note that a one-pass CRC generate/check will still produce a result of zero when the message is correct, regardless of the preset value. The same sort of error can occur at the end of a message, albeit with a more limited set of messages. Appending 0 bits to a message is equivalent to multiplying its polynomial byx, and if it was previously a multiple of the CRC polynomial, the result of that multiplication will be, as well. This is equivalent to the fact that, since 726 is a multiple of 11, so is 7260. A similar solution can be applied at the end of the message, inverting the CRC register before it is appended to the message. Again, any non-zero change will do; inverting all the bits (XORing with an all-ones pattern) is simply the most common. This has an effect on one-pass CRC checking: instead of producing a result of zero when the message is correct, it produces a fixed non-zero result. (To be precise, the result is the CRC, with zero preset but with post-invert, of the inversion pattern.) Once this constant has been obtained (e.g. 
by performing a one-pass CRC generate/check on an arbitrary message), it can be used directly to verify the correctness of any other message checked using the same CRC algorithm.
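This constant can be observed with the standard library's CRC-32 (Python's zlib, which follows the all-ones preset and final-inversion conventions described above); for this CRC the fixed one-pass result is 0x2144df1c:

```python
import struct
import zlib

# With all-ones preset and final inversion, checking message-plus-CRC in one
# pass yields a fixed non-zero constant rather than zero. The CRC is appended
# least significant byte first, matching the lsbit-first CRC-32 convention.
for msg in (b"arbitrary message", b"another one entirely"):
    crc = zlib.crc32(msg)
    residue = zlib.crc32(msg + struct.pack("<I", crc))
    print(hex(residue))   # 0x2144df1c, independent of the message
```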
https://en.wikipedia.org/wiki/Computation_of_cyclic_redundancy_checks
This is a list of hash functions, including cyclic redundancy checks, checksum functions, and cryptographic hash functions. Adler-32 is often mistaken for a CRC, but it is not: it is a checksum.
https://en.wikipedia.org/wiki/List_of_checksum_algorithms