Aprivacy policyis a statement or legal document (in privacy law) that discloses some or all of the ways a party gathers, uses, discloses, and manages a customer or client's data.[1]Personal information can be anything that can be used to identify an individual, not limited to the person's name, address, date of birth, marital status, contact information, ID issue, and expiry date, financial records, credit information, medical history, where one travels, and intentions to acquire goods and services.[2]In the case of a business, it is often a statement that declares a party's policy on how it collects, stores, and releases personal information it collects. It informs the client what specific information is collected, and whether it is kept confidential, shared with partners, or sold to other firms or enterprises.[3][4]Privacy policies typically represent a broader, more generalized treatment, as opposed to data use statements, which tend to be more detailed and specific. The exact contents of a certain privacy policy will depend upon the applicable law and may need to address requirements across geographical boundaries and legal jurisdictions. Most countries have own legislation and guidelines of who is covered, what information can be collected, and what it can be used for. In general, data protection laws in Europe cover the private sector, as well as the public sector. Their privacy laws apply not only to government operations but also to private enterprises and commercial transactions. In 1968, theCouncil of Europebegan to study the effects of technology onhuman rights, recognizing the new threats posed by computer technology that could link and transmit in ways not widely available before. In 1969 theOrganisation for Economic Co-operation and Development(OECD) began to examine the implications of personal information leaving the country. All this led the council to recommend that policy be developed to protectpersonal dataheld by both the private and public sectors, leading to Convention 108. In 1981,Convention for the Protection of Individuals with regard to Automatic Processing of Personal Data(Convention 108) was introduced. One of the first privacy laws ever enacted was theSwedish Data Actin 1973, followed by the West German Data Protection Act in 1977 and the French Law on Informatics, Data Banks and Freedoms in 1978.[5] In the United States, concern over privacy policy starting around the late 1960s and 1970s led to the passage of theFair Credit Reporting Act. Although this act was not designed to be a privacy law, the act gave consumers the opportunity to examine their credit files and correct errors. It also placed restrictions on the use of information in credit records. Several congressional study groups in the late 1960s examined the growing ease with which automated personal information could be gathered and matched with other information. One such group was an advisory committee of theUnited States Department of Health and Human Services, which in 1973 drafted a code of principles called the Fair Information Practices. The work of the advisory committee led to the Privacy Act in 1974. The United States signed theOrganisation for Economic Co-operation and Developmentguidelines in 1980.[5] In Canada, aPrivacy Commissioner of Canadawas established under theCanadian Human Rights Actin 1977. In 1982, the appointment of a Privacy Commissioner was part of the new Privacy Act. 
Canada signed the OECD guidelines in 1984.[5] There are significant differences between the EU data protection and US data privacy laws. These standards must be met not only by businesses operating in the EU but also by any organization that transfers personal information collected concerning citizens of the EU. In 2001 the United States Department of Commerce worked to ensure legal compliance for US organizations under an opt-in Safe Harbor Program. The FTC has approved eTRUST to certify streamlined compliance with the US-EU Safe Harbor. In 1995 theEuropean Union(EU) introduced theData Protection Directive[6]for its member states. As a result, many organizations doing business within the EU began to draft policies to comply with this Directive. In the same year, the U.S.Federal Trade Commission(FTC) published the Fair Information Principles[7]which provided a set of non-binding governing principles for the commercial use ofpersonal information. While not mandating policy, these principles provided guidance of the developing concerns of how to draft privacy policies. The United States does not have a specific federal regulation establishing universal implementation of privacy policies. Congress has, at times, considered comprehensive laws regulating the collection of information online, such as the Consumer Internet Privacy Enhancement Act[8]and the Online Privacy Protection Act of 2001,[9]but none have been enacted. In 2001, the FTC stated an express preference for "more law enforcement, not more laws"[10]and promoted continued focus onindustry self-regulation. In many cases, the FTC enforces the terms of privacy policies as promises made to consumers using the authority granted by Section 5 of theFTC Actwhich prohibits unfair or deceptive marketing practices.[11]The FTC's powers are statutorily restricted in some cases; for example, airlines are subject to the authority of theFederal Aviation Administration(FAA),[12]and cell phone carriers are subject to the authority of theFederal Communications Commission(FCC).[13] In some cases, private parties enforce the terms of privacy policies by filingclass actionlawsuits, which may result in settlements or judgments. However, such lawsuits are often not an option, due toarbitration clausesin the privacy policies or otherterms of serviceagreements.[14] While no generally applicable law exists, some federal laws govern privacy policies in specific circumstances, such as: Some states have implemented more stringent regulations for privacy policies. The CaliforniaOnline Privacy Protection Actof 2003 – Business and Professions Code sections 22575-22579requires "any commercial websites or online services that collect personal information on California residents through a web site to conspicuously post a privacy policy on the site".[26]Both Nebraska and Pennsylvania have laws treating misleading statements in privacy policies published on websites as deceptive or fraudulent business practices.[27] Canada's federalPrivacy Lawapplicable to the private sector is formally referred to asPersonal Information Protection and Electronic Documents Act(PIPEDA). The purpose of the act is to establish rules to govern the collection, use, and disclosure of personal information by commercial organizations. 
The organization may collect, disclose, and use only the amount of information for purposes that a reasonable person would consider appropriate in the circumstances.[28] The Act establishes the Privacy Commissioner of Canada as the ombudsman for addressing any complaints filed against organizations. The Commissioner works to resolve problems through voluntary compliance rather than heavy-handed enforcement, and investigates complaints, conducts audits, promotes awareness of, and undertakes research about privacy matters.[29] The right to privacy is a highly developed area of law in Europe. All the member states of the European Union (EU) are also signatories of the European Convention on Human Rights (ECHR). Article 8 of the ECHR provides a right to respect for one's "private and family life, his home and his correspondence", subject to certain restrictions. The European Court of Human Rights has given this article a very broad interpretation in its jurisprudence.[30] In 1980, in an effort to create a comprehensive data protection system throughout Europe, the Organisation for Economic Co-operation and Development (OECD) issued its "Recommendations of the Council Concerning Guidelines Governing the Protection of Privacy and Trans-Border Flows of Personal Data".[31] The seven principles governing the OECD's recommendations for protection of personal data were: The OECD guidelines, however, were nonbinding, and data privacy laws still varied widely across Europe. The US, while endorsing the OECD's recommendations, did nothing to implement them within the United States.[32] However, all seven principles were incorporated into the EU Directive.[32] In 1995, the EU adopted the Data Protection Directive, which regulates the processing of personal data within the EU. There were significant differences between the EU data protection rules and the equivalent U.S. data privacy laws. These standards must be met not only by businesses operating in the EU but also by any organization that transfers personal information collected concerning a citizen of the EU. In 2001 the United States Department of Commerce worked to ensure legal compliance for US organizations under an opt-in Safe Harbor Program.[33] The FTC approved a number of US providers to certify compliance with the US-EU Safe Harbor. Since 2010, however, Safe Harbor has been criticised, particularly by Germany's publicly appointed privacy regulators, on the grounds that the FTC failed to enforce the agreed rules properly even after shortcomings were revealed.[34] Effective 25 May 2018, the Data Protection Directive was superseded by the General Data Protection Regulation (GDPR), which harmonizes privacy rules across all EU member states. GDPR imposes more stringent rules on the collection of personal information belonging to EU data subjects, including a requirement for privacy policies to be more concise, clearly worded, and transparent in their disclosure of any collection, processing, storage, or transfer of personally identifiable information.
Data controllers must also provide the opportunity for their data to be madeportablein a common format, and for it to be erased under certain circumstances.[35][36] ThePrivacy Act 1988provides the legal framework for privacy in Australia.[37]It includes a number of national privacy principles.[38]There are thirteen privacy principles under the Privacy Act.[39]It oversees and regulates the collection, use and disclosure of people's private information, makes sure who is responsible if there is a violation, and the rights of individuals to access their information.[39] The Information Technology (Amendment) Act, 2008 made significant changes to theInformation Technology Act, 2000, introducing Section 43A. This section provides compensation in the case where a corporate body is negligent in implementing and maintaining reasonable security practices and procedures and thereby causes wrongful loss or wrongful gain to any person. This applies when a corporate body possesses, deals or handles any sensitivepersonal dataor information in a computer resource that it owns, controls or operates. In 2011, the Government of India prescribed the Information Technology (Reasonable security practices and procedures and sensitive personal data or information) Rules, 2011[40]by publishing it in the Official Gazette.[41]These rules require a body corporate to provide a privacy policy for handling of or dealing in personal information including sensitive personal data or information.[42]Such a privacy policy should consist of the following information in accordance with the rules: The privacy policy should be published on the website of the body corporate, and be made available for view by providers of information who have provided personal information under lawful contract. Online certification or "seal" programs are an example of industry self-regulation of privacy policies. Seal programs usually require implementation of fair information practices as determined by the certification program and may require continued compliance monitoring.TRUSTArc(formerly TRUSTe),[43]the first online privacy seal program, included more than 1,800 members by 2007.[44]Other online seal programs include the Trust Guard Privacy Verified program,[45]eTrust,[46]andWebtrust.[47] Some websites also define their privacy policies usingP3PorInternet Content Rating Association(ICRA), allowing browsers to automatically assess the level of privacy offered by the site, and allowing access only when the site's privacy practices are in line with the user's privacy settings. However, these technical solutions do not guarantee websites actually follows the claimed privacy policies. These implementations also require users to have a minimum level of technical knowledge to configure their own browser privacy settings.[48]These automated privacy policies have not been popular either with websites or their users.[49]To reduce the burden of interpreting individual privacy policies, re-usable, certified policies available from a policy server have been proposed by Jøsang, Fritsch and Mahler.[50] Many critics have attacked the efficacy and legitimacy of privacy policies found on the Internet. Concerns exist about the effectiveness of industry-regulated privacy policies. For example, a 2000 FTC report Privacy Online: Fair Information Practices in the Electronic Marketplace found that while the vast majority of websites surveyed had some manner of privacy disclosure, most did not meet the standard set in the FTC Principles. 
In addition, many organizations reserve the express right to unilaterally change the terms of their policies. In June 2009 theEFFwebsite TOSback began tracking such changes on 56 popular internet services, including monitoring the privacy policies ofAmazon,GoogleandFacebook.[51] There are also questions about whether consumers understand privacy policies and whether they help consumers make more informed decisions. A 2002 report from theStanford Persuasive Technology Labcontended that a website's visual designs had more influence than the website's privacy policy when consumers assessed the website's credibility.[52]A 2007 study byCarnegie Mellon Universityclaimed "when not presented with prominent privacy information..." consumers were "…likely to make purchases from the vendor with the lowest price, regardless of that site's privacy policies".[53]However, the same study also showed that when information about privacy practices is clearly presented, consumers prefer retailers who better protect their privacy and some are willing to "pay a premium to purchase from more privacy protective websites". Furthermore, a 2007 study at theUniversity of California, Berkeleyfound that "75% of consumers think as long as a site has a privacy policy it means it won't share data with third parties," confusing the existence of a privacy policy with extensive privacy protection.[54]Based on the common nature of this misunderstanding, researcher Joseph Turow argued to the U.S.Federal Trade Commissionthat the term "privacy policy" thus constitutes a deceptive trade practice and that alternative phrasing like "how we use your information" should be used instead.[55] Privacy policies suffer generally from a lack of precision, especially when compared with the emerging form of the Data Use Statement. Where privacy statements provide a more general overview of data collection and use, data use statements represent a much more specific treatment. As a result, privacy policies may not meet the increased demand for transparency that data use statements provide. Critics also question if consumers even read privacy policies or can understand what they read. A 2001 study by the Privacy Leadership Initiative claimed only 3% of consumers read privacy policies carefully, and 64% briefly glanced at, or never read privacy policies.[56]The average website user once having read a privacy statement may have more uncertainty about the trustworthiness of the website than before.[57][58]One possible issue is length and complexity of policies. According to a 2008Carnegie Mellonstudy, the average length of a privacy policy is 2,500 words and requires an average of 10 minutes to read. The study cited that "Privacy policies are hard to read" and, as a result, "read infrequently".[59]However, any efforts to make the information more presentable simplify the information to the point that it does not convey the extent to which users' data is being shared and sold.[60]This is known as the "transparency paradox". There have been many studies carried out by researchers to evaluate the privacy policies of the websites of companies. One study usesnatural language processinganddeep learningas a proposed solution to automatically assess the efficiency of companies' privacy policies, in order to help the users become more aware.[61]
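The automated assessment mentioned above can be illustrated with a small, hedged sketch: instead of the deep-learning models used in the cited research, it trains a simple bag-of-words classifier that labels individual policy sentences with data-practice categories and summarizes how often each appears. The category names and the tiny training set are hypothetical stand-ins for the large annotated corpora such studies rely on.

```python
# Minimal sketch of automated privacy-policy analysis in the spirit of the
# NLP-based approaches mentioned above. The category labels and the tiny
# training set are hypothetical; a real system would use a large annotated
# corpus (and typically a deep-learning model) instead.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical annotated snippets: (policy sentence, data-practice label)
TRAINING_DATA = [
    ("We collect your name, email address and device identifiers.", "first_party_collection"),
    ("We share information with advertising partners.", "third_party_sharing"),
    ("You may request deletion of your personal data at any time.", "user_control"),
    ("Data is retained for as long as your account remains active.", "data_retention"),
    ("We use industry-standard encryption to protect your information.", "data_security"),
    ("Our partners may receive aggregated usage statistics.", "third_party_sharing"),
    ("You can opt out of marketing emails in your settings.", "user_control"),
    ("Log data is stored for ninety days.", "data_retention"),
]

def build_classifier():
    """Train a bag-of-words classifier over the annotated snippets."""
    texts, labels = zip(*TRAINING_DATA)
    model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                          LogisticRegression(max_iter=1000))
    model.fit(texts, labels)
    return model

def summarize_policy(policy_text: str) -> dict:
    """Split a policy into sentences and count predicted practice categories."""
    model = build_classifier()
    sentences = [s.strip() for s in policy_text.split(".") if s.strip()]
    counts: dict = {}
    for label in model.predict(sentences):
        counts[label] = counts.get(label, 0) + 1
    return counts

if __name__ == "__main__":
    sample = ("We collect your email address and location. "
              "Information may be shared with our advertising partners. "
              "You can ask us to erase your data.")
    print(summarize_policy(sample))
```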
https://en.wikipedia.org/wiki/Privacy_policy
Information security management (ISM) defines and manages the controls that an organization needs to implement to ensure that it is sensibly protecting the confidentiality, availability, and integrity of assets from threats and vulnerabilities. The core of ISM includes information risk management, a process that involves the assessment of the risks an organization must deal with in the management and protection of assets, as well as the dissemination of those risks to all appropriate stakeholders.[1] This requires proper asset identification and valuation steps, including evaluating the value of confidentiality, integrity, availability, and replacement of assets.[2] As part of information security management, an organization may implement an information security management system and other best practices found in the ISO/IEC 27001, ISO/IEC 27002, and ISO/IEC 27035 standards on information security.[3][4] Managing information security in essence means managing and mitigating the various threats and vulnerabilities to assets, while at the same time balancing the management effort expended on potential threats and vulnerabilities by gauging the probability of them actually occurring.[1][5][6] A meteorite crashing into a server room is certainly a threat, for example, but an information security officer will likely put little effort into preparing for it, just as the existence of a global seed bank does not mean people should start preparing for the end of the world.[7] After appropriate asset identification and valuation have occurred,[2] risk management and mitigation of risks to those assets involves the analysis of the following issues:[5][6][8] Once a threat and/or vulnerability has been identified and assessed as having sufficient impact and likelihood for information assets, a mitigation plan can be enacted. The mitigation method chosen depends largely on which of the seven information technology (IT) domains the threat and/or vulnerability resides in. The threat of user apathy toward security policies (the user domain) will require a much different mitigation plan than the one used to limit the threat of unauthorized probing and scanning of a network (the LAN-to-WAN domain).[8] An information security management system (ISMS) represents the collation of all the interrelated and interacting information security elements of an organization, ensuring that policies, procedures, and objectives can be created, implemented, communicated, and evaluated to better guarantee the organization's overall information security. This system is typically influenced by an organization's needs, objectives, security requirements, size, and processes.[9] An ISMS includes and lends itself to risk management and mitigation strategies.
Additionally, an organization's adoption of an ISMS indicates that it is systematically identifying, assessing, and managing information security risks and "will be capable of successfully addressing information confidentiality, integrity, and availability requirements."[10] However, the human factors associated with ISMS development, implementation, and practice (the user domain[8]) must also be considered to best ensure the ISMS' ultimate success.[11] Implementing effective information security management (including risk management and mitigation) requires a management strategy that takes note of the following:[12] Without sufficient budgetary considerations for all of the above, in addition to the money allotted to standard regulatory, IT, privacy, and security issues, an information security management plan or system cannot fully succeed. Standards that are available to assist organizations with implementing the appropriate programs and controls to mitigate threats and vulnerabilities include the ISO/IEC 27000 family of standards, the ITIL framework, the COBIT framework, and O-ISM3 2.0. The ISO/IEC 27000 family represents some of the most well-known standards governing information security management, and its ISMS guidance is based on global expert opinion. The standards lay out the requirements for best "establishing, implementing, deploying, monitoring, reviewing, maintaining, updating, and improving information security management systems."[3][4] ITIL acts as a collection of concepts, policies, and best practices for the effective management of information technology infrastructure, service, and security, differing from ISO/IEC 27001 in only a few ways.[13][14] COBIT, developed by ISACA, is a framework for helping information security personnel develop and implement strategies for information management and governance while minimizing negative impacts and controlling information security and risk management,[4][13][15] and O-ISM3 2.0 is The Open Group's technology-neutral information security model for the enterprise.[16]
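The risk-assessment and prioritization process described above can be sketched concretely. The following minimal example, under the assumption of a simple 1-5 likelihood and impact scale, scores each asset/threat pair and sorts a small risk register so that mitigation effort goes to the highest-rated items first; the assets and ratings are illustrative only.

```python
# Illustrative sketch of the risk-assessment step described above: assets are
# valued, threats are rated by likelihood and impact, and the resulting scores
# are used to prioritize mitigation effort. The assets, threats and 1-5 scales
# here are hypothetical examples, not a prescribed methodology.
from dataclasses import dataclass

@dataclass
class RiskItem:
    asset: str        # asset being protected
    threat: str       # threat or vulnerability
    likelihood: int   # 1 (rare) .. 5 (almost certain)
    impact: int       # 1 (negligible) .. 5 (severe)

    @property
    def score(self) -> int:
        # Simple qualitative risk score: likelihood x impact
        return self.likelihood * self.impact

def prioritize(register):
    """Return the register ordered from highest to lowest risk score."""
    return sorted(register, key=lambda r: r.score, reverse=True)

if __name__ == "__main__":
    register = [
        RiskItem("customer database", "SQL injection against web app", 4, 5),
        RiskItem("office LAN", "user apathy toward security policy", 3, 3),
        RiskItem("server room", "meteorite strike", 1, 5),
    ]
    for item in prioritize(register):
        print(f"{item.score:>2}  {item.asset}: {item.threat}")
```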
https://en.wikipedia.org/wiki/Information_security_management
Security information and event management (SIEM) is a field within computer security that combines security information management (SIM) and security event management (SEM) to enable real-time analysis of security alerts generated by applications and network hardware.[1][2] SIEM systems are central to security operations centers (SOCs), where they are employed to detect, investigate, and respond to security incidents.[3] SIEM technology collects and aggregates data from various systems, allowing organizations to meet compliance requirements while safeguarding against threats. The National Institute of Standards and Technology (NIST) defines a SIEM tool as an application that provides the ability to gather security data from information system components and present that data as actionable information via a single interface.[4] SIEM tools can be implemented as software, hardware, or managed services.[5] SIEM systems log security events and generate reports to meet regulatory requirements such as the Health Insurance Portability and Accountability Act (HIPAA) and the Payment Card Industry Data Security Standard (PCI DSS). The integration of SIM and SEM within SIEM provides organizations with a centralized approach for monitoring security events and responding to threats in real time. First introduced by Gartner analysts Mark Nicolett and Amrit Williams in 2005, the term SIEM has evolved to incorporate advanced features such as threat intelligence and behavioral analytics, which allow SIEM solutions to manage complex cybersecurity threats, including zero-day vulnerabilities and polymorphic malware. In recent years, SIEM has become increasingly incorporated into national cybersecurity initiatives. For instance, Executive Order 14028, signed in 2021 by U.S. President Joseph Biden, mandates the use of SIEM technologies to improve incident detection and reporting in federal systems. Compliance with these mandates is further reinforced by frameworks such as NIST SP 800-92, which outlines best practices for managing computer security logs.[2] Modern SIEM platforms aggregate and normalize data not only from various information technology (IT) sources but also from production and manufacturing operational technology (OT) environments. Initially, system logging was primarily used for troubleshooting and debugging. However, as operating systems and networks have grown more complex, so has the generation of system logs. The monitoring of system logs has also become increasingly common due to the rise of sophisticated cyberattacks and the need for compliance with regulatory frameworks, which mandate logging security controls within risk management frameworks (RMF). Starting in the late 1970s, working groups began establishing criteria for managing auditing and monitoring programs, laying the groundwork for modern cybersecurity practices, such as insider threat detection and incident response. A key publication during this period was NIST's Special Publication 500-19.[6] In 2005, the term "SIEM" (Security Information and Event Management) was introduced by Gartner analysts Mark Nicolett and Amrit Williams.
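As noted above, SIEM platforms aggregate and normalize events from heterogeneous IT and OT sources before analysis. The sketch below shows, under assumed and simplified log formats, how two very different raw records (an sshd-style text line and a hypothetical PLC controller event) might be mapped into one common event schema; the field names and formats are illustrative, not any vendor's actual connector API.

```python
# Hedged sketch of the aggregation/normalization step described above: raw
# events from different sources are parsed into one common schema before
# correlation. The two input formats and the field names are invented for
# illustration; real SIEM connectors handle many vendor-specific formats.
import re
from datetime import datetime, timezone

COMMON_FIELDS = ("timestamp", "source", "host", "user", "action", "outcome")

def normalize_ssh_log(line):
    """Parse a simplified sshd-style line into the common schema."""
    m = re.match(r"(?P<ts>\S+) (?P<host>\S+) sshd: "
                 r"(?P<outcome>Accepted|Failed) password for (?P<user>\S+)", line)
    if not m:
        return None
    return {
        "timestamp": m.group("ts"),
        "source": "ssh",
        "host": m.group("host"),
        "user": m.group("user"),
        "action": "login",
        "outcome": "success" if m.group("outcome") == "Accepted" else "failure",
    }

def normalize_plc_event(event):
    """Map a hypothetical OT/PLC controller event into the same schema."""
    return {
        "timestamp": datetime.fromtimestamp(event["epoch"], tz=timezone.utc).isoformat(),
        "source": "plc",
        "host": event["controller_id"],
        "user": event.get("operator", "unknown"),
        "action": event["command"],
        "outcome": event["status"],
    }

if __name__ == "__main__":
    print(normalize_ssh_log("2024-05-01T10:22:01Z web01 sshd: Failed password for root"))
    print(normalize_plc_event({"epoch": 1714558921, "controller_id": "plc-7",
                               "command": "setpoint_change", "status": "success"}))
```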
SIEM systems provide a single interface for gathering security data from information systems and presenting it as actionable intelligence.[7]TheNational Institute of Standards and Technologyprovides the following definition of SIEM: "Application that provides the ability to gather security data from information system components and present that data as actionable information via a single interface."[2]In addition, NIST has designed and implemented a federally mandated RMF. With the implementation of RMFs globally, auditing and monitoring have become central toinformation assuranceand security. Cybersecurity professionals now rely on logging data to perform real-time security functions, driven by governance models that incorporate these processes into analytical tasks. As information assurance matured in the late 1990s and into the 2000s, the need to centralize system logs became apparent. Centralized log management allows for easier oversight and coordination across networked systems. On May 17, 2021, U.S. President Joseph Biden signed Executive Order 14028, "Improving the Nation's Cybersecurity," which established further logging requirements, including audit logging and endpoint protection, to enhance incident response capabilities.[8]This order was a response to an increase inransomwareattacks targeting critical infrastructure. By reinforcing information assurance controls within RMFs, the order aimed to drive compliance and secure funding for cybersecurity initiatives. Published in September 2006, the NIST SP 800-92 Guide to Computer Security Log Management serves as a key document within theNIST Risk Management Frameworkto guide what should be auditable. As indicated by the absence of the term "SIEM", the document was released before the widespread adoption of SIEM technologies.[9][10]Although the guide is not exhaustive due to rapid changes in technology since its publication, it remains relevant by anticipating industry growth. NIST is not the only source of guidance on regulatory mechanisms for auditing and monitoring, and many organizations are encouraged to adopt SIEM solutions rather than relying solely on host-based checks. Several regulations and standards reference NIST’s logging guidance, including the Federal Information Security Management Act (FISMA),[11]Gramm-Leach-Bliley Act (GLBA),[12]Health Insurance Portability and Accountability Act (HIPAA),[13]Sarbanes-Oxley Act (SOX) of 2002,[14]Payment Card Industry Data Security Standard (PCI DSS),[15]and ISO 27001.[16]Public and private organizations frequently reference NIST documents in their security policies. NIST SP 800-53 AU-2 Event Monitoring is a key security control that supports system auditing and ensures continuous monitoring for information assurance and cybersecurity operations. SIEM solutions are typically employed as central tools for these efforts. Federal systems categorized based on their impact on confidentiality, integrity, and availability (CIA) have five specific logging requirements (AU-2 a-e) that must be met.[17]While logging every action is possible, it is generally not recommended due to the volume of logs and the need for actionable security data. AU-2 provides a foundation for organizations to build a logging strategy that aligns with other controls. NIST SP 800-53 SI-4 System Monitoring outlines the requirements for monitoring systems, including detecting unauthorized access and tracking anomalies, malware, and potential attacks. 
This security control specifies both the hardware and software requirements for detecting suspicious activities.[18] Similarly, NIST SP 800-53 RA-10 Threat Hunting, added in Revision 5, emphasizes proactive network defense by identifying threats that evade traditional controls. SIEM solutions play a critical role in aggregating security information for threat hunting teams.[19] Together, AU-2, SI-4, and RA-10 demonstrate how NIST controls integrate into a comprehensive security strategy. These controls, supported by SIEM solutions, help ensure continuous monitoring, risk assessment, and defense-in-depth mechanisms across federal and private networks.[19] The acronyms SEM, SIM and SIEM have sometimes been used interchangeably,[20] but generally refer to the different primary focus of products: In practice many products in this area will have a mix of these functions, so there will often be some overlap, and many commercial vendors also promote their own terminology.[22] Vendors often provide different combinations of these functionalities, which tend to improve a SIEM deployment overall. Log management alone does not provide real-time insight into network security, and SEM on its own does not provide complete data for deep threat analysis; when SEM and log management are combined, more information is available for the SIEM to monitor. A key focus is to monitor and help manage user and service privileges, directory services and other system-configuration changes, as well as providing log auditing and review and incident response.[21] SIEM architectures may vary by vendor; however, the essential components that make up the SIEM engine are generally the same. The essential components of a SIEM are as follows:[26] Computer security researcher Chris Kubecka identified the following SIEM use cases, presented at the hacking conference 28C3 (Chaos Communication Congress).[31] Modern SIEM platforms support not only detection but also response, which can be manual or automated, including AI-based response. SIEM systems can have hundreds or even thousands of correlation rules, some simple and some more complex. Once a correlation rule is triggered, the system can take appropriate steps to mitigate a cyber attack; usually this includes sending a notification to a user and then possibly limiting or even shutting down the affected system. Brute force detection is relatively straightforward. Brute forcing relates to continually trying to guess a variable: it most commonly refers to someone trying to guess a password repeatedly, either manually or with a tool, but it can also refer to guessing URLs or important file locations on a system. An automated brute force attempt is easy to detect, since a legitimate user entering a password 60 times in a minute is implausible. When a user logs in to a system, it generally creates a timestamp of the event. Alongside the time, the system may record other useful information such as the device used, physical location, IP address, and incorrect login attempts. The more data is collected, the more use can be derived from it. For impossible travel detection, the system looks at the current and previous login date/time and the distance between the two recorded locations. If it deems that travel impossible, for example covering hundreds of miles within a minute, it raises a warning. Many employees and users now use VPN services, which may obscure their physical location.
This should be taken into consideration when setting up such a rule. The average user does not typically copy or move files on the system repeatedly, so excessive file copying could indicate an attacker seeking to harm the organization. Unfortunately, it is not as simple as concluding that someone has gained access to the network illegally and wants to steal confidential information: it could also be an employee looking to sell company information, or one who simply wants to take some files home for the weekend. Network traffic can also be monitored for unusual patterns, covering threats and attacks ranging from DDoS to network scans. A SIEM can monitor data flows in the network to detect and help prevent potential data exfiltration, although dedicated data loss prevention (DLP) tools are generally responsible for that function. A DDoS (distributed denial of service) attack can cause significant damage to a company or organization: it can not only take a website offline but also weaken a system. With suitable correlation rules in place, a SIEM should trigger an alert at the start of the attack so that the company can take the necessary precautionary measures to protect vital systems. File integrity and change monitoring (FIM) is the process of monitoring the files on a system; unexpected changes in system files trigger an alert, as they are a likely indication of a cyber attack. Other examples of customized rules that alert on event conditions involve user authentication rules, detected attacks, and detected infections.[32]
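Two of the correlation-rule use cases described above, brute-force detection and impossible travel, can be sketched as simple checks over login events. The thresholds (10 failures per 60 seconds, a 900 km/h plausibility limit) and the event fields are assumptions for illustration; real rules would be tuned to the environment and, as noted above, would account for VPN use.

```python
# Hedged sketches of two correlation rules discussed above: brute-force
# detection (many failed logins in a short window) and "impossible travel"
# (consecutive logins whose implied speed is physically implausible).
# Thresholds, event fields and the earth radius constant are illustrative.
from collections import deque
from math import radians, sin, cos, asin, sqrt

FAILED_LOGIN_THRESHOLD = 10      # failures within the window that trigger an alert
WINDOW_SECONDS = 60
MAX_PLAUSIBLE_SPEED_KMH = 900    # roughly airliner speed

def brute_force_alert(failure_times):
    """Return True if any 60-second window contains too many failed logins."""
    window = deque()
    for t in sorted(failure_times):
        window.append(t)
        while window and t - window[0] > WINDOW_SECONDS:
            window.popleft()
        if len(window) >= FAILED_LOGIN_THRESHOLD:
            return True
    return False

def _distance_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two coordinates (haversine formula)."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

def impossible_travel_alert(prev_login, curr_login):
    """Flag consecutive logins whose implied travel speed is implausible."""
    hours = (curr_login["time"] - prev_login["time"]) / 3600
    if hours <= 0:
        return True
    km = _distance_km(prev_login["lat"], prev_login["lon"],
                      curr_login["lat"], curr_login["lon"])
    return km / hours > MAX_PLAUSIBLE_SPEED_KMH

if __name__ == "__main__":
    print(brute_force_alert([0, 2, 4, 5, 7, 9, 12, 15, 20, 25, 30]))  # True: 10+ failures in 60 s
    paris = {"time": 0, "lat": 48.85, "lon": 2.35}
    sydney = {"time": 1800, "lat": -33.87, "lon": 151.21}             # 30 minutes later
    print(impossible_travel_alert(paris, sydney))                     # True
```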
https://en.wikipedia.org/wiki/Security_Information_and_Event_Management
Security event management(SEM), and the relatedSIMandSIEM, are computer security disciplines that use data inspection tools to centralize the storage and interpretation of logs or events generated by other software running on a network.[1][2][3] The acronymsSEM,SIM,andSIEMhave sometimes been used interchangeably,[3]:3[4]but generally refer to the different primary focus of products: Many systems and applications which run on a computer network generate events which are kept in event logs. These logs are essentially lists of activities that occurred, with records of new events being appended to the end of the logs as they occur.Protocols, such assyslogandSNMP, can be used to transport these events, as they occur, to logging software that is not on the same host on which the events are generated. The better SEMs provide a flexible array of supported communication protocols to allow for the broadest range of event collection. It is beneficial to send all events to a centralized SEM system for the following reasons: Although centralised logging has existed for long time, SEMs are a relatively new idea, pioneered in 1999 by a small company called E-Security,[8]and are still evolving rapidly. The key feature of a Security Event Management tool is the ability to analyse the collected logs to highlight events or behaviors of interest, for example an Administrator orSuper User logon, outside of normal business hours. This may include attaching contextual information, such as host information (value, owner, location, etc.), identity information (user info related to accounts referenced in the event like first/last name, workforce ID, manager's name, etc.), and so forth. This contextual information can be leveraged to provide better correlation and reporting capabilities and is often referred to as Meta-data. Products may also integrate with external remediation, ticketing, and workflow tools to assist with the process of incident resolution. The better SEMs will provide a flexible, extensible set of integration capabilities to ensure that the SEM will work with most customer environments. SEMs are often sold to help satisfy U.S. regulatory requirements such as those ofSarbanes–Oxley,PCI-DSS,GLBA.[citation needed] One of the major problems in the SEM space is the difficulty in consistently analyzing event data. Every vendor, and indeed in many cases different products by one vendor, uses a different proprietary event data format and delivery method. Even in cases where a "standard" is used for some part of the chain, likeSyslog, the standards don't typically contain enough guidance to assist developers in how to generate events, administrators in how to gather them correctly and reliably, and consumers to analyze them effectively. As an attempt to combat this problem, a couple of parallel standardization efforts are underway. First,The Open Groupis updating their circa 1997XDASstandard, which never made it past draft status. This new effort, dubbed XDAS v2, will attempt to formalize an event format including which data should be included in events and how it should be expressed.[citation needed]The XDAS v2 standard will not include event delivery standards but other standards in development by theDistributed Management Task Forcemay provide a wrapper. In addition,MITREdeveloped efforts to unify event reporting with theCommon Event Expression(CEE) which was somewhat broader in scope as it attempted to define an event structure as well as delivery methods. 
The project, however, ran out of funding in 2014.
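The centralized collection step that SEM systems build on, forwarding events over protocols such as syslog to a logging host, can be illustrated with a minimal sketch. The UDP transport, port number, and output file below are assumptions for demonstration; production collectors use hardened, reliable transports and structured storage.

```python
# Minimal sketch of centralized event collection as described above: hosts
# forward syslog-style messages over the network and a central collector
# stores them for later analysis. The port, output file and plain-UDP
# transport are illustrative choices, not a production-ready SEM collector.
import socketserver
from datetime import datetime, timezone

LOG_FILE = "central_events.log"      # hypothetical central store
LISTEN_ADDR = ("0.0.0.0", 5140)      # unprivileged syslog-style port

class SyslogUDPHandler(socketserver.BaseRequestHandler):
    def handle(self):
        # For UDP servers, self.request is (data, socket)
        data = self.request[0].decode("utf-8", errors="replace").strip()
        received = datetime.now(timezone.utc).isoformat()
        line = f"{received} {self.client_address[0]} {data}\n"
        with open(LOG_FILE, "a", encoding="utf-8") as fh:
            fh.write(line)   # append the normalized line to the central store

if __name__ == "__main__":
    with socketserver.UDPServer(LISTEN_ADDR, SyslogUDPHandler) as server:
        print(f"Collecting events on udp/{LISTEN_ADDR[1]} -> {LOG_FILE}")
        server.serve_forever()
```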
https://en.wikipedia.org/wiki/Security_event_manager
Security management is the identification of an organization's assets (including people, buildings, machines, systems and information assets), followed by the development, documentation, and implementation of policies and procedures for protecting those assets. An organization uses such security management procedures for information classification, threat assessment, risk assessment, and risk analysis to identify threats, categorize assets, and rate system vulnerabilities.[1] Loss prevention focuses on identifying an organization's critical assets and how it is going to protect them. A key component of loss prevention is assessing the potential threats to the successful achievement of the goal. This must include the potential opportunities that further the objective (why take the risk unless there is an upside?). Balancing probability and impact, the organization then determines and implements measures to minimize or eliminate those threats.[2] Security management includes the theories, concepts, ideas, methods, procedures, and practices that are used to manage and control organizational resources in order to accomplish security goals. Policies, procedures, administration, operations, training, awareness campaigns, financial management, contracting, resource allocation, and dealing with problems like security degradation are all included in this vast sector.[3] The management of security risks applies the principles of risk management to the management of security threats. It consists of identifying threats (or risk causes), assessing the effectiveness of existing controls to face those threats, determining the risks' consequences, prioritizing the risks by rating their likelihood and impact, classifying the type of risk, and selecting an appropriate risk option or risk response. In 2016, a universal standard for managing risks was developed in the Netherlands; in 2017, it was updated and named the Universal Security Management Systems Standard 2017. Risk options: The first choice to be considered is the possibility of eliminating the existence of a criminal opportunity, or avoiding the creation of such an opportunity, provided this action does not itself create additional factors that pose a greater risk. For example, removing all the cash flow from a retail outlet would eliminate the opportunity for stealing the money, but it would also eliminate the ability to conduct business. When avoiding or eliminating the criminal opportunity conflicts with the ability to conduct business, the next step is reducing the opportunity for potential loss to the lowest level consistent with the function of the business. In the example above, the application of risk reduction might result in the business keeping only enough cash on hand for one day's operation. Assets that remain exposed after the application of reduction and avoidance are the subjects of risk spreading. This is the concept of limiting loss or potential losses by exposing the perpetrator to the probability of detection and apprehension prior to the consummation of the crime, through the application of perimeter lighting, barred windows, and intrusion detection systems. The idea is to reduce the time available for thieves to steal assets and escape without apprehension. The two primary methods of accomplishing risk transfer are to insure the assets or to raise prices to cover the loss in the event of a criminal act. Generally speaking, when the first three steps have been properly applied, the cost of transferring risks is much lower.
All of the remaining risks must simply be assumed by the business as a part of doing business. Included in these accepted losses are deductibles that form part of the insurance coverage.
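The sequence of risk options walked through above (avoidance, reduction, spreading, transfer, and finally acceptance) can be summarized as a simple decision sketch. The numeric thresholds and flags below are hypothetical simplifications; in practice each step is a business judgement rather than a formula.

```python
# Illustrative sketch of the sequence of risk options described above
# (avoidance, reduction, spreading, transfer, and finally acceptance).
# The thresholds and the single-choice decision order are hypothetical
# simplifications; real treatments are often applied cumulatively.
from dataclasses import dataclass

@dataclass
class AssessedRisk:
    description: str
    score: int                   # e.g. likelihood x impact, 1..25
    essential_to_business: bool  # can the risky activity simply be stopped?
    insurable: bool

def choose_treatment(risk):
    if not risk.essential_to_business:
        return "avoid"      # eliminate the criminal opportunity outright
    if risk.score >= 20:
        return "reduce"     # cut exposure to the minimum the business allows
    if risk.score >= 12:
        return "spread"     # lighting, barred windows, intrusion detection
    if risk.insurable:
        return "transfer"   # insure the asset or price in the expected loss
    return "accept"         # remaining risk is a cost of doing business

if __name__ == "__main__":
    till_cash = AssessedRisk("cash held in retail till", 15, True, True)
    print(choose_treatment(till_cash))   # -> "spread"
```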
https://en.wikipedia.org/wiki/Security_management
Information technology management(IT management) is the discipline whereby all of theinformation technologyresources of a firm are managed in accordance with its needs and priorities. Managing the responsibility within a company entails many of the basic management functions, likebudgeting, staffing,change management, and organizing and controlling, along with other aspects that are unique to technology, likesoftware design, network planning, tech support etc.[1] The central aim of IT management is to generate value through the use of technology. To achieve this,business strategiesand technology must be aligned. IT Management is different frommanagement information systems. The latter refers to management methods tied to the automation or support of human decision making.[2]IT Management refers to IT related management activities in organizations. MIS is focused mainly on the business aspect, with a strong input into the technology phase of the business/organization. A primary focus of IT management is the value creation made possible by technology. This requires the alignment of technology andbusiness strategies. While the value creation for an organization involves a network of relationships between internal and external environments, technology plays an important role in improving the overallvalue chainof an organization. However, this increase requires business and technology management to work as a creative, synergistic, and collaborative team instead of a purely mechanistic span of control.[3] Historically, one set of resources was dedicated to one particular computing technology, business application or line of business, and managed in a silo-like fashion.[4]These resources supported a single set of requirements and processes, and couldn't easily be optimized or reconfigured to support actual demand.[5]This led technology providers to build out and complement their product-centric infrastructure and management offerings withConverged Infrastructureenvironments that converge servers, storage, networking, security, management and facilities.[6][7]The efficiencies of having this type of integrated and automated management environment allows enterprises to get their applications up and running faster, with simpler manageability and maintenance, and enables IT to adjust IT resources (such as servers, storage and networking) quicker to meet unpredictable business demand.[8][9] The below concepts are commonly listed or investigated under the broad term IT Management:[10][11][12][13][14] IT managers have a lot in common withproject managersbut their main difference is one of focus: an IT manager is responsible and accountable for an ongoing program of IT services while the project manager's responsibility and accountability are both limited to a project with a clear start and end date.[18] Most IT management programs are designed to educate and develop managers who can effectively manage the planning, design, selection, implementation, use, and administration of emerging and converging information and communications technologies. 
The program curriculum provides students with the technical and managerial knowledge and skills needed to effectively integrate people, information and communication technologies, and business processes in support of organizational strategic goals.[19] IT managers need predominantly technical and managerial skills, in areas such as computer systems analysis, information security analysis, computing, planning, communication technologies, and business processes.[15] Graduates should be able to apply this combination of skills in practice. In 2013, hackers managed to install malware with the intent of stealing Target's customers' information. The malware targeted "40 million credit card numbers—and 70 million addresses, phone numbers, and other pieces of personal information". About six months before this happened, Target had invested $1.6 million to install a malware detection tool made by FireEye, whose security products are also used by the CIA. The software spotted the malware, and an alert was sent out as intended. However, nothing was done beyond that point, and the hackers successfully got away with confidential information belonging to one third of US consumers. The unresponsiveness of Target's security team led to 90 lawsuits being filed against the company, on top of approximately $61 million spent responding to the breach.[21]
https://en.wikipedia.org/wiki/IT_management
ACanada security clearanceis required for viewing classified information in Canada. Governmentclassified information is governed by theTreasury BoardStandard on Security Screening, theSecurity of Information ActandPrivacy Act. Only those that are deemed to be loyal and reliable, and have been cleared are allowed to access sensitive information. The policy was most recently revised on 20 October 2014.[1] Checks include basic demographic andfingerprintbased criminal record checks for all levels, and, depending on an individual appointment's requirements, credit checks, loyalty, and field checks might be conducted by theRCMPand/orCSIS. Clearance is granted, depending on types of appointment, by individual Federal government departments or agencies or by private company security officers. Those who have contracts withPublic Works and Government Services Canadaare bound by the Industrial Security Program, a sub-set of the GSP. To accessdesignated information, one must have at least standard reliability status (see Hierarchy below). Reliability checks and assessments are conditions of employment under thePublic Service Employment Act, and, thus, allGovernment of Canadaemployees have at least reliability status screening completed prior to their appointment.[2]However, Government employees byOrder-in-councilare not subjected to this policy.[3] Clearances at the reliability status and secret levels are valid for 10 years, whereas top secret is valid for 5 years. However, departments are free to request their employees to undergo security screening any time for cause.[4]Because security clearances are granted by individual departments instead of one central government agency, clearances are inactivated at the end of appointment or when an individual transfers out of the department. The individual concerned can then apply to reactivate and transfer the security clearance to his/her new position.[2] Three levels of personnel screening exist, with two sub-screening categories:[4][5] Standard screenings are completed for individuals without law enforcement, security and intelligence functions with the government, whereas Enhanced screenings are for individuals with law enforcement, security and intelligence functions, or access to those data or facilities. Individuals who need to have RS because of their job or access to federal government assets will be required to sign thePersonnel Screening, Consent and Authorization Form(TBS/SCT 330-23e). Individuals who require access to more sensitive information (or access to sensitive federal government sites and/or assets) because of their job will be required to sign theSecurity Clearance Form(TBS/SCT 330-60e). There are two levels of clearance: Two additional categories called "Site Access Status" and "Site Access Clearance" exist not for access to information purposes but for those that require physical access to sites or facilities designated byCSISas areas "reasonably be expected to be targeted by those who engage in activities constituting threats to the security of Canada". 
Designated areas includeGovernment Houses, official residences of government officials,Parliament,nuclear facilities, airport restricted areas, maritime ports, and any large-scale events that are sponsored by the federal government (e.g.,2010 Winter Olympics).[11] Where reliability is the primary concern, a site access status screening (similar to a reliability status, standard screening) is conducted; where loyalty to Canada is the primary concern, a site access clearance (similar to a Secret clearance screening) is required. They are both valid for 10 years. Prior to granting access to information, an individual who has been cleared must sign aSecurity Screening Certificate and Briefing Form(TBS/SCT 330–47), indicating their willingness to be bound by severalActs of Parliamentduring and after their appointment finishes. Anyone who has been given a security clearance and releases designated/classified information without legal authority is in breach of trust under section 18(2) of theSecurity of Information Actwith a punishment up to 2 years in jail. Those who have access to Special Operational Information are held to a higher standard. The release of such information is punishable by law, under section 17(2) of theSecurity of Information Act, liable to imprisonment for life.[12] Section 750(3) of theCriminal Code, states that no person convicted of an offence under section 121 (frauds on the Government), section 124 (selling or purchasing office), section 380 (Fraud - if directed against His Majesty) or section 418 (selling defective stores to His Majesty), has, after that conviction, the capacity to contract with His Majesty or to receive any benefits under a contract between His Majesty and any other person or to hold office under His Majesty unless a pardon has been granted. (This effectively prohibits granting of a Reliability Status to any such individual.)[13]
https://en.wikipedia.org/wiki/Canada_security_clearance
Classified information is confidential material that a government deems to be sensitive information that must be protected from unauthorized disclosure and that requires special handling and dissemination controls. Access is restricted by law or regulation to particular groups of individuals who hold the necessary security clearance and have a need to know. A formal security clearance is required to view or handle classified material, and the clearance process requires a satisfactory background investigation. Documents and other information must be properly marked by the author with one of several hierarchical levels of sensitivity, e.g. Confidential (C), Secret (S), and Top Secret (TS). All classified documents require designation markings, usually located on the cover sheet and in the header and footer of each page. The choice of level is based on an impact assessment; governments have their own criteria, including how to determine the classification of an information asset and rules on how to protect information classified at each level. This process often includes security clearances for personnel handling the information, and mishandling of the material can incur criminal penalties. Some corporations and non-government organizations also assign levels of protection to their private information, either from a desire to protect trade secrets, or because of laws and regulations governing various matters such as personal privacy, sealed legal proceedings and the timing of financial information releases. With the passage of time much classified information becomes less sensitive and may be declassified and made public. Since the late twentieth century there has been freedom of information legislation in some countries, whereby the public is deemed to have the right to all information that is not considered to be damaging if released. Sometimes documents are released with information that is still considered confidential obscured (redacted). Some political science and legal experts question whether the definition of classified ought to be information that would cause injury to the cause of justice, human rights, etc., rather than information that would cause injury to the national interest; that is, whether classifying information is in the collective best interest of a just society, or merely in the best interest of a society acting unjustly to protect its people, government, or administrative officials from legitimate recourses consistent with a fair and just social contract. The purpose of classification is to protect information. Higher classifications protect information that might endanger national security. Classification formalises what constitutes a "state secret" and accords different levels of protection based on the expected damage the information might cause in the wrong hands. However, classified information is frequently "leaked" to reporters by officials for political purposes, and several U.S. presidents have leaked sensitive information to influence public opinion.[2][3] Former government intelligence officials are usually able to retain their security clearance, but it is a privilege, not a right, with the President being the grantor.[4] Although classification systems vary from country to country, most have levels corresponding to the following British definitions (from the highest level to the lowest).
Top Secretis the highest level of classified information.[5]Information is further compartmented so that specific access using a code word aftertop secretis a legal way to hide collective and important information.[6]Such material would cause "exceptionally grave damage" tonational securityif made publicly available.[7]Prior to 1942, the United Kingdom and other members of the British Empire usedMost Secret, but this was later changed to match the United States' category name ofTop Secretin order to simplify Allied interoperability. The unauthorized disclosure of Top Secret (TS) information is expected to cause harm and be of grave threat to national security. The Washington Postreported in an investigation entitled "Top Secret America" that, as of 2010, "An estimated 854,000 people ... hold top-secret security clearances" in the United States.[8] It is desired that no document be released which refers toexperiments with humansand might have adverse effect on public opinion or result in legal suits. Documents covering such work field should be classified "secret". Secretmaterial would cause "serious damage" to national security if it were publicly available.[11] In the United States, operational "Secret" information can be marked with an additional "LimDis", to limit distribution. Confidentialmaterial would cause "damage" or be prejudicial to national security if publicly available.[12] Restrictedmaterial would cause "undesirable effects" if publicly available. Some countries do not have such a classification in public sectors, such as commercial industries. Such a level is also known as "PrivateInformation". Official(equivalent to U.S. DOD classificationControlled Unclassified Informationor CUI) material forms the generality of government business, public service delivery and commercial activity. This includes a diverse range of information, of varying sensitivities, and with differing consequences resulting from compromise or loss. Official information must be secured against athreat modelthat is broadly similar to that faced by a large private company. The Official Sensitive classification replaced the Restricted classification in April 2014 in the UK; Official indicates the previously used Unclassified marking.[13] Unclassifiedis technically not a classification level. Though this is a feature of some classification schemes, used for government documents that do not merit a particular classification or which have been declassified. This is because the information is low-impact, and therefore does not require any special protection, such as vetting of personnel. A plethora of pseudo-classifications exist under this category.[citation needed] Clearanceis a general classification, that comprises a variety of rules controlling the level of permission required to view some classified information, and how it must be stored, transmitted, and destroyed. Additionally, access is restricted on a "need to know" basis. Simply possessing a clearance does not automatically authorize the individual to view all material classified at that level or below that level. The individual must present a legitimate "need to know" in addition to the proper level of clearance. In addition to the general risk-based classification levels, additionalcompartmented constraints on accessexist, such as (in the U.S.) Special Intelligence (SI), which protects intelligence sources and methods, No Foreign dissemination (NoForn), which restricts dissemination to U.S. 
nationals, and Originator Controlled dissemination (OrCon), which ensures that the originator can track possessors of the information. Information in these compartments is usually marked with specific keywords in addition to the classification level. Government information about nuclear weapons often has an additional marking to show it contains such information (CNWDI). When a government agency shares information with an agency or group of another country's government, they will generally employ a special classification scheme that both parties have previously agreed to honour. For example, the marking Atomal is applied to U.S. Restricted Data or Formerly Restricted Data and United Kingdom Atomic information that has been released to NATO. Atomal information is marked COSMIC Top Secret Atomal (CTSA), NATO Secret Atomal (NSAT), or NATO Confidential Atomal (NCA). BALK and BOHEMIA are also used. For example, sensitive information shared amongst NATO allies has four levels of security classification; from most to least classified, these are COSMIC TOP SECRET, NATO SECRET, NATO CONFIDENTIAL and NATO RESTRICTED.[14][15] A special case exists with regard to NATO Unclassified (NU) information. Documents with this marking are NATO property (copyright) and must not be made public without NATO permission. COSMIC is an acronym for "Control of Secret Material in an International Command".[17] Most countries employ some sort of classification system for certain government information. For example, in Canada, information that the U.S. would classify SBU (Sensitive but Unclassified) is called "protected" and further subcategorised into levels A, B, and C. On 19 July 2011, the National Security (NS) classification marking scheme and the Non-National Security (NNS) classification marking scheme in Australia were unified into one structure. As of 2018, the policy detailing how Australian government entities handle classified information is defined in the Protective Security Policy Framework (PSPF). The PSPF is published by the Attorney-General's Department and covers security governance, information security, personnel security, and physical security. A security classification can be applied to the information itself or to an asset that holds information, e.g., a USB drive or laptop.[23] The Australian Government uses four security classifications: OFFICIAL: Sensitive, PROTECTED, SECRET and TOP SECRET. The relevant security classification is based on the likely damage resulting from compromise of the information's confidentiality. All other information from business operations and services requires a routine level of protection and is treated as OFFICIAL. Information that does not form part of official duty is treated as UNOFFICIAL. OFFICIAL and UNOFFICIAL are not security classifications and are not mandatory markings. Caveats are a warning that the information has special protections in addition to those indicated by the security classification of PROTECTED or higher (or, in the case of the NATIONAL CABINET caveat, OFFICIAL: Sensitive or higher). Australia has four types of caveat: codewords, foreign government markings, special handling instructions, and releasability caveats. Codewords are primarily used within the national security community; each codeword identifies a special need-to-know compartment. Foreign government markings are applied to information created by Australian agencies from foreign source information. Foreign government marking caveats require protection at least equivalent to that required by the foreign government providing the source information. Special handling instructions are used to indicate particular precautions for information handling.
A releasability caveat restricts information based on citizenship; three such caveats are in use. Additionally, the PSPF outlines Information Management Markers (IMM) as a way for entities to identify information that is subject to non-security related restrictions on access and use. There are three levels of document classification under Brazilian Law No. 12.527, the Access to Information Act:[24] ultrassecreto (top secret), secreto (secret) and reservado (restricted). A top secret (ultrassecreto) government-issued document may be classified for a period of 25 years, which may be extended for up to another 25 years.[25] Thus, no document remains classified for more than 50 years. This is mandated by the 2011 Information Access Law (Lei de Acesso à Informação), a change from the previous rule, under which documents could have their classification time length renewed indefinitely, effectively shielding state secrets from the public. The 2011 law applies retroactively to existing documents. The government of Canada employs two main types of sensitive information designation: Classified and Protected. The access and protection of both types of information are governed by the Security of Information Act, effective 24 December 2001, replacing the Official Secrets Act 1981.[26] To access the information, a person must have the appropriate security clearance and the need to know. In addition, the caveat "Canadian Eyes Only" is used to restrict access to Classified or Protected information only to Canadian citizens with the appropriate security clearance and need to know.[27] Special operational information (SOI) is not a classification of data per se. It is defined under the Security of Information Act, which enumerates the categories of information that qualify, and unauthorised release of such information constitutes a higher breach of trust, with a penalty of up to life imprisonment if the information is shared with a foreign entity or terrorist group. In February 2025, the Department of National Defence announced a new category of Persons Permanently Bound to Security (PPBS). The protection would apply to some units, sections or elements, and select positions (both current and former), with access to sensitive special operational information for national defence and intelligence work. If a unit or organization routinely handles SOI, all members of that unit will be automatically bound to secrecy. If an individual has direct access to SOI that is deemed to be integral to national security, that person may be recommended for PPBS designation. The designation is for life, and breaches are punishable by imprisonment.[28] Classified information can be designated Top Secret, Secret or Confidential. These classifications are only used on matters of national interest. Protected information is not classified. It pertains to any sensitive information that does not relate to national security and cannot be disclosed under the access and privacy legislation because of the potential injury to particular public or private interests.[29][30] Federal Cabinet (King's Privy Council for Canada) papers are either protected (e.g., overhead slides prepared to make presentations to Cabinet) or classified (e.g., draft legislation, certain memos).[31] The Criminal Law of the People's Republic of China (which is not operative in the special administrative regions of Hong Kong and Macau) makes it a crime to release a state secret. Regulation and enforcement is carried out by the National Administration for the Protection of State Secrets.
Under the 1989 "Law on Guarding State Secrets",[32]state secrets are defined as those that concern: Secrets can be classified into three categories: In France, classified information is defined by article 413-9 of the Penal Code.[34]The three levels of military classification are Less sensitive information is "protected". The levels are A further caveat,spécial France(reserved France) restricts the document to French citizens (in its entirety or by extracts). This is not a classification level. Declassification of documents can be done by theCommission consultative du secret de la défense nationale(CCSDN), an independent authority. Transfer of classified information is done with double envelopes, the outer layer being plastified and numbered, and the inner in strong paper. Reception of the document involves examination of the physical integrity of the container and registration of the document. In foreign countries, the document must be transferred through specialised military mail ordiplomatic bag. Transport is done by an authorised conveyor or habilitated person for mail under 20 kg. The letter must bear a seal mentioning "Par Valise Accompagnee-Sacoche". Once a year, ministers have an inventory of classified information and supports by competent authorities. Once their usage period is expired, documents are transferred to archives, where they are either destroyed (by incineration, crushing, or overvoltage), or stored. In case of unauthorized release of classified information, competent authorities are theMinistry of Interior, the 'Haut fonctionnaire de défense et de sécurité("high civil servant for defence and security") of the relevant ministry, and the General secretary for National Defence. Violation of such secrets is an offence punishable with seven years of imprisonment and a 100,000-euro fine; if the offence is committed by imprudence or negligence, the penalties are three years of imprisonment and a 45,000-euro fine. TheSecurity Bureauis responsible for developing policies in regards to the protection and handling of confidential government information. In general, the system used in Hong Kong is very similar to the UK system, developed from thecolonial era of Hong Kong. Four classifications exists in Hong Kong, from highest to lowest in sensitivity:[35] Restricted documents are not classifiedper se, but only those who have a need to know will have access to such information, in accordance with thePersonal Data (Privacy) Ordinance.[36] New Zealanduses the Restricted classification, which is lower than Confidential. People may be given access to Restricted information on the strength of an authorisation by their Head of department, without being subjected to the backgroundvettingassociated with Confidential, Secret and Top Secret clearances. New Zealand's security classifications and the national-harm requirements associated with their use are roughly similar to those of the United States. In addition to national security classifications there are two additional security classifications, In Confidence and Sensitive, which are used to protect information of a policy and privacy nature. There are also a number of information markings used within ministries and departments of the government, to indicate, for example, that information should not be released outside the originating ministry. Because of strict privacy requirements around personal information, personnel files are controlled in all parts of the public and private sectors. 
Information relating to the security vetting of an individual is usually classified at the In Confidence level. InRomania, classified information is referred to as "state secrets" (secrete de stat) and is defined by the Penal Code as "documents and data that manifestly appear to have this status or have been declared or qualified as such by decision of Government".[37]There are three levels of classification: "Secret" (Secret/S), "Top Secret" (Strict Secret/SS), and "Top Secret of Particular Importance" (Strict secret de interes deosebit/SSID).[38]The levels are set by theRomanian Intelligence Serviceand must be aligned with NATO regulations—in case of conflicting regulations, the latter are applied with priority. Dissemination of classified information to foreign agents or powers is punishable by up to life imprisonment, if such dissemination threatens Romania's national security.[39] In theRussian Federation, a state secret (Государственная тайна) is information protected by the state on its military, foreign policy, economic, intelligence, counterintelligence, operational and investigative and other activities, dissemination of which could harm state security. The Swedish classification has been updated due to increased NATO/PfP cooperation. All classified defence documents will now have both a Swedish classification (Kvalificerat hemlig,Hemlig,KonfidentiellorBegränsat Hemlig), and an English classification (Top Secret, Secret, Confidential, or Restricted).[citation needed]The termskyddad identitet, "protected identity", is used in the case of protection of a threatened person, basically implying "secret identity", accessible only to certain members of the police force and explicitly authorised officials. At the federal level, classified information in Switzerland is assigned one of three levels, which are from lowest to highest: Internal, Confidential, Secret.[40]Respectively, these are, in German,Intern,Vertraulich,Geheim; in French,Interne,Confidentiel,Secret; in Italian,Ad Uso Interno,Confidenziale,Segreto. As in other countries, the choice of classification depends on the potential impact that the unauthorised release of the classified document would have on Switzerland, the federal authorities or the authorities of a foreign government. According to the Ordinance on the Protection of Federal Information, information is classified as Internal if its "disclosure to unauthorised persons may be disadvantageous to national interests."[40]Information classified as Confidential could, if disclosed, compromise "the free formation of opinions and decision-making ofthe Federal Assemblyorthe Federal Council," jeopardise national monetary/economic policy, put the population at risk or adversely affect the operations of theSwiss Armed Forces. Finally, the unauthorised release of Secret information could seriously compromise the ability of either the Federal Assembly or the Federal Council to function or impede the ability of the Federal Government or the Armed Forces to act. According to the related regulations inTurkey, there are four levels of document classification:[41]çok gizli(top secret),gizli(secret),özel(confidential) andhizmete özel(restricted). The fifth istasnif dışı, which means unclassified. Until 2013, theUnited Kingdomused five levels of classification—from lowest to highest, they were: Protect, Restricted, Confidential, Secret and Top Secret (formerly Most Secret). 
The Cabinet Office provides guidance on how to protect information, including the security clearances required for personnel. Staff may be required to sign to confirm their understanding and acceptance of the Official Secrets Acts 1911 to 1989, although the Act applies regardless of signature. Protect is not in itself a security protective marking level (such as Restricted or greater), but is used to indicate information which should not be disclosed because, for instance, the document contains tax, national insurance, or other personal information. Government documents without a classification may be marked as Unclassified or Not Protectively Marked.[42] This system was replaced in April 2014 by the Government Security Classifications Policy, which has a simpler model: Top Secret, Secret, and Official.[13] Official Sensitive is a security marking which may be followed by one of three authorised descriptors: Commercial, LocSen (location sensitive) or Personal. Secret and Top Secret may include a caveat such as UK Eyes Only. Scientific discoveries may also be restricted via the D-Notice system if they are deemed to have applications relevant to national security; such work may later emerge as technology improves. For example, the specialised processors and routing engines used in graphics cards are said to be loosely based on top secret military chips designed for code breaking and image processing, and they may or may not have safeguards built in to generate errors when specific tasks are attempted, independently of the card's operating system.[citation needed] The U.S. classification system is currently established under Executive Order 13526 and has three levels of classification—Confidential, Secret, and Top Secret. The U.S. had a Restricted level during World War II but no longer does. U.S. regulations state that information received from other countries at the Restricted level should be handled as Confidential. A variety of markings are used for material that is not classified, but whose distribution is limited administratively or by other laws, e.g., For Official Use Only (FOUO) or sensitive but unclassified (SBU). The Atomic Energy Act of 1954 provides for the protection of information related to the design of nuclear weapons. The term "Restricted Data" is used to denote certain nuclear technology. Information about the storage, use or handling of nuclear material or weapons is marked "Formerly Restricted Data". These designations are used in addition to level markings (Confidential, Secret and Top Secret). Information protected by the Atomic Energy Act is protected by law, and information classified under the Executive Order is protected by executive privilege. The U.S. government insists it is "not appropriate" for a court to question whether any document is legally classified.[43] In the 1973 trial of Daniel Ellsberg for releasing the Pentagon Papers, the judge did not allow any testimony from Ellsberg, claiming it was "irrelevant", because the assigned classification could not be challenged. The charges against Ellsberg were ultimately dismissed after it was revealed that the government had broken the law in secretly breaking into the office of Ellsberg's psychiatrist and in tapping his telephone without a warrant. Ellsberg insists that the legal situation in the U.S.
in 2014 is worse than it was in 1973, and Edward Snowden could not get a fair trial.[44] The State Secrets Protection Act of 2008 might have given judges the authority to review such questions in camera, but the bill was not passed.[43] When a government agency acquires classified information through covert means, or designates a program as classified, the agency asserts "ownership" of that information and considers any public availability of it to be a violation of their ownership—even if the same information was acquired independently through "parallel reporting" by the press or others. For example, although the CIA drone program has been widely discussed in public since the early 2000s, and reporters personally observed and reported on drone missile strikes, the CIA still considers the very existence of the program to be classified in its entirety, and any public discussion of it technically constitutes exposure of classified information. "Parallel reporting" was an issue in determining what constitutes "classified" information during the Hillary Clinton email controversy when Assistant Secretary of State for Legislative Affairs Julia Frifield noted, "When policy officials obtain information from open sources, 'think tanks,' experts, foreign government officials, or others, the fact that some of the information may also have been available through intelligence channels does not mean that the information is necessarily classified."[45][46][47] [Table: comparative national markings equivalent to Top Secret, Secret, Confidential and Restricted (including, for example, the Philippine (Tagalog) markings Matinding Lihim, Mahigpit na Lihim, Lihim and Ipinagbabawal). Source: US Department of Defense (January 1995), "National Industrial Security Program - Operating Manual (DoD 5220.22-M)" (PDF), pp. B1–B3, archived 27 July 2019, retrieved 27 July 2019.] Private corporations often require written confidentiality agreements and conduct background checks on candidates for sensitive positions.[53] In the U.S., the Employee Polygraph Protection Act prohibits private employers from requiring lie detector tests, but there are a few exceptions. Policies dictating methods for marking and safeguarding company-sensitive information (e.g. "IBM Confidential") are common and some companies have more than one level. Such information is protected under trade secret laws. New product development teams are often sequestered and forbidden to share information about their efforts with un-cleared fellow employees, the original Apple Macintosh project being a famous example. Other activities, such as mergers and financial report preparation, generally involve similar restrictions.
However, corporate security generally lacks the elaborate hierarchical clearance and sensitivity structures and the harsh criminal sanctions that give government classification systems their particular tone. TheTraffic Light Protocol[54][55]was developed by theGroup of Eightcountries to enable the sharing of sensitive information between government agencies and corporations. This protocol has now been accepted as a model for trusted information exchange by over 30 other countries. The protocol provides for four "information sharing levels" for the handling of sensitive information.
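The four Traffic Light Protocol sharing levels are not named in the text above; the sketch below assumes the commonly published labels (RED, AMBER, GREEN and WHITE, with WHITE renamed CLEAR in the later TLP 2.0 revision) and is only a rough illustration of how such markings might be mapped to handling rules in code, not an official specification.

```python
from enum import Enum

class TLP(Enum):
    """Traffic Light Protocol sharing levels (assumed labels, original scheme)."""
    RED = "named recipients only"
    AMBER = "recipients' organisations, on a need-to-know basis"
    GREEN = "the wider community, but not public channels"
    WHITE = "unlimited disclosure"  # renamed CLEAR in the TLP 2.0 revision

def may_share(level: TLP, audience: str) -> bool:
    """Rough illustration of who a TLP-marked item may be passed to."""
    allowed = {
        TLP.RED:   {"named recipient"},
        TLP.AMBER: {"named recipient", "own organisation"},
        TLP.GREEN: {"named recipient", "own organisation", "community"},
        TLP.WHITE: {"named recipient", "own organisation", "community", "public"},
    }
    return audience in allowed[level]

if __name__ == "__main__":
    print(may_share(TLP.AMBER, "public"))     # False: AMBER stays within organisations
    print(may_share(TLP.GREEN, "community"))  # True: GREEN may reach the wider community
```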
https://en.wikipedia.org/wiki/Classified_information#Canada
Various governments require a certification of voting machines. In the United States there is only a voluntary federal certification for voting machines, and each state has ultimate jurisdiction over certification, though most states currently require national certification for their voting systems.[1] In Germany the Physikalisch-Technische Bundesanstalt was responsible for certifying voting machines for federal and European elections until 2009. Because the relevant ordinance, the Bundeswahlgeräteverordnung ("Federal Voting Machine Ordinance"), is considered to be in contradiction with Germany's Constitution, this responsibility is suspended. The only machines certified so far are the Nedap ESD1 and ESD2.
https://en.wikipedia.org/wiki/Certification_of_voting_machines
Within quality management systems (QMS) and information technology (IT) systems, change control is a process—either formal or informal[1]—used to ensure that changes to a product or system are introduced in a controlled and coordinated manner. It reduces the possibility that unnecessary changes will be introduced to a system without forethought, introducing faults into the system or undoing changes made by other users of software. The goals of a change control procedure usually include minimal disruption to services, reduction in back-out activities, and cost-effective utilization of resources involved in implementing change. According to the Project Management Institute, change control is a "process whereby modifications to documents, deliverables, or baselines associated with the project are identified, documented, approved, or rejected."[2] Change control is used in various industries, including in IT,[3] software development,[1] the pharmaceutical industry,[4] the medical device industry,[5] and other engineering/manufacturing industries.[6] For the IT and software industries, change control is a major aspect of the broader discipline of change management. Typical examples from the computer and network environments are patches to software products, installation of new operating systems, upgrades to network routing tables, or changes to the electrical power systems supporting such infrastructure.[1][3] Certain portions of ITIL cover change control.[7] There is considerable overlap and confusion between change management, configuration management and change control. The definition below is not yet integrated with definitions of the others. Change control can be described as a set of six steps: plan and scope the change, assess its impact and risk, obtain review and approval, build and test the solution, implement it, and close the request. The first step is to consider the primary and ancillary detail of the proposed change. This should include aspects such as identifying the change, its owner(s), how it will be communicated and executed,[8] how success will be verified, the change's estimate of importance, its added value, its conformity to business and industry standards, and its target date for completion.[3][9][10] Impact and risk assessment is the next vital step. When executed, will the proposed plan cause something to go wrong? Will related systems be impacted by the proposed change? Even minor details should be considered during this phase. Afterwards, a risk category should ideally be assigned to the proposed change: high-, moderate-, or low-risk. High-risk change requires many additional steps such as management approval and stakeholder notification, whereas low-risk change may only require project manager approval and minimal documentation.[3][9][10] If not addressed in the plan/scope, the desire for a backout plan should be expressed, particularly for high-risk changes that have significant worst-case scenarios.[3] Whether it is a change controller, change control board, steering committee, or project manager, a review and approval process is typically required.[11] The plan/scope and impact/risk assessments are considered in the context of business goals, requirements, and resources. If, for example, the change request is deemed to address a low severity, low impact issue that requires significant resources to correct, the request may be made low priority or shelved altogether.
In cases where a high-impact change is requested without a strong plan, the review/approval entity may request a full business case for further analysis.[1][3][9][10] If the change control request is approved to move forward, the delivery team will execute the solution through a small-scale development process in test or development environments. This allows the delivery team an opportunity to design and make incremental changes, with unit and/or regression testing.[1][3][9] Little in the way of testing and validation may occur for low-risk changes, though major changes will require significant testing before implementation.[9] The team will then seek approval and request a time and date to carry out the implementation phase. In rare cases where the solution cannot be tested, special consideration should be made towards the change/implementation window.[3] In most cases a special implementation team with the technical expertise to quickly move a change along is used to implement the change. The team should implement the change not only according to the approved plan but also according to organizational standards, industry standards, and quality management standards.[9] The implementation process may also require additional staff responsibilities outside the implementation team, including stakeholders[11] who may be asked to assist with troubleshooting.[3] Following implementation, the team may also carry out a post-implementation review, which would take place at another stakeholder meeting or during project closing procedures.[1][9] The closing process can be one of the more difficult and important phases of change control.[12] Three primary tasks at this end phase include determining that the project is actually complete, evaluating "the project plan in the context of project completion," and providing tangible proof of project success.[12] If, despite best efforts, something went wrong during the change control process, a post-mortem on what happened will need to be run, with the intent of applying lessons learned to future changes.[3] In a good manufacturing practice regulated industry, the topic is frequently encountered by its users. Various industry guidances and commentaries are available to help people comprehend this concept.[13][14][15] As a common practice, the activity is usually directed by one or more SOPs.[16] From the information technology perspective for clinical trials, it has been guided by another U.S. Food and Drug Administration document.[17]
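As a rough illustration of the plan/scope, risk-assessment and approval steps described above, the sketch below models a change request in Python; the field names, risk categories and approver roles are illustrative assumptions, not a prescribed schema.

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import List

class Risk(Enum):
    LOW = 1
    MODERATE = 2
    HIGH = 3

@dataclass
class ChangeRequest:
    """Hypothetical record capturing the plan/scope details discussed above."""
    identifier: str
    description: str
    owner: str
    target_date: str
    risk: Risk
    backout_plan: str = ""
    approvals: List[str] = field(default_factory=list)

    def required_approvers(self) -> List[str]:
        # Low-risk changes need only the project manager; higher-risk changes
        # also need board review and, for high risk, management sign-off.
        if self.risk is Risk.LOW:
            return ["project manager"]
        if self.risk is Risk.MODERATE:
            return ["project manager", "change control board"]
        return ["project manager", "change control board", "senior management"]

    def is_approved(self) -> bool:
        return set(self.required_approvers()).issubset(self.approvals)

cr = ChangeRequest("CR-042", "Upgrade core router firmware", "network team",
                   "2024-06-01", Risk.HIGH, backout_plan="reflash previous image")
cr.approvals += ["project manager", "change control board", "senior management"]
print(cr.is_approved())  # True once every required approver has signed off
```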
https://en.wikipedia.org/wiki/Change_control
In mathematics, two elements $x$ and $y$ of a set $P$ are said to be comparable with respect to a binary relation $\leq$ if at least one of $x \leq y$ or $y \leq x$ is true. They are called incomparable if they are not comparable. A binary relation on a set $P$ is by definition any subset $R$ of $P \times P$. Given $x, y \in P$, one writes $xRy$ if and only if $(x, y) \in R$, in which case $x$ is said to be related to $y$ by $R$. An element $x \in P$ is said to be $R$-comparable, or comparable (with respect to $R$), to an element $y \in P$ if $xRy$ or $yRx$. Often, a symbol indicating comparison, such as $<$ (or $\leq$, $>$, $\geq$, and many others), is used instead of $R$, in which case $x < y$ is written in place of $xRy$, which is why the term "comparable" is used. Comparability with respect to $R$ induces a canonical binary relation on $P$; specifically, the comparability relation induced by $R$ is defined to be the set of all pairs $(x, y) \in P \times P$ such that $x$ is comparable to $y$, that is, such that at least one of $xRy$ and $yRx$ is true. Similarly, the incomparability relation on $P$ induced by $R$ is defined to be the set of all pairs $(x, y) \in P \times P$ such that $x$ is incomparable to $y$, that is, such that neither $xRy$ nor $yRx$ is true. If the symbol $<$ is used in place of $\leq$, then comparability with respect to $<$ is sometimes denoted by the symbol ${\overset{<}{\underset{>}{=}}}$, and incomparability by that symbol struck through.[1][failed verification] Thus, for any two elements $x$ and $y$ of a partially ordered set, exactly one of "comparable" and "incomparable" holds. A totally ordered set is a partially ordered set in which any two elements are comparable. The Szpilrajn extension theorem states that every partial order is contained in a total order. Intuitively, the theorem says that any method of comparing elements that leaves some pairs incomparable can be extended in such a way that every pair becomes comparable. Both of the relations comparability and incomparability are symmetric, that is, $x$ is comparable to $y$ if and only if $y$ is comparable to $x$, and likewise for incomparability. The comparability graph of a partially ordered set $P$ has as vertices the elements of $P$ and has as edges precisely those pairs $\{x, y\}$ of comparable elements.[2] When classifying mathematical objects (e.g., topological spaces), two criteria are said to be comparable when the objects that obey one criterion constitute a subset of the objects that obey the other, which is to say when they are comparable under the partial order $\subset$. For example, the $T_1$ and $T_2$ criteria are comparable, while the $T_1$ and sobriety criteria are not.
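The following sketch illustrates the definitions above on a concrete partial order, divisibility on a small set of integers; it is an illustrative example, not part of the source material.

```python
from itertools import combinations

def divides(x: int, y: int) -> bool:
    """The partial order used here: x <= y iff x divides y."""
    return y % x == 0

def comparable(x: int, y: int) -> bool:
    # x and y are comparable when at least one of x <= y or y <= x holds.
    return divides(x, y) or divides(y, x)

P = [1, 2, 3, 4, 6, 12]

# Comparability and incomparability relations induced on P.
comparability   = {(x, y) for x in P for y in P if comparable(x, y)}
incomparability = {(x, y) for x in P for y in P if not comparable(x, y)}

print(comparable(2, 4))   # True:  2 divides 4
print(comparable(4, 6))   # False: neither divides the other, so 4 and 6 are incomparable
# Edges of the comparability graph: unordered pairs of distinct comparable elements.
print([{x, y} for x, y in combinations(P, 2) if comparable(x, y)])
```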
https://en.wikipedia.org/wiki/Comparability
Functional verification is the task of verifying that the logic design conforms to specification.[1] Functional verification attempts to answer the question "Does this proposed design do what is intended?"[2] This is complex and takes the majority of time and effort (up to 70% of design and development time)[1] in most large electronic system design projects. Functional verification is a part of the more encompassing design verification, which, besides functional verification, considers non-functional aspects like timing, layout and power.[3] Although the number of transistors has increased exponentially according to Moore's law, the number of engineers and the time taken to produce designs have increased only linearly. As transistor complexity increases, the number of coding errors also increases. Most of the errors in logic coding come from careless coding (12.7%), miscommunication (11.4%), and microarchitecture challenges (9.3%).[1] Thus, electronic design automation (EDA) tools are produced to keep up with the complexity of transistor design. Languages such as Verilog and VHDL were introduced together with the EDA tools.[1] Functional verification is very difficult because of the sheer volume of possible test cases that exist in even a simple design. Frequently there are more than 10^80 possible tests to comprehensively verify a design – a number that is impossible to cover in a lifetime. The effort is equivalent to program verification, and is NP-hard or even worse, and no solution has been found that works well in all cases. However, it can be attacked by many methods; none of them are perfect, but each can be helpful in certain circumstances. There are three types of functional verification, namely: dynamic functional, hybrid dynamic functional/static, and static verification.[1] Simulation-based verification (also called "dynamic verification") is widely used to "simulate" the design, since this method scales up very easily. Stimulus is provided to exercise each line in the HDL code. A test-bench is built to functionally verify the design by providing meaningful scenarios to check that, given certain input, the design performs to specification. A simulation environment is typically composed of several types of components. Different coverage metrics are defined to assess that the design has been adequately exercised. These include functional coverage (has every functionality of the design been exercised?), statement coverage (has each line of HDL been exercised?), and branch coverage (has each direction of every branch been exercised?).
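As a toy illustration of simulation-based (dynamic) verification, the Python sketch below drives a hypothetical design model with random stimulus, compares it against a reference model, and records a crude functional-coverage metric. Real flows use HDL testbenches and far richer coverage, so this is only a conceptual sketch; the saturating-adder design is an invented example.

```python
import random

def dut_saturating_add(a: int, b: int, width: int = 8) -> int:
    """Hypothetical design-under-test model: an 8-bit saturating adder."""
    max_val = (1 << width) - 1
    s = a + b
    return max_val if s > max_val else s

def reference_model(a: int, b: int, width: int = 8) -> int:
    """Golden model describing the intended behaviour."""
    return min(a + b, (1 << width) - 1)

# Functional coverage bins: have both the "no overflow" and "overflow" scenarios been exercised?
coverage = {"no_overflow": 0, "overflow": 0}

random.seed(0)
for _ in range(1000):
    a, b = random.randrange(256), random.randrange(256)   # random stimulus
    expected = reference_model(a, b)
    assert dut_saturating_add(a, b) == expected, f"mismatch for {a}+{b}"
    coverage["overflow" if a + b > 255 else "no_overflow"] += 1

print("functional coverage hit:", {bin_: count > 0 for bin_, count in coverage.items()})
```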
https://en.wikipedia.org/wiki/Functional_verification
ISO/IEC 17025, General requirements for the competence of testing and calibration laboratories, is the main standard used by testing and calibration laboratories. In most countries, ISO/IEC 17025 is the standard for which most labs must hold accreditation in order to be deemed technically competent. In many cases, suppliers and regulatory authorities will not accept test or calibration results from a lab that is not accredited. Originally known as ISO/IEC Guide 25, ISO/IEC 17025 was initially issued by ISO/IEC in 1999. There are many commonalities with the ISO 9000 standard, but ISO/IEC 17025 is more specific in requirements for competence, applies directly to those organizations that produce testing and calibration results, and is based on more technical principles.[1] Laboratories use ISO/IEC 17025 to implement a quality system aimed at improving their ability to consistently produce valid results.[2] Material in the standard also forms the basis for accreditation from an accreditation body. There have been three releases: in 1999, 2005 and 2017. The most significant changes between the 1999 and 2005 releases were a greater emphasis on the responsibilities of senior management, explicit requirements for continual improvement of the management system itself, and communication with the customer. The 2005 release also aligned more closely with the 2000 version of ISO 9001 with regard to implementing continuous improvement.[3] The 2005 version of the standard comprises four elements, while the 2017 version comprises eight. Some national systems (e.g. UKAS M10 in the UK) were the forerunners of ISO/IEC 17025:1999 but could also be exceedingly prescriptive. ISO/IEC 17025 allows laboratories to carry out procedures in their own ways, but requires the laboratory to justify using a particular method. In common with other ISO quality standards, ISO/IEC 17025 requires continual improvement. Additionally, the laboratory will be expected to keep abreast of scientific and technological advances in relevant areas. In common with other accreditation standards of the ISO 17000 series (and unlike most ISO standards for management systems), assessment of the laboratory is normally carried out by the national organization responsible for accreditation. Laboratories are therefore "accredited" under ISO/IEC 17025, rather than "certified" or "registered" by a third-party service as is the case with the ISO 9000 quality standard. In short, accreditation differs from certification by adding the concept of a third party (an accreditation body, or AB) attesting to technical competence within a laboratory, in addition to its adherence to and operation under a documented quality system, specific to a scope of accreditation. In order for accreditation bodies to recognize each other's accreditations, the International Laboratory Accreditation Cooperation (ILAC) worked to establish methods of evaluating accreditation bodies against another ISO/CASCO standard (ISO/IEC Guide 58, which became ISO/IEC 17011). Around the world, regions such as the European Community, the Asia-Pacific, the Americas and others established regional cooperations to manage the work needed for such mutual recognition. These regional bodies (all working within the ILAC umbrella) include the European Accreditation Cooperation (EA), the Asia Pacific Laboratory Accreditation Cooperation (APLAC), the Southern African Development Community Cooperation in Accreditation (SADCA) and the Inter-American Accreditation Cooperation (IAAC).
The first laboratory accreditation bodies to be established were the National Association of Testing Authorities (NATA) in Australia (1947) and TELARC in New Zealand (1973).[4][5] Most other bodies are based on the NATA/TELARC model and include UKAS in the UK, FINAS in Finland and DANAK in Denmark, to name a few. In the U.S. there are several multidisciplinary accreditation bodies that serve the laboratory community. These bodies accredit testing and calibration labs, reference material producers, PT providers, product certifiers, inspection bodies, forensic institutions and others to a multitude of standards and programs. These ILAC MRA signatory accreditation bodies carry identical acceptance across the globe; it does not matter which AB is utilized for accreditation, as the MRA arrangement was designed with equal weight across all economies. In Canada, there are two accreditation bodies: the accreditation of calibration laboratories is the shared responsibility of the Standards Council of Canada (SCC) Program for the Accreditation of Laboratories-Canada (PALCAN) and the National Research Council of Canada (NRC) Calibration Laboratory Assessment Service (CLAS). The CLAS program provides quality system and technical assessment services and certification of specific measurement capabilities of calibration laboratories in support of the Canadian National Measurement System. In other countries there is often only one accreditation body. Typically these bodies encompass accreditation programs for management systems, product certification, laboratory, inspection, personnel and other areas.
https://en.wikipedia.org/wiki/ISO_17025
Positive recall is a term used in quality systems, most notably ISO 9000. It is part of receiving inspection procedures.[1] It defines the concept that if a producer or manufacturer receives a product or process that requires inspection and it wishes to postpone the inspection process, it must have a system in place that will ensure that the postponed inspection will take place at some point prior to final product/process acceptance. In ISO 9000 it is defined in clause 4.10.2.3, also known as urgent production release.[2]
https://en.wikipedia.org/wiki/Positive_recall
Process validationis the analysis of data gathered throughout the design and manufacturing of a product in order to confirm that the process can reliably output products of a determined standard. Regulatory authorities likeEMAandFDAhave published guidelines relating to process validation.[1]The purpose of process validation is to ensure varied inputs lead to consistent and high quality outputs. Process validation is an ongoing process that must be frequently adapted as manufacturing feedback is gathered. End-to-end validation of production processes is essential in determining product quality because quality cannot always be determined by finished-product inspection. Process validation can be broken down into 3 steps: process design (Stage 1a, Stage 1b), process qualification (Stage 2a, Stage 2b), and continued process verification (Stage 3a, Stage 3b). In this stage, data from the development phase are gathered and analyzed to define the commercial manufacturing process. By understanding the commercial process, a framework for quality specifications can be established and used as the foundation of a control strategy.Process design[2]is the first of three stages of process validation. Data from the development phase is gathered and analyzed to understand end-to-end system processes. These data are used to establish benchmarks for quality and production control. Design of experiments is used to discover possible relationships and sources of variation as quickly as possible. A cost-benefit analysis should be conducted to determine if such an operation is necessary.[3] Quality by designis an approach to pharmaceutical manufacturing that stresses quality should be built into products rather than tested in products; that product quality should be considered at the earliest possible stage rather than at the end of the manufacturing process. Input variables are isolated in order to identify the root cause of potential quality issues and the manufacturing process is adapted accordingly. Process analytical technologyis used to measure critical process parameters (CPP) and critical quality attributes (CQA). PAT facilitates measurement of quantitative production variables in real time and allows access to relevant manufacturing feedback. PAT can also be used in the design process to generate a process qualification.[4] Critical process parametersare operating parameters that are considered essential to maintaining product output within specified quality target guidelines.[5] Critical quality attributes(CQA) are chemical, physical, biological, and microbiological attributes that can be defined, measured, and continually monitored to ensure final product outputs remain within acceptable quality limits.[6]CQA are an essential aspect of a manufacturing control strategy and should be identified in stage 1 of process validation:process design. During this stage, acceptable limits, baselines, and data collection and measurement protocols should be established. Data from the design process and data collected during production should be kept by the manufacturer and used to evaluateproduct qualityandprocess control.[7]Historical data can also help manufacturers better understand operational process and input variables as well as better identify true deviations from quality standards compared to false positives. Should a serious product quality issue arise, historical data would be essential in identifying the sources of errors and implementing corrective measures. 
In this stage, the process design is assessed to determine whether the process is able to meet the defined manufacturing criteria. All production processes and manufacturing equipment are qualified to confirm quality and output capabilities. Critical quality attributes are evaluated, and critical process parameters taken into account, to confirm product quality. Once the process qualification stage has been successfully accomplished, production can begin. Process performance qualification[8] is the second phase of process validation. Continued process verification is the ongoing monitoring of all aspects of the production cycle.[9] It aims to ensure that all levels of production are controlled and regulated. Deviations from prescribed output methods and final product irregularities are flagged by a process analytics database system. The FDA requires production data to be recorded (FDA requirements, § 211.180(e)). Continued process verification is stage 3 of process validation. The European Medicines Agency defines a similar process known as ongoing process verification. This alternative method of process validation is recommended by the EMA for validating processes on a continuous basis. Continuous process verification analyses critical process parameters and critical quality attributes in real time to confirm production remains within acceptable levels and meets standards set by ICH Q8, Pharmaceutical Quality Systems, and good manufacturing practice.
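A minimal sketch of continued process verification, assuming hypothetical specification limits for a single critical quality attribute: it flags out-of-specification results and computes a simple process-capability index (Cpk). The limits and batch data are invented for illustration.

```python
import statistics

# Hypothetical acceptance limits for a critical quality attribute (e.g., tablet weight in mg).
LOWER_SPEC, UPPER_SPEC = 95.0, 105.0

def process_capability(samples):
    """Cpk: how comfortably the measured CQA sits inside its specification limits."""
    mean = statistics.mean(samples)
    sd = statistics.stdev(samples)
    return min(UPPER_SPEC - mean, mean - LOWER_SPEC) / (3 * sd)

def flag_deviations(samples):
    """Continued process verification: flag any measurement outside the limits."""
    return [x for x in samples if not (LOWER_SPEC <= x <= UPPER_SPEC)]

batch = [99.8, 100.4, 101.1, 98.9, 100.0, 99.5, 100.7, 106.2]
print("Cpk =", round(process_capability(batch), 2))
print("out-of-specification results:", flag_deviations(batch))
```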
https://en.wikipedia.org/wiki/Process_validation
In software project management, software testing, and software engineering, verification and validation is the process of checking that a software system meets specifications and requirements so that it fulfills its intended purpose. It may also be referred to as software quality control. It is normally the responsibility of software testers as part of the software development lifecycle. In simple terms, software verification asks: "Assuming we should build X, does our software achieve its goals without any bugs or gaps?" On the other hand, software validation asks: "Was X what we should have built? Does X meet the high-level requirements?" Verification and validation are not the same thing, although they are often confused. Boehm succinctly expressed the difference as verification being "building the product right" and validation being "building the right product".[1] "Building the product right" checks that the specifications are correctly implemented by the system, while "building the right product" refers back to the user's needs. In some contexts, it is required to have written requirements for both, as well as formal procedures or protocols for determining compliance. Ideally, formal methods provide a mathematical guarantee that software meets its specifications. Building the product right implies the use of the Requirements Specification as input for the next phase of the development process, the design process, the output of which is the Design Specification. Then, it also implies the use of the Design Specification to feed the construction process. Every time the output of a process correctly implements its input specification, the software product is one step closer to final verification. If the output of a process is incorrect, the developers have not correctly implemented some component of that process. This kind of verification is called "artifact or specification verification". It does not mean verifying that the specifications are met by running the software, which is not possible (e.g., how can anyone know if the architecture/design/etc. are correctly implemented by running the software?); only by reviewing the associated artifacts can someone conclude whether or not the specifications are met. The output of each software development process stage can also be subject to verification when checked against its input specification (see the definition by CMMI below); examples include checking the design specification against the requirements specification and the code against the design. Software validation checks that the software product satisfies or fits the intended use (high-level checking), i.e., the software meets the user requirements, not merely as specification artifacts or as the needs of those who will operate the software, but as the needs of all the stakeholders (such as users, operators, administrators, managers, investors, etc.). There are two ways to perform software validation: internal and external. During internal software validation, it is assumed that the goals of the stakeholders were correctly understood and that they were expressed in the requirement artifacts precisely and comprehensively. If the software meets the requirement specification, it has been internally validated. External validation happens when it is performed by asking the stakeholders if the software meets their needs. Different software development methodologies call for different levels of user and stakeholder involvement and feedback; so, external validation can be a discrete or a continuous event. Successful final external validation occurs when all the stakeholders accept the software product and express that it satisfies their needs.
Such final external validation requires the use of an acceptance test, which is a dynamic test. However, it is also possible to perform internal static tests to find out if the software meets the requirements specification, but that falls into the scope of static verification because the software is not running. Requirements should be validated before the software product as a whole is ready (the waterfall development process requires them to be perfectly defined before design starts, but iterative development processes do not require this and allow their continual improvement); the validation of the requirements themselves, as just described, is one example of artifact validation. The Capability Maturity Model (CMMI-SW v1.1)[2] gives its own definitions of the two terms. Validation during the software development process can be seen as a form of User Requirements Specification validation, and validation at the end of the development process is equivalent to internal and/or external software validation. Verification, from CMMI's point of view, is evidently of the artifact kind. In other words, software verification ensures that the output of each phase of the software development process effectively carries out what its corresponding input artifact specifies (requirement -> design -> software product), while software validation ensures that the software product meets the needs of all the stakeholders (therefore, the requirement specification was correctly and accurately expressed in the first place). Software verification ensures that "you built it right" and confirms that the product, as provided, fulfills the plans of the developers. Software validation ensures that "you built the right thing" and confirms that the product, as provided, fulfills the intended use and goals of the stakeholders. This article has used the strict or narrow definition of verification. From a testing perspective, both verification and validation are related to the concepts of quality and of software quality assurance. By themselves, verification and validation do not guarantee software quality; planning, traceability, configuration management and other aspects of software engineering are required. Within the modeling and simulation (M&S) community, the definitions of verification, validation and accreditation are similar. The definition of M&S validation focuses on the accuracy with which the M&S represents the real-world intended use(s). Determining the degree of M&S accuracy is required because all M&S are approximations of reality, and it is usually critical to determine if the degree of approximation is acceptable for the intended use(s). This stands in contrast to software validation. In mission-critical software systems, formal methods may be used to ensure the correct operation of a system. These formal methods can prove costly, however, representing as much as 80 percent of total software design cost. Independent Software Verification and Validation (ISVV) is targeted at safety-critical software systems and aims to increase the quality of software products, thereby reducing risks and costs throughout the operational life of the software. The goal of ISVV is to provide assurance that software performs to the specified level of confidence and within its designed parameters and defined requirements.[4][5] ISVV activities are performed by independent engineering teams, not involved in the software development process, to assess the processes and the resulting products. ISVV team independence is maintained at three different levels: financial, managerial and technical.
ISVV goes beyond "traditional" verification and validation techniques, which are applied by development teams. While the latter aim to ensure that the software performs well against the nominal requirements, ISVV is focused on non-functional requirements such as robustness and reliability, and on conditions that can lead the software to fail. ISVV results and findings are fed back to the development teams for correction and improvement. ISVV derives from the application of IV&V (Independent Verification and Validation) to software. Early ISVV application (as known today) dates back to the early 1970s, when the U.S. Army sponsored the first significant program related to IV&V for the Safeguard Anti-Ballistic Missile System.[6] Another example is NASA's IV&V Program, which was established in 1993.[7] By the end of the 1970s IV&V was rapidly becoming popular. The constant increase in the complexity, size and importance of software led to an increasing demand for IV&V applied to software. Meanwhile, IV&V (and ISVV for software systems) became consolidated and is now widely used by organizations such as the DoD, FAA,[8] NASA[7] and ESA.[9] IV&V is mentioned in DO-178B and ISO/IEC 12207, and formalized in IEEE 1012. Initially, in 2004–2005, a European consortium led by the European Space Agency, and composed of DNV, Critical Software SA, Terma and CODA SciSys plc, created the first version of a guide devoted to ISVV, called "ESA Guide for Independent Verification and Validation", with support from other organizations.[10] This guide covers the methodologies applicable to all the software engineering phases with respect to ISVV. In 2008 the European Space Agency released a second version, having received inputs from many different European space ISVV stakeholders.[10] ISVV is usually composed of five principal phases; these phases can be executed sequentially or as the result of a tailoring process. Software often must meet the compliance requirements of legally regulated industries, which are often guided by government agencies[11][12] or industrial administrative authorities. For instance, the FDA requires software versions and patches to be validated.[13]
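As a concrete, simplified illustration of the verification/validation distinction discussed above, the sketch below pairs a unit-style check against a written specification (verification) with an acceptance-style check phrased in stakeholder terms (validation). The loan-instalment function, its formula and the acceptance threshold are hypothetical examples, not drawn from the source.

```python
def monthly_instalment(principal: float, annual_rate: float, months: int) -> float:
    """Specification: return the fixed monthly instalment for an amortised loan."""
    r = annual_rate / 12
    return principal * r / (1 - (1 + r) ** -months)

# Verification ("did we build it right?"): the implementation matches the written
# specification, checked here against a hand-computed value.
assert abs(monthly_instalment(1000, 0.12, 12) - 88.85) < 0.01

# Validation ("did we build the right thing?"): a stakeholder-facing acceptance check,
# e.g. "a customer borrowing 1000 over a year never pays more than 100 a month".
assert monthly_instalment(1000, 0.12, 12) <= 100

print("verification and validation checks passed")
```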
https://en.wikipedia.org/wiki/Software_verification_and_validation
In statistics, model validation is the task of evaluating whether a chosen statistical model is appropriate or not. Oftentimes in statistical inference, inferences from models that appear to fit their data may be flukes, resulting in a misunderstanding by researchers of the actual relevance of their model. To combat this, model validation is used to test whether a statistical model can hold up to permutations in the data. Model validation is also called model criticism or model evaluation. This topic is not to be confused with the closely related task of model selection, the process of discriminating between multiple candidate models: model validation does not concern so much the conceptual design of models as it tests only the consistency between a chosen model and its stated outputs. There are many ways to validate a model. Residual plots show the difference between the actual data and the model's predictions: correlations in the residual plots may indicate a flaw in the model. Cross validation is a method of model validation that iteratively refits the model, each time leaving out just a small sample and comparing whether the samples left out are predicted by the model: there are many kinds of cross validation. Predictive simulation is used to compare simulated data to actual data. External validation involves fitting the model to new data. The Akaike information criterion estimates the quality of a model. Model validation comes in many forms, and the specific method of model validation a researcher uses is often a constraint of their research design. To emphasize, this means that there is no one-size-fits-all method of validating a model. For example, if a researcher is operating with a very limited set of data, but data they have strong prior assumptions about, they may consider validating the fit of their model by using a Bayesian framework and testing the fit of their model under various prior distributions. However, if a researcher has a lot of data and is testing multiple nested models, these conditions may lend themselves toward cross validation and possibly a leave-one-out test. These are two abstract examples, and any actual model validation will have to consider far more intricacies than described here, but they illustrate that model validation methods are always circumstantial. In general, models can be validated using existing data or with new data; both methods are discussed in the following subsections, and a note of caution is provided, too. Validation based on existing data involves analyzing the goodness of fit of the model or analyzing whether the residuals seem to be random (i.e. residual diagnostics). This method involves analyzing the model's closeness to the data and trying to understand how well the model predicts its own data. One example of this method is in Figure 1, which shows a polynomial function fit to some data. We see that the polynomial function does not conform well to the data, which appears linear, and this might invalidate the polynomial model. Commonly, statistical models on existing data are validated using a validation set, which may also be referred to as a holdout set. A validation set is a set of data points that the user leaves out when fitting a statistical model. After the statistical model is fitted, the validation set is used as a measure of the model's error. If the model fits well on the initial data but has a large error on the validation set, this is a sign of overfitting.
If new data becomes available, an existing model can be validated by assessing whether the new data is predicted by the old model. If the new data is not predicted by the old model, then the model might not be valid for the researcher's goals. With this in mind, a modern approach to validating a neural network is to test its performance on domain-shifted data, which ascertains whether the model learned domain-invariant features.[1]

A model can be validated only relative to some application area.[2][3] A model that is valid for one application might be invalid for some other applications. As an example, consider the curve in Figure 1: if the application only used inputs from the interval [0, 2], then the curve might well be an acceptable model.

When doing a validation, there are three notable causes of potential difficulty, according to the Encyclopedia of Statistical Sciences.[4] The three causes are these: lack of data; lack of control of the input variables; and uncertainty about the underlying probability distributions and correlations. The usual methods for dealing with difficulties in validation include the following: checking the assumptions made in constructing the model; examining the available data and related model outputs; and applying expert judgment.[2] Note that expert judgment commonly requires expertise in the application area.[2]

Expert judgment can sometimes be used to assess the validity of a prediction without obtaining real data: e.g. for the curve in Figure 1, an expert might well be able to assess that a substantial extrapolation will be invalid. Additionally, expert judgment can be used in Turing-type tests, where experts are presented with both real data and related model outputs and then asked to distinguish between the two.[5]

For some classes of statistical models, specialized methods of performing validation are available. As an example, if the statistical model was obtained via a regression, then specialized analyses for regression model validation exist and are generally employed.

Residual diagnostics comprise analyses of the residuals to determine whether the residuals seem to be effectively random. Such analyses typically require estimates of the probability distributions for the residuals. Estimates of the residuals' distributions can often be obtained by repeatedly running the model, i.e. by using repeated stochastic simulations (employing a pseudorandom number generator for random variables in the model). If the statistical model was obtained via a regression, then regression-residual diagnostics exist and may be used; such diagnostics have been well studied.

Cross validation is a method of sampling that involves leaving some parts of the data out of the fitting process and then seeing whether the data left out are close to or far from where the model predicts they would be. In practice, cross validation techniques fit the model many times, each time with a portion of the data, and compare each fit to the portion it did not use. If the fitted models rarely describe the data they were not trained on, the model is probably wrong.
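The following is a minimal sketch of k-fold cross validation along the lines described above (assuming numpy and scikit-learn are available; the data and model are again invented for illustration): the model is refitted several times, each time leaving one fold out, and the left-out fold is used to score the fit:

    # Minimal sketch of k-fold cross-validation.
    import numpy as np
    from sklearn.model_selection import KFold
    from sklearn.linear_model import LinearRegression
    from sklearn.metrics import mean_squared_error

    rng = np.random.default_rng(1)
    X = rng.uniform(0, 2, size=(60, 1))
    y = 3.0 * X.ravel() + rng.normal(0, 0.5, 60)

    model = LinearRegression()
    fold_errors = []
    for train_idx, test_idx in KFold(n_splits=5, shuffle=True, random_state=1).split(X):
        model.fit(X[train_idx], y[train_idx])              # refit without the held-out fold
        pred = model.predict(X[test_idx])                   # predict the fold left out
        fold_errors.append(mean_squared_error(y[test_idx], pred))

    print("per-fold MSE:", np.round(fold_errors, 3))
    print("mean cross-validated MSE:", np.mean(fold_errors))
    # sklearn.model_selection.LeaveOneOut could replace KFold above for a leave-one-out test.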
https://en.wikipedia.org/wiki/Statistical_model_validation
Usability testing is a technique used in user-centered interaction design to evaluate a product by testing it on users. This can be seen as an irreplaceable usability practice, since it gives direct input on how real users use the system.[1] It is more concerned with the intuitiveness of the product's design and is conducted with users who have no prior exposure to the product. Such testing is paramount to the success of an end product, as a fully functioning application that creates confusion amongst its users will not last for long.[2] This is in contrast with usability inspection methods, where experts use different methods to evaluate a user interface without involving users.

Usability testing focuses on measuring a human-made product's capacity to meet its intended purposes. Examples of products that commonly benefit from usability testing are food, consumer products, websites or web applications, computer interfaces, documents, and devices. Usability testing measures the usability, or ease of use, of a specific object or set of objects, whereas general human–computer interaction studies attempt to formulate universal principles. Simply gathering opinions on an object or a document is market research or qualitative research rather than usability testing. Usability testing usually involves systematic observation under controlled conditions to determine how well people can use the product.[3] However, often both qualitative research and usability testing are used in combination, to better understand users' motivations and perceptions in addition to their actions.

Rather than showing users a rough draft and asking, "Do you understand this?", usability testing involves watching people trying to use something for its intended purpose. For example, when testing instructions for assembling a toy, the test subjects should be given the instructions and a box of parts and, rather than being asked to comment on the parts and materials, they should be asked to put the toy together. Instruction phrasing, illustration quality, and the toy's design all affect the assembly process.

Setting up a usability test involves carefully creating a scenario, or a realistic situation, wherein the person performs a list of tasks using the product being tested while observers watch and take notes (dynamic verification). Several other test instruments such as scripted instructions, paper prototypes, and pre- and post-test questionnaires are also used to gather feedback on the product being tested (static verification). For example, to test the attachment function of an e-mail program, a scenario would describe a situation where a person needs to send an e-mail attachment, and ask them to undertake this task. The aim is to observe how people function in a realistic manner, so that developers can identify the problem areas and fix them. Techniques popularly used to gather data during a usability test include the think aloud protocol, co-discovery learning, and eye tracking.

Hallway testing, also known as guerrilla usability, is a quick and cheap method of usability testing in which people, such as those passing by in the hallway, are asked to try using the product or service. This can help designers identify "brick walls", problems so serious that users simply cannot advance, in the early stages of a new design. Anyone but project designers and engineers can be used (they tend to act as "expert reviewers" because they are too close to the project). This type of testing is an example of convenience sampling, and thus the results are potentially biased.
In a scenario where usability evaluators, developers, and prospective users are located in different countries and time zones, conducting a traditional lab usability evaluation creates challenges both from the cost and logistical perspectives. These concerns led to research on remote usability evaluation, with the user and the evaluators separated over space and time. Remote testing, which facilitates evaluations being done in the context of the user's other tasks and technology, can be either synchronous or asynchronous. The former involves real-time one-on-one communication between the evaluator and the user, while the latter involves the evaluator and user working separately.[4] Numerous tools are available to address the needs of both these approaches.

Synchronous usability testing methodologies involve video conferencing or employ remote application sharing tools such as WebEx. WebEx and GoToMeeting are the most commonly used technologies to conduct a synchronous remote usability test.[5] However, synchronous remote testing may lack the immediacy and sense of "presence" desired to support a collaborative testing process. Moreover, managing interpersonal dynamics across cultural and linguistic barriers may require approaches sensitive to the cultures involved. Other disadvantages include having reduced control over the testing environment and the distractions and interruptions experienced by the participants in their native environment.[6] One of the newer methods developed for conducting a synchronous remote usability test is the use of virtual worlds.[7]

Asynchronous methodologies include automatic collection of users' click streams, user logs of critical incidents that occur while interacting with the application, and subjective feedback on the interface by users.[6] Similar to an in-lab study, an asynchronous remote usability test is task-based, and the platform allows researchers to capture clicks and task times. Hence, for many large companies, this allows researchers to better understand visitors' intents when visiting a website or mobile site. Additionally, this style of user testing provides an opportunity to segment feedback by demographic, attitudinal, and behavioral type. The tests are carried out in the user's own environment (rather than labs), helping to further simulate real-life scenario testing. This approach also provides a vehicle to easily solicit feedback from users in remote areas, quickly and with lower organizational overheads. In recent years, conducting usability testing asynchronously has become prevalent, allowing testers to provide feedback in their free time and from the comfort of their own home.

Expert review is another general method of usability testing. As the name suggests, this method relies on bringing in experts with experience in the field (possibly from companies that specialize in usability testing) to evaluate the usability of a product. A heuristic evaluation or usability audit is an evaluation of an interface by one or more human factors experts. Evaluators measure the usability, efficiency, and effectiveness of the interface based on usability principles, such as the 10 usability heuristics originally defined by Jakob Nielsen in 1994.[8] Nielsen's usability heuristics have continued to evolve in response to user research and new devices.

Similar to expert reviews, automated expert reviews provide usability testing but through the use of programs given rules for good design and heuristics.
Though an automated review might not provide as much detail and insight as reviews from people, it can be finished more quickly and consistently. The idea of creating surrogate users for usability testing is an ambitious direction for the artificial intelligence community.

In web development and marketing, A/B testing or split testing is an experimental approach to web design (especially user experience design) that aims to identify changes to web pages which increase or maximize an outcome of interest (e.g., click-through rate for a banner advertisement). As the name implies, two versions (A and B) are compared, which are identical except for one variation that might impact a user's behavior. Version A might be the one currently used, while version B is modified in some respect. For instance, on an e-commerce website the purchase funnel is typically a good candidate for A/B testing, as even marginal improvements in drop-off rates can represent a significant gain in sales. Significant improvements can be seen through testing elements like copy text, layouts, images, and colors. Multivariate testing or bucket testing is similar to A/B testing but tests more than two versions at the same time.

In the early 1990s, Jakob Nielsen, at that time a researcher at Sun Microsystems, popularized the concept of using numerous small usability tests, typically with only five participants each, at various stages of the development process. His argument is that, once it is found that two or three people are totally confused by the home page, little is gained by watching more people suffer through the same flawed design. "Elaborate usability tests are a waste of resources. The best results come from testing no more than five users and running as many small tests as you can afford."[9]

The claim that "five users is enough" was later described by a mathematical model[10] which gives the proportion of uncovered problems U as

U = 1 − (1 − p)^n

where p is the probability of one subject identifying a specific problem and n is the number of subjects (or test sessions). As n grows, the number of problems uncovered approaches the total number of real existing problems asymptotically. For example, if p = 0.31, five test sessions would be expected to uncover about 1 − (1 − 0.31)^5 ≈ 84% of the problems.

In later research, Nielsen's claim has been questioned on several grounds, using both empirical evidence[11] and more advanced mathematical models.[12] Nielsen does not advocate stopping after a single test with five users; his point is that testing with five users, fixing the problems they uncover, and then testing the revised site with five different users is a better use of limited resources than running a single usability test with 10 users. In practice, the tests are run once or twice per week during the entire development cycle, using three to five test subjects per round, and with the results delivered within 24 hours to the designers. The number of users actually tested over the course of the project can thus easily reach 50 to 100 people. Research shows that user testing conducted by organisations most commonly involves the recruitment of 5-10 participants.[14]

In the early stage, when users are most likely to immediately encounter problems that stop them in their tracks, almost anyone of normal intelligence can be used as a test subject. In stage two, testers will recruit test subjects across a broad spectrum of abilities.
For example, in one study, experienced users showed no problem using any design, from the first to the last, while naive users and self-identified power users both failed repeatedly.[15] Later on, as the design smooths out, users should be recruited from the target population.

When the method is applied to a sufficient number of people over the course of a project, the objections raised above become addressed: the sample size ceases to be small, and usability problems that arise with only occasional users are found. The value of the method lies in the fact that specific design problems, once encountered, are never seen again because they are immediately eliminated, while the parts that appear successful are tested over and over. While it is true that the initial problems in the design may be tested by only five users, when the method is properly applied, the parts of the design that worked in that initial test will go on to be tested by 50 to 100 people.

A 1982 Apple Computer manual for developers advised on usability testing.[16] Apple advised developers, "You should begin testing as soon as possible, using drafted friends, relatives, and new employees":[16]

Our testing method is as follows. We set up a room with five to six computer systems. We schedule two to three groups of five to six users at a time to try out the systems (often without their knowing that it is the software rather than the system that we are testing). We have two of the designers in the room. Any fewer, and they miss a lot of what is going on. Any more and the users feel as though there is always someone breathing down their necks.

Designers must watch people use the program in person, because:[16]

Ninety-five percent of the stumbling blocks are found by watching the body language of the users. Watch for squinting eyes, hunched shoulders, shaking heads, and deep, heart-felt sighs. When a user hits a snag, he will assume it is "on account of he is not too bright": he will not report it; he will hide it ... Do not make assumptions about why a user became confused. Ask him. You will often be surprised to learn what the user thought the program was doing at the time he got lost.

Usability testing has been a formal subject of academic instruction in different disciplines.[17] Usability testing is important to composition studies and online writing instruction (OWI).[18] Scholar Collin Bjork argues that usability testing is "necessary but insufficient for developing effective OWI, unless it is also coupled with the theories of digital rhetoric."[19]

Survey products include paper and digital surveys, forms, and instruments that can be completed or used by the survey respondent alone or with a data collector. Usability testing is most often done in web surveys and focuses on how people interact with the survey, such as navigating the survey, entering survey responses, and finding help information. Usability testing complements traditional survey pretesting methods such as cognitive pretesting (how people understand the products), pilot testing (how the survey procedures will work), and expert review by a subject matter expert in survey methodology.[20]

In translated survey products, usability testing has shown that "cultural fitness" must be considered at the sentence and word levels and in the designs for data entry and navigation,[21] and that presenting translation and visual cues for common functionalities (tabs, hyperlinks, drop-down menus, and URLs) helps to improve the user experience.[22]
https://en.wikipedia.org/wiki/Usability_testing
A Validation Master Plan, also referred to as a "VMP", outlines the principles involved in the qualification of a facility, defines the areas and systems to be validated, and provides a written program for achieving and maintaining a qualified facility.[1] A VMP is the foundation for the validation program and should include process validation, facility and utility qualification and validation, equipment qualification, cleaning validation, and computer validation. It is a key document in the GMP (Good Manufacturing Practice) regulated pharmaceutical industry, as it drives a structured approach to validation projects.[2]

Food and Drug Administration inspectors often look at VMPs during audits to see whether or not a facility's validation strategy is well thought out and organized. A VMP should give logical reasoning for including or excluding every system associated with a validation project, based on a risk assessment. The GAMP 5 standard recommends an approach to the creation of the plan.[3]

Topics commonly covered include: introduction, scope, responsibilities, description of facility and design, building and plant layout, cleanrooms and associated controlled environments, storage areas, personnel, personnel and material flow, water and solid waste handling, infrastructure and utilities, water system, ventilation and air-conditioning system, clean steam, compressed air, gases and vacuum system, list of manufacturing equipment, building management systems, products that are planned to be validated, qualification/validation approach, process validation and cleaning validation approach, microbiological monitoring, computer validation, calibration, maintenance, and related SOPs.
https://en.wikipedia.org/wiki/Validation_master_plan
Verification and validation of computer simulation models is conducted during the development of a simulation model with the ultimate goal of producing an accurate and credible model.[1][2] "Simulation models are increasingly being used to solve problems and to aid in decision-making. The developers and users of these models, the decision makers using information obtained from the results of these models, and the individuals affected by decisions based on such models are all rightly concerned with whether a model and its results are 'correct'."[3] This concern is addressed through verification and validation of the simulation model.

Simulation models are approximate imitations of real-world systems and never exactly imitate the real-world system. For that reason, a model should be verified and validated to the degree needed for the model's intended purpose or application.[3] The verification and validation of a simulation model starts after functional specifications have been documented and initial model development has been completed.[4] Verification and validation is an iterative process that takes place throughout the development of a model.[1][4]

In the context of computer simulation, verification of a model is the process of confirming that it is correctly implemented with respect to the conceptual model (it matches specifications and assumptions deemed acceptable for the given purpose of application).[1][4] During verification the model is tested to find and fix errors in the implementation of the model.[4] Various processes and techniques are used to assure that the model matches specifications and assumptions with respect to the model concept. The objective of model verification is to ensure that the implementation of the model is correct.

There are many techniques that can be utilized to verify a model. These include, but are not limited to, having the model checked by an expert, making logic flow diagrams that include each logically possible action, examining the model output for reasonableness under a variety of settings of the input parameters, and using an interactive debugger.[1] Many software engineering techniques used for software verification are applicable to simulation model verification.[1]

Validation checks the accuracy of the model's representation of the real system. Model validation is defined to mean "substantiation that a computerized model within its domain of applicability possesses a satisfactory range of accuracy consistent with the intended application of the model".[3] A model should be built for a specific purpose or set of objectives, and its validity determined for that purpose.[3]

There are many approaches that can be used to validate a computer model. The approaches range from subjective reviews to objective statistical tests. One approach that is commonly used is to have the model builders determine the validity of the model through a series of tests.[3] Naylor and Finger [1967] formulated a three-step approach to model validation that has been widely followed:[1] Step 1. Build a model that has high face validity. Step 2. Validate model assumptions. Step 3.
Compare the model input-output transformations to corresponding input-output transformations for the real system.[5]

A model that has face validity appears to be a reasonable imitation of a real-world system to people who are knowledgeable of the real-world system.[4] Face validity is tested by having users and people knowledgeable with the system examine model output for reasonableness and, in the process, identify deficiencies.[1] An added advantage of having the users involved in validation is that the model's credibility to the users, and the users' confidence in the model, increases.[1][4] Sensitivity to model inputs can also be used to judge face validity.[1] For example, if a simulation of a fast-food restaurant drive-through were run twice with customer arrival rates of 20 per hour and 40 per hour, then model outputs such as average wait time or maximum number of customers waiting would be expected to increase with the arrival rate.

Assumptions made about a model generally fall into two categories: structural assumptions about how the system works and data assumptions. We can also consider simplification assumptions, which are those used to simplify reality.[6]

Assumptions made about how the system operates and how it is physically arranged are structural assumptions. For example: how many servers are there in a fast-food drive-through lane, and, if there is more than one, how are they utilized? Do the servers work in parallel, where a customer completes a transaction by visiting a single server, or does one server take orders and handle payment while another prepares and serves the order? Many structural problems in the model come from poor or incorrect assumptions.[4] If possible, the workings of the actual system should be closely observed to understand how it operates.[4] The system's structure and operation should also be verified with users of the actual system.[1]

There must be a sufficient amount of appropriate data available to build a conceptual model and validate a model. Lack of appropriate data is often the reason attempts to validate a model fail.[3] Data should be verified to come from a reliable source. A typical error is assuming an inappropriate statistical distribution for the data.[1] The assumed statistical model should be tested using goodness-of-fit tests and other techniques.[1][3] Examples of goodness-of-fit tests are the Kolmogorov–Smirnov test and the chi-square test. Any outliers in the data should be checked.[3]

Simplification assumptions are those that we know are not true but are needed to simplify the problem we want to solve.[6] The use of these assumptions must be restricted to assure that the model remains correct enough to serve as an answer to the problem we want to solve.

The model is viewed as an input-output transformation for these tests. The validation test consists of comparing outputs from the system under consideration to model outputs for the same set of input conditions. Data recorded while observing the system must be available in order to perform this test.[3] The model output that is of primary interest should be used as the measure of performance.[1] For example, if the system under consideration is a fast-food drive-through where the input to the model is customer arrival time and the output measure of performance is average customer time in line, then the actual arrival time and time spent in line for customers at the drive-through would be recorded.
The model would be run with the actual arrival times, and the model's average time in line would be compared with the actual average time spent in line using one or more tests.

Statistical hypothesis testing using the t-test can be used as a basis to accept the model as valid or reject it as invalid. The hypothesis to be tested is H0: E(Y) = μ0 versus H1: E(Y) ≠ μ0, where E(Y) is the expected value of the model's variable of interest and μ0 is the observed value for the system. The test is conducted for a given sample size and level of significance, α. To perform the test, a number n of statistically independent runs of the model are conducted, and an average or expected value, E(Y), for the variable of interest is produced, along with its sample standard deviation S. The test statistic t0 = (E(Y) − μ0) / (S / √n) is then computed for the given α, n, E(Y), and μ0. If |t0| exceeds the critical value of the t-distribution for α and n − 1 degrees of freedom, H0 is rejected and the model needs adjustment.

There are two types of error that can occur using hypothesis testing: rejecting a valid model, called type I error or "model builder's risk", and accepting an invalid model, called type II error, β, or "model user's risk".[3] The level of significance α is equal to the probability of a type I error.[3] If α is small, then rejecting the null hypothesis is a strong conclusion.[1] For example, if α = 0.05 and the null hypothesis is rejected, there is only a 0.05 probability of rejecting a model that is valid. Decreasing the probability of a type II error is very important.[1][3] The probability of correctly detecting an invalid model is 1 − β. The probability of a type II error depends on the sample size and the actual difference between the sample value and the observed value. Increasing the sample size decreases the risk of a type II error.

A statistical technique in which the amount of model accuracy is specified as a range has recently been developed. The technique uses hypothesis testing to accept a model if the difference between a model's variable of interest and a system's variable of interest is within a specified range of accuracy.[7] A requirement is that both the system data and model data be approximately Normally, Independently, and Identically Distributed (NIID). The t-test statistic is used in this technique. If the mean of the model is μm and the mean of the system is μs, then the difference between the model and the system is D = μm − μs. The hypothesis to be tested is whether D is within the acceptable range of accuracy. Let L be the lower limit for accuracy and U the upper limit for accuracy. Then H0: L ≤ D ≤ U versus H1: D < L or D > U is to be tested.

The operating characteristic (OC) curve is the probability that the null hypothesis is accepted when it is true. The OC curve characterizes the probabilities of both type I and type II errors. Risk curves for model builder's risk and model user's risk can be developed from the OC curves. By comparing curves with a fixed sample size, tradeoffs between model builder's risk and model user's risk can be seen easily in the risk curves.[7] If model builder's risk, model user's risk, and the upper and lower limits for the range of accuracy are all specified, then the sample size needed can be calculated.[7]

Confidence intervals can be used to evaluate if a model is "close enough"[1] to a system for some variable of interest. The difference between the known model value, μ0, and the system value, μ, is checked to see if it is less than a value small enough that the model is valid with respect to that variable of interest. That value is denoted by the symbol ε. To perform the test, a number n of statistically independent runs of the model are conducted, and a mean or expected value, E(Y) or μ, for the simulation output variable of interest Y, with a standard deviation S, is produced.
A confidence level, 100(1 − α)%, is selected. An interval [a, b] is constructed as

[a, b] = [E(Y) − t(α/2, n−1)·S/√n, E(Y) + t(α/2, n−1)·S/√n],

where t(α/2, n−1) is the critical value from the t-distribution for the given level of significance and n − 1 degrees of freedom. If the statistical assumptions cannot be satisfied, or there is insufficient data for the system, a graphical comparison of model outputs to system outputs can be used to make a subjective decision; however, other, objective tests are preferable.[3]

Documents and standards involving verification and validation of computational modeling and simulation are developed by the American Society of Mechanical Engineers (ASME) Verification and Validation (V&V) Committee. ASME V&V 10 provides guidance in assessing and increasing the credibility of computational solid mechanics models through the processes of verification, validation, and uncertainty quantification.[8] ASME V&V 10.1 provides a detailed example to illustrate the concepts described in ASME V&V 10.[9] ASME V&V 20 provides a detailed methodology for validating computational simulations as applied to fluid dynamics and heat transfer.[10] ASME V&V 40 provides a framework for establishing model credibility requirements for computational modeling, and presents examples specific to the medical device industry.[11]
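As a minimal sketch of the input-output validation test and confidence interval described above (assuming numpy and scipy are available; the run results and the observed system value are invented purely for illustration), the mean of n independent simulation runs is compared with the observed system value using a one-sample t-test:

    # Compare E(Y), the mean of n simulation runs, with the system value mu_0.
    import numpy as np
    from scipy import stats

    model_runs = np.array([4.9, 5.3, 5.1, 4.7, 5.4, 5.0, 5.2, 4.8])  # avg time in line per run (illustrative)
    mu_0 = 5.6          # observed average time in line for the real system (illustrative)
    alpha = 0.05        # level of significance
    n = len(model_runs)

    E_Y = model_runs.mean()                 # E(Y), mean model output
    S = model_runs.std(ddof=1)              # sample standard deviation
    t0 = (E_Y - mu_0) / (S / np.sqrt(n))    # test statistic
    t_crit = stats.t.ppf(1 - alpha / 2, df=n - 1)

    print(f"t0 = {t0:.3f}, critical value = {t_crit:.3f}")
    if abs(t0) > t_crit:
        print("Reject H0: the model output differs from the system; adjust the model.")
    else:
        print("Fail to reject H0: no evidence the model is invalid for this measure.")

    # Confidence-interval form of the same comparison:
    half_width = t_crit * S / np.sqrt(n)
    print(f"{100 * (1 - alpha):.0f}% CI for E(Y): [{E_Y - half_width:.3f}, {E_Y + half_width:.3f}]")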
https://en.wikipedia.org/wiki/Verification_and_validation_of_computer_simulation_models
In computing, telecommunication, information theory, and coding theory, forward error correction (FEC) or channel coding[1][2][3] is a technique used for controlling errors in data transmission over unreliable or noisy communication channels. The central idea is that the sender encodes the message in a redundant way, most often by using an error correction code, or error correcting code (ECC).[4][5] The redundancy allows the receiver not only to detect errors that may occur anywhere in the message, but often to correct a limited number of errors. Therefore a reverse channel to request re-transmission may not be needed. The cost is a fixed, higher forward channel bandwidth. The American mathematician Richard Hamming pioneered this field in the 1940s and invented the first error-correcting code in 1950: the Hamming (7,4) code.[5]

FEC can be applied in situations where re-transmissions are costly or impossible, such as one-way communication links or when transmitting to multiple receivers in multicast. Long-latency connections also benefit; in the case of satellites orbiting distant planets, retransmission due to errors would create a delay of several hours. FEC is also widely used in modems and in cellular networks. FEC processing in a receiver may be applied to a digital bit stream or in the demodulation of a digitally modulated carrier. For the latter, FEC is an integral part of the initial analog-to-digital conversion in the receiver. The Viterbi decoder implements a soft-decision algorithm to demodulate digital data from an analog signal corrupted by noise. Many FEC decoders can also generate a bit-error rate (BER) signal which can be used as feedback to fine-tune the analog receiving electronics. FEC information is added to mass storage (magnetic, optical, and solid state/flash based) devices to enable recovery of corrupted data, and is used as ECC computer memory on systems that require special provisions for reliability.

The maximum proportion of errors or missing bits that can be corrected is determined by the design of the ECC, so different forward error correcting codes are suitable for different conditions. In general, a stronger code induces more redundancy that needs to be transmitted using the available bandwidth, which reduces the effective bit-rate while improving the received effective signal-to-noise ratio. The noisy-channel coding theorem of Claude Shannon can be used to compute the maximum achievable communication bandwidth for a given maximum acceptable error probability. This establishes bounds on the theoretical maximum information transfer rate of a channel with some given base noise level. However, the proof is not constructive, and hence gives no insight into how to build a capacity-achieving code. After years of research, some advanced FEC systems like polar codes[3] come very close to the theoretical maximum given by the Shannon channel capacity under the hypothesis of an infinite-length frame.

ECC is accomplished by adding redundancy to the transmitted information using an algorithm. A redundant bit may be a complicated function of many original information bits. The original information may or may not appear literally in the encoded output; codes that include the unmodified input in the output are systematic, while those that do not are non-systematic. A simplistic example of ECC is to transmit each data bit three times, which is known as a (3,1) repetition code. Through a noisy channel, a receiver might then see any of the eight possible three-bit versions of each transmitted triplet.
This allows an error in any one of the three samples to be corrected by "majority vote", or "democratic voting"; the correcting ability of this ECC is thus limited to a single erroneous bit in each triplet. Though simple to implement and widely used, this triple modular redundancy is a relatively inefficient ECC. Better ECC codes typically examine the last several tens or even the last several hundreds of previously received bits to determine how to decode the current small handful of bits (typically in groups of two to eight bits).

ECC could be said to work by "averaging noise"; since each data bit affects many transmitted symbols, the corruption of some symbols by noise usually allows the original user data to be extracted from the other, uncorrupted received symbols that also depend on the same user data. Most telecommunication systems use a fixed channel code designed to tolerate the expected worst-case bit error rate, and then fail to work at all if the bit error rate is ever worse. However, some systems adapt to the given channel error conditions: some instances of hybrid automatic repeat-request use a fixed ECC method as long as the ECC can handle the error rate, then switch to ARQ when the error rate gets too high; adaptive modulation and coding uses a variety of ECC rates, adding more error-correction bits per packet when there are higher error rates in the channel, or taking them out when they are not needed.

The two main categories of ECC codes are block codes and convolutional codes. There are many types of block codes; Reed–Solomon coding is noteworthy for its widespread use in compact discs, DVDs, and hard disk drives. Other examples of classical block codes include Golay, BCH, multidimensional parity, and Hamming codes. Hamming ECC is commonly used to correct NAND flash memory errors.[6] This provides single-bit error correction and 2-bit error detection. Hamming codes are only suitable for more reliable single-level cell (SLC) NAND. Denser multi-level cell (MLC) NAND may use multi-bit correcting ECC such as BCH or Reed–Solomon.[7][8] NOR flash typically does not use any error correction.[7]

Classical block codes are usually decoded using hard-decision algorithms,[9] which means that for every input and output signal a hard decision is made whether it corresponds to a one or a zero bit. In contrast, convolutional codes are typically decoded using soft-decision algorithms like the Viterbi, MAP, or BCJR algorithms, which process (discretized) analog signals, and which allow for much higher error-correction performance than hard-decision decoding. Nearly all classical block codes apply the algebraic properties of finite fields. Hence classical block codes are often referred to as algebraic codes.

In contrast to classical block codes that often specify an error-detecting or error-correcting ability, many modern block codes such as LDPC codes lack such guarantees. Instead, modern codes are evaluated in terms of their bit error rates. Most forward error correction codes correct only bit-flips, but not bit-insertions or bit-deletions. In this setting, the Hamming distance is the appropriate way to measure the bit error rate. A few forward error correction codes are designed to correct bit-insertions and bit-deletions, such as marker codes and watermark codes. The Levenshtein distance is a more appropriate way to measure the bit error rate when using such codes.[10]

The fundamental principle of ECC is to add redundant bits in order to help the decoder find out the true message that was encoded by the transmitter.
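As a minimal sketch of the (3,1) repetition code described above (the encoder and decoder functions are illustrative, not taken from any particular library), each data bit is sent three times and the receiver decodes each triplet by majority vote, correcting a single flipped bit per triplet:

    def encode_repetition(bits):
        """Repeat every data bit three times."""
        return [b for bit in bits for b in (bit, bit, bit)]

    def decode_repetition(received):
        """Majority-vote each triplet back to a single bit."""
        decoded = []
        for i in range(0, len(received), 3):
            triplet = received[i:i + 3]
            decoded.append(1 if sum(triplet) >= 2 else 0)
        return decoded

    message = [1, 0, 1, 1]
    codeword = encode_repetition(message)          # [1,1,1, 0,0,0, 1,1,1, 1,1,1]

    corrupted = codeword.copy()
    corrupted[1] ^= 1                              # flip one bit in the first triplet
    corrupted[4] ^= 1                              # flip one bit in the second triplet

    assert decode_repetition(corrupted) == message  # single errors per triplet are corrected
    print(decode_repetition(corrupted))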
The code rate of a given ECC system is defined as the ratio between the number of information bits and the total number of bits (i.e., information plus redundancy bits) in a given communication package. The code rate is hence a real number. A low code rate close to zero implies a strong code that uses many redundant bits to achieve good performance, while a large code rate close to 1 implies a weak code. The redundant bits that protect the information have to be transferred using the same communication resources that they are trying to protect. This causes a fundamental tradeoff between reliability and data rate.[11] At one extreme, a strong code (with a low code rate) can induce a significant increase in the effective receiver SNR (signal-to-noise ratio), decreasing the bit error rate at the cost of reducing the effective data rate. At the other extreme, not using any ECC (i.e., a code rate equal to 1) uses the full channel for information transfer purposes, at the cost of leaving the bits without any additional protection.

One interesting question is the following: how efficient in terms of information transfer can an ECC be that has a negligible decoding error rate? This question was answered by Claude Shannon with his second theorem, which says that the channel capacity is the maximum bit rate achievable by any ECC whose error rate tends to zero.[12] His proof relies on Gaussian random coding, which is not suitable for real-world applications. The upper bound given by Shannon's work inspired a long journey in designing ECCs that can come close to the ultimate performance boundary. Various codes today can come very close to the Shannon limit. However, capacity-achieving ECCs are usually extremely complex to implement.

The most popular ECCs involve a trade-off between performance and computational complexity. Usually, their parameters give a range of possible code rates, which can be optimized depending on the scenario. Usually, this optimization is done in order to achieve a low decoding error probability while minimizing the impact on the data rate. Another criterion for optimizing the code rate is to balance a low error rate against the number of retransmissions, in order to reduce the energy cost of the communication.[13]

Classical (algebraic) block codes and convolutional codes are frequently combined in concatenated coding schemes, in which a short constraint-length Viterbi-decoded convolutional code does most of the work and a block code (usually Reed–Solomon) with larger symbol size and block length "mops up" any errors made by the convolutional decoder. Single-pass decoding with this family of error correction codes can yield very low error rates, but for long-range transmission conditions (like deep space) iterative decoding is recommended. Concatenated codes have been standard practice in satellite and deep-space communications since Voyager 2 first used the technique in its 1986 encounter with Uranus. The Galileo craft used iterative concatenated codes to compensate for the very high error rate conditions caused by having a failed antenna.

Low-density parity-check (LDPC) codes are a class of highly efficient linear block codes made from many single parity check (SPC) codes. They can provide performance very close to the channel capacity (the theoretical maximum) using an iterated soft-decision decoding approach, at linear time complexity in terms of their block length. Practical implementations rely heavily on decoding the constituent SPC codes in parallel. LDPC codes were first introduced by Robert G.
Gallager in his PhD thesis in 1960, but due to the computational effort of implementing the encoder and decoder and the introduction of Reed–Solomon codes, they were mostly ignored until the 1990s. LDPC codes are now used in many recent high-speed communication standards, such as DVB-S2 (Digital Video Broadcasting – Satellite – Second Generation), WiMAX (IEEE 802.16e standard for microwave communications), High-Speed Wireless LAN (IEEE 802.11n),[14] 10GBase-T Ethernet (802.3an), and G.hn/G.9960 (ITU-T standard for networking over power lines, phone lines, and coaxial cable). Other LDPC codes are standardized for wireless communication standards within 3GPP MBMS (see fountain codes).

Turbo coding is an iterated soft-decoding scheme that combines two or more relatively simple convolutional codes and an interleaver to produce a block code that can perform to within a fraction of a decibel of the Shannon limit. Predating LDPC codes in terms of practical application, turbo codes now provide similar performance. One of the earliest commercial applications of turbo coding was the CDMA2000 1x (TIA IS-2000) digital cellular technology developed by Qualcomm and sold by Verizon Wireless, Sprint, and other carriers. It is also used for the evolution of CDMA2000 1x specifically for Internet access, 1xEV-DO (TIA IS-856). Like 1x, EV-DO was developed by Qualcomm, and is sold by Verizon Wireless, Sprint, and other carriers (Verizon's marketing name for 1xEV-DO is Broadband Access; Sprint's consumer and business marketing names for 1xEV-DO are Power Vision and Mobile Broadband, respectively).

Sometimes it is only necessary to decode single bits of the message, or to check whether a given signal is a codeword, and to do so without looking at the entire signal. This can make sense in a streaming setting, where codewords are too large to be classically decoded fast enough and where only a few bits of the message are of interest for now. Such codes have also become an important tool in computational complexity theory, e.g., for the design of probabilistically checkable proofs. Locally decodable codes are error-correcting codes for which single bits of the message can be probabilistically recovered by only looking at a small (say constant) number of positions of a codeword, even after the codeword has been corrupted at some constant fraction of positions. Locally testable codes are error-correcting codes for which it can be checked probabilistically whether a signal is close to a codeword by only looking at a small number of positions of the signal. Not all locally decodable codes (LDCs) are locally testable codes (LTCs),[15] nor are they all locally correctable codes (LCCs);[16] q-query LCCs are bounded exponentially[17][18] while LDCs can have subexponential lengths.[19][20]

Interleaving is frequently used in digital communication and storage systems to improve the performance of forward error correcting codes. Many communication channels are not memoryless: errors typically occur in bursts rather than independently. If the number of errors within a code word exceeds the error-correcting code's capability, it fails to recover the original code word. Interleaving alleviates this problem by shuffling source symbols across several code words, thereby creating a more uniform distribution of errors.[21] Therefore, interleaving is widely used for burst error correction.
The analysis of modern iterated codes, like turbo codes and LDPC codes, typically assumes an independent distribution of errors.[22] Systems using LDPC codes therefore typically employ additional interleaving across the symbols within a code word.[23]

For turbo codes, an interleaver is an integral component and its proper design is crucial for good performance.[21][24] The iterative decoding algorithm works best when there are no short cycles in the factor graph that represents the decoder; the interleaver is chosen to avoid short cycles. Several interleaver designs are in use. In multi-carrier communication systems, interleaving across carriers may be employed to provide frequency diversity, e.g., to mitigate frequency-selective fading or narrowband interference.[28]

As an illustration of transmission without interleaving, suppose each group of the same letter in the message aaaabbbbccccddddeeeeffffgggg represents a 4-bit one-bit error-correcting codeword. A burst error may alter the codeword cccc in one bit, which can be corrected, but alter the codeword dddd in three bits, so that it either cannot be decoded at all or might be decoded incorrectly. With interleaving, the symbols of the codewords are spread across the transmission, so the same burst alters only one bit in each of the codewords aaaa, eeee, ffff, and gggg, and a one-bit error-correcting code decodes everything correctly; a sketch of this is given below. Similarly, when a sentence is transmitted without interleaving, a burst error can leave a term such as "AnExample" mostly unintelligible and difficult to correct; with interleaving, no word is completely lost and the missing letters can be recovered with minimal guesswork.

Use of interleaving techniques increases total delay, because the entire interleaved block must be received before the packets can be decoded.[29] Interleavers also hide the structure of errors; without an interleaver, more advanced decoding algorithms can take advantage of the error structure and achieve more reliable communication than a simpler decoder combined with an interleaver. An example of such an algorithm is based on neural network[30] structures.

Simulating the behaviour of error-correcting codes (ECCs) in software is a common practice to design, validate, and improve ECCs. The upcoming wireless 5G standard raises a new range of applications for software ECCs: Cloud Radio Access Networks (C-RAN) in a Software-defined radio (SDR) context. The idea is to use software ECCs directly in the communications: for instance, in 5G, the software ECCs could be located in the cloud and the antennas connected to these computing resources, improving the flexibility of the communication network and eventually increasing the energy efficiency of the system. Various open-source software implementations exist for this purpose.
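The following is a minimal sketch of the block-interleaving idea described in the codeword example above (the helper functions are illustrative, not from any particular library): codeword symbols are written row by row and read out column by column, so a burst of channel errors is spread across several codewords after deinterleaving:

    def interleave(symbols, depth):
        """Write rows of length `depth`, read column by column (length must divide evenly)."""
        rows = [symbols[i:i + depth] for i in range(0, len(symbols), depth)]
        return [row[c] for c in range(depth) for row in rows]

    def deinterleave(symbols, depth):
        """Inverse of interleave for the same depth."""
        n_rows = len(symbols) // depth
        cols = [symbols[i:i + n_rows] for i in range(0, len(symbols), n_rows)]
        return [cols[c][r] for r in range(n_rows) for c in range(depth)]

    message = list("aaaabbbbccccddddeeeeffffgggg")   # seven 4-symbol codewords
    sent = interleave(message, 4)                    # "abcdefgabcdefg..."

    corrupted = sent.copy()
    for i in range(8, 12):                           # burst error over four consecutive symbols
        corrupted[i] = "_"

    received = deinterleave(corrupted, 4)
    print("".join(received))
    # Each 4-symbol codeword now contains at most one corrupted symbol, which a
    # one-symbol-correcting code per codeword could repair.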
https://en.wikipedia.org/wiki/Error_correction_code
A parity bit, or check bit, is a bit added to a string of binary code. Parity bits are a simple form of error detecting code. Parity bits are generally applied to the smallest units of a communication protocol, typically 8-bit octets (bytes), although they can also be applied separately to an entire message string of bits. The parity bit ensures that the total number of 1-bits in the string is even or odd.[1] Accordingly, there are two variants of parity bits: even parity bit and odd parity bit.

In the case of even parity, for a given set of bits, the bits whose value is 1 are counted. If that count is odd, the parity bit value is set to 1, making the total count of occurrences of 1s in the whole set (including the parity bit) an even number. If the count of 1s in a given set of bits is already even, the parity bit's value is 0. In the case of odd parity, the coding is reversed. For a given set of bits, if the count of bits with a value of 1 is even, the parity bit value is set to 1, making the total count of 1s in the whole set (including the parity bit) an odd number. If the count of bits with a value of 1 is odd, the count is already odd so the parity bit's value is 0. Even parity is a special case of a cyclic redundancy check (CRC), where the 1-bit CRC is generated by the polynomial x+1.

In mathematics, parity can refer to the evenness or oddness of an integer, which, when written in its binary form, can be determined just by examining only its least significant bit. In information technology, parity refers to the evenness or oddness, given any set of binary digits, of the number of those bits with value one. Because parity is determined by the state of every one of the bits, this property of parity (being dependent upon all the bits and changing its value from even to odd parity if any one bit changes) allows for its use in error detection and correction schemes.

In telecommunications, the parity referred to by some protocols is for error detection. The transmission medium is preset, at both end points, to agree on either odd parity or even parity. For each string of bits ready to transmit (data packet) the sender calculates its parity bit, zero or one, to make it conform to the agreed parity, even or odd. The receiver of that packet first checks that the parity of the packet as a whole is in accordance with the preset agreement, then, if there was a parity error in that packet, requests a retransmission of that packet.

In computer science, the parity stripe or parity disk in a RAID provides error correction. Parity bits are written at the rate of one parity bit per n bits, where n is the number of disks in the array. When a read error occurs, each bit in the error region is recalculated from its set of n bits. In this way, using one parity bit creates "redundancy" for a region from the size of one bit to the size of one disk (see the RAID discussion below). In electronics, transcoding data with parity can be very efficient, as XOR gates output what is equivalent to a check bit that creates an even parity, and XOR logic design easily scales to any number of inputs. XOR and AND structures comprise the bulk of most integrated circuitry.

If an odd number of bits (including the parity bit) are transmitted incorrectly, the parity bit will be incorrect, thus indicating that a parity error occurred in the transmission. The parity bit is suitable only for detecting errors; it cannot correct any errors, as there is no way to determine the particular bit that is corrupted. The data must be discarded entirely and retransmitted from scratch.
On a noisy transmission medium, successful transmission can therefore take a long time or may never occur. However, parity has the advantage that it uses only a single bit and requires only a number of XOR gates to generate. See Hamming code for an example of an error-correcting code. Parity bit checking is used occasionally for transmitting ASCII characters, which have 7 bits, leaving the 8th bit as a parity bit.

For example, the parity bit can be computed as follows. Assume Alice and Bob are communicating and Alice wants to send Bob the simple 4-bit message 1001.

Even parity:
Alice wants to transmit: 1001 and 1011
Alice computes the parity bit values: 1+0+0+1 (mod 2) = 0 and 1+0+1+1 (mod 2) = 1
Alice adds the parity bits and sends: 10010 and 10111
Bob receives: 10010 and 10111
Bob computes parity: 1+0+0+1+0 (mod 2) = 0 and 1+0+1+1+1 (mod 2) = 0
Bob reports correct transmission after observing the expected even results.

Odd parity:
Alice wants to transmit: 1001 and 1011
Alice computes the parity bit values: 1+0+0+1 (+ 1 mod 2) = 1 and 1+0+1+1 (+ 1 mod 2) = 0
Alice adds the parity bits and sends: 10011 and 10110
Bob receives: 10011 and 10110
Bob computes overall parity: 1+0+0+1+1 (mod 2) = 1 and 1+0+1+1+0 (mod 2) = 1
Bob reports correct transmission after observing the expected odd results.

This mechanism enables the detection of single-bit errors, because if one bit gets flipped due to line noise, there will be an incorrect number of ones in the received data. In the two examples above, Bob's calculated parity value matches the parity bit in its received value, indicating there are no single-bit errors. Consider the following example with a transmission error in the second bit, using XOR:

Error in the second bit:
Alice computes parity bit value: 1^0^0^1 = 0
Alice adds parity bit and sends: 10010
...TRANSMISSION ERROR...
Bob receives: 11010
Bob computes overall parity: 1^1^0^1^0 = 1
Bob reports incorrect transmission after observing the unexpected odd result.

Error in the parity bit:
Alice computes even parity value: 1^0^0^1 = 0
Alice sends: 10010
...TRANSMISSION ERROR...
Bob receives: 10011
Bob computes overall parity: 1^0^0^1^1 = 1
Bob reports incorrect transmission after observing the unexpected odd result.

There is a limitation to parity schemes. A parity bit is guaranteed to detect only an odd number of bit errors. If an even number of bits have errors, the parity bit records the correct number of ones even though the data is corrupt. (See also error detection and correction.) Consider the same example as before but with an even number of corrupted bits:

Two corrupted bits:
Alice computes even parity value: 1^0^0^1 = 0
Alice sends: 10010
...TRANSMISSION ERROR...
Bob receives: 11011
Bob computes overall parity: 1^1^0^1^1 = 0
Bob reports correct transmission though it is actually incorrect.

Bob observes even parity, as expected, thereby failing to catch the two bit errors. Because of its simplicity, parity is used in many hardware applications in which an operation can be repeated in case of difficulty, or in which simply detecting the error is helpful. For example, the SCSI and PCI buses use parity to detect transmission errors, and many microprocessor instruction caches include parity protection. Because the instruction cache data is just a copy of the main memory, it can be disregarded and refetched if it is found to be corrupted. In serial data transmission, a common format is 7 data bits, an even parity bit, and one or two stop bits. That format accommodates all the 7-bit ASCII characters in an 8-bit byte. Other formats are possible; 8 bits of data plus a parity bit can convey all 8-bit byte values.
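A minimal sketch of even-parity generation and checking, along the lines of the Alice-and-Bob exchanges above (the helper functions are illustrative, not from any particular library):

    def add_even_parity(bits):
        """Append a parity bit so the total number of 1s is even."""
        return bits + [sum(bits) % 2]

    def check_even_parity(bits):
        """Return True if the received word (data + parity bit) has even parity."""
        return sum(bits) % 2 == 0

    word = [1, 0, 0, 1]
    sent = add_even_parity(word)            # [1, 0, 0, 1, 0]

    received_ok = sent.copy()
    received_bad = sent.copy()
    received_bad[1] ^= 1                    # single bit flipped in transit

    print(check_even_parity(received_ok))   # True  -> no error detected
    print(check_even_parity(received_bad))  # False -> single-bit error detected

    # Two flipped bits cancel out: the corruption goes undetected, as noted above.
    received_two = sent.copy()
    received_two[1] ^= 1
    received_two[2] ^= 1
    print(check_even_parity(received_two))  # True -> corruption not detected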
In serial communication contexts, parity is usually generated and checked by interface hardware (such as a UART) and, on reception, the result is made available to a processor such as the CPU (and so too, for instance, the operating system) via a status bit in a hardware register in the interface hardware. Recovery from the error is usually done by retransmitting the data, the details of which are usually handled by software (such as the operating system I/O routines). When the total number of transmitted bits, including the parity bit, is even, odd parity has the advantage that both all-zeros and all-ones patterns are detected as errors. If the total number of bits is odd, only one of the patterns is detected as an error, and the choice can be made based on which error is expected to be more common.

Parity data is used by RAID arrays (redundant array of independent/inexpensive disks) to achieve redundancy. If a drive in the array fails, remaining data on the other drives can be combined with the parity data (using the Boolean XOR function) to reconstruct the missing data. For example, suppose two drives in a three-drive RAID 4 array contained the data 01101101 (Drive 1) and 11010100 (Drive 2). To calculate parity data for the two drives, an XOR is performed on their data: 01101101 XOR 11010100 = 10111001. The resulting parity data, 10111001, is then stored on Drive 3.

Should any of the three drives fail, the contents of the failed drive can be reconstructed on a replacement drive by subjecting the data from the remaining drives to the same XOR operation. If Drive 2 were to fail, its data could be rebuilt by XOR-ing the contents of the two remaining drives, Drive 1 and Drive 3: 01101101 XOR 10111001 = 11010100. The result of that XOR calculation yields Drive 2's contents. 11010100 is then stored on Drive 2, fully repairing the array.

XOR logic is also equivalent to even parity (because a XOR b XOR c XOR ... may be treated as XOR(a, b, c, ...), an n-ary operator that is true if and only if an odd number of arguments is true). So the same XOR concept above applies similarly to larger RAID arrays with parity, using any number of disks. In the case of a RAID 3 array of 12 drives, 11 drives participate in the XOR calculation shown above and yield a value that is then stored on the dedicated parity drive. Extensions and variations on the parity bit mechanism, such as "double", "dual", or "diagonal" parity, are used in RAID-DP.

A parity track was present on the first magnetic-tape data storage in 1951. Parity in this form, applied across multiple parallel signals, is known as a transverse redundancy check. This can be combined with parity computed over multiple bits sent on a single signal, a longitudinal redundancy check. In a parallel bus, there is one longitudinal redundancy check bit per parallel signal. Parity was also used on at least some paper-tape (punched-tape) data entry systems (which preceded magnetic-tape systems). On the systems sold by the British company ICL (formerly ICT), the 1-inch-wide (25 mm) paper tape had 8 hole positions running across it, with the 8th being for parity. 7 positions were used for the data, e.g., 7-bit ASCII. The 8th position had a hole punched in it depending on the number of data holes punched.
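A minimal sketch of RAID-style XOR parity using the drive contents from the example above (the variable names and the extra four-drive case are illustrative): the parity drive holds the XOR of the data drives, and a single failed drive can be rebuilt by XOR-ing the survivors:

    drive1 = 0b01101101
    drive2 = 0b11010100

    parity = drive1 ^ drive2                     # stored on Drive 3
    print(f"parity drive: {parity:08b}")         # 10111001

    # Simulate losing Drive 2 and rebuilding it from Drive 1 and the parity drive.
    rebuilt_drive2 = drive1 ^ parity
    print(f"rebuilt Drive 2: {rebuilt_drive2:08b}")   # 11010100
    assert rebuilt_drive2 == drive2

    # The same idea scales to any number of drives: parity = XOR of all data
    # drives, and a missing drive equals the XOR of every surviving drive plus parity.
    drives = [0b01101101, 0b11010100, 0b00101110, 0b10011010]
    parity_all = 0
    for d in drives:
        parity_all ^= d
    reconstructed = parity_all
    for i, d in enumerate(drives):
        if i != 2:                               # drive index 2 is the "failed" one
            reconstructed ^= d
    assert reconstructed == drives[2]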
https://en.wikipedia.org/wiki/Parity_(telecommunication)
Incomputing,telecommunication,information theory, andcoding theory,forward error correction(FEC) orchannel coding[1][2][3]is a technique used forcontrolling errorsindata transmissionover unreliable or noisycommunication channels. The central idea is that the sender encodes the message in aredundantway, most often by using anerror correction code, orerror correcting code(ECC).[4][5]The redundancy allows the receiver not only todetect errorsthat may occur anywhere in the message, but often to correct a limited number of errors. Therefore areverse channelto request re-transmission may not be needed. The cost is a fixed, higher forward channel bandwidth. The American mathematicianRichard Hammingpioneered this field in the 1940s and invented the first error-correcting code in 1950: theHamming (7,4) code.[5] FEC can be applied in situations where re-transmissions are costly or impossible, such as one-way communication links or when transmitting to multiple receivers inmulticast. Long-latency connections also benefit; in the case of satellites orbiting distant planets, retransmission due to errors would create a delay of several hours. FEC is also widely used inmodemsand incellular networks. FEC processing in a receiver may be applied to a digital bit stream or in the demodulation of a digitally modulated carrier. For the latter, FEC is an integral part of the initialanalog-to-digital conversionin the receiver. TheViterbi decoderimplements asoft-decision algorithmto demodulate digital data from an analog signal corrupted by noise. Many FEC decoders can also generate abit-error rate(BER) signal which can be used as feedback to fine-tune the analog receiving electronics. FEC information is added tomass storage(magnetic, optical and solid state/flash based) devices to enable recovery of corrupted data, and is used asECCcomputer memoryon systems that require special provisions for reliability. The maximum proportion of errors or missing bits that can be corrected is determined by the design of the ECC, so different forward error correcting codes are suitable for different conditions. In general, a stronger code induces more redundancy that needs to be transmitted using the available bandwidth, which reduces the effective bit-rate while improving the received effectivesignal-to-noise ratio. Thenoisy-channel coding theoremofClaude Shannoncan be used to compute the maximum achievable communication bandwidth for a given maximum acceptable error probability. This establishes bounds on the theoretical maximum information transfer rate of a channel with some given base noise level. However, the proof is not constructive, and hence gives no insight of how to build a capacity achieving code. After years of research, some advanced FEC systems likepolar code[3]come very close to the theoretical maximum given by the Shannon channel capacity under the hypothesis of an infinite length frame. ECC is accomplished by addingredundancyto the transmitted information using an algorithm. A redundant bit may be a complicated function of many original information bits. The original information may or may not appear literally in the encoded output; codes that include the unmodified input in the output aresystematic, while those that do not arenon-systematic. A simplistic example of ECC is to transmit each data bit three times, which is known as a (3,1)repetition code. Through a noisy channel, a receiver might see eight versions of the output, see table below. 
This allows an error in any one of the three samples to be corrected by "majority vote", or "democratic voting". The correcting ability of this ECC is: Though simple to implement and widely used, thistriple modular redundancyis a relatively inefficient ECC. Better ECC codes typically examine the last several tens or even the last several hundreds of previously received bits to determine how to decode the current small handful of bits (typically in groups of two to eight bits). ECC could be said to work by "averaging noise"; since each data bit affects many transmitted symbols, the corruption of some symbols by noise usually allows the original user data to be extracted from the other, uncorrupted received symbols that also depend on the same user data. Most telecommunication systems use a fixedchannel codedesigned to tolerate the expected worst-casebit error rate, and then fail to work at all if the bit error rate is ever worse. However, some systems adapt to the given channel error conditions: some instances ofhybrid automatic repeat-requestuse a fixed ECC method as long as the ECC can handle the error rate, then switch toARQwhen the error rate gets too high;adaptive modulation and codinguses a variety of ECC rates, adding more error-correction bits per packet when there are higher error rates in the channel, or taking them out when they are not needed. The two main categories of ECC codes areblock codesandconvolutional codes. There are many types of block codes;Reed–Solomon codingis noteworthy for its widespread use incompact discs,DVDs, andhard disk drives. Other examples of classical block codes includeGolay,BCH,Multidimensional parity, andHamming codes. Hamming ECC is commonly used to correctNAND flashmemory errors.[6]This provides single-bit error correction and 2-bit error detection. Hamming codes are only suitable for more reliablesingle-level cell(SLC) NAND. Densermulti-level cell(MLC) NAND may use multi-bit correcting ECC such as BCH or Reed–Solomon.[7][8]NOR Flash typically does not use any error correction.[7] Classical block codes are usually decoded usinghard-decisionalgorithms,[9]which means that for every input and output signal a hard decision is made whether it corresponds to a one or a zero bit. In contrast, convolutional codes are typically decoded usingsoft-decisionalgorithms like the Viterbi, MAP orBCJRalgorithms, which process (discretized) analog signals, and which allow for much higher error-correction performance than hard-decision decoding. Nearly all classical block codes apply the algebraic properties offinite fields. Hence classical block codes are often referred to as algebraic codes. In contrast to classical block codes that often specify an error-detecting or error-correcting ability, many modern block codes such asLDPC codeslack such guarantees. Instead, modern codes are evaluated in terms of their bit error rates. Mostforward error correctioncodes correct only bit-flips, but not bit-insertions or bit-deletions. In this setting, theHamming distanceis the appropriate way to measure thebit error rate. A few forward error correction codes are designed to correct bit-insertions and bit-deletions, such as Marker Codes and Watermark Codes. TheLevenshtein distanceis a more appropriate way to measure the bit error rate when using such codes.[10] The fundamental principle of ECC is to add redundant bits in order to help the decoder to find out the true message that was encoded by the transmitter. 
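To make the (3,1) repetition code and its majority-vote decoding concrete, here is a minimal Python sketch (an illustration written for this text, not taken from the article):

from collections import Counter

def encode_repetition(bits: str, n: int = 3) -> str:
    """(n,1) repetition code: transmit every data bit n times."""
    return "".join(b * n for b in bits)

def decode_repetition(received: str, n: int = 3) -> str:
    """Decode by majority vote within each group of n received samples."""
    groups = [received[i:i + n] for i in range(0, len(received), n)]
    return "".join(Counter(g).most_common(1)[0][0] for g in groups)

codeword = encode_repetition("101")             # '111000111'
corrupted = "110000111"                         # one error in the first group
assert decode_repetition(corrupted) == "101"    # a single error per group is corrected
assert decode_repetition("100000111") != "101"  # two errors in one group defeat the vote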
The code-rate of a given ECC system is defined as the ratio between the number of information bits and the total number of bits (i.e., information plus redundancy bits) in a given communication package. The code-rate is hence a real number. A low code-rate close to zero implies a strong code that uses many redundant bits to achieve good performance, while a large code-rate close to 1 implies a weak code.

The redundant bits that protect the information have to be transferred using the same communication resources that they are trying to protect. This causes a fundamental tradeoff between reliability and data rate.[11] At one extreme, a strong code (with a low code-rate) can induce an important increase in the receiver SNR (signal-to-noise ratio), decreasing the bit error rate, at the cost of reducing the effective data rate. At the other extreme, not using any ECC (i.e., a code-rate equal to 1) uses the full channel for information transfer purposes, at the cost of leaving the bits without any additional protection.

One interesting question is the following: how efficient in terms of information transfer can an ECC be that has a negligible decoding error rate? This question was answered by Claude Shannon with his second theorem, which says that the channel capacity is the maximum bit rate achievable by any ECC whose error rate tends to zero.[12] His proof relies on Gaussian random coding, which is not suitable for real-world applications. The upper bound given by Shannon's work inspired a long journey in designing ECCs that can come close to the ultimate performance boundary. Various codes today can attain almost the Shannon limit. However, capacity-achieving ECCs are usually extremely complex to implement.

The most popular ECCs have a trade-off between performance and computational complexity. Usually, their parameters give a range of possible code rates, which can be optimized depending on the scenario. Usually, this optimization is done in order to achieve a low decoding error probability while minimizing the impact on the data rate. Another criterion for optimizing the code rate is to balance a low error rate against the number of retransmissions in order to minimize the energy cost of the communication.[13]

Classical (algebraic) block codes and convolutional codes are frequently combined in concatenated coding schemes in which a short constraint-length Viterbi-decoded convolutional code does most of the work and a block code (usually Reed–Solomon) with larger symbol size and block length "mops up" any errors made by the convolutional decoder. Single-pass decoding with this family of error correction codes can yield very low error rates, but for long-range transmission conditions (like deep space) iterative decoding is recommended. Concatenated codes have been standard practice in satellite and deep space communications since Voyager 2 first used the technique in its 1986 encounter with Uranus. The Galileo craft used iterative concatenated codes to compensate for the very high error rate conditions caused by having a failed antenna.

Low-density parity-check (LDPC) codes are a class of highly efficient linear block codes made from many single parity check (SPC) codes. They can provide performance very close to the channel capacity (the theoretical maximum) using an iterated soft-decision decoding approach, at linear time complexity in terms of their block length. Practical implementations rely heavily on decoding the constituent SPC codes in parallel. LDPC codes were first introduced by Robert G.
Gallager in his PhD thesis in 1960, but due to the computational effort in implementing encoder and decoder and the introduction of Reed–Solomon codes, they were mostly ignored until the 1990s. LDPC codes are now used in many recent high-speed communication standards, such as DVB-S2 (Digital Video Broadcasting – Satellite – Second Generation), WiMAX (IEEE 802.16e standard for microwave communications), High-Speed Wireless LAN (IEEE 802.11n),[14] 10GBase-T Ethernet (802.3an) and G.hn/G.9960 (ITU-T standard for networking over power lines, phone lines and coaxial cable). Other LDPC codes are standardized for wireless communication standards within 3GPP MBMS (see fountain codes).

Turbo coding is an iterated soft-decoding scheme that combines two or more relatively simple convolutional codes and an interleaver to produce a block code that can perform to within a fraction of a decibel of the Shannon limit. Predating LDPC codes in terms of practical application, they now provide similar performance. One of the earliest commercial applications of turbo coding was the CDMA2000 1x (TIA IS-2000) digital cellular technology developed by Qualcomm and sold by Verizon Wireless, Sprint, and other carriers. It is also used for the evolution of CDMA2000 1x specifically for Internet access, 1xEV-DO (TIA IS-856). Like 1x, EV-DO was developed by Qualcomm, and is sold by Verizon Wireless, Sprint, and other carriers (Verizon's marketing name for 1xEV-DO is Broadband Access; Sprint's consumer and business marketing names for 1xEV-DO are Power Vision and Mobile Broadband, respectively).

Sometimes it is only necessary to decode single bits of the message, or to check whether a given signal is a codeword, and to do so without looking at the entire signal. This can make sense in a streaming setting, where codewords are too large to be classically decoded fast enough and where only a few bits of the message are of interest for now. Such codes have also become an important tool in computational complexity theory, e.g., for the design of probabilistically checkable proofs. Locally decodable codes are error-correcting codes for which single bits of the message can be probabilistically recovered by only looking at a small (say constant) number of positions of a codeword, even after the codeword has been corrupted at some constant fraction of positions. Locally testable codes are error-correcting codes for which it can be checked probabilistically whether a signal is close to a codeword by only looking at a small number of positions of the signal. Not all locally decodable codes (LDCs) are locally testable codes (LTCs),[15] nor are all LDCs locally correctable codes (LCCs);[16] q-query LCCs are bounded exponentially in length,[17][18] while LDCs can have subexponential lengths.[19][20]

Interleaving is frequently used in digital communication and storage systems to improve the performance of forward error correcting codes. Many communication channels are not memoryless: errors typically occur in bursts rather than independently. If the number of errors within a code word exceeds the error-correcting code's capability, it fails to recover the original code word. Interleaving alleviates this problem by shuffling source symbols across several code words, thereby creating a more uniform distribution of errors.[21] Therefore, interleaving is widely used for burst error-correction.
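The burst-spreading effect of interleaving can be reproduced with a simple block (row/column) interleaver. The Python sketch below is illustrative and not taken from any particular standard; it writes symbols row by row and reads them column by column, so a burst on the channel lands as isolated errors after deinterleaving.

def interleave(symbols: str, rows: int) -> str:
    """Block interleaver: write row by row, read column by column."""
    cols = -(-len(symbols) // rows)          # ceiling division
    padded = symbols.ljust(rows * cols, "-")
    table = [padded[r * cols:(r + 1) * cols] for r in range(rows)]
    return "".join(table[r][c] for c in range(cols) for r in range(rows))

def deinterleave(symbols: str, rows: int) -> str:
    """Invert the block interleaver (write column by column, read row by row)."""
    cols = len(symbols) // rows
    table = [symbols[c * rows:(c + 1) * rows] for c in range(cols)]
    return "".join(table[c][r] for r in range(rows) for c in range(cols)).rstrip("-")

data = "aaaabbbbccccdddd"                    # four 4-symbol codewords
sent = interleave(data, rows=4)              # 'abcdabcdabcdabcd'
received = sent[:4] + "____" + sent[8:]      # a burst of four consecutive errors
print(deinterleave(received, rows=4))        # 'a_aab_bbc_ccd_dd' -> one error per codeword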
The analysis of modern iterated codes, liketurbo codesandLDPC codes, typically assumes an independent distribution of errors.[22]Systems using LDPC codes therefore typically employ additional interleaving across the symbols within a code word.[23] For turbo codes, an interleaver is an integral component and its proper design is crucial for good performance.[21][24]The iterative decoding algorithm works best when there are not short cycles in thefactor graphthat represents the decoder; the interleaver is chosen to avoid short cycles. Interleaver designs include: In multi-carriercommunication systems, interleaving across carriers may be employed to provide frequencydiversity, e.g., to mitigatefrequency-selective fadingor narrowband interference.[28] Transmission without interleaving: Here, each group of the same letter represents a 4-bit one-bit error-correcting codeword. The codeword cccc is altered in one bit and can be corrected, but the codeword dddd is altered in three bits, so either it cannot be decoded at all or it might bedecoded incorrectly. With interleaving: In each of the codewords "aaaa", "eeee", "ffff", and "gggg", only one bit is altered, so one-bit error-correcting code will decode everything correctly. Transmission without interleaving: The term "AnExample" ends up mostly unintelligible and difficult to correct. With interleaving: No word is completely lost and the missing letters can be recovered with minimal guesswork. Use of interleaving techniques increases total delay. This is because the entire interleaved block must be received before the packets can be decoded.[29]Also interleavers hide the structure of errors; without an interleaver, more advanced decoding algorithms can take advantage of the error structure and achieve more reliable communication than a simpler decoder combined with an interleaver[citation needed]. An example of such an algorithm is based onneural network[30]structures. Simulating the behaviour of error-correcting codes (ECCs) in software is a common practice to design, validate and improve ECCs. The upcoming wireless 5G standard raises a new range of applications for the software ECCs: theCloud Radio Access Networks (C-RAN)in aSoftware-defined radio (SDR)context. The idea is to directly use software ECCs in the communications. For instance in the 5G, the software ECCs could be located in the cloud and the antennas connected to this computing resources: improving this way the flexibility of the communication network and eventually increasing the energy efficiency of the system. In this context, there are various available Open-source software listed below (non exhaustive).
https://en.wikipedia.org/wiki/Error_correcting_code
Acyclic redundancy check(CRC) is anerror-detecting codecommonly used in digitalnetworksand storage devices to detect accidental changes to digital data. Blocks of data entering these systems get a shortcheck valueattached, based on the remainder of apolynomial divisionof their contents. On retrieval, the calculation is repeated and, in the event the check values do not match, corrective action can be taken against data corruption. CRCs can be used forerror correction(seebitfilters).[1] CRCs are so called because thecheck(data verification) value is aredundancy(it expands the message without addinginformation) and thealgorithmis based oncycliccodes. CRCs are popular because they are simple to implement in binaryhardware, easy to analyze mathematically, and particularly good at detecting common errors caused bynoisein transmission channels. Because the check value has a fixed length, thefunctionthat generates it is occasionally used as ahash function. CRCs are based on the theory ofcyclicerror-correcting codes. The use ofsystematiccyclic codes, which encode messages by adding a fixed-length check value, for the purpose of error detection in communication networks, was first proposed byW. Wesley Petersonin 1961.[2]Cyclic codes are not only simple to implement but have the benefit of being particularly well suited for the detection ofburst errors: contiguous sequences of erroneous data symbols in messages. This is important because burst errors are common transmission errors in manycommunication channels, including magnetic and optical storage devices. Typically ann-bit CRC applied to a data block of arbitrary length will detect any single error burst not longer thannbits, and the fraction of all longer error bursts that it will detect is approximately(1 − 2−n). Specification of a CRC code requires definition of a so-calledgenerator polynomial. This polynomial becomes thedivisorin apolynomial long division, which takes the message as thedividendand in which thequotientis discarded and theremainderbecomes the result. The important caveat is that the polynomialcoefficientsare calculated according to the arithmetic of afinite field, so the addition operation can always be performed bitwise-parallel (there is no carry between digits). In practice, all commonly used CRCs employ the finite field of two elements,GF(2). The two elements are usually called 0 and 1, comfortably matching computer architecture. A CRC is called ann-bit CRC when its check value isnbits long. For a givenn, multiple CRCs are possible, each with a different polynomial. Such a polynomial has highest degreen, which means it hasn+ 1terms. In other words, the polynomial has a length ofn+ 1; its encoding requiresn+ 1bits. Note that most polynomial specifications either drop theMSborLSb, since they are always 1. The CRC and associated polynomial typically have a name of the form CRC-n-XXX as in thetablebelow. The simplest error-detection system, theparity bit, is in fact a 1-bit CRC: it uses the generator polynomialx+ 1(two terms),[3]and has the name CRC-1. A CRC-enabled device calculates a short, fixed-length binary sequence, known as thecheck valueorCRC, for each block of data to be sent or stored and appends it to the data, forming acodeword. When a codeword is received or read, the device either compares its check value with one freshly calculated from the data block, or equivalently, performs a CRC on the whole codeword and compares the resulting check value with an expectedresidueconstant. 
If the CRC values do not match, then the block contains a data error. The device may take corrective action, such as rereading the block or requesting that it be sent again. Otherwise, the data is assumed to be error-free (though, with some small probability, it may contain undetected errors; this is inherent in the nature of error-checking).[4] CRCs are specifically designed to protect against common types of errors on communication channels, where they can provide quick and reasonable assurance of theintegrityof messages delivered. However, they are not suitable for protecting against intentional alteration of data. Firstly, as there is no authentication, an attacker can edit a message and recompute the CRC without the substitution being detected. When stored alongside the data, CRCs and cryptographic hash functions by themselves do not protect againstintentionalmodification of data. Any application that requires protection against such attacks must use cryptographic authentication mechanisms, such asmessage authentication codesordigital signatures(which are commonly based oncryptographic hashfunctions). Secondly, unlike cryptographic hash functions, CRC is an easily reversible function, which makes it unsuitable for use in digital signatures.[5] Thirdly, CRC satisfies a relation similar to that of alinear function(or more accurately, anaffine function):[6] wherec{\displaystyle c}depends on the length ofx{\displaystyle x}andy{\displaystyle y}. This can be also stated as follows, wherex{\displaystyle x},y{\displaystyle y}andz{\displaystyle z}have the same length as a result, even if the CRC is encrypted with astream cipherthat usesXORas its combining operation (ormodeofblock cipherwhich effectively turns it into a stream cipher, such as OFB or CFB), both the message and the associated CRC can be manipulated without knowledge of the encryption key; this was one of the well-known design flaws of theWired Equivalent Privacy(WEP) protocol.[7] To compute ann-bit binary CRC, line the bits representing the input in a row, and position the (n+ 1)-bit pattern representing the CRC's divisor (called a "polynomial") underneath the left end of the row. In this example, we shall encode 14 bits of message with a 3-bit CRC, with a polynomialx3+x+ 1. The polynomial is written in binary as the coefficients; a 3rd-degree polynomial has 4 coefficients (1x3+ 0x2+ 1x+ 1). In this case, the coefficients are 1, 0, 1 and 1. The result of the calculation is 3 bits long, which is why it is called a 3-bit CRC. However, you need 4 bits to explicitly state the polynomial. Start with the message to be encoded: This is first padded with zeros corresponding to the bit lengthnof the CRC. This is done so that the resulting code word is insystematicform. Here is the first calculation for computing a 3-bit CRC: The algorithm acts on the bits directly above the divisor in each step. The result for that iteration is the bitwise XOR of the polynomial divisor with the bits above it. The bits not above the divisor are simply copied directly below for that step. The divisor is then shifted right to align with the highest remaining 1 bit in the input, and the process is repeated until the divisor reaches the right-hand end of the input row. Here is the entire calculation: Since the leftmost divisor bit zeroed every input bit it touched, when this process ends the only bits in the input row that can be nonzero are the n bits at the right-hand end of the row. 
Thesenbits are the remainder of the division step, and will also be the value of the CRC function (unless the chosen CRC specification calls for some postprocessing). The validity of a received message can easily be verified by performing the above calculation again, this time with the check value added instead of zeroes. The remainder should equal zero if there are no detectable errors. The followingPythoncode outlines a function which will return the initial CRC remainder for a chosen input and polynomial, with either 1 or 0 as the initial padding. Note that this code works with string inputs rather than raw numbers: Mathematical analysis of this division-like process reveals how to select a divisor that guarantees good error-detection properties. In this analysis, the digits of the bit strings are taken as the coefficients of a polynomial in some variablex—coefficients that are elements of the finite fieldGF(2)(the integers modulo 2, i.e. either a zero or a one), instead of more familiar numbers. The set of binary polynomials is a mathematicalring. The selection of the generator polynomial is the most important part of implementing the CRC algorithm. The polynomial must be chosen to maximize the error-detecting capabilities while minimizing overall collision probabilities. The most important attribute of the polynomial is its length (largest degree(exponent) +1 of any one term in the polynomial), because of its direct influence on the length of the computed check value. The most commonly used polynomial lengths are 9 bits (CRC-8), 17 bits (CRC-16), 33 bits (CRC-32), and 65 bits (CRC-64).[3] A CRC is called ann-bit CRC when its check value isn-bits. For a givenn, multiple CRCs are possible, each with a different polynomial. Such a polynomial has highest degreen, and hencen+ 1terms (the polynomial has a length ofn+ 1). The remainder has lengthn. The CRC has a name of the form CRC-n-XXX. The design of the CRC polynomial depends on the maximum total length of the block to be protected (data + CRC bits), the desired error protection features, and the type of resources for implementing the CRC, as well as the desired performance. A common misconception is that the "best" CRC polynomials are derived from eitherirreducible polynomialsor irreducible polynomials times the factor1 +x, which adds to the code the ability to detect all errors affecting an odd number of bits.[8]In reality, all the factors described above should enter into the selection of the polynomial and may lead to a reducible polynomial. However, choosing a reducible polynomial will result in a certain proportion of missed errors, due to the quotient ring havingzero divisors. The advantage of choosing aprimitive polynomialas the generator for a CRC code is that the resulting code has maximal total block length in the sense that all 1-bit errors within that block length have different remainders (also calledsyndromes) and therefore, since the remainder is a linear function of the block, the code can detect all 2-bit errors within that block length. 
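The Python listing referred to above did not survive the text extraction. The sketch below is a reconstruction in the same spirit rather than the article's exact code: it works on strings of '0'/'1' characters, takes the divisor as a bit string, and accepts 1 or 0 as the initial padding, as the text describes. The 14-bit message in the example is illustrative, chosen only to match the stated message length and the 4-bit divisor 1011 (x^3 + x + 1).

def crc_remainder(input_bitstring: str, polynomial_bitstring: str, initial_filler: str) -> str:
    """CRC remainder of a bit string for a chosen polynomial; initial_filler is '1' or '0'."""
    polynomial_bitstring = polynomial_bitstring.lstrip("0")
    len_input = len(input_bitstring)
    initial_padding = (len(polynomial_bitstring) - 1) * initial_filler
    buf = list(input_bitstring + initial_padding)
    while "1" in buf[:len_input]:
        cur_shift = buf.index("1")
        for i in range(len(polynomial_bitstring)):
            # XOR the divisor into the buffer, aligned at the leftmost remaining 1.
            buf[cur_shift + i] = str(int(polynomial_bitstring[i] != buf[cur_shift + i]))
    return "".join(buf)[len_input:]

def crc_check(input_bitstring: str, polynomial_bitstring: str, check_value: str) -> bool:
    """Verify a message by appending the received check value and dividing again."""
    polynomial_bitstring = polynomial_bitstring.lstrip("0")
    len_input = len(input_bitstring)
    buf = list(input_bitstring + check_value)
    while "1" in buf[:len_input]:
        cur_shift = buf.index("1")
        for i in range(len(polynomial_bitstring)):
            buf[cur_shift + i] = str(int(polynomial_bitstring[i] != buf[cur_shift + i]))
    return "1" not in "".join(buf)[len_input:]

# An illustrative 14-bit message with the divisor 1011 gives a 3-bit CRC.
crc = crc_remainder("11010011101100", "1011", "0")
assert crc_check("11010011101100", "1011", crc)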
Ifr{\displaystyle r}is the degree of the primitive generator polynomial, then the maximal total block length is2r−1{\displaystyle 2^{r}-1}, and the associated code is able to detect any single-bit or double-bit errors.[9]However, if we use the generator polynomialg(x)=p(x)(1+x){\displaystyle g(x)=p(x)(1+x)}, wherep{\displaystyle p}is a primitive polynomial of degreer−1{\displaystyle r-1}, then the maximal total block length is2r−1−1{\displaystyle 2^{r-1}-1}, and the code is able to detect single, double, triple and any odd number of errors. A polynomialg(x){\displaystyle g(x)}that admits other factorizations may be chosen then so as to balance the maximal total blocklength with a desired error detection power. TheBCH codesare a powerful class of such polynomials. They subsume the two examples above. Regardless of the reducibility properties of a generator polynomial of degreer, if it includes the "+1" term, the code will be able to detect error patterns that are confined to a window ofrcontiguous bits. These patterns are called "error bursts". The concept of the CRC as an error-detecting code gets complicated when an implementer or standards committee uses it to design a practical system. Here are some of the complications: These complications mean that there are three common ways to express a polynomial as an integer: the first two, which are mirror images in binary, are the constants found in code; the third is the number found in Koopman's papers.In each case, one term is omitted.So the polynomialx4+x+1{\displaystyle x^{4}+x+1}may be transcribed as: In the table below they are shown as: CRCs inproprietary protocolsmight beobfuscatedby using a non-trivial initial value and a final XOR, but these techniques do not add cryptographic strength to the algorithm and can bereverse engineeredusing straightforward methods.[10] Numerous varieties of cyclic redundancy checks have been incorporated intotechnical standards. By no means does one algorithm, or one of each degree, suit every purpose; Koopman and Chakravarty recommend selecting a polynomial according to the application requirements and the expected distribution of message lengths.[11]The number of distinct CRCs in use has confused developers, a situation which authors have sought to address.[8]There are three polynomials reported for CRC-12,[11]twenty-two conflicting definitions of CRC-16, and seven of CRC-32.[12] The polynomials commonly applied are not the most efficient ones possible. Since 1993, Koopman, Castagnoli and others have surveyed the space of polynomials between 3 and 64 bits in size,[11][13][14][15]finding examples that have much better performance (in terms ofHamming distancefor a given message size) than the polynomials of earlier protocols, and publishing the best of these with the aim of improving the error detection capacity of future standards.[14]In particular,iSCSIandSCTPhave adopted one of the findings of this research, the CRC-32C (Castagnoli) polynomial. The design of the 32-bit polynomial most commonly used by standards bodies, CRC-32-IEEE, was the result of a joint effort for theRome Laboratoryand the Air Force Electronic Systems Division by Joseph Hammond, James Brown and Shyan-Shiang Liu of theGeorgia Institute of Technologyand Kenneth Brayer of theMitre Corporation. 
The earliest known appearances of the 32-bit polynomial were in their 1975 publications: Technical Report 2956 by Brayer for Mitre, published in January and released for public dissemination throughDTICin August,[16]and Hammond, Brown and Liu's report for the Rome Laboratory, published in May.[17]Both reports contained contributions from the other team. During December 1975, Brayer and Hammond presented their work in a paper at the IEEE National Telecommunications Conference: the IEEE CRC-32 polynomial is the generating polynomial of aHamming codeand was selected for its error detection performance.[18]Even so, the Castagnoli CRC-32C polynomial used in iSCSI or SCTP matches its performance on messages from 58 bits to 131 kbits, and outperforms it in several size ranges including the two most common sizes of Internet packet.[14]TheITU-TG.hnstandard also uses CRC-32C to detect errors in the payload (although it uses CRC-16-CCITT forPHY headers). CRC-32C computation is implemented in hardware as an operation (CRC32) ofSSE4.2instruction set, first introduced inIntelprocessors'Nehalemmicroarchitecture.ARMAArch64architecture also provides hardware acceleration for both CRC-32 and CRC-32C operations. The table below lists only the polynomials of the various algorithms in use. Variations of a particular protocol can impose pre-inversion, post-inversion and reversed bit ordering as described above. For example, the CRC32 used in Gzip and Bzip2 use the same polynomial, but Gzip employs reversed bit ordering, while Bzip2 does not.[12]Note that even parity polynomials inGF(2)with degree greater than 1 are never primitive. Even parity polynomial marked as primitive in this table represent a primitive polynomial multiplied by(x+1){\displaystyle \left(x+1\right)}. The most significant bit of a polynomial is always 1, and is not shown in the hex representations.
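As a practical aside (not part of the article), Python's standard library exposes the CRC-32 polynomial used by Gzip through the zlib module; the sketch below computes the commonly quoted check value for the ASCII test vector "123456789" and shows that the checksum can be built up incrementally over chunks.

import zlib

value = zlib.crc32(b"123456789")
print(hex(value))                        # commonly quoted check value: 0xcbf43926

running = zlib.crc32(b"12345")           # CRC over the first chunk
running = zlib.crc32(b"6789", running)   # continue over the next chunk
assert running == value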
https://en.wikipedia.org/wiki/Polynomial_representations_of_cyclic_redundancy_checks
Inmathematics, theHilbert symbolornorm-residue symbolis a function (–, –) fromK××K×to the group ofnthroots of unityin alocal fieldKsuch as the fields ofrealsorp-adic numbers. It is related toreciprocity laws, and can be defined in terms of theArtin symboloflocal class field theory. The Hilbert symbol was introduced byDavid Hilbert(1897, sections 64, 131,1998, English translation) in hisZahlbericht, with the slight difference that he defined it for elements ofglobal fieldsrather than for the larger local fields. The Hilbert symbol has been generalized tohigher local fields. Over a local fieldK{\displaystyle K}withmultiplicative groupof non-zero elementsK×{\displaystyle K^{\times }}, the quadratic Hilbert symbol is thefunctionK××K×→{±1}{\displaystyle K^{\times }\times K^{\times }\to \{\pm 1\}}defined by Equivalently,(a,b)=1{\displaystyle (a,b)=1}if and only ifb{\displaystyle b}is equal to thenormof an element of the quadratic extensionK[a]{\displaystyle K[{\sqrt {a}}]}.[1] The following three properties follow directly from the definition, by choosing suitable solutions of theDiophantine equationabove: The (bi)multiplicativity, i.e., for anya,b1{\displaystyle a,b_{1}}andb2{\displaystyle b_{2}}inK×{\displaystyle K^{\times }}is, however, more difficult to prove, and requires the development oflocal class field theory. The third property shows that the Hilbert symbol is an example of aSteinberg symboland thus factors over the secondMilnor K-groupK2M(K){\displaystyle K_{2}^{M}(K)}, which is by definition By the first property it even factors overK2M(K)/2{\displaystyle K_{2}^{M}(K)/2}. This is the first step towards theMilnor conjecture. The Hilbert symbol can also be used to denote thecentral simple algebraoverKwith basis 1,i,j,kand multiplication rulesi2=a{\displaystyle i^{2}=a},j2=b{\displaystyle j^{2}=b},ij=−ji=k{\displaystyle ij=-ji=k}. In this case the algebra represents an element of order 2 in theBrauer groupofK, which is identified with -1 if it is a division algebra and +1 if it is isomorphic to the algebra of 2 by 2 matrices. For aplacevof therational number fieldand rational numbersa,bwe let (a,b)vdenote the value of the Hilbert symbol in the correspondingcompletionQv. As usual, ifvis the valuation attached to a prime numberpthen the corresponding completion is thep-adic fieldand ifvis the infinite place then the completion is thereal numberfield. Over the reals, (a,b)∞is +1 if at least one ofaorbis positive, and −1 if both are negative. Over the p-adics withpodd, writinga=pαu{\displaystyle a=p^{\alpha }u}andb=pβv{\displaystyle b=p^{\beta }v}, whereuandvare integerscoprimetop, we have and the expression involves twoLegendre symbols. Over the 2-adics, again writinga=2αu{\displaystyle a=2^{\alpha }u}andb=2βv{\displaystyle b=2^{\beta }v}, whereuandvareodd numbers, we have It is known that ifvranges over all places, (a,b)vis 1 for almost all places. Therefore, the following product formula makes sense. It is equivalent to the law ofquadratic reciprocity. The Hilbert symbol on a fieldFdefines a map where Br(F) is the Brauer group ofF. The kernel of this mapping, the elementsasuch that (a,b)=1 for allb, is theKaplansky radicalofF.[2] The radical is a subgroup of F*/F*2, identified with a subgroup of F*. 
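Returning to the explicit formulas recalled above (Legendre symbols for odd p, and the exponents (u-1)/2 and (u^2-1)/8 for p = 2), the quadratic Hilbert symbol over the rationals can be evaluated with a few lines of Python. This is an illustration written for this text, following the standard formulas, not library code.

def legendre(a: int, p: int) -> int:
    """Legendre symbol (a/p) for an odd prime p and a coprime to p."""
    r = pow(a % p, (p - 1) // 2, p)
    return -1 if r == p - 1 else r

def hilbert_symbol(a: int, b: int, p=None) -> int:
    """Quadratic Hilbert symbol (a, b)_v for nonzero integers a, b.
    p=None means the real place; otherwise p is a prime (2 allowed)."""
    if p is None:                                   # over the reals
        return -1 if (a < 0 and b < 0) else 1
    alpha, u = 0, a
    while u % p == 0:
        alpha, u = alpha + 1, u // p
    beta, v = 0, b
    while v % p == 0:
        beta, v = beta + 1, v // p
    if p != 2:
        sign = (-1) ** (alpha * beta * ((p - 1) // 2))
        return sign * legendre(u, p) ** beta * legendre(v, p) ** alpha
    eps = lambda x: ((x - 1) // 2) % 2              # x odd
    omega = lambda x: ((x * x - 1) // 8) % 2        # x odd
    return (-1) ** (eps(u) * eps(v) + alpha * omega(v) + beta * omega(u))

# (-1, -1) is -1 exactly at the real place and at p = 2, consistent with the
# product formula: the product over all places is +1.
assert hilbert_symbol(-1, -1) == -1
assert hilbert_symbol(-1, -1, 2) == -1
assert all(hilbert_symbol(-1, -1, p) == 1 for p in (3, 5, 7, 11, 13))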
The radical is equal to F*if and only ifFhasu-invariantat most 2.[3]In the opposite direction, a field with radical F*2is termed aHilbert field.[4] IfKis a local field containing the group ofnth roots of unity for some positive integernprime to the characteristic ofK, then the Hilbert symbol (,) is a function fromK*×K* to μn. In terms of the Artin symbol it can be defined by[5] Hilbert originally defined the Hilbert symbol before the Artin symbol was discovered, and his definition (fornprime) used the power residue symbol whenKhas residue characteristic coprime ton, and was rather complicated whenKhas residue characteristic dividingn. The Hilbert symbol is (multiplicatively) bilinear: skew symmetric: nondegenerate: It detects norms (hence the name norm residue symbol): It has the"symbol" properties: Hilbert's reciprocity law states that ifaandbare in an algebraic number field containing thenth roots of unity then[6] where the product is over the finite and infinite primespof the number field, and where (,)pis the Hilbert symbol of the completion atp. Hilbert's reciprocity law follows from theArtin reciprocity lawand the definition of the Hilbert symbol in terms of the Artin symbol. IfKis a number field containing thenth roots of unity,pis a prime ideal not dividingn, π is a prime element of the local field ofp, andais coprime top, then thepower residue symbol(ap) is related to the Hilbert symbol by[7] The power residue symbol is extended to fractional ideals by multiplicativity, and defined for elements of the number field by putting (ab)=(a(b)) where (b) is the principal ideal generated byb. Hilbert's reciprocity law then implies the following reciprocity law for the residue symbol, foraandbprime to each other and ton:
https://en.wikipedia.org/wiki/Hilbert_symbol
Inmathematics,modular arithmeticis a system ofarithmeticoperations forintegers, other than the usual ones from elementary arithmetic, where numbers "wrap around" when reaching a certain value, called themodulus. The modern approach to modular arithmetic was developed byCarl Friedrich Gaussin his bookDisquisitiones Arithmeticae, published in 1801. A familiar example of modular arithmetic is the hour hand on a12-hour clock. If the hour hand points to 7 now, then 8 hours later it will point to 3. Ordinary addition would result in7 + 8 = 15, but 15 reads as 3 on the clock face. This is because the hour hand makes one rotation every 12 hours and the hour number starts over when the hour hand passes 12. We say that 15 iscongruentto 3 modulo 12, written 15 ≡ 3 (mod 12), so that 7 + 8 ≡ 3 (mod 12). Similarly, if one starts at 12 and waits 8 hours, the hour hand will be at 8. If one instead waited twice as long, 16 hours, the hour hand would be on 4. This can be written as 2 × 8 ≡ 4 (mod 12). Note that after a wait of exactly 12 hours, the hour hand will always be right where it was before, so 12 acts the same as zero, thus 12 ≡ 0 (mod 12). Given anintegerm≥ 1, called amodulus, two integersaandbare said to becongruentmodulom, ifmis adivisorof their difference; that is, if there is an integerksuch that Congruence modulomis acongruence relation, meaning that it is anequivalence relationthat is compatible withaddition,subtraction, andmultiplication. Congruence modulomis denoted by The parentheses mean that(modm)applies to the entire equation, not just to the right-hand side (here,b). This notation is not to be confused with the notationbmodm(without parentheses), which refers to the remainder ofbwhen divided bym, known as themodulooperation: that is,bmodmdenotes the unique integerrsuch that0 ≤r<mandr≡b(modm). The congruence relation may be rewritten as explicitly showing its relationship withEuclidean division. However, thebhere need not be the remainder in the division ofabym.Rather,a≡b(modm)asserts thataandbhave the sameremainderwhen divided bym. That is, where0 ≤r<mis the common remainder. We recover the previous relation (a−b=k m) by subtracting these two expressions and settingk=p−q. Because the congruence modulomis defined by thedivisibilitybymand because−1is aunitin the ring of integers, a number is divisible by−mexactly if it is divisible bym. This means that every non-zero integermmay be taken as modulus. In modulus 12, one can assert that: because the difference is38 − 14 = 24 = 2 × 12, a multiple of12. Equivalently,38and14have the same remainder2when divided by12. The definition of congruence also applies to negative values. For example: The congruence relation satisfies all the conditions of anequivalence relation: Ifa1≡b1(modm)anda2≡b2(modm), or ifa≡b(modm), then:[1] Ifa≡b(modm), then it is generally false thatka≡kb(modm). However, the following is true: For cancellation of common terms, we have the following rules: The last rule can be used to move modular arithmetic into division. Ifbdividesa, then(a/b) modm= (amodb m) /b. Themodular multiplicative inverseis defined by the following rules: The multiplicative inversex≡a−1(modm)may be efficiently computed by solvingBézout's equationa x+m y= 1forx,y, by using theExtended Euclidean algorithm. In particular, ifpis a prime number, thenais coprime withpfor everyasuch that0 <a<p; thus a multiplicative inverse exists for allathat is not congruent to zero modulop. 
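The extended Euclidean computation of a modular inverse mentioned above takes only a few lines; the Python sketch below is illustrative (on Python 3.8 and later, pow(a, -1, m) performs the same computation).

def extended_gcd(a: int, b: int):
    """Return (g, x, y) with a*x + b*y == g == gcd(a, b)."""
    if b == 0:
        return a, 1, 0
    g, x, y = extended_gcd(b, a % b)
    return g, y, x - (a // b) * y

def mod_inverse(a: int, m: int) -> int:
    """Multiplicative inverse of a modulo m, when gcd(a, m) == 1."""
    g, x, _ = extended_gcd(a % m, m)
    if g != 1:
        raise ValueError(f"{a} is not invertible modulo {m}")
    return x % m

assert (3 * mod_inverse(3, 7)) % 7 == 1      # 3 * 5 = 15 ≡ 1 (mod 7)
assert (5 * mod_inverse(5, 12)) % 12 == 1    # 5 is its own inverse mod 12
assert pow(3, -1, 7) == mod_inverse(3, 7)    # built-in equivalent on Python 3.8+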
Some of the more advanced properties of congruence relations are the following: The congruence relation is anequivalence relation. Theequivalence classmodulomof an integerais the set of all integers of the forma+k m, wherekis any integer. It is called thecongruence classorresidue classofamodulom, and may be denoted(amodm), or asaor[a]when the modulusmis known from the context. Each residue class modulomcontains exactly one integer in the range0,...,|m|−1{\displaystyle 0,...,|m|-1}. Thus, these|m|{\displaystyle |m|}integers arerepresentativesof their respective residue classes. It is generally easier to work with integers than sets of integers; that is, the representatives most often considered, rather than their residue classes. Consequently,(amodm)denotes generally the unique integerrsuch that0 ≤r<mandr≡a(modm); it is called theresidueofamodulom. In particular,(amodm) = (bmodm)is equivalent toa≡b(modm), and this explains why "=" is often used instead of "≡" in this context. Each residue class modulommay be represented by any one of its members, although we usually represent each residue class by the smallest nonnegative integer which belongs to that class[2](since this is the proper remainder which results from division). Any two members of different residue classes modulomare incongruent modulom. Furthermore, every integer belongs to one and only one residue class modulom.[3] The set of integers{0, 1, 2, ...,m− 1}is called theleast residue system modulom. Any set ofmintegers, no two of which are congruent modulom, is called acomplete residue system modulom. The least residue system is a complete residue system, and a complete residue system is simply a set containing precisely onerepresentativeof each residue class modulom.[4]For example, the least residue system modulo4is{0, 1, 2, 3}. Some other complete residue systems modulo4include: Some sets that arenotcomplete residue systems modulo 4 are: Given theEuler's totient functionφ(m), any set ofφ(m)integers that arerelatively primetomand mutually incongruent under modulusmis called areduced residue system modulom.[5]The set{5, 15}from above, for example, is an instance of a reduced residue system modulo 4. Covering systems represent yet another type of residue system that may contain residues with varying moduli. In the context of this paragraph, the modulusmis almost always taken as positive. The set of allcongruence classesmodulomis aringcalled thering of integers modulom, and is denotedZ/mZ{\textstyle \mathbb {Z} /m\mathbb {Z} },Z/m{\displaystyle \mathbb {Z} /m}, orZm{\displaystyle \mathbb {Z} _{m}}.[6]The ringZ/mZ{\displaystyle \mathbb {Z} /m\mathbb {Z} }is fundamental to various branches of mathematics (see§ Applicationsbelow). (In some parts ofnumber theorythe notationZm{\displaystyle \mathbb {Z} _{m}}is avoided because it can be confused with the set ofm-adic integers.) Form> 0one has Whenm= 1,Z/mZ{\displaystyle \mathbb {Z} /m\mathbb {Z} }is thezero ring; whenm= 0,Z/mZ{\displaystyle \mathbb {Z} /m\mathbb {Z} }is not anempty set; rather, it isisomorphictoZ{\displaystyle \mathbb {Z} }, sincea0= {a}. Addition, subtraction, and multiplication are defined onZ/mZ{\displaystyle \mathbb {Z} /m\mathbb {Z} }by the following rules: The properties given before imply that, with these operations,Z/mZ{\displaystyle \mathbb {Z} /m\mathbb {Z} }is acommutative ring. For example, in the ringZ/24Z{\displaystyle \mathbb {Z} /24\mathbb {Z} }, one has as in the arithmetic for the 24-hour clock. 
The notationZ/mZ{\displaystyle \mathbb {Z} /m\mathbb {Z} }is used because this ring is thequotient ringofZ{\displaystyle \mathbb {Z} }by theidealmZ{\displaystyle m\mathbb {Z} }, the set formed by all multiples ofm, i.e., all numbersk mwithk∈Z.{\displaystyle k\in \mathbb {Z} .} Under addition,Z/mZ{\displaystyle \mathbb {Z} /m\mathbb {Z} }is acyclic group. All finite cyclic groups are isomorphic withZ/mZ{\displaystyle \mathbb {Z} /m\mathbb {Z} }for somem.[7] The ring of integers modulomis afield, i.e., every nonzero element has amultiplicative inverse, if and only ifmisprime. Ifm=pkis aprime powerwithk> 1, there exists a unique (up to isomorphism) finite fieldGF(m)=Fm{\displaystyle \mathrm {GF} (m)=\mathbb {F} _{m}}withmelements, which isnotisomorphic toZ/mZ{\displaystyle \mathbb {Z} /m\mathbb {Z} }, which fails to be a field because it haszero-divisors. Ifm> 1,(Z/mZ)×{\displaystyle (\mathbb {Z} /m\mathbb {Z} )^{\times }}denotes themultiplicative group of the integers modulomthat are invertible. It consists of the congruence classesam, whereais coprimetom; these are precisely the classes possessing a multiplicative inverse. They form anabelian groupunder multiplication; its order isφ(m), whereφisEuler's totient function. In pure mathematics, modular arithmetic is one of the foundations ofnumber theory, touching on almost every aspect of its study, and it is also used extensively ingroup theory,ring theory,knot theory, andabstract algebra. In applied mathematics, it is used incomputer algebra,cryptography,computer science,chemistryand thevisualandmusicalarts. A very practical application is to calculate checksums within serial number identifiers. For example,International Standard Book Number(ISBN) uses modulo 11 (for 10-digit ISBN) or modulo 10 (for 13-digit ISBN) arithmetic for error detection. Likewise,International Bank Account Numbers(IBANs) use modulo 97 arithmetic to spot user input errors in bank account numbers. In chemistry, the last digit of theCAS registry number(a unique identifying number for each chemical compound) is acheck digit, which is calculated by taking the last digit of the first two parts of the CAS registry number times 1, the previous digit times 2, the previous digit times 3 etc., adding all these up and computing the sum modulo 10. In cryptography, modular arithmetic directly underpinspublic keysystems such asRSAandDiffie–Hellman, and providesfinite fieldswhich underlieelliptic curves, and is used in a variety ofsymmetric key algorithmsincludingAdvanced Encryption Standard(AES),International Data Encryption Algorithm(IDEA), andRC4. RSA and Diffie–Hellman usemodular exponentiation. In computer algebra, modular arithmetic is commonly used to limit the size of integer coefficients in intermediate calculations and data. It is used inpolynomial factorization, a problem for which all known efficient algorithms use modular arithmetic. It is used by the most efficient implementations ofpolynomial greatest common divisor, exactlinear algebraandGröbner basisalgorithms over the integers and the rational numbers. As posted onFidonetin the 1980s and archived atRosetta Code, modular arithmetic was used to disproveEuler's sum of powers conjectureon aSinclair QLmicrocomputerusing just one-fourth of the integer precision used by aCDC 6600supercomputerto disprove it two decades earlier via abrute force search.[8] In computer science, modular arithmetic is often applied inbitwise operationsand other operations involving fixed-width, cyclicdata structures. 
The modulo operation, as implemented in manyprogramming languagesandcalculators, is an application of modular arithmetic that is often used in this context. The logical operatorXORsums 2 bits, modulo 2. The use oflong divisionto turn a fraction into arepeating decimalin any base b is equivalent to modular multiplication of b modulo the denominator. For example, for decimal, b = 10. In music, arithmetic modulo 12 is used in the consideration of the system oftwelve-tone equal temperament, whereoctaveandenharmonicequivalency occurs (that is, pitches in a 1:2 or 2:1 ratio are equivalent, and C-sharpis considered the same as D-flat). The method ofcasting out ninesoffers a quick check of decimal arithmetic computations performed by hand. It is based on modular arithmetic modulo 9, and specifically on the crucial property that 10 ≡ 1 (mod 9). Arithmetic modulo 7 is used in algorithms that determine the day of the week for a given date. In particular,Zeller's congruenceand theDoomsday algorithmmake heavy use of modulo-7 arithmetic. More generally, modular arithmetic also has application in disciplines such aslaw(e.g.,apportionment),economics(e.g.,game theory) and other areas of thesocial sciences, whereproportionaldivision and allocation of resources plays a central part of the analysis. Since modular arithmetic has such a wide range of applications, it is important to know how hard it is to solve a system of congruences. A linear system of congruences can be solved inpolynomial timewith a form ofGaussian elimination, for details seelinear congruence theorem. Algorithms, such asMontgomery reduction, also exist to allow simple arithmetic operations, such as multiplication andexponentiation modulom, to be performed efficiently on large numbers. Some operations, like finding adiscrete logarithmor aquadratic congruenceappear to be as hard asinteger factorizationand thus are a starting point forcryptographic algorithmsandencryption. These problems might beNP-intermediate. Solving a system of non-linear modular arithmetic equations isNP-complete.[9]
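As a small worked illustration of casting out nines mentioned above (a sketch added to this text, not from the article): because 10 ≡ 1 (mod 9), the digit sum of a decimal number is congruent to the number modulo 9, so checking residues mod 9 stands in for summing digits by hand and usually exposes an arithmetic slip.

a, b = 3264, 8415
correct = a * b                  # 27,466,560
claimed = 27_466_550             # a product with a slip in the tens digit

assert (a % 9) * (b % 9) % 9 == correct % 9
assert (a % 9) * (b % 9) % 9 != claimed % 9   # the slip is detected
# Errors that happen to preserve the residue mod 9 (e.g. digit transpositions) are missed.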
https://en.wikipedia.org/wiki/Modular_arithmetic#Residue_class
Innumber theory, anintegerqis aquadratic residuemodulonif it iscongruentto aperfect squaremodulon; that is, if there exists an integerxsuch that Otherwise,qis aquadratic nonresiduemodulon. Quadratic residues are used in applications ranging fromacoustical engineeringtocryptographyand thefactoring of large numbers. Fermat,Euler,Lagrange,Legendre, and other number theorists of the 17th and 18th centuries established theorems[1]and formed conjectures[2]about quadratic residues, but the first systematic treatment is § IV ofGauss'sDisquisitiones Arithmeticae(1801). Article 95 introduces the terminology "quadratic residue" and "quadratic nonresidue", and says that if the context makes it clear, the adjective "quadratic" may be dropped. For a givenn, a list of the quadratic residues modulonmay be obtained by simply squaring all the numbers 0, 1, ...,n− 1. Sincea≡b(modn) impliesa2≡b2(modn), any other quadratic residue is congruent (modn) to some in the obtained list. But the obtained list is not composed of mutually incongruent quadratic residues (mod n) only. Sincea2≡(n−a)2(modn), the list obtained by squaring all numbers in the list 1, 2, ...,n− 1(or in the list 0, 1, ...,n) is symmetric (modn) around its midpoint, hence it is actually only needed to square all the numbers in the list 0, 1, ...,⌊{\displaystyle \lfloor }n/2⌋{\displaystyle \rfloor }. The list so obtained may still contain mutually congruent numbers (modn). Thus, the number of mutually noncongruent quadratic residues moduloncannot exceedn/2 + 1 (neven) or (n+ 1)/2 (nodd).[3] The product of two residues is always a residue. Modulo 2, every integer is a quadratic residue. Modulo an oddprime numberpthere are (p+ 1)/2 residues (including 0) and (p− 1)/2 nonresidues, byEuler's criterion. In this case, it is customary to consider 0 as a special case and work within themultiplicative group of nonzero elementsof thefield(Z/pZ){\displaystyle (\mathbb {Z} /p\mathbb {Z} )}. In other words, every congruence class except zero modulophas a multiplicative inverse. This is not true for composite moduli.[4] Following this convention, the multiplicative inverse of a residue is a residue, and the inverse of a nonresidue is a nonresidue.[5] Following this convention, modulo an odd prime number there is an equal number of residues and nonresidues.[4] Modulo a prime, the product of two nonresidues is a residue and the product of a nonresidue and a (nonzero) residue is a nonresidue.[5] The first supplement[6]to thelaw of quadratic reciprocityis that ifp≡ 1 (mod 4) then −1 is a quadratic residue modulop, and ifp≡ 3 (mod 4) then −1 is a nonresidue modulop. This implies the following: Ifp≡ 1 (mod 4) the negative of a residue modulopis a residue and the negative of a nonresidue is a nonresidue. Ifp≡ 3 (mod 4) the negative of a residue modulopis a nonresidue and the negative of a nonresidue is a residue. All odd squares are ≡ 1 (mod 8) and thus also ≡ 1 (mod 4). Ifais an odd number andm= 8, 16, or some higher power of 2, thenais a residue modulomif and only ifa≡ 1 (mod 8).[7] For example, mod (32) the odd squares are and the even ones are So a nonzero number is a residue mod 8, 16, etc., if and only if it is of the form 4k(8n+ 1). A numberarelatively prime to an odd primepis a residue modulo any power ofpif and only if it is a residue modulop.[8] If the modulus ispn, Notice that the rules are different for powers of two and powers of odd primes. 
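The squaring procedure described earlier in this passage is easy to carry out by machine; the Python sketch below (illustrative only) lists the distinct quadratic residues modulo n by squaring 0 through n/2, and, for an odd prime modulus, cross-checks the list against Euler's criterion.

def quadratic_residues(n: int):
    """Distinct quadratic residues modulo n, obtained by squaring 0 .. n//2."""
    return sorted({(x * x) % n for x in range(n // 2 + 1)})

def is_residue_mod_prime(a: int, p: int) -> bool:
    """Euler's criterion: for an odd prime p and gcd(a, p) = 1,
    a is a residue iff a^((p-1)/2) ≡ 1 (mod p)."""
    return pow(a % p, (p - 1) // 2, p) == 1

print(quadratic_residues(10))    # [0, 1, 4, 5, 6, 9]
print(quadratic_residues(11))    # [0, 1, 3, 4, 5, 9] -- (11-1)/2 = 5 nonzero residues
assert all(is_residue_mod_prime(a, 11) == (a in quadratic_residues(11))
           for a in range(1, 11))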
Modulo an odd prime powern=pk, the products of residues and nonresidues relatively prime topobey the same rules as they do modp;pis a nonresidue, and in general all the residues and nonresidues obey the same rules, except that the products will be zero if the power ofpin the product ≥n. Modulo 8, the product of the nonresidues 3 and 5 is the nonresidue 7, and likewise for permutations of 3, 5 and 7. In fact, the multiplicative group of the non-residues and 1 form theKlein four-group. The basic fact in this case is Modulo a composite number, the product of two residues is a residue. The product of a residue and a nonresidue may be a residue, a nonresidue, or zero. For example, from the table for modulus 61, 2,3,4, 5 (residues inbold). The product of the residue 3 and the nonresidue 5 is the residue 3, whereas the product of the residue 4 and the nonresidue 2 is the nonresidue 2. Also, the product of two nonresidues may be either a residue, a nonresidue, or zero. For example, from the table for modulus 151, 2, 3,4, 5,6, 7, 8,9,10, 11, 12, 13, 14 (residues inbold). The product of the nonresidues 2 and 8 is the residue 1, whereas the product of the nonresidues 2 and 7 is the nonresidue 14. This phenomenon can best be described using the vocabulary of abstract algebra. The congruence classes relatively prime to the modulus are agroupunder multiplication, called thegroup of unitsof thering(Z/nZ){\displaystyle (\mathbb {Z} /n\mathbb {Z} )}, and the squares are asubgroupof it. Different nonresidues may belong to differentcosets, and there is no simple rule that predicts which one their product will be in. Modulo a prime, there is only the subgroup of squares and a single coset. The fact that, e.g., modulo 15 the product of the nonresidues 3 and 5, or of the nonresidue 5 and the residue 9, or the two residues 9 and 10 are all zero comes from working in the full ring(Z/nZ){\displaystyle (\mathbb {Z} /n\mathbb {Z} )}, which haszero divisorsfor compositen. For this reason some authors[10]add to the definition that a quadratic residueamust not only be a square but must also berelatively primeto the modulusn. (ais coprime tonif and only ifa2is coprime ton.) Although it makes things tidier, this article does not insist that residues must be coprime to the modulus. Gauss[11]usedRandNto denote residuosity and non-residuosity, respectively; Although this notation is compact and convenient for some purposes,[12][13]a more useful notation is theLegendre symbol, also called thequadratic character, which is defined for all integersaand positive oddprime numberspas There are two reasons why numbers ≡ 0 (modp) are treated specially. As we have seen, it makes many formulas and theorems easier to state. The other (related) reason is that the quadratic character is ahomomorphismfrom themultiplicative group of nonzero congruence classes modulopto thecomplex numbersunder multiplication. 
Setting(npp)=0{\displaystyle ({\tfrac {np}{p}})=0}allows itsdomainto be extended to the multiplicativesemigroupof all the integers.[14] One advantage of this notation over Gauss's is that the Legendre symbol is a function that can be used in formulas.[15]It can also easily be generalized tocubic, quartic and higher power residues.[16] There is a generalization of the Legendre symbol for composite values ofp, theJacobi symbol, but its properties are not as simple: ifmis composite and the Jacobi symbol(am)=−1,{\displaystyle ({\tfrac {a}{m}})=-1,}thenaNm, and ifaRmthen(am)=1,{\displaystyle ({\tfrac {a}{m}})=1,}but if(am)=1{\displaystyle ({\tfrac {a}{m}})=1}we do not know whetheraRmoraNm. For example:(215)=1{\displaystyle ({\tfrac {2}{15}})=1}and(415)=1{\displaystyle ({\tfrac {4}{15}})=1}, but2 N 15and4 R 15. Ifmis prime, the Jacobi and Legendre symbols agree. Although quadratic residues appear to occur in a rather random pattern modulon, and this has been exploited in suchapplicationsasacousticsandcryptography, their distribution also exhibits some striking regularities. UsingDirichlet's theoremon primes inarithmetic progressions, thelaw of quadratic reciprocity, and theChinese remainder theorem(CRT) it is easy to see that for anyM> 0 there are primespsuch that the numbers 1, 2, ...,Mare all residues modulop. For example, ifp≡ 1 (mod 8), (mod 12), (mod 5) and (mod 28), then by the law of quadratic reciprocity 2, 3, 5, and 7 will all be residues modulop, and thus all numbers 1–10 will be. The CRT says that this is the same asp≡ 1 (mod 840), and Dirichlet's theorem says there are an infinite number of primes of this form. 2521 is the smallest, and indeed 12≡ 1, 10462≡ 2, 1232≡ 3, 22≡ 4, 6432≡ 5, 872≡ 6, 6682≡ 7, 4292≡ 8, 32≡ 9, and 5292≡ 10 (mod 2521). The first of these regularities stems fromPeter Gustav Lejeune Dirichlet's work (in the 1830s) on theanalytic formulafor theclass numberof binaryquadratic forms.[17]Letqbe a prime number,sa complex variable, and define aDirichlet L-functionas Dirichlet showed that ifq≡ 3 (mod 4), then Therefore, in this case (primeq≡ 3 (mod 4)), the sum of the quadratic residues minus the sum of the nonresidues in the range 1, 2, ...,q− 1 is a negative number. For example, modulo 11, In fact the difference will always be an odd multiple ofqifq> 3.[18]In contrast, for primeq≡ 1 (mod 4), the sum of the quadratic residues minus the sum of the nonresidues in the range 1, 2, ...,q− 1 is zero, implying that both sums equalq(q−1)4{\displaystyle {\frac {q(q-1)}{4}}}.[citation needed] Dirichlet also proved that for primeq≡ 3 (mod 4), This implies that there are more quadratic residues than nonresidues among the numbers 1, 2, ..., (q− 1)/2. For example, modulo 11 there are four residues less than 6 (namely 1, 3, 4, and 5), but only one nonresidue (2). An intriguing fact about these two theorems is that all known proofs rely on analysis; no-one has ever published a simple or direct proof of either statement.[19] Ifpandqare odd primes, then: ((pis a quadratic residue modq) if and only if (qis a quadratic residue modp)) if and only if (at least one ofpandqis congruent to 1 mod 4). That is: where(pq){\displaystyle \left({\frac {p}{q}}\right)}is theLegendre symbol. Thus, for numbersaand odd primespthat don't dividea: Modulo a primep, the number of pairsn,n+ 1 wherenRpandn+ 1 Rp, ornNpandn+ 1 Rp, etc., are almost equal. More precisely,[20][21]letpbe an odd prime. 
Fori,j= 0, 1 define the sets and let That is, Then ifp≡ 1 (mod 4) and ifp≡ 3 (mod 4) For example: (residues inbold) Modulo 17 Modulo 19 Gauss (1828)[22]introduced this sort of counting when he proved that ifp≡ 1 (mod 4) thenx4≡ 2 (modp) can be solved if and only ifp=a2+ 64b2. The values of(ap){\displaystyle ({\tfrac {a}{p}})}for consecutive values ofamimic a random variable like acoin flip.[23]Specifically,PólyaandVinogradovproved[24](independently) in 1918 that for any nonprincipalDirichlet characterχ(n) moduloqand any integersMandN, inbig O notation. Setting this shows that the number of quadratic residues moduloqin any interval of lengthNis It is easy[25]to prove that In fact,[26] MontgomeryandVaughanimproved this in 1977, showing that, if thegeneralized Riemann hypothesisis true then This result cannot be substantially improved, forSchurhad proved in 1918 that andPaleyhad proved in 1932 that for infinitely manyd> 0. The least quadratic residue modpis clearly 1. The question of the magnitude of the least quadratic non-residuen(p) is more subtle, but it is always prime, with 7 appearing for the first time at 71. The Pólya–Vinogradov inequality above gives O(√plogp). The best unconditional estimate isn(p) ≪pθfor any θ>1/4√e, obtained by estimates of Burgess oncharacter sums.[27] Assuming theGeneralised Riemann hypothesis, Ankeny obtainedn(p) ≪ (logp)2.[28] Linnikshowed that the number ofpless thanXsuch thatn(p) > Xεis bounded by a constant depending on ε.[27] The least quadratic non-residues modpfor odd primespare: Letpbe an odd prime. Thequadratic excessE(p) is the number of quadratic residues on the range (0,p/2) minus the number in the range (p/2,p) (sequenceA178153in theOEIS). Forpcongruent to 1 mod 4, the excess is zero, since −1 is a quadratic residue and the residues are symmetric underr↔p−r. Forpcongruent to 3 mod 4, the excessEis always positive.[29] That is, given a numberaand a modulusn, how hard is it An important difference between prime and composite moduli shows up here. Modulo a primep, a quadratic residueahas 1 + (a|p) roots (i.e. zero ifaNp, one ifa≡ 0 (modp), or two ifaRpand gcd(a,p) = 1.) In general if a composite modulusnis written as a product of powers of distinct primes, and there aren1roots modulo the first one,n2mod the second, ..., there will ben1n2... roots modulon. The theoretical way solutions modulo the prime powers are combined to make solutions modulonis called theChinese remainder theorem; it can be implemented with an efficient algorithm.[30] For example: First off, if the modulusnis prime theLegendre symbol(an){\displaystyle \left({\frac {a}{n}}\right)}can bequickly computedusing a variation ofEuclid's algorithm[31]or theEuler's criterion. If it is −1 there is no solution. Secondly, assuming that(an)=1{\displaystyle \left({\frac {a}{n}}\right)=1}, ifn≡ 3 (mod 4),Lagrangefound that the solutions are given by andLegendrefound a similar solution[32]ifn≡ 5 (mod 8): For primen≡ 1 (mod 8), however, there is no known formula.Tonelli[33](in 1891) andCipolla[34]found efficient algorithms that work for all prime moduli. Both algorithms require finding a quadratic nonresidue modulon, and there is no efficient deterministic algorithm known for doing that. But since half the numbers between 1 andnare nonresidues, picking numbersxat random and calculating the Legendre symbol(xn){\displaystyle \left({\frac {x}{n}}\right)}until a nonresidue is found will quickly produce one. A slight variant of this algorithm is theTonelli–Shanks algorithm. 
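A hedged Python sketch of the easy case described above: for a prime p ≡ 3 (mod 4) and a residue a, Lagrange's formula gives a square root as a^((p+1)/4) mod p, and a random search finds a quadratic nonresidue quickly because about half of all nonzero classes are nonresidues. The function names are illustrative, and the prime 10007 is chosen only as an example.

```python
import random

def legendre(a, p):
    # Euler's criterion; assumes p is an odd prime.
    a %= p
    if a == 0:
        return 0
    return 1 if pow(a, (p - 1) // 2, p) == 1 else -1

def sqrt_mod_p(a, p):
    # Lagrange's formula: valid when p % 4 == 3 and a is a quadratic residue.
    assert p % 4 == 3 and legendre(a, p) == 1
    return pow(a, (p + 1) // 4, p)

def random_nonresidue(p):
    # Roughly half of the nonzero classes are nonresidues, so ~2 trials expected.
    while True:
        x = random.randrange(2, p)
        if legendre(x, p) == -1:
            return x

p = 10007                      # an example prime with p % 4 == 3
a = pow(1234, 2, p)            # a guaranteed quadratic residue mod p
r = sqrt_mod_p(a, p)
assert r * r % p == a
print("a square root of", a, "mod", p, "is", r)
print("a quadratic nonresidue mod", p, "is", random_nonresidue(p))
```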
If the modulusnis aprime powern=pe, a solution may be found modulopand "lifted" to a solution modulonusingHensel's lemmaor an algorithm of Gauss.[8] If the modulusnhas been factored into prime powers the solution was discussed above. Ifnis not congruent to 2 modulo 4 and theKronecker symbol(an)=−1{\displaystyle \left({\tfrac {a}{n}}\right)=-1}then there is no solution; ifnis congruent to 2 modulo 4 and(an/2)=−1{\displaystyle \left({\tfrac {a}{n/2}}\right)=-1}, then there is also no solution. Ifnis not congruent to 2 modulo 4 and(an)=1{\displaystyle \left({\tfrac {a}{n}}\right)=1}, ornis congruent to 2 modulo 4 and(an/2)=1{\displaystyle \left({\tfrac {a}{n/2}}\right)=1}, there may or may not be one. If the complete factorization ofnis not known, and(an)=1{\displaystyle \left({\tfrac {a}{n}}\right)=1}andnis not congruent to 2 modulo 4, ornis congruent to 2 modulo 4 and(an/2)=1{\displaystyle \left({\tfrac {a}{n/2}}\right)=1}, the problem is known to be equivalent tointeger factorizationofn(i.e. an efficient solution to either problem could be used to solve the other efficiently). The above discussion indicates how knowing the factors ofnallows us to find the roots efficiently. Say there were an efficient algorithm for finding square roots modulo a composite number. The articlecongruence of squaresdiscusses how finding two numbers x and y wherex2≡y2(modn)andx≠ ±ysuffices to factorizenefficiently. Generate a random number, square it modulon, and have the efficient square root algorithm find a root. Repeat until it returns a number not equal to the one we originally squared (or its negative modulon), then follow the algorithm described in congruence of squares. The efficiency of the factoring algorithm depends on the exact characteristics of the root-finder (e.g. does it return all roots? just the smallest one? a random one?), but it will be efficient.[35] Determining whetherais a quadratic residue or nonresidue modulon(denotedaRnoraNn) can be done efficiently for primenby computing the Legendre symbol. However, for compositen, this forms thequadratic residuosity problem, which is not known to be ashardas factorization, but is assumed to be quite hard. On the other hand, if we want to know if there is a solution forxless than some given limitc, this problem isNP-complete;[36]however, this is afixed-parameter tractableproblem, wherecis the parameter. In general, to determine ifais a quadratic residue modulo compositen, one can use the following theorem:[37] Letn> 1, andgcd(a,n) = 1. Thenx2≡a(modn)is solvable if and only if: Note: This theorem essentially requires that the factorization ofnis known. Also notice that ifgcd(a,n) =m, then the congruence can be reduced toa/m≡x2/m(modn/m), but then this takes the problem away from quadratic residues (unlessmis a square). The list of the number of quadratic residues modulon, forn= 1, 2, 3 ..., looks like: A formula to count the number of squares modulonis given by Stangl.[38] Sound diffusershave been based on number-theoretic concepts such asprimitive rootsand quadratic residues.[39] Paley graphsare dense undirected graphs, one for each primep≡ 1 (mod 4), that form an infinite family ofconference graphs, which yield an infinite family ofsymmetricconference matrices. Paley digraphs are directed analogs of Paley graphs, one for eachp≡ 3 (mod 4), that yieldantisymmetricconference matrices. The construction of these graphs uses quadratic residues. 
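An illustrative Python sketch of the Paley graph construction just mentioned: for a prime p ≡ 1 (mod 4), the vertices are the classes 0, ..., p − 1 and two vertices are joined when their difference is a nonzero quadratic residue; because −1 is a residue for such p, the graph is undirected and regular of degree (p − 1)/2.

```python
# Sketch: Paley graph on p vertices for an example prime p = 13 (p % 4 == 1).
p = 13
qr = {x * x % p for x in range(1, p)}          # nonzero quadratic residues mod p

adj = {v: set() for v in range(p)}
for u in range(p):
    for v in range(p):
        if u != v and (u - v) % p in qr:
            adj[u].add(v)

# -1 is a residue when p = 1 (mod 4), so adjacency is symmetric,
# and every vertex is adjacent to exactly (p - 1)/2 others.
assert all(v in adj[u] for v in range(p) for u in adj[v])
assert all(len(adj[v]) == (p - 1) // 2 for v in range(p))
print("Paley graph on", p, "vertices, regular of degree", (p - 1) // 2)
```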
The fact that finding a square root of a number modulo a large compositenis equivalent to factoring (which is widely believed to be ahard problem) has been used for constructingcryptographic schemessuch as theRabin cryptosystemand theoblivious transfer. Thequadratic residuosity problemis the basis for theGoldwasser-Micali cryptosystem. Thediscrete logarithmis a similar problem that is also used in cryptography. Euler's criterionis a formula for the Legendre symbol (a|p) wherepis prime. Ifpis composite the formula may or may not compute (a|p) correctly. TheSolovay–Strassen primality testfor whether a given numbernis prime or composite picks a randomaand computes (a|n) using a modification of Euclid's algorithm,[40]and also using Euler's criterion.[41]If the results disagree,nis composite; if they agree,nmay be composite or prime. For a compositenat least 1/2 the values ofain the range 2, 3, ...,n− 1 will return "nis composite"; for primennone will. If, after using many different values ofa,nhas not been proved composite it is called a "probable prime". TheMiller–Rabin primality testis based on the same principles. There is a deterministic version of it, but the proof that it works depends on thegeneralized Riemann hypothesis; the output from this test is "nis definitely composite" or "eithernis prime or the GRH is false". If the second output ever occurs for a compositen, then the GRH would be false, which would have implications through many branches of mathematics. In § VI of theDisquisitiones Arithmeticae[42]Gauss discusses two factoring algorithms that use quadratic residues and thelaw of quadratic reciprocity. Several modern factorization algorithms (includingDixon's algorithm, thecontinued fraction method, thequadratic sieve, and thenumber field sieve) generate small quadratic residues (modulo the number being factorized) in an attempt to find acongruence of squareswhich will yield a factorization. The number field sieve is the fastest general-purpose factorization algorithm known. The following table (sequenceA096008in theOEIS) lists the quadratic residues mod 1 to 75 (ared numbermeans it is not coprime ton). (For the quadratic residues coprime ton, seeOEIS:A096103, and for nonzero quadratic residues, seeOEIS:A046071.) TheDisquisitiones Arithmeticaehas been translated from Gauss'sCiceronian LatinintoEnglishandGerman. The German edition includes all of his papers on number theory: all the proofs of quadratic reciprocity, the determination of the sign of theGauss sum, the investigations intobiquadratic reciprocity, and unpublished notes.
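Referring back to the Solovay–Strassen test described above, the following Python sketch (with a hand-rolled Jacobi symbol rather than a library routine) compares the Jacobi symbol with Euler's criterion for random bases; any disagreement certifies compositeness.

```python
import random

def jacobi(a, n):
    # Jacobi symbol (a|n); n must be a positive odd integer.
    assert n > 0 and n % 2 == 1
    a %= n
    result = 1
    while a != 0:
        while a % 2 == 0:
            a //= 2
            if n % 8 in (3, 5):
                result = -result
        a, n = n, a
        if a % 4 == 3 and n % 4 == 3:
            result = -result
        a %= n
    return result if n == 1 else 0

def solovay_strassen(n, rounds=20):
    # Compare the Jacobi symbol (a|n) with Euler's criterion a^((n-1)/2) mod n
    # for random bases a; any disagreement proves n composite.
    if n < 2 or n % 2 == 0:
        return n == 2
    for _ in range(rounds):
        a = random.randrange(2, n)
        j = jacobi(a, n) % n           # map -1 to n - 1 for the comparison
        if j == 0 or pow(a, (n - 1) // 2, n) != j:
            return False               # definitely composite
    return True                        # probably prime

print("2521:", solovay_strassen(2521))          # prime, per the example above
print("2521 * 7:", solovay_strassen(2521 * 7))  # composite
```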
https://en.wikipedia.org/wiki/Quadratic_residue#Prime_power_modulus
TheArtin reciprocity law, which was established byEmil Artinin a series of papers (1924; 1927; 1930), is a general theorem innumber theorythat forms a central part of globalclass field theory.[1]The term "reciprocity law" refers to a long line of more concrete number theoretic statements which it generalized, from thequadratic reciprocity lawand the reciprocity laws ofEisensteinandKummertoHilbert'sproduct formula for thenorm symbol. Artin's result provided a partial solution toHilbert's ninth problem. LetL/K{\displaystyle L/K}be aGalois extensionofglobal fieldsandCL{\displaystyle C_{L}}stand for theidèle class groupofL{\displaystyle L}. One of the statements of theArtin reciprocity lawis that there is a canonical isomorphism called theglobal symbol map[2][3] whereab{\displaystyle {\text{ab}}}denotes theabelianizationof a group, andGal⁡(L/K){\displaystyle \operatorname {Gal} (L/K)}is theGalois groupofL{\displaystyle L}overK{\displaystyle K}. The mapθ{\displaystyle \theta }is defined by assembling the maps called thelocal Artin symbol, thelocal reciprocity mapor thenorm residue symbol[4][5] for differentplacesv{\displaystyle v}ofK{\displaystyle K}. More precisely,θ{\displaystyle \theta }is given by the local mapsθv{\displaystyle \theta _{v}}on thev{\displaystyle v}-component of an idèle class. The mapsθv{\displaystyle \theta _{v}}are isomorphisms. This is the content of thelocal reciprocity law, a main theorem oflocal class field theory. A cohomological proof of the global reciprocity law can be achieved by first establishing that constitutes aclass formationin the sense of Artin and Tate.[6]Then one proves that whereH^i{\displaystyle {\hat {H}}^{i}}denote theTate cohomology groups. Working out the cohomology groups establishes thatθ{\displaystyle \theta }is an isomorphism. Artin's reciprocity law implies a description of theabelianizationof the absoluteGalois groupof aglobal fieldKwhich is based on theHasse local–global principleand the use of theFrobenius elements. Together with theTakagi existence theorem, it is used to describe theabelian extensionsofKin terms of the arithmetic ofKand to understand the behavior of thenonarchimedean placesin them. Therefore, theArtin reciprocity lawcan be interpreted as one of the main theorems of global class field theory. It can be used to prove thatArtin L-functionsaremeromorphic, and also to prove theChebotarev density theorem.[7] Two years after the publication of his general reciprocity law in 1927, Artin rediscovered thetransfer homomorphismof I. Schur and used the reciprocity law to translate theprincipalization problemfor ideal classes ofalgebraic numberfields into the group theoretic task of determining the kernels of transfers of finite non-abelian groups.[8] (Seemath.stackexchange.comfor an explanation of some of the terms used here) The definition of the Artin map for afiniteabelian extensionL/Kofglobal fields(such as a finite abelian extension ofQ{\displaystyle \mathbb {Q} }) has a concrete description in terms ofprime idealsandFrobenius elements. Ifp{\displaystyle {\mathfrak {p}}}is a prime ofKthen thedecomposition groupsof primesP{\displaystyle {\mathfrak {P}}}abovep{\displaystyle {\mathfrak {p}}}are equal in Gal(L/K) since the latter group isabelian. 
Ifp{\displaystyle {\mathfrak {p}}}isunramifiedinL, then the decomposition groupDp{\displaystyle D_{\mathfrak {p}}}is canonically isomorphic to the Galois group of the extension of residue fieldsOL,P/P{\displaystyle {\mathcal {O}}_{L,{\mathfrak {P}}}/{\mathfrak {P}}}overOK,p/p{\displaystyle {\mathcal {O}}_{K,{\mathfrak {p}}}/{\mathfrak {p}}}. There is therefore a canonically defined Frobenius element in Gal(L/K) denoted byFrobp{\displaystyle \mathrm {Frob} _{\mathfrak {p}}}or(L/Kp){\displaystyle \left({\frac {L/K}{\mathfrak {p}}}\right)}. If Δ denotes therelative discriminantofL/K, theArtin symbol(orArtin map, or(global) reciprocity map) ofL/Kis defined on thegroup of prime-to-Δ fractional ideals,IKΔ{\displaystyle I_{K}^{\Delta }}, by linearity: TheArtin reciprocity law(orglobal reciprocity law) states that there is amoduluscofKsuch that the Artin map induces an isomorphism whereKc,1is theray moduloc, NL/Kis the norm map associated toL/KandILc{\displaystyle I_{L}^{\mathbf {c} }}is the fractional ideals ofLprime toc. Such a moduluscis called adefining modulus forL/K. The smallest defining modulus is called theconductor ofL/Kand typically denotedf(L/K).{\displaystyle {\mathfrak {f}}(L/K).} Ifd≠1{\displaystyle d\neq 1}is asquarefree integer,K=Q,{\displaystyle K=\mathbb {Q} ,}andL=Q(d){\displaystyle L=\mathbb {Q} ({\sqrt {d}})}, thenGal⁡(L/Q){\displaystyle \operatorname {Gal} (L/\mathbb {Q} )}can be identified with {±1}. The discriminant Δ ofLoverQ{\displaystyle \mathbb {Q} }isdor 4ddepending on whetherd≡ 1 (mod 4) or not. The Artin map is then defined on primespthat do not divide Δ by where(Δp){\displaystyle \left({\frac {\Delta }{p}}\right)}is theKronecker symbol.[9]More specifically, the conductor ofL/Q{\displaystyle L/\mathbb {Q} }is the principal ideal (Δ) or (Δ)∞ according to whether Δ is positive or negative,[10]and the Artin map on a prime-to-Δ ideal (n) is given by the Kronecker symbol(Δn).{\displaystyle \left({\frac {\Delta }{n}}\right).}This shows that a primepis split or inert inLaccording to whether(Δp){\displaystyle \left({\frac {\Delta }{p}}\right)}is 1 or −1. Letm> 1 be either an odd integer or a multiple of 4, letζm{\displaystyle \zeta _{m}}be aprimitivemth root of unity, and letL=Q(ζm){\displaystyle L=\mathbb {Q} (\zeta _{m})}be themthcyclotomic field.Gal⁡(L/Q){\displaystyle \operatorname {Gal} (L/\mathbb {Q} )}can be identified with(Z/mZ)×{\displaystyle (\mathbb {Z} /m\mathbb {Z} )^{\times }}by sending σ toaσgiven by the rule The conductor ofL/Q{\displaystyle L/\mathbb {Q} }is (m)∞,[11]and the Artin map on a prime-to-mideal (n) is simplyn(modm) in(Z/mZ)×.{\displaystyle (\mathbb {Z} /m\mathbb {Z} )^{\times }.}[12] Letpandℓ{\displaystyle \ell }be distinct odd primes. For convenience, letℓ∗=(−1)ℓ−12ℓ{\displaystyle \ell ^{*}=(-1)^{\frac {\ell -1}{2}}\ell }(which is always 1 (mod 4)). 
Then, quadratic reciprocity states that The relation between the quadratic and Artin reciprocity laws is given by studying the quadratic fieldF=Q(ℓ∗){\displaystyle F=\mathbb {Q} ({\sqrt {\ell ^{*}}})}and the cyclotomic fieldL=Q(ζℓ){\displaystyle L=\mathbb {Q} (\zeta _{\ell })}as follows.[9]First,Fis a subfield ofL, so ifH= Gal(L/F) andG=Gal⁡(L/Q),{\displaystyle G=\operatorname {Gal} (L/\mathbb {Q} ),}thenGal⁡(F/Q)=G/H.{\displaystyle \operatorname {Gal} (F/\mathbb {Q} )=G/H.}Since the latter has order 2, the subgroupHmust be the group of squares in(Z/ℓZ)×.{\displaystyle (\mathbb {Z} /\ell \mathbb {Z} )^{\times }.}A basic property of the Artin symbol says that for every prime-to-ℓ ideal (n) Whenn=p, this shows that(ℓ∗p)=1{\displaystyle \left({\frac {\ell ^{*}}{p}}\right)=1}if and only if,pmodulo ℓ is inH, i.e. if and only if,pis a square modulo ℓ. An alternative version of the reciprocity law, leading to theLanglands program, connectsArtin L-functionsassociated to abelian extensions of anumber fieldwith Hecke L-functions associated to characters of the idèle class group.[13] AHecke character(or Größencharakter) of a number fieldKis defined to be aquasicharacterof the idèle class group ofK.Robert Langlandsinterpreted Hecke characters asautomorphic formson thereductive algebraic groupGL(1) over thering of adelesofK.[14] LetE/K{\displaystyle E/K}be an abelian Galois extension withGalois groupG. Then for anycharacterσ:G→C×{\displaystyle \sigma :G\to \mathbb {C} ^{\times }}(i.e. one-dimensional complexrepresentationof the groupG), there exists a Hecke characterχ{\displaystyle \chi }ofKsuch that where the left hand side is the Artin L-function associated to the extension with character σ and the right hand side is the Hecke L-function associated with χ, Section 7.D of.[14] The formulation of the Artin reciprocity law as an equality ofL-functions allows formulation of a generalisation ton-dimensional representations, though a direct correspondence is still lacking.
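A small Python check of the statement above that (ℓ*/p) = 1 precisely when p is a square modulo ℓ, which is how the Artin symbol of the quadratic field recovers quadratic reciprocity; both sides are evaluated with Euler's criterion, and the helper name is illustrative.

```python
def legendre(a, p):
    # Euler's criterion; p an odd prime, returns 1, -1, or 0.
    a %= p
    if a == 0:
        return 0
    return 1 if pow(a, (p - 1) // 2, p) == 1 else -1

odd_primes = [3, 5, 7, 11, 13, 17, 19, 23, 29]
for l in odd_primes:
    l_star = l if l % 4 == 1 else -l    # l* = (-1)^((l-1)/2) * l, always 1 mod 4
    for p in odd_primes:
        if p == l:
            continue
        # (l*/p) = 1 exactly when p is a square mod l, i.e. (l*/p) = (p/l).
        assert legendre(l_star, p) == legendre(p, l)
print("checked (l*/p) = (p/l) for all distinct odd primes up to 29")
```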
https://en.wikipedia.org/wiki/Artin_symbol
Inmathematics, more particularly in the fields ofdynamical systemsandgeometric topology, anAnosov mapon amanifoldMis a certain type of mapping, fromMto itself, with rather clearly marked local directions of "expansion" and "contraction". Anosov systems are a special case ofAxiom Asystems. Anosov diffeomorphismswere introduced byDmitri Victorovich Anosov, who proved that their behaviour was in an appropriate sensegeneric(when they exist at all).[1] Three closely related definitions must be distinguished: A classical example of Anosov diffeomorphism is theArnold's cat map. Anosov proved that Anosov diffeomorphisms arestructurally stableand form an open subset of mappings (flows) with theC1topology. Not every manifold admits an Anosov diffeomorphism; for example, there are no such diffeomorphisms on thesphere. The simplest examples of compact manifolds admitting them are the tori: they admit the so-calledlinear Anosov diffeomorphisms, which are isomorphisms having no eigenvalue of modulus 1. It was proved that any other Anosov diffeomorphism on a torus istopologically conjugateto one of this kind. The problem of classifying manifolds that admit Anosov diffeomorphisms turned out to be very difficult, and still as of 2023[update]has no answer for dimension over 3. The only known examples areinfranilmanifolds, and it is conjectured that they are the only ones. A sufficient condition for transitivity is that all points are nonwandering:Ω(f)=M{\displaystyle \Omega (f)=M}. This in turn holds for codimension-one Anosov diffeomorphisms (i.e., those for which the contracting or the expanding subbundle is one-dimensional)[2]and for codimension one Anosov flows on manifolds of dimension greater than three[3]as well as Anosov flows whose Mather spectrum is contained in two sufficiently thin annuli.[4]It is not known whether Anosov diffeomorphisms are transitive (except on infranilmanifolds), but Anosov flows need not be topologically transitive.[5] Also, it is unknown if everyC1{\displaystyle C^{1}}volume-preserving Anosov diffeomorphism is ergodic. Anosov proved it under aC2{\displaystyle C^{2}}assumption. It is also true forC1+α{\displaystyle C^{1+\alpha }}volume-preserving Anosov diffeomorphisms. ForC2{\displaystyle C^{2}}transitive Anosov diffeomorphismf:M→M{\displaystyle f\colon M\to M}there exists a unique SRB measure (the acronym stands for Sinai, Ruelle and Bowen)μf{\displaystyle \mu _{f}}supported onM{\displaystyle M}such that its basinB(μf){\displaystyle B(\mu _{f})}is of full volume, where As an example, this section develops the case of the Anosov flow on thetangent bundleof aRiemann surfaceof negativecurvature. This flow can be understood in terms of the flow on the tangent bundle of thePoincaré half-plane modelof hyperbolic geometry. Riemann surfaces of negative curvature may be defined asFuchsian models, that is, as the quotients of theupper half-planeand aFuchsian group. For the following, letHbe the upper half-plane; let Γ be a Fuchsian group; letM=H/Γ be a Riemann surface of negative curvature as the quotient of "M" by the action of the group Γ, and letT1M{\displaystyle T^{1}M}be the tangent bundle of unit-length vectors on the manifoldM, and letT1H{\displaystyle T^{1}H}be the tangent bundle of unit-length vectors onH. Note that a bundle of unit-length vectors on a surface is theprincipal bundleof a complexline bundle. One starts by noting thatT1H{\displaystyle T^{1}H}is isomorphic to theLie groupPSL(2,R). 
This group is the group of orientation-preservingisometriesof the upper half-plane. TheLie algebraof PSL(2,R) is sl(2,R), and is represented by the matrices which have the algebra Theexponential maps define right-invariantflowson the manifold ofT1H=PSL⁡(2,R){\displaystyle T^{1}H=\operatorname {PSL} (2,\mathbb {R} )}, and likewise onT1M{\displaystyle T^{1}M}. DefiningP=T1H{\displaystyle P=T^{1}H}andQ=T1M{\displaystyle Q=T^{1}M}, these flows define vector fields onPandQ, whose vectors lie inTPandTQ. These are just the standard, ordinary Lie vector fields on the manifold of a Lie group, and the presentation above is a standard exposition of a Lie vector field. The connection to the Anosov flow comes from the realization thatgt{\displaystyle g_{t}}is thegeodesic flowonPandQ. Lie vector fields being (by definition) left invariant under the action of a group element, one has that these fields are left invariant under the specific elementsgt{\displaystyle g_{t}}of the geodesic flow. In other words, the spacesTPandTQare split into three one-dimensional spaces, orsubbundles, each of which are invariant under the geodesic flow. The final step is to notice that vector fields in one subbundle expand (and expand exponentially), those in another are unchanged, and those in a third shrink (and do so exponentially). More precisely, the tangent bundleTQmay be written as thedirect sum or, at a pointg⋅e=q∈Q{\displaystyle g\cdot e=q\in Q}, the direct sum corresponding to the Lie algebra generatorsY,JandX, respectively, carried, by the left action of group elementg, from the origineto the pointq. That is, one hasEe+=Y,Ee0=J{\displaystyle E_{e}^{+}=Y,E_{e}^{0}=J}andEe−=X{\displaystyle E_{e}^{-}=X}. These spaces are eachsubbundles, and are preserved (are invariant) under the action of thegeodesic flow; that is, under the action of group elementsg=gt{\displaystyle g=g_{t}}. To compare the lengths of vectors inTqQ{\displaystyle T_{q}Q}at different pointsq, one needs a metric. Anyinner productatTeP=sl(2,R){\displaystyle T_{e}P=sl(2,\mathbb {R} )}extends to a left-invariantRiemannian metriconP, and thus to a Riemannian metric onQ. The length of a vectorv∈Eq+{\displaystyle v\in E_{q}^{+}}expands exponentially as exp(t) under the action ofgt{\displaystyle g_{t}}. The length of a vectorv∈Eq−{\displaystyle v\in E_{q}^{-}}shrinks exponentially as exp(-t) under the action ofgt{\displaystyle g_{t}}. Vectors inEq0{\displaystyle E_{q}^{0}}are unchanged. This may be seen by examining how the group elements commute. The geodesic flow is invariant, but the other two shrink and expand: and where we recall that a tangent vector inEq+{\displaystyle E_{q}^{+}}is given by thederivative, with respect tot, of thecurveht{\displaystyle h_{t}}, the settingt=0{\displaystyle t=0}. When acting on the pointz=i{\displaystyle z=i}of the upper half-plane,gt{\displaystyle g_{t}}corresponds to ageodesicon the upper half plane, passing through the pointz=i{\displaystyle z=i}. The action is the standardMöbius transformationaction ofSL(2,R)on the upper half-plane, so that A general geodesic is given by witha,b,canddreal, withad−bc=1{\displaystyle ad-bc=1}. The curvesht∗{\displaystyle h_{t}^{*}}andht{\displaystyle h_{t}}are calledhorocycles. Horocycles correspond to the motion of the normal vectors of ahorosphereon the upper half-plane.
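A numerical sketch (using one common normalization of the sl(2, R) generators; which horocycle family spans the expanding rather than the contracting subbundle depends on the chosen sign conventions) of the commutation behaviour just described: conjugating the horocycle flows by the geodesic flow rescales their parameters by e^t and e^{−t}.

```python
import numpy as np

# One common normalization: J = diag(1/2, -1/2), X = [[0,1],[0,0]], Y = [[0,0],[1,0]].
def g(t):        # geodesic flow exp(t*J)
    return np.diag([np.exp(t / 2), np.exp(-t / 2)])

def h_upper(s):  # horocycle flow exp(s*X)
    return np.array([[1.0, s], [0.0, 1.0]])

def h_lower(s):  # opposite horocycle flow exp(s*Y)
    return np.array([[1.0, 0.0], [s, 1.0]])

t, s = 0.7, 0.3
# Conjugation by the geodesic flow rescales the horocycle parameters:
# one family is stretched by e^t, the other shrunk by e^{-t}.
assert np.allclose(g(t) @ h_upper(s) @ g(-t), h_upper(s * np.exp(t)))
assert np.allclose(g(t) @ h_lower(s) @ g(-t), h_lower(s * np.exp(-t)))
print("exponential expansion/contraction under the geodesic flow verified")
```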
https://en.wikipedia.org/wiki/Anosov_diffeomorphism
Inmathematics,Arnold's cat mapis achaoticmap from thetorusinto itself, named afterVladimir Arnold, who demonstrated its effects in the 1960s using an image of a cat, hence the name.[1]It is a simple and pedagogical example forhyperbolic toral automorphisms. Thinking of the torusT2{\displaystyle \mathbb {T} ^{2}}as thequotient spaceR2/Z2{\displaystyle \mathbb {R} ^{2}/\mathbb {Z} ^{2}}, Arnold's cat map is the transformationΓ:T2→T2{\displaystyle \Gamma :\mathbb {T} ^{2}\to \mathbb {T} ^{2}}given by the formula Equivalently, inmatrixnotation, this is That is, with a unit equal to the width of the square image, the image isshearedone unit up, then two units to the right, and all that lies outside that unit square is shifted back by the unit until it is within the square. The map receives its name from Arnold's 1967 manuscript with André Avez,Problèmes ergodiques de la mécanique classique,[1]in which the outline of a cat was used to illustrate the action of the map on the torus. In the original book it was captioned by a humorous footnote, TheSociété Protectrice des Animauxhas given permission to reproduce this image, as well as others. In Arnold's native Russian, the map is known as "okroshka(cold soup) from a cat" (Russian:окрошка из кошки), in reference to the map's mixing properties, and which forms a play on words. Arnold later wrote that he found the name "Arnold's Cat" by which the map is known in English and other languages to be "strange".[2] It is possible to define a discrete analogue of the cat map. One of this map's features is that image being apparently randomized by the transformation but returning to its original state after a number of steps. As can be seen in the adjacent picture, the original image of the cat isshearedand then wrapped around in the first iteration of the transformation. After some iterations, the resulting image appears ratherrandomor disordered, yet after further iterations the image appears to have further order—ghost-like images of the cat, multiple smaller copies arranged in a repeating structure and even upside-down copies of the original image—and ultimately returns to the original image. The discrete cat map describes thephase spaceflow corresponding to the discrete dynamics of a bead hopping from siteqt(0 ≤qt<N) to siteqt+1on a circular ring with circumferenceN, according to thesecond order equation: Defining the momentum variablept=qt−qt−1, the above second order dynamics can be re-written as a mapping of the square 0 ≤q,p<N(thephase spaceof the discrete dynamical system) onto itself: This Arnold cat mapping showsmixingbehavior typical for chaotic systems. However, since the transformation has adeterminantequal to unity, it isarea-preservingand thereforeinvertiblethe inverse transformation being: For real variablesqandp, it is common to setN= 1. In that case a mapping of the unit square with periodic boundary conditions onto itself results. When N is set to an integer value, the position and momentum variables can be restricted to integers and the mapping becomes a mapping of a toroidial square grid of points onto itself. Such an integer cat map is commonly used to demonstratemixingbehavior withPoincaré recurrenceutilising digital images. The number of iterations needed to restore the image can be shown never to exceed 3N.[5] For an image, the relationship between iterations could be expressed as follows:
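As an added illustration (assuming the commonly used matrix form of the map, (x, y) ↦ (2x + y, x + y) taken mod N), the following Python sketch measures the Poincaré recurrence time of the discrete cat map on an N × N grid by iterating the matrix modulo N until it returns to the identity; the observed periods stay within the 3N bound quoted above.

```python
def cat_map_period(N):
    """Smallest k >= 1 with M^k = identity mod N, for M = [[2, 1], [1, 1]]."""
    a, b, c, d = 2 % N, 1 % N, 1 % N, 1 % N   # entries of the current power of M
    k = 1
    while (a, b, c, d) != (1, 0, 0, 1):
        a, b, c, d = (2 * a + c) % N, (2 * b + d) % N, (a + c) % N, (b + d) % N
        k += 1
    return k

for N in (2, 3, 5, 10, 57, 100, 124):
    k = cat_map_period(N)
    assert k <= 3 * N                  # the recurrence bound quoted above
    print(f"N = {N:4d}: image restored after {k} iterations")
```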
https://en.wikipedia.org/wiki/Arnold%27s_cat_map
Intheoretical physics, agravitational anomalyis an example of agauge anomaly: it is an effect ofquantum mechanics— usually aone-loop diagram—that invalidates thegeneral covarianceof a theory ofgeneral relativitycombined with some other fields.[citation needed]The adjective "gravitational" is derived from the symmetry of a gravitational theory, namely from general covariance. A gravitational anomaly is generally synonymous withdiffeomorphism anomaly, sincegeneral covarianceis symmetry under coordinate reparametrization; i.e.diffeomorphism. General covariance is the basis ofgeneral relativity, the classical theory ofgravitation. Moreover, it is necessary for the consistency of any theory ofquantum gravity, since it is required in order to cancel unphysical degrees of freedom with a negative norm, namelygravitonspolarized along the time direction. Therefore, all gravitational anomalies must cancel out. The anomaly usually appears as aFeynman diagramwith achiralfermionrunning in the loop (a polygon) withnexternalgravitonsattached to the loop wheren=1+D/2{\displaystyle n=1+D/2}whereD{\displaystyle D}is thespacetimedimension. Consider a classical gravitational field represented by the vielbeineμa{\displaystyle e_{\;\mu }^{a}}and a quantized Fermi fieldψ{\displaystyle \psi }. The generating functional for this quantum field is Z[eμa]=e−W[eμa]=∫dψ¯dψe−∫d4xeLψ,{\displaystyle Z[e_{\;\mu }^{a}]=e^{-W[e_{\;\mu }^{a}]}=\int d{\bar {\psi }}d\psi \;\;e^{-\int d^{4}xe{\mathcal {L}}_{\psi }},} whereW{\displaystyle W}is the quantum action and thee{\displaystyle e}factor before the Lagrangian is the vielbein determinant, the variation of the quantum action renders δW[eμa]=∫d4xe⟨Taμ⟩δeμa{\displaystyle \delta W[e_{\;\mu }^{a}]=\int d^{4}x\;e\langle T_{\;a}^{\mu }\rangle \delta e_{\;\mu }^{a}} in which we denote a mean value with respect to the path integral by the bracket⟨⟩{\displaystyle \langle \;\;\;\rangle }. Let us label the Lorentz, Einstein and Weyl transformations respectively by their parametersα,ξ,σ{\displaystyle \alpha ,\,\xi ,\,\sigma }; they spawn the following anomalies: Lorentz anomaly δαW=∫d4xeαab⟨Tab⟩,{\displaystyle \delta _{\alpha }W=\int d^{4}xe\,\alpha _{ab}\langle T^{ab}\rangle ,} which readily indicates that the energy-momentum tensor has an anti-symmetric part. Einstein anomaly δξW=−∫d4xeξν(∇ν⟨Tνμ⟩−ωabν⟨Tab⟩),{\displaystyle \delta _{\xi }W=-\int d^{4}xe\,\xi ^{\nu }\left(\nabla _{\nu }\langle T_{\;\nu }^{\mu }\rangle -\omega _{ab\nu }\langle T^{ab}\rangle \right),} this is related to the non-conservation of the energy-momentum tensor, i.e.∇μ⟨Tμν⟩≠0{\displaystyle \nabla _{\mu }\langle T^{\mu \nu }\rangle \neq 0}. Weyl anomaly δσW=∫d4xeσ⟨Tμμ⟩,{\displaystyle \delta _{\sigma }W=\int d^{4}xe\,\sigma \langle T_{\;\mu }^{\mu }\rangle ,} which indicates that the trace is non-zero. Thisquantum mechanics-related article is astub. You can help Wikipedia byexpanding it.
https://en.wikipedia.org/wiki/Diffeo_anomaly
Inquantum physicsananomalyorquantum anomalyis the failure of asymmetryof a theory's classicalactionto be a symmetry of anyregularizationof the full quantum theory.[1][2]Inclassical physics, aclassical anomalyis the failure of a symmetry to be restored in the limit in which the symmetry-breaking parameter goes to zero. Perhaps the first known anomaly was the dissipative anomaly[3]inturbulence: time-reversibility remains broken (and energy dissipation rate finite) at the limit of vanishingviscosity. In quantum theory, the first anomaly discovered was theAdler–Bell–Jackiw anomaly, wherein theaxial vector currentis conserved as a classical symmetry ofelectrodynamics, but is broken by the quantized theory. The relationship of this anomaly to theAtiyah–Singer index theoremwas one of the celebrated achievements of the theory. Technically, an anomalous symmetry in a quantum theory is a symmetry of theaction, but not of themeasure, and so not of thepartition functionas a whole. A global anomaly is the quantum violation of a global symmetry current conservation. A global anomaly can also mean that a non-perturbative global anomaly cannot be captured by one loop or any loop perturbative Feynman diagram calculations—examples include theWitten anomaly and Wang–Wen–Witten anomaly. The most prevalent global anomaly in physics is associated with the violation ofscale invarianceby quantum corrections, quantified inrenormalization. Since regulators generally introduce a distance scale, the classically scale-invariant theories are subject torenormalization groupflow, i.e., changing behavior with energy scale. For example, the large strength of thestrong nuclear forceresults from a theory that is weakly coupled at short distances flowing to a strongly coupled theory at long distances, due to this scale anomaly. Anomalies inabelianglobal symmetries pose no problems in aquantum field theory, and are often encountered (see the example of thechiral anomaly). In particular the corresponding anomalous symmetries can be fixed by fixing theboundary conditionsof thepath integral. Global anomalies insymmetriesthat approach the identity sufficiently quickly atinfinitydo, however, pose problems. In known examples such symmetries correspond to disconnected components of gauge symmetries. Such symmetries and possible anomalies occur, for example, in theories with chiral fermions or self-dualdifferential formscoupled togravityin 4k+ 2 dimensions, and also in theWitten anomalyin an ordinary 4-dimensional SU(2) gauge theory. As these symmetries vanish at infinity, they cannot be constrained by boundary conditions and so must be summed over in the path integral. The sum of the gauge orbit of a state is a sum of phases which form a subgroup of U(1). As there is an anomaly, not all of these phases are the same, therefore it is not the identity subgroup. The sum of the phases in every other subgroup of U(1) is equal to zero, and so all path integrals are equal to zero when there is such an anomaly and a theory does not exist. An exception may occur when the space of configurations is itself disconnected, in which case one may have the freedom to choose to integrate over any subset of the components. If the disconnected gauge symmetries map the system between disconnected configurations, then there is in general a consistent truncation of a theory in which one integrates only over those connected components that are not related by large gauge transformations. 
In this case the large gauge transformations do not act on the system and do not cause the path integral to vanish. In SU(2)gauge theoryin 4 dimensionalMinkowski space, a gauge transformation corresponds to a choice of an element of thespecial unitary groupSU(2) at each point in spacetime. The group of such gauge transformations is connected. However, if we are only interested in the subgroup of gauge transformations that vanish at infinity, we may consider the 3-sphere at infinity to be a single point, as the gauge transformations vanish there anyway. If the 3-sphere at infinity is identified with a point, our Minkowski space is identified with the 4-sphere. Thus we see that the group of gauge transformations vanishing at infinity in Minkowski 4-space isisomorphicto the group of all gauge transformations on the 4-sphere. This is the group which consists of a continuous choice of a gauge transformation in SU(2) for each point on the 4-sphere. In other words, the gauge symmetries are in one-to-one correspondence with maps from the 4-sphere to the 3-sphere, which is the group manifold of SU(2). The space of such maps isnotconnected, instead the connected components are classified by the fourthhomotopy groupof the 3-sphere which is thecyclic groupof order two. In particular, there are two connected components. One contains the identity and is called theidentity component, the other is called thedisconnected component. When a theory contains an odd number of flavors of chiral fermions, the actions of gauge symmetries in the identity component and the disconnected component of the gauge group on a physical state differ by a sign. Thus when one sums over all physical configurations in thepath integral, one finds that contributions come in pairs with opposite signs. As a result, all path integrals vanish and a theory does not exist. The above description of a global anomaly is for the SU(2) gauge theory coupled to an odd number of (iso-)spin-1/2 Weyl fermion in 4 spacetime dimensions. This is known as the Witten SU(2) anomaly.[4]In 2018, it is found by Wang, Wen and Witten that the SU(2) gauge theory coupled to an odd number of (iso-)spin-3/2 Weyl fermion in 4 spacetime dimensions has a further subtler non-perturbative global anomaly detectable on certain non-spin manifolds withoutspin structure.[5]This new anomaly is called the new SU(2) anomaly. Both types of anomalies[4][5]have analogs of (1) dynamical gauge anomalies for dynamical gauge theories and (2) the 't Hooft anomalies of global symmetries. 
In addition, both types of anomalies are mod 2 classes (in terms of classification, they are both finite groupsZ2of order 2 classes), and have analogs in 4 and 5 spacetime dimensions.[5]More generally, for any natural integer N, it can be shown that an odd number of fermion multiplets in representations of (iso)-spin 2N+1/2 can have the SU(2) anomaly; an odd number of fermion multiplets in representations of (iso)-spin 4N+3/2 can have the new SU(2) anomaly.[5]For fermions in the half-integer spin representation, it is shown that there are only these two types of SU(2) anomalies and the linear combinations of these two anomalies; these classify all global SU(2) anomalies.[5]This new SU(2) anomaly also plays an important rule for confirming the consistency ofSO(10)grand unified theory, with a Spin(10) gauge group and chiral fermions in the 16-dimensional spinor representations, defined on non-spin manifolds.[5][6] The concept of global symmetries can be generalized to higher global symmetries,[7]such that the charged object for the ordinary 0-form symmetry is a particle, while the charged object for the n-form symmetry is an n-dimensional extended operator. It is found that the 4 dimensional pure Yang–Mills theory with only SU(2) gauge fields with a topological theta termθ=π,{\displaystyle \theta =\pi ,}can have a mixed higher 't Hooft anomaly between the 0-form time-reversal symmetry and 1-formZ2center symmetry.[8]The 't Hooft anomaly of 4 dimensional pure Yang–Mills theory can be precisely written as a 5 dimensional invertible topological field theory or mathematically a 5 dimensional bordism invariant, generalizing the anomaly inflow picture to thisZ2class of global anomaly involving higher symmetries.[9]In other words, we can regard the 4 dimensional pure Yang–Mills theory with a topological theta termθ=π{\displaystyle \theta =\pi }live as a boundary condition of a certainZ2class invertible topological field theory, in order to match their higher anomalies on the 4 dimensional boundary.[9] Anomalies in gauge symmetries lead to an inconsistency, since a gauge symmetry is required in order to cancel unphysical degrees of freedom with a negative norm (such as aphotonpolarized in the time direction). An attempt to cancel them—i.e., to build theoriesconsistentwith the gauge symmetries—often leads to extra constraints on the theories (such is the case of thegauge anomalyin theStandard Modelof particle physics). Anomalies ingauge theorieshave important connections to thetopologyandgeometryof thegauge group. Anomalies in gauge symmetries can be calculated exactly at the one-loop level. At tree level (zero loops), one reproduces the classical theory.Feynman diagramswith more than one loop always contain internalbosonpropagators. As bosons may always be given a mass without breaking gauge invariance, aPauli–Villars regularizationof such diagrams is possible while preserving the symmetry. Whenever the regularization of a diagram is consistent with a given symmetry, that diagram does not generate an anomaly with respect to the symmetry. Vector gauge anomalies are alwayschiral anomalies. Another type of gauge anomaly is thegravitational anomaly. Quantum anomalies were discovered via the process ofrenormalization, when somedivergent integralscannot beregularizedin such a way that all the symmetries are preserved simultaneously. This is related to the high energy physics. 
However, due toGerard 't Hooft'sanomaly matching condition, anychiral anomalycan be described either by the UV degrees of freedom (those relevant at high energies) or by the IR degrees of freedom (those relevant at low energies). Thus one cannot cancel an anomaly by aUV completionof a theory—an anomalous symmetry is simply not a symmetry of a theory, even though classically it appears to be. Since cancelling anomalies is necessary for the consistency of gauge theories, such cancellations are of central importance in constraining the fermion content of thestandard model, which is a chiral gauge theory. For example, the vanishing of themixed anomalyinvolving two SU(2) generators and one U(1) hypercharge constrains all charges in a fermion generation to add up to zero,[10][11]and thereby dictates that the sum of the proton plus the sum of the electron vanish: thecharges of quarks and leptons must be commensurate. Specifically, for two external gauge fieldsWa,Wband one hyperchargeBat the vertices of the triangle diagram, cancellation of the triangle requires so, for each generation, the charges of the leptons and quarks are balanced,−1+3×2−13=0{\displaystyle -1+3\times {\frac {2-1}{3}}=0}, whenceQp+Qe= 0[citation needed]. The anomaly cancelation in SM was also used to predict a quark from 3rd generation, thetop quark.[12] Further such mechanisms include: In the modern description of anomalies classified bycobordismtheory,[13]theFeynman-Dyson graphsonly captures the perturbative local anomalies classified by integerZclasses also known as the free part. There exists nonperturbative global anomalies classified bycyclic groupsZ/nZclasses also known as the torsion part. It is widely known and checked in the late 20th century that thestandard modeland chiral gauge theories are free from perturbative local anomalies (captured byFeynman diagrams). However, it is not entirely clear whether there are any nonperturbative global anomalies for thestandard modeland chiral gauge theories. Recent developments[14][15][16]based on thecobordism theoryexamine this problem, and several additional nontrivial global anomalies found can further constrain these gauge theories. There is also a formulation of both perturbative local and nonperturbative global description of anomaly inflow in terms ofAtiyah,Patodi, andSinger[17][18]eta invariantin one higher dimension. Thiseta invariantis a cobordism invariant whenever the perturbative local anomalies vanish.[19]
https://en.wikipedia.org/wiki/Anomaly_(physics)
Quantum mechanicsis the fundamental physicaltheorythat describes the behavior of matter and of light; its unusual characteristics typically occur at and below the scale ofatoms.[2]: 1.1It is the foundation of allquantum physics, which includesquantum chemistry,quantum field theory,quantum technology, andquantum information science. Quantum mechanics can describe many systems thatclassical physicscannot. Classical physics can describe many aspects of nature at an ordinary (macroscopicand(optical) microscopic) scale, but is not sufficient for describing them at very smallsubmicroscopic(atomic andsubatomic) scales. Classical mechanics can be derived from quantum mechanics as an approximation that is valid at ordinary scales.[3] Quantum systems haveboundstates that arequantizedtodiscrete valuesofenergy,momentum,angular momentum, and other quantities, in contrast to classical systems where these quantities can be measured continuously. Measurements of quantum systems show characteristics of bothparticlesandwaves(wave–particle duality), and there are limits to how accurately the value of a physical quantity can be predicted prior to its measurement, given a complete set of initial conditions (theuncertainty principle). Quantum mechanicsarose graduallyfrom theories to explain observations that could not be reconciled with classical physics, such asMax Planck's solution in 1900 to theblack-body radiationproblem, and the correspondence between energy and frequency inAlbert Einstein's1905 paper, which explained thephotoelectric effect. These early attempts to understand microscopic phenomena, now known as the "old quantum theory", led to the full development of quantum mechanics in the mid-1920s byNiels Bohr,Erwin Schrödinger,Werner Heisenberg,Max Born,Paul Diracand others. The modern theory is formulated in variousspecially developed mathematical formalisms. In one of them, a mathematical entity called thewave functionprovides information, in the form ofprobability amplitudes, about what measurements of a particle's energy, momentum, and other physical properties may yield. Quantum mechanics allows the calculation of properties and behaviour ofphysical systems. It is typically applied to microscopic systems:molecules,atomsandsubatomic particles. It has been demonstrated to hold for complex molecules with thousands of atoms,[4]but its application to human beings raises philosophical problems, such asWigner's friend, and its application to the universe as a whole remains speculative.[5]Predictions of quantum mechanics have been verified experimentally to an extremely high degree ofaccuracy. For example, the refinement of quantum mechanics for the interaction of light and matter, known asquantum electrodynamics(QED), has beenshown to agree with experimentto within 1 part in 1012when predicting the magnetic properties of an electron.[6] A fundamental feature of the theory is that it usually cannot predict with certainty what will happen, but only give probabilities. Mathematically, a probability is found by taking the square of the absolute value of acomplex number, known as a probability amplitude. This is known as theBorn rule, named after physicistMax Born. For example, a quantum particle like anelectroncan be described by a wave function, which associates to each point in space a probability amplitude. Applying the Born rule to these amplitudes gives aprobability density functionfor the position that the electron will be found to have when an experiment is performed to measure it. 
This is the best the theory can do; it cannot say for certain where the electron will be found. TheSchrödinger equationrelates the collection of probability amplitudes that pertain to one moment of time to the collection of probability amplitudes that pertain to another.[7]: 67–87 One consequence of the mathematical rules of quantum mechanics is a tradeoff in predictability between measurable quantities. The most famous form of thisuncertainty principlesays that no matter how a quantum particle is prepared or how carefully experiments upon it are arranged, it is impossible to have a precise prediction for a measurement of its position and also at the same time for a measurement of itsmomentum.[7]: 427–435 Another consequence of the mathematical rules of quantum mechanics is the phenomenon ofquantum interference, which is often illustrated with thedouble-slit experiment. In the basic version of this experiment, acoherent light source, such as alaserbeam, illuminates a plate pierced by two parallel slits, and the light passing through the slits is observed on a screen behind the plate.[8]: 102–111[2]: 1.1–1.8The wave nature of light causes the light waves passing through the two slits tointerfere, producing bright and dark bands on the screen – a result that would not be expected if light consisted of classical particles.[8]However, the light is always found to be absorbed at the screen at discrete points, as individual particles rather than waves; the interference pattern appears via the varying density of these particle hits on the screen. Furthermore, versions of the experiment that include detectors at the slits find that each detectedphotonpasses through one slit (as would a classical particle), and not through both slits (as would a wave).[8]: 109[9][10]However,such experimentsdemonstrate that particles do not form the interference pattern if one detects which slit they pass through. This behavior is known aswave–particle duality. In addition to light,electrons,atoms, andmoleculesare all found to exhibit the same dual behavior when fired towards a double slit.[2] Another non-classical phenomenon predicted by quantum mechanics isquantum tunnelling: a particle that goes up against apotential barriercan cross it, even if its kinetic energy is smaller than the maximum of the potential.[11]In classical mechanics this particle would be trapped. Quantum tunnelling has several important consequences, enablingradioactive decay,nuclear fusionin stars, and applications such asscanning tunnelling microscopy,tunnel diodeandtunnel field-effect transistor.[12][13] When quantum systems interact, the result can be the creation ofquantum entanglement: their properties become so intertwined that a description of the whole solely in terms of the individual parts is no longer possible. Erwin Schrödinger called entanglement "...thecharacteristic trait of quantum mechanics, the one that enforces its entire departure from classical lines of thought".[14]Quantum entanglement enablesquantum computingand is part of quantum communication protocols, such asquantum key distributionandsuperdense coding.[15]Contrary to popular misconception, entanglement does not allow sending signalsfaster than light, as demonstrated by theno-communication theorem.[15] Another possibility opened by entanglement is testing for "hidden variables", hypothetical properties more fundamental than the quantities addressed in quantum theory itself, knowledge of which would allow more exact predictions than quantum theory provides. 
A collection of results, most significantlyBell's theorem, have demonstrated that broad classes of such hidden-variable theories are in fact incompatible with quantum physics. According to Bell's theorem, if nature actually operates in accord with any theory oflocalhidden variables, then the results of aBell testwill be constrained in a particular, quantifiable way. Many Bell tests have been performed and they have shown results incompatible with the constraints imposed by local hidden variables.[16][17] It is not possible to present these concepts in more than a superficial way without introducing the mathematics involved; understanding quantum mechanics requires not only manipulating complex numbers, but alsolinear algebra,differential equations,group theory, and other more advanced subjects.[18][19]Accordingly, this article will present a mathematical formulation of quantum mechanics and survey its application to some useful and oft-studied examples. In the mathematically rigorous formulation of quantum mechanics, the state of a quantum mechanical system is a vectorψ{\displaystyle \psi }belonging to a (separable) complexHilbert spaceH{\displaystyle {\mathcal {H}}}. This vector is postulated to be normalized under the Hilbert space inner product, that is, it obeys⟨ψ,ψ⟩=1{\displaystyle \langle \psi ,\psi \rangle =1}, and it is well-defined up to a complex number of modulus 1 (the global phase), that is,ψ{\displaystyle \psi }andeiαψ{\displaystyle e^{i\alpha }\psi }represent the same physical system. In other words, the possible states are points in theprojective spaceof a Hilbert space, usually called thecomplex projective space. The exact nature of this Hilbert space is dependent on the system – for example, for describing position and momentum the Hilbert space is the space of complexsquare-integrablefunctionsL2(C){\displaystyle L^{2}(\mathbb {C} )}, while the Hilbert space for thespinof a single proton is simply the space of two-dimensional complex vectorsC2{\displaystyle \mathbb {C} ^{2}}with the usual inner product. Physical quantities of interest – position, momentum, energy, spin – are represented by observables, which areHermitian(more precisely,self-adjoint) linearoperatorsacting on the Hilbert space. A quantum state can be aneigenvectorof an observable, in which case it is called aneigenstate, and the associatedeigenvaluecorresponds to the value of the observable in that eigenstate. More generally, a quantum state will be a linear combination of the eigenstates, known as aquantum superposition. When an observable is measured, the result will be one of its eigenvalues with probability given by theBorn rule: in the simplest case the eigenvalueλ{\displaystyle \lambda }is non-degenerate and the probability is given by|⟨λ→,ψ⟩|2{\displaystyle |\langle {\vec {\lambda }},\psi \rangle |^{2}}, whereλ→{\displaystyle {\vec {\lambda }}}is its associated unit-length eigenvector. More generally, the eigenvalue is degenerate and the probability is given by⟨ψ,Pλψ⟩{\displaystyle \langle \psi ,P_{\lambda }\psi \rangle }, wherePλ{\displaystyle P_{\lambda }}is the projector onto its associated eigenspace. In the continuous case, these formulas give instead theprobability density. After the measurement, if resultλ{\displaystyle \lambda }was obtained, the quantum state is postulated tocollapsetoλ→{\displaystyle {\vec {\lambda }}}, in the non-degenerate case, or toPλψ/⟨ψ,Pλψ⟩{\textstyle P_{\lambda }\psi {\big /}\!{\sqrt {\langle \psi ,P_{\lambda }\psi \rangle }}}, in the general case. 
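The following numpy sketch applies the Born rule as just stated to a single qubit (the state and observable are arbitrary choices for illustration): outcome probabilities are squared magnitudes of projections of a normalized state onto the eigenvectors of a Hermitian observable, and the post-measurement state is the renormalized projection.

```python
import numpy as np

# Observable: the Pauli-X matrix, with eigenvalues -1 and +1.
X = np.array([[0.0, 1.0], [1.0, 0.0]])
eigvals, eigvecs = np.linalg.eigh(X)          # columns of eigvecs are eigenvectors

# A normalized qubit state |psi> = (sqrt(3)/2)|0> + (1/2)|1>.
psi = np.array([np.sqrt(3) / 2, 0.5])

# Born rule: P(lambda) = |<lambda|psi>|^2 for each non-degenerate eigenvalue.
probs = np.abs(eigvecs.conj().T @ psi) ** 2
for lam, pr in zip(eigvals, probs):
    print(f"outcome {lam:+.0f}: probability {pr:.4f}")
assert np.isclose(probs.sum(), 1.0)

# "Collapse": the post-measurement state for the first outcome is the
# renormalized projection of psi onto the corresponding eigenvector.
post = eigvecs[:, 0] * (eigvecs[:, 0].conj() @ psi)
post /= np.linalg.norm(post)
print("post-measurement state:", post)
```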
Theprobabilisticnature of quantum mechanics thus stems from the act of measurement. This is one of the most difficult aspects of quantum systems to understand. It was the central topic in the famousBohr–Einstein debates, in which the two scientists attempted to clarify these fundamental principles by way ofthought experiments. In the decades after the formulation of quantum mechanics, the question of what constitutes a "measurement" has been extensively studied. Newerinterpretations of quantum mechanicshave been formulated that do away with the concept of "wave function collapse" (see, for example, themany-worlds interpretation). The basic idea is that when a quantum system interacts with a measuring apparatus, their respective wave functions becomeentangledso that the original quantum system ceases to exist as an independent entity (seeMeasurement in quantum mechanics[20]). The time evolution of a quantum state is described by the Schrödinger equation:iℏ∂∂tψ(t)=Hψ(t).{\displaystyle i\hbar {\frac {\partial }{\partial t}}\psi (t)=H\psi (t).}HereH{\displaystyle H}denotes theHamiltonian, the observable corresponding to thetotal energyof the system, andℏ{\displaystyle \hbar }is the reducedPlanck constant. The constantiℏ{\displaystyle i\hbar }is introduced so that the Hamiltonian is reduced to theclassical Hamiltonianin cases where the quantum system can be approximated by a classical system; the ability to make such an approximation in certain limits is called thecorrespondence principle. The solution of this differential equation is given byψ(t)=e−iHt/ℏψ(0).{\displaystyle \psi (t)=e^{-iHt/\hbar }\psi (0).}The operatorU(t)=e−iHt/ℏ{\displaystyle U(t)=e^{-iHt/\hbar }}is known as the time-evolution operator, and has the crucial property that it isunitary. This time evolution isdeterministicin the sense that – given an initial quantum stateψ(0){\displaystyle \psi (0)}– it makes a definite prediction of what the quantum stateψ(t){\displaystyle \psi (t)}will be at any later time.[21] Some wave functions produce probability distributions that are independent of time, such aseigenstatesof the Hamiltonian.[7]: 133–137Many systems that are treated dynamically in classical mechanics are described by such "static" wave functions. For example, a single electron in an unexcited atom is pictured classically as a particle moving in a circular trajectory around theatomic nucleus, whereas in quantum mechanics, it is described by a static wave function surrounding the nucleus. For example, the electron wave function for an unexcited hydrogen atom is a spherically symmetric function known as ansorbital(Fig. 1). Analytic solutions of the Schrödinger equation are known forvery few relatively simple model Hamiltoniansincluding thequantum harmonic oscillator, theparticle in a box, thedihydrogen cation, and thehydrogen atom. Even theheliumatom – which contains just two electrons – has defied all attempts at a fully analytic treatment, admitting no solution inclosed form.[22][23][24] However, there are techniques for finding approximate solutions. One method, calledperturbation theory, uses the analytic result for a simple quantum mechanical model to create a result for a related but more complicated model by (for example) the addition of a weakpotential energy.[7]: 793Another approximation method applies to systems for which quantum mechanics produces only small deviations from classical behavior. 
Analytic solutions of the Schrödinger equation are known for very few relatively simple model Hamiltonians, including the quantum harmonic oscillator, the particle in a box, the dihydrogen cation, and the hydrogen atom. Even the helium atom – which contains just two electrons – has defied all attempts at a fully analytic treatment, admitting no solution in closed form.[22][23][24] However, there are techniques for finding approximate solutions. One method, called perturbation theory, uses the analytic result for a simple quantum mechanical model to create a result for a related but more complicated model by (for example) the addition of a weak potential energy.[7]: 793 Another approximation method applies to systems for which quantum mechanics produces only small deviations from classical behavior; these deviations can then be computed from the classical motion.[7]: 849

One consequence of the basic quantum formalism is the uncertainty principle. In its most familiar form, this states that no preparation of a quantum particle can imply simultaneously precise predictions both for a measurement of its position and for a measurement of its momentum.[25][26] Both position and momentum are observables, meaning that they are represented by Hermitian operators. The position operator $\hat{X}$ and momentum operator $\hat{P}$ do not commute, but rather satisfy the canonical commutation relation
$$[\hat{X},\hat{P}] = i\hbar.$$
Given a quantum state, the Born rule lets us compute expectation values for both $X$ and $P$, and moreover for powers of them. Defining the uncertainty for an observable by a standard deviation, we have
$$\sigma_X = \sqrt{\langle X^2\rangle - \langle X\rangle^2}, \qquad \sigma_P = \sqrt{\langle P^2\rangle - \langle P\rangle^2}.$$
The uncertainty principle states that
$$\sigma_X \sigma_P \geq \frac{\hbar}{2}.$$
Either standard deviation can in principle be made arbitrarily small, but not both simultaneously.[27] This inequality generalizes to arbitrary pairs of self-adjoint operators $A$ and $B$. The commutator of these two operators is
$$[A,B] = AB - BA,$$
and it provides the lower bound on the product of standard deviations:
$$\sigma_A \sigma_B \geq \tfrac{1}{2}\left|\bigl\langle [A,B]\bigr\rangle\right|.$$

Another consequence of the canonical commutation relation is that the position and momentum operators are Fourier transforms of each other, so that a description of an object according to its momentum is the Fourier transform of its description according to its position. The fact that the dependence on momentum is the Fourier transform of the dependence on position means that the momentum operator is equivalent (up to an $i/\hbar$ factor) to taking the derivative with respect to position, since in Fourier analysis differentiation corresponds to multiplication in the dual space. This is why in quantum equations in position space the momentum $p_i$ is replaced by $-i\hbar\frac{\partial}{\partial x}$, and in particular in the non-relativistic Schrödinger equation in position space the momentum-squared term is replaced with a Laplacian times $-\hbar^2$.[25]
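The generalized (Robertson) inequality above is easy to verify numerically. Because the position and momentum operators are infinite-dimensional, the sketch below uses two Pauli matrices as finite-dimensional stand-ins for a pair of non-commuting observables, together with a randomly chosen state; these choices are illustrative assumptions only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two non-commuting Hermitian operators (Pauli X and Y) and a random normalized state.
A = np.array([[0, 1], [1, 0]], dtype=complex)
B = np.array([[0, -1j], [1j, 0]], dtype=complex)
psi = rng.normal(size=2) + 1j * rng.normal(size=2)
psi /= np.linalg.norm(psi)

def expectation(op, state):
    return np.vdot(state, op @ state).real

def sigma(op, state):
    return np.sqrt(expectation(op @ op, state) - expectation(op, state) ** 2)

commutator = A @ B - B @ A
lhs = sigma(A, psi) * sigma(B, psi)
rhs = 0.5 * abs(np.vdot(psi, commutator @ psi))
print(lhs, rhs)
assert lhs >= rhs - 1e-12        # Robertson uncertainty relation holds
```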
When two different quantum systems are considered together, the Hilbert space of the combined system is the tensor product of the Hilbert spaces of the two components. For example, let $A$ and $B$ be two quantum systems, with Hilbert spaces $\mathcal{H}_A$ and $\mathcal{H}_B$, respectively. The Hilbert space of the composite system is then
$$\mathcal{H}_{AB} = \mathcal{H}_A \otimes \mathcal{H}_B.$$
If the state for the first system is the vector $\psi_A$ and the state for the second system is $\psi_B$, then the state of the composite system is $\psi_A \otimes \psi_B$. Not all states in the joint Hilbert space $\mathcal{H}_{AB}$ can be written in this form, however, because the superposition principle implies that linear combinations of these "separable" or "product states" are also valid. For example, if $\psi_A$ and $\phi_A$ are both possible states for system $A$, and likewise $\psi_B$ and $\phi_B$ are both possible states for system $B$, then
$$\tfrac{1}{\sqrt{2}}\left(\psi_A \otimes \psi_B + \phi_A \otimes \phi_B\right)$$
is a valid joint state that is not separable. States that are not separable are called entangled.[28][29]

If the state for a composite system is entangled, it is impossible to describe either component system $A$ or system $B$ by a state vector. One can instead define reduced density matrices that describe the statistics that can be obtained by making measurements on either component system alone. This necessarily causes a loss of information, though: knowing the reduced density matrices of the individual systems is not enough to reconstruct the state of the composite system.[28][29] Just as density matrices specify the state of a subsystem of a larger system, analogously, positive operator-valued measures (POVMs) describe the effect on a subsystem of a measurement performed on a larger system. POVMs are extensively used in quantum information theory.[28][30]

As described above, entanglement is a key feature of models of measurement processes in which an apparatus becomes entangled with the system being measured. Systems interacting with the environment in which they reside generally become entangled with that environment, a phenomenon known as quantum decoherence. This can explain why, in practice, quantum effects are difficult to observe in systems larger than microscopic.[31]
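A short numerical sketch of the composite-system construction: it builds a product state and an entangled superposition with NumPy's Kronecker product and computes a reduced density matrix by a partial trace, showing that the entangled state leaves subsystem $A$ in a maximally mixed state. The helper function below is written for two qubits only and is an illustrative sketch, not a general-purpose routine.

```python
import numpy as np

up, down = np.array([1.0, 0.0]), np.array([0.0, 1.0])

# A separable (product) state and an entangled superposition of product states.
product = np.kron(up, down)
entangled = (np.kron(up, up) + np.kron(down, down)) / np.sqrt(2)

def reduced_density_matrix_A(state):
    """Partial trace over subsystem B for a two-qubit pure state."""
    rho = np.outer(state, state.conj())      # full density matrix on H_A (x) H_B
    rho = rho.reshape(2, 2, 2, 2)            # indices (a, b, a', b')
    return np.einsum('abcb->ac', rho)        # sum over b = b'

print(reduced_density_matrix_A(product))     # pure state: projector onto |up>
print(reduced_density_matrix_A(entangled))   # maximally mixed state: identity / 2
```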
There are many mathematically equivalent formulations of quantum mechanics. One of the oldest and most common is the "transformation theory" proposed by Paul Dirac, which unifies and generalizes the two earliest formulations of quantum mechanics – matrix mechanics (invented by Werner Heisenberg) and wave mechanics (invented by Erwin Schrödinger).[32] An alternative formulation of quantum mechanics is Feynman's path integral formulation, in which a quantum-mechanical amplitude is considered as a sum over all possible classical and non-classical paths between the initial and final states. This is the quantum-mechanical counterpart of the action principle in classical mechanics.[33]

The Hamiltonian $H$ is known as the generator of time evolution, since it defines a unitary time-evolution operator $U(t) = e^{-iHt/\hbar}$ for each value of $t$. From this relation between $U(t)$ and $H$, it follows that any observable $A$ that commutes with $H$ will be conserved: its expectation value will not change over time.[7]: 471 This statement generalizes: mathematically, any Hermitian operator $A$ can generate a family of unitary operators parameterized by a variable $t$. Under the evolution generated by $A$, any observable $B$ that commutes with $A$ will be conserved. Moreover, if $B$ is conserved by evolution under $A$, then $A$ is conserved under the evolution generated by $B$. This implies a quantum version of the result proven by Emmy Noether in classical (Lagrangian) mechanics: for every differentiable symmetry of a Hamiltonian, there exists a corresponding conservation law.

The simplest example of a quantum system with a position degree of freedom is a free particle in a single spatial dimension. A free particle is one which is not subject to external influences, so that its Hamiltonian consists only of its kinetic energy:
$$H = \frac{1}{2m}P^2 = -\frac{\hbar^2}{2m}\frac{d^2}{dx^2}.$$
The general solution of the Schrödinger equation is given by
$$\psi(x,t) = \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty}\hat{\psi}(k,0)\,e^{i\left(kx - \frac{\hbar k^2}{2m}t\right)}\,\mathrm{d}k,$$
which is a superposition of all possible plane waves $e^{i\left(kx - \frac{\hbar k^2}{2m}t\right)}$, which are eigenstates of the momentum operator with momentum $p = \hbar k$. The coefficients of the superposition are $\hat{\psi}(k,0)$, which is the Fourier transform of the initial quantum state $\psi(x,0)$.

It is not possible for the solution to be a single momentum eigenstate, or a single position eigenstate, as these are not normalizable quantum states.[note 1] Instead, we can consider a Gaussian wave packet:
$$\psi(x,0) = \frac{1}{\sqrt[4]{\pi a}}\,e^{-\frac{x^2}{2a}},$$
which has Fourier transform, and therefore momentum distribution,
$$\hat{\psi}(k,0) = \sqrt[4]{\frac{a}{\pi}}\,e^{-\frac{ak^2}{2}}.$$
We see that as we make $a$ smaller, the spread in position gets smaller, but the spread in momentum gets larger. Conversely, by making $a$ larger we make the spread in momentum smaller, but the spread in position gets larger. This illustrates the uncertainty principle. As we let the Gaussian wave packet evolve in time, we see that its center moves through space at a constant velocity (like a classical particle with no forces acting on it). However, the wave packet will also spread out as time progresses, which means that the position becomes more and more uncertain. The uncertainty in momentum, however, stays constant.[34]
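This spreading can be made quantitative in a simple numerical experiment. The sketch below evolves the Gaussian packet by multiplying its Fourier transform by the free-particle phase $e^{-i\hbar k^2 t/2m}$; the units ($\hbar = m = 1$), the grid, and the width parameter are all arbitrary illustrative choices.

```python
import numpy as np

hbar = m = 1.0
a = 1.0                                    # width parameter of the initial Gaussian
x = np.linspace(-40.0, 40.0, 4096)
dx = x[1] - x[0]
k = 2 * np.pi * np.fft.fftfreq(x.size, d=dx)

psi0 = (np.pi * a) ** -0.25 * np.exp(-x**2 / (2 * a))     # psi(x, 0)

def evolve(psi, t):
    """Free-particle evolution: each plane-wave component picks up exp(-i*hbar*k^2*t/2m)."""
    return np.fft.ifft(np.fft.fft(psi) * np.exp(-1j * hbar * k**2 * t / (2 * m)))

def position_spread(psi):
    prob = np.abs(psi) ** 2
    prob /= np.sum(prob) * dx
    mean = np.sum(x * prob) * dx
    return np.sqrt(np.sum((x - mean) ** 2 * prob) * dx)

for t in (0.0, 2.0, 5.0):
    print(f"t = {t:3.1f}: sigma_x = {position_spread(evolve(psi0, t)):.3f}")
# sigma_x grows with time, while the momentum distribution |psi_hat(k)|^2 stays fixed.
```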
The particle in a one-dimensional potential energy box is the most mathematically simple example in which restraints lead to the quantization of energy levels. The box is defined as having zero potential energy everywhere inside a certain region, and therefore infinite potential energy everywhere outside that region.[25]: 77–78 For the one-dimensional case in the $x$ direction, the time-independent Schrödinger equation may be written
$$-\frac{\hbar^2}{2m}\frac{d^2\psi}{dx^2} = E\psi.$$
With the differential operator defined by
$$\hat{p}_x = -i\hbar\frac{d}{dx},$$
the previous equation is evocative of the classic kinetic energy analogue
$$\frac{1}{2m}\hat{p}_x^2 = E,$$
with state $\psi$ in this case having energy $E$ coincident with the kinetic energy of the particle. The general solutions of the Schrödinger equation for the particle in a box are
$$\psi(x) = Ae^{ikx} + Be^{-ikx}, \qquad E = \frac{\hbar^2 k^2}{2m},$$
or, from Euler's formula,
$$\psi(x) = C\sin(kx) + D\cos(kx).$$
The infinite potential walls of the box determine the values of $C$, $D$, and $k$ at $x = 0$ and $x = L$, where $\psi$ must be zero. Thus, at $x = 0$,
$$\psi(0) = 0 = C\sin(0) + D\cos(0) = D,$$
so $D = 0$. At $x = L$,
$$\psi(L) = 0 = C\sin(kL),$$
in which $C$ cannot be zero, as this would conflict with the postulate that $\psi$ has norm 1. Therefore, since $\sin(kL) = 0$, $kL$ must be an integer multiple of $\pi$:
$$k = \frac{n\pi}{L}, \qquad n = 1,2,3,\ldots.$$
This constraint on $k$ implies a constraint on the energy levels, yielding
$$E_n = \frac{\hbar^2\pi^2 n^2}{2mL^2} = \frac{n^2 h^2}{8mL^2}.$$

A finite potential well is the generalization of the infinite potential well problem to potential wells having finite depth. The finite potential well problem is mathematically more complicated than the infinite particle-in-a-box problem, since the wave function is not pinned to zero at the walls of the well. Instead, the wave function must satisfy more complicated mathematical boundary conditions, as it is nonzero in regions outside the well. Another related problem is that of the rectangular potential barrier, which furnishes a model for the quantum tunneling effect that plays an important role in the performance of modern technologies such as flash memory and scanning tunneling microscopy.
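As a worked example of the energy-level formula derived above, the following snippet evaluates $E_n = n^2 h^2/(8mL^2)$ for an electron confined to a 1 nm box; the box length is an arbitrary illustrative choice, and the physical constants come from scipy.constants.

```python
from scipy.constants import h, m_e, electron_volt

L = 1e-9   # box length: 1 nm (an arbitrary, illustrative choice)

def box_energy(n, L=L, m=m_e):
    """E_n = n^2 h^2 / (8 m L^2), in joules."""
    return n**2 * h**2 / (8 * m * L**2)

for n in range(1, 4):
    print(f"E_{n} = {box_energy(n) / electron_volt:.3f} eV")   # roughly 0.376, 1.505, 3.386 eV
```

The $n^2$ growth means the spacing between adjacent levels increases with $n$, in contrast to the evenly spaced levels of the harmonic oscillator discussed next.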
As in the classical case, the potential for the quantum harmonic oscillator is given by[7]: 234
$$V(x) = \frac{1}{2}m\omega^2 x^2.$$
This problem can either be treated by directly solving the Schrödinger equation, which is not trivial, or by using the more elegant "ladder method" first proposed by Paul Dirac. The eigenstates are given by
$$\psi_n(x) = \sqrt{\frac{1}{2^n\,n!}}\cdot\left(\frac{m\omega}{\pi\hbar}\right)^{1/4}\cdot e^{-\frac{m\omega x^2}{2\hbar}}\cdot H_n\!\left(\sqrt{\frac{m\omega}{\hbar}}\,x\right), \qquad n = 0,1,2,\ldots,$$
where $H_n$ are the Hermite polynomials
$$H_n(x) = (-1)^n e^{x^2}\frac{d^n}{dx^n}\left(e^{-x^2}\right),$$
and the corresponding energy levels are
$$E_n = \hbar\omega\left(n + \tfrac{1}{2}\right).$$
This is another example illustrating the discretization of energy for bound states.
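These eigenfunctions are straightforward to evaluate numerically: numpy.polynomial.hermite works with the physicists' Hermite polynomials used above. The sketch below (in units $\hbar = m = \omega = 1$, an illustrative assumption) checks orthonormality of two eigenstates by a simple Riemann sum on an arbitrary grid.

```python
import numpy as np
from numpy.polynomial.hermite import hermval
from math import factorial

hbar = m = omega = 1.0

def psi_n(n, x):
    """n-th harmonic-oscillator eigenfunction in units hbar = m = omega = 1."""
    xi = np.sqrt(m * omega / hbar) * x
    coeffs = np.zeros(n + 1)
    coeffs[n] = 1.0                                   # selects H_n in hermval
    norm = (m * omega / (np.pi * hbar)) ** 0.25 / np.sqrt(2.0**n * factorial(n))
    return norm * np.exp(-xi**2 / 2) * hermval(xi, coeffs)

x = np.linspace(-10.0, 10.0, 2001)
dx = x[1] - x[0]

# Orthonormality by Riemann sum: <psi_2, psi_3> ~ 0 and <psi_2, psi_2> ~ 1.
print(round(float(np.sum(psi_n(2, x) * psi_n(3, x)) * dx), 6))
print(round(float(np.sum(psi_n(2, x) ** 2) * dx), 6))

# Evenly spaced energy levels E_n = hbar*omega*(n + 1/2).
print([hbar * omega * (n + 0.5) for n in range(4)])
```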
The Mach–Zehnder interferometer (MZI) illustrates the concepts of superposition and interference with linear algebra in dimension 2, rather than differential equations. It can be seen as a simplified version of the double-slit experiment, but it is of interest in its own right, for example in the delayed-choice quantum eraser, the Elitzur–Vaidman bomb tester, and in studies of quantum entanglement.[35][36]

We can model a photon going through the interferometer by considering that at each point it can be in a superposition of only two paths: the "lower" path, which starts from the left, goes straight through both beam splitters, and ends at the top, and the "upper" path, which starts from the bottom, goes straight through both beam splitters, and ends at the right. The quantum state of the photon is therefore a vector $\psi \in \mathbb{C}^2$ that is a superposition of the "lower" path $\psi_l = \begin{pmatrix}1\\0\end{pmatrix}$ and the "upper" path $\psi_u = \begin{pmatrix}0\\1\end{pmatrix}$, that is, $\psi = \alpha\psi_l + \beta\psi_u$ for complex $\alpha,\beta$. In order to respect the postulate that $\langle\psi,\psi\rangle = 1$ we require that $|\alpha|^2 + |\beta|^2 = 1$.

Both beam splitters are modelled as the unitary matrix
$$B = \frac{1}{\sqrt{2}}\begin{pmatrix}1 & i\\ i & 1\end{pmatrix},$$
which means that when a photon meets the beam splitter it will either stay on the same path with a probability amplitude of $1/\sqrt{2}$, or be reflected to the other path with a probability amplitude of $i/\sqrt{2}$. The phase shifter on the upper arm is modelled as the unitary matrix
$$P = \begin{pmatrix}1 & 0\\ 0 & e^{i\Delta\Phi}\end{pmatrix},$$
which means that if the photon is on the "upper" path it will gain a relative phase of $\Delta\Phi$, and it will stay unchanged if it is on the lower path.

A photon that enters the interferometer from the left will then be acted upon by a beam splitter $B$, a phase shifter $P$, and another beam splitter $B$, and so ends up in the state
$$BPB\psi_l = ie^{i\Delta\Phi/2}\begin{pmatrix}-\sin(\Delta\Phi/2)\\ \cos(\Delta\Phi/2)\end{pmatrix},$$
and the probabilities that it will be detected at the right or at the top are given respectively by
$$p(u) = |\langle\psi_u, BPB\psi_l\rangle|^2 = \cos^2\frac{\Delta\Phi}{2}, \qquad p(l) = |\langle\psi_l, BPB\psi_l\rangle|^2 = \sin^2\frac{\Delta\Phi}{2}.$$
One can therefore use the Mach–Zehnder interferometer to estimate the phase shift by estimating these probabilities.

It is interesting to consider what would happen if the photon were definitely in either the "lower" or "upper" path between the beam splitters. This can be accomplished by blocking one of the paths, or equivalently by removing the first beam splitter (and feeding the photon from the left or the bottom, as desired). In both cases, there will be no interference between the paths anymore, and the probabilities are given by $p(u) = p(l) = 1/2$, independently of the phase $\Delta\Phi$. From this we can conclude that the photon does not take one path or another after the first beam splitter, but rather that it is in a genuine quantum superposition of the two paths.[37]
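The interference formulas above amount to a short linear-algebra computation, which the following sketch reproduces numerically for a few phase values; the specific phases are arbitrary.

```python
import numpy as np

psi_l = np.array([1.0, 0.0])        # "lower" path
psi_u = np.array([0.0, 1.0])        # "upper" path

B = np.array([[1, 1j], [1j, 1]]) / np.sqrt(2)            # beam splitter

def P(phase):
    """Phase shifter on the upper arm."""
    return np.array([[1, 0], [0, np.exp(1j * phase)]])

for phase in np.linspace(0.0, np.pi, 5):
    out = B @ P(phase) @ B @ psi_l
    p_u = abs(np.vdot(psi_u, out)) ** 2
    p_l = abs(np.vdot(psi_l, out)) ** 2
    assert np.isclose(p_u, np.cos(phase / 2) ** 2)       # matches cos^2(dPhi/2)
    assert np.isclose(p_l, np.sin(phase / 2) ** 2)       # matches sin^2(dPhi/2)
    print(f"phase = {phase:.2f}: p(top) = {p_u:.3f}, p(right) = {p_l:.3f}")

# Removing the first beam splitter destroys the interference:
out = B @ P(0.7) @ psi_l
print(abs(np.vdot(psi_u, out)) ** 2)                     # 0.5, independent of the phase
```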
Quantum mechanics has had enormous success in explaining many of the features of our universe with regard to small-scale and discrete quantities and interactions which cannot be explained by classical methods.[note 2] Quantum mechanics is often the only theory that can reveal the individual behaviors of the subatomic particles that make up all forms of matter (electrons, protons, neutrons, photons, and others). Solid-state physics and materials science are dependent upon quantum mechanics.[38]

In many respects, modern technology operates at a scale where quantum effects are significant. Important applications of quantum theory include quantum chemistry, quantum optics, quantum computing, superconducting magnets, light-emitting diodes, the optical amplifier and the laser, the transistor and semiconductors such as the microprocessor, and medical and research imaging such as magnetic resonance imaging and electron microscopy.[39] Explanations for many biological and physical phenomena are rooted in the nature of the chemical bond, most notably the macro-molecule DNA.

The rules of quantum mechanics assert that the state space of a system is a Hilbert space and that observables of the system are Hermitian operators acting on vectors in that space – although they do not tell us which Hilbert space or which operators. These can be chosen appropriately in order to obtain a quantitative description of a quantum system, a necessary step in making physical predictions. An important guide for making these choices is the correspondence principle, a heuristic which states that the predictions of quantum mechanics reduce to those of classical mechanics in the regime of large quantum numbers.[40] One can also start from an established classical model of a particular system, and then try to guess the underlying quantum model that would give rise to the classical model in the correspondence limit. This approach is known as quantization.[41]: 299[42]

When quantum mechanics was originally formulated, it was applied to models whose correspondence limit was non-relativistic classical mechanics. For instance, the well-known model of the quantum harmonic oscillator uses an explicitly non-relativistic expression for the kinetic energy of the oscillator, and is thus a quantum version of the classical harmonic oscillator.[7]: 234 Complications arise with chaotic systems, which do not have good quantum numbers, and quantum chaos studies the relationship between classical and quantum descriptions in these systems.[41]: 353

Quantum decoherence is a mechanism through which quantum systems lose coherence, and thus become incapable of displaying many typically quantum effects: quantum superpositions become simply probabilistic mixtures, and quantum entanglement becomes simply classical correlations.[7]: 687–730 Quantum coherence is not typically evident at macroscopic scales, though at temperatures approaching absolute zero quantum behavior may manifest macroscopically.[note 3]

Many macroscopic properties of a classical system are a direct consequence of the quantum behavior of its parts. For example, the stability of bulk matter (consisting of atoms and molecules which would quickly collapse under electric forces alone), the rigidity of solids, and the mechanical, thermal, chemical, optical and magnetic properties of matter are all results of the interaction of electric charges under the rules of quantum mechanics.[43]

Early attempts to merge quantum mechanics with special relativity involved the replacement of the Schrödinger equation with a covariant equation such as the Klein–Gordon equation or the Dirac equation. While these theories were successful in explaining many experimental results, they had certain unsatisfactory qualities stemming from their neglect of the relativistic creation and annihilation of particles. A fully relativistic quantum theory required the development of quantum field theory, which applies quantization to a field (rather than a fixed set of particles). The first complete quantum field theory, quantum electrodynamics, provides a fully quantum description of the electromagnetic interaction. Quantum electrodynamics is, along with general relativity, one of the most accurate physical theories ever devised.[44][45]

The full apparatus of quantum field theory is often unnecessary for describing electrodynamic systems. A simpler approach, one that has been used since the inception of quantum mechanics, is to treat charged particles as quantum mechanical objects being acted on by a classical electromagnetic field. For example, the elementary quantum model of the hydrogen atom describes the electric field of the hydrogen atom using a classical $-e^2/(4\pi\epsilon_0 r)$ Coulomb potential.[7]: 285 Likewise, in a Stern–Gerlach experiment, a charged particle is modeled as a quantum system, while the background magnetic field is described classically.[41]: 26 This "semi-classical" approach fails if quantum fluctuations in the electromagnetic field play an important role, such as in the emission of photons by charged particles.

Quantum field theories for the strong nuclear force and the weak nuclear force have also been developed. The quantum field theory of the strong nuclear force is called quantum chromodynamics, and describes the interactions of subnuclear particles such as quarks and gluons.
The weak nuclear force and the electromagnetic force were unified, in their quantized forms, into a single quantum field theory (known aselectroweak theory), by the physicistsAbdus Salam,Sheldon GlashowandSteven Weinberg.[46] Even though the predictions of both quantum theory and general relativity have been supported by rigorous and repeatedempirical evidence, their abstract formalisms contradict each other and they have proven extremely difficult to incorporate into one consistent, cohesive model. Gravity is negligible in many areas of particle physics, so that unification between general relativity and quantum mechanics is not an urgent issue in those particular applications. However, the lack of a correct theory ofquantum gravityis an important issue inphysical cosmologyand the search by physicists for an elegant "Theory of Everything" (TOE). Consequently, resolving the inconsistencies between both theories has been a major goal of 20th- and 21st-century physics. This TOE would combine not only the models of subatomic physics but also derive the four fundamental forces of nature from a single force or phenomenon.[47] One proposal for doing so isstring theory, which posits that thepoint-like particlesofparticle physicsare replaced byone-dimensionalobjects calledstrings. String theory describes how these strings propagate through space and interact with each other. On distance scales larger than the string scale, a string looks just like an ordinary particle, with itsmass,charge, and other properties determined by thevibrationalstate of the string. In string theory, one of the many vibrational states of the string corresponds to thegraviton, a quantum mechanical particle that carries gravitational force.[48][49] Another popular theory isloop quantum gravity(LQG), which describes quantum properties of gravity and is thus a theory ofquantum spacetime. LQG is an attempt to merge and adapt standard quantum mechanics and standard general relativity. This theory describes space as an extremely fine fabric "woven" of finite loops calledspin networks. The evolution of a spin network over time is called aspin foam. The characteristic length scale of a spin foam is thePlanck length, approximately 1.616×10−35m, and so lengths shorter than the Planck length are not physically meaningful in LQG.[50] Since its inception, the many counter-intuitive aspects and results of quantum mechanics have provoked strongphilosophicaldebates and manyinterpretations. The arguments centre on the probabilistic nature of quantum mechanics, the difficulties withwavefunction collapseand the relatedmeasurement problem, andquantum nonlocality. Perhaps the only consensus that exists about these issues is that there is no consensus.Richard Feynmanonce said, "I think I can safely say that nobody understands quantum mechanics."[51]According toSteven Weinberg, "There is now in my opinion no entirely satisfactory interpretation of quantum mechanics."[52] The views ofNiels Bohr, Werner Heisenberg and other physicists are often grouped together as the "Copenhagen interpretation".[53][54]According to these views, the probabilistic nature of quantum mechanics is not atemporaryfeature which will eventually be replaced by a deterministic theory, but is instead afinalrenunciation of the classical idea of "causality". 
Bohr in particular emphasized that any well-defined application of the quantum mechanical formalism must always make reference to the experimental arrangement, due to thecomplementarynature of evidence obtained under different experimental situations. Copenhagen-type interpretations were adopted by Nobel laureates in quantum physics, including Bohr,[55]Heisenberg,[56]Schrödinger,[57]Feynman,[2]andZeilinger[58]as well as 21st-century researchers in quantum foundations.[59] Albert Einstein, himself one of the founders ofquantum theory, was troubled by its apparent failure to respect some cherished metaphysical principles, such asdeterminismandlocality. Einstein's long-running exchanges with Bohr about the meaning and status of quantum mechanics are now known as theBohr–Einstein debates. Einstein believed that underlying quantum mechanics must be a theory that explicitly forbidsaction at a distance. He argued that quantum mechanics was incomplete, a theory that was valid but not fundamental, analogous to howthermodynamicsis valid, but the fundamental theory behind it isstatistical mechanics. In 1935, Einstein and his collaboratorsBoris PodolskyandNathan Rosenpublished an argument that the principle of locality implies the incompleteness of quantum mechanics, athought experimentlater termed theEinstein–Podolsky–Rosen paradox.[note 4]In 1964,John Bellshowed that EPR's principle of locality, together with determinism, was actually incompatible with quantum mechanics: they implied constraints on the correlations produced by distance systems, now known asBell inequalities, that can be violated by entangled particles.[64]Since thenseveral experimentshave been performed to obtain these correlations, with the result that they do in fact violate Bell inequalities, and thus falsify the conjunction of locality with determinism.[16][17] Bohmian mechanicsshows that it is possible to reformulate quantum mechanics to make it deterministic, at the price of making it explicitly nonlocal. It attributes not only a wave function to a physical system, but in addition a real position, that evolves deterministically under a nonlocal guiding equation. The evolution of a physical system is given at all times by the Schrödinger equation together with the guiding equation; there is never a collapse of the wave function. This solves the measurement problem.[65] Everett'smany-worlds interpretation, formulated in 1956, holds thatallthe possibilities described by quantum theorysimultaneouslyoccur in a multiverse composed of mostly independent parallel universes.[66]This is a consequence of removing the axiom of the collapse of the wave packet. All possible states of the measured system and the measuring apparatus, together with the observer, are present in a real physical quantum superposition. While the multiverse is deterministic, we perceive non-deterministic behavior governed by probabilities, because we do not observe the multiverse as a whole, but only one parallel universe at a time. Exactly how this is supposed to work has been the subject of much debate. 
Several attempts have been made to make sense of this and derive the Born rule,[67][68]with no consensus on whether they have been successful.[69][70][71] Relational quantum mechanicsappeared in the late 1990s as a modern derivative of Copenhagen-type ideas,[72][73]andQBismwas developed some years later.[74][75] Quantum mechanics was developed in the early decades of the 20th century, driven by the need to explain phenomena that, in some cases, had been observed in earlier times. Scientific inquiry into the wave nature of light began in the 17th and 18th centuries, when scientists such asRobert Hooke,Christiaan HuygensandLeonhard Eulerproposed a wave theory of light based on experimental observations.[76]In 1803 EnglishpolymathThomas Youngdescribed the famousdouble-slit experiment.[77]This experiment played a major role in the general acceptance of thewave theory of light. During the early 19th century,chemicalresearch byJohn DaltonandAmedeo Avogadrolent weight to theatomic theoryof matter, an idea thatJames Clerk Maxwell,Ludwig Boltzmannand others built upon to establish thekinetic theory of gases. The successes of kinetic theory gave further credence to the idea that matter is composed of atoms, yet the theory also had shortcomings that would only be resolved by the development of quantum mechanics.[78]While the early conception of atoms fromGreek philosophyhad been that they were indivisible units – the word "atom" deriving from theGreekfor 'uncuttable' – the 19th century saw the formulation of hypotheses about subatomic structure. One important discovery in that regard wasMichael Faraday's 1838 observation of a glow caused by an electrical discharge inside a glass tube containing gas at low pressure.Julius Plücker,Johann Wilhelm HittorfandEugen Goldsteincarried on and improved upon Faraday's work, leading to the identification ofcathode rays, whichJ. J. Thomsonfound to consist of subatomic particles that would be called electrons.[79][80] Theblack-body radiationproblem was discovered byGustav Kirchhoffin 1859. In 1900, Max Planck proposed the hypothesis that energy is radiated and absorbed in discrete "quanta" (or energy packets), yielding a calculation that precisely matched the observed patterns of black-body radiation.[81]The wordquantumderives from theLatin, meaning "how great" or "how much".[82]According to Planck, quantities of energy could be thought of as divided into "elements" whose size (E) would be proportional to theirfrequency(ν):E=hν{\displaystyle E=h\nu \ }, wherehis thePlanck constant. Planck cautiously insisted that this was only an aspect of the processes of absorption and emission of radiation and was not thephysical realityof the radiation.[83]In fact, he considered his quantum hypothesis a mathematical trick to get the right answer rather than a sizable discovery.[84]However, in 1905 Albert Einstein interpreted Planck's quantum hypothesisrealisticallyand used it to explain thephotoelectric effect, in which shining light on certain materials can eject electrons from the material. 
Niels Bohr then developed Planck's ideas about radiation into amodel of the hydrogen atomthat successfully predicted thespectral linesof hydrogen.[85]Einstein further developed this idea to show that anelectromagnetic wavesuch as light could also be described as a particle (later called the photon), with a discrete amount of energy that depends on its frequency.[86]In his paper "On the Quantum Theory of Radiation", Einstein expanded on the interaction between energy and matter to explain the absorption and emission of energy by atoms. Although overshadowed at the time by his general theory of relativity, this paper articulated the mechanism underlying the stimulated emission of radiation,[87]which became the basis of the laser.[88] This phase is known as theold quantum theory. Never complete or self-consistent, the old quantum theory was rather a set ofheuristiccorrections to classical mechanics.[89][90]The theory is now understood as asemi-classical approximationto modern quantum mechanics.[91][92]Notable results from this period include, in addition to the work of Planck, Einstein and Bohr mentioned above, Einstein andPeter Debye's work on thespecific heatof solids, Bohr andHendrika Johanna van Leeuwen'sproofthat classical physics cannot account fordiamagnetism, andArnold Sommerfeld's extension of the Bohr model to include special-relativistic effects.[89][93] In the mid-1920s quantum mechanics was developed to become the standard formulation for atomic physics. In 1923, the French physicistLouis de Broglieput forward his theory of matter waves by stating that particles can exhibit wave characteristics and vice versa. Building on de Broglie's approach, modern quantum mechanics was born in 1925, when the German physicists Werner Heisenberg, Max Born, andPascual Jordan[94][95]developedmatrix mechanicsand the Austrian physicist Erwin Schrödinger inventedwave mechanics. Born introduced the probabilistic interpretation of Schrödinger's wave function in July 1926.[96]Thus, the entire field of quantum physics emerged, leading to its wider acceptance at the FifthSolvay Conferencein 1927.[97] By 1930, quantum mechanics had been further unified and formalized byDavid Hilbert, Paul Dirac andJohn von Neumann[98]with greater emphasis onmeasurement, the statistical nature of our knowledge of reality, andphilosophical speculation about the 'observer'. It has since permeated many disciplines, including quantum chemistry,quantum electronics,quantum optics, andquantum information science. It also provides a useful framework for many features of the modernperiodic table of elements, and describes the behaviors ofatomsduringchemical bondingand the flow of electrons in computersemiconductors, and therefore plays a crucial role in many modern technologies. While quantum mechanics was constructed to describe the world of the very small, it is also needed to explain somemacroscopicphenomena such assuperconductors[99]andsuperfluids.[100] The following titles, all by working physicists, attempt to communicate quantum theory to lay people, using a minimum of technical apparatus: More technical: Course material Philosophy
https://en.wikipedia.org/wiki/Quantum_mechanics
Inmathematics, adiffeologyon a set generalizes the concept of a smooth atlas of adifferentiable manifold, by declaring only what constitutes the "smooth parametrizations" into the set. A diffeological space is a set equipped with a diffeology. Many of the standard tools ofdifferential geometryextend to diffeological spaces, which beyond manifolds include arbitrary quotients of manifolds, arbitrary subsets of manifolds, and spaces of mappings between manifolds. Thedifferential calculusonRn{\displaystyle \mathbb {R} ^{n}}, or, more generally, on finite dimensionalvector spaces, is one of the most impactful successes of modern mathematics. Fundamental to its basic definitions and theorems is the linear structure of the underlying space.[1][2] The field ofdifferential geometryestablishes and studies the extension of the classical differential calculus to non-linear spaces. This extension is made possible by the definition of asmooth manifold, which is also the starting point for diffeological spaces. A smoothn{\displaystyle n}-dimensional manifold is a setM{\displaystyle M}equipped with a maximalsmooth atlas, which consists of injective functions, calledcharts, of the formϕ:U→M{\displaystyle \phi :U\to M}, whereU{\displaystyle U}is an open subset ofRn{\displaystyle \mathbb {R} ^{n}}, satisfying some mutual-compatibility relations. The charts of a manifold perform two distinct functions, which are often syncretized:[3][4][5] A diffeology generalizes the structure of a smooth manifold by abandoning the first requirement for an atlas, namely that the charts give a local model of the space, while retaining the ability to discuss smooth maps into the space.[6][7][8] Adiffeological spaceis a setX{\displaystyle X}equipped with adiffeology: a collection of maps{p:U→X∣Uis an open subset ofRn,andn≥0},{\displaystyle \{p:U\to X\mid U{\text{ is an open subset of }}\mathbb {R} ^{n},{\text{ and }}n\geq 0\},}whose members are calledplots, that satisfies some axioms. The plots are not required to be injective, and can (indeed, must) have as domains the open subsets of arbitrary Euclidean spaces. A smooth manifold can be viewed as a diffeological space which is locally diffeomorphic toRn{\displaystyle \mathbb {R} ^{n}}. In general, while not giving local models for the space, the axioms of a diffeology still ensure that the plots induce a coherent notion of smooth functions, smooth curves, smooth homotopies, etc. Diffeology is therefore suitable to treat objects more general than manifolds.[6][7][8] LetM{\displaystyle M}andN{\displaystyle N}be smooth manifolds. A smooth homotopy of mapsM→N{\displaystyle M\to N}is a smooth mapH:R×M→N{\displaystyle H:\mathbb {R} \times M\to N}. For eacht∈R{\displaystyle t\in \mathbb {R} }, the mapHt:=H(t,⋅):M→N{\displaystyle H_{t}:=H(t,\cdot ):M\to N}is smooth, and the intuition behind a smooth homotopy is that it is a smooth curve into the space of smooth functionsC∞(M,N){\displaystyle {\mathcal {C}}^{\infty }(M,N)}connecting, say,H0{\displaystyle H_{0}}andH1{\displaystyle H_{1}}. ButC∞(M,N){\displaystyle {\mathcal {C}}^{\infty }(M,N)}is not a finite-dimensional smooth manifold, so formally we cannot yet speak of smooth curves into it. On the other hand, the collection of maps{p:U→C∞(M,N)∣the mapU×M→N,(r,x)↦p(r)(x)is smooth}{\displaystyle \{p:U\to {\mathcal {C}}^{\infty }(M,N)\mid {\text{ the map }}U\times M\to N,\ (r,x)\mapsto p(r)(x){\text{ is smooth}}\}}is a diffeology onC∞(M,N){\displaystyle {\mathcal {C}}^{\infty }(M,N)}. 
With this structure, the smooth curves (a notion which is now rigorously defined) correspond precisely to the smooth homotopies.[6][7][8] The concept of diffeology was first introduced byJean-Marie Souriauin the 1980s under the nameespace différentiel.[9][10]Souriau's motivating application for diffeology was to uniformly handle the infinite-dimensional groups arising from his work ingeometric quantization. Thus the notion of diffeological group preceded the more general concept of a diffeological space. Souriau's diffeological program was taken up by his students, particularlyPaul Donato[11]andPatrick Iglesias-Zemmour,[12]who completed early pioneering work in the field. A structure similar to diffeology was introduced byKuo-Tsaï Chen(陳國才,Chen Guocai) in the 1970s, in order to formalize certain computations with path integrals. Chen's definition usedconvex setsinstead of open sets for the domains of the plots.[13]The similarity between diffeological and "Chen" structures can be made precise by viewing both as concrete sheaves over the appropriate concrete site.[14] Adiffeologyon a setX{\displaystyle X}consists of a collection of maps, calledplotsor parametrizations, fromopen subsetsofRn{\displaystyle \mathbb {R} ^{n}}(for alln≥0{\displaystyle n\geq 0}) toX{\displaystyle X}such that the following axioms hold: Note that the domains of different plots can be subsets ofRn{\displaystyle \mathbb {R} ^{n}}for different values ofn{\displaystyle n}; in particular, any diffeology contains the elements of its underlying set as the plots withn=0{\displaystyle n=0}. A set together with a diffeology is called adiffeological space. More abstractly, a diffeological space is a concretesheafon thesiteof open subsets ofRn{\displaystyle \mathbb {R} ^{n}}, for alln≥0{\displaystyle n\geq 0}, andopen covers.[14] A map between diffeological spaces is calledsmoothif and only if its composite with any plot of the first space is a plot of the second space. It is called adiffeomorphismif it is smooth,bijective, and itsinverseis also smooth. Equipping the open subsets of Euclidean spaces with their standard diffeology (as defined in the next section), the plots into a diffeological spaceX{\displaystyle X}are precisely the smooth maps fromU{\displaystyle U}toX{\displaystyle X}. Diffeological spaces constitute the objects of acategory, denoted byDflg{\displaystyle {\mathsf {Dflg}}}, whosemorphismsare smooth maps. The categoryDflg{\displaystyle {\mathsf {Dflg}}}is closed under many categorical operations: for instance, it isCartesian closed,completeandcocomplete, and more generally it is aquasitopos.[14] Any diffeological space is atopological spacewhen equipped with theD-topology:[12]thefinal topologysuch that all plots arecontinuous(with respect to theEuclidean topologyonRn{\displaystyle \mathbb {R} ^{n}}). In other words, a subsetU⊂X{\displaystyle U\subset X}is open if and only ifp−1(U){\displaystyle p^{-1}(U)}is open for any plotp{\displaystyle p}onX{\displaystyle X}. Actually, the D-topology is completely determined by smoothcurves, i.e. 
a subsetU⊂X{\displaystyle U\subset X}is open if and only ifc−1(U){\displaystyle c^{-1}(U)}is open for any smooth mapc:R→X{\displaystyle c:\mathbb {R} \to X}.[15]The D-topology is automaticallylocally path-connected[16] A smooth map between diffeological spaces is automaticallycontinuousbetween their D-topologies.[6]Therefore we have the functorD:Dflg→Top{\displaystyle D:{\mathsf {Dflg}}\to {\mathsf {Top}}}, from the category of diffeological spaces to the category of topological spaces, which assigns to a diffeological space its D-topology. This functor realizesDflg{\displaystyle {\mathsf {Dflg}}}as aconcrete categoryoverTop{\displaystyle {\mathsf {Top}}}. A Cartan-De Rham calculus can be developed in the framework of diffeologies, as well as a suitable adaptation of the notions offiber bundles,homotopy, etc.[6]However, there is not a canonical definition oftangent spacesandtangent bundlesfor diffeological spaces.[17] Any set carries at least two diffeologies: Any topological space can be endowed with thecontinuousdiffeology, whose plots are thecontinuousmaps. The Euclidean spaceRn{\displaystyle \mathbb {R} ^{n}}admits several diffeologies beyond those listed above. Diffeological spaces generalize manifolds, but they are far from the only mathematical objects to do so. For instance manifolds with corners, orbifolds, and infinite-dimensional Fréchet manifolds are all well-established alternatives. This subsection makes precise the extent to which these spaces are diffeological. We viewDflg{\displaystyle {\mathsf {Dflg}}}as a concrete category over the category of topological spacesTop{\displaystyle {\mathsf {Top}}}via the D-topology functorD:Dflg→Top{\displaystyle D:{\mathsf {Dflg}}\to {\mathsf {Top}}}. IfU:C→Top{\displaystyle U:{\mathsf {C}}\to {\mathsf {Top}}}is another concrete category overTop{\displaystyle {\mathsf {Top}}}, we say that a functorE:C→Dflg{\displaystyle E:{\mathsf {C}}\to {\mathsf {Dflg}}}is an embedding (of concrete categories) if it is injective on objects and faithful, andD∘E=U{\displaystyle D\circ E=U}. To specify an embedding, we need only describe it on objects; it is necessarily the identity map on arrows. We will say that a diffeological spaceX{\displaystyle X}islocally modeledby a collection of diffeological spacesE{\displaystyle {\mathcal {E}}}if around every pointx∈X{\displaystyle x\in X}, there is a D-open neighbourhoodU{\displaystyle U}, a D-open subsetV{\displaystyle V}of someE∈E{\displaystyle E\in {\mathcal {E}}}, and a diffeological diffeomorphismU→V{\displaystyle U\to V}.[6][19] The category of finite-dimensional smooth manifolds (allowing those with connected components of different dimensions) fully embeds intoDflg{\displaystyle {\mathsf {Dflg}}}. The embeddingy{\displaystyle y}assigns to a smooth manifoldM{\displaystyle M}the canonical diffeology{p:U→M∣pis smooth in the usual sense}.{\displaystyle \{p:U\to M\mid p{\text{ is smooth in the usual sense}}\}.}In particular, a diffeologically smooth map between manifolds is smooth in the usual sense, and the D-topology ofy(M){\displaystyle y(M)}is the original topology ofM{\displaystyle M}. Theessential imageof this embedding consists of those diffeological spaces that are locally modeled by the collection{y(Rn)}{\displaystyle \{y(\mathbb {R} ^{n})\}}, and whose D-topology isHausdorffandsecond-countable.[6] The category of finite-dimensional smoothmanifolds with boundary(allowing those with connected components of different dimensions) similarly fully embeds intoDflg{\displaystyle {\mathsf {Dflg}}}. 
The embedding is defined identically to the smooth case, except "smooth in the usual sense" refers to the standard definition of smooth maps between manifolds with boundary. The essential image of this embedding consists of those diffeological spaces that are locally modeled by the collection{y(O)∣Ois a half-space}{\displaystyle \{y(O)\mid O{\text{ is a half-space}}\}}, and whose D-topology is Hausdorff and second-countable. The same can be done in more generality formanifolds with corners, using the collection{y(O)∣Ois an orthant}{\displaystyle \{y(O)\mid O{\text{ is an orthant}}\}}.[20] The category ofFréchet manifoldssimilarly fully embeds intoDflg{\displaystyle {\mathsf {Dflg}}}. Once again, the embedding is defined identically to the smooth case, except "smooth in the usual sense" refers to the standard definition of smooth maps between Fréchet spaces. The essential image of this embedding consists of those diffeological spaces that are locally modeled by the collection{y(E)∣Eis a Fréchet space}{\displaystyle \{y(E)\mid E{\text{ is a Fréchet space}}\}}, and whose D-topology is Hausdorff. The embedding restricts to one of the category ofBanach manifolds. Historically, the case of Banach manifolds was proved first, by Hain,[21]and the case of Fréchet manifolds was treated later, by Losik.[22][23]The category of manifolds modeled onconvenient vector spacesalso similarly embeds intoDflg{\displaystyle {\mathsf {Dflg}}}.[24][25] A (classical)orbifoldX{\displaystyle X}is a space that is locally modeled by quotients of the formRn/Γ{\displaystyle \mathbb {R} ^{n}/\Gamma }, whereΓ{\displaystyle \Gamma }is afinite subgroupof linear transformations. On the other hand, each modelRn/Γ{\displaystyle \mathbb {R} ^{n}/\Gamma }is naturally a diffeological space (with the quotient diffeology discussed below), and therefore the orbifold charts generate a diffeology onX{\displaystyle X}. This diffeology is uniquely determined by the orbifold structure ofX{\displaystyle X}. Conversely, a diffeological space that is locally modeled by the collection{Rn/Γ}{\displaystyle \{\mathbb {R} ^{n}/\Gamma \}}(and with Hausdorff D-topology) carries a classical orbifold structure that induces the original diffeology, wherein the local diffeomorphisms are the orbifold charts. Such a space is called a diffeological orbifold.[26] Whereas diffeological orbifolds automatically have a notion of smooth map between them (namely diffeologically smooth maps inDflg{\displaystyle {\mathsf {Dflg}}}), the notion of a smooth map between classical orbifolds is not standardized. If orbifolds are viewed asdifferentiable stackspresented by étale properLie groupoids, then there is a functor from the underlying 1-category of orbifolds, and equivalent maps-of-stacks between them, toDflg{\displaystyle {\mathsf {Dflg}}}. Its essential image consists of diffeological orbifolds, but the functor is neither faithful nor full.[27] If a setX{\displaystyle X}is given two different diffeologies, theirintersectionis a diffeology onX{\displaystyle X}, called theintersection diffeology, which is finer than both starting diffeologies. The D-topology of the intersection diffeology is finer than the intersection of the D-topologies of the original diffeologies. IfX{\displaystyle X}andY{\displaystyle Y}are diffeological spaces, then theproductdiffeology on theCartesian productX×Y{\displaystyle X\times Y}is the diffeology generated by all products of plots ofX{\displaystyle X}and ofY{\displaystyle Y}. 
Precisely, a mapp:U→X×Y{\displaystyle p:U\to X\times Y}necessarily has the formp(u)=(x(u),y(u)){\displaystyle p(u)=(x(u),y(u))}for mapsx:U→X{\displaystyle x:U\to X}andy:U→Y{\displaystyle y:U\to Y}. The mapp{\displaystyle p}is a plot in the product diffeology if and only ifx{\displaystyle x}andy{\displaystyle y}are plots ofX{\displaystyle X}andY{\displaystyle Y}, respectively. This generalizes to products of arbitrary collections of spaces. The D-topology ofX×Y{\displaystyle X\times Y}is the coarsest delta-generated topology containing theproduct topologyof the D-topologies ofX{\displaystyle X}andY{\displaystyle Y}; it is equal to the product topology whenX{\displaystyle X}orY{\displaystyle Y}islocally compact, but may be finer in general.[15] Given a mapf:X→Y{\displaystyle f:X\to Y}from a setX{\displaystyle X}to a diffeological spaceY{\displaystyle Y}, thepullbackdiffeology onX{\displaystyle X}consists of those mapsp:U→X{\displaystyle p:U\to X}such that the compositionf∘p{\displaystyle f\circ p}is a plot ofY{\displaystyle Y}. In other words, the pullback diffeology is the smallest diffeology onX{\displaystyle X}makingf{\displaystyle f}smooth. IfX{\displaystyle X}is asubsetof the diffeological spaceY{\displaystyle Y}, then thesubspacediffeology onX{\displaystyle X}is the pullback diffeology induced by the inclusionX↪Y{\displaystyle X\hookrightarrow Y}. In this case, the D-topology ofX{\displaystyle X}is equal to thesubspace topologyof the D-topology ofY{\displaystyle Y}ifY{\displaystyle Y}is open, but may be finer in general. Given a mapf:X→Y{\displaystyle f:X\to Y}from diffeological spaceX{\displaystyle X}to a setY{\displaystyle Y}, thepushforwarddiffeology onY{\displaystyle Y}is the diffeology generated by the compositionsf∘p{\displaystyle f\circ p}, for plotsp:U→X{\displaystyle p:U\to X}ofX{\displaystyle X}. In other words, the pushforward diffeology is the smallest diffeology onY{\displaystyle Y}makingf{\displaystyle f}smooth. IfX{\displaystyle X}is a diffeological space and∼{\displaystyle \sim }is anequivalence relationonX{\displaystyle X}, then thequotientdiffeology on thequotient setX/∼{\displaystyle X/{\sim }}is the pushforward diffeology induced by the quotient mapX→X/∼{\displaystyle X\to X/{\sim }}. The D-topology onX/∼{\displaystyle X/{\sim }}is thequotient topologyof the D-topology ofX{\displaystyle X}. Note that this topology may be trivial without the diffeology being trivial. Quotients often give rise to non-manifold diffeologies. For example, the set ofreal numbersR{\displaystyle \mathbb {R} }is a smooth manifold. The quotientR/(Z+αZ){\displaystyle \mathbb {R} /(\mathbb {Z} +\alpha \mathbb {Z} )}, for someirrationalα{\displaystyle \alpha }, called theirrational torus, is a diffeological space diffeomorphic to the quotient of the regular2-torusR2/Z2{\displaystyle \mathbb {R} ^{2}/\mathbb {Z} ^{2}}by a line ofslopeα{\displaystyle \alpha }. It has a non-trivial diffeology, although its D-topology is thetrivial topology.[28] Thefunctionaldiffeology on the setC∞(X,Y){\displaystyle {\mathcal {C}}^{\infty }(X,Y)}of smooth maps between two diffeological spacesX{\displaystyle X}andY{\displaystyle Y}is the diffeology whose plots are the mapsϕ:U→C∞(X,Y){\displaystyle \phi :U\to {\mathcal {C}}^{\infty }(X,Y)}such thatU×X→Y,(u,x)↦ϕ(u)(x){\displaystyle U\times X\to Y,\quad (u,x)\mapsto \phi (u)(x)}is smooth with respect to the product diffeology ofU×X{\displaystyle U\times X}. 
WhenX{\displaystyle X}andY{\displaystyle Y}are manifolds, the D-topology ofC∞(X,Y){\displaystyle {\mathcal {C}}^{\infty }(X,Y)}is the smallestlocally path-connectedtopology containing theWhitneyC∞{\displaystyle C^{\infty }}topology.[15] Taking the subspace diffeology of a functional diffeology, one can define diffeologies on the space ofsectionsof afibre bundle, or the space of bisections of aLie groupoid, etc. IfM{\displaystyle M}is a compact smooth manifold, andF→M{\displaystyle F\to M}is a smooth fiber bundle overM{\displaystyle M}, then the space of smooth sectionsΓ(F){\displaystyle \Gamma (F)}of the bundle is frequently equipped with the structure of a Fréchet manifold.[29]Upon embedding this Fréchet manifold into the category of diffeological spaces, the resulting diffeology coincides with the subspace diffeology thatΓ(F){\displaystyle \Gamma (F)}inherits from the functional diffeology onC∞(M,F){\displaystyle {\mathcal {C}}^{\infty }(M,F)}.[30] Analogous to the notions ofsubmersionsandimmersionsbetween manifolds, there are two special classes of morphisms between diffeological spaces. Asubductionis a surjective functionf:X→Y{\displaystyle f:X\to Y}between diffeological spaces such that the diffeology ofY{\displaystyle Y}is the pushforward of the diffeology ofX{\displaystyle X}. Similarly, aninductionis an injective functionf:X→Y{\displaystyle f:X\to Y}between diffeological spaces such that the diffeology ofX{\displaystyle X}is the pullback of the diffeology ofY{\displaystyle Y}. Subductions and inductions are automatically smooth. It is instructive to consider the case whereX{\displaystyle X}andY{\displaystyle Y}are smooth manifolds. f:(−π2,3π2)→R2,f(t):=(2cos⁡(t),sin⁡(2t)).{\displaystyle f:\left(-{\frac {\pi }{2}},{\frac {3\pi }{2}}\right)\to \mathbb {R^{2}} ,\quad f(t):=(2\cos(t),\sin(2t)).} f:R→R2,f(t):=(t2,t3).{\displaystyle f:\mathbb {R} \to \mathbb {R} ^{2},\quad f(t):=(t^{2},t^{3}).} In the category of diffeological spaces, subductions are precisely the strongepimorphisms, and inductions are precisely the strongmonomorphisms.[18]A map that is both a subduction and induction is a diffeomorphism.
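Purely as a schematic sketch of the definitions in this article (not an implementation from the literature), a diffeology can be encoded in Python as a predicate on parametrizations. The names Plot, Diffeology, pullback_diffeology and is_smooth are invented here for illustration; no diffeology axioms are checked, and smoothness can only be sampled on a finite family of test plots.

```python
from typing import Any, Callable, Iterable

# Schematic only: a "plot" is a parametrization u -> x from an open subset of some
# R^n into the underlying set; a diffeology is encoded as a predicate deciding
# which parametrizations count as plots.  The diffeology axioms (constants,
# locality, smooth compatibility) are the caller's responsibility here.
Plot = Callable[..., Any]
Diffeology = Callable[[Plot], bool]

def pullback_diffeology(f: Callable, plots_Y: Diffeology) -> Diffeology:
    """Pullback along f: X -> Y: p is a plot of X iff f o p is a plot of Y.
    This is the smallest diffeology on X making f smooth."""
    return lambda p: plots_Y(lambda u: f(p(u)))

def is_smooth(f: Callable, plots_X: Diffeology, plots_Y: Diffeology,
              sample_plots: Iterable[Plot]) -> bool:
    """f: X -> Y is smooth iff f o p is a plot of Y for every plot p of X.
    Computationally this can only be sampled on finitely many test plots."""
    return all(plots_Y(lambda u, p=p: f(p(u)))
               for p in sample_plots if plots_X(p))
```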
https://en.wikipedia.org/wiki/Diffeology
Diffeomorphometryis the metric study of imagery, shape and form in the discipline ofcomputational anatomy(CA) inmedical imaging. The study of images incomputational anatomyrely on high-dimensionaldiffeomorphismgroupsφ∈DiffV{\displaystyle \varphi \in \operatorname {Diff} _{V}}which generate orbits of the formI≐{φ⋅I∣φ∈DiffV}{\displaystyle {\mathcal {I}}\doteq \{\varphi \cdot I\mid \varphi \in \operatorname {Diff} _{V}\}}, in which imagesI∈I{\displaystyle I\in {\mathcal {I}}}can be dense scalarmagnetic resonanceorcomputed axial tomographyimages. Fordeformable shapesthese are the collection ofmanifoldsM≐{φ⋅M∣φ∈DiffV}{\displaystyle {\mathcal {M}}\doteq \{\varphi \cdot M\mid \varphi \in \operatorname {Diff} _{V}\}}, points,curvesandsurfaces. The diffeomorphisms move the images and shapes through the orbit according to(φ,I)↦φ⋅I{\displaystyle (\varphi ,I)\mapsto \varphi \cdot I}which are defined as thegroup actions of computational anatomy. The orbit of shapes and forms is made into a metric space by inducing a metric on the group of diffeomorphisms. The study of metrics on groups of diffeomorphisms and the study of metrics between manifolds and surfaces has been an area of significant investigation.[1][2][3][4][5][6][7][8][9]In Computational anatomy, the diffeomorphometry metric measures how close and far two shapes or images are from each other. Informally, themetricis constructed by defining a flow of diffeomorphismsϕ˙t,t∈[0,1],ϕt∈DiffV{\displaystyle {\dot {\phi }}_{t},t\in [0,1],\phi _{t}\in \operatorname {Diff} _{V}}which connect the group elements from one to another, so forφ,ψ∈DiffV{\displaystyle \varphi ,\psi \in \operatorname {Diff} _{V}}thenϕ0=φ,ϕ1=ψ{\displaystyle \phi _{0}=\varphi ,\phi _{1}=\psi }. The metric between two coordinate systems or diffeomorphisms is then the shortest length orgeodesic flowconnecting them. The metric on the space associated to the geodesics is given byρ(φ,ψ)=infϕ:ϕ0=φ,ϕ1=ψ∫01‖ϕ˙t‖ϕtdt{\displaystyle \rho (\varphi ,\psi )=\inf _{\phi :\phi _{0}=\varphi ,\phi _{1}=\psi }\int _{0}^{1}\|{\dot {\phi }}_{t}\|_{\phi _{t}}\,dt}. The metrics on the orbitsI,M{\displaystyle {\mathcal {I}},{\mathcal {M}}}are inherited from the metric induced on the diffeomorphism group. The groupφ∈DiffV{\displaystyle \varphi \in \operatorname {Diff} _{V}}is thusly made into a smoothRiemannian manifoldwith Riemannian metric‖⋅‖φ{\displaystyle \|\cdot \|_{\varphi }}associated to the tangent spaces at allφ∈DiffV{\displaystyle \varphi \in \operatorname {Diff} _{V}}. TheRiemannian metricsatisfies at every point of the manifoldϕ∈DiffV{\displaystyle \phi \in \operatorname {Diff} _{V}}there is aninner productinducing the norm on thetangent space‖ϕ˙t‖ϕt{\displaystyle \|{\dot {\phi }}_{t}\|_{\phi _{t}}}that varies smoothly acrossDiffV{\displaystyle \operatorname {Diff} _{V}}. Oftentimes, the familiarEuclidean metricis not directly applicable because the patterns of shapes and images don't form avector space. In theRiemannian orbit model of Computational anatomy, diffeomorphisms acting on the formsφ⋅I∈I,φ∈DiffV,M∈M{\displaystyle \varphi \cdot I\in {\mathcal {I}},\varphi \in \operatorname {Diff} _{V},M\in {\mathcal {M}}}don't act linearly. There are many ways to define metrics, and for the sets associated to shapes theHausdorff metricis another. The method used to induce theRiemannian metricis to induce the metric on the orbit of shapes by defining it in terms of the metric length between diffeomorphic coordinate system transformations of the flows. 
Measuring the lengths of the geodesic flow between coordinate systems in the orbit of shapes is called diffeomorphometry. The diffeomorphisms in computational anatomy are generated to satisfy the Lagrangian and Eulerian specification of the flow fields, $\varphi_t$, $t \in [0,1]$, generated via the ordinary differential equation $\dot{\varphi}_t = v_t \circ \varphi_t$, with the Eulerian vector fields $v \doteq (v_1, v_2, v_3)$ in $\mathbb{R}^3$ given by $v_t = \dot{\varphi}_t \circ \varphi_t^{-1}$, $t \in [0,1]$. The inverse of the flow is given by
$$\frac{d}{dt}\varphi_t^{-1} = -(D\varphi_t^{-1})\, v_t, \qquad \varphi_0^{-1} = \operatorname{id},$$
with the $3 \times 3$ Jacobian matrix for flows in $\mathbb{R}^3$ given as $D\varphi \doteq \left(\partial \varphi_i / \partial x_j\right)$.

To ensure smooth flows of diffeomorphisms with smooth inverses, the vector fields on $\mathbb{R}^3$ must be at least once continuously differentiable in space,[10][11] and are modelled as elements of a Hilbert space $(V, \|\cdot\|_V)$. Using the Sobolev embedding theorems, requiring each component $v_i \in H_0^3$, $i = 1, 2, 3$, to have three square-integrable derivatives implies that $(V, \|\cdot\|_V)$ embeds smoothly in the once continuously differentiable functions.[10][11] The diffeomorphism group consists of flows whose vector fields are absolutely integrable in the Sobolev norm:
$$\operatorname{Diff}_V \doteq \left\{ \varphi = \phi_1 : \dot{\phi}_t = v_t \circ \phi_t,\ \phi_0 = \operatorname{id},\ \int_0^1 \|v_t\|_V\, dt < \infty \right\}.$$

Shapes in computational anatomy (CA) are studied via the use of diffeomorphic mapping for establishing correspondences between anatomical coordinate systems. In this setting, 3-dimensional medical images are modelled as diffeomorphic transformations of some exemplar, termed the template $I_{\mathrm{temp}}$, so that the observed images are elements of the random orbit model of CA. For images these are defined as $I \in \mathcal{I} \doteq \{I = I_{\mathrm{temp}} \circ \varphi,\ \varphi \in \operatorname{Diff}_V\}$, with charts representing sub-manifolds denoted $\mathcal{M} \doteq \{\varphi \cdot M_{\mathrm{temp}} : \varphi \in \operatorname{Diff}_V\}$.

The orbits of shapes and forms in computational anatomy are generated by the group actions $\mathcal{I} \doteq \{\varphi \cdot I : \varphi \in \operatorname{Diff}_V\}$, $\mathcal{M} \doteq \{\varphi \cdot M : \varphi \in \operatorname{Diff}_V\}$. These are made into Riemannian orbits by introducing a metric associated to each point and its tangent space. For this a metric is defined on the group, which induces the metric on the orbit. The metric for computational anatomy is taken, at each element $\varphi \in \operatorname{Diff}_V$ of the group of diffeomorphisms, on tangent vector fields modelled to lie in a Hilbert space with norm $\|\cdot\|_V$. We model $V$ as a reproducing kernel Hilbert space (RKHS) defined by a one-to-one differential operator $A : V \to V^*$, where $V^*$ is the dual space.
In general, $\sigma \doteq Av \in V^*$ is a generalized function or distribution; the linear form associated to the inner product and norm is, for generalized functions, interpreted by integration by parts, for $v, w \in V$. When $Av \doteq \mu\, dx$ is a vector density,
$$\int Av \cdot v\, dx \doteq \int \mu \cdot v\, dx = \sum_{i=1}^{3} \int \mu_i v_i\, dx.$$
The differential operator is selected so that the Green's kernel associated to its inverse is sufficiently smooth that the vector fields support one continuous derivative. The Sobolev embedding arguments above demonstrated that one continuous derivative is required for smooth flows. The Green's operator generated from the Green's function (scalar case) associated to the differential operator is smoothing. For a proper choice of $A$, $(V, \|\cdot\|_V)$ is an RKHS with the operator $K = A^{-1} : V^* \to V$. The Green's kernels associated to the differential operator smooth, since controlling enough derivatives in the square-integrable sense makes the kernel $k(\cdot, \cdot)$ continuously differentiable in both variables, which in turn gives continuous differentiability of the vector fields.

The metric on the group of diffeomorphisms is defined by the distance on pairs of elements of the group, given by the geodesic length $\rho$ displayed above. This distance provides a right-invariant metric of diffeomorphometry,[12][13][14] invariant to reparameterization of space, since for all $\phi \in \operatorname{Diff}_V$ the distance is unchanged when both arguments are composed on the right with $\phi$. The distance on images,[15] $d_{\mathcal{I}} : \mathcal{I} \times \mathcal{I} \to \mathbb{R}^+$, and the distance on shapes and forms,[16] $d_{\mathcal{M}} : \mathcal{M} \times \mathcal{M} \to \mathbb{R}^+$, are induced from this group distance.

For calculating the metric, the geodesics are a dynamical system: the flow of coordinates $t \mapsto \phi_t \in \operatorname{Diff}_V$ and the controlling vector field $t \mapsto v_t \in V$ are related via $\dot{\phi}_t = v_t \circ \phi_t$, $\phi_0 = \operatorname{id}$. The Hamiltonian view[17][18][19][20][21] reparameterizes the momentum distribution $Av \in V^*$ in terms of the Hamiltonian momentum, a Lagrange multiplier $p : \dot{\phi} \mapsto (p \mid \dot{\phi})$ constraining the Lagrangian velocity $\dot{\phi}_t = v_t \circ \phi_t$. The Pontryagin maximum principle[17] gives the Hamiltonian $H(\phi_t, p_t) \doteq \max_v H(\phi_t, p_t, v)$. The optimizing vector field $v_t \doteq \operatorname{argmax}_v H(\phi_t, p_t, v)$ yields the dynamics
$$\dot{\phi}_t = \frac{\partial H(\phi_t, p_t)}{\partial p}, \qquad \dot{p}_t = -\frac{\partial H(\phi_t, p_t)}{\partial \phi}.$$
Along the geodesic the Hamiltonian is constant:[22]
$$H(\phi_t, p_t) = H(\operatorname{id}, p_0) = \frac{1}{2}\int_X p_0 \cdot v_0\, dx.$$
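The Hamiltonian geodesic equations above have a well-known finite-dimensional reduction for point landmarks (the landmark case taken up in the following passage). The sketch below integrates that reduced system for a Gaussian reproducing kernel; the kernel, its width, the initial momenta and the forward-Euler integrator are illustrative assumptions, not the discretization used by any particular software package.

```python
import numpy as np

# Landmark reduction sketch, assuming a Gaussian kernel
# K(x, y) = exp(-|x - y|^2 / (2 sigma^2)) and Hamiltonian
# H = 1/2 * sum_jk (p_j . p_k) K(x_j, x_k).

sigma = 1.0

def K(xa, xb):
    d2 = np.sum((xa[:, None, :] - xb[None, :, :]) ** 2, axis=-1)
    return np.exp(-d2 / (2 * sigma**2))

def hamiltonian(x, p):
    return 0.5 * np.sum(K(x, x) * (p @ p.T))

def step(x, p, dt):
    G = K(x, x)                                   # kernel (Gram) matrix
    x_dot = G @ p                                 # dx_i/dt = sum_k K(x_i,x_k) p_k
    # dp_i/dt = sum_k (p_i.p_k) (x_i - x_k)/sigma^2 * K(x_i, x_k)
    pp = p @ p.T
    diff = x[:, None, :] - x[None, :, :]
    p_dot = np.sum((pp * G)[:, :, None] * diff, axis=1) / sigma**2
    return x + dt * x_dot, p + dt * p_dot

# Two landmarks in the plane with opposing initial momenta.
x = np.array([[0.0, 0.0], [2.0, 0.0]])
p = np.array([[0.5, 0.0], [-0.5, 0.0]])
print("metric energy d^2 = 2*H(0) =", 2 * hamiltonian(x, p))
for _ in range(100):
    x, p = step(x, p, 0.01)
print("landmarks at t = 1:", x)
```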
The metric distance between coordinate systems connected via a geodesic is determined by the induced distance between the identity and the group element. For landmarks, $x_i$, $i = 1, \dots, n$, the Hamiltonian momentum is supported on the landmark points, with the corresponding Hamiltonian dynamics governing the landmark positions and momenta. The metric between landmarks is
$$d^2 = \sum_i p_0(i) \cdot \sum_j K(x_i, x_j)\, p_0(j).$$
The dynamics associated to these geodesics is shown in the accompanying figure. For surfaces, the Hamiltonian momentum is defined across the surface, with its associated Hamiltonian and dynamics; for volumes there is likewise an associated Hamiltonian with dynamics. Software suites containing a variety of diffeomorphic mapping algorithms include the following:
https://en.wikipedia.org/wiki/Diffeomorphometry
In algebraic geometry, an étale morphism (French: [etal]) is a morphism of schemes that is formally étale and locally of finite presentation. This is an algebraic analogue of the notion of a local isomorphism in the complex analytic topology. They satisfy the hypotheses of the implicit function theorem, but because open sets in the Zariski topology are so large, they are not necessarily local isomorphisms. Despite this, étale maps retain many of the properties of local analytic isomorphisms, and are useful in defining the algebraic fundamental group and the étale topology. The word étale is a French adjective, which means "slack", as in "slack tide", or, figuratively, calm, immobile, something left to settle.[1]

Let $\phi : R \to S$ be a ring homomorphism. This makes $S$ an $R$-algebra. Choose a monic polynomial $f$ in $R[x]$ and a polynomial $g$ in $R[x]$ such that the derivative $f'$ of $f$ is a unit in $(R[x]/fR[x])_g$. We say that $\phi$ is standard étale if $f$ and $g$ can be chosen so that $S$ is isomorphic as an $R$-algebra to $(R[x]/fR[x])_g$ and $\phi$ is the canonical map.

Let $f : X \to Y$ be a morphism of schemes. We say that $f$ is étale if it has any of the following equivalent properties: Assume that $Y$ is locally noetherian and $f$ is locally of finite type. For $x$ in $X$, let $y = f(x)$ and let $\hat{\mathcal{O}}_{Y,y} \to \hat{\mathcal{O}}_{X,x}$ be the induced map on completed local rings. Then the following are equivalent: If in addition all the maps on residue fields $\kappa(y) \to \kappa(x)$ are isomorphisms, or if $\kappa(y)$ is separably closed, then $f$ is étale if and only if for every $x$ in $X$, the induced map on completed local rings is an isomorphism.[7]

Any open immersion is étale because it is locally an isomorphism. Covering spaces form examples of étale morphisms. For example, if $d \geq 1$ is an integer invertible in the ring $R$, then the $d$-th power map on the multiplicative group over $R$ is a degree $d$ étale morphism. Any ramified covering $\pi : X \to Y$ has an unramified locus which is étale. Morphisms induced by finite separable field extensions are étale; they form arithmetic covering spaces with group of deck transformations given by $\mathrm{Gal}(L/K)$. Any ring homomorphism of the form $R \to S = R[x_1, \ldots, x_n]_g/(f_1, \ldots, f_n)$, where all the $f_i$ are polynomials and where the Jacobian determinant $\det(\partial f_i / \partial x_j)$ is a unit in $S$, is étale. For example, the morphism $\mathbb{C}[t, t^{-1}] \to \mathbb{C}[x, t, t^{-1}]/(x^n - t)$ is étale and corresponds to a degree $n$ covering space of $\mathbb{G}_m \in \mathrm{Sch}/\mathbb{C}$ with the group $\mathbb{Z}/n$ of deck transformations.

Expanding upon the previous example, suppose that we have a morphism $f$ of smooth complex algebraic varieties. Since $f$ is given by equations, we can interpret it as a map of complex manifolds.
Whenever the Jacobian of $f$ is nonzero, $f$ is a local isomorphism of complex manifolds by the implicit function theorem. By the previous example, having non-zero Jacobian is the same as being étale.

Let $f : X \to Y$ be a dominant morphism of finite type with $X$, $Y$ locally noetherian, irreducible and $Y$ normal. If $f$ is unramified, then it is étale.[9]

For a field $K$, any $K$-algebra $A$ is necessarily flat. Therefore, $A$ is an étale algebra if and only if it is unramified, which is also equivalent to
$$A \otimes_K \bar{K} \cong \bar{K} \oplus \cdots \oplus \bar{K},$$
where $\bar{K}$ is the separable closure of the field $K$ and the right hand side is a finite direct sum, all of whose summands are $\bar{K}$. This characterization of étale $K$-algebras is a stepping stone in reinterpreting classical Galois theory (see Grothendieck's Galois theory).

Étale morphisms are the algebraic counterpart of local diffeomorphisms. More precisely, a morphism between smooth varieties is étale at a point iff the differential between the corresponding tangent spaces is an isomorphism. This is in turn precisely the condition needed to ensure that a map between manifolds is a local diffeomorphism, i.e. for any point $x \in X$, there is an open neighborhood $U$ of $x$ such that the restriction of $f$ to $U$ is a diffeomorphism. This conclusion does not hold in algebraic geometry, because the topology is too coarse. For example, consider the projection $f$ of the parabola $y = x^2$ to the $y$-axis. This morphism is étale at every point except the origin $(0, 0)$, because the differential is given by $2x$, which does not vanish at these points. However, there is no (Zariski-)local inverse of $f$, simply because the square root is not an algebraic map, not being given by polynomials. However, there is a remedy for this situation, using the étale topology. The precise statement is as follows: if $f : X \to Y$ is étale and finite, then for any point $y$ lying in $Y$, there is an étale morphism $V \to Y$ containing $y$ in its image ($V$ can be thought of as an étale open neighborhood of $y$), such that when we base change $f$ to $V$, then $X \times_Y V \to V$ (the first member would be the pre-image of $V$ by $f$ if $V$ were a Zariski open neighborhood) is a finite disjoint union of open subsets isomorphic to $V$. In other words, étale-locally in $Y$, the morphism $f$ is a topological finite cover.

For a smooth morphism $f : X \to Y$ of relative dimension $n$, étale-locally in $X$ and in $Y$, $f$ is an open immersion into an affine space $\mathbb{A}^n_Y$. This is the étale analogue of the structure theorem on submersions.
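The Jacobian-criterion examples above can be checked symbolically; the short sketch below simply computes the relevant derivatives with SymPy for two of the maps mentioned in the text (the cyclic covering $x^n - t$ with $n = 3$, and the projection of the parabola to the $y$-axis). Whether the resulting polynomial is a unit in the coordinate ring is then read off by hand, as in the text.

```python
import sympy as sp

# Jacobian criterion: a map R -> R[x_1,...,x_n]_g / (f_1,...,f_n) is étale
# where det(d f_i / d x_j) is a unit.  Only the derivatives are computed here;
# unit-ness is judged by hand.

x, t, y = sp.symbols('x t y')

# Example 1: C[t, 1/t] -> C[x, t, 1/t]/(x^3 - t).
f = x**3 - t
print("Jacobian for x^3 - t:", sp.diff(f, x))   # 3*x**2: a unit, since x^3 = t is invertible

# Example 2: projection of the parabola y = x^2 to the y-axis,
# i.e. C[y] -> C[x, y]/(x^2 - y).
g = x**2 - y
print("Jacobian for x^2 - y:", sp.diff(g, x))   # 2*x: vanishes at the origin, so not étale there
```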
https://en.wikipedia.org/wiki/%C3%89tale_morphism
In mathematics and theoretical physics, a large diffeomorphism is an equivalence class of diffeomorphisms under the equivalence relation where diffeomorphisms that can be continuously connected to each other are in the same equivalence class. For example, a two-dimensional real torus has an $SL(2,\mathbb{Z})$ group of large diffeomorphisms by which the 1-cycles $a, b$ of the torus are transformed into their integer linear combinations. This group of large diffeomorphisms is called the modular group.

More generally, for a surface $S$, the structure of self-homeomorphisms up to homotopy is known as the mapping class group. It is known (for compact, orientable $S$) that this is isomorphic with the automorphism group of the fundamental group of $S$. This is consistent with the genus 1 case, stated above, if one takes into account that then the fundamental group is $\mathbb{Z}^2$, on which the modular group acts as automorphisms (as a subgroup of index 2 in all automorphisms, since the orientation may also be reversed, by a transformation with determinant −1).
https://en.wikipedia.org/wiki/Large_diffeomorphism
In mathematics, more specifically differential topology, a local diffeomorphism is intuitively a map between smooth manifolds that preserves the local differentiable structure. The formal definition of a local diffeomorphism is given below.

Let $X$ and $Y$ be differentiable manifolds. A function $f : X \to Y$ is a local diffeomorphism if, for each point $x \in X$, there exists an open set $U$ containing $x$ such that the image $f(U)$ is open in $Y$ and $f|_U : U \to f(U)$ is a diffeomorphism.

A local diffeomorphism is a special case of an immersion $f : X \to Y$. In this case, for each $x \in X$, there exists an open set $U$ containing $x$ such that the image $f(U)$ is an embedded submanifold, and $f|_U : U \to f(U)$ is a diffeomorphism. Here $X$ and $f(U)$ have the same dimension, which may be less than the dimension of $Y$.[1]

A map is a local diffeomorphism if and only if it is a smooth immersion (smooth local embedding) and an open map. The inverse function theorem implies that a smooth map $f : X \to Y$ is a local diffeomorphism if and only if the derivative $Df_x : T_x X \to T_{f(x)} Y$ is a linear isomorphism for all points $x \in X$. This implies that $X$ and $Y$ have the same dimension.[2]

It follows that a map $f : X \to Y$ between two manifolds of equal dimension ($\dim X = \dim Y$) is a local diffeomorphism if and only if it is a smooth immersion (smooth local embedding), or equivalently, if and only if it is a smooth submersion. This is because, for any $x \in X$, both $T_x X$ and $T_{f(x)} Y$ have the same dimension, thus $Df_x$ is a linear isomorphism if and only if it is injective, or equivalently, if and only if it is surjective.[3] Here is an alternative argument for the case of an immersion: every smooth immersion is a locally injective function, while invariance of domain guarantees that any continuous injective function between manifolds of equal dimensions is necessarily an open map.

All manifolds of the same dimension are "locally diffeomorphic," in the following sense: if $X$ and $Y$ have the same dimension, and $x \in X$ and $y \in Y$, then there exist open neighbourhoods $U$ of $x$ and $V$ of $y$ and a diffeomorphism $f : U \to V$. However, this map $f$ need not extend to a smooth map defined on all of $X$, let alone extend to a local diffeomorphism. Thus the existence of a local diffeomorphism $f : X \to Y$ is a stronger condition than being "locally diffeomorphic." Indeed, although locally-defined diffeomorphisms preserve differentiable structure locally, one must be able to "patch up" these (local) diffeomorphisms to ensure that the domain is the entire smooth manifold.
For example, one can impose two different differentiable structures on $\mathbb{R}^4$ that each make $\mathbb{R}^4$ into a differentiable manifold, but both structures are not locally diffeomorphic (see Exotic $\mathbb{R}^4$).[citation needed] As another example, there can be no local diffeomorphism from the 2-sphere to Euclidean 2-space, although they do indeed have the same local differentiable structure. This is because all local diffeomorphisms are continuous, the continuous image of a compact space is compact, and the 2-sphere is compact whereas Euclidean 2-space is not.

If a local diffeomorphism between two manifolds exists then their dimensions must be equal. Every local diffeomorphism is also a local homeomorphism and therefore a locally injective open map. A local diffeomorphism has constant rank $n$.
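As a numerical illustration of the derivative criterion stated above, the following sketch estimates the Jacobian of the map $f(x, y) = (e^x \cos y,\ e^x \sin y)$ by finite differences at a few points. This standard example is a local diffeomorphism of the plane (its Jacobian determinant $e^{2x}$ never vanishes) but not a global one, since it is not injective; the particular map, sample points and step size are choices made for the illustration.

```python
import numpy as np

# f is a local diffeomorphism iff Df_x is a linear isomorphism at every x.
# The map below (essentially the complex exponential) has det Df = exp(2x) > 0
# everywhere, yet it is 2*pi-periodic in y and hence not globally injective.

def f(p):
    x, y = p
    return np.array([np.exp(x) * np.cos(y), np.exp(x) * np.sin(y)])

def jacobian(fun, p, h=1e-6):
    """Central finite-difference Jacobian of fun at p."""
    p = np.asarray(p, dtype=float)
    n = p.size
    J = np.zeros((n, n))
    for j in range(n):
        e = np.zeros(n)
        e[j] = h
        J[:, j] = (fun(p + e) - fun(p - e)) / (2 * h)
    return J

for point in [(0.0, 0.0), (1.0, 2.0), (-3.0, 0.5)]:
    det = np.linalg.det(jacobian(f, point))
    print(point, "det Df =", round(det, 6))   # equals exp(2x), never zero
```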
https://en.wikipedia.org/wiki/Local_diffeomorphism
In physics and mathematics, supermanifolds are generalizations of the manifold concept based on ideas coming from supersymmetry. Several definitions are in use, some of which are described below.

An informal definition is commonly used in physics textbooks and introductory lectures. It defines a supermanifold as a manifold with both bosonic and fermionic coordinates. Locally, it is composed of coordinate charts that make it look like a "flat", "Euclidean" superspace. These local coordinates are often denoted by $(x, \theta, \bar{\theta})$, where $x$ is the (real-number-valued) spacetime coordinate, and $\theta$ and $\bar{\theta}$ are Grassmann-valued spatial "directions".

The physical interpretation of the Grassmann-valued coordinates is the subject of debate; explicit experimental searches for supersymmetry have not yielded any positive results. However, the use of Grassmann variables allows for the tremendous simplification of a number of important mathematical results. This includes, among other things, a compact definition of functional integrals, the proper treatment of ghosts in BRST quantization, the cancellation of infinities in quantum field theory, Witten's work on the Atiyah–Singer index theorem, and more recent applications to mirror symmetry. The use of Grassmann-valued coordinates has spawned the field of supermathematics, wherein large portions of geometry can be generalized to super-equivalents, including much of Riemannian geometry and most of the theory of Lie groups and Lie algebras (such as Lie superalgebras, etc.). However, issues remain, including the proper extension of de Rham cohomology to supermanifolds.

Three different definitions of supermanifolds are in use. One definition is as a sheaf over a ringed space; this is sometimes called the "algebro-geometric approach".[1] This approach has a mathematical elegance, but can be problematic in various calculations and intuitive understanding. A second approach can be called a "concrete approach",[1] as it is capable of simply and naturally generalizing a broad class of concepts from ordinary mathematics. It requires the use of an infinite number of supersymmetric generators in its definition; however, all but a finite number of these generators carry no content, as the concrete approach requires the use of a coarse topology that renders almost all of them equivalent. Surprisingly, these two definitions, one with a finite number of supersymmetric generators and one with an infinite number of generators, are equivalent.[1][2] A third approach describes a supermanifold as a base topos of a superpoint. This approach remains the topic of active research.[3]

Although supermanifolds are special cases of noncommutative manifolds, their local structure makes them better suited to study with the tools of standard differential geometry and locally ringed spaces. A supermanifold M of dimension (p, q) is a topological space M with a sheaf of superalgebras, usually denoted $\mathcal{O}_M$ or $C^\infty(M)$, that is locally isomorphic to $C^\infty(\mathbb{R}^p) \otimes \Lambda^\bullet(\xi_1, \dots, \xi_q)$, where the latter is a Grassmann (exterior) algebra on $q$ generators. A supermanifold M of dimension (1, 1) is sometimes called a super-Riemann surface. Historically, this approach is associated with Felix Berezin, Dimitry Leites, and Bertram Kostant.
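The concrete definition discussed next is phrased in terms of even ("c-number") and odd ("a-number") combinations of anticommuting generators. The following toy Python model of a finitely generated Grassmann algebra is only meant to make those anticommutation rules tangible; actual constructions use countably many generators and a coarse topology, as described below.

```python
from itertools import product

# Toy Grassmann algebra: an element is {sorted tuple of generator indices: coeff}.
# This only illustrates "even elements commute, odd generators anticommute";
# it is not the infinite-dimensional construction used in the text.

def merge_sign(a, b):
    """Sign from sorting the concatenation of two sorted index tuples,
    or None if an index repeats (theta_i * theta_i = 0)."""
    if set(a) & set(b):
        return None
    sign = 1
    for i in a:
        sign *= (-1) ** sum(1 for j in b if j < i)   # count transpositions
    return sign

def gmul(u, v):
    """Multiply two Grassmann-algebra elements."""
    out = {}
    for (a, ca), (b, cb) in product(u.items(), v.items()):
        s = merge_sign(a, b)
        if s is None:
            continue
        key = tuple(sorted(a + b))
        out[key] = out.get(key, 0) + s * ca * cb
    return {k: c for k, c in out.items() if c != 0}

theta1 = {(1,): 1}
theta2 = {(2,): 1}
print(gmul(theta1, theta2))              # {(1, 2): 1}
print(gmul(theta2, theta1))              # {(1, 2): -1}  -> generators anticommute
c1 = gmul(theta1, theta2)                # an even, "c-number"-like element
c2 = {(): 3, (1, 2): 2}                  # another even element
print(gmul(c1, c2) == gmul(c2, c1))      # True -> even elements commute
```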
A different definition describes a supermanifold in a fashion that is similar to that of a smooth manifold, except that the model space $\mathbb{R}^p$ has been replaced by the model superspace $\mathbb{R}_c^p \times \mathbb{R}_a^q$. To correctly define this, it is necessary to explain what $\mathbb{R}_c$ and $\mathbb{R}_a$ are. These are given as the even and odd real subspaces of the one-dimensional space of Grassmann numbers, which, by convention, are generated by a countably infinite number of anti-commuting variables: i.e. the one-dimensional space is given by $\mathbb{C} \otimes \Lambda(V)$, where $V$ is infinite-dimensional. An element $z$ is termed real if $z = z^*$; real elements consisting of only an even number of Grassmann generators form the space $\mathbb{R}_c$ of c-numbers, while real elements consisting of only an odd number of Grassmann generators form the space $\mathbb{R}_a$ of a-numbers. Note that c-numbers commute, while a-numbers anti-commute. The spaces $\mathbb{R}_c^p$ and $\mathbb{R}_a^q$ are then defined as the $p$-fold and $q$-fold Cartesian products of $\mathbb{R}_c$ and $\mathbb{R}_a$.[4]

Just as in the case of an ordinary manifold, the supermanifold is then defined as a collection of charts glued together with differentiable transition functions.[4] This definition in terms of charts requires that the transition functions have a smooth structure and a non-vanishing Jacobian. This can only be accomplished if the individual charts use a topology that is considerably coarser than the vector-space topology on the Grassmann algebra. This topology is obtained by projecting $\mathbb{R}_c^p$ down to $\mathbb{R}^p$ and then using the natural topology on that. The resulting topology is not Hausdorff, but may be termed "projectively Hausdorff".[4]

That this definition is equivalent to the first one is not at all obvious; however, it is the use of the coarse topology that makes it so, by rendering most of the "points" identical. That is, $\mathbb{R}_c^p \times \mathbb{R}_a^q$ with the coarse topology is essentially isomorphic[1][2] to $\mathbb{R}^p \otimes \Lambda^\bullet(\xi_1, \dots, \xi_q)$.

Unlike a regular manifold, a supermanifold is not entirely composed of a set of points. Instead, one takes the dual point of view that the structure of a supermanifold M is contained in its sheaf $\mathcal{O}_M$ of "smooth functions". In the dual point of view, an injective map corresponds to a surjection of sheaves, and a surjective map corresponds to an injection of sheaves. An alternative approach to the dual point of view is to use the functor of points.

If M is a supermanifold of dimension (p, q), then the underlying space M inherits the structure of a differentiable manifold whose sheaf of smooth functions is $\mathcal{O}_M / I$, where $I$ is the ideal generated by all odd functions. Thus M is called the underlying space, or the body, of M. The quotient map $\mathcal{O}_M \to \mathcal{O}_M / I$ corresponds to an injective map M → M; thus M is a submanifold of M.

Batchelor's theorem states that every supermanifold is noncanonically isomorphic to a supermanifold of the form ΠE.
The word "noncanonically" prevents one from concluding that supermanifolds are simply glorified vector bundles; although the functor Π maps surjectively onto the isomorphism classes of supermanifolds, it is not anequivalence of categories. It was published byMarjorie Batchelorin 1979.[5] Theproofof Batchelor's theorem relies in an essential way on the existence of apartition of unity, so it does not hold for complex or real-analytic supermanifolds. In many physical and geometric applications, a supermanifold comes equipped with an Grassmann-oddsymplectic structure. All natural geometric objects on a supermanifold are graded. In particular, the bundle of two-forms is equipped with a grading. An odd symplectic form ω on a supermanifold is a closed, odd form, inducing a non-degenerate pairing onTM. Such a supermanifold is called aP-manifold. Its graded dimension is necessarily (n,n), because the odd symplectic form induces a pairing of odd and even variables. There is a version of the Darboux theorem for P-manifolds, which allows one to equip a P-manifold locally with a set of coordinates where the odd symplectic form ω is written as wherexi{\displaystyle x_{i}}are even coordinates, andξi{\displaystyle \xi _{i}}odd coordinates. (An odd symplectic form should not be confused with a Grassmann-evensymplectic formon a supermanifold. In contrast, the Darboux version of an even symplectic form is wherepi,qi{\displaystyle p_{i},q_{i}}are even coordinates,ξi{\displaystyle \xi _{i}}odd coordinates andεj{\displaystyle \varepsilon _{j}}are either +1 or −1.) Given an odd symplectic 2-form ω one may define aPoisson bracketknown as theantibracketof any two functionsFandGon a supermanifold by Here∂r{\displaystyle \partial _{r}}and∂l{\displaystyle \partial _{l}}are the right and leftderivativesrespectively andzare the coordinates of the supermanifold. Equipped with this bracket, the algebra of functions on a supermanifold becomes anantibracket algebra. Acoordinate transformationthat preserves the antibracket is called aP-transformation. If theBerezinianof a P-transformation is equal to one then it is called anSP-transformation. Using theDarboux theoremfor odd symplectic forms one can show that P-manifolds are constructed from open sets of superspacesRn|n{\displaystyle {\mathcal {R}}^{n|n}}glued together by P-transformations. A manifold is said to be anSP-manifoldif these transition functions can be chosen to be SP-transformations. Equivalently one may define an SP-manifold as a supermanifold with a nondegenerate odd 2-form ω and adensity functionρ such that on eachcoordinate patchthere existDarboux coordinatesin which ρ is identically equal to one. One may define aLaplacian operatorΔ on an SP-manifold as the operator which takes a functionHto one half of thedivergenceof the correspondingHamiltonian vector field. Explicitly one defines In Darboux coordinates this definition reduces to wherexaandθaare even and odd coordinates such that The Laplacian is odd and nilpotent One may define thecohomologyof functionsHwith respect to the Laplacian. InGeometry of Batalin-Vilkovisky quantization,Albert Schwarzhas proven that the integral of a functionHover aLagrangian submanifoldLdepends only on the cohomology class ofHand on thehomologyclass of the body ofLin the body of the ambient supermanifold. A pre-SUSY-structure on a supermanifold of dimension (n,m) is an oddm-dimensional distributionP⊂TM{\displaystyle P\subset TM}. 
With such a distribution one associates its Frobenius tensor $S^2 P \to TM/P$ (since $P$ is odd, the skew-symmetric Frobenius tensor is a symmetric operation). If this tensor is non-degenerate, e.g. lies in an open orbit of $GL(P) \times GL(TM/P)$, then M is called a SUSY-manifold. A SUSY-structure in dimension (1, k) is the same as an odd contact structure.
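For reference, the Darboux-form odd symplectic structure and the associated antibracket discussed above are commonly written, in one frequently used sign convention, as
$$\omega \;=\; \sum_{i=1}^{n} dx_i \wedge d\xi_i, \qquad (F,G) \;=\; \sum_{i=1}^{n}\left( \frac{\partial_r F}{\partial x_i}\,\frac{\partial_l G}{\partial \xi_i} \;-\; \frac{\partial_r F}{\partial \xi_i}\,\frac{\partial_l G}{\partial x_i} \right),$$
where $x_i$ are the even and $\xi_i$ the odd Darboux coordinates. The overall signs and the ordering of left and right derivatives vary between sources, so these expressions should be read as an illustrative convention rather than as a fixed normalization.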
https://en.wikipedia.org/wiki/Supermanifold
End-to-end auditable or end-to-end voter verifiable (E2E) systems are voting systems with stringent integrity properties and strong tamper resistance. E2E systems use cryptographic techniques to provide voters with receipts that allow them to verify their votes were counted as cast, without revealing which candidates a voter supported to an external party. As such, these systems are sometimes called receipt-based systems.[1]

Electronic voting systems arrive at their final vote totals by a series of steps: Classical approaches to election integrity focus on ensuring the security of each step individually, going from voter intent to the final total. Such approaches have generally fallen out of favor with distributed system designers, as this local focus may miss some vulnerabilities while over-protecting others. The alternative is to use end-to-end measures that are designed to demonstrate the integrity of the entire chain.[2]

Comprehensive coverage of election integrity frequently involves multiple stages. Voters are expected to verify that they have marked their ballots as intended, recounts or audits are used to protect the step from marked ballots to ballot-box totals, and publication of all subtotals allows public verification that the overall totals correctly sum the ballot-box totals.[3] Conventional voting schemes do not meet this standard, and as a result cannot conclusively prove that no votes have been tampered with at any point; voters and auditors must instead verify that each individual step is fully secure, which may be difficult and introduces many points of failure.[4]

While measures such as voter verified paper audit trails and manual recounts measure the effectiveness of some steps, they offer only weak measurement of the integrity of the physical or electronic ballot boxes. Ballots could be removed, replaced, or could have marks added to them without detection (i.e. to fill in undervoted contests with votes for a desired candidate or to overvote and spoil votes for undesired candidates). This shortcoming motivated the development of the end-to-end auditable voting systems discussed here, sometimes referred to as E2E voting systems. These attempt to cover the entire path from voter intent to election totals with just two measures: Because of the importance of the right to a secret ballot, most E2E voting schemes also attempt to meet a third requirement called receipt-freeness: It was originally believed that combining both properties would be impossible.[5] However, further research has since shown these properties can co-exist.[6] Both are combined in the 2005 Voluntary Voting System Guidelines promulgated by the Election Assistance Commission.[7] This definition is also predominant in the academic literature.[8][9][10][11]

To address ballot stuffing, the following measure can be adopted: Alternatively, assertions regarding ballot stuffing can be externally verified by comparing the number of ballots on hand with the number of registered voters recorded as having voted, and by auditing other aspects of the registration and ballot delivery system.
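As a deliberately stripped-down sketch of the "counted as cast" check described above, the fragment below publishes a salted hash of each ballot as the voter's receipt and lets a voter test that their receipt appears on the public list. Real E2E systems use verifiable encryption, mix-nets or homomorphic tallying so that the count itself can be audited and receipt-freeness preserved; nothing of that is modelled here, and all names and data in the sketch are invented for illustration.

```python
import hashlib, secrets

# Toy "is my ballot on the public record?" check.  A salted hash stands in for
# a real verifiable encryption; it provides none of the auditability or
# receipt-freeness guarantees of the systems described in the text.

def cast(choice: str):
    nonce = secrets.token_hex(16)
    receipt = hashlib.sha256((nonce + choice).encode()).hexdigest()
    # The election publishes only the receipt; nonce and choice stay private.
    return receipt, nonce

bulletin_board = []        # published list of receipts
ballot_box = []            # private store used for tallying

for voter_choice in ["alice", "bob", "alice"]:
    receipt, nonce = cast(voter_choice)
    bulletin_board.append(receipt)
    ballot_box.append((receipt, nonce, voter_choice))

# Individual verifiability (toy version): a voter checks their receipt appears.
my_receipt = bulletin_board[0]
print("receipt published:", my_receipt in bulletin_board)
```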
Support for E2E auditability, based on prior experience using it with in-person elections, is also seen as a requirement for remote voting over the Internet by many experts.[12]

In 2004, David Chaum proposed a solution that allows each voter to verify that their votes are cast appropriately and that the votes are accurately tallied, using visual cryptography.[13] After the voter selects their candidates, a voting machine prints out a specially formatted version of the ballot on two transparencies. When the layers are stacked, they show the human-readable vote. However, each transparency is encrypted with a form of visual cryptography so that it alone does not reveal any information unless it is decrypted. The voter selects one layer to destroy at the poll. The voting machine retains an electronic copy of the other layer and gives the physical copy as a receipt to allow the voter to confirm that the electronic ballot was not later changed. The system detects changes to the voter's ballot and uses a mix-net decryption[14] procedure to check that each vote is accurately counted. Sastry, Karloff and Wagner pointed out that there are issues with both of the Chaum and VoteHere cryptographic solutions.[15] Chaum's team subsequently developed Punchscan, which has stronger security properties and uses simpler paper ballots.[16] The paper ballots are voted on and then a privacy-preserving portion of the ballot is scanned by an optical scanner.

The Prêt à Voter system, invented by Peter Ryan, uses a shuffled candidate order and a traditional mix network. As in Punchscan, the votes are made on paper ballots and a portion of the ballot is scanned. The Scratch and Vote system, invented by Ben Adida, uses a scratch-off surface to hide cryptographic information that can be used to verify the correct printing of the ballot.[17] The ThreeBallot voting protocol, invented by Ron Rivest, was designed to provide some of the benefits of a cryptographic voting system without using cryptography. It can in principle be implemented on paper, although the presented version requires an electronic verifier. The Scantegrity and Scantegrity II systems provide E2E properties. Rather than replacing the entire voting system, as is the case in all the preceding examples, they work as an add-on for existing optical scan voting systems, producing conventional voter-verifiable paper ballots suitable for risk-limiting audits. Scantegrity II employs invisible ink and was developed by a team that included Chaum, Rivest, and Ryan.

The STAR-Vote system[18] was designed for Travis County, the fifth most populous county in Texas and home of the state capital, Austin.[19] It illustrated another way to combine an E2E system with conventionally auditable paper ballots, produced in this case by a ballot marking device.[20] The project produced a detailed spec and request for proposals in 2016, and bids were received for all the components, but no existing contractor with an EAC-certified voting system was willing to adapt their system to work with the novel cryptographic open-source components, as required by the RFP.[21][22]

Building on the STAR-Vote experience, Josh Benaloh at Microsoft led the design and development of ElectionGuard, a software development kit that can be combined with existing voting systems to add E2E support. The voting system interprets the voter's choices, stores them for further processing, then calls ElectionGuard, which encrypts these interpretations and prints a receipt for the voter. The receipt has a number which corresponds to the encrypted interpretation.
The voter can then disavow the ballot (spoil it), and vote again. Later, independent sources, such as political parties, can obtain the file of numbered encrypted ballots and sum the different contests on the encrypted file to see if they match the election totals. The voter can ask those independent sources whether the number(s) on the voter's receipt(s) appear in the file. If enough voters check that their numbers are in the file, they will find out if ballots are omitted. Voters can get the decrypted contents of their spoiled ballots, to determine if they accurately match what the voter remembers was on those ballots. The voter cannot get decrypted copies of voted ballots, which prevents vote selling. If enough voters check spoiled ballots, mistakes in encryption will be revealed.[23][24] ElectionGuard does not detect ballot stuffing, which must be detected by traditional records. It does not detect people who falsify receipts, claiming their ballot is missing or was interpreted in error. Election officials will need to decide how to track claimed errors, how many are needed to start an investigation, how to investigate, and how to recover from errors. State law may give staff no authority to take action.[24] ElectionGuard does not tally write-ins, except as an undifferentiated total. It is incompatible with overvotes.[23][24][25]

The city of Takoma Park, Maryland used Scantegrity II for its 2009 and 2011 city elections.[26][27] Helios has been used since 2009 by several organizations and universities for general elections, board elections, and student council elections.[28][29] Wombat Voting was used in student council elections at the private research college Interdisciplinary Center Herzliya in 2011 and 2012,[30][31] as well as in the primary elections for the Israeli political party Meretz in 2012.[32] A modified version of Prêt à Voter was used as part of the vVote poll-site electronic voting system at the 2014 Victorian State Election in Australia.[33] ElectionGuard was combined with a voting system from VotingWorks and used for the Fulton, Wisconsin spring primary election on February 18, 2020.[34] A touch-screen based DRE-ip implementation was trialed in a polling station in Gateshead on 2 May 2019 as part of the 2019 United Kingdom local elections.[35][36] A browser-based DRE-ip implementation was used in an online voting trial in October 2022 among the residents of New Town, Kolkata, India during the 2022 Durga Puja festival celebration.[37]
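The spoil-and-check step described above can be illustrated with a toy commitment scheme: the device commits to its interpretation of the ballot, the voter keeps the committed value as a receipt, and spoiling forces the device to open the commitment so the voter can verify it. ElectionGuard itself uses homomorphic public-key encryption rather than the hash commitment used here; every name and value below is an assumption made for the sketch.

```python
import hashlib, secrets

# Toy "cast or spoil" check: the device commits to its interpretation of the
# ballot; spoiling forces it to open the commitment.  Real systems use
# public-key encryption with verifiable proofs; a salted hash commitment is
# used here only to keep the idea visible.

def device_record(choice: str):
    nonce = secrets.token_hex(16)
    commitment = hashlib.sha256((nonce + "|" + choice).encode()).hexdigest()
    return {"choice": choice, "nonce": nonce, "commitment": commitment}

def voter_checks_spoiled(record, receipt, claimed_choice):
    """After spoiling, the device reveals (nonce, choice); the voter re-hashes."""
    reopened = hashlib.sha256(
        (record["nonce"] + "|" + record["choice"]).encode()).hexdigest()
    return reopened == receipt and record["choice"] == claimed_choice

record = device_record("alice")        # device interprets and commits
receipt = record["commitment"]         # printed for the voter
# Voter spoils the ballot and verifies the device really encoded "alice".
print(voter_checks_spoiled(record, receipt, "alice"))   # True
```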
https://en.wikipedia.org/wiki/End-to-end_auditable_voting_systems
Electronic voting is voting that uses electronic means to either aid or handle casting and counting ballots, including voting time. Depending on the particular implementation, e-voting may use standalone electronic voting machines (also called EVM) or computers connected to the Internet (online voting). It may encompass a range of Internet services, from basic transmission of tabulated results to full-function online voting through common connectable household devices. The degree of automation may be limited to marking a paper ballot, or may be a comprehensive system of vote input, vote recording, data encryption and transmission to servers, and consolidation and tabulation of election results.[citation needed]

A worthy e-voting system must perform most of these tasks while complying with a set of standards established by regulatory bodies, and must also be capable of dealing successfully with strong requirements associated with security, accuracy, speed, privacy, auditability, accessibility, data integrity, cost-effectiveness, scalability, anonymity, trustworthiness, and sustainability.[1][2] Electronic voting technology can include punched cards, optical scan voting systems and specialized voting kiosks (including self-contained direct-recording electronic voting systems, or DRE). It can also involve transmission of ballots and votes via telephones, private computer networks, or the Internet.

The functions of electronic voting depend primarily on what the organizers intend to achieve. In general, two main types of e-voting can be identified: Many countries have used electronic voting for at least some elections, including Argentina, Australia, Bangladesh, Belgium, Brazil, Canada, France, Germany, India, Italy, Japan, Kazakhstan, South Korea, Malaysia, the Netherlands, Norway, the Philippines, Spain, Switzerland, Thailand, the United Kingdom and the United States. As of 2023, Brazil is the only country in which all elections are conducted through electronic voting.[8]

Electronic voting technology intends to speed the counting of ballots, reduce the cost of paying staff to count votes manually and can provide improved accessibility for disabled voters. Also in the long term, expenses are expected to decrease.[9] Results can be reported and published faster.[10] Voters save time and cost by being able to vote independently of their location. This may increase overall voter turnout. The citizen groups benefiting most from electronic elections are the ones living abroad, citizens living in rural areas far away from polling stations and the disabled with mobility impairments.[11][9]

In a 2004 article for OpenDemocracy, security analyst Bruce Schneier claimed that computer security experts at the time were "unanimous on what to do" about concerns regarding electronic voting. "DRE machines must have voter-verifiable paper audit trails," he said, and "software used on DRE machines must be open to public scrutiny"[12] to ensure the accuracy of the voting system. Verifiable ballots are necessary because computers can and do malfunction and because voting machines can be compromised. Concerns regarding security lapses in aging voting machines came to a head shortly before and during the 2016 United States presidential election.[13][14][15] Cases were reported at the time of machines making unpredictable, inconsistent errors.
The expert consensus centered on three primary solutions: the openness of a system to public examination from outside experts, the creation of an authenticablepaper recordof votes cast, and a chain of custody for records.[16][17] Several major reforms took place after the 2016 U.S. election, including the widespread adoption of voting machines that producevoter-verified paper audit trails(VVPATs). These paper records allow election officials to conduct audits and recounts, significantly enhancing transparency and security. Congress provided $380 million in funding through theConsolidated Appropriations Act of 2018under the framework of theHelp America Vote Actto replace old machines with more secure models with modern cybersecurity protections. By 2020, 93% of U.S. votes had a paper record, and only 0.5 percent of jurisdictions reported using electronic voting machines without paper audit trails.[18]This reduced the risk of undetected cyber interference or machine malfunction by enabling verification through physical ballots. In collaboration with theU.S. Department of Homeland Securityand other organizations, election officials also took steps to harden voting systems against potential cyberattacks. This included training election officials, sharing threat intelligence, and establishing secure systems for vote transmission and counting.[19] In addition to concerns aboutelectoral fraudand auditability, electronic voting has been criticized as unnecessary and expensive to introduce. While countries like India continue to use electronic voting, several countries have cancelled e-voting systems or decided against a large-scale rollout, notably theNetherlands, Ireland, Germany and the United Kingdom due to issues in reliability or transparency of EVMs.[20][21] Moreover, people without internet or the skills to use it are excluded from the service. The so-called digital divide describes the gap between those who have access to the internet and those who do not. Depending on the country or even regions in a country the gap differs. This concern is expected to become less important in future since the number of internet users tends to increase.[22] Expenses for the installation of an electronic voting system are high. For some governments they may be too high so that they do not invest. This aspect is even more important if it is not sure whether electronic voting is a long-term solution.[9] During the 2021 NSW Local Government Elections the online voting system "iVote" had technical issues that caused some access problems for some voters. Analysis done of these failures indicated a significant chance of the outages having impacted on the electoral results for the final positions. In the Kempsey ward, where the margin between the last elected and first non-elected candidates was only 69 votes, the electoral commission determined that the outage caused a 60% chance that the wrong final candidate was elected. Singleton had a 40% chance of having elected the wrong councillor, Shellharbour was a 7% chance and two other races were impacted by a sub-1% chance of having elected the wrong candidate. The NSW Supreme Court ordered the elections in Kempsey, Singleton and Shellharbour Ward A to be re-run. In the 2022 Kempsey re-vote the highest placed non-elected candidate from 2021, Dean Saul, was instead one of the first councillors elected.[23]This failure caused the NSW Government to suspend the iVote system from use in the2023 New South Wales state election. 
Electronic voting systems for electorates have been in use since the 1960s whenpunched cardsystems debuted. Their first widespread use was in the US where 7 counties switched to this method for the 1964 presidential election.[24]The neweroptical scan voting systemsallow a computer to count a voter's mark on a ballot.DRE voting machineswhich collect and tabulate votes in a single machine, are used by all voters in all elections in Brazil andIndia, and also on a large scale inVenezuelaand the United States. They have been used on a large scale in theNetherlandsbut have been decommissioned after public concerns.[25]In Brazil, the use of DRE voting machines has been associated with a decrease in error-ridden and uncounted votes, promoting a larger enfranchisement of mainly less educated people in the electoral process, shifting government spending toward public healthcare, particularly beneficial to the poor.[26] Paper-based voting systemsoriginated as a system where votes are cast andcounted by hand, using paper ballots. With the advent ofelectronic tabulationcame systems where paper cards or sheets could be marked by hand, but counted electronically. These systems includedpunched card voting,marksenseand laterdigital pen voting systems.[27] These systems can include aballot marking deviceor electronic ballot marker that allows voters to make their selections using anelectronic input device, usually atouch screensystem similar to a DRE. Systems including a ballot marking device can incorporate different forms ofassistive technology. In 2004, Open Voting Consortium demonstrated the 'Dechert Design', aGeneral Public Licenseopen sourcepaper ballot printing system with open sourcebar codeson each ballot.[28] A direct-recording electronic (DRE)voting machinerecords votes by means of aballotdisplay provided with mechanical or electro-optical components that can be activated by the voter (typically buttons or atouchscreen); that processes data with computer software; and that records voting data and ballot images inmemory components. After the election it produces a tabulation of the voting data stored in a removable memory component and as a printed copy. The system may also provide a means for transmitting individual ballots or vote totals to a central location for consolidating and reporting results from precincts at the central location. These systems use a precinct count method that tabulates ballots at the polling place. They typically tabulate ballots as they are cast and print the results after the close of polling.[29] In 2002, in the United States, theHelp America Vote Actmandated that one handicapped accessible voting system be provided per polling place, which most jurisdictions have chosen to satisfy with the use of DRE voting machines, some switching entirely over to DRE. In 2004, 28.9% of the registered voters in the United States used some type of direct recording electronic voting system,[30]up from 7.7% in 1996.[31] In 2004, India adoptedElectronic Voting Machines(EVM) for its elections to its parliament with 380 million voters casting their ballots using more than one million voting machines.[32]The Indian EVMs are designed and developed by two government-owned defence equipment manufacturing units,Bharat Electronics Limited(BEL) andElectronics Corporation of India Limited(ECIL). Both systems are identical, and are developed to the specifications ofElection Commission of India. The system is a set of two devices running on 7.5 volt batteries. 
One device, the voting Unit is used by the voter, and another device called the control unit is operated by the electoral officer. Both units are connected by a five-metre cable. The voting unit has a blue button for each candidate. The unit can hold 16 candidates, but up to four units can be chained, to accommodate 64 candidates. The control unit has three buttons on the surface – one button to release a single vote, one button to see the total number of votes cast till now, and one button to close the election process. The result button is hidden and sealed. It cannot be pressed unless the close button has already been pressed. A controversy was raised when the voting machine malfunctioned which was shown in Delhi assembly.[33]On 9 April 2019, the Supreme Court ordered the ECI to increasevoter-verified paper audit trail(VVPAT) slips vote count to five randomly selected EVMs per assembly constituency, which means ECI has to count VVPAT slips of 20,625 EVMs before it certifies the final election results.[34][35][36] A public network DRE voting system is an election system that uses electronic ballots and transmits vote data from the polling place to another location over a public network.[37]Vote data may be transmitted as individual ballots as they are cast, periodically as batches of ballots throughout the election day, or as one batch at the close of voting. Public network DRE voting system can utilize either precinct count or central count method. The central count method tabulates ballots from multiple precincts at a central location. Internet voting systems have gained popularity and have been used for government and membership organization elections and referendums inEstonia, and Switzerland[38]as well as municipal elections in Canada and party primary elections in the United States and France.[39][failed verification][citation needed]Internet voting has also been widely used in sub-nationalparticipatory budgetingprocesses, including in Brazil, France, United States, Portugal and Spain.[40][41][42][43][44][45] Security experts have found security problems in every attempt at online voting,[46][47][48][49]including systems in Australia,[50][51]Estonia,[52][53]Switzerland,[54][55]Russia,[56][57][58]and the United States.[59][46] It has been argued political parties that have more support from less-wealthy voters—who tend to have less access to and familiarity with the Internet—may suffer in the elections due to e-voting, which tends to increase participation among wealthier voters.[citation needed]It is unsure as to whether narrowing thedigital dividewould promote equal voting opportunities for people across various social, economic, and ethnic backgrounds.[60] The effects of internet voting on overall voter turnout are unclear. 
A 2017 study of online voting in two Swiss cantons found that it had no effect on turnout,[61]and a 2009 study of Estonia's national election found similar results.[62]To the contrary, however, the introduction of online voting in municipal elections in the Canadian province ofOntarioresulted in an average increase in turnout of around 3.5 percentage points.[63]Similarly, a further study of the Swiss case found that while online voting did not increase overall turnout, it did induce some occasional voters to participate who would have abstained were online voting not an option.[64] A paper on “remote electronic voting and turnout in the Estonian 2007 parliamentary elections” showed that rather than eliminating inequalities, e-voting might have enhanced thedigital dividebetween higher and lower socioeconomic classes. People who lived greater distances from polling areas voted at higher levels with this service now available. The 2007 Estonian elections yielded a higher voter turnout from those who lived in higher income regions and who received formal education.[60]Still regarding the Estonian Internet voting system, it was proved to be more cost-efficient than the rest of the voting systems offered in 2017 local elections.[65][66] Electronic voting is perceived to be favored moreover by a certain demographic, namely the younger generation such as Generation X and Y voters. However, in recent elections about a quarter of e-votes were cast by the older demographic, such as individuals over the age of 55. Including this, about 20% of e-votes came from voters between the ages of 45 and 54. This goes to show that e-voting is not supported exclusively by the younger generations, but finding some popularity amongst Gen X and Baby Boomers as well.[67]In terms of electoral results as well, the expectation that online voting would favor younger candidates has not been borne out in the data, with mayors in Ontario, Canada who were elected in online elections actually being slightly older on average than those elected by pencil and paper.[68] Online voting is widely used privately for shareholder votes,[69][70]and other private organizations.[71][72]The election management companies do not promise accuracy or privacy.[73][74][75]In fact one company uses an individual's past votes for research,[76]and to target ads.[77] Corporations and organizations routinely use Internet voting to elect officers and board members and for other proxy elections. Internet voting systems have been used privately in many modern nations and publicly in the United States, the UK, Switzerland and Estonia. In Switzerland, where it is already an established part of local referendums, voters get their passwords to access the ballot through the postal service. Most voters in Estonia can cast their vote in local and parliamentary elections, if they want to, via the Internet, as most of those on the electoral roll have access to an e-voting system, the largest run by anyEuropean Unioncountry. It has been made possible because most Estonians carry a national identity card equipped with a computer-readable microchip and it is these cards which they use to get access to the online ballot. All a voter needs is a computer, an electronic card reader, their ID card and its PIN, and they can vote from anywhere in the world.Estonian e-votescan only be cast during the days ofadvance voting. On election day itself people have to go to polling stations and fill in a paper ballot. 
One of the biggest weaknesses of online voting is the difficulty of dealing with fake identities, especially when voting is implemented using software without the cooperation of some kind of government agency.[78]These attacks use sybils—fake or duplicate identities—to influence community decisions. Since a single vote has the potential to tilt a group decision, prevention of sybil attacks is an important priority in ensuring the security of voting.[79]Sybil attacks are a common issue with implementations on open, peer-to-peer networks, as the system must have a way to prevent fake identities to prevent gaming of the vote.[80] Some future possible avenues of inquiries include to investigate more intersectionalproof of personhoodsystems that aren't directly blockchain-based.[81]For example, extending theweb of trustby having a protocol that verifies proof of identities using social interactions would allow a community of users to assign corresponding levels of trusts to different candidates in relation with others. However, this would require a fully decentralized system.[81]This web-of-trust protocol could even expand to allowing candidates to provide proof of personhood by physical attendance, which could lead to trusted clusters that grow into communities.[82] There are also hybrid systems that include an electronic ballot marking device (usually a touch screen system similar to a DRE) or otherassistive technologyto print avoter verified paper audit trail, then use a separate machine for electronic tabulation. Hybrid voting often includes both e-voting and mail-in paper ballots.[83] Internet voting can use remote locations (voting from any Internet capable computer) or can use traditional polling locations with voting booths consisting of Internet connected voting systems. Electronic voting systems may offer advantages compared to other voting techniques. An electronic voting system can be involved in any one of a number of steps in the setup, distributing, voting, collecting, and counting of ballots, and thus may or may not introduce advantages into any of these steps. Potential disadvantages exist as well including the potential for flaws or weakness in any electronic component. Charles Stewart of theMassachusetts Institute of Technologyestimates that 1 million more ballots were counted in the 2004 US presidential election than in 2000 because electronic voting machines detected votes that paper-based machines would have missed.[84] In May 2004 the U.S.Government Accountability Officereleased a report titled "Electronic Voting Offers Opportunities and Presents Challenges",[85]analyzing both the benefits and concerns created by electronic voting. A second report was released in September 2005 detailing some of the concerns with electronic voting, and ongoing improvements, titled "Federal Efforts to Improve Security and Reliability of Electronic Voting Systems Are Under Way, but Key Activities Need to Be Completed".[86] Electronic voting systems may useelectronic ballotto store votes incomputer memory. Systems which use them exclusively are called DRE voting systems. When electronic ballots are used there is no risk of exhausting the supply of ballots. 
Additionally, these electronic ballots remove the need for printing of paper ballots, a significant cost.[87]When administering elections in which ballots are offered in multiple languages (which in some areas of the United States is required by theNational Voting Rights Act of 1965), electronic ballots can be programmed to provide ballots in multiple languages for a single machine. The advantage with respect to ballots in different languages appears to be unique to electronic voting. For example,King County, Washington's demographics require them under U.S. federal election law to provide ballot access in Chinese. With any type of paper ballot, the county has to decide how many Chinese-language ballots to print, how many to make available at each polling place, etc. Any strategy that can assure that Chinese-language ballots will be available at all polling places is certain, at the very least, to result in a significant number of wasted ballots.[citation needed](The situation with lever machines would be even worse than with paper: the only apparent way to reliably meet the need would be to set up a Chinese-language lever machine at each polling place, few of which would be used at all.) Critics argue[who?]that the need for extra ballots in any language can be mitigated by providing a process to print ballots at voting locations. They argue further that the cost of software validation, compiler trust validation, installation validation, delivery validation and validation of other steps related to electronic voting is complex and expensive, and thus electronic ballots are not guaranteed to be less costly than printed ballots.[citation needed] Electronic voting machines can be made fully accessible for persons with disabilities. Punched card and optical scan machines are not fully accessible for the blind or visually impaired, and lever machines can be difficult for voters with limited mobility and strength.[88]Electronic machines can use headphones,sip and puff, foot pedals, joy sticks and otheradaptive technologyto provide the necessaryaccessibility. Organizations such as theVerified Voting Foundationhave criticized the accessibility of electronic voting machines[89]and advocate alternatives. Some disabled voters (including the visually impaired) could use atactile ballot, a ballot system using physical markers to indicate where a mark should be made, to vote a secret paper ballot. These ballots can be designed identically to those used by other voters.[90]However, other disabled voters (including voters with dexterity disabilities) could be unable to use these ballots. The concept of election verifiability through cryptographic solutions has emerged in the academic literature to introduce transparency and trust in electronic voting systems.[91][92]It allows voters and election observers to verify that votes have been recorded, tallied and declared correctly, in a manner independent from the hardware and software running the election. Three aspects of verifiability are considered:[93]individual, universal, and eligibility. Individual verifiability allows a voter to check that her own vote is included in the election outcome, universal verifiability allows voters or election observers to check that the election outcome corresponds to the votes cast, and eligibility verifiability allows voters and observers to check that each vote in the election outcome was cast by a uniquely registered voter.
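As an illustration of individual verifiability only (not of any specific deployed scheme), an election system could publish a salted hash commitment of each recorded ballot on a public bulletin board; a voter who keeps their own ballot and salt can then confirm that their commitment appears among the published values. The sketch below is a toy model under those assumptions; real end-to-end verifiable systems use encryption and mixnets or homomorphic tallying, and also cover universal and eligibility verifiability.

```python
# Toy sketch of individual verifiability via salted hash commitments.
# Only the "my vote is included" check is modeled; all names are illustrative.
import hashlib
import secrets

def commit(ballot: str, salt: bytes) -> str:
    """Salted SHA-256 commitment to a ballot."""
    return hashlib.sha256(salt + ballot.encode()).hexdigest()

# Voting device: record the ballot and hand the voter a receipt (salt + commitment).
salt = secrets.token_bytes(16)
receipt = {"salt": salt, "commitment": commit("candidate-42", salt)}

# Election authority: publish all commitments (the public "bulletin board").
bulletin_board = {
    receipt["commitment"],
    commit("candidate-7", secrets.token_bytes(16)),  # another voter's commitment
}

# Voter (individual verifiability): recompute the commitment from the kept
# ballot and salt, and check that it appears among the published values.
assert commit("candidate-42", receipt["salt"]) in bulletin_board
print("my ballot's commitment appears on the bulletin board")
```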
Electronic voting machines are able to provide immediate feedback to the voter, detecting such possible problems asundervotingandovervoting, which may result in aspoiled ballot. This immediate feedback can be helpful in successfully determiningvoter intent. It has been alleged by groups such as the UK-basedOpen Rights Group[94][95]that a lack of testing, inadequate audit procedures, and insufficient attention given to system or process design with electronic voting leave "elections open to error andfraud". In 2009, theFederal Constitutional Court of Germanyfound that when using voting machines the "verification of the result must be possible by the citizen reliably and without any specialist knowledge of the subject." TheDRENedap-computers used until then did not fulfill that requirement. The decision did not ban electronic voting as such, but requires all essential steps in elections to be subject to public examinability.[96][97] In 2013, theCalifornia Association of Voting Officialswas formed to maintain efforts toward publicly owned, General Public License open source voting systems. In 2013, researchers from Europe proposed that electronic voting systems should be coercion evident.[98]There should be public evidence of the amount of coercion that took place in a particular election. An internet voting system called "Caveat Coercitor"[99]shows how coercion evidence in voting systems can be achieved.[98] A fundamental challenge with anyvoting machineis to produce evidence that the votes were recorded as cast and tabulated as recorded. Election results produced by voting systems that rely on voter-marked paper ballots can be verified with manual hand counts (either valid sampling or full recounts).Paperlessballot voting systems must support auditability in different ways. An independently auditable system, sometimes called an Independent Verification, can be used in recounts or audits. These systems can include the ability for voters to verify how their votes were cast or enable officials to verify that votes were tabulated correctly. A discussion draft argued by researchers at theNational Institute of Standards and Technology(NIST) states, "Simply put, the DRE architecture's inability to provide for independent audits of its electronic records makes it a poor choice for an environment in which detecting errors and fraud is important."[100]The report does not represent the official position of NIST, and misinterpretations of the report have led NIST to explain that "Some statements in the report have been misinterpreted. The draft report includes statements from election officials, voting system vendors, computer scientists and other experts in the field about what is potentially possible in terms of attacks on DREs. However, these statements are not report conclusions."[101] Various technologies can be used to assure DRE voters that their votes were cast correctly, and allow officials to detect possible fraud or malfunction, and to provide a means to audit the tabulated results. Some systems include technologies such as cryptography (visual or mathematical), paper (kept by the voter or verified and left with election officials), audio verification, and dual recording or witness systems (other than with paper). Dr.Rebecca Mercuri, the creator of theVoter Verified Paper Audit Trail(VVPAT) concept (as described in her Ph.D.
dissertation in October 2000 on the basic voter verifiable ballot system), proposes to answer the auditability question by having the voting machine print a paper ballot or other paper facsimile that can be visually verified by the voter before being entered into a secure location. Subsequently, this is sometimes referred to as the "Mercuri method." To be trulyvoter-verified, the record itself must be verified by the voter, and this verification must be possible without assistance, such as visually or audibly. If the voter must use a bar-code scanner or other electronic device to verify, then the record is not truly voter-verifiable, since it is actually the electronic device that is verifying the record for the voter. VVPAT is the form of Independent Verification most commonly found inelections in the United Statesand other countries such as Venezuela.[102] End-to-end auditable voting systemscan provide the voter with a receipt that can be taken home. This receipt does not allow voters to prove to others how they voted, but it does allow them to verify that the system detected their vote correctly. End-to-end (E2E) systems includePunchscan,ThreeBallotandPrêt à Voter.Scantegrityis an add-on that extends current optical scan voting systems with an E2E layer. The city ofTakoma Park, MarylandusedScantegrity IIfor its November 2009 election.[103][104] Systems that allow the voter to prove how they voted are never used in U.S. public elections, and are outlawed by most state constitutions. The primary concerns with this solution arevoter intimidationandvote selling. An audit system can be used in measured random recounts to detect possible malfunction or fraud. With the VVPAT method, the paper ballot is often treated as the official ballot of record. In this scenario, the ballot is primary and the electronic records are used only for an initial count. In any subsequent recounts or challenges, the paper, not the electronic ballot, would be used for tabulation. Whenever a paper record serves as the legal ballot, that system will be subject to the same benefits and concerns as any paper ballot system. To successfully audit any voting machine, a strictchain of custodyis required. The solution was first demonstrated (New York City, March 2001) and used (Sacramento, California 2002) by AVANTE International Technology, Inc. In 2004 Nevada was the first state to successfully implement a DRE voting system that printed a paper record of each electronic vote. The $9.3 million voting system provided bySequoia Voting Systemsincluded more than 2,600AVC EDGEtouchscreen DREs equipped with theVeriVote VVPATcomponent.[105]The new systems, implemented under the direction of then Secretary of StateDean Heller, largely replaced punched card voting systems and were chosen after feedback was solicited from the community through town hall meetings and input solicited from theNevada Gaming Control Board.[106] Inadequately secured hardware can be subject tophysical tampering.
Some critics, such as the group "Wij vertrouwen stemcomputers niet" ("We do not trust voting machines"), charge that, for instance, foreign hardware could be inserted into the machine, or between the user and the central mechanism of the machine itself, using aman in the middle attacktechnique, and thus even sealing DRE machines may not be sufficient protection.[107]This claim is countered by the position that review and testing procedures can detect fraudulent code or hardware, if such things are present, and that a thorough, verifiablechain of custodywould prevent the insertion of such hardware or software.[citation needed]Security sealsare commonly employed in an attempt to detect tampering, but testing byArgonne National Laboratoryand others demonstrates that existing seals can usually be quickly defeated by a trained person using low-tech methods.[108] Security experts, such asBruce Schneier, have demanded that voting machinesource codeshould be publicly available for inspection.[109]Others have also suggested publishing voting machine software under afree software licenseas is done in Australia.[110] One method to detect errors with voting machines isparallel testing, which is conducted on election day with randomly picked machines. TheACMpublished a study showing that, to change the outcome of the 2000 U.S. presidential election, only 2 votes in each precinct would have needed to be changed.[111] The cost of having electronic machines receive the voter's choices, print a ballot and scan the ballots to tally results is higher than the cost of printing blank ballots, having voters mark them directly (with machine-marking only when voters want it) and scanning ballots to tally results, according to studies in Georgia,[112][113]New York[114]and Pennsylvania.[115] Electronic voting by countryvaries and may include voting machines in polling places, centralized tallying of paper ballots, and internet voting. Many countries use centralized tallying. Some also use electronic voting machines in polling places. Very few use internet voting. Several countries have tried electronic approaches and stopped because of difficulties or concerns about security and reliability.[citation needed] Electronic voting requires capital spending every few years to update equipment, as well as annual spending for maintenance, security, and supplies. If it works well, its speed can be an advantage where many contests are on each ballot. Hand-counting is more feasible in parliamentary systems where each level of government is elected at different times, and only one contest is on each ballot, for the national or regional member of parliament, or for a local council member.[citation needed] Polling place electronic voting or Internet voting examples have taken place in Australia,[116]Belgium,[117][118]Brazil,[119]Estonia,[120][121]France, Germany, India,[122]Italy, Namibia, the Netherlands (Rijnland Internet Election System), Norway, Peru, Switzerland, the UK,[123]Venezuela,[124]Pakistan and the Philippines.[125] In the 2006 filmMan of the YearstarringRobin Williams, the character played by Williams—the comedic host of a political talk show—wins the election for President of the United States when a software error in the electronic voting machines produced by the fictional manufacturer Delacroy causes votes to be tallied inaccurately.
InRunoff, a 2007 novel byMark Coggins, a surprising showing by theGreen Partycandidate in aSan Francisco Mayoral electionforces arunoffbetween him and the highly favored establishment candidate—a plot line that closely parallels the actual results of the 2003 election. When the private-eye protagonist of the book investigates at the behest of a powerful Chinatown businesswoman, he determines that the outcome was rigged by someone who defeated the security on the city's newly installed e-voting system.[127] "Hacking Democracy" is a 2006 documentary film shown onHBO. Filmed over three years, it documents American citizens investigating anomalies and irregularities with electronic voting systems that occurred during America's 2000 and 2004 elections, especially inVolusia County, Florida. The film investigates the flawed integrity of electronic voting machines, particularly those made byDiebold Election Systemsand culminates in the hacking of aDieboldelection system inLeon County, Florida. The central conflict in theMMOvideo gameInfantryresulted from the global institution ofdirect democracythrough the use of personal voting devices sometime in the 22nd century AD. The practice gave rise to a 'voting class' of citizens composed mostly of homemakers and retirees who tended to be at home all day. Because they had the most free time to participate in voting, their opinions ultimately came to dominate politics.[128]
https://en.wikipedia.org/wiki/Electronic_voting
Electoral fraud, sometimes referred to aselection manipulation,voter fraud, orvote rigging, involves illegal interference with the process of anelection, either by increasing the vote share of a favored candidate, depressing the vote share of rival candidates, or both.[1]It differs from but often goes hand-in-hand withvoter suppression. What exactly constitutes electoral fraud varies from country to country, though the goal is oftenelection subversion. Electoral legislation outlaws many kinds of election fraud,[2]but other practices violate general laws, such as those banningassault,harassmentorlibel. Although technically the term "electoral fraud" covers only those acts which are illegal, the term is sometimes used to describeacts which are legal, but considered morally unacceptable, outside the spirit of an election or in violation of the principles ofdemocracy.[3][4]Show elections, featuring only one candidate, are sometimes classified[by whom?]as electoral fraud, although they may comply with the law and are presented more as referendums/plebiscites. In national elections, successful electoral fraud on a sufficient scale can have the effect of acoup d'état,[citation needed]protest[5]orcorruptionof democracy. In anarrow election, a small amount of fraud may suffice to change the result. Even if the outcome is not affected, the revelation of fraud can reduce voters' confidence in democracy. Because U.S. states have primary responsibility for conducting elections, including federal elections, many forms of electoral fraud are prosecuted as state crimes. State election offenses include voter impersonation, double voting, ballot stuffing, tampering with voting machines, and fraudulent registration. Penalties vary widely by state and can include fines, imprisonment, loss of voting rights, and disqualification from holding public office. The U.S. federal government prosecutes electoral crimes including voter intimidation, conspiracy to commit election fraud, bribery, interference with the right to vote, and fraud related to absentee ballots in federal elections.[6] In France, someone guilty may be fined and/or imprisoned for not more than one year, or two years if the person is a public official.[7][non-primary source needed] Electoral fraud can occur in advance of voting if the composition of the electorate is altered. The legality of this type of manipulation varies across jurisdictions. Deliberate manipulation of election outcomes is widely considered a violation of the principles of democracy.[8] In many cases, it is possible for authorities to artificially control the composition of an electorate in order to produce a foregone result. One way of doing this is to move a large number of voters into the electorate prior to an election, for example by temporarily assigning them land or lodging them inflophouses.[9][10]Many countries prevent this with rules stipulating that a voter must have lived in an electoral district for a minimum period (for example, six months) in order to be eligible to vote there. However, such laws can also be used for demographic manipulation as they tend todisenfranchisethose with no fixed address, such as the homeless, travelers,Roma, students (studying full-time away from home), and some casual workers. Another strategy is to permanently move people into an electoral district, usually throughpublic housing. 
If people eligible for public housing are likely to vote for a particular party, then they can either be concentrated into one area, thus making their votes count for less, or moved intomarginal seats, where they may tip the balance towards their preferred party. One example of this was the 1986–1990Homes for votes scandalin theCity of Westminsterin England underShirley Porter.[11] Immigration law may also be used to manipulate electoral demography. For instance,Malaysiagave citizenship to immigrants from the neighboringPhilippinesandIndonesia, together with suffrage, in order for a political party to "dominate" the state ofSabah; this controversial process was known asProject IC.[12]In the United States, there have been allegations of an attempt to alter electoral demography via immigration as part of a far-rightGreat Replacement Theory conspiracy.[13] A related method involves the manipulation ofprimary contestsand other elections of party leaders. People who support one party may temporarily join another party (or vote in a crossover way, when permitted) in order to elect a weak candidate for that party's leadership. The ultimate goal is for the weak candidate to be defeated in the general election by the leader of the party that the voter truly supports. There were claims that this method was being utilised in theUK Labour Party leadership election in 2015, where Conservative-leaningToby YoungencouragedConservativesto joinLabourand vote forJeremy Corbynin order to "consign Labour to electoral oblivion".[14][15]Shortly after, #ToriesForCorbyntrendedonTwitter.[15] The composition of an electorate may also be altered bydisenfranchisingsome classes of people, rendering them unable to vote. In some cases, states had passed provisions that raised general barriers to voter registration, such aspoll taxes, literacy and comprehension tests, and record-keeping requirements, which in practice were applied against minority populations to discriminatory effect. From the turn of the century into the late 1960s, most African Americans in the southern states comprising theformer Confederacywere disenfranchised by such measures. Corrupt election officials may misuse voting regulations such as aliteracy testor requirement for proof of identity or address in such a way as to make it difficult or impossible for their targets to cast a vote. If such practices discriminate against a religious or ethnic group, they may so distort the political process that the political order becomes grossly unrepresentative, as in the post-ReconstructionorJim Crowera until theVoting Rights Act of 1965.Felons have been disenfranchisedin many states as a strategy to prevent African Americans from voting.[16] Groups may also be disenfranchised by rules which make it impractical or impossible for them to cast a vote. For example, requiring people to vote within their electorate may disenfranchise serving military personnel, prison inmates, students, hospital patients or anyone else who cannot return to their homes. Polling can be set for inconvenient days, such as midweek, or onthe Sabbathor otherholy daysof a religious group whose teachings prohibit voting on such a day.
Communities may also be effectively disenfranchised if polling places are situated in areas perceived by voters as unsafe, or are not provided within reasonable proximity (rural communities are especially vulnerable to this).[example needed] In some cases, voters may be invalidly disenfranchised, which is true electoral fraud. For example, a legitimate voter may be "accidentally" removed from theelectoral roll, making it difficult or impossible for the person to vote.[citation needed] In the Canadian federal election of 1917, during theGreat War, the Canadian government, led by the Union Party, passed theMilitary Voters Actand theWartime Elections Act. TheMilitary Voters Actpermitted any active military personnel to vote by party only and allowed that party to decide in which electoral district to place that vote. It also enfranchised those women who were directly related or married to an active soldier. These groups were believed to be disproportionately in favor of the Union government, as that party was campaigning in favor of conscription.[citation needed]TheWartime Elections Act, conversely, disenfranchised particular ethnic groups assumed to be disproportionately in favour of the opposition Liberal Party.[citation needed] Stanford University professorBeatriz Magalonidescribed a model governing the behaviour of autocratic regimes. She proposed that ruling parties can maintain political control under a democratic system without actively manipulating votes or coercing the electorate. Under the right conditions, the democratic system is maneuvered into an equilibrium in which divided opposition parties act as unwitting accomplices to single-party rule. This permits the ruling regime to abstain from illegal electoral fraud.[17] Preferential voting systems such asscore votingandsingle transferable vote, and in some cases,instant-runoff voting, can reduce the impact of systemic electoral manipulation andpolitical duopoly.[18][19] Voter intimidationinvolves putting undue pressure on a voter or group of voters so that they will vote a particular way, or not at all.[20]Absenteeand otherremote votingcan be more open to some forms of intimidation as the voter does not have the protection and privacy of the polling location. Intimidation can take a range of forms, whether verbal, physical, or coercive. Such intimidation was so common that in 1887 the Kansas Supreme Court, as quoted inNew Perspectives on Election Fraud in the Gilded Age, said "[...] physical retaliation constituted only a slight disturbance and would not vitiate an election." In its simplest form, voters from a particular demographic or known to support a particular party or candidate are directly threatened by supporters of another party or candidate or by those hired by them. In other cases, supporters of a particular party make it known that if a particular village or neighborhood is found to have voted the 'wrong' way, reprisals will be made against that community. Another method is to make a general threat of violence, for example abomb threat, which has the effect of closing a particular polling place, thus making it difficult for people in that area to vote.[21]One notable example of outright violence was the1984 Rajneeshee bioterror attack, where followers ofBhagwan Shree Rajneeshdeliberately contaminated salad bars inThe Dalles, Oregon, in an attempt to weaken political opposition during county elections.
Historically, this tactic includedLynching in the United Statesto terrorize potential African American voters in some areas.[citation needed] Polling places in an area known to support a particular party or candidate may be targeted for vandalism, destruction or threats, thus making it difficult or impossible for people in that area to vote.[citation needed] In this case, voters will be made to believe, accurately or otherwise, that they are not legally entitled to vote, or that they are legally obliged to vote a particular way. Voters who are not confident about their entitlement to vote may also be intimidated by real or implied authority figures who suggest that those who vote when they are not entitled to will be imprisoned, deported or otherwise punished.[22][23] For example, in 2004, in Wisconsin and elsewhere voters allegedly received flyers that said, "If you already voted in any election this year, you can't vote in the Presidential Election", implying that those who had voted in earlier primary elections were ineligible to vote. Also, "If anybody in your family has ever been found guilty of anything you can't vote in the Presidential Election." Finally, "If you violate any of these laws, you can get 10 years in prison and your children will be taken away from you."[24][25] Employers can coerce the voters' decision, through strategies such as explicit or implicit threats of job loss.[26] People may distribute false or misleading information in order to affect the outcome of an election.[3]For example, in theChilean presidential election of 1970, the U.S. government'sCentral Intelligence Agencyused "black propaganda"—materials purporting to be from various political parties—to sow discord between members of a coalition between socialists and communists.[27] Another method, allegedly used inCook County, Illinois, in 2004, is to falsely tell particular people that they are not eligible to vote[23]In 1981 in New Jersey, theRepublican National Committeecreated theBallot Security Task Forceto discourage voting among Latino and African-American citizens of New Jersey. The task force identified voters from an old registration list and challenged their credentials. It also paid off-duty police officers to patrol polling sites in Newark and Trenton, and posted signs saying that falsifying a ballot is a crime.[28] Another use ofdisinformationis to give voters incorrect information about the time or place of polling, thus causing them to miss their chance to vote. As part of the2011 Canadian federal election voter suppression scandal,Elections Canadatraced fraudulent phone calls, telling voters that their polling stations had been moved, to a telecommunications company that worked with theConservative Party.[29] Similarly in the United States, right-wingpolitical operativesJacob WohlandJack Burkmanwere indicted on several counts of bribery and election fraud in October 2020 regarding a voter disinformation scheme they undertook in the months prior to the2020 United States presidential election.[30]The pair hired a firm to make nearly 85,000robocallsthat targeted minority neighborhoods in Pennsylvania, Ohio, New York, Michigan, and Illinois. 
LikeDemocraticconstituencies in general that year, minorities voted overwhelmingly byabsentee ballot, many judging it a safer option during theCOVID-19 pandemicthan in-person voting.[31]Baselessly, the call warned potential voters if they submitted their votes by mail that authorities could use theirpersonal informationagainst them, including threats of police arrest for outstanding warrants and forced debt collection by creditors.[32] On October 24, 2022,WohlandBurkmanpleaded guilty inCuyahoga County, OhioCommon Pleas Courtto one count each of felony telecommunications fraud.[33]Commenting on the tactic of using disinformation to suppress voter turnout,Cuyahoga CountyProsecutor Michael C. O’Malley said the two men had "infringed upon the right to vote", and that "by pleading guilty, they were held accountable for their un-American actions.”[34] False claims of electoral fraud can be used as a basis for attempting to overturn an election. During and after the2020 presidential election, incumbent PresidentDonald Trumpmade numerousbaseless allegationsof electoral fraud by supporters ofDemocraticcandidateJoe Biden. The Trump campaign lost numerous legal challenges to the results.[36][37][38][39]President of BrazilJair Bolsonaroalso made numerous claims of electoral fraud without evidence during and after the2022 Brazilian presidential election.[40] Dead people voting refers to instances where ballots are fraudulently cast in the name of deceased individuals. While concerns about this type of electoral fraud often arise, studies suggest that such cases are extremely rare. In many democratic systems, safeguards exist to prevent this, such as regularly updating voter rolls and requiring identification at polling stations. However, in some cases, fraudulent actors may exploit outdated records or use the identification of deceased individuals to attempt illegal voting.[41] In Indonesia, dead people voting occurred during the2020 local elections[41]and the2024 general election.[42]There have been similar incidents reported in smaller regional elections.[43] Vote buying occurs when a political party or candidate seeks to buy the vote of a voter in an upcoming election. Vote buying can take various forms such as a monetary exchange, as well as an exchange for necessary goods or services.[44] A list of threats to voting systems, or electoral fraud methods considered as sabotage are kept by theNational Institute of Standards and Technology.[45] Ballot papers may be used to discourage votes for a particular party or candidate, using the design or other features which confuse voters into voting for a different candidate. For example, in the2000 U.S. presidential election, Florida'sbutterfly ballotpaper was criticized as poorly designed, leading some voters to vote for the wrong candidate. While the ballot itself was designed by a Democrat, it was the Democratic candidate,Al Gore, who was most harmed by voter errors because of this design.[46]Poor or misleading design is usually not illegal and therefore not technically election fraud, but it can nevertheless subvert the principles of democracy.[citation needed] Swedenhas a system with separate ballots used for each party, to reduce confusion among candidates. 
However, ballots from small parties such asPiratpartiet,JunilistanandFeministiskt initiativhave been omitted or placed on a separate table in the election to the EU parliament in 2009.[47]Ballots fromSweden Democratshave been mixed with ballots from the largerSwedish Social Democratic Party, which used a very similar font for the party name written on the top of the ballot.[citation needed] Another method of confusing people into voting for a different candidate from the one intended is to run candidates or create political parties with similar names or symbols to an existing candidate or party. The goal is to mislead voters into voting for the false candidate or party.[48]Such tactics may be particularly effective when many voters have limited literacy in the language used on the ballot. Again, such tactics are usually not illegal but they often work against the principles of democracy.[citation needed] Another possible source of electoral confusion is multiple variations of voting by differentelectoral systems. This may cause ballots to be counted as invalid if the wrong system is used. For instance, if a voter puts afirst-past-the-postcross in a numberedsingle transferable voteballot paper, it is invalidated. For example, in Scotland and other parts of the United Kingdom, up to three different voting systems and types of ballots may be used, based on the jurisdictional level of the election.Local electionsare determined bysingle transferable votes;Scottish parliamentary electionsby theadditional member system; and UK Parliamentary elections byfirst-past-the-post.[citation needed] Ballot stuffing, or "ballot-box stuffing", is the illegal practice of one person submitting multipleballotsduring avotein which only one ballot per person is permitted. Votes may be misrecorded at source, on a ballot paper or voting machine, or later in misrecording totals. The2019 Malawian general electionwas nullified by the Constitutional Court in 2020 because many results were changed by use of correction fluid, as well as duplicate, unverified and unsigned results forms.[60][61]California allows correction fluid and tape, so changes can be made after the ballot leaves the voter.[62] Where votes are recorded through electronic or mechanical means, the voting machinery may be altered so that a vote intended for one candidate is recorded for another, or electronic results are duplicated or lost, and there is rarely evidence whether the cause was fraud or error.[63][64][65] Many elections feature multiple opportunities for unscrupulous officials or 'helpers' to record an elector's vote differently from their intentions. Voters who require assistance to cast their votes are particularly vulnerable to having their votes stolen in this way. For example, a blind or illiterate person may be told that they have voted for one party when in fact they have been led to vote for another.[citation needed] Proxy votingis particularly vulnerable to election fraud, due to the amount of trust placed in the person who casts the vote. In several countries, there have been allegations of retirement home residents being asked to fill out 'absentee voter' forms. When the forms are signed and gathered, they are secretly rewritten as applications for proxy votes, naming party activists or their friends and relatives as the proxies. These people, unknown to the voter, cast the vote for the party of their choice. 
In theUnited Kingdom, this is known as 'granny farming.'[66] One method of electoral fraud is to destroy ballots for an opposing candidate or party. While mass destruction of ballots can be difficult to achieve without drawing attention to it, in a very close election it may be possible to destroy a small number of ballot papers without detection, thereby changing the overall result. Blatant destruction of ballot papers can render an election invalid and force it to be re-run. If a party can improve its vote on the re-run election, it can benefit from such destruction as long as it is not linked to it.[citation needed] During theBourbon Restorationin late 19th century Spain, the organized "loss" of voting slips (pucherazo) was used to maintain the agreed alternation between the Liberals and the Conservatives. This system of local political domination, especially rooted in rural areas and small cities, was known ascaciquismo.[citation needed] Another method is to make it appear that the voter has spoiled his or her ballot, thus rendering it invalid. Typically this would be done by adding another mark to the paper, making it appear, for instance, that the voter has voted for more candidates than they are entitled to. It would be difficult to do this to a large number of paper ballots without detection in some locales, but altogether too simple in others, especially jurisdictions where legitimate ballot spoiling by a voter would serve a clear and reasonable aim: for example emulating protest votes in jurisdictions that have recently had and since abolished a "none of the above" or "against all" voting option; civil disobedience where voting is mandatory; and attempts at discrediting or invalidating an election. An unusually large share of invalidated ballots may be attributed to loyal supporters of candidates that lost in primaries or previous rounds, did not run or did not qualify to do so, or some manner of protest movement or organized boycott.[citation needed] In 2016, during theEU membership referendum, Leave-supporting voters in the UKalleged without evidence that the pencilssupplied at polling stations would allow their votes to be erased from the ballot.[67][68] Allvoting systemsface threats of some form of electoral fraud. The types of threats that affectvoting machinesvary.[69]Research at Argonne National Laboratories revealed that a single individual with physical access to a machine, such as a Diebold Accuvote TS, can install inexpensive, readily available electronic components to manipulate its functions.[70][71] Other approaches have also been documented. In 1994, during the election which brought majority rule and putNelson Mandelain office, South Africa's election compilation system was hacked, so officials re-tabulated the results by hand.[78][79][80] In 2014, Ukraine's central election system was hacked. Officials found and removed a virus and said the totals were correct.[81] Academic research has generally found voter impersonation to be 'exceptionally rare' in the UK.[82]TheConservativegovernment passed theElections Act 2022, which mandated photo identification.[83][84] Voter impersonation is considered extremely rare in the US by experts.[85]Since 2013, several states have passedvoter ID lawsto counter voter impersonation.
Voter ID requirements are generally popular among Americans[86][87]and proponents have argued that it can be difficult to detect voter impersonation without them.[88][89][90]The effectiveness of voter ID laws, given the rarity of voter impersonation, and their potential to disenfranchise citizens who lack the required ID, have created controversy. By August 2016, four federal court rulings (Texas, North Carolina, Wisconsin, and North Dakota) overturned laws or parts of such laws because they placed undue burdens on minorities.[91] Allegations of widespread voter impersonation often turn out to be false.[92]The North Carolina Board of Elections reported in 2017 that out of 4,769,640 votes cast in the November 2016 election in North Carolina, only one illegal vote would potentially have been blocked by the voter ID law. The investigation found fewer than 500 instances of invalid ballots cast, the vast majority of which were cast by individuals on probation forfelonywho were likely not aware that this status disqualified them from voting, and the total number of invalid votes was far too small to have affected the outcome of any race in North Carolina in the 2016 election.[93][94] In particularly corrupt regimes, the voting process may be nothing more than a sham, to the point that officials simply announce whatever results they want, sometimes without even bothering to count the votes. While such practices tend to draw international condemnation, voters typically have little if any recourse, as there would seldom be any ways to remove the fraudulent winner from power, short of a revolution.[citation needed] InTurkmenistan, incumbent PresidentGurbanguly Berdymukhamedovreceived 97.69% of votesin the 2017 election, with his sole opponent, who was seen as pro-government, in fact being appointed by Berdymukhamedov.
InGeorgia,Mikheil Saakashvilireceived 96.2% of votes in the election following theRose Revolutionwhile his allyNino Burjanadzewas an interim head of state.[citation needed] In both the United Kingdom and the United States, experts estimate that voting fraud by mail has affected only a few local elections, without likely any impact at the national level.[95][96][97][98]In April 2020, a 20-year voter fraud study by theMassachusetts Institute of Technologyfound the level of mail-in ballot fraud "exceedingly rare" in the United States, occurring only in "0.00006 percent" of instances nationally, and, with Oregon's mail-in-ballots, "0.000004 percent—about five times less likely than getting hit by lightning".[99] Types of fraud have included pressure on voters from family or others, since the ballot is not always cast in secret;[97][100][101]collection of ballots by dishonest collectors who mark votes or fail to deliver ballots;[102][103]and insiders changing, challenging or destroying ballots after they arrive.[104][105] A measure championed as a way to prevent some types of mail-in fraud has been to require the voter's signature on the outer envelope, which is compared to one or more signatures on file before taking the ballot out of the envelope and counting it.[97][106]Not all places have standards for signature review,[107]and there have been calls to update signatures more often to improve this review.[97][106]While any level of strictness involves rejecting some valid votes and accepting some invalid votes,[108]there have been concerns that signatures are improperly rejected from young and minority voters at higher rates than others, with no or limited ability of voters to appeal the rejection.[109][110] Some problems have inherently limited scope, such as family pressure, while others can affect several percent of the vote, such as dishonest collectors[97]and overly strict signature verification.[109] In 2019,Elections Canadaidentified 103,000 non-citizens who were illegally on Canada's federal voters register.[111]It subsequently identified roughly 3,500 cases of potential non-citizens who voted in2019, but noted that it was not a coordinated effort and did not affect the result in anyriding.[112]"But almost a year after Canadians headed to the polls, the agency says it's still trying to determine how many of those cases — if any — involved non-Canadian citizens casting ballots."[112][needs update] Illegal non-citizen voting is considered extremely rare in the United States by most experts due to the severe penalties associated with the practice including deportation, incarceration or fines in addition to jeopardizing their attempt to naturalize.[113][114][115][116]The federal form to register a voter does not require proof of citizenship,[113]though non-citizens have been found to vote only in very small numbers.[117][118][further explanation needed] Vote fraud can also take place in legislatures. Some of the forms used in national elections can also be used in parliaments, particularly intimidation and vote-buying. Because of the much smaller number of voters, however, election fraud in legislatures is qualitatively different in many ways. Fewer people are needed to 'swing' the election, and therefore specific people can be targeted in ways impractical on a larger scale. For example,Adolf Hitlerachieved hisdictatorialpowers due to theEnabling Act of 1933. 
He attempted to achieve the necessary two-thirds majority to pass the Act by arresting members of the opposition, though this turned out to be unnecessary to attain the needed majority. Later, the Reichstag was packed withNaziparty members who voted for the Act's renewal.[citation needed] In many legislatures, voting is public, in contrast to thesecret ballotused in most modern public elections. This may make their elections more vulnerable to some forms of fraud since a politician can be pressured by others who will know how the legislator voted. However, it may also protect against bribery and blackmail, since the public and media will be aware if a politician votes in an unexpected way. Since voters and parties are entitled to pressure politicians to vote a particular way, the line between legitimate and fraudulent pressure is not always clear.[citation needed] As in public elections, proxy votes are particularly prone to fraud. In some systems, parties may vote on behalf of any member who is not present in parliament. This protects those members from missing out on voting if prevented from attending parliament, but it also allows their party to prevent them from voting against its wishes. In some legislatures, proxy voting is not allowed, but politicians may rig voting buttons or otherwise illegally cast "ghost votes" while absent.[119] The three main strategies for the prevention of electoral fraud in society are: Some of the main fraud prevention tactics can be summarised as secrecy and openness. Thesecret ballotprevents many kinds of intimidation and vote selling, while transparency at all other levels of the electoral process prevents and allows detection of most interference. Electoral fraud is generally considered difficult to prove, as perpetrators are highly motivated to conceal their acts.[120][121]Researchers must often rely oninferential methodsto uncover unusual patterns that could indicate election fraud, as fraud often cannot be observed directly.[122] Election auditing refers to any review conducted after polls close for the purpose of determining whether the votes were counted accurately (a results audit) or whether proper procedures were followed (a process audit), or both.[citation needed] Audits vary and can include checking that the number of voters signed in at the polls matches the number of ballots, seals on ballot boxes and storage rooms are intact, computer counts (if used) match hand counts, and counts are accurately totaled.[citation needed] Election recountsare a specific type of audit, with elements of both results and process audits.[citation needed] In the United States the goal of prosecutions is not to stop fraud or keep fraudulent winners out of office; it is to deter and punish years later. TheJustice Departmenthas publishedFederal Prosecution of Election Offensesin eight editions from 1976 to 2017, under PresidentsFord,Carter,Reagan,Clinton, Bush andTrump. It says, "Department does not have authority to directly intercede in the election process itself. ... overt criminal investigative measures should not ordinarily be taken ... 
until the election in question has been concluded, its results certified, and all recounts and election contests concluded."[123][124]Sentencing guidelines provide a range of 0–21 months in prison for a first offender;[125]offense levelsrange from 8 to 14.[126]Investigation, prosecution and appeals can take over 10 years.[127] In thePhilippines, formerPresidentGloria Macapagal Arroyowas arrested in 2011 following the filing of criminal charges against her for electoral sabotage, in connection with the2007 Philippine general election. She was accused of conspiring with election officials to ensure the victory of her party'ssenatorialslate in the province ofMaguindanao, through the tampering of election returns.[128] Thesecret ballot, in which only the voter knows how they have voted, is believed by many to be a crucial part of ensuringfree and fair electionsthrough preventing voter intimidation or retribution.[129]Others argue that the secret ballot enables election fraud (because it makes it harder to verify that votes have been counted correctly)[130][131]and that it discourages voter participation.[132][failed verification]Although the secret ballot was sometimes practiced inancient Greeceand was a part of theConstitution of the Year IIIof 1795, it only became common in the nineteenth century. Secret balloting appears to have been first implemented in the former Britishcolony—now anAustralianstate—ofTasmaniaon 7 February 1856. By the turn of the century, the practice had spread to most Western democracies.[citation needed] In the United States, the popularity of the Australian ballot grew as reformers in the late 19th century sought to reduce the problems of election fraud. Groups such as the Greenbackers, Nationalists, and others fought for those who yearned to vote but were exiled for their safety. George Walthew, Greenback, helped initiate one of the first secret ballots in America in Michigan in 1885. Even George Walthew had a predecessor in John Seitz, Greenback, who campaigned for a bill to "preserve the purity of elections" in 1879 after the discovery of Ohio's electoral fraud in congressional elections.[citation needed] The efforts of many helped accomplish this and led to the spread of the secret ballot across the country. As the Galveston News noted on February 18, 1890: "The Australian ballot has come to stay. It protects the independence of the voter and largely puts a stop to vote buying." Before this, it was common for candidates to intimidate or bribe voters, as they would always know who had voted which way.[citation needed] Most methods of preventing electoral fraud involve making the election process completely transparent to all voters, from nomination of candidates through casting of the votes and tabulation.[133][non-primary source needed]A key feature in ensuring the integrity of any part of the electoral process is a strictchain of custody.[citation needed] To prevent fraud in central tabulation, there has to be a public list of the results from every single polling place. This is the only way for voters to prove that the results they witnessed in their election office are correctly incorporated into the totals.[citation needed] End-to-end auditable voting systemsprovide voters with a receipt to allow them to verify their vote was cast correctly, and an audit mechanism to verify that the results were tabulated correctly and all votes were cast by valid voters.
However, the ballot receipt does not permit voters to prove to others how they voted, since this would open the door towards forced voting and blackmail. End-to-end systems includePunchscanandScantegrity, the latter being an add-on to optical scan systems instead of a replacement.[citation needed] In many cases,election observersare used to help prevent fraud and assure voters that the election is fair. International observers (bilateral and multilateral) may be invited to observe the elections (examples include election observation by the Organisation for Security and Cooperation in Europe (OSCE), European Union election observation missions, observation missions of the Commonwealth of Independent States (CIS), as well as international observation organised by NGOs, such asCIS-EMO, European Network of Election Monitoring Organizations (ENEMO), etc.). Some countries also invite foreign observers (i.e. bi-lateral observation, as opposed to multi-lateral observation by international observers).[citation needed] In addition, national legislatures of countries often permit domestic observation. Domestic election observers can be either partisan (i.e. representing interests of one or a group of election contestants) or non-partisan (usually done by civil society groups). Legislations of different countries permit various forms and extents of international and domestic election observation.[citation needed] Election observation is also prescribed by various international legal instruments. For example, paragraph 8 of the 1990 Copenhagen Document states that "The [OSCE] participating States consider that the presence of observers, both foreign and domestic, can enhance the electoral process for States in which elections are taking place. They, therefore, invite observers from any other CSCE participating States and any appropriate private institutions and organisations who may wish to do so to observe the course of their national election proceedings, to the extent permitted by law. They will also endeavour to facilitate similar access for election proceedings held below the national level. Such observers will undertake not to interfere in the electoral proceedings".[citation needed] Critics note that observers cannot spot certain types of election fraud like targetedvoter suppressionor manipulated software ofvoting machines.[citation needed] Various forms ofstatisticscan be indicators of election fraud—e.g.,exit pollswhich diverge from the final results. Well-conducted exit polls serve as a deterrent to electoral fraud. However, exit polls are still notoriously imprecise. For instance, in the Czech Republic, some voters are afraid or ashamed to admit that they voted for the Communist Party (exit polls in 2002 gave the Communist party 2–3 percentage points less than the actual result). Variations in willingness to participate in an exit poll may result in an unrepresentative sample compared to the overall voting population.[citation needed] When elections are marred by ballot-box stuffing (e.g., the Armenian presidential elections of 1996 and 1998), the affected polling stations will show abnormally high voter turnouts with results favouring a single candidate. By graphing the number of votes against turnout percentage (i.e., aggregating polling stations results within a given turnout range), the divergence from bell-curve distribution gives an indication of the extent of the fraud. Stuffing votes in favour of a single candidate affects votes vs. 
turnout distributions for that candidate and other candidates differently; this difference could be used to quantitatively assess the number of votes stuffed. Also, these distributions sometimes exhibit spikes at round-number turnout percentage values.[134][135][136]High numbers of invalid ballots, overvoting or undervoting are other potential indicators.Risk-limiting auditsare methods to assess the validity of an election result statistically without the effort of a fullelection recount. Though electionforensicscan determine if election results are anomalous, the statistical results still need to be interpreted. Alan Hicken and Walter R. Mebane describe the results of election forensic analyses as not providing "definitive proof" of fraud. Election forensics can be combined with other fraud detection and prevention strategies, such as in-person monitoring.[137] One method for verifyingvoting machineaccuracy is 'parallel testing', the process of using an independent set of results compared to the original machine results. Parallel testing can be done prior to or during an election. During an election, one form of parallel testing is thevoter-verified paper audit trail(VVPAT) or verified paper record (VPR). A VVPAT is intended as an independent verification system for voting machines designed to allow voters to verify that their vote was cast correctly, to detect possible election fraud or malfunction, and to provide a means to audit the stored electronic results. This method is only effective ifstatistically significantnumbers of voters verify that their intended vote matches both the electronic and paper votes.[citation needed] On election day, a statistically significant number of voting machines can be randomly selected from polling locations and used for testing. This can be used to detect potential fraud or malfunction unless manipulated software would only start to cheat after a certain event like a voter pressing a special key combination (Or a machine might cheat only if someone does not perform the combination, which requires more insider access but fewer voters).[citation needed] Another form of testing is 'Logic & Accuracy Testing (L&A)', pre-election testing of voting machines using test votes to determine if they are functioning correctly.[citation needed] Another method to ensure the integrity of electronic voting machines is independentsoftware verificationandcertification.[133]Once a software is certified, code signing can ensure the software certified is identical to that which is used on election day. Some argue certification would be more effective if voting machine software was publicly available oropen source.[138][139]VotingWorkshas created anopen-source voting systemin the United States.[140] Certification and testing processes conducted publicly and with oversight from interested parties can promote transparency in the election process. The integrity of those conducting testing can be questioned.[citation needed] Testing and certification can prevent voting machines from being ablack boxwhere voters cannot be sure that counting inside is done as intended.[133] One method that people have argued would help prevent these machines from being tampered with would be for the companies that produce the machines to share the source code, which displays and captures the ballots, with computer scientists. This would allow external sources to make sure that the machines are working correctly.[75]
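As a rough illustration of how code signing and certification support this kind of external check (the details of any certified system will differ), election staff or observers could recompute a cryptographic hash of the installed software image and compare it against the hash published for the certified build. The sketch below shows only the digest comparison; the file path and reference value are assumptions for the example, and real code signing additionally verifies a digital signature over the digest made with the certifier's key.

```python
# Sketch: verify that an installed voting-machine software image matches the
# certified build by comparing SHA-256 digests. Illustrative only.
import hashlib

# Published reference digest for the certified build (placeholder value).
CERTIFIED_SHA256 = "0000000000000000000000000000000000000000000000000000000000000000"

def sha256_of_file(path: str) -> str:
    """Compute the SHA-256 digest of a file, reading it in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Hypothetical path to the installed software image on the machine under test.
installed_digest = sha256_of_file("/opt/voting/firmware.img")

if installed_digest == CERTIFIED_SHA256:
    print("installed image matches the certified build")
else:
    print("MISMATCH: installed image differs from the certified build")
```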
https://en.wikipedia.org/wiki/Electoral_fraud#Tampering_with_electronic_voting_machines
Electoral fraud, sometimes referred to aselection manipulation,voter fraud, orvote rigging, involves illegal interference with the process of anelection, either by increasing the vote share of a favored candidate, depressing the vote share of rival candidates, or both.[1]It differs from but often goes hand-in-hand withvoter suppression. What exactly constitutes electoral fraud varies from country to country, though the goal is oftenelection subversion. Electoral legislation outlaws many kinds of election fraud,[2]but other practices violate general laws, such as those banningassault,harassmentorlibel. Although technically the term "electoral fraud" covers only those acts which are illegal, the term is sometimes used to describeacts which are legal, but considered morally unacceptable, outside the spirit of an election or in violation of the principles ofdemocracy.[3][4]Show elections, featuring only one candidate, are sometimes classified[by whom?]as electoral fraud, although they may comply with the law and are presented more as referendums/plebiscites. In national elections, successful electoral fraud on a sufficient scale can have the effect of acoup d'état,[citation needed]protest[5]orcorruptionof democracy. In anarrow election, a small amount of fraud may suffice to change the result. Even if the outcome is not affected, the revelation of fraud can reduce voters' confidence in democracy. Because U.S. states have primary responsibility for conducting elections, including federal elections, many forms of electoral fraud are prosecuted as state crimes. State election offenses include voter impersonation, double voting, ballot stuffing, tampering with voting machines, and fraudulent registration. Penalties vary widely by state and can include fines, imprisonment, loss of voting rights, and disqualification from holding public office. The U.S. federal government prosecutes electoral crimes including voter intimidation, conspiracy to commit election fraud, bribery, interference with the right to vote, and fraud related to absentee ballots in federal elections.[6] In France, someone guilty may be fined and/or imprisoned for not more than one year, or two years if the person is a public official.[7][non-primary source needed] Electoral fraud can occur in advance of voting if the composition of the electorate is altered. The legality of this type of manipulation varies across jurisdictions. Deliberate manipulation of election outcomes is widely considered a violation of the principles of democracy.[8] In many cases, it is possible for authorities to artificially control the composition of an electorate in order to produce a foregone result. One way of doing this is to move a large number of voters into the electorate prior to an election, for example by temporarily assigning them land or lodging them inflophouses.[9][10]Many countries prevent this with rules stipulating that a voter must have lived in an electoral district for a minimum period (for example, six months) in order to be eligible to vote there. However, such laws can also be used for demographic manipulation as they tend todisenfranchisethose with no fixed address, such as the homeless, travelers,Roma, students (studying full-time away from home), and some casual workers. Another strategy is to permanently move people into an electoral district, usually throughpublic housing. 
If people eligible for public housing are likely to vote for a particular party, then they can either be concentrated into one area, thus making their votes count for less, or moved into marginal seats, where they may tip the balance towards their preferred party. One example of this was the 1986–1990 Homes for votes scandal in the City of Westminster in England under Shirley Porter.[11] Immigration law may also be used to manipulate electoral demography. For instance, Malaysia gave citizenship to immigrants from the neighboring Philippines and Indonesia, together with suffrage, in order for a political party to "dominate" the state of Sabah; this controversial process was known as Project IC.[12] In the United States, there have been allegations of an attempt to alter electoral demography via immigration as part of a far-right Great Replacement Theory conspiracy.[13] A related method is the manipulation of primary contests and other elections of party leaders. People who support one party may temporarily join another party (or vote in a crossover way, when permitted) in order to elect a weak candidate for that party's leadership, the ultimate goal being for that weak candidate to be defeated in the general election by the leader of the party the voter truly supports. There were claims that this method was being utilised in the UK Labour Party leadership election in 2015, where Conservative-leaning Toby Young encouraged Conservatives to join Labour and vote for Jeremy Corbyn in order to "consign Labour to electoral oblivion".[14][15] Shortly after, #ToriesForCorbyn trended on Twitter.[15] The composition of an electorate may also be altered by disenfranchising some classes of people, rendering them unable to vote. In some cases, states had passed provisions that raised general barriers to voter registration, such as poll taxes, literacy and comprehension tests, and record-keeping requirements, which in practice were applied against minority populations to discriminatory effect. From the turn of the century into the late 1960s, most African Americans in the southern states comprising the former Confederacy were disenfranchised by such measures. Corrupt election officials may misuse voting regulations such as a literacy test or a requirement for proof of identity or address in such a way as to make it difficult or impossible for their targets to cast a vote. If such practices discriminate against a religious or ethnic group, they may so distort the political process that the political order becomes grossly unrepresentative, as in the post-Reconstruction or Jim Crow era until the Voting Rights Act of 1965. Felons have been disenfranchised in many states as a strategy to prevent African Americans from voting.[16] Groups may also be disenfranchised by rules which make it impractical or impossible for them to cast a vote. For example, requiring people to vote within their electorate may disenfranchise serving military personnel, prison inmates, students, hospital patients or anyone else who cannot return to their homes. Polling can be set for inconvenient days, such as midweek, or on the Sabbath or other holy days of a religious group whose teachings prohibit voting on such a day. 
Communities may also be effectively disenfranchised if polling places are situated in areas perceived by voters as unsafe, or are not provided within reasonable proximity (rural communities are especially vulnerable to this).[example needed] In some cases, voters may be invalidly disenfranchised, which is true electoral fraud. For example, a legitimate voter may be "accidentally" removed from theelectoral roll, making it difficult or impossible for the person to vote.[citation needed] In the Canadian federal election of 1917, during theGreat War, the Canadian government, led by the Union Party, passed theMilitary Voters Actand theWartime Elections Act. TheMilitary Voters Actpermitted any active military personnel to vote by party only and allowed that party to decide in which electoral district to place that vote. It also enfranchised those women who were directly related or married to an active soldier. These groups were believed to be disproportionately in favor of the Union government, as that party was campaigning in favor of conscription.[citation needed]TheWartime Elections Act, conversely, disenfranchised particular ethnic groups assumed to be disproportionately in favour of the opposition Liberal Party.[citation needed] Stanford University professorBeatriz Magalonidescribed a model governing the behaviour of autocratic regimes. She proposed that ruling parties can maintain political control under a democratic system without actively manipulating votes or coercing the electorate. Under the right conditions, the democratic system is maneuvered into an equilibrium in which divided opposition parties act as unwitting accomplices to single-party rule. This permits the ruling regime to abstain from illegal electoral fraud.[17] Preferential voting systems such asscore votingandsingle transferable vote, and in some cases,instant-runoff voting, can reduce the impact of systemic electoral manipulation andpolitical duopoly.[18][19] Voter intimidationinvolves putting undue pressure on a voter or group of voters so that they will vote a particular way, or not at all.[20]Absenteeand otherremote votingcan be more open to some forms of intimidation as the voter does not have the protection and privacy of the polling location. Intimidation can take a range of forms including verbal, physical, or coercion. This was so common that in 1887, a Kansas Supreme Court inNew Perspectives on Election Fraud in The Gilded Agesaid "[...] physical retaliation constituted only a slight disturbance and would not vitiate an election." In its simplest form, voters from a particular demographic or known to support a particular party or candidate are directly threatened by supporters of another party or candidate or by those hired by them. In other cases, supporters of a particular party make it known that if a particular village or neighborhood is found to have voted the 'wrong' way, reprisals will be made against that community. Another method is to make a general threat of violence, for example, abomb threatwhich has the effect of closing a particular polling place, thus making it difficult for people in that area to vote.[21]One notable example of outright violence was the1984 Rajneeshee bioterror attack, where followers ofBhagwan Shree Rajneeshdeliberately contaminated salad bars inThe Dalles, Oregon, in an attempt to weaken political opposition during county elections. 
Historically, this tactic included lynching in the United States to terrorize potential African American voters in some areas.[citation needed] Polling places in an area known to support a particular party or candidate may be targeted for vandalism, destruction or threats, thus making it difficult or impossible for people in that area to vote.[citation needed] Voters may also be made to believe, accurately or otherwise, that they are not legally entitled to vote, or that they are legally obliged to vote a particular way. Voters who are not confident about their entitlement to vote may also be intimidated by real or implied authority figures who suggest that those who vote when they are not entitled to will be imprisoned, deported or otherwise punished.[22][23] For example, in 2004, in Wisconsin and elsewhere, voters allegedly received flyers that said, "If you already voted in any election this year, you can't vote in the Presidential Election", implying that those who had voted in earlier primary elections were ineligible to vote. Also, "If anybody in your family has ever been found guilty of anything you can't vote in the Presidential Election." Finally, "If you violate any of these laws, you can get 10 years in prison and your children will be taken away from you."[24][25] Employers can also coerce voters' decisions, through strategies such as explicit or implicit threats of job loss.[26] People may distribute false or misleading information in order to affect the outcome of an election.[3] For example, in the Chilean presidential election of 1970, the U.S. government's Central Intelligence Agency used "black propaganda"—materials purporting to be from various political parties—to sow discord between members of a coalition between socialists and communists.[27] Another method, allegedly used in Cook County, Illinois, in 2004, is to falsely tell particular people that they are not eligible to vote.[23] In 1981 in New Jersey, the Republican National Committee created the Ballot Security Task Force to discourage voting among Latino and African-American citizens of New Jersey. The task force identified voters from an old registration list and challenged their credentials. It also paid off-duty police officers to patrol polling sites in Newark and Trenton, and posted signs saying that falsifying a ballot is a crime.[28] Another use of disinformation is to give voters incorrect information about the time or place of polling, thus causing them to miss their chance to vote. As part of the 2011 Canadian federal election voter suppression scandal, Elections Canada traced fraudulent phone calls, telling voters that their polling stations had been moved, to a telecommunications company that worked with the Conservative Party.[29] Similarly, in the United States, right-wing political operatives Jacob Wohl and Jack Burkman were indicted on several counts of bribery and election fraud in October 2020 regarding a voter disinformation scheme they undertook in the months prior to the 2020 United States presidential election.[30] The pair hired a firm to make nearly 85,000 robocalls that targeted minority neighborhoods in Pennsylvania, Ohio, New York, Michigan, and Illinois. 
LikeDemocraticconstituencies in general that year, minorities voted overwhelmingly byabsentee ballot, many judging it a safer option during theCOVID-19 pandemicthan in-person voting.[31]Baselessly, the call warned potential voters if they submitted their votes by mail that authorities could use theirpersonal informationagainst them, including threats of police arrest for outstanding warrants and forced debt collection by creditors.[32] On October 24, 2022,WohlandBurkmanpleaded guilty inCuyahoga County, OhioCommon Pleas Courtto one count each of felony telecommunications fraud.[33]Commenting on the tactic of using disinformation to suppress voter turnout,Cuyahoga CountyProsecutor Michael C. O’Malley said the two men had "infringed upon the right to vote", and that "by pleading guilty, they were held accountable for their un-American actions.”[34] False claims of electoral fraud can be used as a basis for attempting to overturn an election. During and after the2020 presidential election, incumbent PresidentDonald Trumpmade numerousbaseless allegationsof electoral fraud by supporters ofDemocraticcandidateJoe Biden. The Trump campaign lost numerous legal challenges to the results.[36][37][38][39]President of BrazilJair Bolsonaroalso made numerous claims of electoral fraud without evidence during and after the2022 Brazilian presidential election.[40] Dead people voting refers to instances where ballots are fraudulently cast in the name of deceased individuals. While concerns about this type of electoral fraud often arise, studies suggest that such cases are extremely rare. In many democratic systems, safeguards exist to prevent this, such as regularly updating voter rolls and requiring identification at polling stations. However, in some cases, fraudulent actors may exploit outdated records or use the identification of deceased individuals to attempt illegal voting.[41] In Indonesia, dead people voting occurred during the2020 local elections[41]and the2024 general election.[42]There have been similar incidents reported in smaller regional elections.[43] Vote buying occurs when a political party or candidate seeks to buy the vote of a voter in an upcoming election. Vote buying can take various forms such as a monetary exchange, as well as an exchange for necessary goods or services.[44] A list of threats to voting systems, or electoral fraud methods considered as sabotage are kept by theNational Institute of Standards and Technology.[45] Ballot papers may be used to discourage votes for a particular party or candidate, using the design or other features which confuse voters into voting for a different candidate. For example, in the2000 U.S. presidential election, Florida'sbutterfly ballotpaper was criticized as poorly designed, leading some voters to vote for the wrong candidate. While the ballot itself was designed by a Democrat, it was the Democratic candidate,Al Gore, who was most harmed by voter errors because of this design.[46]Poor or misleading design is usually not illegal and therefore not technically election fraud, but it can nevertheless subvert the principles of democracy.[citation needed] Swedenhas a system with separate ballots used for each party, to reduce confusion among candidates. 
However, ballots from small parties such asPiratpartiet,JunilistanandFeministiskt initiativhave been omitted or placed on a separate table in the election to the EU parliament in 2009.[47]Ballots fromSweden Democratshave been mixed with ballots from the largerSwedish Social Democratic Party, which used a very similar font for the party name written on the top of the ballot.[citation needed] Another method of confusing people into voting for a different candidate from the one intended is to run candidates or create political parties with similar names or symbols to an existing candidate or party. The goal is to mislead voters into voting for the false candidate or party.[48]Such tactics may be particularly effective when many voters have limited literacy in the language used on the ballot. Again, such tactics are usually not illegal but they often work against the principles of democracy.[citation needed] Another possible source of electoral confusion is multiple variations of voting by differentelectoral systems. This may cause ballots to be counted as invalid if the wrong system is used. For instance, if a voter puts afirst-past-the-postcross in a numberedsingle transferable voteballot paper, it is invalidated. For example, in Scotland and other parts of the United Kingdom, up to three different voting systems and types of ballots may be used, based on the jurisdictional level of the election.Local electionsare determined bysingle transferable votes;Scottish parliamentary electionsby theadditional member system; and UK Parliamentary elections byfirst-past-the-post.[citation needed] Ballot stuffing, or "ballot-box stuffing", is the illegal practice of one person submitting multipleballotsduring avotein which only one ballot per person is permitted. Votes may be misrecorded at source, on a ballot paper or voting machine, or later in misrecording totals. The2019 Malawian general electionwas nullified by the Constitutional Court in 2020 because many results were changed by use of correction fluid, as well as duplicate, unverified and unsigned results forms.[60][61]California allows correction fluid and tape, so changes can be made after the ballot leaves the voter.[62] Where votes are recorded through electronic or mechanical means, the voting machinery may be altered so that a vote intended for one candidate is recorded for another, or electronic results are duplicated or lost, and there is rarely evidence whether the cause was fraud or error.[63][64][65] Many elections feature multiple opportunities for unscrupulous officials or 'helpers' to record an elector's vote differently from their intentions. Voters who require assistance to cast their votes are particularly vulnerable to having their votes stolen in this way. For example, a blind or illiterate person may be told that they have voted for one party when in fact they have been led to vote for another.[citation needed] Proxy votingis particularly vulnerable to election fraud, due to the amount of trust placed in the person who casts the vote. In several countries, there have been allegations of retirement home residents being asked to fill out 'absentee voter' forms. When the forms are signed and gathered, they are secretly rewritten as applications for proxy votes, naming party activists or their friends and relatives as the proxies. These people, unknown to the voter, cast the vote for the party of their choice. 
In the United Kingdom, this is known as 'granny farming.'[66] One method of electoral fraud is to destroy ballots for an opposing candidate or party. While mass destruction of ballots can be difficult to achieve without drawing attention to it, in a very close election it may be possible to destroy a small number of ballot papers without detection, thereby changing the overall result. Blatant destruction of ballot papers can render an election invalid and force it to be re-run. If a party can improve its vote in the re-run election, it can benefit from such destruction as long as it is not linked to it.[citation needed] During the Bourbon Restoration in late 19th-century Spain, the organized “loss” of voting slips (pucherazo) was used to maintain the agreed alternation between the Liberals and the Conservatives. This system of local political domination, especially rooted in rural areas and small cities, was known as caciquismo.[citation needed] Another method is to make it appear that the voter has spoiled his or her ballot, thus rendering it invalid, typically by adding another mark to the paper so that the voter appears to have voted for more candidates than entitled. It would be difficult to do this to a large number of paper ballots without detection in some locales, but altogether too simple in others, especially in jurisdictions where legitimate ballot spoiling by voters would serve a clear and reasonable aim: for example, emulating protest votes in jurisdictions that recently had and have since abolished a "none of the above" or "against all" voting option; civil disobedience where voting is mandatory; and attempts at discrediting or invalidating an election. An unusually large share of invalidated ballots may be attributed to loyal supporters of candidates that lost in primaries or previous rounds, did not run or did not qualify to do so, or to some manner of protest movement or organized boycott.[citation needed] In 2016, during the EU membership referendum, Leave-supporting voters in the UK alleged without evidence that the pencils supplied at voting stations would allow their votes to be erased from the ballot.[67][68] All voting systems face threats of some form of electoral fraud. The types of threats that affect voting machines vary.[69] Research at Argonne National Laboratory revealed that a single individual with physical access to a machine, such as a Diebold Accuvote TS, can install inexpensive, readily available electronic components to manipulate its functions.[70][71] Other approaches have also been documented. In 1994, during the election which brought majority rule and put Nelson Mandela in office, South Africa's election compilation system was hacked, so officials re-tabulated the results by hand.[78][79][80] In 2014, Ukraine's central election system was hacked. Officials found and removed a virus and said the totals were correct.[81] Academic research has generally found voter impersonation to be 'exceptionally rare' in the UK.[82] The Conservative government passed the Elections Act 2022, which mandated photo identification.[83][84] Voter impersonation is considered extremely rare in the US by experts.[85] Since 2013, several states have passed voter ID laws to counter voter impersonation. 
Voter ID requirements are generally popular among Americans[86][87]and proponents have argued that it can be difficult to detect voter impersonation without them.[88][89][90]Voter ID laws' effectiveness given the rarity of voter impersonation, and their potential to disenfranchise citizens without the right ID have created controversy. By August 2016, four federal court rulings (Texas, North Carolina, Wisconsin, and North Dakota) overturned laws or parts of such laws because they placed undue burdens on minorities.[91] Allegations of widespread voter impersonation often turn out to be false.[92]The North Carolina Board of Elections reported in 2017 that out of 4,769,640 votes cast in the November 2016 election in North Carolina, only one illegal vote would potentially have been blocked by the voter ID law. The investigation found fewer than 500 incidences of invalid ballots cast, the vast majority of which were cast by individuals on probation forfelonywho were likely not aware that this status disqualified them from voting, and the total number of invalid votes was far too small to have affected the outcome of any race in North Carolina in the 2016 election.[93][94] In particularly corrupt regimes, the voting process may be nothing more than a sham, to the point that officials simply announce whatever results they want, sometimes without even bothering to count the votes. While such practices tend to draw international condemnation, voters typically have little if any recourse, as there would seldom be any ways to remove the fraudulent winner from power, short of a revolution.[citation needed] InTurkmenistan, incumbent PresidentGurbanguly Berdymukhamedovreceived 97.69% of votesin the 2017 election, with his sole opponent, who was seen as pro-government, in fact being appointed by Berdymukhamedov. 
InGeorgia,Mikheil Saakashvilireceived 96.2% of votes in the election following theRose Revolutionwhile his allyNino Burjanadzewas an interim head of state.[citation needed] In both the United Kingdom and the United States, experts estimate that voting fraud by mail has affected only a few local elections, without likely any impact at the national level.[95][96][97][98]In April 2020, a 20-year voter fraud study by theMassachusetts Institute of Technologyfound the level of mail-in ballot fraud "exceedingly rare" in the United States, occurring only in "0.00006 percent" of instances nationally, and, with Oregon's mail-in-ballots, "0.000004 percent—about five times less likely than getting hit by lightning".[99] Types of fraud have included pressure on voters from family or others, since the ballot is not always cast in secret;[97][100][101]collection of ballots by dishonest collectors who mark votes or fail to deliver ballots;[102][103]and insiders changing, challenging or destroying ballots after they arrive.[104][105] A measure championed as a way to prevent some types of mail-in fraud has been to require the voter's signature on the outer envelope, which is compared to one or more signatures on file before taking the ballot out of the envelope and counting it.[97][106]Not all places have standards for signature review,[107]and there have been calls to update signatures more often to improve this review.[97][106]While any level of strictness involves rejecting some valid votes and accepting some invalid votes,[108]there have been concerns that signatures are improperly rejected from young and minority voters at higher rates than others, with no or limited ability of voters to appeal the rejection.[109][110] Some problems have inherently limited scope, such as family pressure, while others can affect several percent of the vote, such as dishonest collectors[97]and overly strict signature verification.[109] In 2019,Elections Canadaidentified 103,000 non-citizens who were illegally on Canada's federal voters register.[111]It subsequently identified roughly 3,500 cases of potential non-citizens who voted in2019, but noted that it was not a coordinated effort and did not affect the result in anyriding.[112]"But almost a year after Canadians headed to the polls, the agency says it's still trying to determine how many of those cases — if any — involved non-Canadian citizens casting ballots."[112][needs update] Illegal non-citizen voting is considered extremely rare in the United States by most experts due to the severe penalties associated with the practice including deportation, incarceration or fines in addition to jeopardizing their attempt to naturalize.[113][114][115][116]The federal form to register a voter does not require proof of citizenship,[113]though non-citizens have been found to vote only in very small numbers.[117][118][further explanation needed] Vote fraud can also take place in legislatures. Some of the forms used in national elections can also be used in parliaments, particularly intimidation and vote-buying. Because of the much smaller number of voters, however, election fraud in legislatures is qualitatively different in many ways. Fewer people are needed to 'swing' the election, and therefore specific people can be targeted in ways impractical on a larger scale. For example,Adolf Hitlerachieved hisdictatorialpowers due to theEnabling Act of 1933. 
He attempted to achieve the two-thirds majority needed to pass the Act by arresting members of the opposition, though this turned out to be unnecessary. Later, the Reichstag was packed with Nazi party members who voted for the Act's renewal.[citation needed] In many legislatures, voting is public, in contrast to the secret ballot used in most modern public elections. This may make their elections more vulnerable to some forms of fraud, since a politician can be pressured by others who will know how the legislator voted. However, it may also protect against bribery and blackmail, since the public and media will be aware if a politician votes in an unexpected way. Since voters and parties are entitled to pressure politicians to vote a particular way, the line between legitimate and fraudulent pressure is not always clear.[citation needed] As in public elections, proxy votes are particularly prone to fraud. In some systems, parties may vote on behalf of any member who is not present in parliament. This protects those members from missing out on voting if prevented from attending parliament, but it also allows their party to prevent them from voting against its wishes. In some legislatures, proxy voting is not allowed, but politicians may rig voting buttons or otherwise illegally cast "ghost votes" while absent.[119] Several main strategies exist for the prevention of electoral fraud in society. Some of the main fraud prevention tactics can be summarised as secrecy and openness. The secret ballot prevents many kinds of intimidation and vote selling, while transparency at all other levels of the electoral process prevents and allows detection of most interference. Electoral fraud is generally considered difficult to prove, as perpetrators are highly motivated to conceal their acts.[120][121] Researchers must often rely on inferential methods to uncover unusual patterns that could indicate election fraud, as fraud often cannot be observed directly.[122] Election auditing refers to any review conducted after polls close for the purpose of determining whether the votes were counted accurately (a results audit) or whether proper procedures were followed (a process audit), or both.[citation needed] Audits vary and can include checking that the number of voters signed in at the polls matches the number of ballots, that seals on ballot boxes and storage rooms are intact, that computer counts (if used) match hand counts, and that counts are accurately totaled.[citation needed] Election recounts are a specific type of audit, with elements of both results and process audits.[citation needed] In the United States the goal of prosecutions is not to stop fraud or keep fraudulent winners out of office; it is to deter and punish years later. The Justice Department has published Federal Prosecution of Election Offenses in eight editions from 1976 to 2017, under Presidents Ford, Carter, Reagan, Clinton, Bush and Trump. It says, "Department does not have authority to directly intercede in the election process itself. ... overt criminal investigative measures should not ordinarily be taken ... 
until the election in question has been concluded, its results certified, and all recounts and election contests concluded."[123][124] Sentencing guidelines provide a range of 0–21 months in prison for a first offender;[125] offense levels range from 8 to 14.[126] Investigation, prosecution and appeals can take over 10 years.[127] In the Philippines, former President Gloria Macapagal Arroyo was arrested in 2011 following the filing of criminal charges against her for electoral sabotage, in connection with the 2007 Philippine general election. She was accused of conspiring with election officials to ensure the victory of her party's senatorial slate in the province of Maguindanao, through the tampering of election returns.[128] The secret ballot, in which only the voter knows how they have voted, is believed by many to be a crucial part of ensuring free and fair elections through preventing voter intimidation or retribution.[129] Others argue that the secret ballot enables election fraud (because it makes it harder to verify that votes have been counted correctly)[130][131] and that it discourages voter participation.[132][failed verification] Although the secret ballot was sometimes practiced in ancient Greece and was a part of the Constitution of the Year III of 1795, it only became common in the nineteenth century. Secret balloting appears to have been first implemented in the former British colony—now an Australian state—of Tasmania on 7 February 1856. By the turn of the century, the practice had spread to most Western democracies.[citation needed] In the United States, the popularity of the Australian ballot grew as reformers in the late 19th century sought to reduce the problems of election fraud. Groups such as the Greenbackers, Nationalists, and others fought for those who yearned to vote but were excluded for their safety. George Walthew, a Greenbacker, helped initiate one of the first secret ballots in America in Michigan in 1885. Walthew had a predecessor in John Seitz, also a Greenbacker, who campaigned for a bill to "preserve the purity of elections" in 1879 after the discovery of Ohio's electoral fraud in congressional elections.[citation needed] The efforts of many helped accomplish this reform and led to the spread of the secret ballot across the country. As the Galveston News put it on February 18, 1890: "The Australian ballot has come to stay. It protects the independence of the voter and largely puts a stop to vote buying." Before this, it was common for candidates to intimidate or bribe voters, as they would always know who had voted which way.[citation needed] Most methods of preventing electoral fraud involve making the election process completely transparent to all voters, from nomination of candidates through casting of the votes and tabulation.[133][non-primary source needed] A key feature in ensuring the integrity of any part of the electoral process is a strict chain of custody.[citation needed] To prevent fraud in central tabulation, there has to be a public list of the results from every single polling place. This is the only way for voters to prove that the results they witnessed in their election office are correctly incorporated into the totals.[citation needed] End-to-end auditable voting systems provide voters with a receipt to allow them to verify their vote was cast correctly, and an audit mechanism to verify that the results were tabulated correctly and all votes were cast by valid voters. 
However, the ballot receipt does not permit voters to prove to others how they voted, since this would open the door towards forced voting and blackmail. End-to-end systems includePunchscanandScantegrity, the latter being an add-on to optical scan systems instead of a replacement.[citation needed] In many cases,election observersare used to help prevent fraud and assure voters that the election is fair. International observers (bilateral and multilateral) may be invited to observe the elections (examples include election observation by the Organisation for Security and Cooperation in Europe (OSCE), European Union election observation missions, observation missions of the Commonwealth of Independent States (CIS), as well as international observation organised by NGOs, such asCIS-EMO, European Network of Election Monitoring Organizations (ENEMO), etc.). Some countries also invite foreign observers (i.e. bi-lateral observation, as opposed to multi-lateral observation by international observers).[citation needed] In addition, national legislatures of countries often permit domestic observation. Domestic election observers can be either partisan (i.e. representing interests of one or a group of election contestants) or non-partisan (usually done by civil society groups). Legislations of different countries permit various forms and extents of international and domestic election observation.[citation needed] Election observation is also prescribed by various international legal instruments. For example, paragraph 8 of the 1990 Copenhagen Document states that "The [OSCE] participating States consider that the presence of observers, both foreign and domestic, can enhance the electoral process for States in which elections are taking place. They, therefore, invite observers from any other CSCE participating States and any appropriate private institutions and organisations who may wish to do so to observe the course of their national election proceedings, to the extent permitted by law. They will also endeavour to facilitate similar access for election proceedings held below the national level. Such observers will undertake not to interfere in the electoral proceedings".[citation needed] Critics note that observers cannot spot certain types of election fraud like targetedvoter suppressionor manipulated software ofvoting machines.[citation needed] Various forms ofstatisticscan be indicators of election fraud—e.g.,exit pollswhich diverge from the final results. Well-conducted exit polls serve as a deterrent to electoral fraud. However, exit polls are still notoriously imprecise. For instance, in the Czech Republic, some voters are afraid or ashamed to admit that they voted for the Communist Party (exit polls in 2002 gave the Communist party 2–3 percentage points less than the actual result). Variations in willingness to participate in an exit poll may result in an unrepresentative sample compared to the overall voting population.[citation needed] When elections are marred by ballot-box stuffing (e.g., the Armenian presidential elections of 1996 and 1998), the affected polling stations will show abnormally high voter turnouts with results favouring a single candidate. By graphing the number of votes against turnout percentage (i.e., aggregating polling stations results within a given turnout range), the divergence from bell-curve distribution gives an indication of the extent of the fraud. Stuffing votes in favour of a single candidate affects votes vs. 
turnout distributions for that candidate and other candidates differently; this difference could be used to quantitatively assess the number of votes stuffed. Also, these distributions sometimes exhibit spikes at round-number turnout percentage values.[134][135][136]High numbers of invalid ballots, overvoting or undervoting are other potential indicators.Risk-limiting auditsare methods to assess the validity of an election result statistically without the effort of a fullelection recount. Though electionforensicscan determine if election results are anomalous, the statistical results still need to be interpreted. Alan Hicken and Walter R. Mebane describe the results of election forensic analyses as not providing "definitive proof" of fraud. Election forensics can be combined with other fraud detection and prevention strategies, such as in-person monitoring.[137] One method for verifyingvoting machineaccuracy is 'parallel testing', the process of using an independent set of results compared to the original machine results. Parallel testing can be done prior to or during an election. During an election, one form of parallel testing is thevoter-verified paper audit trail(VVPAT) or verified paper record (VPR). A VVPAT is intended as an independent verification system for voting machines designed to allow voters to verify that their vote was cast correctly, to detect possible election fraud or malfunction, and to provide a means to audit the stored electronic results. This method is only effective ifstatistically significantnumbers of voters verify that their intended vote matches both the electronic and paper votes.[citation needed] On election day, a statistically significant number of voting machines can be randomly selected from polling locations and used for testing. This can be used to detect potential fraud or malfunction unless manipulated software would only start to cheat after a certain event like a voter pressing a special key combination (Or a machine might cheat only if someone does not perform the combination, which requires more insider access but fewer voters).[citation needed] Another form of testing is 'Logic & Accuracy Testing (L&A)', pre-election testing of voting machines using test votes to determine if they are functioning correctly.[citation needed] Another method to ensure the integrity of electronic voting machines is independentsoftware verificationandcertification.[133]Once a software is certified, code signing can ensure the software certified is identical to that which is used on election day. Some argue certification would be more effective if voting machine software was publicly available oropen source.[138][139]VotingWorkshas created anopen-source voting systemin the United States.[140] Certification and testing processes conducted publicly and with oversight from interested parties can promote transparency in the election process. The integrity of those conducting testing can be questioned.[citation needed] Testing and certification can prevent voting machines from being ablack boxwhere voters cannot be sure that counting inside is done as intended.[133] One method that people have argued would help prevent these machines from being tampered with would be for the companies that produce the machines to share the source code, which displays and captures the ballots, with computer scientists. This would allow external sources to make sure that the machines are working correctly.[75]
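The turnout-based indicators described above can be computed directly from polling-station returns. Below is a minimal sketch in Python; the station figures, the 90% turnout cutoff, and the use of multiples of 5% as "round" values are illustrative assumptions for the example, not part of any standard forensic method.

```python
from collections import Counter

# Each station: (registered_voters, ballots_cast, votes_for_candidate_A); made-up figures.
stations = [(1000, 620, 310), (900, 855, 840), (1200, 780, 400),
            (800, 760, 745), (1100, 640, 320), (950, 903, 880)]

def turnout_pct(reg, cast):
    return 100.0 * cast / reg

# Indicator 1: stations whose turnout lands on a multiple of 5 percent (a crude spike check).
round_spikes = Counter(round(turnout_pct(r, c)) for r, c, _ in stations
                       if round(turnout_pct(r, c)) % 5 == 0)
print("stations at round turnout values:", dict(round_spikes))

# Indicator 2: candidate A's vote share in high-turnout stations versus the rest.
high = [(c, a) for r, c, a in stations if turnout_pct(r, c) > 90]
rest = [(c, a) for r, c, a in stations if turnout_pct(r, c) <= 90]

def share(group):
    return sum(a for _, a in group) / sum(c for c, _ in group)

print("share of A, turnout >90%:", round(share(high), 2),
      "| turnout <=90%:", round(share(rest), 2))
```

In data consistent with stuffing, the high-turnout stations would show both the round-number clustering and the elevated vote share for one candidate; the published analyses cited above fit full vote-versus-turnout distributions rather than a single cutoff like this one.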
https://en.wikipedia.org/wiki/Election_fraud#Testing_and_certification_of_electronic_voting
Vote countingis the process of countingvotesin anelection. It can be done manually orby machines. In the United States, the compilation of election returns and validation of the outcome that forms the basis of the official results is calledcanvassing.[1] Counts are simplest in elections where just one choice is on theballot, and these are often counted manually. In elections where many choices are on the same ballot, counts are often done by computers to give quick results. Tallies done at distant locations must be carried or transmitted accurately to the central election office. Manual counts are usually accurate within one percent. Computers are at least that accurate, except when they have undiscovered bugs, broken sensors scanning the ballots, paper misfeeds, orhacks. Officials keep election computers off theinternetto minimize hacking, but the manufacturers are on the internet. They and their annual updates are still subject to hacking, like any computers. Further voting machines are in public locations on election day, and often the night before, so they are vulnerable. Paper ballots and computer files of results are stored until they are tallied, so they need secure storage, which is hard. The election computers themselves are stored for years, and briefly tested before each election. Despite the challenges to the U.S. voting process integrity in recent years, including multiple claims by Republican Party members of error orvoter fraud in 2020and 2021, a robust examination of the voting process in multiple U.S. states, including Arizona[2](where claims were most strenuous), found no basis in truth for those claims. The absence of error and fraud is partially attributable to the inherent checks and balances in the voting process itself, which are, as with democracy, built into the system to reduce their likelihood. Manual counting, also known as hand-counting, requires a physicalballotthat represents voter intent. The physical ballots are taken out of ballot boxes and/or envelopes, read and interpreted; then results are tallied.[3]Manual counting may be used forelection auditsandrecountsin areas where automated counting systems are used.[4] One method of manual counting is to sort ballots in piles by candidate, and count the number of ballots in each pile. If there is more than one contest on the same sheet of paper, the sorting and counting are repeated for each contest.[5]This method has been used in Burkina Faso, Russia, Sweden, United States (Minnesota), and Zimbabwe.[6] A variant is to read aloud the choice on each ballot while putting it into its pile, so observers can tally initially, and check by counting the piles. This method has been used in Ghana, Indonesia, and Mozambique.[6]These first two methods do not preserve the original order of the ballots, which can interfere with matching them to tallies or digital images taken earlier. Another approach is for one official to read all the votes on a ballot aloud, to one or more other staff, who tally the counts for each candidate. The reader and talliers read and tally all contests, before going on to the next ballot.[4]A variant is to project the ballots where multiple people can see them to tally.[7][8] Another approach is for three or more people to look at and tally ballots independently; if a majority (Arizona[9]) or all (Germany[10]) agree on their tallies after a certain number of ballots, that result is accepted; otherwise they re-tally. 
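The acceptance rule in the last approach can be stated compactly: accept a batch's result only if enough of the independent counts agree, otherwise re-tally. The sketch below is illustrative only; the tallies are invented, and the two thresholds simply encode "majority agree" versus "all agree" rather than any particular jurisdiction's procedure.

```python
from collections import Counter

def accept_tally(counts, require_all=False):
    """counts: list of per-candidate tallies, one dict per independent counter."""
    frozen = [tuple(sorted(c.items())) for c in counts]       # normalize for comparison
    most_common, n_agree = Counter(frozen).most_common(1)[0]  # largest agreeing group
    needed = len(counts) if require_all else len(counts) // 2 + 1
    return (dict(most_common), True) if n_agree >= needed else (None, False)

tallies = [{"A": 41, "B": 37}, {"A": 41, "B": 37}, {"A": 42, "B": 36}]
print(accept_tally(tallies))                    # majority rule: result accepted
print(accept_tally(tallies, require_all=True))  # unanimity rule: re-tally needed
```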
A variant of all approaches is to scan all the ballots and release a file of the images, so anyone can count them. Parties and citizens can count these images by hand or by software. The file gives them evidence to resolve discrepancies.[11][12] The fact that different parties and citizens count with independent systems protects against errors from bugs and hacks. A checksum for the file identifies true copies.[13] Election machines which scan ballots typically create such image files automatically,[14] though those images can be hacked or be subject to bugs if the election machine is hacked or has bugs. Independent scanners can also create image files. Copies of ballots are known to be available for release in many parts of the United States.[15][16][17] The press obtained copies of many ballots in the 2000 Presidential election in Florida to recount after the Supreme Court halted official recounts.[18] Different methods resulted in different winners. The tallying may be done at night at the end of the last day of voting, as in Britain,[19] Canada,[20] France,[21] Germany,[22] and Spain,[23] or the next day,[6] or 1–2 weeks later in the US, after provisional ballots have been adjudicated.[24] If counting is not done immediately, or if courts accept challenges which can require re-examination of ballots, the ballots need to be securely stored, which is problematic. Australian federal elections count ballots at least twice, at the polling place and, starting Monday night after election day, at counting centres.[25][26] Hand counting has been found to be slower and more prone to error than other counting methods.[27] Repeated tests have found that the tedious and repetitive nature of hand counting leads to a loss of focus and accuracy over time. A 2023 test in Mohave County, Arizona used 850 ballots, averaging 36 contests each, that had been machine-counted many times. The hand count used seven experienced poll workers: one reader with two watchers, and two talliers with two watchers. The results included 46 errors not noticed by the counting team. Similar tallying errors were reported in Indiana and Texas election hand counts. Errors were 3% to 27% for various candidates in a 2016 Indiana race, because the tally sheet labels misled officials into over-counting groups of five tally marks, and officials sometimes omitted absentee ballots or double-counted ballots.[29] Results in 12 of 13 precincts in the 2024 Republican primary in Gillespie County, TX, were added or written down incorrectly after a hand count, including two precincts with seven contests wrong and one with six contests wrong.[30] While the Texas errors were caught and corrected before results were finalized, the Indiana errors were not. Average errors in hand-counted candidate tallies in New Hampshire towns were 2.5% in 2002, including one town with errors up to 20%. Omitting that town cut the average error to 0.87%. Only the net result for each candidate in each town could be measured, by assuming the careful manual recount was fully accurate. Total error can be higher if there were countervailing errors hidden in the net result, but net error in the overall electorate is what determines winners.[31] Connecticut towns in 2007 to 2013 had similar errors up to 2%.[32] In candidate tallies for precincts in Wisconsin recounted by hand in 2011 and 2016, the average net discrepancy was 0.28% in 2011 and 0.18% in 2016.[33] India hand tallies paper records from a 1.5% sample of election machines before releasing results. 
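A sample check of this kind can be sketched as follows: draw a random subset of machines, hand-tally their paper slips, and compare the results against the electronic tallies. The machine identifiers, vote figures, and the 1.5% sampling rate in the sketch below are illustrative assumptions; real audit rules also specify how discrepancies escalate.

```python
import random

# Hypothetical per-machine results: machine id -> {candidate: votes}.
electronic = {f"M{i:04d}": {"A": 400 + i % 7, "B": 350 - i % 5} for i in range(1000)}
paper = dict(electronic)                      # hand tallies of the printed slips
paper["M0042"] = {"A": 404, "B": 347}         # one machine disagrees in this example

sample_rate = 0.015                           # e.g. a 1.5% sample of machines
sample = random.sample(sorted(electronic), max(1, int(len(electronic) * sample_rate)))

discrepancies = {m: (electronic[m], paper[m]) for m in sample
                 if electronic[m] != paper[m]}
print(f"checked {len(sample)} machines; discrepancies: {discrepancies}")
```

A small fixed-rate sample like this can easily miss an isolated discrepancy, which is why risk-limiting audits, mentioned earlier, choose the sample size statistically from the reported margin.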
For each voter, the machine prints the selected candidate on a slip of paper, displays it to the voter, then drops the slip into a box. In the April–May 2019 elections for the lower house of Parliament, the Lok Sabha, the Election Commission hand-tallied the slips of paper from 20,675 voting machines (out of 1,350,000 machines)[34] and found discrepancies for 8 machines, usually of four votes or less.[35] Most machines tally over 16 candidates,[36] and the Commission did not report how many of these candidate tallies were discrepant. It formed investigation teams to report within ten days, but was still investigating in November 2019 and had issued no report as of June 2021.[35][37] Hand tallies before and after 2019 had a perfect match with machine counts.[35] An experiment with multiple types of ballots counted by multiple teams found average errors of 0.5% in candidate tallies when one person, watched by another, read to two people tallying independently. Almost all these errors were overcounts. The same ballots had errors of 2.1% in candidate tallies from sort and stack. These errors were equally divided between undercounts and overcounts of the candidates. Optical scan ballots, which were tallied by both methods, averaged 1.87% errors, equally divided between undercounts and overcounts. Since it was an experiment, the true numbers were known. Participants thought that having the candidate names printed in larger type and bolder than the office and party would make hand tallies faster and more accurate.[38] Intentional errors in hand-tallying election results constitute fraud. Close review by observers, if allowed, may detect fraud, and the observers may or may not be believed.[39] If only one person sees each ballot and reads off its choice, there is no check on that person's mistakes. In the US only Massachusetts and the District of Columbia give anyone but officials a legal right to see ballot marks during hand counting.[40] If fraud is detected and proven, penalties may be light or delayed. US prosecution policy since the 1980s has been to let fraudulent winners take office and keep office, usually for years, until convicted,[41][42] and to impose sentencing level 8–14,[43] which results in less than two years of prison.[44] By 1934, the United States had been hand-counting ballots for over 150 years, and problems were described in a report by Joseph P. Harris, who 20 years later invented a punched card voting machine:[45] "Recounts in Chicago and Philadelphia have indicated such wide variations that apparently the precinct officers did not take the trouble to count the ballots at all... While many election boards pride themselves upon their ability to conduct the count rapidly and accurately, as a general rule the count is conducted poorly and slowly... precinct officers conduct the count with practically no supervision whatever... It is impossible to fix the responsibility for errors or frauds... Not infrequently there is a mixup with the ballots and some uncertainty as to which have been counted and which have not... The central count was used some years ago in San Francisco... experience indicated that there is considerable confusion at the central counting place... 
and that the results are not more accurate than those obtained from the count by the precinct officer."[46] Data in the table are comparable, because average error in candidate tallies as percent of candidate tallies, weighted by number of votes for each candidate (in NH) is mathematically the same as the sum of absolute values of errors in each candidate's tally, as percent of all ballots (in other studies). Cost depends on pay levels and staff time needed, recognizing that staff generally work in teams of two to four (one to read, one to watch, and one or two to record votes). Teams of four, with two to read and two to record are more secure[38][51]and would increase costs. Three to record might more quickly resolve discrepancies, if 2 of the 3 agree. Typical times in the table below range from a tenth to a quarter of a minute per vote tallied, so 24-60 ballots per hour per team, if there are 10 votes per ballot. One experiment with identical ballots of various types and multiple teams found that sorting ballots into stacks took longer and had more errors than two people reading to two talliers.[38] Mechanical voting machines have voters selecting switches (levers),[65][66]pushing plastic chips through holes, or pushing mechanical buttons which increment a mechanical counter (sometimes called the odometer) for the appropriate candidate.[3] There is no record of individual votes to check. Tampering with the gears or initial settings can change counts, or gears can stick when a small object is caught in them, so they fail to count some votes.[67]When not maintained well the counters can stick and stop counting additional votes; staff may or may not choose to fix the problem.[68]Also, election staff can read the final results wrong off the back of the machine. Electronic machines for elections are being procured around the world, often with donor money. In places with honest independent election commissions, machines can add efficiency, though not usually transparency. Where the election commission is weaker, expensive machines can be fetishized, waste money on kickbacks and divert attention, time and resources from harmful practices, as well as reducing transparency.[69] An Estonian study compared the staff, computer, and other costs of different ways of voting to the numbers of voters, and found highest costs per vote were in lightly used, heavily staffed early in-person voting. Lowest costs per vote were in internet voting and in-person voting on election day at local polling places, because of the large numbers of voters served by modest staffs. For internet voting they do not break down the costs. They show steps to decrypt internet votes and imply but do not say they are hand-counted.[70] In anoptical scan voting system, or marksense, each voter's choices are marked on one or more pieces of paper, which then go through a scanner. The scanner creates an electronic image of each ballot, interprets it, creates a tally for each candidate, and usually stores the image for later review. The voter may mark the paper directly, usually in a specific location for each candidate, either by filling in an oval or by using a patterned stamp that can be easily detected by OCR software. Or the voter may pick one pre-marked ballot among many, each with its own barcode or QR code corresponding to a candidate. 
Or the voter may select choices on an electronic screen, which then prints the chosen names, usually with a bar code or QR code summarizing all choices, on a sheet of paper to put in the scanner.[71]This screen and printer is called an electronic ballot marker (EBM) orballot marking device(BMD), and voters with disabilities can communicate with it by headphones, large buttons, sip and puff, or paddles, if they cannot interact with the screen or paper directly. Typically the ballot marking device does not store or tally votes. The paper it prints is the official ballot, put into a scanning system which counts the barcodes, or the printed names can be hand-counted, as a check on the machines.[72]Most voters do not look at the paper to ensure it reflects their choices, and when there is a mistake, an experiment found that 81% of registered voters do not report errors to poll workers.[73] Two companies, Hart and Clear Ballot, have scanners which count the printed names, which voters had a chance to check, rather than bar codes and QR codes, which voters are unable to check.[74] The machines are faster than hand-counting, so are typically used the night after the election, to give quick results. The paper ballots and electronic memories still need to be stored, to check that the images are correct, and to be available for court challenges. Scanners have a row of photo-sensors which the paper passes by, and they record light and dark pixels from the ballot. A black streak results when a scratch or paper dust causes a sensor to record black continuously.[75][76]A white streak can result when a sensor fails.[77]In the right place, such lines can indicate a vote for every candidate or no votes for anyone. Some offices blow compressed air over the scanners after every 200 ballots to remove dust.[78]Fold lines in the wrong places can also count as votes.[79] Software can miscount; if it miscounts drastically enough, people notice and check. Staff rarely can say who caused an error, so they do not know whether it was accidental or a hack. Errors from 2002 to 2008 were listed and analyzed by the Brennan Center in 2010.[80]There have been numerous examples before and since. Researchers find security flaws in all election computers, which let voters, staff members or outsiders disrupt or change results, often without detection.[85]Security reviews and audits are discussed inElectronic voting in the United States#Security reviews. When a ballot marking device prints a bar code or QR code along with candidate names, the candidates are represented in the bar code or QR code as numbers, and the scanner counts those codes, not the names. If a bug or hack makes the numbering system in the ballot marking device not aligned with the numbering system in the scanner, votes will be tallied for the wrong candidates.[74]This numbering mismatch has appeared with direct recording electronic machines (below).[86] SomeUS states checka small number of places by hand-counting or use of machines independent of the original election machines.[40] Recreated ballots are paper[87]or electronic[88]ballots created by election staff when originals cannot be counted for some reason. They usually apply to optical scan elections, not hand-counting. Reasons include tears, water damage and folds which prevent feeding through scanners. 
Reasons also include voters selecting candidates by circling them or other marks, when machines are only programmed to tally specific marks in front of the candidate's name.[89]As many as 8% of ballots in an election may be recreated.[88] Recreating ballots is sometimes called reconstructing ballots,[87]ballot replication, ballot remaking or ballot transcription.[90]The term "duplicate ballot" sometimes refers to these recreated ballots,[91]and sometimes to extra ballots erroneously given to or received from a voter.[92] Recreating can be done manually, or by scanners with manual review.[93] Because of its potential for fraud, recreation of ballots is usually done by teams of two people working together[94]or closely observed by bipartisan teams.[87]The security of a team process can be undermined by having one person read to the other, so only one looks at the original votes and one looks at the recreated votes, or by having the team members appointed by a single official.[95] When auditing an election, audits need to be done with the original ballots, not the recreated ones. List prices of optical scanners in the US in 2002–2019, ranged from $5,000 to $111,000 per machine, depending primarily on speed. List prices add up to $1 to $4 initial cost per registered voter. Discounts vary, based on negotiations for each buyer, not on number of machines purchased. Annual fees often cost 5% or more per year, and sometimes over 10%. Fees for training and managing the equipment during elections are additional. Some jurisdictions lease the machines so their budgets can stay relatively constant from year to year. Researchers say that the steady flow of income from past sales, combined with barriers to entry, reduces the incentive for vendors to improve voting technology.[96] If most voters mark their own paper ballots and one marking device is available at each polling place for voters with disabilities, Georgia's total cost of machines and maintenance for 10 years, starting 2020, has been estimated at $12 per voter ($84 million total). Pre-printed ballots for voters to mark would cost $4 to $20 per voter ($113 million to $224 million total machines, maintenance and printing). The low estimate includes $0.40 to print each ballot, and more than enough ballots for historic turnout levels. the high estimate includes $0.55 to print each ballot, and enough ballots for every registered voter, including three ballots (of different parties) for each registered voter in primary elections with historically low turnout.[97][98]The estimate is $29 per voter ($203 million total) if all voters use ballot marking devices, including $0.10 per ballot for paper. The capital cost of machines in 2019 in Pennsylvania is $11 per voter if most voters mark their own paper ballots and a marking device is available at each polling place for voters with disabilities, compared to $23 per voter if all voters use ballot marking devices.[99]This cost does not include printing ballots. New York has an undated comparison of capital costs and a system where all voters use ballot marking devices costing over twice as much as a system where most do not. 
The authors say extra machine maintenance would exacerbate that difference, and printing cost would be comparable in both approaches.[100] Their assumption of equal printing costs differs from the Georgia estimates of $0.40 or $0.55 to print a ballot in advance, and $0.10 to print it in a ballot marking device.[97]

A touch screen displays choices to the voter, who selects choices, and can change their mind as often as needed, before casting the vote. Staff initialize each voter once on the machine, to avoid repeat voting. Voting data and ballot images are recorded in memory components, and can be copied out at the end of the election. The system may also provide a means for communicating with a central location for reporting results and receiving updates,[101] which is an access point for hacks and bugs to arrive. Some of these machines also print the names of chosen candidates on paper for the voter to verify. These names on paper can be used for election audits and recounts if needed. The tally of the voting data is stored in a removable memory component and in bar codes on the paper tape. The paper tape is called a Voter-verified paper audit trail (VVPAT). The VVPATs can be counted at 20–43 seconds of staff time per vote (not per ballot).[102][60]

For machines without VVPAT, there is no record of individual votes to check. This approach can have software errors. It does not include scanners, so there are no scanner errors. When there is no paper record, it is hard to notice or research most errors.

Election officials or optical scanners decide if a ballot is valid before tallying it. Reasons why it might not be valid include: more choices selected than allowed; incorrect voter signature or details on ballots received by mail, if allowed; lack of poll worker signatures, if required; forged ballot (wrong paper, printing or security features); stray marks which could identify who cast the ballot (to earn payments); and blank ballots, though these may be counted separately as abstentions.[6]

For paper ballots officials decide if the voter's intent is clear, since voters may mark lightly, or circle their choice, instead of marking as instructed. The ballot may be visible to observers to ensure agreement, by webcam or passing around a table,[6] or the process may be private. In the US only Massachusetts and the District of Columbia give anyone but officials a legal right to see ballot marks during hand counting.[40] For optical scans, the software has rules to interpret voter intent, based on the darkness of marks.[77] Software may ignore circles around a candidate name, and paper dust or broken sensors can cause marks to appear or disappear in places the voter did not intend.

Officials also check that the number of voters checked in at the polling place matches the number of ballots voted, and that the votes plus remaining unused ballots match the number of ballots sent to the polling place. If not, they look for the extra ballots, and may report discrepancies.[6]

If ballots or other paper or electronic records of an election may be needed for counting or court review after a period of time, they need to be stored securely.
Election storage often uses tamper-evident seals,[112][113] although seals can typically be removed and reapplied without damage, especially in the first 48 hours.[114] Photos taken when the seal is applied can be compared to photos taken when the seal is opened.[115] Detecting subtle tampering requires substantial training.[114][116][117] Election officials usually take too little time to examine seals, and observers are too far away to check seal numbers, though they could compare old and new photos projected on a screen. If seal numbers and photos are kept for later comparison, these numbers and photos need their own secure storage. Seals can also be forged. Seals and locks can be cut, so observers cannot trust the storage. If the storage is breached, election results cannot be checked and corrected. Experienced testers can usually bypass all physical security systems.[118] Locks[119] and cameras[120] are vulnerable before and after delivery.[118] Guards can be bribed or blackmailed. Insider threats[121][122] and the difficulty of following all security procedures are usually under-appreciated, and most organizations do not want to learn their vulnerabilities.[118]

Security recommendations include preventing access by anyone alone,[123] which would typically require two hard-to-pick locks, with keys held by independent officials if such officials exist in the jurisdiction; having storage risks identified by people other than those who design or manage the system; and using background checks on staff.[112] No US state has adequate laws on physical security of the ballots.[124] Starting the tally soon after voting ends makes it feasible for independent parties to guard storage sites.[125]

The ballots can be carried securely to a central station for central tallying, or they can be tallied at each polling place, manually or by machine, and the results sent securely to the central elections office. Transport is often accompanied by representatives of different parties to ensure honest delivery. Colorado transmits voting records by internet from counties to the Secretary of State, with hash values also sent by internet to try to identify accurate transmissions[126] (a sketch of this kind of hash check appears below).

Postal voting is common worldwide, though France stopped it in the 1970s because of concerns about ballot security. Voters who receive a ballot at home may also hand-deliver it or have someone else deliver it.
The voter may be forced or paid to vote a certain way,[39] or ballots may be changed or lost during the delivery process,[127][128] or delayed so they arrive too late to be counted or for signature mismatches to be resolved.[129][130] Postal voting lowered turnout in California by 3%.[131] It raised turnout in Oregon by 4%, but only in presidential election years, turning occasional voters into regular voters without bringing in new voters.[132] Election offices do not mail to people who have not voted recently, and letter carriers do not deliver to recent movers they do not know, omitting mobile populations.[133]

Some jurisdictions let ballots be sent to the election office by email, fax, internet or app.[134] Email and fax are highly insecure.[135] Internet voting so far has also been insecure, including in Switzerland,[136] Australia,[137] and Estonia.[138] Apps try to verify that the correct voter is using the app by name, date of birth and signature,[139] which are widely available for most voters and so can be faked; or by name, ID and video selfie, which can be faked by loading a pre-recorded video.[140] Apps have been particularly criticized for operating on insecure phones and for claiming more security during transmission than they actually provide.[141][142][140]
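The hash comparison Colorado uses for transmitted voting records (mentioned above) can be illustrated with a short sketch. This is a minimal, hypothetical Python example, not Colorado's actual system: the file name and the out-of-band delivery of the expected digest are assumptions for illustration. If the digest travels over the same channel as the results file, an attacker who controls that channel can replace both, so the check mainly catches corruption in transit unless the digest arrives independently.

    import hashlib

    def sha256_of_file(path: str) -> str:
        """Compute the SHA-256 digest of a file, reading it in chunks."""
        digest = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(8192), b""):
                digest.update(chunk)
        return digest.hexdigest()

    def verify_transmission(path: str, expected_hex: str) -> bool:
        """Return True if the received file matches the digest the sender published."""
        return sha256_of_file(path) == expected_hex

    # Hypothetical usage: the file name and expected digest are placeholders,
    # and the digest would ideally be obtained over a separate channel.
    # expected = "...64 hex characters published by the sending county..."
    # print(verify_transmission("county_results.csv", expected))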
https://en.wikipedia.org/wiki/Vote_counting_system
E-democracy (a blend of the terms electronic and democracy), also known as digital democracy or Internet democracy, uses information and communication technology (ICT) in political and governance processes.[1][2][3][4] The term is credited to digital activist Steven Clift.[5][6][7] By using 21st-century ICT, e-democracy seeks to enhance democracy, including aspects like civic technology and e-government. Proponents argue that by promoting transparency in decision-making processes, e-democracy can empower all citizens to observe and understand the proceedings. Also, if they possess overlooked data, perspectives, or opinions, they can contribute meaningfully. This contribution extends beyond mere informal disconnected debate; it facilitates citizen engagement in the proposal, development, and actual creation of a country's laws. In this way, e-democracy has the potential to incorporate crowdsourced analysis more directly into the policy-making process.[8]

Electronic democracy incorporates a diverse range of tools that use both existing and emerging information sources. These tools provide a platform for the public to express their concerns, interests, and perspectives, and to contribute evidence that may influence decision-making processes at the community, national, or global level. E-democracy leverages both traditional broadcast technologies such as television and radio, as well as newer interactive internet-enabled devices and applications, including polling systems. These emerging technologies have become popular means of public participation, allowing a broad range of stakeholders to access information and contribute directly via the internet. Moreover, large groups can offer real-time input at public meetings using electronic polling devices.[9]

Utilizing information and communication technology (ICT), e-democracy bolsters political self-determination. It collects social, economic, and cultural data to enhance democratic engagement. As a concept that encompasses various applications within differing democratic structures, e-democracy has substantial impacts on political norms and public engagement. It emerges from theoretical explorations of democracy and practical initiatives to address societal challenges through technology. The extent and manner of its implementation often depend on the specific form of democracy adopted by a society, thus shaped by both internal dynamics and external technological developments.

When designed to present both supporting and opposing evidence and arguments for each issue, apply conflict resolution and cost–benefit analysis techniques, and actively address confirmation bias and other cognitive biases, e-democracy could potentially foster a more informed citizenry. However, the development of such a system poses significant challenges. These include designing sophisticated platforms to achieve these aims, navigating the dynamics of populism while acknowledging that not everyone has the time or resources for full-time policy analysis and debate, promoting inclusive participation, and addressing cybersecurity and privacy concerns. Despite these hurdles, some envision e-democracy as a potential facilitator of more participatory governance, a countermeasure to excessive partisan dogmatism, a problem-solving tool, a means for evaluating the validity of pro/con arguments, and a method for balancing power distribution within society.

Throughout history, social movements have adapted to use the prevailing technologies as part of their civic engagement and social change efforts.
This trend persists in the digital era, illustrating how technology shapes democratic processes. As technology evolves, it inevitably impacts all aspects of society, including governmental operations. This ongoing technological advancement brings new opportunities for public participation and policy-making while presenting challenges such as cybersecurity threats, issues related to the digital divide, and privacy concerns. Society is actively grappling with these complexities, striving to balance leveraging technology for democratic enhancement and managing its associated risks. E-democracy incorporates elements of bothrepresentativeanddirect democracy. In representative democracies, which characterize most modern systems, responsibilities such as law-making, policy formation, and regulation enforcement are entrusted to elected officials. This differs fromdirect democracies, where citizens undertake these duties themselves.[10] Motivations for e-democracy reforms are diverse and reflect the desired outcomes of its advocates. Some aim to align government actions more closely with the public's interest, akin topopulism, diminish the influence of media, political parties, and lobbyists, or use public input to assess potential costs and benefits of each policy. E-democracy, in its unstructured form, emphasizes direct participation and has the potential to redistribute political power from elected officials to individuals or groups. However, reforms aimed at maximizing benefits and minimizing costs might require structures that mimic a form of representation, conceivable if the public had the capacity to debate and analyze issues full-time. Given the design of electronic forums that can accommodate extensive debate, e-democracy has the potential to mimic aspects of representation on a much larger scale. These structures could involve public education initiatives or systems that permit citizens to contribute based on their interests or expertise. Further, E-democracies allow participants to engage online thus allowing it to reach a broader range of people.[11] From this standpoint, e-democracy appears less concerned with what the public believes to be true and more focused on the evidence the public can demonstrate as true. This view reveals a tension within e-democratic reforms betweenpopulismand anevidence-basedapproach akin to thescientific methodor theEnlightenmentprinciples. A key indicator of the effectiveness of a democratic system is the successful implementation of policy. To facilitate this, voters must comprehend the implications of each policy approach, evaluate its costs and benefits, and consider historical precedents for policy effectiveness. Some proponents of e-democracy argue that technology can enable citizens to perform these tasks as effectively, if not more so, than traditional political parties within representative democracies. By harnessing technological advancements, e-democracy has the potential to foster more informed decision-making and enhance citizen involvement in the democratic process. E-democracy traces back to the development ofinformation and communication technology (ICT)and the evolution of democratic structures. It encompasses initiatives from governments to interact with citizens through digital means and grassroots activities using electronic platforms to influence governmental practices.[12] The inception of e-democracy corresponds with the rise of theInternetin the late 20th century. 
The diffusion ofpersonal computersand the Internet during the 1990s led to the initiation of electronic government initiatives. Digital platforms, such as forums, chat rooms, and email lists, were pivotal in fostering public discourse, thereby encouraging informal civic engagement online. These platforms provided an accessible medium for individuals to discuss ideas and issues, and they were utilized by both governments and citizens to promote dialogue, advocate for change, and involve the public in decision-making processes. The structure of the Internet, which currently embodies characteristics such asdecentralization,open standards, and universalaccess, has been observed to align with principles often associated with democracy. These democratic principles have their roots infederalismandEnlightenmentvalues like openness andindividual liberty.[13] Steven Clift, a notable proponent of e-democracy, suggests that the Internet should be utilized to enhance democratic processes and provide increased opportunities for interaction between individuals, communities, and the government. He emphasizes the importance of structuring citizen-to-citizen discussions online within existing power structures and maintaining significant reach within the community for these discussions to hold agenda-setting potential.[13] The concept involves endorsing individuals or policies committed to leveraging internet technologies to amplify public engagement without modifying or substituting existing constitutions. The approach includes data collection, analysis of advantages and disadvantages, evaluation of interests, and facilitating discussions around potential outcomes.[14] In the late 20th century and early into the 21st century, e-democracy started to become more structured as governments worldwide started to explore its potential. One major development was the rise ofe-governmentinitiatives, which aimed to provide public services online. One of the first instances of such an initiative was the establishment of the Government Information Locator Service (GILS) by the United States government in 1994.[15]GILS was a searchable database of government information accessible to citizens and businesses, and it served as a tool to improve agency electronic records management practices. Along with the rise of e-government services, government websites started to spring up, aiming to improve communication with citizens, increase transparency, and make administrative tasks easier to accomplish online. The mid-2000s ushered in the era ofWeb 2.0, emphasizing user-generated content, interoperability, and collaboration. This period witnessed the rise of social media platforms, blogs, and other collaborative tools, further amplifying the potential for e-democracy through increasing opportunities for public participation and interaction. Concepts likecrowdsourcingandopen-source governancegained traction, advocating for broader and more direct public involvement in policymaking.[16] As the digital age progressed, so too did the interaction between governments and citizens. The advent and rapid adoption of the internet globally catalyzed this transformation. With high internet penetration in many regions, politics have increasingly relied on the internet as a primary source of information for numerous people. 
This digital shift has been supported by the rise in online advertising among political candidates and groups actively trying to sway public opinion or directly influence legislators.[17] This trend is especially noticeable among younger voters, who often regard the internet as their primary source of information due to its convenience and ability to streamline their information-gathering process. The user-friendly nature of search engines like Google and social networks encourages increased citizen engagement in political research and discourse. Social networks, for instance, offer platforms where individuals can voice their opinions on governmental issues without fear of judgement.[18]The vast scale and decentralized structure of the internet enable anyone to create viral content and influence a wide audience. The Internet facilitates citizens in accessing and disseminating information about politicians while simultaneously providing politicians with insights from a broader citizen base. This collaborative approach to decision-making and problem-solving empowers citizens. It accelerates decision-making processes by politicians, thereby fostering a more efficient society. Gathering citizen feedback and perspectives is essential to a politician's role. The Internet functions as a conduit for effective engagement with a larger audience. Consequently, this enhanced communication with the public strengthens the capability and effectiveness of the American government as a democracy.[19] The2016 U.S. presidential electionis an example of social media integration in political campaigns, where both Donald Trump and Hillary Clinton actively utilized Twitter as a communication tool. These platforms allow candidates to shape public perceptions while also humanizing their personas, suggesting that political figures are as approachable and relatable as ordinary individuals. Through resources such as Google, the Internet enables every citizen to readily research political topics. Social media platforms like Facebook, Twitter, and Instagram encourage political engagement, allowing users to share their political views and connect with like-minded individuals.[citation needed] Generation X's disillusionment with political processes, epitomized by large-scale public protests such as the U.K. miners' strike of 1984-1985 that appeared to fail, predated the widespread availability of information technology to individual citizens.[20]There is a perception that e-democracy could address some of these concerns by offering a counter to the insularity, power concentration, and post-election accountability deficit often associated with traditional democratic processes organized primarily around political parties.[citation needed]Tom Watson, the Deputy Leader of the U.K. Labour Party, once stated: It feels like the Labour frontbench is further away from our members than at any point in our history, and the digital revolution can help bring the party closer together … I'm going to ask our NEC to see whether we can have digital branches and digital delegates to the conference. Not replacing what we do but providing an alternative platform. It's a way of organizing for a different generation of people who do their politics differently, get their news differently. Despite the benefits of the digital shift, one of the challenges of e-democracy is the potential disconnect between politics and actual government implementation. 
While the internet provides a platform for robust political discourse, translating these discussions into effective government action can be complex. This gap can often be exacerbated by the rapid pace of digital dialogue, which may outpace the slower, more deliberative processes of policy-making. The rise of digital media has created new opportunities for citizens to participate in politics and to hold governments accountable. However, it has also created new challenges, such as the potential for echo chambers, and the need for governments to be responsive to citizen concerns.[22]The challenge for e-democracy, therefore, is to ensure that the digital discourse contributes constructively to the functioning of the government and the decision-making processes, rather than becoming an echo chamber of opinions with little practical impact. As of the 2020s, e-democracy's landscape continues to evolve alongside advancements in technologies such asartificial intelligence,blockchain, andbig data. These technologies promise to expand citizen participation further, enhance transparency, and boost the overall efficiency and responsiveness of democratic governance.[23] The history of e-democracy exhibits significant progress, but it is also characterized by ongoing debates and challenges, such as the digital divide, data privacy, cybersecurity, and the impact of misinformation. One concern is whether or not e-democracies will be able to withstand terrorist threats; once people are assured that defenses are in place for this, e-democracies will better serve communities the way they were intended to.[24] As this journey continues, the emphasis remains on leveraging technology to enhance democratic processes and ensure all citizens' voices are heard and valued.[25] E-democracy promotes wider access to information, and its inherent decentralization challengescensorshippractices. It embodies elements of the internet's origins, including stronglibertariansupport forfreedom of speech, widespreadsharing culture, and theNational Science Foundation's commercial use prohibition. The internet's capacity for mass communication, evident innewsgroups, chat rooms, andMUDs, surpasses traditional boundaries associated withbroadcast medialike newspapers or radio, as well as personal media such as letters orlandline telephones. As the Internet represents a vast digital network supporting open standards, achieving widespread, cost-effective access to a diverse range of communication media and models is feasible.[26] Practical issues pertaining to e-democracy include managing the agenda while encouraging meaningful participation and fostering enlightened understanding. Furthermore, efforts are evaluated based on their ability to ensure voting equality and promote inclusivity. The success or failure of e-democracy largely depends on its capability to accurately delineate each issue's relevant costs and benefits, identify their likelihood and significance, and align votes with this analysis. 
In addition, all internet forums, including Wikipedia, must address cybersecurity and protect sensitive data.[27]

The Occupy movement, which proposed various demonstrations in response to the 2008 financial crisis, extensively utilized social networks.[28]

Originating in Spain and subsequently spreading to other European countries, the 15-M Movement gave rise to proposals by the Partido X (X Party) in Spain.[29][30] In 2016 and 2017, citizens involved in the movement together with the City Council of Barcelona developed a combined online and offline e-democracy project called Decidim, which self-describes as a "technopolitical network for participatory democracy", with the aim of implementing the hopes of participatory democracy raised by the movement.[31] The project combines a free and open-source software (FOSS) package with a participatory political project and an organising community, "Metadecidim".[32] Decidim participants refer to the software, political and organising components of the project as "technical", "political" and "technopolitical" levels, respectively.[33] By 2023, Decidim estimated that 400 city and regional governments and civil society institutions were running Decidim instances.[34]

During the Arab Spring, uprisings across North Africa and the Middle East were spearheaded by online activists. Initially, pro-democracy movements harnessed digital media to challenge authoritarian regimes. These regimes, however, adapted and integrated social media into their counter-insurgency strategies over time. Digital media served as a critical tool in transforming localized and individual dissent into structured movements with a shared awareness of common grievances and opportunities for collective action.[35]

The Egyptian Revolution began on 25 January 2011, prompted by mass protests in Cairo, Egypt, against the long reign of President Hosni Mubarak, high unemployment, governmental corruption, poverty, and societal oppression. The 18-day revolution gained momentum not through initial acts of violence or protests, but via a single Facebook page, which quickly attracted the attention of thousands and eventually millions of Egyptians, evolving into a global phenomenon.[36] The Internet became a tool of empowerment for the protestors, facilitating participation in their government's democratization process. Protestors effectively utilized digital platforms to communicate, organize, and collaborate, generating real-time impact.[37] In response to the regime's failed attempt to disrupt political online discussions by severing all internet access, Google and Twitter collaborated to create a system that allowed information to reach the public without internet access.[38] The interactive nature of media during this revolution enhanced civic participation and played a significant role in shaping the political outcome of the revolution and the democratization of the entire nation.

The Egyptian Revolution has been interpreted by some as a paradigm shift from a group-controlled system to one characterized by "networked individualism". This transformation is tied to the post-"triple revolution" of technology, consisting of three key developments: first, the shift towards social networks; second, the widespread propagation of the instantaneous internet; and third, the ubiquity of mobile phones.[39] These elements significantly impacted change through the Internet, providing an alternative, unregulated sphere for idea formation and protests.
For instance, the "6 April Youth Movement" in Egypt established their political group on Facebook and called for a national strike. Despite the subsequent suppression of this event, the Facebook group persisted, encouraging other activist groups to utilize online media. Moreover, the Internet served as a medium for building international connections, amplifying the impact of the revolt. The rapid transmission of information via Twitter hashtags, for example, made the uprising globally known. In particular, over three million tweets contained popular hashtags such as #Egypt and #sidibouzid, further facilitating the spread of knowledge and fostering change in Egypt.[39]

The Kony 2012 video, released on 5 March 2012 by the non-profit organization Invisible Children, launched an online grassroots campaign aimed at locating and arresting Joseph Kony, the leader of the Lord's Resistance Army (LRA) in Central Africa. The video's mission was to raise global awareness about Kony's activities, with Jason Russell, a founder of Invisible Children, emphasizing the necessity of public support to urge the government's continued search for Kony.[40] The organization leveraged the extensive reach of social media and contemporary technology to spotlight Kony's crimes. In response to the campaign, on 21 March 2012, a resolution was introduced by 33 Senators denouncing "the crimes against humanity" perpetrated by Kony and the LRA. This resolution supported the US government's ongoing efforts to boost the capabilities of regional military forces for civilian protection and the pursuit of LRA commanders. It also advocated for cross-border initiatives to augment civilian protection and aid populations affected by the LRA. Co-sponsor Senator Lindsey Graham noted the significant impact of public attention driven by social media, stating that the YouTube sensation would "help the Congress be more aggressive and will do more to lead to his demise than all other action combined".[41]

The India Against Corruption (IAC) movement was an influential anti-corruption crusade in India, garnering substantial attention during the anti-corruption protests of 2011 and 2012. Its primary focus was the contention surrounding the proposed Jan Lokpal bill. IAC sought to galvanize the populace in their pursuit of a less corrupt Indian society. However, internal divisions within the IAC's central committee led to the movement's split. Arvind Kejriwal left to establish the Aam Aadmi Party, while Anna Hazare created the Jantantra Morcha.

The Long March is a socio-political movement in Pakistan initiated by Muhammad Tahir-ul-Qadri after he returned from a seven-year residence in Toronto, Ontario, Canada, in December 2012. Qadri called for a "million-men" march in Islamabad to protest government corruption.[42] The march commenced on 14 January 2013, with thousands pledging to participate in a sit-in until their demands were met.[43] The march began in Lahore with about 25,000 participants.[44] During a rally in front of the parliament, Qadri critiqued the legislators, saying, "There is no Parliament; there is a group of looters, thieves and dacoits [bandits] ... Our lawmakers are the lawbreakers."[45] After four days of sit-in, Qadri and the government reached an agreement, termed the Islamabad Long March Declaration, which pledged electoral reforms and enhanced political transparency.[46] Despite Qadri's call for a "million-men" march, the government estimated the sit-in participants in Islamabad to number around 50,000.
TheFive Star Movement(M5S), a prominent political party in Italy, has been utilizing online voting since 2012 to select its candidates for Italian and European elections. These votes are conducted through a web-based application called Rousseau, accessible to registered members ofBeppe Grillo's blog.[47] Within this platform, M5S users are able to discuss, approve, or reject legislative proposals. These proposals are then presented in Parliament by the M5S group.[48]For instance, the M5S's electoral law and the selection of its presidential candidate were determined via online voting.[49][50]Notably, the decision to abolish a law against immigrants was made by online voting among M5S members, in opposition to the views of Grillo and Casaleggio.[51] M5S's alliance with theUK Independence Partywas also determined by online voting, albeit with limited options for the choice of European Parliament group for M5S. These wereEurope of Freedom and Democracy(EFD),European Conservatives and Reformists(ECR), and "Stay independent" (Non-Inscrits). The possibility of joining theGreens/EFAgroup was discussed but not available at the time due to the group's prior rejection of M5S.[52][53] When theConte I Cabinetcollapsed, a new coalition between theDemocratic Partyand M5S was endorsed after over 100,000 members voted online, with 79.3% supporting the new coalition.[54] TheCOVID-19 pandemichas underscored the importance and impact of e-democracy.[citation needed]In 2020,[55]the advent ofCOVID-19led countries worldwide to implement safety measures as recommended by public health officials. This abrupt societal shift constrained social movements, causing a temporary halt to certain political issues. Despite these limitations, individuals leveraged digital platforms to express their views, create visibility for social movements, and strive to instigate change and raise awareness through democracy in social media. As reported by news analysis firmThe ASEAN Post, the pandemic-induced limitations on traditional democratic spaces such as public meetings have led Filipinos, among others, to resort to social media, digital media, and collaborative platforms for engaging in public affairs and practising "active citizenship" in the virtual domain.[55]This shift has enabled active participation in social, written, or visual interaction and the rectification of misinformation in a virtual setting.[citation needed] E-democracy has the potential to inspire greater community involvement in political processes and policy decisions, interlacing its growth with complex internal aspects such as political norms and public pressure.[17]The manner in which it is implemented is also closely connected to the specific model of democracy employed.[56]Consequently, e-democracy is profoundly influenced by a country's internal dynamics as well as the external drivers defined by standard innovation and diffusion theory.[17] In the current age, where the internet and social networking dominate daily life, individuals are increasingly advocating for their public representatives to adopt practices similar to those in other states or countries concerning the online dissemination of government information. By making government data easily accessible and providing straightforward channels to communicate with government officials, e-democracy addresses the needs of modern society. 
E-democracy promotes more rapid and efficient dissemination of political information, encourages public debate, and boosts participation in decision-making processes.[57]Social media platforms have emerged as tools of empowerment, particularly among younger individuals, stimulating their participation in electoral processes. These platforms also afford politicians opportunities for direct engagement with constituents. A notable example is the2016 United States presidential elections, in whichDonald Trumpprimarily used Twitter to communicate policy initiatives and goals. Similar practices have been observed among various global leaders, such as Justin Trudeau, Jair Bolsonaro, and Hassan Rouhani, who maintain active Twitter accounts. Some observers[who?]argue that the government's online publication of public information enhances its transparency, enabling more extensive public scrutiny, and consequently promoting a more equitable distribution of power within society.[58] Jane Fountain, in her 2001 workBuilding the Virtual State, delves into the expansive reach of e-democracy and its interaction with traditional governmental structures. She offers a comprehensive model to understand how pre-existing norms, procedures, and rules within bureaucracies impact the adoption of new technological forms. Fountain suggests that this form of e-government, in its most radical manifestation, would necessitate a significant overhaul of the modern administrative state, with routine electronic consultations involving elected politicians, civil servants, pressure groups, and other stakeholders becoming standard practice at all stages of policy formulation. States where legislatures are controlled by the Republican Party, as well as those characterized by a high degree of legislative professionalization and active professional networks, have shown a greater propensity to embrace e-government and e-democracy.[59] E-democracy provides numerous benefits, contributing to a more engaged public sphere. It encourages increased public participation by offering platforms for citizens to express their opinions through websites, emails, and other electronic communication channels, influencing planning and decision-making processes.[9] This digital democracy model broadens the number and diversity of individuals who exercise their democratic rights by conveying their thoughts to decision-making bodies about various proposals and issues. Moreover, it cultivates a virtual public space, fostering interaction, discussion, and the exchange of ideas among citizens. E-democracy also promotes convenience, allowing citizens to participate at their own pace and comfort. Its digital nature enables it to reach vast audiences with relative ease and minimal cost. The system promotes interactive communication, encouraging dialogue between authorities and citizens. It also serves as an effective platform for disseminating large amounts of information, maintaining clarity and minimizing distortion. While e-democracy platforms, also known as digital democracy platforms, offer enhanced opportunities for exercising voting rights, they are also susceptible to disruption. Digital voting platforms, for example, have faced attacks aimed at influencing election outcomes. 
As Dobrygowski states, "cybersecurity threats to the integrity of both electoral mechanisms and government institutions are, quite uncomfortably, more intangible."[60] That being said, if e-democracy options were more secure, people would be more comfortable using them for things such as voting.[2] While traditional paper ballots are often considered the most secure method for conducting elections, digital voting provides the convenience of electronic participation. However, the successful implementation of this system necessitates continual innovations and contributions from third parties. Essentially, for e-democracy to be used in real time, governments would have to prove its reliability to users.

To foster a robust digital democracy, it is imperative to promote digital inclusion that ensures all citizens, regardless of income, education, gender, religion, ethnicity, language, or physical and mental health, have equal opportunities to participate in public policy formulation. Early instances of digital inclusion in e-democracy can be seen in the 2008 election; individuals who were normally civically uninvolved became increasingly engaged due to the accessibility of receiving and spreading campaign information.[61] During the 2020 elections, digital communications were utilized by various communities to cultivate a sense of inclusivity.[62] Specifically, the COVID-19 pandemic saw a surge in online political participation among the youth, demonstrated by the signing of online petitions and participation in digital protests. Even as youth participation in traditional politics dwindles, young people show significant support for pressure groups mobilized through social media.[63] For instance, the Black Lives Matter movement gained widespread recognition on social media, enabling many young people to participate in meaningful ways, including online interactions and protests.[64]

E-democracy is valued for fostering participation, promoting social inclusivity, displaying sensitivity to individual perspectives, and offering flexible means of engagement. The Internet lends a sense of relevance to participation by giving everyone a platform for their voices to be heard and articulated. It also facilitates a structure of social inclusivity through a broad array of websites, groups, and social networks, each representing diverse viewpoints and ideas. Individual needs are met by enabling the public and rapid expression of personal opinions. Furthermore, the Internet offers an exceptionally flexible environment for engagement; it is cost-effective and widely accessible. Through these attributes, e-democracy and the deployment of the Internet can play a pivotal role in societal change.[65]

The progression of e-democracy is impeded by the digital divide, which separates those actively engaged in electronic communities from those who do not participate. Proponents of e-democracy often recommend governmental actions to bridge this digital gap.[66] The divergence in e-governance and e-democracy between the developed and the developing world is largely due to the digital divide.[67] Practical concerns include the digital divide that separates those with access from those without, and the opportunity cost associated with investments in e-democracy innovations. There also exists a degree of skepticism regarding the potential impact of online participation.[68]

The government has a responsibility to ensure that online communications are both secure and respectful of individuals' privacy.
This aspect gains prominence when considering electronic voting. The complexity of electronic voting systems surpasses other digital transaction mechanisms, necessitating authentication measures that can counter ballot manipulation or its potential threat. These measures may encompass the use of smart cards, which authenticate a voter's identity while maintaining the confidentiality of the cast vote. Electronic voting in Estonia exemplifies a successful approach to addressing the privacy-identity dilemma inherent in internet voting systems; a simplified sketch of this separation of identity from ballot content appears below. However, the ultimate goal should be to match the security and privacy standards of existing manual systems. Despite these advancements, recent research has indicated, through a SWOT analysis, that the risks of e-government relate to data loss, privacy and security, and user adoption.[69]

To encourage citizens to engage in online consultations and discussions, the government needs to be responsive and clearly demonstrate that public engagement influences policy outcomes. It is crucial for citizens to have the opportunity to contribute at a time and place that suits them and when their viewpoints will make a difference. The government should put structures in place to accommodate increased participation. Considering the role that intermediaries and representative organizations might play could be beneficial to ensure issues are debated in a manner that is democratic, inclusive, tolerant, and productive. To amplify the efficacy of existing legal rights allowing public access to information held by public authorities, citizens ought to be granted the right to productive public deliberation and moderation.[70]

Some researchers argue that many initiatives have been driven by technology rather than by the core values of government, which has resulted in weakened democracy. E-democracy presents an opportunity to reconcile the conventional trade-off between the size of the group involved in democratic processes and the depth of will expression. Historically, broad group participation was facilitated via simple ballot voting, but the depth of will expression was confined to predefined options (those on the ballot). Depth of will expression was instead obtained by limiting participant numbers through representative democracy. The social media Web 2.0 revolution has demonstrated the possibility of achieving both large group sizes and depth of will expression. However, expressions of will in social media are unstructured, making their interpretation challenging and often subjective. Novel information processing methods, including big data analytics and the semantic web, suggest potential ways to exploit these capabilities for future e-democracy implementations.[71] Currently, e-democracy processes are facilitated by technologies such as electronic mailing lists, peer-to-peer networks, collaborative software, and apps like GovernEye, Countable, VoteSpotter, wikis, internet forums, and blogs.
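Estonia's internet voting is commonly described as a "double envelope" design: the ballot is encrypted to the election authority's key (the inner envelope) and then digitally signed by the voter (the outer envelope); signatures are verified and removed before the anonymous ballots are decrypted and counted. The sketch below is a simplified, hypothetical illustration of that separation, not Estonia's actual implementation; the key handling, the HMAC used as a stand-in for the voter's ID-card signature, and all names and values are invented for the example.

    # Simplified "double envelope" sketch: the outer envelope carries identity,
    # the inner envelope carries the encrypted ballot; identity is verified and
    # stripped before any ballot is opened. Hypothetical stand-ins: an HMAC
    # plays the role of the voter's digital signature, and Fernet symmetric
    # encryption stands in for encryption to the election authority's key.
    import hmac
    import hashlib
    from cryptography.fernet import Fernet  # third-party: pip install cryptography

    election_key = Fernet.generate_key()            # held by the election authority
    election_box = Fernet(election_key)
    voter_keys = {"voter-001": b"secret-key-001"}   # stand-in for ID-card credentials

    def cast(voter_id: str, choice: str) -> dict:
        """Voter side: encrypt the ballot, then 'sign' the outer envelope."""
        inner = election_box.encrypt(choice.encode())               # inner envelope
        sig = hmac.new(voter_keys[voter_id], inner, hashlib.sha256).hexdigest()
        return {"voter_id": voter_id, "inner": inner, "sig": sig}   # outer envelope

    def count(envelopes: list) -> list:
        """Authority side: check the signature, strip identity, then decrypt."""
        anonymous = []
        for env in envelopes:
            key = voter_keys.get(env["voter_id"])
            if key is None:
                continue                                            # ineligible voter
            expected = hmac.new(key, env["inner"], hashlib.sha256).hexdigest()
            if hmac.compare_digest(expected, env["sig"]):
                anonymous.append(env["inner"])                      # identity discarded
        return [election_box.decrypt(blob).decode() for blob in anonymous]

    print(count([cast("voter-001", "Candidate A")]))                # ['Candidate A']

The design choice the sketch tries to make visible is that eligibility checking and vote decryption happen in separate steps, so the component that opens ballots never sees who cast them.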
The examination of e-democracy encompasses its various stages including "information provision, deliberation, and participation in decision-making."[72]This assessment also takes into account the different hierarchical levels of governance such aslocal communities,states/regions,nations, and the global stage.[73]Further, the scope of involvement is also considered, which includes the participation ofcitizens/voters, themedia,elected officials,political organizations, andgovernments.[74]Therefore, e-democracy's evolution is influenced by such broad changes as increased interdependency, technological multimediation, partnership governance, and individualism.[75] Social mediaplatforms such asFacebook,Twitter,WordPress, andBlogspot, are increasingly significant in democratic dialogues.[76][77]The role of social media in e-democracy is an emerging field of study, along with technological developments such asargument mapsand thesemantic web.[71] Another notable development is the combination of open social networking communication with structured communication from closed expert and/or policy-maker panels, such as through the modifiedDelphi method(HyperDelphi).[78][79] This approach seeks to balance distributed knowledge and self-organized memories with critical control, responsibility, and decision-making in electronic democracy. Social networking serves as an entry point within the citizens' environment, engaging them on their terms. Proponents of e-government believe this helps the government act more in tune with its public. Examples of state usage include The Official Commonwealth of Virginia Homepage,[80]where citizens can findGoogletools andopen socialforums, considered significant steps towards the maturity of e-democracy.[71] Civic engagement encompasses three key aspects: understanding public affairs (political knowledge), trust in the political system (political trust), and involvement in governmental decision-making processes (political participation).[81]The internet enhances civic engagement by creating a new medium for interaction with government institutions.[82] Advocates of e-democracy propose that it can facilitate more active government engagement[83]and inspire citizens to actively influence decisions that directly affect them.[84]Digital tools have and continue to be used to determine the best practices for getting citizens involved in government. Collecting data on what gets citizens involved most efficiently allows for stronger practices going forward in citizen involvement.[85] Numerous studies indicate an increased use of the internet for obtaining political information. From 1996 to 2002, the percentage of adults claiming that the internet played a significant role in their political choices rose from around 14 to 20 percent.[86]In 2002, almost a quarter of the population stated that they had visited a website to research specific public policy issues. Research has indicated that people are more likely to visit websites that challenge their viewpoints rather than those that align with their own beliefs.[citation needed]Around 16 percent of the population has participated in online political activities such as joining campaigns, volunteering time, donating money, or participating in polls. A survey conducted by Philip N. 
Howard revealed that nearly two-thirds of the adult population in the United States has interacted with online political news, information, or other content over the past four election cycles.[86]People tend to reference the websites of special interest groups more frequently than those of specific elected leaders, political candidates, political parties, nonpartisan groups, and local community groups. The vast informational capacity of the Internet empowers citizens to gain a deeper understanding of governmental and political affairs, while its interactive nature fosters new forms of communication with elected officials and public servants. By providing access to contact information, legislation, agendas, and policies, governments can enhance transparency, thereby potentially facilitating more informed participation both online and offline.[87] As articulated by Matt Leighninger, the internet bolsters government by enhancing individual empowerment and reinforcing group agency.[88]The internet avails vital information to citizens, empowering them to influence public policy more effectively. The utilization of online tools for organizing allows citizens to participate more easily in the government's policy-making process, leading to a surge in public engagement. Social media platforms foster networks of individuals whose online activities can shape the political process, including prompting politicians to intensify public appeal efforts in their campaigns. E-democracy offers a digital platform for public dialogue, enhancing the interaction between government and its residents. This form of online engagement enables the government to concentrate on key issues the community wishes to address. The underpinning philosophy is that every citizen should have the potential to influence their local governance. E-democracy aligns with local communities and provides an opportunity for any willing citizen to make a contribution. The essence of an effective e-democracy lies not just in citizen contribution to government activities, but in promoting mutual communication and collaboration among citizens for the improvement of their own communities.[89]: 397 E-democracy utilizesinformation and communication technologies(ICT) to bolster the democratic processes of decision-making. These technologies play a pivotal role in informing and organizing citizens in different avenues of civic participation. Moreover, ICTs enhance the active engagement of citizens, and foster collaboration among stakeholders for policy formation within political processes across all stages of governance.[90][91] TheOrganisation for Economic Co-operation and Development(OECD) identifies three key aspects regarding the role of ICTs in fostering civic engagement. The first aspect is timing, with most civic engagement activities occurring during the agenda-setting phase of a cycle. The second factor is adaptation, which refers to how ICTs evolve to facilitate increased civic participation. The final aspect is integration, representing how emerging ICTs blend new and traditional methods to maximize civic engagement.[92] ICT fosters the possibility of a government that is both more democratic and better informed by facilitating open online collaborations between professionals and the public. The responsibility of collecting information and making decisions is shared between those possessing technological expertise and the traditionally recognized decision-makers. 
This broadened public involvement in the exchange of ideas and policies results in more democratic decision-making. Furthermore, ICT enhances the notion ofpluralismwithin a democracy, introducing fresh issues and viewpoints.[93] Ordinary citizens have the opportunity to become creators of political content and commentary, for instance, by establishing individual blogs and websites. Collaborative efforts in the online political sphere, similar to ABC News' Campaign Watchdog initiative, allow citizens to report any rule violations committed by any political party during elections.[94] In the 2000 United States presidential race, candidates frequently utilized their websites to not only encourage their supporters to vote but to motivate their friends to vote as well. This dual-process approach—urging an individual to vote and then to prompt their friends to vote—was just beginning to emerge during that time. Today, political participation through various social media platforms is typical, and civic involvement via online forums is common. Through the use of ICTs, individuals interested in politics have the ability to become more engaged.[94] In previous years, individuals belonging toGeneration X,Generation Y, andGeneration Z, typically encompassing those aged 35 and below as of the mid-2000s, have been noted for their relative disengagement from political activities.[95]The implementation of electronic democracy has been proposed as a potential solution to foster increased voter turnout, democratic participation, and political literacy among these younger demographics.[96][97] Youth e-citizenship presents a dichotomy between two predominant approaches: management and autonomy. The strategy of "targeting" younger individuals, prompting them to "play their part," can be interpreted as either an incentive for youth activism or a mechanism to regulate it.[98] Autonomous e-citizens argue that despite their relative inexperience, young people should have the right to voice their perspectives on issues that they personally consider important. Conversely, proponents of managed e-citizenship view youth as nascent citizens transitioning from childhood to adulthood, and hence not yet fully equipped to engage in political discourse without proper guidance. Another significant concern is the role of the Internet, with advocates of managed e-citizenship arguing that young people may be especially susceptible to misinformation or manipulation online. This discord manifests as two perspectives on democracy: one that sees democracy as an established and reasonably just system, where young people should be motivated to participate, and another that views democracy as a political and cultural goal best achieved through networks where young people interact. What might initially appear as mere differences in communication styles ultimately reveals divergent strategies for accessing and influencing power.[98] The Highland Youth Voice, an initiative in Scotland, is an exemplar of efforts to bolster democratic participation, particularly through digital means.[99]Despite an increasing emphasis on the youth demographic in UK governmental policy and issues, their engagement and interest have been waning. During the 2001 elections to the Westminster Parliament in the UK, voter turnout among 18- to 24-year-olds was estimated to be a mere 40%. 
This contrasts starkly with the fact that over 80% of 16- to 24-year-olds have accessed the internet at some point.[100]

The United Nations Convention on the Rights of the Child emphasizes the importance of educating young individuals as citizens of their respective nations. It advocates for the promotion of active political participation, which they can shape through robust debate and communication. The Highland Youth Voice strives to boost youth participation by understanding their governmental needs, perspectives, experiences, and aspirations. It provides young Scots, aged 14 to 18, an opportunity to influence decision-makers in the Highlands.[101] This body, consisting of approximately 100 elected members, represents youth voices. Elections occur biennially and candidates are chosen directly from schools and youth forums. The Highland Youth Voice website serves as a pivotal platform where members can discuss issues pertinent to them, partake in online policy debates, and experience a model of e-democracy through simplified online voting. Thus, the website encompasses three key features, forming an online forum that enables youth self-education, participation in policy discourse, and engagement in the e-democracy process.

Civil society organizations have a pivotal role in democracies, as highlighted by theorists such as Alexis de Tocqueville, acting as platforms for citizens to gain knowledge about public affairs and as sources of power beyond the state's reach. According to Hans Klein, a public policy researcher at the Georgia Institute of Technology, there exist several obstacles to participation in these forums, including the logistical challenges of physical meetings.[102] Klein's study of a civic association in the northeastern US revealed that electronic communication significantly boosted the organization's capacity to achieve its objectives. Given the relatively low cost of exchanging information over the Internet and its potential for wide reach, the medium has become an attractive venue for disseminating political information, especially among interest groups and parties operating on smaller budgets. For example, environmental or social interest groups might leverage the Internet as a cost-effective mechanism to raise awareness around their causes. Unlike traditional media outlets, like television or newspapers, which often necessitate substantial financial investments, the Internet provides an affordable and extensive platform for information dissemination. As such, the Internet could potentially supplant certain traditional modes of political communication, such as telephone, television, newspapers, and radio. Consequently, civil society has been increasingly integrating into the online realm.[103]

Civil society encompasses various types of associations. The term interest group is typically used to refer to formal organizations focused on specific social groups, economic sectors like trade unions, business and professional associations, or specific issues such as abortion, gun control, or the environment.[104] Many of these traditional interest groups have well-established organizational structures and formal membership rules, primarily oriented towards influencing government and policy-making processes. Transnational advocacy networks assemble loose coalitions of these organizations under common umbrella organizations that cross national borders.

Innovative tools are increasingly being developed to empower bloggers, webmasters, and social media owners.
These aim to transition from the Internet's strictly informational use to its application as a medium for social organization, independent of top-down initiatives. For instance, the concept ofCalls to actionis a novel approach that enables webmasters to inspire their audience into action without the need for explicit leadership. This trend is global, with countries like India cultivating an activeblogospherethat encouragesinternet usersto express their perspectives and opinions.[105] The Internet serves multifaceted roles for these organizations. It functions as a platform for lobbying elected officials, public representatives, and policy elites; networking with affiliated associations and groups; mobilizing organizers, activists, and members through action alerts, newsletters, and emails; raising funds and recruiting support; and conveying their messages to the public via traditional news media channels. The Internet holds a pivotal role indeliberative democracy, a model that underscores dialogue, open discussion, and access to diverse perspectives in decision-making.[106]It provides an interactive platform and functions as a vital instrument for research within the deliberative process. The Internet facilitates the exchange of ideas through a myriad of platforms such as websites, blogs, and social networking sites like Twitter, all of which champion freedom of expression.[citation needed] It allows for easily accessible and cost-effective information, paving the way for change. One of the intrinsic attributes of the Internet is its unregulated nature, offering a platform for all viewpoints, regardless of their accuracy. The autonomy granted by the Internet can foster and advocate change, a critical factor in e-democracy. A notable development in the application of e-democracy in the deliberative process is theCalifornia Report Card. This tool was created by the Data and Democracy Initiative of theCenter for Information Technology Research in the Interest of Societyat theUniversity of California, Berkeley, in collaboration with Lt. GovernorGavin Newsom.[107]Launched in January 2014, theCalifornia Report Cardis a web application optimized for mobile use, aimed at facilitating onlinedeliberative democracy. The application features a brief opinion poll on six pertinent issues, after which participants are invited to join an online "café". In this space, they are grouped with users sharing similar views throughPrincipal Component Analysis, and are encouraged to participate in the deliberative process by suggesting new political issues and rating the suggestions of other participants. The design of theCalifornia Report Cardis intended to minimize the influence of private agendas on the discussion. Openforum.com.aualso exemplifies eDemocracy. This non-profit Australian project facilitates high-level policy discussions, drawing participants such as politicians, senior public servants, academics, business professionals, and other influential stakeholders. TheOnline Protection and Enforcement of Digital Trade Act(OPEN Act), presented as an alternative to SOPA and PIPA, garners the support of major companies like Google and Facebook. 
Its website, Keep The Web Open,[108]not only provides full access to the bill but also incorporates public input—over 150 modifications have been made through user contributions.[109][110] Thepeer-to-patentproject allows public participation in the patent review process by providing research and 'prior art' publications for patent examiners to assess the novelty of an invention. In this process, the community nominates ten pieces of prior art to be reviewed by the patent examiner. This not only enables direct communication between the public and the patent examiner but also creates a structured environment that prompts participants to provide relevant information to aid in decision-making. By allowing experts and the general public to collaborate in finding solutions, the project aims to enhance the efficacy of the decision-making process. It offers a platform for citizens to participate and express their ideas beyond merely checking boxes that limit their opinions to predefined options.[111] One significant challenge in implementing e-democracy is ensuring the security of internet-voting systems. The potential interference from viruses and malware, which could alter or inhibit citizens' votes on critical issues, hinders the widespread adoption of e-democracy as long as such cybersecurity threats persist.[citation needed] E-voting presents several practical challenges that can affect its legitimacy in elections. For instance, electronic voting machines can be vulnerable to physical interference, as they are often left unattended prior to elections, making them susceptible to tampering. This issue led to a decision by the Netherlands in 2017 to count election votes manually.[112]Furthermore, 'Direct Recording Electronic' (DRE) systems, used in numerous US states, are quickly becoming outdated and prone to faults. A study by USENIX discovered that certain DREs in New Jersey inaccurately counted votes, potentially casting votes for unintended candidates without voters' knowledge. The study found these inconsistencies to be widespread with that specific machine.[113]Despite the potential of electronic voting to increase voter turnout, the absence of a paper trail in DREs can lead to untraceable errors, which could undermine its application in digital democracy. Diminished participation in democracy may stem from the proliferation of polls and surveys, potentially leading to a condition known as survey fatigue.[114] Through Listserv's,RSSfeeds, mobile messaging, micro-blogging services and blogs, government and its agencies can disseminate information to citizens who share common interests and concerns. For instance, many government representatives, includingRhode IslandState TreasurerFrank T. Caprio, have begun to utilizeTwitteras an easy medium for communication. Several non-governmental websites, like transparent.gov.com,[115]andUSA.gov,[116]have developed cross-jurisdiction, customer-focused applications that extract information from thousands of governmental organizations into a unified system, making it easier for citizens to access information. E-democracy has led to a simplified process and access to government information for public-sector agencies and citizens. For example, theIndianaBureau of Motor Vehicles simplified the process of certifying driver records for admission in county court proceedings. 
Indiana became the first state to allow government records to be digitally signed, legally certified and delivered electronically using Electronic Postmark technology.[117] The internet has increased government accessibility to news, policies, and contacts in the 21st century. In 2000, only two percent of government sites offered three or more services online; in 2007, that figure was 58 percent. Also, in 2007, 89 percent of government sites allowed the public to email a public official directly rather than merely emailing the webmaster (West, 2007)"(Issuu). Information and communications technologiescan be utilized for both democratic and anti-democratic purposes. For instance, digital technology can be used to promote both coercive control and active participation.[56]The vision of anti-democratic use of technology is exemplified inGeorge Orwell'sNineteen Eighty-Four. Critiques associated with direct democracy are also considered applicable to e-democracy. This includes the potential for directgovernanceto cause the polarization of opinions,populism, anddemagoguery.[56] The current inability to protect internet traffic from interference and manipulation has significantly limited the potential of e-democracy for decision-making. As a result, most experts express opposition to the use of theinternet for widespread voting.[118][119][120][121][122] In countries with severe government censorship, the full potential of e-democracy might not be realized. Internet clampdowns often occur during extensive political protests. For instance, the series of internet blackouts in the Middle East in 2011, termed as the "Arab Net Crackdown", provides a significant example. Governments in Libya, Egypt, Bahrain, Syria, Iran, and Yemen have all implemented total internet censorship in response to the numerous pro-democracy demonstrations within their respective nations.[123]These lockdowns were primarily instituted to prevent the dissemination of cell phone videos that featured images of government violence against protesters.[124] Joshua A. Tucker and his colleagues critique e-democracy, pointing out that the adaptability and openness of social media may allow political entities to manipulate it for their own ends.[125]They suggest that authorities could use social media to spread authoritarian practices in several ways. Firstly, by intimidating opponents, monitoring private conversations, and even jailing those who voice undesirable opinions. Secondly, by flooding online spaces with pro-regime messages, thereby diverting and occupying these platforms. Thirdly, by disrupting signal access to hinder the flow of information. Lastly, by banning globalized platforms and websites.[125] A study that interviewed elected officials in Austria's parliament revealed a broad and strong opposition to e-democracy. These officials held the view that citizens, generally uninformed, should limit their political engagement to voting. The task of sharing opinions and ideas, they contended, belonged solely to elected representatives.[126][17] Contrary to this view, theories ofepistemic democracysuggest that greater public engagement contributes to the aggregation of knowledge and intelligence. This active participation, proponents argue, enables democracies to better discern the truth. The introduction of H.R. 
3261, theStop Online Piracy Act (SOPA), in the United States House of Representatives, was perceived by many internet users as an attack on internet democracy.[127][128]A contributor to the Huffington Post argued that defeating SOPA was crucial for the preservation of democracy and freedom of speech.[127] Significantly, SOPA was indefinitely postponed following widespread protests, which included a site blackout by popular websites like Wikipedia on 18 January 2012.[129] A comparable event occurred in India towards the end of 2011, when the country's Communication and IT MinisterKapil Sibalsuggested pre-screening content for offensive material before its publication on the internet, with no clear mechanism for appeal.[76]Subsequent reports, however, quote Sibal as stating that there would be no restrictions on internet use.[130] A radical shift from a representative government to an internet-mediated direct democracy is not considered likely.[citation needed]Nonetheless, proponents suggest that a "hybrid model" which leverages the internet for enhanced governmental transparency and greater community involvement in decision-making could be forthcoming.[131]The selection of committees, local town and city decisions, and other people-centric decisions could be more readily facilitated through this approach. This doesn't indicate a shift in the principles of democracy but rather an adaptation in the tools utilized to uphold them. E-democracy would not serve as a means to enact direct democracy, but rather as a tool to enable a more participatory form of democracy as it exists currently.[132] Supporters of e-democracy often foresee a transition from arepresentative democracyto adirect democracy, facilitated by technology, viewing this transition as an ultimate goal of e-democracy.[133]In an electronic direct democracy (EDD) – also referred to asopen source governanceorcollaborative e-democracy– citizens are directly involved in thelegislativefunction through electronic means. Theyvote electronicallyon legislation, propose new legislation, and recall representatives, if any are retained. Technology to support electronic direct democracy (EDD) has been researched and developed at theFlorida Institute of Technology, where it has been applied within student organizations.[134]Many other software development projects are currently underway,[135]along with numerous supportive and related projects.[136]Several of these projects are now collaborating on a cross-platform architecture within the framework of the Meta-government project.[137] EDD as a system is not fully implemented in a political government anywhere in the world, although several initiatives are currently forming. In the United States, businessman and politicianRoss Perotwas a prominent supporter of EDD, advocating for "electronictown halls" during his1992and1996presidential campaigns.Switzerland, already partially governed by direct democracy, is making progress towards such a system.[138]Senator On-Line, an Australian political party established in 2007, proposes to institute an EDD system so that Australians can decide which way the senators vote on each and every bill.[139]A similar initiative was formed 2002 in Sweden where the partyDirektdemokraterna, running for theParliament, offered its members the power to decide the actions of the party over all or some areas of decision, or to use a proxy with immediate recall for one or several areas. 
Liquid democracy, or direct democracy incorporating adelegable proxy, enables citizens to appoint a proxy for voting on their behalf, while retaining the ability to cast their own vote on legislation. This voting and proxy assignment could be conducted electronically. Extending this concept, proxies could establishproxy chains; for instance, if citizen A appoints citizen B, and B appoints citizen C, and only C votes on a proposed bill, C's vote will represent all three of them. Citizens could also rank their proxies by preference, meaning that if their primary proxy does not vote, their vote could be cast by their second-choice proxy. One form of e-democracy that has been proposed is "wikidemocracy", where the codex of laws in a government legislature could be editable via a wiki, similar to Wikipedia. In 2012, J Manuel Feliz-Teixeira suggested that the resources necessary for implementing wikidemocracy were already accessible. He envisages a system in which citizens can participate in legislative, executive, and judiciary roles via a wiki-system. Every citizen would have free access to this wiki and a personal ID to make policy reforms continuously until the end of December, when all votes would be tallied.[140]Perceived benefits of wikidemocracy include a cost-free system that eliminates elections and the need for parliament or representatives, as citizens would directly represent themselves, and the ease of expressing one's opinion. However, there are several potential obstacles and disagreements. The digital divide and educational inequality could hinder the full potential of a wikidemocracy. Similarly, differing rates of technological adoption mean that some people might readily accept new methods, while others reject or are slow to adapt.[141]Security is also a concern; we would need to trust that the system administrators would ensure a high level of integrity to safeguard votes in the public domain. Peter Levine concurs that wikidemocracy could increase discussion on political and moral issues but disagrees with Feliz-Teixeira, arguing that representatives and formal governmental structures would still be needed.[142] The term "wikidemocracy" is also used to refer to more specific instances of e-democracy. For example, in August 2011 in Argentina, the voting records from the presidential election were made available to the public in an online format for scrutiny.[143]More broadly, the term can refer to the democratic values and environments facilitated by wikis.[144] In 2011, a group inFinlandexplored the concept of wikidemocracy by creating an online "shadow government program". This initiative was essentially a compilation of the political views and goals of various Finnish groups, assembled on a wiki.[145] Egora, also known as "intelligent democracy", is a free software application developed for political opinion formation and decision-making. It is filed under thecopyleftlicensing system. The name "Egora" is a blend of "electronic" and "agora", a term fromAncient Greekdenoting the central public space in city-states (polis). The ancient agora was the hub of public life, facilitating social interactions, business transactions, and discussions. Drawing from this Ancient Greek concept, Egora aims to foster a new, rational, efficient, and incorruptible form of democratic organization. 
It allows users to form their own political philosophies from diverse ideas, ascertain the most popular ideas among the public, organize meetings to scrutinize and debate these ideas, and employ a simple algorithm to identify true representatives of the public will.[146] The theme of e-democracy has frequently appeared inscience fiction. Works such asDavid's SlingbyMarc StieglerandEnder's GamebyOrson Scott Cardnotably predicted forms of the internet before it actually came into existence. These early conceptualizations of the internet, and their implications for democracy, served as major plot drivers in these stories. InDavid's Sling,Marc Stieglerpresents e-democracy as a strategy leveraged by a team of hackers to construct a computer-controlled smart weapon. They utilize an online debate platform, the Information Decision Duel, where two parties delve deeply into the intricacies of their arguments, dissecting the pros and cons before a neutral referee selects the more convincing side. This fictional portrayal of an internet-like system for public discourse echoes real-world aspirations for e-democracy, underscoring thorough issue analysis, technological enablement, and transparency.[147]The book's dedication, "To those who never stop seeking the third alternatives," epitomizes this emphasis on comprehensive issue scrutiny. Orson Scott Card'sEnder's Gamealso explores e-democracy, with the internet portrayed as a powerful platform for political discourse and social change. Two of the characters, siblings Valentine and Peter, use this platform to anonymously share their political views, gaining considerable influence. Their activities lead to a significant political shift, even though they are just children posing as adults. This highlights the issue of true identity within online participation and raises questions about the potential for manipulation in e-democracy.[148] E-democracy has also been depicted in: These works provide varied perspectives on the potential benefits and challenges of e-democracy. [1]
https://en.wikipedia.org/wiki/E-democracy
Mental poker is the common name for a set of cryptographic problems that concern playing a fair game over distance without the need for a trusted third party. The term is also applied to the theories surrounding these problems and their possible solutions. The name comes from the card game poker, which is one of the games to which this kind of problem applies. Similar problems described as two-party games are Blum's flipping a coin over a distance, Yao's Millionaires' Problem, and Rabin's oblivious transfer.

The problem can be described thus: "How can one allow only authorized actors to have access to certain information while not using a trusted arbiter?" (Eliminating the trusted third party avoids the problem of trying to determine whether the third party can be trusted or not, and may also reduce the resources required.)

In poker, this could translate to: "How can we make sure no player is stacking the deck or peeking at other players' cards when we are shuffling the deck ourselves?" In a physical card game, this would be relatively simple if the players were sitting face to face and observing each other, at least if the possibility of conventional cheating can be ruled out. However, if the players are not sitting at the same location but instead are at widely separated locations and pass the entire deck between them (using the postal mail, for instance), this suddenly becomes very difficult. And for electronic card games, such as online poker, where the mechanics of the game are hidden from the user, this is impossible unless the method used is such that it cannot allow any party to cheat by manipulating or inappropriately observing the electronic "deck".

Several protocols for doing this have been suggested, the first by Adi Shamir, Ron Rivest and Len Adleman (the creators of the RSA encryption protocol).[1] This protocol was the first example of two parties conducting secure computation rather than secure message transmission using cryptography; because the original protocol leaked partial information, it later led to the definition of semantic security by Shafi Goldwasser and Silvio Micali. The concept of multi-player mental poker was introduced in Moti Yung's 1984 book Cryptoprotocols.[2] The area later evolved into what is known as secure multi-party computation protocols (for two parties as well as many parties).

One possible algorithm for shuffling cards without the use of a trusted third party is to use a commutative encryption scheme. A commutative scheme means that if some data is encrypted more than once, the order in which one decrypts this data will not matter. Example: Alice has a plaintext message. She encrypts this, producing a garbled ciphertext which she then gives to Bob. Bob encrypts the ciphertext again, using the same scheme as Alice but with another key. When decrypting this doubly encrypted message, if the encryption scheme is commutative, it will not matter who decrypts first.

An algorithm for shuffling cards using commutative encryption would be as follows (a toy code sketch of this shuffle appears at the end of this article):

1. Alice and Bob agree on a certain "deck" of cards, i.e. a set of distinct values in which each element represents a card.
2. Alice picks an encryption key A and uses it to encrypt each card of the deck.
3. Alice shuffles the cards.
4. Alice passes the encrypted and shuffled deck to Bob. With the encryption in place, Bob cannot know which card is which.
5. Bob picks an encryption key B and uses it to encrypt each card of the encrypted and shuffled deck.
6. Bob shuffles the deck.
7. Bob passes the doubly encrypted and shuffled deck back to Alice.
8. Alice decrypts each card using her key A. This still leaves Bob's encryption in place, so she cannot know which card is which.
9. Alice picks one encryption key for each card (A1, A2, ...) and encrypts the cards individually.
10. Alice passes the deck to Bob.
11. Bob decrypts each card using his key B. This still leaves Alice's individual encryption in place, so he cannot know which card is which.
12. Bob picks one encryption key for each card (B1, B2, ...) and encrypts the cards individually.
13. Bob passes the deck back to Alice.

The deck is now shuffled. This algorithm may be expanded for an arbitrary number of players. Players Carol, Dave and so forth need only repeat steps 2-4 and 8-10. During the game, Alice and Bob will pick cards from the deck, identified by the order in which they are placed in the shuffled deck. When either player wants to see their cards, they will request the corresponding keys from the other player.
That player, upon checking that the requesting player is indeed entitled to look at the cards, passes the individual keys for those cards to the other player. The check is to ensure that the player does not try to request keys for cards that do not belong to that player.

Example: Alice has picked cards 1 to 5 in the shuffled deck. Bob has picked cards 6 to 10. Bob requests to look at his allotted cards. Alice agrees that Bob is entitled to look at cards 6 to 10 and gives him her individual card keys A6 to A10. Bob decrypts his cards by using both Alice's keys and his own for these cards, B6 to B10. Bob can now see the cards. Alice cannot know which cards Bob has because she does not have access to Bob's keys B6 to B10, which are required to decrypt the cards.

The encryption scheme used must be secure against known-plaintext attacks: Bob must not be able to determine Alice's original key A (or enough of it to allow him to decrypt any cards he does not hold) based on his knowledge of the unencrypted values of the cards he has drawn. This rules out some obvious commutative encryption schemes, such as simply XORing each card with the key. (Using a separate key for each card even in the initial exchange, which would otherwise make this scheme secure, doesn't work since the cards are shuffled before they're returned.)

Depending on the deck agreed upon, this algorithm may be weak. When encrypting data, certain properties of this data may be preserved from the plaintext to the ciphertext. This may be used to "tag" certain cards. Therefore, the parties must agree on a deck where no cards have properties that are preserved during encryption.

Christian Schindelhauer describes sophisticated protocols to both perform and verify a large number of useful operations on cards and stacks of cards in his 1998 paper "A Toolbox for Mental Card Games" [SCH98]. The work is concerned with general-purpose operations (masking and unmasking cards, shuffling and re-shuffling, inserting a card into a stack, etc.) that make the protocols applicable to any card game. The cryptographic protocols used by Schindelhauer are based on quadratic residuosity, and the general scheme is similar in spirit to the above protocol. The correctness of operations can be checked by using zero-knowledge proofs, so that players do not need to reveal their strategy to verify the game's correctness. The C++ library libtmcg [STA05] provides an implementation of the Schindelhauer toolbox. It has been used to implement a secure version of the German card game Skat, achieving modest real-world performance. The game Skat is played by three players with a 32-card deck, and so is substantially less computationally intensive than a poker game in which anywhere from five to eight players use a full 52-card deck.

To date, mental poker approaches based on the standard Alice-Bob protocol (above) do not offer high enough performance for real-time online play. The requirement that each player encrypts each card imposes a substantial overhead. A recent paper by Golle [GOL05] describes a mental poker protocol that achieves significantly higher performance by exploiting the properties of the poker game to move away from the encrypt-shuffle model. Rather than shuffle the cards and then deal as needed, with the new approach the players generate (encrypted) random numbers on the fly, which are used to select the next card. Every new card needs to be checked against all the cards that have already been dealt to detect duplicates.
As a result, this method is uniquely useful in poker-style games, in which the number of cards dealt is very small compared to the size of the whole deck. However, the method needs all cards that have already been dealt to be known to all, which in most poker-style games would defeat its very purpose.

The card-generation algorithm requires a cryptosystem with two key properties. First, the encryption E must be additively homomorphic, so that E(c1)·E(c2) = E(c1 + c2). Second, collisions must be detectable without revealing the plaintext: given E(c1) and E(c2), it must be possible to answer whether c1 = c2, without the players learning any other information (specifically, the identities of c1 and c2). The ElGamal encryption scheme is just one example of a well-known system with these properties. The algorithm has the players jointly generate an encrypted random card value and test it for collisions against the cards already dealt. In this way, the players need only compute encryptions for the cards that are actually used in the game, plus some overhead for the collisions, which is small as long as the number of cards needed is much less than the size of the deck. As a result, this scheme turns out to be 2-4 times faster (as measured by the total number of modular exponentiations) than the best-known protocol [JAK99] that does full shuffling using mix-networks.

Note that the random number generation is secure as long as any one player is generating valid random numbers. Even if k−1 players collude to generate the number r*, as long as the k-th player truthfully generates a random r′, the sum r = r* + r′ is still uniformly random in {0, ..., 51}.

Measured in terms of the number of single-agent encryptions, the algorithm in [GOL05] is optimal when no collisions occur, in the sense that any protocol that is fair to every player must perform at least as many encryption operations. At a minimum, every agent must encrypt every card that is actually used. Otherwise, if any agent doesn't participate in the encryption, then that agent is susceptible to being cheated by a coalition of the remaining players: unknown to the non-encrypting agent, the other agents may share the keys to enable them all to know the values of all the cards. Thus, any approach relying on the agents to perform the encryption must focus on schemes that minimize the effect of collisions if it is to achieve better performance.

Any mental poker protocol that relies on the players to perform the encryption is bound by the requirement that every player encrypt every card that is dealt. However, by making limited assumptions about the trustworthiness of third parties, significantly more efficient protocols may be realized. The protocol for choosing cards without shuffling may be adapted so that the encryption is handled by two or more servers. Under the assumption that the servers are non-colluding, such a protocol is secure. In the basic two-server protocol, servers S1 and S2 must collude if either is to learn the values of any cards. Furthermore, because the players ultimately decide which cards are dealt, non-trustworthy servers are unable to influence the game to the extent that is possible in traditional online poker. The scheme may be extended to allow more servers (and thus increased security) simply by including the additional servers in the initial encryption. Finally, the first step of the protocol may be done offline, allowing for large numbers of shuffled, encrypted "decks" to be pre-computed and cached, resulting in excellent in-game performance.
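As an added illustration (not part of the original article), here is a toy sketch of the commutative-encryption shuffle described at the beginning of this article. It uses SRA-style commutative encryption, i.e. modular exponentiation modulo a shared prime, which commutes because exponents multiply; the parameters are toy-sized, and, as noted above, this scheme leaks algebraic properties such as quadratic residuosity, so it is for illustration only.

```python
import random
from math import gcd

P = 2**127 - 1  # shared public Mersenne prime; toy-sized, offers no real security

def make_key():
    """Pick an exponent e coprime to P-1, plus its inverse d for decryption."""
    while True:
        e = random.randrange(3, P - 1)
        if gcd(e, P - 1) == 1:
            return e, pow(e, -1, P - 1)

def encrypt(card, e):
    return pow(card, e, P)

def decrypt(card, d):
    return pow(card, d, P)

# Commutativity: encrypting with Alice's then Bob's key equals Bob's then Alice's,
# because (m^a)^b = (m^b)^a (mod P).
deck = [card + 2 for card in range(52)]   # public card encodings (avoid the fixed points 0 and 1)

a_e, a_d = make_key()                     # Alice's deck key
b_e, b_d = make_key()                     # Bob's deck key

# Alice encrypts every card with her key and shuffles, then hands the deck to Bob.
deck = [encrypt(c, a_e) for c in deck]
random.shuffle(deck)

# Bob encrypts every (already encrypted) card with his key and shuffles again.
deck = [encrypt(c, b_e) for c in deck]
random.shuffle(deck)

# Neither player can now identify a card by itself: each sees a permutation made
# by the other under a key they do not hold.  To reveal a card to one player, the
# other player removes their layer of encryption for that position only.
position = 0                                  # say Bob draws the top card
partially = decrypt(deck[position], a_d)      # Alice strips her encryption layer
card_value = decrypt(partially, b_d)          # Bob strips his own layer and sees the card
assert 2 <= card_value <= 53
print("Bob drew the card encoded as", card_value)

# Caveat noted in the article: exponentiation mod P preserves some algebraic
# properties of the plaintext (e.g. quadratic residuosity), which can be used to
# "tag" cards, so the agreed deck encodings must be chosen to avoid such leaks.
```

This simplified sketch keeps only the two deck-level keys; the full protocol above additionally re-encrypts each card with individual keys so that deck keys can be removed without revealing the shuffle.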
https://en.wikipedia.org/wiki/Mental_poker
In category theory and its applications to mathematics, a normal monomorphism or conormal epimorphism is a particularly well-behaved type of morphism. A normal category is a category in which every monomorphism is normal. A conormal category is one in which every epimorphism is conormal.

A monomorphism is normal if it is the kernel of some morphism, and an epimorphism is conormal if it is the cokernel of some morphism. A category C is binormal if it's both normal and conormal. But note that some authors will use the word "normal" only to indicate that C is binormal.[citation needed]

In the category of groups, a monomorphism f from H to G is normal if and only if its image is a normal subgroup of G. In particular, if H is a subgroup of G, then the inclusion map i from H to G is a monomorphism, and will be normal if and only if H is a normal subgroup of G. In fact, this is the origin of the term "normal" for monomorphisms.[citation needed]

On the other hand, every epimorphism in the category of groups is conormal (since it is the cokernel of its own kernel), so this category is conormal.

In an abelian category, every monomorphism is the kernel of its cokernel, and every epimorphism is the cokernel of its kernel. Thus, abelian categories are always binormal. The category of abelian groups is the fundamental example of an abelian category, and accordingly every subgroup of an abelian group is a normal subgroup.
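As an added illustration of the group-theoretic example above (this worked case is not in the original text, but follows directly from the definitions):

```latex
% Added worked example: a normal and a non-normal monomorphism in the
% category of groups.  The inclusion of the alternating group A_n into S_n
% is the kernel of the sign homomorphism, hence a normal monomorphism:
\[
  A_n \;=\; \ker\bigl(\operatorname{sgn}\colon S_n \to \{\pm 1\}\bigr).
\]
% By contrast, the inclusion of the subgroup generated by a transposition,
% e.g. \langle (1\,2) \rangle \subset S_3, is a monomorphism that is not
% normal: a kernel is always a normal subgroup, and this subgroup is not
% normal in S_3.
```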
https://en.wikipedia.org/wiki/Normal_morphism
In category theory, a branch of mathematics, a zero morphism is a special kind of morphism exhibiting properties like the morphisms to and from a zero object.

Suppose C is a category, and f : X → Y is a morphism in C. The morphism f is called a constant morphism (or sometimes left zero morphism) if for any object W in C and any g, h : W → X, fg = fh. Dually, f is called a coconstant morphism (or sometimes right zero morphism) if for any object Z in C and any g, h : Y → Z, gf = hf. A zero morphism is one that is both a constant morphism and a coconstant morphism.

A category with zero morphisms is one where, for every two objects A and B in C, there is a fixed morphism 0_AB : A → B, and this collection of morphisms is such that for all objects X, Y, Z in C and all morphisms f : Y → Z, g : X → Y, the compositions agree (i.e. the corresponding diagram commutes):

f ∘ 0_XY = 0_XZ = 0_YZ ∘ g.

The morphisms 0_XY are then necessarily zero morphisms and form a compatible system of zero morphisms. If C is a category with zero morphisms, then the collection of the 0_XY is unique.[1]

This way of defining a "zero morphism" and the phrase "a category with zero morphisms" separately is unfortunate, but if each hom-set has a unique "zero morphism", then the category "has zero morphisms".

If C has a zero object 0, then given two objects X and Y in C, there are canonical morphisms f : X → 0 and g : 0 → Y. Then gf is a zero morphism in Mor_C(X, Y). Thus, every category with a zero object is a category with zero morphisms, given by the composition 0_XY : X → 0 → Y.

If a category has zero morphisms, then one can define the notions of kernel and cokernel for any morphism in that category.
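As an added concrete case (not in the original text, but an immediate consequence of the last paragraph), the category of groups has a zero object, so its zero morphisms are the trivial homomorphisms:

```latex
% Added example: zero morphisms in the category of groups.  The trivial
% group 1 is a zero object, so the zero morphism between any two groups
% X and Y factors through it and sends everything to the identity:
\[
  0_{XY} \colon X \longrightarrow 1 \longrightarrow Y,
  \qquad 0_{XY}(x) = e_Y \quad \text{for all } x \in X .
\]
% The compatibility condition f \circ 0_{XY} = 0_{XZ} = 0_{YZ} \circ g holds
% because composing any homomorphism with a trivial homomorphism, on either
% side, again yields a trivial homomorphism.
```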
https://en.wikipedia.org/wiki/Zero_morphism
In mathematics, a quotient category is a category obtained from another category by identifying sets of morphisms. Formally, it is a quotient object in the category of (locally small) categories, analogous to a quotient group or quotient space, but in the categorical setting.

Let C be a category. A congruence relation R on C is given by: for each pair of objects X, Y in C, an equivalence relation R_X,Y on Hom(X, Y), such that the equivalence relations respect composition of morphisms. That is, if f1, f2 : X → Y are related in Hom(X, Y) and g1, g2 : Y → Z are related in Hom(Y, Z), then g1f1 and g2f2 are related in Hom(X, Z).

Given a congruence relation R on C we can define the quotient category C/R as the category whose objects are those of C and whose morphisms are equivalence classes of morphisms in C. That is,

Hom_{C/R}(X, Y) = Hom_C(X, Y) / R_X,Y.

Composition of morphisms in C/R is well-defined since R is a congruence relation. There is a natural quotient functor from C to C/R which sends each morphism to its equivalence class. This functor is bijective on objects and surjective on Hom-sets (i.e. it is a full functor).

Every functor F : C → D determines a congruence on C by saying f ~ g iff F(f) = F(g). The functor F then factors through the quotient functor C → C/~ in a unique manner. This may be regarded as the "first isomorphism theorem" for categories.

If C is an additive category and we require the congruence relation ~ on C to be additive (i.e. if f1, f2, g1 and g2 are morphisms from X to Y with f1 ~ f2 and g1 ~ g2, then f1 + g1 ~ f2 + g2), then the quotient category C/~ will also be additive, and the quotient functor C → C/~ will be an additive functor.

The concept of an additive congruence relation is equivalent to the concept of a two-sided ideal of morphisms: for any two objects X and Y we are given an additive subgroup I(X, Y) of Hom_C(X, Y) such that for all f ∈ I(X, Y), g ∈ Hom_C(Y, Z) and h ∈ Hom_C(W, X), we have gf ∈ I(X, Z) and fh ∈ I(W, Y). Two morphisms in Hom_C(X, Y) are congruent iff their difference is in I(X, Y).

Every unital ring may be viewed as an additive category with a single object, and the quotient of additive categories defined above coincides in this case with the notion of a quotient ring modulo a two-sided ideal.

The localization of a category introduces new morphisms to turn several of the original category's morphisms into isomorphisms. This tends to increase the number of morphisms between objects, rather than decrease it as in the case of quotient categories. But in both constructions it often happens that two objects become isomorphic that weren't isomorphic in the original category.

The Serre quotient of an abelian category by a Serre subcategory is a new abelian category which is similar to a quotient category but also in many cases has the character of a localization of the category.
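A standard example of such a congruence (added here for illustration; it is not discussed in the text above): homotopy of continuous maps respects composition, so it defines a congruence on the category Top of topological spaces, and the resulting quotient is the homotopy category.

```latex
% Added example of a quotient category: the homotopy category hTop.
% Homotopy of continuous maps is an equivalence relation on each hom-set of
% Top that is compatible with composition, hence a congruence relation; the
% quotient category identifies homotopic maps:
\[
  \operatorname{Hom}_{\mathrm{hTop}}(X, Y)
    \;=\; \operatorname{Hom}_{\mathrm{Top}}(X, Y) / {\simeq}
    \;=\; [X, Y],
\]
% where [X, Y] denotes the set of homotopy classes of continuous maps X -> Y.
```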
https://en.wikipedia.org/wiki/Quotient_category
Cramér's theorem is a fundamental result in the theory of large deviations, a subdiscipline of probability theory. It determines the rate function of a series of iid random variables. A weak version of this result was first shown by Harald Cramér in 1938.

The logarithmic moment generating function (which is the cumulant-generating function) of a random variable is defined as

\[ \Lambda(t) = \log \operatorname{E}\left[ \exp(t X_1) \right]. \]

Let X_1, X_2, \dots be a sequence of iid real random variables with finite logarithmic moment generating function, i.e. \Lambda(t) < \infty for all t \in \mathbb{R}. Then the Legendre transform of \Lambda,

\[ \Lambda^*(x) := \sup_{t \in \mathbb{R}} \left( t x - \Lambda(t) \right), \]

satisfies

\[ \lim_{n \to \infty} \frac{1}{n} \log \operatorname{P}\left( \sum_{i=1}^{n} X_i \geq n x \right) = -\Lambda^*(x) \]

for all x > \operatorname{E}[X_1].

In the terminology of the theory of large deviations the result can be reformulated as follows: if X_1, X_2, \dots is a series of iid random variables, then the distributions \left( \mathcal{L}\left( \tfrac{1}{n} \sum_{i=1}^{n} X_i \right) \right)_{n \in \mathbb{N}} satisfy a large deviation principle with rate function \Lambda^*.
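As an added worked example (not part of the original article, but a standard computation), the rate function has a closed form for iid Bernoulli variables:

```latex
% Added worked example: the Cramér rate function for X_i ~ Bernoulli(p).
% Here Lambda(t) = log E[e^{t X_1}] = log(1 - p + p e^t), and the Legendre
% transform works out to the relative entropy between Bernoulli(x) and
% Bernoulli(p):
\[
  \Lambda^*(x) \;=\; x \log\frac{x}{p} \;+\; (1 - x)\log\frac{1 - x}{1 - p},
  \qquad p < x < 1 .
\]
% For a fair coin (p = 1/2) and x = 3/4 this gives
% Lambda^*(3/4) = (3/4)log(3/2) + (1/4)log(1/2) ≈ 0.131, so
% P(X_1 + ... + X_n >= 3n/4) decays roughly like e^{-0.131 n}.
```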
https://en.wikipedia.org/wiki/Cram%C3%A9r%27s_theorem_(large_deviations)
The term Blahut–Arimoto algorithm is often used to refer to a class of algorithms for computing numerically either the information theoretic capacity of a channel, the rate-distortion function of a source or a source encoding (i.e. compression to remove the redundancy). They are iterative algorithms that eventually converge to one of the maxima of the optimization problem that is associated with these information theoretic concepts.

For the case of channel capacity, the algorithm was independently invented by Suguru Arimoto[1] and Richard Blahut.[2] In addition, Blahut's treatment gives algorithms for computing rate distortion and generalized capacity with input constraints (i.e. the capacity-cost function, analogous to rate-distortion). These algorithms are most applicable to the case of arbitrary finite alphabet sources. Much work has been done to extend it to more general problem instances.[3][4] Recently, a version of the algorithm that accounts for continuous and multivariate outputs was proposed with applications in cellular signaling.[5] There also exists a version of the Blahut–Arimoto algorithm for directed information.[6]

A discrete memoryless channel (DMC) can be specified using two random variables X, Y with alphabets \mathcal{X}, \mathcal{Y}, and a channel law given as a conditional probability distribution p(y|x). The channel capacity, defined as C := \sup_{p_X} I(X; Y), indicates the maximum efficiency with which a channel can communicate, in units of bits per use.[7] Now if we denote the cardinalities |\mathcal{X}| = n, |\mathcal{Y}| = m, then p_{Y|X} is an n × m matrix, whose i-th row, j-th column entry we denote by w_{ij}.

For the case of channel capacity, the algorithm was independently invented by Suguru Arimoto[8] and Richard Blahut.[9] They both found the following expression for the capacity of a DMC with channel law w:

\[ C = \max_{\mathbf{p}} \max_{Q} \sum_{i=1}^{n} \sum_{j=1}^{m} p_i w_{ij} \log\left( \frac{Q_{ji}}{p_i} \right) \]

where \mathbf{p} = (p_1, \dots, p_n) ranges over probability distributions on the input alphabet, and Q = (Q_{ji}) ranges over matrices with non-negative entries whose rows each sum to one (so that Q_{ji} can be read as a conditional probability of input i given output j).

Then, upon picking a random probability distribution \mathbf{p}^0 := (p_1^0, p_2^0, \dots, p_n^0) on \mathcal{X}, we can generate a sequence (\mathbf{p}^0, Q^0, \mathbf{p}^1, Q^1, \dots) iteratively as follows:

\[ q_{ji}^{t} := \frac{p_i^{t} w_{ij}}{\sum_{k=1}^{n} p_k^{t} w_{kj}} \]

\[ p_k^{t+1} := \frac{\prod_{j=1}^{m} \left( q_{jk}^{t} \right)^{w_{kj}}}{\sum_{i=1}^{n} \prod_{j=1}^{m} \left( q_{ji}^{t} \right)^{w_{ij}}} \]

for t = 0, 1, 2, \dots. Then, using the theory of optimization, specifically coordinate descent, Yeung[10] showed that the sequence indeed converges to the required maximum, that is,

\[ \lim_{t \to \infty} \sum_{i=1}^{n} \sum_{j=1}^{m} p_i^{t} w_{ij} \log\left( \frac{Q_{ji}^{t}}{p_i^{t}} \right) = C. \]

So, given a channel law p(y|x), the capacity can be numerically estimated up to arbitrary precision.

Suppose we have a source X with probability p(x) of any given symbol.
We wish to find an encoding p(\hat{x}|x) that generates a compressed signal \hat{X} from the original signal X while minimizing the expected distortion \langle d(x, \hat{x}) \rangle, where the expectation is taken over the joint probability of X and \hat{X}. We can find an encoding that minimizes the rate-distortion functional locally by repeating the following iteration until convergence:

\[ q_{t+1}(\hat{x}) = \sum_{x} p(x)\, p_t(\hat{x}|x) \]

\[ p_{t+1}(\hat{x}|x) = \frac{q_{t+1}(\hat{x}) \exp\left( -\beta\, d(x, \hat{x}) \right)}{\sum_{\hat{x}'} q_{t+1}(\hat{x}') \exp\left( -\beta\, d(x, \hat{x}') \right)} \]

where \beta is a parameter related to the slope in the rate-distortion curve that we are targeting and thus is related to how much we favor compression versus distortion (higher \beta means less compression).
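As an added illustration (not from the original article), the channel-capacity iteration given earlier in this article can be sketched in a few lines of NumPy; the array `w` plays the role of the channel matrix w_{ij} and `p` the input distribution p_i. This is a minimal sketch rather than a hardened implementation, and it assumes every output symbol has nonzero probability under some input.

```python
import numpy as np

def blahut_arimoto_capacity(w, tol=1e-12, max_iter=10_000):
    """Estimate the capacity (in bits per channel use) of a discrete memoryless
    channel with transition matrix w, where w[i, j] = p(y_j | x_i)."""
    n, m = w.shape
    p = np.full(n, 1.0 / n)          # p^0: start from the uniform input distribution

    for _ in range(max_iter):
        # q[j, i] := p_i w_ij / sum_k p_k w_kj  (posterior of the input given the output)
        q = (p[:, None] * w).T
        q /= q.sum(axis=1, keepdims=True)

        # p_k^{t+1} ∝ prod_j q[j, k]^{w[k, j]}  (done in log space for numerical stability)
        log_p_new = np.sum(w * np.log(q.T + 1e-300), axis=1)
        p_new = np.exp(log_p_new - log_p_new.max())
        p_new /= p_new.sum()

        if np.max(np.abs(p_new - p)) < tol:
            p = p_new
            break
        p = p_new

    # Evaluate the mutual information I(X; Y) at the final input distribution.
    q = (p[:, None] * w).T
    q /= q.sum(axis=1, keepdims=True)
    with np.errstate(divide="ignore", invalid="ignore"):
        terms = p[:, None] * w * np.log2(q.T / p[:, None])
    return np.nansum(terms), p

# Example: a binary symmetric channel with crossover probability 0.1
# should give capacity 1 - H(0.1) ≈ 0.531 bits per use.
w_bsc = np.array([[0.9, 0.1],
                  [0.1, 0.9]])
capacity, p_opt = blahut_arimoto_capacity(w_bsc)
print(round(capacity, 3), p_opt)
```

The rate-distortion iteration above follows the same alternating pattern, with the reproduction distribution q and the encoder p(\hat{x}|x) taking the places of the input distribution and the posterior.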
https://en.wikipedia.org/wiki/Blahut%E2%80%93Arimoto_algorithm
In information theory, data compression, source coding,[1] or bit-rate reduction is the process of encoding information using fewer bits than the original representation.[2] Any particular compression is either lossy or lossless. Lossless compression reduces bits by identifying and eliminating statistical redundancy. No information is lost in lossless compression. Lossy compression reduces bits by removing unnecessary or less important information.[3] Typically, a device that performs data compression is referred to as an encoder, and one that performs the reversal of the process (decompression) as a decoder.

The process of reducing the size of a data file is often referred to as data compression. In the context of data transmission, it is called source coding: encoding is done at the source of the data before it is stored or transmitted.[4] Source coding should not be confused with channel coding, for error detection and correction, or line coding, the means for mapping data onto a signal.

Data compression algorithms present a space-time complexity trade-off between the bytes needed to store or transmit information and the computational resources needed to perform the encoding and decoding. The design of data compression schemes involves balancing the degree of compression, the amount of distortion introduced (when using lossy data compression), and the computational resources or time required to compress and decompress the data.[5]

Lossless data compression algorithms usually exploit statistical redundancy to represent data without losing any information, so that the process is reversible. Lossless compression is possible because most real-world data exhibits statistical redundancy. For example, an image may have areas of color that do not change over several pixels; instead of coding "red pixel, red pixel, ..." the data may be encoded as "279 red pixels". This is a basic example of run-length encoding; there are many schemes to reduce file size by eliminating redundancy.

The Lempel–Ziv (LZ) compression methods are among the most popular algorithms for lossless storage.[6] DEFLATE is a variation on LZ optimized for decompression speed and compression ratio,[7] but compression can be slow. In the mid-1980s, following work by Terry Welch, the Lempel–Ziv–Welch (LZW) algorithm rapidly became the method of choice for most general-purpose compression systems. LZW is used in GIF images, programs such as PKZIP, and hardware devices such as modems.[8] LZ methods use a table-based compression model where table entries are substituted for repeated strings of data. For most LZ methods, this table is generated dynamically from earlier data in the input. The table itself is often Huffman encoded. Grammar-based codes like this can compress highly repetitive input extremely effectively, for instance, a biological data collection of the same or closely related species, a huge versioned document collection, internet archival, etc. The basic task of grammar-based codes is constructing a context-free grammar deriving a single string. Other practical grammar compression algorithms include Sequitur and Re-Pair.

The strongest modern lossless compressors use probabilistic models, such as prediction by partial matching. The Burrows–Wheeler transform can also be viewed as an indirect form of statistical modelling.[citation needed] In a further refinement of the direct use of probabilistic modelling, statistical estimates can be coupled to an algorithm called arithmetic coding.
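Before turning to arithmetic coding, the run-length example above can be made concrete with a short sketch (added for illustration; not part of the original article):

```python
from itertools import groupby

def rle_encode(data):
    """Run-length encode a sequence into (value, count) pairs."""
    return [(value, len(list(run))) for value, run in groupby(data)]

def rle_decode(pairs):
    """Invert rle_encode."""
    return [value for value, count in pairs for _ in range(count)]

# "red pixel, red pixel, ..." becomes "279 red pixels": one pair instead of 279 values.
row = ["red"] * 279 + ["white"] * 33
encoded = rle_encode(row)
print(encoded)                       # [('red', 279), ('white', 33)]
assert rle_decode(encoded) == row    # lossless: decoding restores the original exactly
```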
Arithmetic coding is a more modern coding technique that uses the mathematical calculations of afinite-state machineto produce a string of encoded bits from a series of input data symbols. It can achieve superior compression compared to other techniques such as the better-known Huffman algorithm. It uses an internal memory state to avoid the need to perform a one-to-one mapping of individual input symbols to distinct representations that use an integer number of bits, and it clears out the internal memory only after encoding the entire string of data symbols. Arithmetic coding applies especially well to adaptive data compression tasks where the statistics vary and are context-dependent, as it can be easily coupled with an adaptive model of theprobability distributionof the input data. An early example of the use of arithmetic coding was in an optional (but not widely used) feature of theJPEGimage coding standard.[9]It has since been applied in various other designs includingH.263,H.264/MPEG-4 AVCandHEVCfor video coding.[10] Archive software typically has the ability to adjust the "dictionary size", where a larger size demands morerandom-access memoryduring compression and decompression, but compresses stronger, especially on repeating patterns in files' content.[11][12] In the late 1980s, digital images became more common, and standards for losslessimage compressionemerged. In the early 1990s, lossy compression methods began to be widely used.[13]In these schemes, some loss of information is accepted as dropping nonessential detail can save storage space. There is a correspondingtrade-offbetween preserving information and reducing size. Lossy data compression schemes are designed by research on how people perceive the data in question. For example, the human eye is more sensitive to subtle variations inluminancethan it is to the variations in color. JPEG image compression works in part by rounding off nonessential bits of information.[14]A number of popular compression formats exploit these perceptual differences, includingpsychoacousticsfor sound, andpsychovisualsfor images and video. Most forms of lossy compression are based ontransform coding, especially thediscrete cosine transform(DCT). It was first proposed in 1972 byNasir Ahmed, who then developed a working algorithm with T. Natarajan andK. R. Raoin 1973, before introducing it in January 1974.[15][16]DCT is the most widely used lossy compression method, and is used in multimedia formats for images (such as JPEG andHEIF),[17]video(such asMPEG,AVCand HEVC) and audio (such asMP3,AACandVorbis). Lossy image compression is used indigital cameras, to increase storage capacities. Similarly,DVDs,Blu-rayandstreaming videouse lossyvideo coding formats. Lossy compression is extensively used in video. In lossy audio compression, methods of psychoacoustics are used to remove non-audible (or less audible) components of theaudio signal. Compression of human speech is often performed with even more specialized techniques;speech codingis distinguished as a separate discipline from general-purpose audio compression. Speech coding is used ininternet telephony, for example, audio compression is used for CD ripping and is decoded by the audio players.[citation needed] Lossy compression can causegeneration loss. 
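As an added illustration of the arithmetic coding technique described above (not part of the original article): the following toy encoder and decoder work with exact fractions and a fixed three-symbol model, so they show the interval-narrowing idea but omit the adaptive modelling and incremental bit output that practical coders use.

```python
from fractions import Fraction

# Fixed (non-adaptive) toy model: symbol -> probability.
MODEL = {"a": Fraction(1, 2), "b": Fraction(1, 4), "c": Fraction(1, 4)}

def cumulative(model):
    """Map each symbol to the [low, high) slice it occupies inside [0, 1)."""
    ranges, low = {}, Fraction(0)
    for symbol, prob in model.items():
        ranges[symbol] = (low, low + prob)
        low += prob
    return ranges

def encode(message, model=MODEL):
    """Narrow [0, 1) once per symbol; any number in the final interval encodes the message."""
    ranges = cumulative(model)
    low, high = Fraction(0), Fraction(1)
    for symbol in message:
        span = high - low
        sym_low, sym_high = ranges[symbol]
        low, high = low + span * sym_low, low + span * sym_high
    return (low + high) / 2

def decode(code, length, model=MODEL):
    """Recover `length` symbols by repeatedly locating `code` inside the current slice."""
    ranges = cumulative(model)
    out = []
    for _ in range(length):
        for symbol, (sym_low, sym_high) in ranges.items():
            if sym_low <= code < sym_high:
                out.append(symbol)
                code = (code - sym_low) / (sym_high - sym_low)   # rescale and continue
                break
    return "".join(out)

message = "abaacab"
code = encode(message)
print(code)                                   # one exact fraction encodes the whole string
assert decode(code, len(message)) == message
```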
The theoretical basis for compression is provided byinformation theoryand, more specifically,Shannon's source coding theorem; domain-specific theories includealgorithmic information theoryfor lossless compression andrate–distortion theoryfor lossy compression. These areas of study were essentially created byClaude Shannon, who published fundamental papers on the topic in the late 1940s and early 1950s. Other topics associated with compression includecoding theoryandstatistical inference.[18] There is a close connection betweenmachine learningand compression. A system that predicts theposterior probabilitiesof a sequence given its entire history can be used for optimal data compression (by usingarithmetic codingon the output distribution). Conversely, an optimal compressor can be used for prediction (by finding the symbol that compresses best, given the previous history). This equivalence has been used as a justification for using data compression as a benchmark for "general intelligence".[19][20][21] An alternative view can show compression algorithms implicitly map strings into implicitfeature space vectors, and compression-based similarity measures compute similarity within these feature spaces. For each compressor C(.) we define an associated vector space ℵ, such that C(.) maps an input string x, corresponding to the vector norm ||~x||. An exhaustive examination of the feature spaces underlying all compression algorithms is precluded by space; instead, feature vectors chooses to examine three representative lossless compression methods, LZW, LZ77, and PPM.[22] According toAIXItheory, a connection more directly explained inHutter Prize, the best possible compression of x is the smallest possible software that generates x. For example, in that model, a zip file's compressed size includes both the zip file and the unzipping software, since you can not unzip it without both, but there may be an even smaller combined form. Examples of AI-powered audio/video compression software includeNVIDIA Maxine, AIVC.[23]Examples of software that can perform AI-powered image compression includeOpenCV,TensorFlow,MATLAB's Image Processing Toolbox (IPT) and High-Fidelity Generative Image Compression.[24] Inunsupervised machine learning,k-means clusteringcan be utilized to compress data by grouping similar data points into clusters. This technique simplifies handling extensive datasets that lack predefined labels and finds widespread use in fields such asimage compression.[25] Data compression aims to reduce the size of data files, enhancing storage efficiency and speeding up data transmission. K-means clustering, an unsupervised machine learning algorithm, is employed to partition a dataset into a specified number of clusters, k, each represented by thecentroidof its points. This process condenses extensive datasets into a more compact set of representative points. Particularly beneficial inimageandsignal processing, k-means clustering aids in data reduction by replacing groups of data points with their centroids, thereby preserving the core information of the original data while significantly decreasing the required storage space.[26] Large language models(LLMs) are also efficient lossless data compressors on some data sets, as demonstrated byDeepMind's research with the Chinchilla 70B model. Developed by DeepMind, Chinchilla 70B effectively compressed data, outperforming conventional methods such asPortable Network Graphics(PNG) for images andFree Lossless Audio Codec(FLAC) for audio. 
It achieved compression of image and audio data to 43.4% and 16.4% of their original sizes, respectively. There is, however, some reason to be concerned that the data set used for testing overlaps the LLM training data set, making it possible that the Chinchilla 70B model is only an efficient compression tool on data it has already been trained on.[27][28] Data compression can be viewed as a special case ofdata differencing.[29][30]Data differencing consists of producing adifferencegiven asourceand atarget,with patching reproducing thetargetgiven asourceand adifference.Since there is no separate source and target in data compression, one can consider data compression as data differencing with empty source data, the compressed file corresponding to a difference from nothing. This is the same as considering absoluteentropy(corresponding to data compression) as a special case ofrelative entropy(corresponding to data differencing) with no initial data. The termdifferential compressionis used to emphasize the data differencing connection. Entropy codingoriginated in the 1940s with the introduction ofShannon–Fano coding,[31]the basis forHuffman codingwhich was developed in 1950.[32]Transform codingdates back to the late 1960s, with the introduction offast Fourier transform(FFT) coding in 1968 and theHadamard transformin 1969.[33] An important image compression technique is thediscrete cosine transform(DCT), a technique developed in the early 1970s.[15]DCT is the basis for JPEG, alossy compressionformat which was introduced by theJoint Photographic Experts Group(JPEG) in 1992.[34]JPEG greatly reduces the amount of data required to represent an image at the cost of a relatively small reduction in image quality and has become the most widely usedimage file format.[35][36]Its highly efficient DCT-based compression algorithm was largely responsible for the wide proliferation ofdigital imagesanddigital photos.[37] Lempel–Ziv–Welch(LZW) is alossless compressionalgorithm developed in 1984. It is used in theGIFformat, introduced in 1987.[38]DEFLATE, a lossless compression algorithm specified in 1996, is used in thePortable Network Graphics(PNG) format.[39] Wavelet compression, the use ofwaveletsin image compression, began after the development of DCT coding.[40]TheJPEG 2000standard was introduced in 2000.[41]In contrast to the DCT algorithm used by the original JPEG format, JPEG 2000 instead usesdiscrete wavelet transform(DWT) algorithms.[42][43][44]JPEG 2000 technology, which includes theMotion JPEG 2000extension, was selected as thevideo coding standardfordigital cinemain 2004.[45] Audio data compression, not to be confused withdynamic range compression, has the potential to reduce the transmissionbandwidthand storage requirements of audio data.Audio compression formats compression algorithmsare implemented insoftwareas audiocodecs. In both lossy and lossless compression,information redundancyis reduced, using methods such ascoding,quantization, DCT andlinear predictionto reduce the amount of information used to represent the uncompressed data. Lossy audio compression algorithms provide higher compression and are used in numerous audio applications includingVorbisandMP3. These algorithms almost all rely onpsychoacousticsto eliminate or reduce fidelity of less audible sounds, thereby reducing the space required to store or transmit them.[2][46] The acceptable trade-off between loss of audio quality and transmission or storage size depends upon the application. 
For example, one 640 MBcompact disc(CD) holds approximately one hour of uncompressedhigh fidelitymusic, less than 2 hours of music compressed losslessly, or 7 hours of music compressed in theMP3format at a mediumbit rate. A digital sound recorder can typically store around 200 hours of clearly intelligible speech in 640 MB.[47] Lossless audio compression produces a representation of digital data that can be decoded to an exact digital duplicate of the original. Compression ratios are around 50–60% of the original size,[48]which is similar to those for generic lossless data compression. Lossless codecs usecurve fittingor linear prediction as a basis for estimating the signal. Parameters describing the estimation and the difference between the estimation and the actual signal are coded separately.[49] A number of lossless audio compression formats exist. Seelist of lossless codecsfor a listing. Some formats are associated with a distinct system, such asDirect Stream Transfer, used inSuper Audio CDandMeridian Lossless Packing, used inDVD-Audio,Dolby TrueHD,Blu-rayandHD DVD. Someaudio file formatsfeature a combination of a lossy format and a lossless correction; this allows stripping the correction to easily obtain a lossy file. Such formats includeMPEG-4 SLS(Scalable to Lossless),WavPack, andOptimFROG DualStream. When audio files are to be processed, either by further compression or forediting, it is desirable to work from an unchanged original (uncompressed or losslessly compressed). Processing of a lossily compressed file for some purpose usually produces a final result inferior to the creation of the same compressed file from an uncompressed original. In addition to sound editing or mixing, lossless audio compression is often used for archival storage, or as master copies. Lossy audio compression is used in a wide range of applications. In addition to standalone audio-only applications of file playback in MP3 players or computers, digitally compressed audio streams are used in most video DVDs, digital television, streaming media on theInternet, satellite and cable radio, and increasingly in terrestrial radio broadcasts. Lossy compression typically achieves far greater compression than lossless compression, by discarding less-critical data based onpsychoacousticoptimizations.[50] Psychoacoustics recognizes that not all data in an audio stream can be perceived by the humanauditory system. Most lossy compression reduces redundancy by first identifying perceptually irrelevant sounds, that is, sounds that are very hard to hear. Typical examples include high frequencies or sounds that occur at the same time as louder sounds. Those irrelevant sounds are coded with decreased accuracy or not at all. Due to the nature of lossy algorithms,audio qualitysuffers adigital generation losswhen a file is decompressed and recompressed. This makes lossy compression unsuitable for storing the intermediate results in professional audio engineering applications, such as sound editing and multitrack recording. However, lossy formats such asMP3are very popular with end-users as the file size is reduced to 5-20% of the original size and a megabyte can store about a minute's worth of music at adequate quality. Several proprietary lossy compression algorithms have been developed that provide higher quality audio performance by using a combination of lossless and lossy algorithms with adaptive bit rates and lower compression ratios. Examples includeaptX,LDAC,LHDC,MQAandSCL6. 
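Returning to the lossless case described above, the role of linear prediction can be illustrated with a minimal sketch (added here for illustration; real codecs such as FLAC fit higher-order predictors per block and then entropy-code the residuals with Rice/Golomb-style codes):

```python
import numpy as np

def predict_residuals(samples):
    """First-order linear prediction: predict each sample as the previous one
    and keep only the prediction error (residual)."""
    samples = np.asarray(samples, dtype=np.int64)
    return np.diff(samples, prepend=samples[:1] * 0)   # first residual is the raw sample

def reconstruct(residuals):
    """Exactly invert predict_residuals, so the scheme is lossless."""
    return np.cumsum(residuals)

# A slowly varying "waveform": large sample values but small sample-to-sample changes.
t = np.arange(2000)
signal = np.round(10_000 * np.sin(2 * np.pi * t / 400)).astype(np.int64)

res = predict_residuals(signal)
assert np.array_equal(reconstruct(res), signal)            # lossless round trip

# The residuals span a much smaller range than the samples, so they need fewer
# bits per sample when entropy-coded.
print(int(np.abs(signal).max()), int(np.abs(res).max()))   # e.g. 10000 vs roughly 157
```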
To determine what information in an audio signal is perceptually irrelevant, most lossy compression algorithms use transforms such as themodified discrete cosine transform(MDCT) to converttime domainsampled waveforms into a transform domain, typically thefrequency domain. Once transformed, component frequencies can be prioritized according to how audible they are. Audibility of spectral components is assessed using theabsolute threshold of hearingand the principles ofsimultaneous masking—the phenomenon wherein a signal is masked by another signal separated by frequency—and, in some cases,temporal masking—where a signal is masked by another signal separated by time.Equal-loudness contoursmay also be used to weigh the perceptual importance of components. Models of the human ear-brain combination incorporating such effects are often calledpsychoacoustic models.[51] Other types of lossy compressors, such as thelinear predictive coding(LPC) used with speech, are source-based coders. LPC uses a model of the human vocal tract to analyze speech sounds and infer the parameters used by the model to produce them moment to moment. These changing parameters are transmitted or stored and used to drive another model in the decoder which reproduces the sound. Lossy formats are often used for the distribution of streaming audio or interactive communication (such as in cell phone networks). In such applications, the data must be decompressed as the data flows, rather than after the entire data stream has been transmitted. Not all audio codecs can be used for streaming applications.[50] Latencyis introduced by the methods used to encode and decode the data. Some codecs will analyze a longer segment, called aframe, of the data to optimize efficiency, and then code it in a manner that requires a larger segment of data at one time to decode. The inherent latency of the coding algorithm can be critical; for example, when there is a two-way transmission of data, such as with a telephone conversation, significant delays may seriously degrade the perceived quality. In contrast to the speed of compression, which is proportional to the number of operations required by the algorithm, here latency refers to the number of samples that must be analyzed before a block of audio is processed. In the minimum case, latency is zero samples (e.g., if the coder/decoder simply reduces the number of bits used to quantize the signal). Time domain algorithms such as LPC also often have low latencies, hence their popularity in speech coding for telephony. In algorithms such as MP3, however, a large number of samples have to be analyzed to implement a psychoacoustic model in the frequency domain, and latency is on the order of 23 ms. Speech encodingis an important category of audio data compression. The perceptual models used to estimate what aspects of speech a human ear can hear are generally somewhat different from those used for music. The range of frequencies needed to convey the sounds of a human voice is normally far narrower than that needed for music, and the sound is normally less complex. As a result, speech can be encoded at high quality using a relatively low bit rate. This is accomplished, in general, by some combination of two approaches: The earliest algorithms used in speech encoding (and audio data compression in general) were theA-law algorithmand theμ-law algorithm. Early audio research was conducted atBell Labs. There, in 1950,C. 
Chapin Cutlerfiled the patent ondifferential pulse-code modulation(DPCM).[52]In 1973,Adaptive DPCM(ADPCM) was introduced by P. Cummiskey,Nikil S. JayantandJames L. Flanagan.[53][54] Perceptual codingwas first used forspeech codingcompression, withlinear predictive coding(LPC).[55]Initial concepts for LPC date back to the work ofFumitada Itakura(Nagoya University) and Shuzo Saito (Nippon Telegraph and Telephone) in 1966.[56]During the 1970s,Bishnu S. AtalandManfred R. SchroederatBell Labsdeveloped a form of LPC calledadaptive predictive coding(APC), a perceptual coding algorithm that exploited the masking properties of the human ear, followed in the early 1980s with thecode-excited linear prediction(CELP) algorithm which achieved a significantcompression ratiofor its time.[55]Perceptual coding is used by modern audio compression formats such asMP3[55]andAAC. Discrete cosine transform(DCT), developed byNasir Ahmed, T. Natarajan andK. R. Raoin 1974,[16]provided the basis for themodified discrete cosine transform(MDCT) used by modern audio compression formats such as MP3,[57]Dolby Digital,[58][59]and AAC.[60]MDCT was proposed by J. P. Princen, A. W. Johnson and A. B. Bradley in 1987,[61]following earlier work by Princen and Bradley in 1986.[62] The world's first commercialbroadcast automationaudio compression system was developed by Oscar Bonello, an engineering professor at theUniversity of Buenos Aires.[63]In 1983, using the psychoacoustic principle of the masking of critical bands first published in 1967,[64]he started developing a practical application based on the recently developedIBM PCcomputer, and the broadcast automation system was launched in 1987 under the nameAudicom.[65]35 years later, almost all the radio stations in the world were using this technology manufactured by a number of companies because the inventor refused to patent his work, preferring to publish it and leave it in the public domain.[66] A literature compendium for a large variety of audio coding systems was published in the IEEE'sJournal on Selected Areas in Communications(JSAC), in February 1988. While there were some papers from before that time, this collection documented an entire variety of finished, working audio coders, nearly all of them using perceptual techniques and some kind of frequency analysis and back-end noiseless coding.[67] Uncompressed videorequires a very highdata rate. Althoughlossless video compressioncodecs perform at a compression factor of 5 to 12, a typicalH.264lossy compression video has a compression factor between 20 and 200.[68] The two key video compression techniques used invideo coding standardsare the DCT andmotion compensation(MC). Most video coding standards, such as theH.26xandMPEGformats, typically use motion-compensated DCT video coding (block motion compensation).[69][70] Most video codecs are used alongside audio compression techniques to store the separate but complementary data streams as one combined package using so-calledcontainer formats.[71] Video data may be represented as a series of still image frames. Such data usually contains abundant amounts of spatial and temporalredundancy. Video compression algorithms attempt to reduce redundancy and store information more compactly. Mostvideo compression formatsandcodecsexploit both spatial and temporal redundancy (e.g. through difference coding withmotion compensation). Similarities can be encoded by only storing differences between e.g. 
temporally adjacent frames (inter-frame coding) or spatially adjacent pixels (intra-frame coding).Inter-framecompression (a temporaldelta encoding) (re)uses data from one or more earlier or later frames in a sequence to describe the current frame.Intra-frame coding, on the other hand, uses only data from within the current frame, effectively being still-image compression.[51] Theintra-frame video coding formatsused in camcorders and video editing employ simpler compression that uses only intra-frame prediction. This simplifies video editing software, as it prevents a situation in which a compressed frame refers to data that the editor has deleted. Usually, video compression additionally employslossy compressiontechniques likequantizationthat reduce aspects of the source data that are (more or less) irrelevant to the human visual perception by exploiting perceptual features of human vision. For example, small differences in color are more difficult to perceive than are changes in brightness. Compression algorithms can average a color across these similar areas in a manner similar to those used in JPEG image compression.[9]As in all lossy compression, there is atrade-offbetweenvideo qualityandbit rate, cost of processing the compression and decompression, and system requirements. Highly compressed video may present visible or distractingartifacts. Other methods other than the prevalent DCT-based transform formats, such asfractal compression,matching pursuitand the use of adiscrete wavelet transform(DWT), have been the subject of some research, but are typically not used in practical products.Wavelet compressionis used in still-image coders and video coders without motion compensation. Interest in fractal compression seems to be waning, due to recent theoretical analysis showing a comparative lack of effectiveness of such methods.[51] In inter-frame coding, individual frames of a video sequence are compared from one frame to the next, and thevideo compression codecrecords thedifferencesto the reference frame. If the frame contains areas where nothing has moved, the system can simply issue a short command that copies that part of the previous frame into the next one. If sections of the frame move in a simple manner, the compressor can emit a (slightly longer) command that tells the decompressor to shift, rotate, lighten, or darken the copy. This longer command still remains much shorter than data generated by intra-frame compression. Usually, the encoder will also transmit a residue signal which describes the remaining more subtle differences to the reference imagery. Using entropy coding, these residue signals have a more compact representation than the full signal. In areas of video with more motion, the compression must encode more data to keep up with the larger number of pixels that are changing. Commonly during explosions, flames, flocks of animals, and in some panning shots, the high-frequency detail leads to quality decreases or to increases in thevariable bitrate. Many commonly used video compression methods (e.g., those in standards approved by theITU-TorISO) share the same basic architecture that dates back toH.261which was standardized in 1988 by the ITU-T. They mostly rely on the DCT, applied to rectangular blocks of neighboring pixels, and temporal prediction usingmotion vectors, as well as nowadays also an in-loop filtering step. 
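As a concrete illustration of the inter-frame idea described above, the sketch below intra-codes the first frame and then stores only the differences to the previous frame, with a generic lossless coder standing in for the entropy-coding stage. This is a simplification under stated assumptions: a real codec would add motion-compensated prediction, a block transform, and quantization; the frame sizes and scene are made up.

```python
import zlib
import numpy as np

def encode_sequence(frames: list[np.ndarray]) -> list[bytes]:
    """Intra-code the first frame; for later frames store only the difference
    to the previous frame (no motion compensation), then losslessly code it."""
    coded = [zlib.compress(frames[0].tobytes(), 9)]
    for prev, cur in zip(frames, frames[1:]):
        delta = (cur.astype(np.int16) - prev.astype(np.int16)).tobytes()
        coded.append(zlib.compress(delta, 9))
    return coded

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    first = rng.integers(0, 256, size=(120, 160), dtype=np.uint8)  # busy background
    frames = [first]
    for _ in range(9):
        nxt = frames[-1].copy()
        # Only a small patch changes from frame to frame (a mostly static scene).
        nxt[40:50, 60:70] = rng.integers(0, 256, size=(10, 10), dtype=np.uint8)
        frames.append(nxt)

    sizes = [len(b) for b in encode_sequence(frames)]
    print("intra-coded first frame:", sizes[0], "bytes")
    print("inter-coded deltas:     ", sizes[1:])  # much smaller: most pixels are unchanged
```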
In the prediction stage, variousdeduplicationand difference-coding techniques are applied that help decorrelate data and describe new data based on already transmitted data. Then rectangular blocks of remainingpixeldata are transformed to the frequency domain. In the main lossy processing stage, frequency domain data gets quantized in order to reduce information that is irrelevant to human visual perception. In the last stage statistical redundancy gets largely eliminated by anentropy coderwhich often applies some form of arithmetic coding. In an additional in-loop filtering stage various filters can be applied to the reconstructed image signal. By computing these filters also inside the encoding loop they can help compression because they can be applied to reference material before it gets used in the prediction process and they can be guided using the original signal. The most popular example aredeblocking filtersthat blur out blocking artifacts from quantization discontinuities at transform block boundaries. In 1967, A.H. Robinson and C. Cherry proposed arun-length encodingbandwidth compression scheme for the transmission of analog television signals.[72]The DCT, which is fundamental to modern video compression,[73]was introduced byNasir Ahmed, T. Natarajan andK. R. Raoin 1974.[16][74] H.261, which debuted in 1988, commercially introduced the prevalent basic architecture of video compression technology.[75]It was the firstvideo coding formatbased on DCT compression.[73]H.261 was developed by a number of companies, includingHitachi,PictureTel,NTT,BTandToshiba.[76] The most popularvideo coding standardsused for codecs have been theMPEGstandards.MPEG-1was developed by theMotion Picture Experts Group(MPEG) in 1991, and it was designed to compressVHS-quality video. It was succeeded in 1994 byMPEG-2/H.262,[75]which was developed by a number of companies, primarilySony,ThomsonandMitsubishi Electric.[77]MPEG-2 became the standard video format forDVDandSD digital television.[75]In 1999, it was followed byMPEG-4/H.263.[75]It was also developed by a number of companies, primarily Mitsubishi Electric,HitachiandPanasonic.[78] H.264/MPEG-4 AVCwas developed in 2003 by a number of organizations, primarily Panasonic,Godo Kaisha IP BridgeandLG Electronics.[79]AVC commercially introduced the moderncontext-adaptive binary arithmetic coding(CABAC) andcontext-adaptive variable-length coding(CAVLC) algorithms. AVC is the main video encoding standard forBlu-ray Discs, and is widely used by video sharing websites and streaming internet services such asYouTube,Netflix,Vimeo, andiTunes Store, web software such asAdobe Flash PlayerandMicrosoft Silverlight, and variousHDTVbroadcasts over terrestrial and satellite television.[citation needed] Genetics compression algorithmsare the latest generation of lossless algorithms that compress data (typically sequences of nucleotides) using both conventional compression algorithms and genetic algorithms adapted to the specific datatype. In 2012, a team of scientists from Johns Hopkins University published a genetic compression algorithm that does not use a reference genome for compression. HAPZIPPER was tailored forHapMapdata and achieves over 20-fold compression (95% reduction in file size), providing 2- to 4-fold better compression and is less computationally intensive than the leading general-purpose compression utilities. 
For this, Chanda, Elhaik, and Bader introduced MAF-based encoding (MAFE), which reduces the heterogeneity of the dataset by sorting SNPs by their minor allele frequency, thus homogenizing the dataset.[80]Other algorithms developed in 2009 and 2013 (DNAZip and GenomeZip) have compression ratios of up to 1200-fold—allowing 6 billion basepair diploid human genomes to be stored in 2.5 megabytes (relative to a reference genome or averaged over many genomes).[81][82]For a benchmark in genetics/genomics data compressors, see[83] It is estimated that the total amount of data that is stored on the world's storage devices could be further compressed with existing compression algorithms by a remaining average factor of 4.5:1.[84]It is estimated that the combined technological capacity of the world to store information provides 1,300exabytesof hardware digits in 2007, but when the corresponding content is optimally compressed, this only represents 295 exabytes ofShannon information.[85]
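The estimate above treats Shannon information as the limit of what compression can achieve. As a rough, hedged illustration, the sketch below compares a zero-order (byte-frequency) entropy estimate with what a general-purpose DEFLATE coder actually achieves on the same data; this is a simplification, since real sources have higher-order structure that a zero-order estimate ignores.

```python
import math
import zlib
from collections import Counter

def order0_entropy_bits_per_byte(data: bytes) -> float:
    """Zero-order Shannon entropy of the byte distribution, in bits per byte."""
    counts = Counter(data)
    n = len(data)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

if __name__ == "__main__":
    text = ("data compression reduces redundancy " * 500).encode()
    h = order0_entropy_bits_per_byte(text)
    actual = 8 * len(zlib.compress(text, 9)) / len(text)
    print(f"zero-order entropy estimate: {h:.2f} bits/byte")
    print(f"DEFLATE actual:              {actual:.2f} bits/byte")
    # DEFLATE beats the zero-order figure here because it also exploits the
    # repeated phrases (higher-order redundancy), not just symbol frequencies.
```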
https://en.wikipedia.org/wiki/Data_compression
Decorrelation is a general term for any process that is used to reduce autocorrelation within a signal, or cross-correlation within a set of signals, while preserving other aspects of the signal.[citation needed] A frequently used method of decorrelation is the use of a matched linear filter to reduce the autocorrelation of a signal as far as possible. Since the minimum possible autocorrelation for a given signal energy is achieved by equalising the power spectrum of the signal to be similar to that of a white noise signal, this is often referred to as signal whitening.

Most decorrelation algorithms are linear, but there are also non-linear decorrelation algorithms.

Many data compression algorithms incorporate a decorrelation stage.[citation needed] For example, many transform coders first apply a fixed linear transformation that would, on average, have the effect of decorrelating a typical signal of the class to be coded, prior to any later processing. This is typically a Karhunen–Loève transform, or a simplified approximation such as the discrete cosine transform. By comparison, sub-band coders do not generally have an explicit decorrelation step, but instead exploit the already-existing reduced correlation within each of the sub-bands of the signal, due to the relative flatness of each sub-band of the power spectrum in many classes of signals. Linear predictive coders can be modelled as an attempt to decorrelate signals by subtracting the best possible linear prediction from the input signal, leaving a whitened residual signal.

Decorrelation techniques can also be used for many other purposes, such as reducing crosstalk in a multi-channel signal, or in the design of echo cancellers. In image processing, decorrelation techniques can be used to enhance or stretch colour differences found in each pixel of an image. This is generally termed 'decorrelation stretching'.[1]

The concept of decorrelation can be applied in many other fields. In neuroscience, decorrelation is used in the analysis of the neural networks in the human visual system. In cryptography, it is used in cipher design (see Decorrelation theory) and in the design of hardware random number generators.
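Below is a minimal numerical sketch of the decorrelation idea described above, assuming the Karhunen–Loève approach of rotating onto the eigenvectors of the empirical covariance; practical transform coders typically use a fixed approximation such as the DCT rather than estimating this transform per signal.

```python
import numpy as np

def klt_whiten(samples: np.ndarray, eps: float = 1e-12) -> np.ndarray:
    """Rotate onto the eigenvectors of the empirical covariance (a KLT) and
    scale each component to unit variance, leaving components uncorrelated."""
    centered = samples - samples.mean(axis=0)
    cov = np.cov(centered, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)
    return (centered @ eigvecs) / np.sqrt(eigvals + eps)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    x = rng.normal(size=20_000)
    # Two strongly correlated components, as in neighbouring samples of a signal.
    data = np.column_stack([x, 0.9 * x + 0.1 * rng.normal(size=20_000)])
    print("correlation before:\n", np.corrcoef(data, rowvar=False).round(2))
    print("correlation after:\n", np.corrcoef(klt_whiten(data), rowvar=False).round(2))
```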
https://en.wikipedia.org/wiki/Decorrelation
Rate-distortion optimization(RDO) is a method of improvingvideo qualityinvideo compression. The name refers to the optimization of the amount ofdistortion(loss of video quality) against the amount of data required to encode the video, therate. While it is primarily used by video encoders, rate-distortion optimization can be used to improve quality in any encoding situation (image, video, audio, or otherwise) where decisions have to be made that affect both file size and quality simultaneously. The classical method of making encoding decisions is for the video encoder to choose the result which yields the highest quality output image. However, this has the disadvantage that the choice it makes might require more bits while giving comparatively little quality benefit. One common example of this problem is inmotion estimation,[1]and in particular regarding the use ofquarter pixel-precision motion estimation. Adding the extra precision to the motion of ablockduring motion estimation might increase quality, but in some cases that extra quality isn't worth the extra bits necessary to encode the motion vector to a higher precision. Rate-distortion optimization solves the aforementioned problem by acting as a video quality metric, measuring both the deviation from the source material and the bit cost for each possible decision outcome. The bits are mathematically measured by multiplying the bit cost by theLagrangian, a value representing the relationship between bit cost and quality for a particular quality level. The deviation from the source is usually measured as themean squared error, in order to maximize thePSNRvideo quality metric. Calculating the bit cost is made more difficult by theentropy encodersin modern video codecs, requiring the rate-distortion optimization algorithm to pass each block of video to be tested to the entropy coder to measure its actual bit cost. InMPEGcodecs, the full process consists of adiscrete cosine transform, followed byquantizationand entropy encoding. Because of this, rate-distortion optimization is much slower than most other block-matching metrics, such as the simplesum of absolute differences(SAD) andsum of absolute transformed differences(SATD). As such it is usually used only for the final steps of themotion estimationprocess, such as deciding between different partition types inH.264/AVC.
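A minimal sketch of the Lagrangian decision rule described above: each candidate decision is scored as J = D + λ·R, with the distortion D (for example, mean squared error) weighed against the rate R. The candidate outcomes and λ values below are hypothetical, chosen only to show how the preferred decision flips as λ changes.

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    distortion: float   # e.g. mean squared error against the source block
    rate_bits: float    # bits required to encode this decision

def rd_best(candidates: list[Candidate], lam: float) -> Candidate:
    """Pick the candidate minimising the Lagrangian cost J = D + lambda * R."""
    return min(candidates, key=lambda c: c.distortion + lam * c.rate_bits)

if __name__ == "__main__":
    # Hypothetical outcomes for one block: finer motion precision lowers the
    # distortion slightly but needs more bits for the motion vector.
    options = [
        Candidate("full-pel motion vector", distortion=120.0, rate_bits=10),
        Candidate("quarter-pel motion vector", distortion=100.0, rate_bits=26),
    ]
    for lam in (0.5, 5.0):  # lambda reflects the target quality/bitrate operating point
        print(f"lambda={lam}: choose {rd_best(options, lam).name}")
```

At a small λ (quality matters more than bits) the finer motion vector wins; at a large λ the cheaper, coarser choice wins, which is exactly the trade-off the text describes.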
https://en.wikipedia.org/wiki/Rate%E2%80%93distortion_optimization
Insignal processing,white noiseis a randomsignalhaving equal intensity at differentfrequencies, giving it a constantpower spectral density.[1]The term is used with this or similar meanings in many scientific and technical disciplines, includingphysics,acoustical engineering,telecommunications, andstatistical forecasting. White noise refers to a statistical model for signals and signal sources, not to any specific signal. White noise draws its name fromwhite light,[2]although light that appears white generally does not have a flat power spectral density over thevisible band. Indiscrete time, white noise is adiscrete signalwhosesamplesare regarded as a sequence ofserially uncorrelatedrandom variableswith zeromeanand finitevariance; a single realization of white noise is arandom shock. In some contexts, it is also required that the samples beindependentand have identicalprobability distribution(in other wordsindependent and identically distributed random variablesare the simplest representation of white noise).[3]In particular, if each sample has anormal distributionwith zero mean, the signal is said to beadditive white Gaussian noise.[4] The samples of a white noise signal may besequentialin time, or arranged along one or more spatial dimensions. Indigital image processing, thepixelsof a white noise image are typically arranged in a rectangular grid, and are assumed to be independent random variables withuniform probability distributionover some interval. The concept can be defined also for signals spread over more complicated domains, such as asphereor atorus. Aninfinite-bandwidth white noise signalis a purely theoretical construction. The bandwidth of white noise is limited in practice by the mechanism of noise generation, by the transmission medium and by finite observation capabilities. Thus, random signals are considered white noise if they are observed to have a flat spectrum over the range of frequencies that are relevant to the context. For anaudio signal, the relevant range is the band of audible sound frequencies (between 20 and 20,000Hz). Such a signal is heard by the human ear as a hissing sound, resembling the /h/ sound in a sustained aspiration. On the other hand, theshsound/ʃ/inashis a colored noise because it has aformantstructure. Inmusicandacoustics, the termwhite noisemay be used for any signal that has a similar hissing sound. In the context ofphylogenetically based statistical methods, the termwhite noisecan refer to a lack of phylogenetic pattern in comparative data.[5]In nontechnical contexts, it is sometimes used to mean "random talk without meaningful contents".[6][7] Any distribution of values is possible (although it must have zeroDC component). Even a binary signal which can only take on the values 1 or -1 will be white if thesequenceis statistically uncorrelated. Noise having a continuous distribution, such as anormal distribution, can of course be white. It is often incorrectly assumed thatGaussian noise(i.e., noise with a Gaussian amplitude distribution – seenormal distribution) necessarily refers to white noise, yet neither property implies the other. Gaussianity refers to the probability distribution with respect to the value, in this context the probability of the signal falling within any particular range of amplitudes, while the term 'white' refers to the way the signal power is distributed (i.e., independently) over time or among frequencies. One form of white noise is the generalized mean-square derivative of theWiener processorBrownian motion. 
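A minimal numerical sketch of the discrete-time description above: Gaussian samples with zero mean and a common variance are serially uncorrelated, so their sample autocorrelation is concentrated at lag zero and their power spectrum is approximately flat. The sample counts are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(42)
n, sigma = 1 << 16, 1.0
noise = rng.normal(0.0, sigma, size=n)          # additive white Gaussian noise samples

def autocorr(x: np.ndarray, lag: int) -> float:
    """Biased sample autocorrelation at the given lag."""
    return float(np.mean(x[: len(x) - lag] * x[lag:]))

print("lag 0:", round(autocorr(noise, 0), 3))   # ~ sigma^2
print("lag 1:", round(autocorr(noise, 1), 3))   # ~ 0
print("lag 7:", round(autocorr(noise, 7), 3))   # ~ 0

# Average the periodogram over a few frequency bands: roughly equal power
# everywhere, i.e. an (approximately) flat spectrum.
power = np.abs(np.fft.rfft(noise)) ** 2 / n
bands = np.array_split(power[1:], 4)
print("band powers:", [round(float(b.mean()), 2) for b in bands])
```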
A generalization torandom elementson infinite dimensional spaces, such asrandom fields, is thewhite noise measure. White noise is commonly used in the production ofelectronic music, usually either directly or as an input for a filter to create other types of noise signal. It is used extensively inaudio synthesis, typically to recreate percussive instruments such ascymbalsorsnare drumswhich have high noise content in their frequency domain.[8]A simple example of white noise is a nonexistent radio station (static). White noise is also used to obtain theimpulse responseof an electrical circuit, in particular ofamplifiersand other audio equipment. It is not used for testing loudspeakers as its spectrum contains too great an amount of high-frequency content.Pink noise, which differs from white noise in that it has equal energy in each octave, is used for testing transducers such as loudspeakers and microphones. White noise is used as the basis of somerandom number generators. For example,Random.orguses a system of atmospheric antennas to generate random digit patterns from sources that can be well-modeled by white noise.[9] White noise is a common synthetic noise source used for sound masking by atinnitus masker.[10]White noise machinesand other white noise sources are sold as privacy enhancers and sleep aids (seemusic and sleep) and to masktinnitus.[11]The Marpac Sleep-Mate was the first domestic use white noise machine built in 1962 by traveling salesman Jim Buckwalter.[12]Alternatively, the use of an AM radio tuned to unused frequencies ("static") is a simpler and more cost-effective source of white noise.[13]However, white noise generated from a common commercial radio receiver tuned to an unused frequency is extremely vulnerable to being contaminated with spurious signals, such as adjacent radio stations, harmonics from non-adjacent radio stations, electrical equipment in the vicinity of the receiving antenna causing interference, or even atmospheric events such as solar flares and especially lightning. The effects of white noise upon cognitive function are mixed. Recently, a small study found that white noise background stimulation improves cognitive functioning among secondary students withattention deficit hyperactivity disorder(ADHD), while decreasing performance of non-ADHD students.[14][15]Other work indicates it is effective in improving the mood and performance of workers by masking background office noise,[16]but decreases cognitive performance in complex card sorting tasks.[17] Similarly, an experiment was carried out on sixty-six healthy participants to observe the benefits of using white noise in a learning environment. The experiment involved the participants identifying different images whilst having different sounds in the background. Overall the experiment showed that white noise does in fact have benefits in relation to learning. 
The experiments showed that white noise improved the participants' learning abilities and their recognition memory slightly.[18]

A random vector (that is, a random variable with values in Rⁿ) is said to be a white noise vector or white random vector if its components each have a probability distribution with zero mean and finite variance,[clarification needed] and are statistically independent: that is, their joint probability distribution must be the product of the distributions of the individual components.[19]

A necessary (but, in general, not sufficient) condition for statistical independence of two variables is that they be statistically uncorrelated; that is, their covariance is zero. Therefore, the covariance matrix R of the components of a white noise vector w with n elements must be an n by n diagonal matrix, where each diagonal element Rᵢᵢ is the variance of component wᵢ; and the correlation matrix must be the n by n identity matrix.

If, in addition to being independent, every variable in w also has a normal distribution with zero mean and the same variance σ², w is said to be a Gaussian white noise vector. In that case, the joint distribution of w is a multivariate normal distribution; the independence between the variables then implies that the distribution has spherical symmetry in n-dimensional space. Therefore, any orthogonal transformation of the vector will result in a Gaussian white random vector. In particular, under most types of discrete Fourier transform, such as FFT and Hartley, the transform W of w will be a Gaussian white noise vector, too; that is, the n Fourier coefficients of w will be independent Gaussian variables with zero mean and the same variance σ².

The power spectrum P of a random vector w can be defined as the expected value of the squared modulus of each coefficient of its Fourier transform W, that is, Pᵢ = E(|Wᵢ|²). Under that definition, a Gaussian white noise vector will have a perfectly flat power spectrum, with Pᵢ = σ² for all i.

If w is a white random vector, but not a Gaussian one, its Fourier coefficients Wᵢ will not be completely independent of each other; although for large n and common probability distributions the dependencies are very subtle, and their pairwise correlations can be assumed to be zero.

Often the weaker condition "statistically uncorrelated" is used in the definition of white noise, instead of "statistically independent". However, some of the commonly expected properties of white noise (such as a flat power spectrum) may not hold for this weaker version. Under this assumption, the stricter version can be referred to explicitly as an independent white noise vector.[20]: p. 60 Other authors use "strongly white" and "weakly white" instead.[21]

An example of a random vector that is Gaussian white noise in the weak but not in the strong sense is x = [x₁, x₂], where x₁ is a normal random variable with zero mean, and x₂ is equal to +x₁ or to −x₁, with equal probability. These two variables are uncorrelated and individually normally distributed, but they are not jointly normally distributed and are not independent. If x is rotated by 45 degrees, its two components will still be uncorrelated, but their distribution will no longer be normal.

In some situations, one may relax the definition by allowing each component of a white random vector w to have a non-zero expected value μ.
In image processing especially, where samples are typically restricted to positive values, one often takes μ to be one half of the maximum sample value. In that case, the Fourier coefficient W₀ corresponding to the zero-frequency component (essentially, the average of the wᵢ) will also have a non-zero expected value μ√n; and the power spectrum P will be flat only over the non-zero frequencies.

A discrete-time stochastic process W(n) is a generalization of a random vector with a finite number of components to infinitely many components. A discrete-time stochastic process W(n) is called white noise if its mean is equal to zero for all n, i.e. E[W(n)] = 0, and if the autocorrelation function R_W(n) = E[W(k + n) W(k)] has a nonzero value only for n = 0, i.e. R_W(n) = σ² δ(n).[citation needed][clarification needed]

In order to define the notion of white noise in the theory of continuous-time signals, one must replace the concept of a random vector by a continuous-time random signal; that is, a random process that generates a function w of a real-valued parameter t.

Such a process is said to be white noise in the strongest sense if the value w(t) for any time t is a random variable that is statistically independent of its entire history before t. A weaker definition requires independence only between the values w(t₁) and w(t₂) at every pair of distinct times t₁ and t₂. An even weaker definition requires only that such pairs w(t₁) and w(t₂) be uncorrelated.[22] As in the discrete case, some authors adopt the weaker definition for white noise, and use the qualifier "independent" to refer to either of the stronger definitions. Others use "weakly white" and "strongly white" to distinguish between them.

However, a precise definition of these concepts is not trivial, because some quantities that are finite sums in the finite discrete case must be replaced by integrals that may not converge. Indeed, the set of all possible instances of a signal w is no longer a finite-dimensional space Rⁿ, but an infinite-dimensional function space. Moreover, by any definition a white noise signal w would have to be essentially discontinuous at every point; therefore even the simplest operations on w, like integration over a finite interval, require advanced mathematical machinery.

Some authors[citation needed][clarification needed] require each value w(t) to be a real-valued random variable with expectation μ and some finite variance σ². Then the covariance E(w(t₁)·w(t₂)) between the values at two times t₁ and t₂ is well-defined: it is zero if the times are distinct, and σ² if they are equal.
However, by this definition, the integral over any interval with positive width r would be simply the width times the expectation: rμ.[clarification needed] This property renders the concept inadequate as a model of white noise signals either in a physical or mathematical sense.[clarification needed]

Therefore, most authors define the signal w indirectly by specifying random values for the integrals of w(t) and |w(t)|² over each interval [a, a + r]. In this approach, however, the value of w(t) at an isolated time cannot be defined as a real-valued random variable.[citation needed] Also the covariance E(w(t₁)·w(t₂)) becomes infinite when t₁ = t₂; and the autocorrelation function R(t₁, t₂) must be defined as N δ(t₁ − t₂), where N is some real constant and δ is the Dirac delta function.[clarification needed]

In this approach, one usually specifies that the integral W_I of w(t) over an interval I = [a, b] is a real random variable with normal distribution, zero mean, and variance (b − a)σ²; and also that the covariance E(W_I · W_J) of the integrals W_I, W_J is rσ², where r is the width of the intersection I ∩ J of the two intervals I, J. This model is called a Gaussian white noise signal (or process).

In the mathematical field known as white noise analysis, a Gaussian white noise w is defined as a stochastic tempered distribution, i.e. a random variable with values in the space S′(R) of tempered distributions. Analogous to the case for finite-dimensional random vectors, a probability law on the infinite-dimensional space S′(R) can be defined via its characteristic function (existence and uniqueness are guaranteed by an extension of the Bochner–Minlos theorem, which goes under the name Bochner–Minlos–Sazanov theorem); analogously to the case of the multivariate normal distribution X ~ N_n(μ, Σ), which has characteristic function

φ_X(u) = exp( i uᵀμ − ½ uᵀΣu ),

the white noise w : Ω → S′(R) must satisfy

E[ exp( i ⟨w, φ⟩ ) ] = exp( −½ ‖φ‖₂² )  for every Schwartz function φ,

where ⟨w, φ⟩ is the natural pairing of the tempered distribution w(ω) with the Schwartz function φ, taken scenariowise for ω ∈ Ω, and ‖φ‖₂² = ∫_R |φ(x)|² dx.

In statistics and econometrics one often assumes that an observed series of data values is the sum of the values generated by a deterministic linear process, depending on certain independent (explanatory) variables, and on a series of random noise values. Then regression analysis is used to infer the parameters of the model process from the observed data, e.g. by ordinary least squares, and to test the null hypothesis that each of the parameters is zero against the alternative hypothesis that it is non-zero.
Hypothesis testing typically assumes that the noise values are mutually uncorrelated with zero mean and have the same Gaussian probability distribution – in other words, that the noise is Gaussian white (not just white). If there is non-zero correlation between the noise values underlying different observations then the estimated model parameters are stillunbiased, but estimates of their uncertainties (such asconfidence intervals) will be biased (not accurate on average). This is also true if the noise isheteroskedastic– that is, if it has different variances for different data points. Alternatively, in the subset of regression analysis known astime series analysisthere are often no explanatory variables other than the past values of the variable being modeled (thedependent variable). In this case the noise process is often modeled as amoving averageprocess, in which the current value of the dependent variable depends on current and past values of a sequential white noise process. These two ideas are crucial in applications such aschannel estimationandchannel equalizationincommunicationsandaudio. These concepts are also used indata compression. In particular, by a suitable linear transformation (acoloring transformation), a white random vector can be used to produce a non-white random vector (that is, a list of random variables) whose elements have a prescribedcovariance matrix. Conversely, a random vector with known covariance matrix can be transformed into a white random vector by a suitablewhitening transformation. White noise may be generated digitally with adigital signal processor,microprocessor, ormicrocontroller. Generating white noise typically entails feeding an appropriate stream of random numbers to adigital-to-analog converter. The quality of the white noise will depend on the quality of the algorithm used.[23] The term is sometimes used as acolloquialismto describe a backdrop of ambient sound, creating an indistinct or seamless commotion. Following are some examples: The term can also be used metaphorically, as in the novelWhite Noise(1985) byDon DeLillowhich explores the symptoms ofmodern culturethat came together so as to make it difficult for an individual to actualize their ideas and personality.
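The data-compression paragraph above mentions that a white random vector can be turned into one with a prescribed covariance matrix by a coloring transformation, and back again by a whitening transformation. Below is a minimal sketch of both directions, assuming a Cholesky factor of the target covariance; the target matrix is an arbitrary example.

```python
import numpy as np

def color(white: np.ndarray, target_cov: np.ndarray) -> np.ndarray:
    """Coloring transformation: multiply unit-variance white vectors by a
    Cholesky factor L of the target covariance, giving covariance ~ L L^T."""
    L = np.linalg.cholesky(target_cov)
    return white @ L.T

def whiten(colored: np.ndarray, cov: np.ndarray) -> np.ndarray:
    """Whitening transformation: apply the inverse Cholesky factor."""
    L = np.linalg.cholesky(cov)
    return colored @ np.linalg.inv(L).T

if __name__ == "__main__":
    rng = np.random.default_rng(7)
    target = np.array([[2.0, 0.8],
                       [0.8, 1.0]])               # arbitrary example covariance
    white = rng.normal(size=(50_000, 2))          # unit-variance white vectors
    colored = color(white, target)
    print("colored covariance:\n", np.cov(colored, rowvar=False).round(2))
    print("re-whitened covariance:\n", np.cov(whiten(colored, target), rowvar=False).round(2))
```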
https://en.wikipedia.org/wiki/White_noise
TheNyquist–Shannon sampling theoremis an essential principle fordigital signal processinglinking thefrequency rangeof a signal and thesample raterequired to avoid a type ofdistortioncalledaliasing. The theorem states that the sample rate must be at least twice thebandwidthof the signal to avoid aliasing. In practice, it is used to selectband-limitingfilters to keep aliasing below an acceptable amount when an analog signal is sampled or when sample rates are changed within a digital signal processing function. The Nyquist–Shannon sampling theorem is a theorem in the field ofsignal processingwhich serves as a fundamental bridge betweencontinuous-time signalsanddiscrete-time signals. It establishes a sufficient condition for asample ratethat permits a discrete sequence ofsamplesto capture all the information from a continuous-time signal of finitebandwidth. Strictly speaking, the theorem only applies to a class ofmathematical functionshaving aFourier transformthat is zero outside of a finite region of frequencies. Intuitively we expect that when one reduces a continuous function to a discrete sequence andinterpolatesback to a continuous function, the fidelity of the result depends on the density (orsample rate) of the original samples. The sampling theorem introduces the concept of a sample rate that is sufficient for perfect fidelity for the class of functions that areband-limitedto a given bandwidth, such that no actual information is lost in the sampling process. It expresses the sufficient sample rate in terms of the bandwidth for the class of functions. The theorem also leads to a formula for perfectly reconstructing the original continuous-time function from the samples. Perfect reconstruction may still be possible when the sample-rate criterion is not satisfied, provided other constraints on the signal are known (see§ Sampling of non-baseband signalsbelow andcompressed sensing). In some cases (when the sample-rate criterion is not satisfied), utilizing additional constraints allows for approximate reconstructions. The fidelity of these reconstructions can be verified and quantified utilizingBochner's theorem.[1] The nameNyquist–Shannon sampling theoremhonoursHarry NyquistandClaude Shannon, but the theorem was also previously discovered byE. T. Whittaker(published in 1915), and Shannon cited Whittaker's paper in his work. The theorem is thus also known by the namesWhittaker–Shannon sampling theorem,Whittaker–Shannon, andWhittaker–Nyquist–Shannon, and may also be referred to as thecardinal theorem of interpolation. Samplingis a process of converting a signal (for example, a function of continuous time or space) into a sequence of values (a function of discrete time or space).Shannon'sversion of the theorem states:[2] Theorem—If a functionx(t){\displaystyle x(t)}contains no frequencies higher thanBhertz, then it can be completely determined from its ordinates at a sequence of points spaced less than1/(2B){\displaystyle 1/(2B)}seconds apart. A sufficient sample-rate is therefore anything larger than2B{\displaystyle 2B}samples per second. Equivalently, for a given sample ratefs{\displaystyle f_{s}}, perfect reconstruction is guaranteed possible for a bandlimitB<fs/2{\displaystyle B<f_{s}/2}. When the bandlimit is too high (or there is no bandlimit), the reconstruction exhibits imperfections known asaliasing. 
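A minimal numerical illustration of the aliasing just described, using an arbitrary sample rate: once a tone above the Nyquist frequency is sampled, its samples coincide exactly with those of a lower-frequency alias.

```python
import numpy as np

fs = 1000.0                        # sample rate (Hz); the Nyquist frequency is fs/2 = 500 Hz
n = np.arange(200)                 # sample indices
t = n / fs

f_high = 700.0                     # above the Nyquist frequency, so it will alias
f_alias = fs - f_high              # 300 Hz: the frequency it is mistaken for

high = np.cos(2 * np.pi * f_high * t)
low = np.cos(2 * np.pi * f_alias * t)

# The sampled sequences are numerically identical: after sampling at fs,
# a 700 Hz cosine cannot be distinguished from a 300 Hz cosine.
print(np.allclose(high, low))      # True
```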
Modern statements of the theorem are sometimes careful to explicitly state that x(t) must contain no sinusoidal component at exactly frequency B, or that B must be strictly less than one half the sample rate. The threshold 2B is called the Nyquist rate and is an attribute of the continuous-time input x(t) to be sampled. The sample rate must exceed the Nyquist rate for the samples to suffice to represent x(t). The threshold f_s/2 is called the Nyquist frequency and is an attribute of the sampling equipment. All meaningful frequency components of the properly sampled x(t) exist below the Nyquist frequency. The condition described by these inequalities is called the Nyquist criterion, or sometimes the Raabe condition. The theorem is also applicable to functions of other domains, such as space, in the case of a digitized image. The only change, in the case of other domains, is the units of measure attributed to t, f_s, and B.

The symbol T ≜ 1/f_s is customarily used to represent the interval between adjacent samples and is called the sample period or sampling interval. The samples of function x(t) are commonly denoted by x[n] ≜ T·x(nT)[3] (alternatively x_n in older signal processing literature), for all integer values of n. The multiplier T is a result of the transition from continuous time to discrete time (see Discrete-time Fourier transform#Relation to Fourier Transform), and it is needed to preserve the energy of the signal as T varies.

A mathematically ideal way to interpolate the sequence involves the use of sinc functions. Each sample in the sequence is replaced by a sinc function, centered on the time axis at the original location of the sample nT, with the amplitude of the sinc function scaled to the sample value, x(nT). Subsequently, the sinc functions are summed into a continuous function. A mathematically equivalent method uses the Dirac comb and proceeds by convolving one sinc function with a series of Dirac delta pulses, weighted by the sample values. Neither method is numerically practical. Instead, some type of approximation of the sinc functions, finite in length, is used. The imperfections attributable to the approximation are known as interpolation error.

Practical digital-to-analog converters produce neither scaled and delayed sinc functions, nor ideal Dirac pulses. Instead they produce a piecewise-constant sequence of scaled and delayed rectangular pulses (the zero-order hold), usually followed by a lowpass filter (called an "anti-imaging filter") to remove spurious high-frequency replicas (images) of the original baseband signal.

When x(t) is a function with a Fourier transform X(f), the samples x[n] of x(t) are sufficient to create a periodic summation of X(f) (see Discrete-time Fourier transform#Relation to Fourier Transform):

X_{1/T}(f) ≜ Σ_{k=−∞}^{∞} X(f − k/T) = Σ_{n=−∞}^{∞} x[n] e^{−i2πfnT},     (Eq. 1)

which is a periodic function and its equivalent representation as a Fourier series, whose coefficients are x[n].
This function is also known as the discrete-time Fourier transform (DTFT) of the sample sequence. As depicted, copies of X(f) are shifted by multiples of the sampling rate f_s = 1/T and combined by addition. For a band-limited function (X(f) = 0 for all |f| ≥ B) and sufficiently large f_s, it is possible for the copies to remain distinct from each other. But if the Nyquist criterion is not satisfied, adjacent copies overlap, and it is not possible in general to discern an unambiguous X(f). Any frequency component above f_s/2 is indistinguishable from a lower-frequency component, called an alias, associated with one of the copies. In such cases, the customary interpolation techniques produce the alias, rather than the original component.

When the sample rate is pre-determined by other considerations (such as an industry standard), x(t) is usually filtered to reduce its high frequencies to acceptable levels before it is sampled. The type of filter required is a lowpass filter, and in this application it is called an anti-aliasing filter.

When there is no overlap of the copies (also known as "images") of X(f), the k = 0 term of Eq. 1 can be recovered by the product:

X(f) = H(f) · X_{1/T}(f),

where:

H(f) ≜ 1 for |f| < B, and 0 for |f| > f_s − B.

The sampling theorem is proved since X(f) uniquely determines x(t). All that remains is to derive the formula for reconstruction. H(f) need not be precisely defined in the region [B, f_s − B] because X_{1/T}(f) is zero in that region. However, the worst case is when B = f_s/2, the Nyquist frequency. A function that is sufficient for that and all less severe cases is:

H(f) = rect(f/f_s) = 1 for |f| < f_s/2, and 0 for |f| > f_s/2,

where rect is the rectangular function. Therefore:

X(f) = rect(f/f_s) · X_{1/T}(f).

The inverse transform of both sides produces the Whittaker–Shannon interpolation formula:

x(t) = Σ_{n=−∞}^{∞} x(nT) · sinc((t − nT)/T),

which shows how the samples, x(nT), can be combined to reconstruct x(t).

Poisson shows that the Fourier series in Eq. 1 produces the periodic summation of X(f), regardless of f_s and B. Shannon, however, only derives the series coefficients for the case f_s = 2B. Virtually quoting Shannon's original paper:

x(n/(2B)) = (1/2π) ∫_{−2πB}^{2πB} X(ω) e^{iωn/(2B)} dω.

Shannon's proof of the theorem is complete at that point, but he goes on to discuss reconstruction via sinc functions, what we now call the Whittaker–Shannon interpolation formula as discussed above. He does not derive or prove the properties of the sinc function, as the Fourier pair relationship between the rect (the rectangular function) and sinc functions was well known by that time.[4]

Let x_n be the n-th sample.
Then the functionx(t){\displaystyle x(t)}is represented by: As in the other proof, the existence of the Fourier transform of the original signal is assumed, so the proof does not say whether the sampling theorem extends to bandlimited stationary random processes. The sampling theorem is usually formulated for functions of a single variable. Consequently, the theorem is directly applicable to time-dependent signals and is normally formulated in that context. However, the sampling theorem can be extended in a straightforward way to functions of arbitrarily many variables. Grayscale images, for example, are often represented as two-dimensional arrays (or matrices) of real numbers representing the relative intensities ofpixels(picture elements) located at the intersections of row and column sample locations. As a result, images require two independent variables, or indices, to specify each pixel uniquely—one for the row, and one for the column. Color images typically consist of a composite of three separate grayscale images, one to represent each of the three primary colors—red, green, and blue, orRGBfor short. Other colorspaces using 3-vectors for colors include HSV, CIELAB, XYZ, etc. Some colorspaces such as cyan, magenta, yellow, and black (CMYK) may represent color by four dimensions. All of these are treated asvector-valued functionsover a two-dimensional sampled domain. Similar to one-dimensional discrete-time signals, images can also suffer from aliasing if the sampling resolution, or pixel density, is inadequate. For example, a digital photograph of a striped shirt with high frequencies (in other words, the distance between the stripes is small), can cause aliasing of the shirt when it is sampled by the camera'simage sensor. The aliasing appears as amoiré pattern. The "solution" to higher sampling in the spatial domain for this case would be to move closer to the shirt, use a higher resolution sensor, or to optically blur the image before acquiring it with the sensor using anoptical low-pass filter. Another example is shown here in the brick patterns. The top image shows the effects when the sampling theorem's condition is not satisfied. When software rescales an image (the same process that creates the thumbnail shown in the lower image) it, in effect, runs the image through alow-pass filterfirst and thendownsamplesthe image to result in a smaller image that does not exhibit themoiré pattern. The top image is what happens when the image is downsampled without low-pass filtering: aliasing results. The sampling theorem applies to camera systems, where the scene and lens constitute an analog spatial signal source, and the image sensor is a spatial sampling device. Each of these components is characterized by amodulation transfer function(MTF), representing the precise resolution (spatial bandwidth) available in that component. Effects of aliasing or blurring can occur when the lens MTF and sensor MTF are mismatched. When the optical image which is sampled by the sensor device contains higher spatial frequencies than the sensor, the under sampling acts as a low-pass filter to reduce or eliminate aliasing. When the area of the sampling spot (the size of the pixel sensor) is not large enough to provide sufficientspatial anti-aliasing, a separate anti-aliasing filter (optical low-pass filter) may be included in a camera system to reduce the MTF of the optical image. 
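A minimal sketch of the rescaling point made above, assuming a crude box average as the low-pass filter (real image software uses better resampling kernels): fine stripes whose spatial frequency exceeds what the reduced sampling grid can represent are aliased away by naive decimation, while filtering first preserves the correct average brightness.

```python
import numpy as np

def box_downsample(img: np.ndarray, k: int) -> np.ndarray:
    """Low-pass (average over k x k blocks) and downsample in one step.
    A crude stand-in for the resampling filters image software really uses."""
    h, w = img.shape
    return img[: h - h % k, : w - w % k].reshape(h // k, k, w // k, k).mean(axis=(1, 3))

if __name__ == "__main__":
    # Vertical stripes with a 2-pixel period: too fine for a 4x-smaller grid.
    stripes = np.tile(np.array([0.0, 1.0]), (256, 128))
    naive = stripes[::4, ::4]                 # decimation with no filtering
    filtered = box_downsample(stripes, 4)     # filter first, then downsample
    print("true mean brightness:    ", stripes.mean())     # 0.5
    print("naive decimation mean:   ", naive.mean())       # 0.0 -> stripes aliased to a solid dark image
    print("filtered downsample mean:", filtered.mean())    # 0.5, brightness preserved
```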
Instead of requiring an optical filter, thegraphics processing unitofsmartphonecameras performsdigital signal processingto remove aliasing with a digital filter. Digital filters also apply sharpening to amplify the contrast from the lens at high spatial frequencies, which otherwise falls off rapidly at diffraction limits. The sampling theorem also applies to post-processing digital images, such as to up or down sampling. Effects of aliasing, blurring, and sharpening may be adjusted with digital filtering implemented in software, which necessarily follows the theoretical principles. To illustrate the necessity offs>2B,{\displaystyle f_{s}>2B,}consider the family of sinusoids generated by different values ofθ{\displaystyle \theta }in this formula: Withfs=2B{\displaystyle f_{s}=2B}or equivalentlyT=1/2B,{\displaystyle T=1/2B,}the samples are given by: regardless of the value ofθ.{\displaystyle \theta .}That sort of ambiguity is the reason for thestrictinequality of the sampling theorem's condition. As discussed by Shannon:[2] A similar result is true if the band does not start at zero frequency but at some higher value, and can be proved by a linear translation (corresponding physically tosingle-sideband modulation) of the zero-frequency case. In this case the elementary pulse is obtained fromsin⁡(x)/x{\displaystyle \sin(x)/x}by single-side-band modulation. That is, a sufficient no-loss condition for samplingsignalsthat do not havebasebandcomponents exists that involves thewidthof the non-zero frequency interval as opposed to its highest frequency component. Seesamplingfor more details and examples. For example, in order to sampleFM radiosignals in the frequency range of 100–102MHz, it is not necessary to sample at 204 MHz (twice the upper frequency), but rather it is sufficient to sample at 4 MHz (twice the width of the frequency interval). A bandpass condition is thatX(f)=0,{\displaystyle X(f)=0,}for all nonnegativef{\displaystyle f}outside the open band of frequencies: for some nonnegative integerN{\displaystyle N}. This formulation includes the normal baseband condition as the caseN=0.{\displaystyle N=0.} The corresponding interpolation function is the impulse response of an ideal brick-wallbandpass filter(as opposed to the idealbrick-walllowpass filterused above) with cutoffs at the upper and lower edges of the specified band, which is the difference between a pair of lowpass impulse responses: (N+1)sinc⁡((N+1)tT)−Nsinc⁡(NtT).{\displaystyle (N+1)\,\operatorname {sinc} \left({\frac {(N+1)t}{T}}\right)-N\,\operatorname {sinc} \left({\frac {Nt}{T}}\right).} Other generalizations, for example to signals occupying multiple non-contiguous bands, are possible as well. Even the most generalized form of the sampling theorem does not have a provably true converse. That is, one cannot conclude that information is necessarily lost just because the conditions of the sampling theorem are not satisfied; from an engineering perspective, however, it is generally safe to assume that if the sampling theorem is not satisfied then information will most likely be lost. The sampling theory of Shannon can be generalized for the case ofnonuniform sampling, that is, samples not taken equally spaced in time. 
The Shannon sampling theory for non-uniform sampling states that a band-limited signal can be perfectly reconstructed from its samples if the average sampling rate satisfies the Nyquist condition.[5]Therefore, although uniformly spaced samples may result in easier reconstruction algorithms, it is not a necessary condition for perfect reconstruction. The general theory for non-baseband and nonuniform samples was developed in 1967 byHenry Landau.[6]He proved that the average sampling rate (uniform or otherwise) must be twice theoccupiedbandwidth of the signal, assuming it isa prioriknown what portion of the spectrum was occupied. In the late 1990s, this work was partially extended to cover signals for which the amount of occupied bandwidth is known but the actual occupied portion of the spectrum is unknown.[7]In the 2000s, a complete theory was developed (see the sectionSampling below the Nyquist rate under additional restrictionsbelow) usingcompressed sensing. In particular, the theory, using signal processing language, is described in a 2009 paper by Mishali and Eldar.[8]They show, among other things, that if the frequency locations are unknown, then it is necessary to sample at least at twice the Nyquist criteria; in other words, you must pay at least a factor of 2 for not knowing the location of thespectrum. Note that minimum sampling requirements do not necessarily guaranteestability. The Nyquist–Shannon sampling theorem provides asufficient conditionfor the sampling and reconstruction of a band-limited signal. When reconstruction is done via theWhittaker–Shannon interpolation formula, the Nyquist criterion is also a necessary condition to avoid aliasing, in the sense that if samples are taken at a slower rate than twice the band limit, then there are some signals that will not be correctly reconstructed. However, if further restrictions are imposed on the signal, then the Nyquist criterion may no longer be anecessary condition. A non-trivial example of exploiting extra assumptions about the signal is given by the recent field ofcompressed sensing, which allows for full reconstruction with a sub-Nyquist sampling rate. Specifically, this applies to signals that are sparse (or compressible) in some domain. As an example, compressed sensing deals with signals that may have a low overall bandwidth (say, theeffectivebandwidthEB{\displaystyle EB}) but the frequency locations are unknown, rather than all together in a single band, so that thepassband techniquedoes not apply. In other words, the frequency spectrum is sparse. Traditionally, the necessary sampling rate is thus2B.{\displaystyle 2B.}Using compressed sensing techniques, the signal could be perfectly reconstructed if it is sampled at a rate slightly lower than2EB.{\displaystyle 2EB.}With this approach, reconstruction is no longer given by a formula, but instead by the solution to alinear optimization program. Another example where sub-Nyquist sampling is optimal arises under the additional constraint that the samples are quantized in an optimal manner, as in a combined system of sampling and optimallossy compression.[9]This setting is relevant in cases where the joint effect of sampling andquantizationis to be considered, and can provide a lower bound for the minimal reconstruction error that can be attained in sampling and quantizing arandom signal. 
For stationary Gaussian random signals, this lower bound is usually attained at a sub-Nyquist sampling rate, indicating that sub-Nyquist sampling is optimal for this signal model under optimalquantization.[10] The sampling theorem was implied by the work ofHarry Nyquistin 1928,[11]in which he showed that up to2B{\displaystyle 2B}independent pulse samples could be sent through a system of bandwidthB{\displaystyle B}; but he did not explicitly consider the problem of sampling and reconstruction of continuous signals. About the same time,Karl Küpfmüllershowed a similar result[12]and discussed the sinc-function impulse response of a band-limiting filter, via its integral, the step-responsesine integral; this bandlimiting and reconstruction filter that is so central to the sampling theorem is sometimes referred to as aKüpfmüller filter(but seldom so in English). The sampling theorem, essentially adualof Nyquist's result, was proved byClaude E. Shannon.[2]The mathematicianE. T. Whittakerpublished similar results in 1915,[13]J. M. Whittaker in 1935,[14]andGaborin 1946 ("Theory of communication"). In 1948 and 1949, Claude E. Shannon published the two revolutionary articles in which he foundedinformation theory.[15][16][2]In Shannon's "A Mathematical Theory of Communication", the sampling theorem is formulated as "Theorem 13": Letf(t){\displaystyle f(t)}contain no frequencies over W. Then f(t)=∑n=−∞∞Xnsin⁡π(2Wt−n)π(2Wt−n),{\displaystyle f(t)=\sum _{n=-\infty }^{\infty }X_{n}{\frac {\sin \pi (2Wt-n)}{\pi (2Wt-n)}},}whereXn=f(n2W).{\displaystyle X_{n}=f\left({\frac {n}{2W}}\right).} It was not until these articles were published that the theorem known as "Shannon's sampling theorem" became common property among communication engineers, although Shannon himself writes that this is a fact which is common knowledge in the communication art.[B]A few lines further on, however, he adds: "but in spite of its evident importance, [it] seems not to have appeared explicitly in the literature ofcommunication theory". Despite his sampling theorem being published at the end of the 1940s, Shannon had derived his sampling theorem as early as 1940.[17] Others who have independently discovered or played roles in the development of the sampling theorem have been discussed in several historical articles, for example, by Jerri[18]and by Lüke.[19]For example, Lüke points out that H. Raabe, an assistant to Küpfmüller, proved the theorem in his 1939 Ph.D. dissertation; the termRaabe conditioncame to be associated with the criterion for unambiguous representation (sampling rate greater than twice the bandwidth). Meijering[20]mentions several other discoverers and names in a paragraph and pair of footnotes: As pointed out by Higgins, the sampling theorem should really be considered in two parts, as done above: the first stating the fact that a bandlimited function is completely determined by its samples, the second describing how to reconstruct the function using its samples. Both parts of the sampling theorem were given in a somewhat different form by J. M. Whittaker and before him also by Ogura. They were probably not aware of the fact that the first part of the theorem had been stated as early as 1897 by Borel.[Meijering 1]As we have seen, Borel also used around that time what became known as the cardinal series. However, he appears not to have made the link. In later years it became known that the sampling theorem had been presented before Shannon to the Russian communication community byKotel'nikov. 
In more implicit, verbal form, it had also been described in the German literature by Raabe. Several authors have mentioned that Someya introduced the theorem in the Japanese literature parallel to Shannon. In the English literature, Weston introduced it independently of Shannon around the same time.[Meijering 2] In Russian literature it is known as the Kotelnikov's theorem, named afterVladimir Kotelnikov, who discovered it in 1933.[21] Exactly how, when, or whyHarry Nyquisthad his name attached to the sampling theorem remains obscure. The termNyquist Sampling Theorem(capitalized thus) appeared as early as 1959 in a book from his former employer,Bell Labs,[22]and appeared again in 1963,[23]and not capitalized in 1965.[24]It had been called theShannon Sampling Theoremas early as 1954,[25]but also justthe sampling theoremby several other books in the early 1950s. In 1958,BlackmanandTukeycited Nyquist's 1928 article as a reference forthe sampling theorem of information theory,[26]even though that article does not treat sampling and reconstruction of continuous signals as others did. Their glossary of terms includes these entries: Exactly what "Nyquist's result" they are referring to remains mysterious. When Shannon stated and proved the sampling theorem in his 1949 article, according to Meijering,[20]"he referred to the critical sampling intervalT=12W{\displaystyle T={\frac {1}{2W}}}as theNyquist intervalcorresponding to the bandW,{\displaystyle W,}in recognition of Nyquist's discovery of the fundamental importance of this interval in connection with telegraphy". This explains Nyquist's name on the critical interval, but not on the theorem. Similarly, Nyquist's name was attached toNyquist ratein 1953 byHarold S. Black: If the essential frequency range is limited toB{\displaystyle B}cycles per second,2B{\displaystyle 2B}was given by Nyquist as the maximum number of code elements per second that could be unambiguously resolved, assuming the peak interference is less than half a quantum step. This rate is generally referred to assignaling at the Nyquist rateand12B{\displaystyle {\frac {1}{2B}}}has been termed aNyquist interval. According to theOxford English Dictionary, this may be the origin of the termNyquist rate. In Black's usage, it is not a sampling rate, but a signaling rate.
https://en.wikipedia.org/wiki/Nyquist%E2%80%93Shannon_sampling_theorem
Indigital communicationordata transmission,Eb/N0{\displaystyle E_{b}/N_{0}}(energy per bit to noise power spectral density ratio) is a normalizedsignal-to-noise ratio(SNR) measure, also known as the "SNR per bit". It is especially useful when comparing thebit error rate(BER) performance of different digitalmodulationschemes without taking bandwidth into account. As the description implies,Eb{\displaystyle E_{b}}is the signal energy associated with each user data bit; it is equal to the signal power divided by the user bit rate (notthe channel symbol rate). If signal power is in watts and bit rate is in bits per second,Eb{\displaystyle E_{b}}is in units ofjoules(watt-seconds).N0{\displaystyle N_{0}}is thenoise spectral density, the noise power in a 1 Hz bandwidth, measured in watts per hertz or joules. These are the same units asEb{\displaystyle E_{b}}so the ratioEb/N0{\displaystyle E_{b}/N_{0}}isdimensionless; it is frequently expressed indecibels.Eb/N0{\displaystyle E_{b}/N_{0}}directly indicates the power efficiency of the system without regard to modulation type, error correction coding or signal bandwidth (including any use ofspread spectrum). This also avoids any confusion as towhichof several definitions of "bandwidth" to apply to the signal. But when the signal bandwidth is well defined,Eb/N0{\displaystyle E_{b}/N_{0}}is also equal to the signal-to-noise ratio (SNR) in that bandwidth divided by the "gross"link spectral efficiencyin(bit/s)/Hz, where the bits in this context again refer to user data bits, irrespective of error correction information and modulation type.[1] Eb/N0{\displaystyle E_{b}/N_{0}}must be used with care on interference-limited channels since additive white noise (with constant noise densityN0{\displaystyle N_{0}}) is assumed, and interference is not always noise-like. Inspread spectrumsystems (e.g.,CDMA), the interferenceissufficiently noise-like that it can be represented asI0{\displaystyle I_{0}}and added to the thermal noiseN0{\displaystyle N_{0}}to produce the overall ratioEb/(N0+I0){\displaystyle E_{b}/(N_{0}+I_{0})}. Eb/N0{\displaystyle E_{b}/N_{0}}is closely related to thecarrier-to-noise ratio(CNR orCN{\displaystyle {\frac {C}{N}}}), i.e. thesignal-to-noise ratio(SNR) of the received signal, after the receiver filter but before detection: CN=EbN0fbB{\displaystyle {\frac {C}{N}}={\frac {E_{\text{b}}}{N_{0}}}{\frac {f_{\text{b}}}{B}}} wherefb{\displaystyle f_{b}}is the channel data rate (net bit rate) andBis the channel bandwidth. The equivalent expression in logarithmic form (dB): CNRdB=10log10⁡(EbN0)+10log10⁡(fbB){\displaystyle {\text{CNR}}_{\text{dB}}=10\log _{10}\left({\frac {E_{\text{b}}}{N_{0}}}\right)+10\log _{10}\left({\frac {f_{\text{b}}}{B}}\right)} Caution: Sometimes, the noise power is denoted byN0/2{\displaystyle N_{0}/2}when negative frequencies and complex-valued equivalentbasebandsignals are considered rather thanpassbandsignals, and in that case, there will be a 3 dB difference. Eb/N0{\displaystyle E_{b}/N_{0}}can be seen as a normalized measure of theenergy per symbol to noise power spectral density(Es/N0{\displaystyle E_{s}/N_{0}}): EbN0=EsρN0{\displaystyle {\frac {E_{b}}{N_{0}}}={\frac {E_{\text{s}}}{\rho N_{0}}}} whereEs{\displaystyle E_{s}}is the energy per symbol in joules andρis the nominalspectral efficiencyin (bits/s)/Hz.[2]Es/N0{\displaystyle E_{s}/N_{0}}is also commonly used in the analysis of digital modulation schemes. 
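The relation C/N = (Eb/N0)·(fb/B) quoted above is straightforward to apply numerically. The small helper below is a sketch of that conversion in decibels; the function name and the example numbers (2 Mbit/s in a 1 MHz channel at Eb/N0 = 6 dB) are arbitrary choices for illustration.

```python
import math

def cnr_db(ebn0_db: float, bit_rate_hz: float, bandwidth_hz: float) -> float:
    """Carrier-to-noise ratio in dB from Eb/N0 in dB, using
    C/N = (Eb/N0) * (fb / B), i.e. CNR_dB = (Eb/N0)_dB + 10*log10(fb / B)."""
    return ebn0_db + 10 * math.log10(bit_rate_hz / bandwidth_hz)

# Example with illustrative numbers: 2 Mbit/s of user data in a 1 MHz channel
print(round(cnr_db(6.0, 2e6, 1e6), 2))   # 9.01 dB, since 10*log10(2) is about 3.01 dB
```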
The two quotients are related to each other according to the following: EsN0=EbN0log2⁡(M){\displaystyle {\frac {E_{\text{s}}}{N_{0}}}={\frac {E_{\text{b}}}{N_{0}}}\log _{2}(M)} whereMis the number of alternative modulation symbols, e.g.M=4{\displaystyle M=4}for QPSK andM=8{\displaystyle M=8}for 8PSK. This is the energy per bit, not the energy per information bit. Es/N0{\displaystyle E_{s}/N_{0}}can further be expressed as: EsN0=CNBfs{\displaystyle {\frac {E_{\text{s}}}{N_{0}}}={\frac {C}{N}}{\frac {B}{f_{\text{s}}}}} whereCN{\displaystyle {\frac {C}{N}}}is thecarrier-to-noise ratioorsignal-to-noise ratio,Bis the channel bandwidth in hertz, andfs{\displaystyle f_{s}}is the symbol rate inbaudor symbols per second. TheShannon–Hartley theoremsays that the limit of reliableinformation rate(data rate exclusive of error-correcting codes) of a channel depends on bandwidth and signal-to-noise ratio according to: I<Blog2⁡(1+SN){\displaystyle I<B\log _{2}\left(1+{\frac {S}{N}}\right)} whereIis theinformation rateinbits per secondexcludingerror-correcting codes,Bis thebandwidthof the channel inhertz,Sis the total signal power (equivalent to the carrier powerC), andNis the total noise power in the bandwidth. This equation can be used to establish a bound onEb/N0{\displaystyle E_{b}/N_{0}}for any system that achieves reliable communication, by considering a gross bit rateRequal to the net bit rateIand therefore an average energy per bit ofEb=S/R{\displaystyle E_{b}=S/R}, with noise spectral density ofN0=N/B{\displaystyle N_{0}=N/B}. For this calculation, it is conventional to define a normalized rateRl=R/(2B){\displaystyle R_{l}=R/(2B)}, a bandwidth utilization parameter of bits per second per half hertz, or bits per dimension (a signal of bandwidthBcan be encoded with2B{\displaystyle 2B}dimensions, according to theNyquist–Shannon sampling theorem). Making appropriate substitutions, the Shannon limit is: RB=2Rl<log2⁡(1+2RlEbN0){\displaystyle {R \over B}=2R_{l}<\log _{2}\left(1+2R_{l}{\frac {E_{\text{b}}}{N_{0}}}\right)} Which can be solved to get the Shannon-limit bound onEb/N0{\displaystyle E_{b}/N_{0}}: EbN0>22Rl−12Rl{\displaystyle {\frac {E_{\text{b}}}{N_{0}}}>{\frac {2^{2R_{l}}-1}{2R_{l}}}} When the data rate is small compared to the bandwidth, so thatRl{\displaystyle R_{l}}is near zero, the bound, sometimes called theultimate Shannon limit,[3]is: EbN0>ln⁡(2){\displaystyle {\frac {E_{\text{b}}}{N_{0}}}>\ln(2)} which corresponds to −1.59dB. This often-quoted limit of −1.59 dB appliesonlyto the theoretical case of infinite bandwidth. The Shannon limit for finite-bandwidth signals is always higher. For any given system of coding and decoding, there exists what is known as acutoff rateR0{\displaystyle R_{0}}, typically corresponding to anEb/N0{\displaystyle E_{b}/N_{0}}about 2 dB above the Shannon capacity limit.[citation needed]The cutoff rate used to be thought of as the limit on practicalerror correction codeswithout an unbounded increase in processing complexity, but has been rendered largely obsolete by the more recent discovery ofturbo codes,low-density parity-check(LDPC) andpolarcodes.
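The Shannon-limit bound on Eb/N0 quoted above is easy to evaluate. The sketch below (spectral efficiencies chosen arbitrarily for illustration) prints the minimum Eb/N0 in dB for several values of R/B and confirms the often-quoted −1.59 dB ultimate limit as the rate tends to zero.

```python
import math

def ebn0_min_db(spectral_eff_bits_per_hz: float) -> float:
    """Shannon-limit lower bound on Eb/N0 (in dB) at a given gross spectral
    efficiency R/B in (bit/s)/Hz, using R_l = R/(2B) and
    Eb/N0 > (2**(2*R_l) - 1) / (2*R_l)."""
    rl = spectral_eff_bits_per_hz / 2.0
    bound = (2 ** (2 * rl) - 1) / (2 * rl)
    return 10 * math.log10(bound)

for eff in (0.1, 1.0, 2.0, 6.0):
    print(eff, round(ebn0_min_db(eff), 2))      # -1.44, 0.0, 1.76, 10.21 dB

# Ultimate (infinite-bandwidth) limit: Eb/N0 > ln(2), i.e. about -1.59 dB
print(round(10 * math.log10(math.log(2)), 2))
```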
https://en.wikipedia.org/wiki/Eb/N0
The Bahl-Cocke-Jelinek-Raviv (BCJR) algorithm is an algorithm for maximum a posteriori decoding of error correcting codes defined on trellises (principally convolutional codes). The algorithm is named after its inventors: Bahl, Cocke, Jelinek and Raviv.[1] This algorithm is critical to modern iteratively-decoded error-correcting codes, including turbo codes and low-density parity-check codes. Operating on the code's trellis, it computes forward and backward state probabilities and combines them into a posteriori probabilities for each transmitted bit. A well-known simplification is due to Berrou, Glavieux and Thitimajshima.[2][3]
https://en.wikipedia.org/wiki/BCJR_algorithm
Intelecommunication, aconvolutional codeis a type oferror-correcting codethat generates parity symbols via the sliding application of aboolean polynomialfunction to a data stream. The sliding application represents the 'convolution' of the encoder over the data, which gives rise to the term 'convolutional coding'. The sliding nature of the convolutional codes facilitatestrellisdecoding using a time-invariant trellis. Time invariant trellis decoding allows convolutional codes to be maximum-likelihood soft-decision decoded with reasonable complexity. The ability to perform economical maximum likelihood soft decision decoding is one of the major benefits of convolutional codes. This is in contrast to classic block codes, which are generally represented by a time-variant trellis and therefore are typically hard-decision decoded. Convolutional codes are often characterized by the basecode rateand the depth (or memory) of the encoder[n,k,K]{\displaystyle [n,k,K]}. The base code rate is typically given asn/k{\displaystyle n/k}, wherenis the raw input data rate andkis the data rate of output channel encoded stream.nis less thankbecause channel coding inserts redundancy in the input bits. The memory is often called the "constraint length"K, where the output is a function of the current input as well as the previousK−1{\displaystyle K-1}inputs. The depth may also be given as the number of memory elementsvin the polynomial or the maximum possible number of states of the encoder (typically:2v{\displaystyle 2^{v}}). Convolutional codes are often described as continuous. However, it may also be said that convolutional codes have arbitrary block length, rather than being continuous, since most real-world convolutional encoding is performed on blocks of data. Convolutionally encoded block codes typically employ termination. The arbitrary block length of convolutional codes can also be contrasted to classicblock codes, which generally have fixed block lengths that are determined by algebraic properties. The code rate of a convolutional code is commonly modified viasymbol puncturing. For example, a convolutional code with a 'mother' code raten/k=1/2{\displaystyle n/k=1/2}may be punctured to a higher rate of, for example,7/8{\displaystyle 7/8}simply by not transmitting a portion of code symbols. The performance of a punctured convolutional code generally scales well with the amount of parity transmitted. The ability to perform economical soft decision decoding on convolutional codes, as well as the block length and code rate flexibility of convolutional codes, makes them very popular for digital communications. Convolutional codes were introduced in 1955 byPeter Elias. It was thought that convolutional codes could be decoded with arbitrary quality at the expense of computation and delay. In 1967,Andrew Viterbidetermined that convolutional codes could be maximum-likelihood decoded with reasonable complexity using time invariant trellis based decoders — theViterbi algorithm. Other trellis-based decoder algorithms were later developed, including theBCJRdecoding algorithm. Recursive systematic convolutional codes were invented byClaude Berrouaround 1991. These codes proved especially useful for iterative processing including the processing of concatenated codes such asturbo codes.[1] Using the "convolutional" terminology, a classic convolutional code might be considered aFinite impulse response(FIR) filter, while a recursive convolutional code might be considered anInfinite impulse response(IIR) filter. 
Convolutional codes are used extensively to achieve reliable data transfer in numerous applications, such as digital video, radio, mobile communications (e.g., in GSM, GPRS, EDGE and 3G networks (until 3GPP Release 7)[3][4]) and satellite communications.[5] These codes are often implemented in concatenation with a hard-decision code, particularly Reed–Solomon. Prior to turbo codes such constructions were the most efficient, coming closest to the Shannon limit.

To convolutionally encode data, start with k memory registers, each holding one input bit. Unless otherwise specified, all memory registers start with a value of 0. The encoder has n modulo-2 adders (a modulo-2 adder can be implemented with a single Boolean XOR gate, where the logic is: 0+0 = 0, 0+1 = 1, 1+0 = 1, 1+1 = 0) and n generator polynomials, one for each adder (see figure below). An input bit m1 is fed into the leftmost register. Using the generator polynomials and the existing values in the remaining registers, the encoder outputs n symbols. These symbols may be transmitted or punctured depending on the desired code rate. Now bit-shift all register values to the right (m1 moves to m0, m0 moves to m−1) and wait for the next input bit. If there are no remaining input bits, the encoder continues shifting until all registers have returned to the zero state (flush-bit termination).

The figure below is a rate-1/3 encoder with constraint length (K) of 3. Generator polynomials are G1 = (1,1,1), G2 = (0,1,1), and G3 = (1,0,1). Therefore, the output bits are calculated (modulo 2) as follows:

n1 = m1 + m0 + m−1
n2 = m0 + m−1
n3 = m1 + m−1.

Convolutional codes can be systematic or non-systematic. Non-systematic convolutional codes are more popular due to better noise immunity, which relates to the free distance of the convolutional code.[6]

The encoder in the figure above is a non-recursive encoder. A recursive encoder, by contrast, admits a feedback structure. The example recursive encoder is systematic because the input data is also used in the output symbols (Output 2). Codes with output symbols that do not include the input data are called non-systematic. Recursive codes are typically systematic and, conversely, non-recursive codes are typically non-systematic. This is not a strict requirement, but a common practice. The example encoder in Img. 2 is an 8-state encoder because its 3 registers create 8 possible encoder states (2^3). A corresponding decoder trellis will typically use 8 states as well.

Recursive systematic convolutional (RSC) codes have become more popular due to their use in turbo codes. Recursive systematic codes are also referred to as pseudo-systematic codes. Other RSC codes find use, for example, as inner constituent codes for serial concatenated convolutional codes (SCCCs) and in LDPC code implementations, in multidimensional turbo codes, as constituent codes in low-error-rate turbo codes for applications such as satellite links, and as SCCC outer codes.

A convolutional encoder is so called because it performs a convolution of the input stream with the encoder's impulse responses:

y_j = x * h_j, that is, y_j[i] = \sum_k h_j[k]\, x[i-k],

where x is the input sequence, y_j is the sequence from output j, h_j is the impulse response for output j, and * denotes convolution. A convolutional encoder is a discrete linear time-invariant system. Every output of an encoder can be described by its own transfer function, which is closely related to the generator polynomial. An impulse response is connected with a transfer function through the Z-transform.
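The rate-1/3, K = 3 encoder described above can be written directly from its generator polynomials. The following is a minimal, unoptimized Python sketch with flush-bit termination; the register layout follows the text (the figure itself is not reproduced here), and the function name is our own.

```python
# Rate-1/3 convolutional encoder with G1=(1,1,1), G2=(0,1,1), G3=(1,0,1),
# constraint length K=3, and flush-bit termination back to the zero state.
def conv_encode_rate_third(bits):
    m0 = m_1 = 0                       # the two memory registers, initialised to zero
    out = []
    for m1 in list(bits) + [0, 0]:     # two flush bits return the registers to zero
        out.append(m1 ^ m0 ^ m_1)      # n1, from G1 = (1,1,1)
        out.append(m0 ^ m_1)           # n2, from G2 = (0,1,1)
        out.append(m1 ^ m_1)           # n3, from G3 = (1,0,1)
        m0, m_1 = m1, m0               # shift: m1 -> m0, m0 -> m-1
    return out

print(conv_encode_rate_third([1, 0, 1, 1]))   # 18 coded bits for 4 data bits + 2 flush bits
```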
Transfer functions for the first (non-recursive) encoder, following from the generator polynomials above, are:

H_1(z) = 1 + z^{-1} + z^{-2},  H_2(z) = z^{-1} + z^{-2},  H_3(z) = 1 + z^{-2}.

Transfer functions for the second (recursive) encoder are rational functions, with the feedback polynomial appearing in the denominator.

Define m by

m = \max_i \deg H_i(1/z),

where, for any rational function f(z) = P(z)/Q(z), deg f = max(deg P, deg Q). Then m is the maximum of the polynomial degrees of the H_i(1/z), and the constraint length is defined as K = m + 1. For instance, in the first example the constraint length is 3, and in the second the constraint length is 4.

A convolutional encoder is a finite state machine. An encoder with n binary cells will have 2^n states. Imagine that the encoder (shown in Img. 1, above) has '1' in the left memory cell (m0) and '0' in the right one (m−1). (m1 is not really a memory cell because it holds the current input value.) We designate such a state as "10". Depending on the input bit, the encoder at the next step can move either to the "01" state or the "11" state. One can see that not all transitions are possible; for example, the encoder cannot move from the "10" state to "00", nor can it remain in the "10" state. All possible transitions can be shown as below:

An actual encoded sequence can be represented as a path on this graph. One valid path is shown in red as an example. This diagram gives us an idea about decoding: if a received sequence doesn't fit this graph, then it was received with errors, and we must choose the nearest correct (graph-fitting) sequence. The real decoding algorithms exploit this idea.

The free distance[7] (d) is the minimal Hamming distance between different encoded sequences. The correcting capability (t) of a convolutional code is the number of errors that can be corrected by the code. It can be calculated as

t = \left\lfloor \frac{d - 1}{2} \right\rfloor.

Since a convolutional code doesn't use blocks, processing instead a continuous bitstream, the value of t applies to a group of errors located relatively close to each other. That is, multiple groups of t errors can usually be fixed when the groups are relatively far apart. Free distance can be interpreted as the minimal length of an erroneous "burst" at the output of a convolutional decoder. The fact that errors appear as "bursts" should be accounted for when designing a concatenated code with an inner convolutional code. The popular solution for this problem is to interleave data before convolutional encoding, so that the outer block (usually Reed–Solomon) code can correct most of the errors.

Several algorithms exist for decoding convolutional codes. For relatively small values of k, the Viterbi algorithm is universally used as it provides maximum likelihood performance and is highly parallelizable. Viterbi decoders are thus easy to implement in VLSI hardware and in software on CPUs with SIMD instruction sets. Longer constraint-length codes are more practically decoded with any of several sequential decoding algorithms, of which the Fano algorithm is the best known. Unlike Viterbi decoding, sequential decoding is not maximum likelihood, but its complexity increases only slightly with constraint length, allowing the use of strong, long-constraint-length codes. Such codes were used in the Pioneer program of the early 1970s to Jupiter and Saturn, but gave way to shorter, Viterbi-decoded codes, usually concatenated with large Reed–Solomon error correction codes that steepen the overall bit-error-rate curve and produce extremely low residual undetected error rates. Both the Viterbi and sequential decoding algorithms return hard decisions: the bits that form the most likely codeword.
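As a companion to the encoder sketch above, the following is a minimal hard-decision Viterbi decoder for the same rate-1/3, K = 3 code. It is a didactic sketch, not a production decoder: it assumes the conv_encode_rate_third function from the earlier sketch, that the encoder started in and was flushed back to the all-zero state, and a hard-decision channel.

```python
def viterbi_decode_rate_third(received):
    # `received` is a list of hard-decision bits (length a multiple of 3),
    # as produced by conv_encode_rate_third, including the two flush bits.
    def branch_outputs(state, u):          # state = (m0, m_1), u = current input bit
        m0, m_1 = state
        return (u ^ m0 ^ m_1, m0 ^ m_1, u ^ m_1)

    states = [(0, 0), (0, 1), (1, 0), (1, 1)]
    INF = float("inf")
    metric = {s: (0 if s == (0, 0) else INF) for s in states}   # encoder starts at 00
    history = []                                                 # survivor pointers

    for i in range(0, len(received), 3):
        r = received[i:i + 3]
        new_metric = {s: INF for s in states}
        step = {}
        for s in states:
            if metric[s] == INF:
                continue
            for u in (0, 1):
                dist = sum(a != b for a, b in zip(branch_outputs(s, u), r))
                nxt = (u, s[0])                                  # shift: m1 -> m0, m0 -> m-1
                if metric[s] + dist < new_metric[nxt]:
                    new_metric[nxt] = metric[s] + dist
                    step[nxt] = (s, u)
        history.append(step)
        metric = new_metric

    state, bits = (0, 0), []               # trace back from the flushed all-zero state
    for step in reversed(history):
        prev, u = step[state]
        bits.append(u)
        state = prev
    bits.reverse()
    return bits[:-2]                       # drop the two flush bits

coded = conv_encode_rate_third([1, 0, 1, 1])
coded[2] ^= 1                              # inject a single channel error
print(viterbi_decode_rate_third(coded))    # [1, 0, 1, 1]
```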
An approximate confidence measure can be added to each bit by use of the soft-output Viterbi algorithm. Maximum a posteriori (MAP) soft decisions for each bit can be obtained by use of the BCJR algorithm. In practice, industry uses predefined convolutional code structures established by prior research; this avoids the risk of selecting a catastrophic convolutional code, which would cause a larger number of errors. An especially popular Viterbi-decoded convolutional code, used at least since the Voyager program, has a constraint length K of 7 and a rate r of 1/2.[12] Mars Pathfinder, Mars Exploration Rover and the Cassini probe to Saturn use a K of 15 and a rate of 1/6; this code performs about 2 dB better than the simpler K = 7 code at a cost of 256× in decoding complexity (compared to Voyager mission codes). A rate-1/2 convolutional code with a constraint length of 5 is used in GSM as an error correction technique.[13]

A convolutional code with any code rate can be designed based on polynomial selection;[15] however, in practice, a puncturing procedure is often used to achieve the required code rate. Puncturing is a technique used to make an m/n rate code from a "basic" low-rate (e.g., 1/n) code. It is achieved by deleting some bits in the encoder output. Bits are deleted according to a puncturing matrix. For example, to make a code with rate 2/3 from a rate-1/2 mother code, a commonly used pattern is to transmit every first bit from the first branch and every bit from the second branch. The specific order of transmission is defined by the respective communication standard. Punctured convolutional codes are widely used in satellite communications, for example, in Intelsat systems and Digital Video Broadcasting. Punctured convolutional codes are also called "perforated".

Simple Viterbi-decoded convolutional codes are now giving way to turbo codes, a new class of iterated short convolutional codes that closely approach the theoretical limits imposed by Shannon's theorem with much less decoding complexity than the Viterbi algorithm on the long convolutional codes that would be required for the same performance. Concatenation with an outer algebraic code (e.g., Reed–Solomon) addresses the issue of error floors inherent to turbo code designs.
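Puncturing can be illustrated in a few lines of code. The matrix used below is one commonly cited rate-2/3 pattern consistent with the description above (keep every first bit of the first branch, keep every bit of the second branch); the article's own table of matrices is not reproduced in this extract, so treat both the matrix and the sample bits as assumptions.

```python
import numpy as np

# Puncturing a rate-1/2 mother code up to rate 2/3.
# Rows of P correspond to encoder output branches, columns to time instants
# within the puncturing period; a 1 means "transmit", a 0 means "delete".
P = np.array([[1, 0],    # branch 1: keep the 1st bit of each period, drop the 2nd
              [1, 1]])   # branch 2: keep every bit

def puncture(coded_bits, pattern):
    """coded_bits is the interleaved rate-1/2 output (branch1, branch2, branch1, ...)."""
    n_branches, period = pattern.shape
    out = []
    for t, bit in enumerate(coded_bits):
        branch = t % n_branches                 # which output branch this bit came from
        column = (t // n_branches) % period     # position within the puncturing period
        if pattern[branch, column]:
            out.append(bit)
    return out

coded = [0, 1, 1, 1, 0, 0, 1, 0]   # 8 mother-code bits, i.e. 4 input bits at rate 1/2
print(puncture(coded, P))          # 6 bits survive: rate 4/6 = 2/3
```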
https://en.wikipedia.org/wiki/Convolutional_code
Incomputing,telecommunication,information theory, andcoding theory,forward error correction(FEC) orchannel coding[1][2][3]is a technique used forcontrolling errorsindata transmissionover unreliable or noisycommunication channels. The central idea is that the sender encodes the message in aredundantway, most often by using anerror correction code, orerror correcting code(ECC).[4][5]The redundancy allows the receiver not only todetect errorsthat may occur anywhere in the message, but often to correct a limited number of errors. Therefore areverse channelto request re-transmission may not be needed. The cost is a fixed, higher forward channel bandwidth. The American mathematicianRichard Hammingpioneered this field in the 1940s and invented the first error-correcting code in 1950: theHamming (7,4) code.[5] FEC can be applied in situations where re-transmissions are costly or impossible, such as one-way communication links or when transmitting to multiple receivers inmulticast. Long-latency connections also benefit; in the case of satellites orbiting distant planets, retransmission due to errors would create a delay of several hours. FEC is also widely used inmodemsand incellular networks. FEC processing in a receiver may be applied to a digital bit stream or in the demodulation of a digitally modulated carrier. For the latter, FEC is an integral part of the initialanalog-to-digital conversionin the receiver. TheViterbi decoderimplements asoft-decision algorithmto demodulate digital data from an analog signal corrupted by noise. Many FEC decoders can also generate abit-error rate(BER) signal which can be used as feedback to fine-tune the analog receiving electronics. FEC information is added tomass storage(magnetic, optical and solid state/flash based) devices to enable recovery of corrupted data, and is used asECCcomputer memoryon systems that require special provisions for reliability. The maximum proportion of errors or missing bits that can be corrected is determined by the design of the ECC, so different forward error correcting codes are suitable for different conditions. In general, a stronger code induces more redundancy that needs to be transmitted using the available bandwidth, which reduces the effective bit-rate while improving the received effectivesignal-to-noise ratio. Thenoisy-channel coding theoremofClaude Shannoncan be used to compute the maximum achievable communication bandwidth for a given maximum acceptable error probability. This establishes bounds on the theoretical maximum information transfer rate of a channel with some given base noise level. However, the proof is not constructive, and hence gives no insight of how to build a capacity achieving code. After years of research, some advanced FEC systems likepolar code[3]come very close to the theoretical maximum given by the Shannon channel capacity under the hypothesis of an infinite length frame. ECC is accomplished by addingredundancyto the transmitted information using an algorithm. A redundant bit may be a complicated function of many original information bits. The original information may or may not appear literally in the encoded output; codes that include the unmodified input in the output aresystematic, while those that do not arenon-systematic. A simplistic example of ECC is to transmit each data bit three times, which is known as a (3,1)repetition code. Through a noisy channel, a receiver might see eight versions of the output, see table below. 
This allows an error in any one of the three samples to be corrected by "majority vote", or "democratic voting". The correcting ability of this ECC is: Though simple to implement and widely used, thistriple modular redundancyis a relatively inefficient ECC. Better ECC codes typically examine the last several tens or even the last several hundreds of previously received bits to determine how to decode the current small handful of bits (typically in groups of two to eight bits). ECC could be said to work by "averaging noise"; since each data bit affects many transmitted symbols, the corruption of some symbols by noise usually allows the original user data to be extracted from the other, uncorrupted received symbols that also depend on the same user data. Most telecommunication systems use a fixedchannel codedesigned to tolerate the expected worst-casebit error rate, and then fail to work at all if the bit error rate is ever worse. However, some systems adapt to the given channel error conditions: some instances ofhybrid automatic repeat-requestuse a fixed ECC method as long as the ECC can handle the error rate, then switch toARQwhen the error rate gets too high;adaptive modulation and codinguses a variety of ECC rates, adding more error-correction bits per packet when there are higher error rates in the channel, or taking them out when they are not needed. The two main categories of ECC codes areblock codesandconvolutional codes. There are many types of block codes;Reed–Solomon codingis noteworthy for its widespread use incompact discs,DVDs, andhard disk drives. Other examples of classical block codes includeGolay,BCH,Multidimensional parity, andHamming codes. Hamming ECC is commonly used to correctNAND flashmemory errors.[6]This provides single-bit error correction and 2-bit error detection. Hamming codes are only suitable for more reliablesingle-level cell(SLC) NAND. Densermulti-level cell(MLC) NAND may use multi-bit correcting ECC such as BCH or Reed–Solomon.[7][8]NOR Flash typically does not use any error correction.[7] Classical block codes are usually decoded usinghard-decisionalgorithms,[9]which means that for every input and output signal a hard decision is made whether it corresponds to a one or a zero bit. In contrast, convolutional codes are typically decoded usingsoft-decisionalgorithms like the Viterbi, MAP orBCJRalgorithms, which process (discretized) analog signals, and which allow for much higher error-correction performance than hard-decision decoding. Nearly all classical block codes apply the algebraic properties offinite fields. Hence classical block codes are often referred to as algebraic codes. In contrast to classical block codes that often specify an error-detecting or error-correcting ability, many modern block codes such asLDPC codeslack such guarantees. Instead, modern codes are evaluated in terms of their bit error rates. Mostforward error correctioncodes correct only bit-flips, but not bit-insertions or bit-deletions. In this setting, theHamming distanceis the appropriate way to measure thebit error rate. A few forward error correction codes are designed to correct bit-insertions and bit-deletions, such as Marker Codes and Watermark Codes. TheLevenshtein distanceis a more appropriate way to measure the bit error rate when using such codes.[10] The fundamental principle of ECC is to add redundant bits in order to help the decoder to find out the true message that was encoded by the transmitter. 
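Returning to the (3,1) repetition code discussed above, the sketch below shows the encoder and its majority-vote decoder; the example bits and the injected error position are arbitrary.

```python
from collections import Counter

# (3,1) repetition code: each data bit is sent three times and decoded by
# majority vote, so any single flipped copy per triple is corrected.
def encode_rep3(bits):
    return [b for b in bits for _ in range(3)]

def decode_rep3(received):
    out = []
    for i in range(0, len(received), 3):
        triple = received[i:i + 3]
        out.append(Counter(triple).most_common(1)[0][0])   # majority vote
    return out

codeword = encode_rep3([1, 0, 1])
codeword[1] ^= 1                     # one corrupted copy in the first triple
print(decode_rep3(codeword))         # [1, 0, 1]: the error is voted out
```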
The code-rate of a given ECC system is defined as the ratio between the number of information bits and the total number of bits (i.e., information plus redundancy bits) in a given communication package. The code-rate is hence a real number. A low code-rate close to zero implies a strong code that uses many redundant bits to achieve a good performance, while a large code-rate close to 1 implies a weak code. The redundant bits that protect the information have to be transferred using the same communication resources that they are trying to protect. This causes a fundamental tradeoff between reliability and data rate.[11]In one extreme, a strong code (with low code-rate) can induce an important increase in the receiver SNR (signal-to-noise-ratio) decreasing the bit error rate, at the cost of reducing the effective data rate. On the other extreme, not using any ECC (i.e., a code-rate equal to 1) uses the full channel for information transfer purposes, at the cost of leaving the bits without any additional protection. One interesting question is the following: how efficient in terms of information transfer can an ECC be that has a negligible decoding error rate? This question was answered by Claude Shannon with his second theorem, which says that the channel capacity is the maximum bit rate achievable by any ECC whose error rate tends to zero:[12]His proof relies on Gaussian random coding, which is not suitable to real-world applications. The upper bound given by Shannon's work inspired a long journey in designing ECCs that can come close to the ultimate performance boundary. Various codes today can attain almost the Shannon limit. However, capacity achieving ECCs are usually extremely complex to implement. The most popular ECCs have a trade-off between performance and computational complexity. Usually, their parameters give a range of possible code rates, which can be optimized depending on the scenario. Usually, this optimization is done in order to achieve a low decoding error probability while minimizing the impact to the data rate. Another criterion for optimizing the code rate is to balance low error rate and retransmissions number in order to the energy cost of the communication.[13] Classical (algebraic) block codes and convolutional codes are frequently combined inconcatenatedcoding schemes in which a short constraint-length Viterbi-decoded convolutional code does most of the work and a block code (usually Reed–Solomon) with larger symbol size and block length "mops up" any errors made by the convolutional decoder. Single pass decoding with this family of error correction codes can yield very low error rates, but for long range transmission conditions (like deep space) iterative decoding is recommended. Concatenated codes have been standard practice in satellite and deep space communications sinceVoyager 2first used the technique in its 1986 encounter withUranus. TheGalileocraft used iterative concatenated codes to compensate for the very high error rate conditions caused by having a failed antenna. Low-density parity-check(LDPC) codes are a class of highly efficient linear block codes made from many single parity check (SPC) codes. They can provide performance very close to thechannel capacity(the theoretical maximum) using an iterated soft-decision decoding approach, at linear time complexity in terms of their block length. Practical implementations rely heavily on decoding the constituent SPC codes in parallel. LDPC codes were first introduced byRobert G. 
Gallager in his PhD thesis in 1960, but due to the computational effort in implementing encoder and decoder and the introduction of Reed–Solomon codes, they were mostly ignored until the 1990s. LDPC codes are now used in many recent high-speed communication standards, such as DVB-S2 (Digital Video Broadcasting – Satellite – Second Generation), WiMAX (IEEE 802.16e standard for microwave communications), High-Speed Wireless LAN (IEEE 802.11n),[14] 10GBase-T Ethernet (802.3an) and G.hn/G.9960 (ITU-T standard for networking over power lines, phone lines and coaxial cable). Other LDPC codes are standardized for wireless communication standards within 3GPP MBMS (see fountain codes).

Turbo coding is an iterated soft-decoding scheme that combines two or more relatively simple convolutional codes and an interleaver to produce a block code that can perform to within a fraction of a decibel of the Shannon limit. Predating LDPC codes in terms of practical application, turbo codes now provide similar performance. One of the earliest commercial applications of turbo coding was the CDMA2000 1x (TIA IS-2000) digital cellular technology developed by Qualcomm and sold by Verizon Wireless, Sprint, and other carriers. It is also used for the evolution of CDMA2000 1x specifically for Internet access, 1xEV-DO (TIA IS-856). Like 1x, EV-DO was developed by Qualcomm, and is sold by Verizon Wireless, Sprint, and other carriers (Verizon's marketing name for 1xEV-DO is Broadband Access; Sprint's consumer and business marketing names for 1xEV-DO are Power Vision and Mobile Broadband, respectively).

Sometimes it is only necessary to decode single bits of the message, or to check whether a given signal is a codeword, and to do so without looking at the entire signal. This can make sense in a streaming setting, where codewords are too large to be classically decoded fast enough and where only a few bits of the message are of interest for now. Such codes have also become an important tool in computational complexity theory, e.g., for the design of probabilistically checkable proofs. Locally decodable codes are error-correcting codes for which single bits of the message can be probabilistically recovered by only looking at a small (say constant) number of positions of a codeword, even after the codeword has been corrupted at some constant fraction of positions. Locally testable codes are error-correcting codes for which it can be checked probabilistically whether a signal is close to a codeword by only looking at a small number of positions of the signal. Not all locally decodable codes (LDCs) are locally testable codes (LTCs),[15] nor are all LDCs locally correctable codes (LCCs);[16] the length of q-query LCCs is bounded exponentially,[17][18] while LDCs can have subexponential lengths.[19][20]

Interleaving is frequently used in digital communication and storage systems to improve the performance of forward error correcting codes. Many communication channels are not memoryless: errors typically occur in bursts rather than independently. If the number of errors within a code word exceeds the error-correcting code's capability, it fails to recover the original code word. Interleaving alleviates this problem by shuffling source symbols across several code words, thereby creating a more uniform distribution of errors.[21] Therefore, interleaving is widely used for burst error-correction.
The analysis of modern iterated codes, like turbo codes and LDPC codes, typically assumes an independent distribution of errors.[22] Systems using LDPC codes therefore typically employ additional interleaving across the symbols within a code word.[23]

For turbo codes, an interleaver is an integral component and its proper design is crucial for good performance.[21][24] The iterative decoding algorithm works best when there are no short cycles in the factor graph that represents the decoder; the interleaver is chosen to avoid short cycles. Several interleaver designs are in common use. In multi-carrier communication systems, interleaving across carriers may also be employed to provide frequency diversity, e.g., to mitigate frequency-selective fading or narrowband interference.[28]

In one illustrative example, each group of the same letter represents a 4-bit one-bit error-correcting codeword. Without interleaving, the codeword cccc is altered in one bit and can be corrected, but the codeword dddd is altered in three bits, so either it cannot be decoded at all or it might be decoded incorrectly. With interleaving, each of the codewords "aaaa", "eeee", "ffff", and "gggg" is altered in only one bit, so a one-bit error-correcting code decodes everything correctly. In a second example, without interleaving the term "AnExample" ends up mostly unintelligible and difficult to correct, whereas with interleaving no word is completely lost and the missing letters can be recovered with minimal guesswork.

Use of interleaving techniques increases total delay, because the entire interleaved block must be received before the packets can be decoded.[29] Interleavers also hide the structure of errors; without an interleaver, more advanced decoding algorithms can take advantage of the error structure and achieve more reliable communication than a simpler decoder combined with an interleaver[citation needed]. An example of such an algorithm is based on neural network[30] structures.

Simulating the behaviour of error-correcting codes (ECCs) in software is a common practice to design, validate and improve ECCs. The upcoming wireless 5G standard raises a new range of applications for software ECCs: Cloud Radio Access Networks (C-RAN) in a software-defined radio (SDR) context. The idea is to use software ECCs directly in the communications chain. For instance, in 5G the software ECCs could be located in the cloud, with the antennas connected to these computing resources, improving the flexibility of the communication network and eventually increasing the energy efficiency of the system. In this context, various open-source software packages are available (the list is not exhaustive).
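Separately from the (unreproduced) software list just mentioned, the burst-spreading effect of interleaving described earlier can be seen with a toy rectangular (block) interleaver. The 4×4 geometry and the "aaaabbbb…" message below are illustrative stand-ins for the article's example, which is not reproduced in this extract.

```python
# Rectangular (block) interleaver: symbols are written into a rows x cols array
# by rows and read out by columns, so a burst of channel errors is spread across
# several codewords after deinterleaving.
def interleave(symbols, rows, cols):
    assert len(symbols) == rows * cols
    return [symbols[r * cols + c] for c in range(cols) for r in range(rows)]

def deinterleave(symbols, rows, cols):
    return [symbols[c * rows + r] for r in range(rows) for c in range(cols)]

message = list("aaaabbbbccccdddd")          # four 4-symbol codewords
sent = interleave(message, 4, 4)
sent[4:8] = list("____")                    # a burst wiping out 4 consecutive symbols
received = deinterleave(sent, 4, 4)
print("".join(received))                    # a_aab_bbc_ccd_dd: one erasure per codeword
```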
https://en.wikipedia.org/wiki/Interleaver
Low-density parity-check (LDPC)codes are a class oferror correction codeswhich (together with the closely-relatedturbo codes) have gained prominence incoding theoryandinformation theorysince the late 1990s. The codes today are widely used in applications ranging from wireless communications to flash-memory storage. Together with turbo codes, they sparked a revolution in coding theory, achieving order-of-magnitude improvements in performance compared to traditional error correction codes.[1] Central to the performance of LDPC codes is their adaptability to the iterativebelief propagationdecoding algorithm. Under this algorithm, they can be designed to approach theoretical limits (capacities) of many channels[2]at low computation costs. Theoretically, analysis of LDPC codes focuses on sequences of codes of fixedcode rateand increasingblock length. These sequences are typically tailored to a set of channels. For appropriately designed sequences, the decoding error under belief propagation can often be proven to be vanishingly small (approaches zero with the block length) at rates that are very close to the capacities of the channels. Furthermore, this can be achieved at a complexity that is linear in the block length. This theoretical performance is made possible using a flexible design method that is based on sparseTanner graphs(specializedbipartite graphs).[3] LDPC codes were originally conceived byRobert G. Gallager(and are thus also known as Gallager codes). Gallager devised the codes in his doctoral dissertation[4]at theMassachusetts Institute of Technologyin 1960.[5][6]The codes were largely ignored at the time, as their iterative decoding algorithm (despite having linear complexity), was prohibitively computationally expensive for the hardware available. Renewed interest in the codes emerged following the invention of the closely-relatedturbo codes(1993), whose similarly iterative decoding algorithm outperformed other codes used at that time. LDPC codes were subsequently rediscovered in 1996.[7]Initial industry preference for LDPC codes over turbo codes stemmed from patent-related constraints on the latter.[8]Over the time that has elapsed since their discovery, advances in LDPC codes have seen them surpass turbo codes in terms oferror floorand performance in the highercode raterange, leaving turbo codes better suited for the lower code rates only.[9]Although the fundamental patent for turbo codes has expired (on August 29, 2013),[10][11]LDPC codes are now still being preferred for their technical merits. Theoretical interest in LDPC codes also follows from their amenability to mathematical analysis. In his dissertation, Gallager showed that LDPC codes achieve theGilbert–Varshamov bound for linear codesover binary fields with high probability. Over thebinary erasure channel, code sequences were designed at rates arbitrary close to channel capacity, with provably vanishing decoding error probability and linear decoding complexity.[12]In 2020 it was shown that Gallager's LDPC codes achievelist decodingcapacity and also achieve theGilbert–Varshamov bound for linear codesover general fields.[13] In 2003, anirregular repeat accumulate(IRA) style LDPC code beat six turbo codes to become the error-correcting code in the newDVB-S2standard fordigital television.[14]The DVB-S2 selection committee made decoder complexity estimates for the turbo code proposals using a much less efficient serial decoder architecture rather than a parallel decoder architecture. 
This forced the turbo code proposals to use frame sizes on the order of one half the frame size of the LDPC proposals.[citation needed] In 2008, LDPC beat convolutional turbo codes as theforward error correction(FEC) system for theITU-TG.hnstandard.[15]G.hn chose LDPC codes over turbo codes because of their lower decoding complexity (especially when operating at data rates close to 1.0 Gbit/s) and because the proposed turbo codes exhibited a significanterror floorat the desired range of operation.[16] LDPC codes are also used for10GBASE-TEthernet, which sends data at 10 gigabits per second over twisted-pair cables. As of 2009, LDPC codes are also part of theWi-Fi802.11 standard as an optional part of802.11nand802.11ac, in the High Throughput (HT) PHY specification.[17]LDPC is a mandatory part of802.11ax(Wi-Fi 6).[18] SomeOFDMsystems add an additional outer error correction that fixes the occasional errors (the "error floor") that get past the LDPC correction inner code even at lowbit error rates. For example: TheReed-Solomon codewith LDPC Coded Modulation (RS-LCM) uses a Reed-Solomon outer code.[19]The DVB-S2, the DVB-T2 and the DVB-C2 standards all use aBCH codeouter code to mop up residual errors after LDPC decoding.[20] 5G NRusespolar codefor the control channels and LDPC for the data channels.[21][22] Although LDPC code has had its success in commercial hard disk drives, to fully exploit its error correction capability inSSDsdemands unconventional fine-grained flash memory sensing, leading to an increased memory read latency. LDPC-in-SSD[23]is an effective approach to deploy LDPC in SSD with a very small latency increase, which turns LDPC in SSD into a reality. Since then, LDPC has been widely adopted in commercial SSDs in both customer-grades and enterprise-grades by major storage venders. Many TLC (and later) SSDs are using LDPC codes. A fast hard-decode (binary erasure) is first attempted, which can fall back into the slower but more powerful soft decoding.[24] LDPC codes functionally are defined by a sparseparity-check matrix. Thissparse matrixis often randomly generated, subject to thesparsityconstraints—LDPC code constructionis discussedlater. These codes were first designed by Robert Gallager in 1960.[6] Below is a graph fragment of an example LDPC code usingForney's factor graph notation. In this graph,nvariable nodes in the top of the graph are connected to (n−k) constraint nodes in the bottom of the graph. This is a popular way of graphically representing an (n,k) LDPC code. The bits of a valid message, when placed on theT'sat the top of the graph, satisfy the graphical constraints. Specifically, all lines connecting to a variable node (box with an '=' sign) have the same value, and all values connecting to a factor node (box with a '+' sign) must sum,modulotwo, to zero (in other words, they must sum to an even number; or there must be an even number of odd values). Ignoring any lines going out of the picture, there are eight possible six-bit strings corresponding to valid codewords: (i.e., 000000, 011001, 110010, 101011, 111100, 100101, 001110, 010111). This LDPC code fragment represents a three-bit message encoded as six bits. Redundancy is used, here, to increase the chance of recovering from channel errors. This is a (6, 3)linear code, withn= 6 andk= 3. 
Again ignoring lines going out of the picture, the parity-check matrix representing this graph fragment is In this matrix, each row represents one of the three parity-check constraints, while each column represents one of the six bits in the received codeword. In this example, the eight codewords can be obtained by putting theparity-check matrixHinto this form[−PT|In−k]{\displaystyle {\begin{bmatrix}-P^{T}|I_{n-k}\end{bmatrix}}}through basicrow operationsinGF(2): Step 1: H. Step 2: Row 1 is added to row 3. Step 3: Row 2 and 3 are swapped. Step 4: Row 1 is added to row 3. From this, thegenerator matrixGcan be obtained as[Ik|P]{\displaystyle {\begin{bmatrix}I_{k}|P\end{bmatrix}}}(noting that in the special case of this being a binary codeP=−P{\displaystyle P=-P}), or specifically: Finally, by multiplying all eight possible 3-bit strings byG, all eight valid codewords are obtained. For example, the codeword for the bit-string '101' is obtained by: where⊙{\displaystyle \odot }is symbol of mod 2 multiplication. As a check, the row space ofGis orthogonal toHsuch thatG⊙HT=0{\displaystyle G\odot H^{T}=0} The bit-string '101' is found in as the first 3 bits of the codeword '101011'. During the encoding of a frame, the input data bits (D) are repeated and distributed to a set of constituent encoders. The constituent encoders are typically accumulators and each accumulator is used to generate a parity symbol. A single copy of the original data (S0,K-1) is transmitted with the parity bits (P) to make up the code symbols. The S bits from each constituent encoder are discarded. The parity bit may be used within another constituent code. In an example using the DVB-S2 rate 2/3 code the encoded block size is 64800 symbols (N=64800) with 43200 data bits (K=43200) and 21600 parity bits (M=21600). Each constituent code (check node) encodes 16 data bits except for the first parity bit which encodes 8 data bits. The first 4680 data bits are repeated 13 times (used in 13 parity codes), while the remaining data bits are used in 3 parity codes (irregular LDPC code). For comparison, classic turbo codes typically use two constituent codes configured in parallel, each of which encodes the entire input block (K) of data bits. These constituent encoders are recursive convolutional codes (RSC) of moderate depth (8 or 16 states) that are separated by a code interleaver which interleaves one copy of the frame. The LDPC code, in contrast, uses many low depth constituent codes (accumulators) in parallel, each of which encode only a small portion of the input frame. The many constituent codes can be viewed as many low depth (2 state) "convolutional codes" that are connected via the repeat and distribute operations. The repeat and distribute operations perform the function of the interleaver in the turbo code. The ability to more precisely manage the connections of the various constituent codes and the level of redundancy for each input bit give more flexibility in the design of LDPC codes, which can lead to better performance than turbo codes in some instances. Turbo codes still seem to perform better than LDPCs at low code rates, or at least the design of well performing low rate codes is easier for turbo codes. As a practical matter, the hardware that forms the accumulators is reused during the encoding process. That is, once a first set of parity bits are generated and the parity bits stored, the same accumulator hardware is used to generate a next set of parity bits. 
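Returning to the (6, 3) example above: because the example's parity-check matrix is not reproduced in this extract, the sketch below uses an assumed 3×6 low-density matrix that is consistent with the eight codewords listed earlier, and verifies that consistency by brute force.

```python
import itertools
import numpy as np

# Assumed parity-check matrix (not copied from the article): low density and
# consistent with the eight listed codewords of the (6, 3) example.
H = np.array([[1, 1, 1, 1, 0, 0],
              [0, 0, 1, 1, 0, 1],
              [1, 0, 0, 1, 1, 0]], dtype=int)

listed = ["000000", "011001", "110010", "101011",
          "111100", "100101", "001110", "010111"]
for w in listed:
    c = np.array(list(map(int, w)))
    assert not (H @ c % 2).any(), w        # every listed word satisfies H c^T = 0 (mod 2)

# Brute-force the full codebook: with n = 6 and rank(H) = 3 there are 2**3 = 8 codewords.
codebook = [c for c in itertools.product((0, 1), repeat=6)
            if not (H @ np.array(c) % 2).any()]
print(len(codebook))                        # 8
```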
As with other codes, themaximum likelihood decodingof an LDPC code on thebinary symmetric channelis anNP-completeproblem,[25]shown by reduction from3-dimensional matching. So assumingP != NP, which is widely believed, then performing optimal decoding for an arbitrary code of any useful size is not practical. However, sub-optimal techniques based on iterativebelief propagationdecoding give excellent results and can be practically implemented. The sub-optimal decoding techniques view each parity check that makes up the LDPC as an independent single parity check (SPC) code. Each SPC code is decoded separately usingsoft-in-soft-out(SISO) techniques such asSOVA,BCJR,MAP, and other derivates thereof. The soft decision information from each SISO decoding is cross-checked and updated with other redundant SPC decodings of the same information bit. Each SPC code is then decoded again using the updated soft decision information. This process is iterated until a valid codeword is achieved or decoding is exhausted. This type of decoding is often referred to as sum-product decoding. The decoding of the SPC codes is often referred to as the "check node" processing, and the cross-checking of the variables is often referred to as the "variable-node" processing. In a practical LDPC decoder implementation, sets of SPC codes are decoded in parallel to increase throughput. In contrast, belief propagation on thebinary erasure channelis particularly simple where it consists of iterative constraint satisfaction. For example, consider that the valid codeword, 101011, from the example above, is transmitted across a binary erasure channel and received with the first and fourth bit erased to yield ?01?11. Since the transmitted message must have satisfied the code constraints, the message can be represented by writing the received message on the top of the factor graph. In this example, the first bit cannot yet be recovered, because all of the constraints connected to it have more than one unknown bit. In order to proceed with decoding the message, constraints connecting to only one of the erased bits must be identified. In this example, only the second constraint suffices. Examining the second constraint, the fourth bit must have been zero, since only a zero in that position would satisfy the constraint. This procedure is then iterated. The new value for the fourth bit can now be used in conjunction with the first constraint to recover the first bit as seen below. This means that the first bit must be a one to satisfy the leftmost constraint. Thus, the message can be decoded iteratively. For other channel models, the messages passed between the variable nodes and check nodes arereal numbers, which express probabilities and likelihoods of belief. This result can be validated by multiplying the corrected codewordrby the parity-check matrixH: Because the outcomez(thesyndrome) of this operation is the three × one zero vector, the resulting codewordris successfully validated. After the decoding is completed, the original message bits '101' can be extracted by looking at the first 3 bits of the codeword. While illustrative, this erasure example does not show the use of soft-decision decoding or soft-decision message passing, which is used in virtually all commercial LDPC decoders. In recent years[when?], there has also been a great deal of work spent studying the effects of alternative schedules for variable-node and constraint-node update. The original technique that was used for decoding LDPC codes was known asflooding. 
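Before turning to decoder scheduling, the erasure-filling procedure described above can be sketched as a simple peeling loop. It uses the same assumed parity-check matrix as the previous sketch and the received word ?01?11 from the example; None marks an erased bit.

```python
import numpy as np

# Iterative erasure filling ("peeling") on the binary erasure channel: any check
# with exactly one erased bit determines that bit, because each check's parity
# must be zero.
H = np.array([[1, 1, 1, 1, 0, 0],
              [0, 0, 1, 1, 0, 1],
              [1, 0, 0, 1, 1, 0]], dtype=int)

received = [None, 0, 1, None, 1, 1]        # ?01?11 from the example

progress = True
while progress:
    progress = False
    for row in H:
        involved = [i for i in range(len(received)) if row[i]]
        erased = [i for i in involved if received[i] is None]
        if len(erased) == 1:               # a check with a single unknown bit fixes it
            known_sum = sum(received[i] for i in involved if received[i] is not None) % 2
            received[erased[0]] = known_sum
            progress = True

print(received)                             # [1, 0, 1, 0, 1, 1], i.e. codeword 101011
```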
The flooding schedule required that, before updating a variable node, all constraint nodes be updated, and vice versa. In later work by Vila Casado et al.,[26] alternative update techniques were studied, in which variable nodes are updated with the newest available check-node information.[citation needed]

The intuition behind these algorithms is that variable nodes whose values vary the most are the ones that need to be updated first. Highly reliable nodes, whose log-likelihood ratio (LLR) magnitude is large and does not change significantly from one update to the next, do not require updates with the same frequency as other nodes, whose sign and magnitude fluctuate more widely.[citation needed] These scheduling algorithms show greater speed of convergence and lower error floors than those that use flooding. The lower error floors are achieved by the ability of the Informed Dynamic Scheduling (IDS)[26] algorithm to overcome trapping sets of near codewords.[27]

When non-flooding scheduling algorithms are used, an alternative definition of iteration is used. For an (n, k) LDPC code of rate k/n, a full iteration occurs when n variable nodes and n − k constraint nodes have been updated, no matter the order in which they were updated.[citation needed]

For large block sizes, LDPC codes are commonly constructed by first studying the behaviour of decoders. As the block size tends to infinity, LDPC decoders can be shown to have a noise threshold below which decoding is reliably achieved, and above which decoding is not achieved,[28] colloquially referred to as the cliff effect. This threshold can be optimised by finding the best proportion of arcs from check nodes and arcs from variable nodes. An approximate graphical approach to visualising this threshold is an EXIT chart.[citation needed]

The construction of a specific LDPC code after this optimization falls into two main types of techniques:[citation needed] pseudo-random approaches and combinatorial approaches. Construction by a pseudo-random approach builds on theoretical results that, for large block size, a random construction gives good decoding performance.[7] In general, pseudorandom codes have complex encoders, but pseudorandom codes with the best decoders can have simple encoders.[29] Various constraints are often applied to help ensure that the desired properties expected at the theoretical limit of infinite block size occur at a finite block size.[citation needed] Combinatorial approaches can be used to optimize the properties of small block-size LDPC codes or to create codes with simple encoders.[citation needed]

Some LDPC codes are based on Reed–Solomon codes, such as the RS-LDPC code used in the 10 Gigabit Ethernet standard.[30] Compared to randomly generated LDPC codes, structured LDPC codes, such as the LDPC code used in the DVB-S2 standard, can have simpler and therefore lower-cost hardware; in particular, codes constructed such that the H matrix is a circulant matrix.[31] Yet another way of constructing LDPC codes is to use finite geometries. This method was proposed by Y. Kou et al. in 2001.[32]

LDPC codes can be compared with other powerful coding schemes, e.g. turbo codes.[33] On the one hand, the BER performance of turbo codes is influenced by the minimum-distance limitations of their constituent codes.[34] LDPC codes have no such minimum-distance limitation,[35] which indirectly means that LDPC codes may be more efficient at relatively high code rates (e.g. 3/4, 5/6, 7/8) than turbo codes. However, LDPC codes are not a complete replacement: turbo codes remain the best solution at lower code rates (e.g.
1/6, 1/3, 1/2).[36][37] So far there is only one family of codes that is capacity-achieving by explicit design and proof.
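For concreteness, the flooding schedule discussed above can be sketched with a minimal min-sum variant of the sum-product algorithm. The sketch only illustrates the flooding update order (all check nodes are updated from the previous iteration's variable totals, then all variable totals are recomputed); the small parity-check matrix and the channel LLRs below are illustrative assumptions, and a production decoder would use optimized scheduling, normalization and sparse data structures.

import numpy as np

H = np.array([[1, 1, 1, 1, 0, 0],
              [0, 0, 1, 1, 0, 1],
              [1, 0, 0, 1, 1, 0]])

def minsum_decode(llr_ch, max_iter=20):
    m, n = H.shape
    msg_cv = np.zeros((m, n))                      # check-to-variable messages
    hard = (llr_ch < 0).astype(int)
    for _ in range(max_iter):
        # Flooding schedule: every check node is updated using the
        # variable-to-check messages from the previous iteration, then every
        # variable node (posterior total) is updated.
        total = llr_ch + msg_cv.sum(axis=0)
        msg_vc = (total - msg_cv) * H              # variable-to-check messages
        for i in range(m):
            idx = np.flatnonzero(H[i])
            for j in idx:
                others = [msg_vc[i, k] for k in idx if k != j]
                sign = np.prod(np.sign(others))
                msg_cv[i, j] = sign * min(abs(x) for x in others)
        hard = ((llr_ch + msg_cv.sum(axis=0)) < 0).astype(int)
        if not np.any(H.dot(hard) % 2):            # zero syndrome: valid codeword
            break
    return hard

# Channel LLRs for codeword 101011 with the fourth bit received unreliably
# (its sign is flipped): positive LLR favours bit 0, negative favours bit 1.
llr = np.array([-1.2, +0.8, -2.0, -0.2, -1.5, -0.9])
print(minsum_decode(llr))                          # -> [1 0 1 0 1 1]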
https://en.wikipedia.org/wiki/Low-density_parity-check_code
Serial concatenated convolutional codes(SCCC) are a class offorward error correction(FEC) codes highly suitable forturbo(iterative) decoding.[1][2]Data to be transmitted over a noisy channel may first be encoded using an SCCC. Upon reception, the coding may be used to remove any errors introduced during transmission. The decoding is performed by repeated decoding and [de]interleaving of the received symbols. SCCCs typically include aninner code, anouter code, and a linking interleaver. A distinguishing feature of SCCCs is the use of a recursiveconvolutional codeas the inner code. The recursive inner code provides the 'interleaver gain' for the SCCC, which is the source of the excellent performance of these codes. The analysis of SCCCs was spawned in part by the earlier discovery ofturbo codesin 1993. This analysis of SCCC's took place in the 1990s in a series of publications from NASA'sJet Propulsion Laboratory(JPL). The research offered SCCC's as a form of turbo-like serial concatenated codes that 1) were iteratively ('turbo') decodable with reasonablecomplexity, and 2) gave error correction performance comparable with the turbo codes. Prior forms ofserial concatenated codestypically did not use recursive inner codes. Additionally, the constituent codes used in prior forms of serial concatenated codes were generally too complex for reasonable soft-in-soft-out (SISO) decoding. SISO decoding is considered essential for turbo decoding. Serial concatenated convolutional codes have not found widespread commercial use, although they were proposed for communications standards such asDVB-S2. Nonetheless, the analysis of SCCCs has provided insight into the performance and bounds of all types of iterative decodable codes includingturbo codesandLDPCcodes.[citation needed] US patent 6,023,783 covers some forms of SCCCs. The patent expired on May 15, 2016.[3] Serial concatenated convolutional codes were first analyzed with a view toward turbo decoding in "Serial Concatenation of Interleaved Codes: Performance Analysis, Design, and Iterative Decoding" by S. Benedetto, D. Divsalar, G. Montorsi and F. Pollara.[4]This analysis yielded a set of observations for designing high performance, turbo decodable serial concatenated codes that resembledturbo codes. One of these observations was that "the use of a recursive convolutional inner encoder always yields an interleaver gain."[clarification needed]This is in contrast to the use of block codes or non-recursive convolutional codes, which do not provide comparable interleaver gain. Additional analysis of SCCCs was done in "Coding Theorems for 'Turbo-Like' Codes" by D. Divsalar, Hui Jin, and Robert J. McEliece.[5]This paper analyzed repeat-accumulate (RA) codes which are the serial concatenation of an inner two-state recursive convolutional code (also called an 'accumulator' or parity-check code) with a simple repeat code as the outer code, with both codes linked by an interleaver. The performance of the RA codes is quite good considering the simplicity of the constituent codes themselves. SCCC codes were further analyzed in "Serial Turbo Trellis Coded Modulation with Rate-1 Inner Code".[6]In this paper SCCCs were designed for use with higher order modulation schemes. Excellent performing codes with inner and outer constituent convolutional codes of only two or four states were presented. Fig 1 is an example of a SCCC. The example encoder is composed of a 16-state outer convolutional code and a 2-state inner convolutional code linked by an interleaver. 
The natural code rate of the configuration shown is 1/4; however, the inner and/or outer codes may be punctured to achieve higher code rates as needed. For example, an overall code rate of 1/2 may be achieved by puncturing the outer convolutional code to rate 3/4 and the inner convolutional code to rate 2/3. A recursive inner convolutional code is preferable for turbo decoding of the SCCC. The inner code may be punctured to a rate as high as 1/1 with reasonable performance. An example of an iterative SCCC decoder is shown in the figure. The SCCC decoder includes two soft-in-soft-out (SISO) decoders and an interleaver. While shown as separate units, the two SISO decoders may share all or part of their circuitry. The SISO decoding may be done in serial or parallel fashion, or some combination thereof. The SISO decoding is typically done using maximum a posteriori (MAP) decoders using the BCJR algorithm. SCCCs provide performance comparable to other iteratively decodable codes including turbo codes and LDPC codes. They are noted for having slightly worse performance at lower SNR environments (i.e. a worse waterfall region), but slightly better performance at higher SNR environments (i.e. a lower error floor).
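The rate arithmetic quoted above is easy to check: in a serial concatenation, the overall code rate is the product of the outer and inner rates. The rate-1/2 constituent codes in the first line are an assumption, chosen as one way of obtaining the stated natural rate of 1/4.

from fractions import Fraction

def overall_rate(outer_rate, inner_rate):
    # Overall rate of a serial concatenation is the product of the two rates.
    return outer_rate * inner_rate

print(overall_rate(Fraction(1, 2), Fraction(1, 2)))  # assumed natural configuration -> 1/4
print(overall_rate(Fraction(3, 4), Fraction(2, 3)))  # punctured outer and inner     -> 1/2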
https://en.wikipedia.org/wiki/Serial_concatenated_convolutional_codes
In information theory, a soft-decision decoder is a kind of decoding method – a class of algorithm used to decode data that has been encoded with an error correcting code. Whereas a hard-decision decoder operates on data that take on a fixed set of possible values (typically 0 or 1 in a binary code), the inputs to a soft-decision decoder may take on a whole range of values in between. This extra information indicates the reliability of each input data point, and is used to form better estimates of the original data. Therefore, a soft-decision decoder will typically perform better in the presence of corrupted data than its hard-decision counterpart.[1] Soft-decision decoders are often used in Viterbi decoders and turbo code decoders.
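A minimal sketch of the difference for BPSK over an additive-noise channel: a hard-decision front end keeps only the sign of each received sample, while a soft-decision front end passes a log-likelihood ratio (LLR) whose magnitude conveys reliability. The bit-to-symbol mapping, noise variance, and received samples below are illustrative assumptions.

import numpy as np

def hard_decisions(y):
    # Only the sign of each sample survives (bit 0 -> +1, bit 1 -> -1).
    return (y < 0).astype(int)

def soft_decisions(y, noise_var):
    # LLR for BPSK over AWGN: sign gives the likely bit, magnitude its reliability.
    return 2.0 * y / noise_var

y = np.array([+0.9, -0.1, -1.3, +0.05])    # received samples
print(hard_decisions(y))                   # [0 1 1 0]
print(soft_decisions(y, noise_var=0.5))    # small |LLR| marks the unreliable samples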
https://en.wikipedia.org/wiki/Soft-decision_decoding
In digital communications, a turbo equalizer is a type of receiver used to receive a message corrupted by a communication channel with intersymbol interference (ISI). It approaches the performance of a maximum a posteriori (MAP) receiver via iterative message passing between a soft-in soft-out (SISO) equalizer and a SISO decoder.[1] It is related to turbo codes in that a turbo equalizer may be considered a type of iterative decoder if the channel is viewed as a non-redundant convolutional code. The turbo equalizer is different from a classic turbo-like code, however, in that the 'channel code' adds no redundancy and therefore can only be used to remove non-Gaussian noise. Turbo codes were invented by Claude Berrou in 1990–1991. In 1993, turbo codes were introduced publicly via a paper listing authors Berrou, Glavieux, and Thitimajshima.[2] In 1995 a novel extension of the turbo principle was applied to an equalizer by Douillard, Jézéquel, and Berrou.[3] In particular, they formulated the ISI receiver problem as a turbo code decoding problem, where the channel is thought of as a rate-1 convolutional code and the error correction coding is the second code. In 1997, Glavieux, Laot, and Labat demonstrated that a linear equalizer could be used in a turbo equalizer framework.[4] This discovery made turbo equalization computationally efficient enough to be applied to a wide range of applications.[5] Before discussing turbo equalizers, it is necessary to understand the basic receiver in the context of a communication system. This is the topic of this section. At the transmitter, information bits are encoded. Encoding adds redundancy by mapping the information bits a to a longer bit vector – the code bit vector b. The encoded bits b are then interleaved. Interleaving permutes the order of the code bits b, resulting in bits c. The main reason for doing this is to insulate the information bits from bursty noise. Next, the symbol mapper maps the bits c into complex symbols x. These digital symbols are then converted into analog symbols with a D/A converter. Typically the signal is then up-converted to passband frequencies by mixing it with a carrier signal. This is a necessary step for complex symbols. The signal is then ready to be transmitted through the channel. At the receiver, the operations performed by the transmitter are reversed to recover â, an estimate of the information bits. The down-converter mixes the signal back down to baseband. The A/D converter then samples the analog signal, making it digital. At this point, y is recovered. The signal y is what would be received if x were transmitted through the digital baseband equivalent of the channel plus noise. The signal is then equalized. The equalizer attempts to unravel the ISI in the received signal to recover the transmitted symbols. It then outputs the bits ĉ associated with those symbols. The vector ĉ may represent hard decisions on the bits or soft decisions. If the equalizer makes soft decisions, it outputs information relating to the probability of the bit being a 0 or a 1. If the equalizer makes hard decisions on the bits, it quantizes the soft bit decisions and outputs either a 0 or a 1. Next, the signal is deinterleaved, which is a simple permutation transformation that undoes the transformation the interleaver executed. Finally, the bits are decoded by the decoder.
The decoder estimates â from b̂. A diagram of the communication system is shown below. In this diagram, the channel is the equivalent baseband channel, meaning that it encompasses the D/A, the up-converter, the channel, the down-converter, and the A/D. The block diagram of a communication system employing a turbo equalizer is shown below. The turbo equalizer encompasses the equalizer, the decoder, and the blocks in between. The difference between a turbo equalizer and a standard equalizer is the feedback loop from the decoder to the equalizer. Due to the structure of the code, the decoder not only estimates the information bits a, but it also discovers new information about the coded bits b. The decoder is therefore able to output extrinsic information b̃ about the likelihood that a certain code bit stream was transmitted. Extrinsic information is new information that is not derived from information input to the block. This extrinsic information is then mapped back into information about the transmitted symbols x for use in the equalizer. These extrinsic symbol likelihoods, x̃, are fed into the equalizer as a priori symbol probabilities. The equalizer uses this a priori information as well as the input signal y to estimate extrinsic probability information about the transmitted symbols. The a priori information fed to the equalizer is initialized to 0, meaning that the initial estimate â made by the turbo equalizer is identical to the estimate made by the standard receiver. The information x̂ is then mapped back into information about b for use by the decoder. The turbo equalizer repeats this iterative process until a stopping criterion is reached. In practical turbo equalization implementations, an additional issue needs to be considered. The channel state information (CSI) that the equalizer operates on comes from some channel estimation technique and is hence unreliable. Firstly, in order to improve the reliability of the CSI, it is desirable to also include the channel estimation block in the turbo equalization loop, and to perform soft or hard decision-directed channel estimation within each turbo equalization iteration.[6][7] Secondly, incorporating the presence of CSI uncertainty into the turbo equalizer design leads to a more robust approach with significant performance gains in practical scenarios.[8][9]
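The iterative exchange described above can be sketched structurally as follows. The SISO equalizer and SISO decoder here are trivial stand-ins (assumptions) rather than real MAP/BCJR or MMSE components, and the channel model has no real ISI; the sketch only shows how extrinsic information circulates through the interleaver and deinterleaver between the two blocks.

import numpy as np

rng = np.random.default_rng(0)
n = 8
perm = rng.permutation(n)                      # interleaver pattern
inv_perm = np.argsort(perm)

def interleave(llr):
    return llr[perm]

def deinterleave(llr):
    return llr[inv_perm]

def siso_equalizer(y, apriori):
    # Stand-in: combine observations with a-priori info, then return only the
    # extrinsic part (total minus the a-priori information that was fed in).
    total = 2.0 * y + apriori
    return total - apriori

def siso_decoder(llr):
    # Stand-in for a real SISO decoder (e.g. BCJR on the code trellis).
    return 0.5 * llr

symbols = np.sign(rng.normal(size=n))          # random +/-1 transmitted symbols
y = symbols + rng.normal(scale=0.8, size=n)    # noisy received samples
extrinsic_dec = np.zeros(n)                    # decoder extrinsic info starts at zero
for _ in range(4):
    apriori_eq = interleave(extrinsic_dec)     # decoder output becomes equalizer prior
    extrinsic_eq = siso_equalizer(y, apriori_eq)
    extrinsic_dec = siso_decoder(deinterleave(extrinsic_eq))
print(np.sign(extrinsic_dec))                  # hard decisions after the final pass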
https://en.wikipedia.org/wiki/Turbo_equalizer
TheUniversal Mobile Telecommunications System(UMTS) is a3Gmobile cellular system for networks based on theGSMstandard.[1]UMTS useswideband code-division multiple access(W-CDMA) radio access technology to offer greater spectral efficiency and bandwidth tomobile network operatorscompared to previous2Gsystems likeGPRSandCSD.[2]UMTS on its provides a peak theoretical data rate of 2Mbit/s.[3] Developed and maintained by the3GPP(3rd Generation Partnership Project), UMTS is a component of theInternational Telecommunication UnionIMT-2000standard set and compares with theCDMA2000standard set for networks based on the competingcdmaOnetechnology. The technology described in UMTS is sometimes also referred to asFreedom of Mobile Multimedia Access(FOMA)[4]or 3GSM. UMTS specifies a complete network system, which includes theradio access network(UMTS Terrestrial Radio Access Network, or UTRAN), thecore network(Mobile Application Part, or MAP) and the authentication of users via SIM (subscriber identity module) cards. UnlikeEDGE(IMT Single-Carrier, based on GSM) and CDMA2000 (IMT Multi-Carrier), UMTS requires new base stations and new frequency allocations. UMTS has since been enhanced asHigh Speed Packet Access(HSPA).[5] UMTS supports theoretical maximum datatransfer ratesof 42Mbit/swhenEvolved HSPA(HSPA+) is implemented in the network.[6]Users in deployed networks can expect a transfer rate of up to 384 kbit/s for Release '99 (R99) handsets (the original UMTS release), and 7.2 Mbit/s forHigh-Speed Downlink Packet Access(HSDPA) handsets in the downlink connection. These speeds are significantly faster than the 9.6 kbit/s of a single GSM error-corrected circuit switched data channel, multiple 9.6 kbit/s channels inHigh-Speed Circuit-Switched Data(HSCSD) and 14.4 kbit/s for CDMAOne channels. Since 2006, UMTS networks in many countries have been or are in the process of being upgraded with High-Speed Downlink Packet Access (HSDPA), sometimes known as3.5G. Currently, HSDPA enablesdownlinktransfer speeds of up to 21 Mbit/s. Work is also progressing on improving the uplink transfer speed with theHigh-Speed Uplink Packet Access(HSUPA). The 3GPPLTEstandard succeeds UMTS and initially provided 4G speeds of 100 Mbit/s down and 50 Mbit/s up, with scalability up to 3 Gbps, using a next generation air interface technology based uponorthogonal frequency-division multiplexing. The first national consumer UMTS networks launched in 2002 with a heavy emphasis on telco-provided mobile applications such as mobile TV andvideo calling. The high data speeds of UMTS are now most often utilised for Internet access: experience in Japan and elsewhere has shown that user demand for video calls is not high, and telco-provided audio/video content has declined in popularity in favour of high-speed access to the World Wide Web – either directly on a handset or connected to a computer viaWi-Fi,BluetoothorUSB.[citation needed] UMTS combines three different terrestrialair interfaces,GSM'sMobile Application Part(MAP) core, and the GSM family ofspeech codecs. The air interfaces are called UMTS Terrestrial Radio Access (UTRA).[7]All air interface options are part ofITU'sIMT-2000. In the currently most popular variant for cellular mobile telephones, W-CDMA (IMT Direct Spread) is used. It is also called "Uu interface", as it links User Equipment to the UMTS Terrestrial Radio Access Network. Please note that the termsW-CDMA,TD-CDMAandTD-SCDMAare misleading. 
While they suggest covering just achannel access method(namely a variant ofCDMA), they are actually the common names for the whole air interface standards.[8] W-CDMA (WCDMA; WidebandCode-Division Multiple Access), along with UMTS-FDD, UTRA-FDD, or IMT-2000 CDMA Direct Spread is an air interface standard found in3Gmobile telecommunicationsnetworks. It supports conventional cellular voice, text andMMSservices, but can also carry data at high speeds, allowing mobile operators to deliver higher bandwidth applications including streaming and broadband Internet access.[9] W-CDMA uses theDS-CDMAchannel access method with a pair of 5 MHz wide channels. In contrast, the competingCDMA2000system uses one or more available 1.25 MHz channels for each direction of communication. W-CDMA systems are widely criticized for their large spectrum usage, which delayed deployment in countries that acted relatively slowly in allocating new frequencies specifically for 3G services (such as the United States). The specificfrequency bandsoriginally defined by the UMTS standard are 1885–2025 MHz for the mobile-to-base (uplink) and 2110–2200 MHz for the base-to-mobile (downlink). In the US, 1710–1755 MHz and 2110–2155 MHz are used instead, as the 1900 MHz band was already used.[10]While UMTS2100 is the most widely deployed UMTS band, some countries' UMTS operators use the 850 MHz (900 MHz in Europe) and/or 1900 MHz bands (independently, meaning uplink and downlink are within the same band), notably in the US byAT&T Mobility, New Zealand byTelecom New Zealandon theXT Mobile Networkand in Australia byTelstraon theNext Gnetwork. Some carriers such asT-Mobileuse band numbers to identify the UMTS frequencies. For example, Band I (2100 MHz), Band IV (1700/2100 MHz), and Band V (850 MHz). UMTS-FDD is an acronym for Universal Mobile Telecommunications System (UMTS) –frequency-division duplexing(FDD) and a3GPPstandardizedversion of UMTS networks that makes use of frequency-division duplexing forduplexingover an UMTS Terrestrial Radio Access (UTRA) air interface.[11] W-CDMA is the basis of Japan'sNTT DoCoMo'sFOMAservice and the most-commonly used member of the Universal Mobile Telecommunications System (UMTS) family and sometimes used as a synonym for UMTS.[12]It uses the DS-CDMA channel access method and the FDD duplexing method to achieve higher speeds and support more users compared to most previously usedtime-division multiple access(TDMA) andtime-division duplex(TDD) schemes. While not an evolutionary upgrade on the airside, it uses the samecore networkas the2GGSM networks deployed worldwide, allowingdual-mode mobileoperation along with GSM/EDGE; a feature it shares with other members of the UMTS family. In the late 1990s, W-CDMA was developed by NTT DoCoMo as the air interface for their 3G networkFOMA. Later NTT DoCoMo submitted the specification to theInternational Telecommunication Union(ITU) as a candidate for the international 3G standard known as IMT-2000. The ITU eventually accepted W-CDMA as part of the IMT-2000 family of 3G standards, as an alternative to CDMA2000, EDGE, and the short rangeDECTsystem. Later, W-CDMA was selected as an air interface forUMTS. As NTT DoCoMo did not wait for the finalisation of the 3G Release 99 specification, their network was initially incompatible with UMTS.[13]However, this has been resolved by NTT DoCoMo updating their network. 
Code-Division Multiple Access communication networks have been developed by a number of companies over the years, but development of cell-phone networks based on CDMA (prior to W-CDMA) was dominated byQualcomm, the first company to succeed in developing a practical and cost-effective CDMA implementation for consumer cell phones and its earlyIS-95air interface standard has evolved into the current CDMA2000 (IS-856/IS-2000) standard. Qualcomm created an experimental wideband CDMA system called CDMA2000 3x which unified the W-CDMA (3GPP) and CDMA2000 (3GPP2) network technologies into a single design for a worldwide standard air interface. Compatibility with CDMA2000 would have beneficially enabled roaming on existing networks beyond Japan, since Qualcomm CDMA2000 networks are widely deployed, especially in the Americas, with coverage in 58 countries as of 2006[update]. However, divergent requirements resulted in the W-CDMA standard being retained and deployed globally. W-CDMA has then become the dominant technology with 457 commercial networks in 178 countries as of April 2012.[14]Several CDMA2000 operators have even converted their networks to W-CDMA for international roaming compatibility and smooth upgrade path toLTE. Despite incompatibility with existing air-interface standards, late introduction and the high upgrade cost of deploying an all-new transmitter technology, W-CDMA has become the dominant standard. W-CDMA transmits on a pair of 5 MHz-wide radio channels, while CDMA2000 transmits on one or several pairs of 1.25 MHz radio channels. Though W-CDMA does use adirect-sequenceCDMA transmission technique like CDMA2000, W-CDMA is not simply a wideband version of CDMA2000 and differs in many aspects from CDMA2000. From an engineering point of view, W-CDMA provides a different balance of trade-offs between cost, capacity, performance, and density[citation needed]; it also promises to achieve a benefit of reduced cost for video phone handsets. W-CDMA may also be better suited for deployment in the very dense cities of Europe and Asia. However, hurdles remain, andcross-licensingofpatentsbetween Qualcomm and W-CDMA vendors has not eliminated possible patent issues due to the features of W-CDMA which remain covered by Qualcomm patents.[15] W-CDMA has been developed into a complete set of specifications, a detailed protocol that defines how a mobile phone communicates with the tower, how signals are modulated, how datagrams are structured, and system interfaces are specified allowing free competition on technology elements. The world's first commercial W-CDMA service, FOMA, was launched by NTT DoCoMo inJapanin 2001. Elsewhere, W-CDMA deployments are usually marketed under the UMTS brand. W-CDMA has also been adapted for use in satellite communications on the U.S.Mobile User Objective Systemusing geosynchronous satellites in place of cell towers. J-PhoneJapan (onceVodafoneand nowSoftBank Mobile) soon followed by launching their own W-CDMA based service, originally branded "Vodafone Global Standard" and claiming UMTS compatibility. The name of the service was changed to "Vodafone 3G" (now "SoftBank 3G") in December 2004. Beginning in 2003,Hutchison Whampoagradually launched their upstart UMTS networks. Most countries have, since the ITU approved of the 3G mobile service, either "auctioned" the radio frequencies to the company willing to pay the most, or conducted a "beauty contest" – asking the various companies to present what they intend to commit to if awarded the licences. 
This strategy has been criticised for aiming to drain the cash of operators to the brink of bankruptcy in order to honour their bids or proposals. Most of them have a time constraint for the rollout of the service – where a certain "coverage" must be achieved within a given date or the licence will be revoked. Vodafone launched several UMTS networks in Europe in February 2004. MobileOne of Singapore commercially launched its 3G (W-CDMA) services in February 2005, followed by New Zealand in August 2005 and Australia in October 2005. AT&T Mobility utilized a UMTS network, with HSPA+, from 2005 until its shutdown in February 2022. Rogers in Canada launched HSDPA in the Toronto Golden Horseshoe district in March 2007 on W-CDMA at 850/1900 MHz and planned to launch the service commercially in the top 25 cities by October 2007. TeliaSonera opened W-CDMA service in Finland on October 13, 2004, with speeds up to 384 kbit/s; availability was limited to main cities, with pricing of approximately €2/MB. SK Telecom and KTF, the two largest mobile phone service providers in South Korea, each started offering W-CDMA service in December 2003. Due to poor coverage and lack of choice in handsets, the W-CDMA service barely made a dent in the Korean market, which was dominated by CDMA2000. By October 2006 both companies were covering more than 90 cities, while SK Telecom announced that it would provide nationwide coverage for its W-CDMA network in order to offer SBSM (Single Band Single Mode) handsets by the first half of 2007. KT Freetel would thus cut funding to its CDMA2000 network development to the minimum. In Norway, Telenor introduced W-CDMA in major cities by the end of 2004, while their competitor, NetCom, followed suit a few months later. Both operators have 98% national coverage on EDGE, but Telenor has parallel WLAN roaming networks on GSM, with which the UMTS service is competing. For this reason Telenor dropped support of its WLAN service in Austria (2006). Maxis Communications and Celcom, two mobile phone service providers in Malaysia, started offering W-CDMA services in 2005. In Sweden, Telia introduced W-CDMA in March 2004. UMTS-TDD, an acronym for Universal Mobile Telecommunications System (UMTS) – time-division duplexing (TDD), is a 3GPP standardized version of UMTS networks that use UTRA-TDD.[11] UTRA-TDD is a UTRA that uses time-division duplexing for duplexing.[11] While a full implementation of UMTS, it is mainly used to provide Internet access in circumstances similar to those where WiMAX might be used. UMTS-TDD is not directly compatible with UMTS-FDD: a device designed to use one standard cannot, unless specifically designed to, work on the other, because of the difference in air interface technologies and frequencies used. It is more formally known as IMT-2000 CDMA-TDD or IMT 2000 Time-Division (IMT-TD).[16][17] The two UMTS air interfaces (UTRAs) for UMTS-TDD are TD-CDMA and TD-SCDMA. Both air interfaces use a combination of two channel access methods, code-division multiple access (CDMA) and time-division multiple access (TDMA): the frequency band is divided into time slots (TDMA), which are further divided into channels using CDMA spreading codes. These air interfaces are classified as TDD, because time slots can be allocated to either uplink or downlink traffic. TD-CDMA, an acronym for Time-Division Code-Division Multiple Access, is a channel-access method based on using spread-spectrum multiple access (CDMA) across multiple time slots (TDMA).
TD-CDMA is the channel access method for UTRA-TDD HCR, which is an acronym for UMTS Terrestrial Radio Access-Time Division Duplex High Chip Rate.[16] UMTS-TDD's air interfaces that use the TD-CDMA channel access technique are standardized as UTRA-TDD HCR, which uses increments of 5MHzof spectrum, each slice divided into 10 ms frames containing fifteen time slots (1500 per second).[16]The time slots (TS) are allocated in fixed percentage for downlink and uplink. TD-CDMA is used to multiplex streams from or to multiple transceivers. Unlike W-CDMA, it does not need separate frequency bands for up- and downstream, allowing deployment in tightfrequency bands.[18] TD-CDMA is a part of IMT-2000, defined as IMT-TD Time-Division (IMT CDMA TDD), and is one of the three UMTS air interfaces (UTRAs), as standardized by the 3GPP in UTRA-TDD HCR. UTRA-TDD HCR is closely related to W-CDMA, and provides the same types of channels where possible. UMTS's HSDPA/HSUPA enhancements are also implemented under TD-CDMA.[19] In the United States, the technology has been used for public safety and government use in theNew York Cityand a few other areas.[needs update][20]In Japan, IPMobile planned to provide TD-CDMA service in year 2006, but it was delayed, changed to TD-SCDMA, and bankrupt before the service officially started. Time-Division Synchronous Code-Division Multiple Access(TD-SCDMA) or UTRA TDD 1.28Mcpslow chip rate (UTRA-TDD LCR)[17][8]is an air interface[17]found in UMTS mobile telecommunications networks in China as an alternative to W-CDMA. TD-SCDMA uses the TDMA channel access method combined with an adaptivesynchronous CDMAcomponent[17]on 1.6 MHz slices of spectrum, allowing deployment in even tighter frequency bands than TD-CDMA. It is standardized by the 3GPP and also referred to as "UTRA-TDD LCR". However, the main incentive for development of this Chinese-developed standard was avoiding or reducing the license fees that have to be paid to non-Chinese patent owners. Unlike the other air interfaces, TD-SCDMA was not part of UMTS from the beginning but has been added in Release 4 of the specification. Like TD-CDMA, TD-SCDMA is known as IMT CDMA TDD within IMT-2000. The term "TD-SCDMA" is misleading. While it suggests covering only a channel access method, it is actually the common name for the whole air interface specification.[8] TD-SCDMA / UMTS-TDD (LCR) networks are incompatible with W-CDMA / UMTS-FDD and TD-CDMA / UMTS-TDD (HCR) networks. TD-SCDMA was developed in the People's Republic of China by the Chinese Academy of Telecommunications Technology (CATT),Datang TelecomandSiemensin an attempt to avoid dependence on Western technology. This is likely primarily for practical reasons, since other 3G formats require the payment of patent fees to a large number of Western patent holders. TD-SCDMA proponents also claim it is better suited for densely populated areas.[17]Further, it is supposed to cover all usage scenarios, whereas W-CDMA is optimised for symmetric traffic and macro cells, while TD-CDMA is best used in low mobility scenarios within micro or pico cells.[17] TD-SCDMA is based on spread-spectrum technology which makes it unlikely that it will be able to completely escape the payment of license fees to western patent holders. 
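A quick arithmetic check of the UTRA-TDD HCR framing figures quoted above (10 ms frames, each containing fifteen time slots); the values are taken directly from the text.

frame_ms = 10
slots_per_frame = 15
slots_per_second = slots_per_frame * 1000 // frame_ms
print(slots_per_second)   # 1500 time slots per second, as stated above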
The launch of a national TD-SCDMA network was initially projected by 2005[21]but only reached large scale commercial trials with 60,000 users across eight cities in 2008.[22] On January 7, 2009, China granted a TD-SCDMA 3G licence toChina Mobile.[23] On September 21, 2009, China Mobile officially announced that it had 1,327,000 TD-SCDMA subscribers as of the end of August, 2009. TD-SCDMA is not commonly used outside of China.[24] TD-SCDMA uses TDD, in contrast to the FDD scheme used byW-CDMA. By dynamically adjusting the number of timeslots used for downlink anduplink, the system can more easily accommodate asymmetric traffic with different data rate requirements on downlink and uplink than FDD schemes. Since it does not require paired spectrum for downlink and uplink, spectrum allocation flexibility is also increased. Using the same carrier frequency for uplink and downlink also means that the channel condition is the same on both directions, and thebase stationcan deduce the downlink channel information from uplink channel estimates, which is helpful to the application ofbeamformingtechniques. TD-SCDMA also uses TDMA in addition to the CDMA used in WCDMA. This reduces the number of users in each timeslot, which reduces the implementation complexity ofmultiuser detectionand beamforming schemes, but the non-continuous transmission also reducescoverage(because of the higher peak power needed), mobility (because of lowerpower controlfrequency) and complicatesradio resource managementalgorithms. The "S" in TD-SCDMA stands for "synchronous", which means that uplink signals are synchronized at the base station receiver, achieved by continuous timing adjustments. This reduces theinterferencebetween users of the same timeslot using different codes by improving theorthogonalitybetween the codes, therefore increasing system capacity, at the cost of some hardware complexity in achieving uplink synchronization. On January 20, 2006,Ministry of Information Industryof the People's Republic of China formally announced that TD-SCDMA is the country's standard of 3G mobile telecommunication. On February 15, 2006, a timeline for deployment of the network in China was announced, stating pre-commercial trials would take place starting after completion of a number of test networks in select cities. These trials ran from March to October, 2006, but the results were apparently unsatisfactory. In early 2007, the Chinese government instructed the dominant cellular carrier, China Mobile, to build commercial trial networks in eight cities, and the two fixed-line carriers,China TelecomandChina Netcom, to build one each in two other cities. Construction of these trial networks was scheduled to finish during the fourth quarter of 2007, but delays meant that construction was not complete until early 2008. The standard has been adopted by 3GPP since Rel-4, known as "UTRA TDD 1.28 Mcps Option".[17] On March 28, 2008, China Mobile Group announced TD-SCDMA "commercial trials" for 60,000 test users in eight cities from April 1, 2008. Networks using other 3G standards (WCDMA and CDMA2000 EV/DO) had still not been launched in China, as these were delayed until TD-SCDMA was ready for commercial launch. In January 2009, theMinistry of Industry and Information Technology(MIIT) in China took the unusual step of assigning licences for 3 different third-generation mobile phone standards to three carriers in a long-awaited step that is expected to prompt $41 billion in spending on new equipment. 
The Chinese-developed standard, TD-SCDMA, was assigned to China Mobile, the world's biggest phone carrier by subscribers. That appeared to be an effort to make sure the new system has the financial and technical backing to succeed. Licences for two existing 3G standards, W-CDMA andCDMA2000 1xEV-DO, were assigned toChina Unicomand China Telecom, respectively. Third-generation, or 3G, technology supports Web surfing, wireless video and other services and the start of service is expected to spur new revenue growth. The technical split by MIIT has hampered the performance of China Mobile in the 3G market, with users and China Mobile engineers alike pointing to the lack of suitable handsets to use on the network.[25]Deployment of base stations has also been slow, resulting in lack of improvement of service for users.[26]The network connection itself has consistently been slower than that from the other two carriers, leading to a sharp decline in market share. By 2011 China Mobile has already moved its focus onto TD-LTE.[27][28]Gradual closures of TD-SCDMA stations started in 2016.[29][30] The following is a list ofmobile telecommunicationsnetworks using third-generation TD-SCDMA / UMTS-TDD (LCR) technology. In Europe,CEPTallocated the 2010–2020 MHz range for a variant of UMTS-TDD designed for unlicensed, self-provided use.[33]Some telecom groups and jurisdictions have proposed withdrawing this service in favour of licensed UMTS-TDD,[34]due to lack of demand, and lack of development of a UMTS TDD air interface technology suitable for deployment in this band. Ordinary UMTS uses UTRA-FDD as an air interface and is known asUMTS-FDD. UMTS-FDD uses W-CDMA for multiple access andfrequency-division duplexfor duplexing, meaning that the up-link and down-link transmit on different frequencies. UMTS is usually transmitted on frequencies assigned for1G,2G, or 3G mobile telephone service in the countries of operation. UMTS-TDD uses time-division duplexing, allowing the up-link and down-link to share the same spectrum. This allows the operator to more flexibly divide the usage of available spectrum according to traffic patterns. For ordinary phone service, you would expect the up-link and down-link to carry approximately equal amounts of data (because every phone call needs a voice transmission in either direction), but Internet-oriented traffic is more frequently one-way. For example, when browsing a website, the user will send commands, which are short, to the server, but the server will send whole files, that are generally larger than those commands, in response. UMTS-TDD tends to be allocated frequency intended for mobile/wireless Internet services rather than used on existing cellular frequencies. This is, in part, because TDD duplexing is not normally allowed oncellular,PCS/PCN, and 3G frequencies. TDD technologies open up the usage of left-over unpaired spectrum. Europe-wide, several bands are provided either specifically for UMTS-TDD or for similar technologies. These are 1900 MHz and 1920 MHz and between 2010 MHz and 2025 MHz. In several countries the 2500–2690 MHz band (also known as MMDS in the USA) have been used for UMTS-TDD deployments. Additionally, spectrum around the 3.5 GHz range has been allocated in some countries, notably Britain, in a technology-neutral environment. 
In the Czech Republic UTMS-TDD is also used in a frequency range around 872 MHz.[35] UMTS-TDD has been deployed for public and/or private networks in at least nineteen countries around the world, with live systems in, amongst other countries, Australia, Czech Republic, France, Germany, Japan, New Zealand, Botswana, South Africa, the UK, and the USA. Deployments in the US thus far have been limited. It has been selected for a public safety support network used by emergency responders in New York,[36]but outside of some experimental systems, notably one fromNextel, thus far the WiMAX standard appears to have gained greater traction as a general mobile Internet access system. A variety of Internet-access systems exist which provide broadband speed access to the net. These include WiMAX andHIPERMAN. UMTS-TDD has the advantages of being able to use an operator's existing UMTS/GSM infrastructure, should it have one, and that it includes UMTS modes optimized for circuit switching should, for example, the operator want to offer telephone service. UMTS-TDD's performance is also more consistent. However, UMTS-TDD deployers often have regulatory problems with taking advantage of some of the services UMTS compatibility provides. For example, the UMTS-TDD spectrum in the UK cannot be used to provide telephone service, though the regulatorOFCOMis discussing the possibility of allowing it at some point in the future. Few operators considering UMTS-TDD have existing UMTS/GSM infrastructure. Additionally, the WiMAX and HIPERMAN systems provide significantly larger bandwidths when the mobile station is near the tower. Like most mobile Internet access systems, many users who might otherwise choose UMTS-TDD will find their needs covered by the ad hoc collection of unconnected Wi-Fi access points at many restaurants and transportation hubs, and/or by Internet access already provided by their mobile phone operator. By comparison, UMTS-TDD (and systems like WiMAX) offers mobile, and more consistent, access than the former, and generally faster access than the latter. UMTS also specifies the Universal Terrestrial Radio Access Network (UTRAN), which is composed of multiple base stations, possibly using different terrestrial air interface standards and frequency bands. UMTS and GSM/EDGE can share a Core Network (CN), making UTRAN an alternative radio access network toGERAN(GSM/EDGE RAN), and allowing (mostly) transparent switching between the RANs according to available coverage and service needs. Because of that, UMTS's and GSM/EDGE's radio access networks are sometimes collectively referred to as UTRAN/GERAN. UMTS networks are often combined with GSM/EDGE, the latter of which is also a part of IMT-2000. The UE (User Equipment) interface of theRAN(Radio Access Network) primarily consists ofRRC(Radio Resource Control),PDCP(Packet Data Convergence Protocol),RLC(Radio Link Control) and MAC (Media Access Control) protocols. RRC protocol handles connection establishment, measurements, radio bearer services, security and handover decisions. RLC protocol primarily divides into three Modes – Transparent Mode (TM), Unacknowledge Mode (UM), Acknowledge Mode (AM). The functionality of AM entity resembles TCP operation whereas UM operation resembles UDP operation. In TM mode, data will be sent to lower layers without adding any header toSDUof higher layers. MAC handles the scheduling of data on air interface depending on higher layer (RRC) configured parameters. 
The set of properties related to data transmission is called Radio Bearer (RB). This set of properties decides the maximum allowed data in a TTI (Transmission Time Interval). RB includes RLC information and RB mapping. RB mapping decides the mapping between RB<->logical channel<->transport channel. Signaling messages are sent on Signaling Radio Bearers (SRBs) and data packets (either CS or PS) are sent on data RBs. RRC andNASmessages go on SRBs. Security includes two procedures: integrity and ciphering. Integrity validates the resource of messages and also makes sure that no one (third/unknown party) on the radio interface has modified the messages. Ciphering ensures that no one listens to your data on the air interface. Both integrity and ciphering are applied for SRBs whereas only ciphering is applied for data RBs. With Mobile Application Part, UMTS uses the same core network standard as GSM/EDGE. This allows a simple migration for existing GSM operators. However, the migration path to UMTS is still costly: while much of the core infrastructure is shared with GSM, the cost of obtaining new spectrum licenses and overlaying UMTS at existing towers is high. The CN can be connected to variousbackbone networks, such as theInternetor anIntegrated Services Digital Network(ISDN) telephone network. UMTS (and GERAN) include the three lowest layers ofOSI model. The network layer (OSI 3) includes theRadio Resource Managementprotocol (RRM) that manages the bearer channels between the mobile terminals and the fixed network, including the handovers. A UARFCN (abbreviationfor UTRA Absolute Radio Frequency Channel Number, where UTRA stands for UMTS Terrestrial Radio Access) is used to identify a frequency in theUMTS frequency bands. Typically channel number is derived from the frequency in MHz through the formula Channel Number = Frequency * 5. However, this is only able to represent channels that are centered on a multiple of 200 kHz, which do not align with licensing in North America. 3GPP added several special values for the common North American channels. Over 130 licenses had been awarded to operators worldwide, as of December 2004, specifying W-CDMA radio access technology that builds on GSM. In Europe, the license process occurred at the tail end of the technology bubble, and the auction mechanisms for allocation set up in some countries resulted in some extremely high prices being paid for the original 2100 MHz licenses, notably in the UK and Germany. InGermany, bidders paid a total €50.8 billion for six licenses, two of which were subsequently abandoned and written off by their purchasers (Mobilcom and theSonera/Telefónicaconsortium). It has been suggested that these huge license fees have the character of a very large tax paid on future income expected many years down the road. In any event, the high prices paid put some European telecom operators close to bankruptcy (most notablyKPN). Over the last few years some operators have written off some or all of the license costs. Between 2007 and 2009, all three Finnish carriers began to use 900 MHz UMTS in a shared arrangement with its surrounding 2G GSM base stations for rural area coverage, a trend that is expected to expand over Europe in the next 1–3 years.[needs update] The 2100 MHz band (downlink around 2100 MHz and uplink around 1900 MHz) allocated for UMTS in Europe and most of Asia is already used in North America. The 1900 MHz range is used for 2G (PCS) services, and 2100 MHz range is used for satellite communications. 
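The UARFCN rule quoted above can be captured in a small helper. Only the general case of carriers centred on the 200 kHz raster is handled; the special channel numbers that 3GPP added for North America are not. The Band I example frequencies are illustrative.

def uarfcn_from_frequency(freq_mhz):
    # General rule from the text: channel number = frequency in MHz * 5.
    return round(freq_mhz * 5)

def frequency_from_uarfcn(uarfcn):
    return uarfcn / 5.0

print(uarfcn_from_frequency(2112.4))   # example Band I downlink carrier -> 10562
print(uarfcn_from_frequency(1922.4))   # example Band I uplink carrier   -> 9612
print(frequency_from_uarfcn(10562))    # -> 2112.4 MHz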
Regulators have, however, freed up some of the 2100 MHz range for 3G services, together with a different range around 1700 MHz for the uplink.[needs update] AT&T Wireless launched UMTS services in the United States by the end of 2004 strictly using the existing 1900 MHz spectrum allocated for 2G PCS services. Cingular acquired AT&T Wireless in 2004 and has since then launched UMTS in select US cities. Cingular renamed itself AT&T Mobility and rolled out[37]some cities with a UMTS network at 850 MHz to enhance its existing UMTS network at 1900 MHz and now offers subscribers a number of dual-band UMTS 850/1900 phones. T-Mobile's rollout of UMTS in the US was originally focused on the 1700 MHz band. However, T-Mobile has been moving users from 1700 MHz to 1900 MHz (PCS) in order to reallocate the spectrum to 4GLTEservices.[38] In Canada, UMTS coverage is being provided on the 850 MHz and 1900 MHz bands on the Rogers and Bell-Telus networks. Bell and Telus share the network. Recently, new providersWind Mobile,MobilicityandVideotronhave begun operations in the 1700 MHz band. In 2008, Australian telco Telstra replaced its existing CDMA network with a national UMTS-based 3G network, branded asNextG, operating in the 850 MHz band. Telstra currently provides UMTS service on this network, and also on the 2100 MHz UMTS network, through a co-ownership of the owning and administrating company 3GIS. This company is also co-owned byHutchison 3G Australia, and this is the primary network used by their customers.Optusis currently rolling out a 3G network operating on the 2100 MHz band in cities and most large towns, and the 900 MHz band in regional areas.Vodafoneis also building a 3G network using the 900 MHz band. In India,BSNLhas started its 3G services since October 2009, beginning with the larger cities and then expanding over to smaller cities. The 850 MHz and 900 MHz bands provide greater coverage compared to equivalent 1700/1900/2100 MHz networks, and are best suited to regional areas where greater distances separate base station and subscriber. Carriers in South America are now also rolling out 850 MHz networks. UMTS phones (and data cards) are highly portable – they have been designed to roam easily onto other UMTS networks (if the providers have roaming agreements in place). In addition, almost all UMTS phones are UMTS/GSM dual-mode devices, so if a UMTS phone travels outside of UMTS coverage during a call the call may be transparently handed off to available GSM coverage. Roaming charges are usually significantly higher than regular usage charges. Most UMTS licensees consider ubiquitous, transparent globalroamingan important issue. To enable a high degree of interoperability, UMTS phones usually support several different frequencies in addition to their GSM fallback. Different countries support different UMTS frequency bands – Europe initially used 2100 MHz while the most carriers in the USA use 850 MHz and 1900 MHz. T-Mobile has launched a network in the US operating at 1700 MHz (uplink) /2100 MHz (downlink), and these bands also have been adopted elsewhere in the US and in Canada and Latin America. A UMTS phone and network must support a common frequency to work together. Because of the frequencies used, early models of UMTS phones designated for the United States will likely not be operable elsewhere and vice versa. There are now 11 different frequency combinations used around the world – including frequencies formerly used solely for 2G services. 
UMTS phones can use aUniversal Subscriber Identity Module, USIM (based on GSM'sSIM card) and also work (including UMTS services) with GSM SIM cards. This is a global standard of identification, and enables a network to identify and authenticate the (U)SIM in the phone. Roaming agreements between networks allow for calls to a customer to be redirected to them while roaming and determine the services (and prices) available to the user. In addition to user subscriber information and authentication information, the (U)SIM provides storage space for phone book contact. Handsets can store their data on their own memory or on the (U)SIM card (which is usually more limited in its phone book contact information). A (U)SIM can be moved to another UMTS or GSM phone, and the phone will take on the user details of the (U)SIM, meaning it is the (U)SIM (not the phone) which determines the phone number of the phone and the billing for calls made from the phone. Japan was the first country to adopt 3G technologies, and since they had not used GSM previously they had no need to build GSM compatibility into their handsets and their 3G handsets were smaller than those available elsewhere. In 2002, NTT DoCoMo's FOMA 3G network was the first commercial UMTS network – using a pre-release specification,[39]it was initially incompatible with the UMTS standard at the radio level but used standard USIM cards, meaning USIM card based roaming was possible (transferring the USIM card into a UMTS or GSM phone when travelling). Both NTT DoCoMo and SoftBank Mobile (which launched 3G in December 2002) now use standard UMTS. All of the major 2G phone manufacturers (that are still in business) are now manufacturers of 3G phones. The early 3G handsets and modems were specific to the frequencies required in their country, which meant they could only roam to other countries on the same 3G frequency (though they can fall back to the older GSM standard). Canada and USA have a common share of frequencies, as do most European countries. The article UMTS frequency bands is an overview of UMTS network frequencies around the world. Using acellular router, PCMCIA or USB card, customers are able to access 3G broadband services, regardless of their choice of computer (such as atablet PCor aPDA). Some softwareinstalls itselffrom the modem, so that in some cases absolutely no knowledge of technology is required to getonlinein moments. Using a phone that supports 3G and Bluetooth 2.0, multiple Bluetooth-capable laptops can be connected to the Internet. Some smartphones can also act as a mobileWLAN access point. There are very few 3G phones or modems available supporting all 3G frequencies (UMTS850/900/1700/1900/2100 MHz). In 2010, Nokia released a range of phones withPentaband3G coverage, including theN8andE7. Many other phones are offering more than one band which still enables extensive roaming. For example, Apple'siPhone 4contains a quadband chipset operating on 850/900/1900/2100 MHz, allowing usage in the majority of countries where UMTS-FDD is deployed. The main competitor to UMTS is CDMA2000 (IMT-MC), which is developed by the3GPP2. Unlike UMTS, CDMA2000 is an evolutionary upgrade to an existing 2G standard, cdmaOne, and is able to operate within the same frequency allocations. This and CDMA2000's narrower bandwidth requirements make it easier to deploy in existing spectra. In some, but not all, cases, existing GSM operators only have enough spectrum to implement either UMTS or GSM, not both. 
For example, in the US D, E, and F PCS spectrum blocks, the amount of spectrum available is 5 MHz in each direction. A standard UMTS system would saturate that spectrum. Where CDMA2000 is deployed, it usually co-exists with UMTS. In many markets however, the co-existence issue is of little relevance, as legislative hurdles exist to co-deploying two standards in the same licensed slice of spectrum. Another competitor to UMTS isEDGE(IMT-SC), which is an evolutionary upgrade to the 2G GSM system, leveraging existing GSM spectrums. It is also much easier, quicker, and considerably cheaper for wireless carriers to "bolt-on" EDGE functionality by upgrading their existing GSM transmission hardware to support EDGE rather than having to install almost all brand-new equipment to deliver UMTS. However, being developed by 3GPP just as UMTS, EDGE is not a true competitor. Instead, it is used as a temporary solution preceding UMTS roll-out or as a complement for rural areas. This is facilitated by the fact that GSM/EDGE and UMTS specifications are jointly developed and rely on the same core network, allowing dual-mode operation includingvertical handovers. China'sTD-SCDMAstandard is often seen as a competitor, too. TD-SCDMA has been added to UMTS' Release 4 as UTRA-TDD 1.28 Mcps Low Chip Rate (UTRA-TDD LCR). UnlikeTD-CDMA(UTRA-TDD 3.84 Mcps High Chip Rate, UTRA-TDD HCR) which complements W-CDMA (UTRA-FDD), it is suitable for both micro and macrocells. However, the lack of vendors' support is preventing it from being a real competitor. While DECT is technically capable of competing with UMTS and other cellular networks in densely populated, urban areas, it has only been deployed for domestic cordless phones and private in-house networks. All of these competitors have been accepted by ITU as part of the IMT-2000 family of 3G standards, along with UMTS-FDD. On the Internet access side, competing systems include WiMAX andFlash-OFDM. From a GSM/GPRS network, the following network elements can be reused: From a GSM/GPRS communication radio network, the following elements cannot be reused: They can remain in the network and be used in dual network operation where 2G and 3G networks co-exist while network migration and new 3G terminals become available for use in the network. The UMTS network introduces new network elements that function as specified by 3GPP: The functionality of MSC changes when going to UMTS. In a GSM system the MSC handles all the circuit switched operations like connecting A- and B-subscriber through the network. In UMTS the Media gateway (MGW) takes care of data transfer in circuit switched networks. MSC controls MGW operations. Some countries, including the United States, have allocated spectrum differently from theITUrecommendations, so that the standard bands most commonly used for UMTS (UMTS-2100) have not been available.[citation needed]In those countries, alternative bands are used, preventing the interoperability of existing UMTS-2100 equipment, and requiring the design and manufacture of different equipment for the use in these markets. As is the case with GSM900 today[when?], standard UMTS 2100 MHz equipment will not work in those markets. However, it appears as though UMTS is not suffering as much from handset band compatibility issues as GSM did, as many UMTS handsets are multi-band in both UMTS and GSM modes. 
Penta-band (850, 900, 1700, 2100, and 1900 MHz bands), quad-band GSM (850, 900, 1800, and 1900 MHz bands) and tri-band UMTS (850, 1900, and 2100 MHz bands) handsets are becoming more commonplace.[40] In its early days[when?], UMTS had problems in many countries: Overweight handsets with poor battery life were first to arrive on a market highly sensitive to weight and form factor.[citation needed]The Motorola A830, a debut handset on Hutchison's 3 network, weighed more than 200 grams and even featured a detachable camera to reduce handset weight. Another significant issue involved call reliability, related to problems with handover from UMTS to GSM. Customers found their connections being dropped as handovers were possible only in one direction (UMTS → GSM), with the handset only changing back to UMTS after hanging up. In most networks around the world this is no longer an issue.[citation needed] Compared to GSM, UMTS networks initially required a higherbase stationdensity. For fully-fledged UMTS incorporatingvideo on demandfeatures, one base station needed to be set up every 1–1.5 km (0.62–0.93 mi). This was the case when only the 2100 MHz band was being used, however with the growing use of lower-frequency bands (such as 850 and 900 MHz) this is no longer so. This has led to increasing rollout of the lower-band networks by operators since 2006.[citation needed] Even with current technologies and low-band UMTS, telephony and data over UMTS requires more power than on comparable GSM networks.Apple Inc.cited[41]UMTS power consumption as the reason that the first generationiPhoneonly supported EDGE. Their release of the iPhone 3G quotes talk time on UMTS as half that available when the handset is set to use GSM. Other manufacturers indicate different battery lifetime for UMTS mode compared to GSM mode as well. As battery and network technology improve, this issue is diminishing. As early as 2008, it was known that carrier networks can be used to surreptitiously gather user location information.[42]In August 2014, theWashington Postreported on widespread marketing of surveillance systems usingSignalling System No. 7(SS7) protocols to locate callers anywhere in the world.[42] In December 2014, news broke that SS7's very own functions can be repurposed for surveillance, because of its relaxed security, in order to listen to calls in real time or to record encrypted calls and texts for later decryption, or to defraud users and cellular carriers.[43] Deutsche Telekomand Vodafone declared the same day that they had fixed gaps in their networks, but that the problem is global and can only be fixed with a telecommunication system-wide solution.[44] The evolution of UMTS progresses according to planned releases. Each release is designed to introduce new features and improve upon existing ones.
https://en.wikipedia.org/wiki/UMTS
This is a list of commercial Long-Term Evolution (LTE) networks around the world, grouped by their frequency bands. Some operators use multiple bands and are therefore listed multiple times in respective sections. Note: This list of network deployments does not imply any widespread deployment or national coverage. See List of LTE networks in Africa. Apart from their main spectrum holdings across large regions in the country, the major US carriers (AT&T, Sprint, T-Mobile & Verizon) also hold various Cellular Market Area (CMA) and/or Economic Area (EA) licenses for the AWS 1700 band, as well as Major Trading Area (MTA) and/or Basic Trading Area (BTA) licenses for the PCS 1900 band. In several small regional areas the named operators combine these with their major spectrum holdings to increase the bandwidth for their LTE deployments. Due to the large number of these "single" licenses they are not listed here. See List of LTE networks in Asia. See List of LTE networks in Europe.
https://en.wikipedia.org/wiki/List_of_LTE_networks
Alist ofCDMA2000networksworldwide.
https://en.wikipedia.org/wiki/List_of_CDMA2000_networks
The IEEE 802.21 standard for Media Independent Handoff (MIH) is an IEEE standard published in 2008. The standard supports algorithms enabling seamless handover between wired and wireless networks of the same type as well as handover between different wired and wireless network types, also called media independent handover (MIH) or vertical handover. The concept of vertical handover was first introduced by Mark Stemm and Randy Katz at UC Berkeley.[1] The standard provides information to allow handing over to and from wired 802.3 networks and wireless 802.11, 802.15, 802.16, 3GPP and 3GPP2 networks through different handover mechanisms. The IEEE 802.21 working group started work in March 2004. More than 30 companies have joined the working group. The group produced a first draft of the standard, including the protocol definition, in May 2005. The standard was published in January 2009. Cellular networks and 802.11 networks employ handover mechanisms for handover within the same network type (also known as horizontal handover). Mobile IP provides handover mechanisms for handover across subnets of different types of networks, but can be slow in the process. Current 802 standards do not support handover between different types of networks. They also do not provide triggers or other services to accelerate Mobile IP-based handovers. Moreover, existing 802 standards provide mechanisms for detecting and selecting network access points, but do not allow for detection and selection of access points in a way that is independent of the network type. Implementation is still in progress. Current technologies such as 802.11 accomplish handover in software, which suggests that 802.21 handover will also be implemented in software. Implementing 802.21 in software should not cause large increases in the cost of networking devices. An open-source software implementation is provided by ODTONE.[citation needed] Crossing different administrative connectivity domains will require agreements among different network operators. Currently, such agreements are still not in place. In smartphones today, a user can manually select Wi-Fi or cellular LTE, but the connections are not automatically maintained should a disconnection of one network occur. Hence, seamless handovers across different wired/wireless networks are still not available today. Unlicensed Mobile Access (UMA)[citation needed] technology is a mobile-centric version of 802.21. UMA is said to provide roaming and handover between GSM, UMTS, Bluetooth and 802.11 networks. Since June 19, 2005, UMA has been part of the ETSI 3GPP standardization process under the GAN (Generic Access Network) group. The Evolved Packet Core (EPC) architecture for Next Generation Mobile Networks (3GPP Rel. 8 and newer) provides the Access Network Discovery and Selection Function element (ANDSF)[citation needed] (see 3GPP TS 23.402 and 3GPP TS 24.312). Its S14 interface provides the communication path between the core network and the user endpoint device over which discovery information and inter-system mobility policies are exchanged, enabling a network-suggested reselection of access networks.
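The event-driven, policy-based handover decision that 802.21's media-independent services are meant to enable can be illustrated with a short sketch. The following Python fragment is a hypothetical model, not an implementation of the standard: the LinkReport structure, the hysteresis threshold, and decide_handover are invented names used only to show how link measurements from heterogeneous interfaces might feed a single handover policy.

```python
# Minimal sketch of a media-independent handover decision, loosely inspired by
# the IEEE 802.21 idea of link events feeding a handover policy. All names and
# thresholds here are illustrative assumptions, not the standard's API.
from dataclasses import dataclass

@dataclass
class LinkReport:
    name: str          # e.g. "wifi0", "lte0"
    media: str         # "802.11", "3GPP", "802.3", ...
    signal_dbm: float  # measured signal strength
    up: bool           # link currently usable

def decide_handover(current: LinkReport, candidates: list,
                    hysteresis_db: float = 5.0) -> LinkReport:
    """Return the link to use next: stay on the current link unless it is down
    or another usable link is better by at least the hysteresis margin."""
    usable = [c for c in candidates if c.up]
    if not usable:
        return current
    best = max(usable, key=lambda c: c.signal_dbm)
    if not current.up or best.signal_dbm > current.signal_dbm + hysteresis_db:
        return best
    return current

if __name__ == "__main__":
    wifi = LinkReport("wifi0", "802.11", signal_dbm=-82.0, up=True)
    lte = LinkReport("lte0", "3GPP", signal_dbm=-70.0, up=True)
    print(decide_handover(current=wifi, candidates=[wifi, lte]).name)  # -> lte0
```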
https://en.wikipedia.org/wiki/IEEE_802.21
IEEE 802.11r-2008orfast BSS transition(FT), is an amendment to theIEEE 802.11standard to permit continuous connectivity aboard wireless devices in motion, with fast and secure client transitions from oneBasic Service Set(abbreviated BSS, and also known as abase stationor more colloquially, anaccess point) to another performed in a nearly seamless manner. It was published on July 15, 2008. IEEE 802.11r-2008 was rolled up into 802.11-2012.[1]The termshandoffandroamingare often used, although 802.11 transition is not a true handoff/roaming process in the cellular sense, where the process is coordinated by the base station and is generally uninterrupted. 802.11, commonly known asWi-Fi, is widely used for wireless local area communications. Many deployed implementations have effective ranges of only a few dozen meters, so, to maintain communications, devices in motion that use it will need to transition from one access point to another. In an automotive environment, this could easily result in a transition every five to ten seconds. Transitions are already supported under the preexisting standard. The fundamental architecture for transition is identical for 802.11 with and without 802.11r: the client device (known as theStation, orSTA) is entirely in charge of deciding when to transition and to which BSS it wishes to transition. In the early days of 802.11, transition was a much simpler task for the client device. Only four messages were required for the device to establish a connection with a new BSS (five if counting the optional "I'm leaving" message (deauthentication and disassociation frame) the client could send to the old access point). However, as additional features were added to the standard, including802.11iwith802.1Xauthentication and802.11e(QoS) orWireless Multimedia Extensions(WMM) with admission control requests, the number of messages required went up dramatically. During the time these additional messages are being exchanged, the mobile device's traffic, including that from voice calls, cannot proceed, and the loss experienced by the user could amount to several seconds.[2]Generally, the highest amount of delay or loss that the edge network should introduce into a voice call is 50 ms. 802.11r was launched to attempt to undo the added burden that security and quality of service added to the transition process, and restore it to the original four-message exchange. In this way, transition problems are not eliminated, but at least are returned to the status quo ante. The primary application currently envisioned for the 802.11r standard isvoice over IP(VOIP) via mobile phones designed to work with wireless Internet networks, instead of (or in addition to) standard cellular networks. IEEE 802.11r specifies fastBasic Service Set(BSS) transitions between access points by redefining the security key negotiation protocol, allowing both the negotiation and requests for wireless resources (similar toRSVPbut defined in802.11e) to occur in parallel. The key negotiation protocol in802.11ispecifies that, for802.1X-based authentication, the client is required to renegotiate its key with theRADIUSor other authentication server supportingExtensible Authentication Protocol(EAP) on every transition, a time-consuming process. The solution is to allow for the part of the key derived from the server to be cached in the wireless network, so that a reasonable number of future connections can be based on the cached key, avoiding the 802.1X process. 
A feature known asopportunistic key caching(OKC) exists today, based on 802.11i, to perform the same task. 802.11r differs from OKC by fully specifying the key hierarchy. The non-802.11r BSS transition goes through six stages: At this point in an802.1XBSS, the AP and Station have a connection, but are not allowed to exchange data frames, as they have not established a key. A fast BSS transition performs the same operations except for the 802.1X negotiation, but piggybacks the PTK and QoS admission control exchanges with the 802.11 Authentication and Reassociation messages. In October 2017 security researchers Mathy Vanhoef (imec-DistriNet, KU Leuven) and Frank Piessens (imec-DistriNet, KU Leuven) published their paper "Key Reinstallation Attacks: Forcing Nonce Reuse in WPA2" (KRACK). This paper also listed a vulnerability of common 802.11r implementations and registered theCVE identifierCVE-2017-13082. On August 4, 2018, researcher Jens Steube (ofHashcat) described a new technique[3]to crack WPA2 and WPA PSK (pre-shared key) passwords that he states will likely work against all 802.11i/p/r networks with transition functions enabled.
https://en.wikipedia.org/wiki/IEEE_802.11r
IEEE 802.11u-2011is an amendment to theIEEE 802.11-2007standard to add features that improve interworking with external networks. 802.11 is a family ofIEEEtechnical standardsfor mobile communication devices such as laptop computers or multi-mode phones to join awireless local area network(WLAN) widely used in the home, public hotspots and commercial establishments. The IEEE 802.11u standard was published on February 25, 2011. This provides a mapping between the IP's differentiated services code point (DSCP) to over-the-air Layer 2 priority on a per-device basis, facilitating end-to-end QoS. IEEE 802.11currently makes an assumption that a user's device is pre-authorized to use the network.IEEE 802.11ucovers the cases where that device is not pre-authorized. A network will be able to allow access based on the user's relationship with an external network (e.g. hotspot roaming agreements), or indicate that online enrollment is possible, or allow access to a strictly limited set of services such as emergency services (client to authority and authority to client.) From a user perspective, the aim is to improve the experience of a traveling user who turns on a laptop in a hotel many miles from home, or uses a mobile device to place a phone call. Instead of being presented with a long list of largely meaninglessSSIDsthe user could be presented with a list of networks, the services they provide, and the conditions under which the user could access them. 802.11u is central to the adoption ofUMAand other approaches to network mobile devices. Because a relatively sophisticated set of conditions can be presented, arbitrary contracts could be presented to the user, and might include providing information on motive, demographics or geographic origin of the user. As such data is valuable to tourism promotion and other public functions, 802.11u is thought to motivate more extensive deployment ofIEEE 802.11smesh networks.[citation needed] Mobile users, whose devices can move between 3G and Wi-Fi networks at a low level using802.21handoff, also need a unified and reliable way to authorize their access to all of those networks. 802.11u provides a common abstraction that all networks regardless of protocol can use to provide a common authentication experience. The IEEE 802.11u requirements specification contains requirements in the areas of enrollment, network selection, emergency call support, emergency alert notification, user traffic segmentation, and service advertisement. TheWi-Fi Allianceuses IEEE 802.11u in its "Wi-Fi Certified Passpoint" program, also known as "Hotspot 2.0".[1]Apple devices runningiOS 7support Hotspot 2.0.[2][3] There have been proposals to use IEEE 802.11u for access points to signal that they allowEAP-TLSusing only server-side authentication.[4]Unlike most TLS implementations ofHTTPS, such as majorweb browsers, the majority of implementations of EAP-TLS require client-sideX.509certificates without giving the option to disable the requirement, even though the standard does not mandate their use, which some have identified as having the potential to dramatically reduce adoption of EAP-TLS and prevent "open" but encrypted access points.[5][6]
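A rough sketch of 802.11u-style network selection is shown below. The data structures, organization identifiers, and selection rule are hypothetical; they are intended only to illustrate choosing an access point by matching advertised roaming consortium information against the user's subscriptions rather than by SSID alone.

```python
# Illustrative sketch of 802.11u-style network selection: instead of picking an
# SSID by name, the device matches the roaming consortium identifiers advertised
# by each access point against the credentials it holds. The data structures and
# OI values are hypothetical examples, not taken from any real deployment.
from dataclasses import dataclass, field

@dataclass
class HotspotAdvertisement:
    ssid: str
    venue: str
    roaming_consortium_ois: set = field(default_factory=set)
    emergency_services_only: bool = False

def select_network(ads, my_ois):
    """Prefer a hotspot whose advertised roaming consortium matches one of the
    user's subscriptions; fall back to emergency-only access if nothing matches."""
    for ad in ads:
        if ad.roaming_consortium_ois & my_ois:
            return ad
    return next((ad for ad in ads if ad.emergency_services_only), None)

ads = [
    HotspotAdvertisement("CafeWiFi", "cafe", {"00-11-22"}),
    HotspotAdvertisement("HotelGuest", "hotel", {"aa-bb-cc"}, emergency_services_only=True),
]
chosen = select_network(ads, my_ois={"aa-bb-cc", "99-88-77"})
print(chosen.ssid if chosen else "no usable network")  # -> HotelGuest
```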
https://en.wikipedia.org/wiki/IEEE_802.11u
MoIPorMOIPcan mean:
https://en.wikipedia.org/wiki/MoIP_(disambiguation)
Alocal area network(LAN) is acomputer networkthat interconnects computers within a limited area such as a residence, campus, or building,[1][2][3]and has itsnetwork equipmentand interconnects locally managed. LANs facilitate the distribution of data and sharing network devices, such as printers. The LAN contrasts thewide area network(WAN), which not only covers a larger geographic distance, but also generally involvesleased telecommunication circuitsorInternetlinks. An even greater contrast is theInternet, which is a system of globally connected business and personal computers. EthernetandWi-Fiare the two most common technologies used for local area networks; historical network technologies includeARCNET,Token Ring, andLocalTalk. Most wired network infrastructures utilizeCategory 5orCategory 6twisted paircabling withRJ45compatible terminations. This medium provides physical connectivity between theEthernetinterfaces present on a large number of IP-aware devices. Depending on the grade of cable and quality of installation, speeds of up to 10 Mbit/s, 100 Mbit/s, 1 Gbit/s, or 10 Gbit/s are supported. In awireless LAN, users have unrestricted movement within the coverage area. Wireless networks have become popular in residences and small businesses because of their ease of installation, convenience, and flexibility.[4]Most wireless LANs consist of devices containingwirelessradio technology that conforms to802.11standards as certified by theIEEE. Most wireless-capable residential devices operate at both the 2.4GHzand 5 GHz frequencies and fall within the 802.11n or 802.11ac standards.[5]Some older home networking devices operate exclusively at a frequency of 2.4 GHz under 802.11b and 802.11g, or 5 GHz under 802.11a. Some newer devices operate at the aforementioned frequencies in addition to 6 GHz underWi-Fi 6E.Wi-Fiis a marketing and compliance certification for IEEE 802.11 technologies.[6]TheWi-Fi Alliancehas tested compliant products, and certifies them for interoperability. The technology may be integrated intosmartphones,tablet computersandlaptops. Guests are often offeredInternet accessvia ahotspotservice. Simple LANs in office or school buildings generally consist of cabling and one or morenetwork switches; a switch is used to allow devices on a LAN to talk to one another viaEthernet. A switch can be connected to arouter,cable modem, orADSL modemforInternetaccess. LANs at residential homes usually tend to have a single router and often may include awireless repeater. A LAN can include a wide variety of other network devices such asfirewalls,load balancers, andnetwork intrusion detection.[7]Awireless access pointis required for connecting wireless devices to a network; when a router includes this device, it is referred to as awireless router. Advanced LANs are characterized by their use of redundant links with switches using thespanning tree protocolto prevent loops, their ability to manage differing traffic types viaquality of service(QoS), and their ability to segregate traffic withVLANs. Anetwork bridgebinds two different LANs or LAN segments to each other, often in order to grant a wired-only device access to a wireless network medium. Network topologydescribes the layout of interconnections between devices and network segments. At thedata link layerandphysical layer, a wide variety of LAN topologies have been used, includingring,bus,meshandstar. The star topology is the most common in contemporary times. 
Wireless LAN (WLAN) also has its topologies: independent basic service set (IBSS, anad-hoc network) where each node connects directly to each other (this is also standardized asWi-Fi Direct), or basic service set (BSS, an infrastructure network that uses anwireless access point).[8] DHCPis used to assign internal IP addresses to members of a local area network. A DHCP server typically runs on the router[9]with end devices as its clients. All DHCP clients request configuration settings using the DHCP protocol in order to acquire theirIP address, adefault routeand one or moreDNS serveraddresses. Once the client implements these settings, it will be able to communicate on thatinternet.[10] At the higher network layers, protocols such asNetBIOS,IPX/SPX,AppleTalkand others were once common, but theInternet protocol suite(TCP/IP) has prevailed as the standard of choice for almost all local area networks today. LANs can maintain connections with other LANs via leased lines, leased services, or across theInternetusingvirtual private networktechnologies. Depending on how the connections are established and secured, and the distance involved, such linked LANs may also be classified as ametropolitan area network(MAN) or awide area network(WAN). Local area networks may be connected to theInternet(a type ofWAN) via fixed-line means (such as aDSL/ADSLmodem[11]) or alternatively using a cellular or satellitemodem. These would additionally make use of telephone wires such asVDSLandVDSL2, coaxial cables, orfiber to the homefor running fiber-optic cables directly into a house or office building, or alternatively a cellular modem orsatellite dishin the latter non-fixed cases. WithInternet access, theInternet service provider (ISP)would grant a single WAN-facingIP addressto the network. A router is configured with the provider's IP address on the WAN interface, which is shared among all devices in the LAN bynetwork address translation. Agatewayestablishesphysicalanddata link layerconnectivity to a WAN over a service provider's native telecommunications infrastructure. Such devices typically contain acable,DSL, oroptical modembound to anetwork interface controllerfor Ethernet. Home and small business class routers are often incorporated into these devices for additional convenience, and they often also have integratedwireless access pointand 4-port Ethernetswitch. TheITU-TG.hnandIEEEPowerlinestandard, which provide high-speed (up to 1 Gbit/s) local area networking over existing home wiring, are examples of home networking technology designed specifically forIPTVdelivery.[12][relevant?] The increasing demand and usage of computers in universities and research labs in the late 1960s generated the need to provide high-speed interconnections between computer systems. 
A 1970 report from theLawrence Radiation Laboratorydetailing the growth of their "Octopus" network gave a good indication of the situation.[13][14] A number of experimental and early commercial LAN technologies were developed in the 1970s.Ethernetwas developed atXerox PARCbetween 1973 and 1974.[15][16]TheCambridge Ringwas developed at Cambridge University starting in 1974.[17]ARCNETwas developed byDatapointCorporation in 1976 and announced in 1977.[18]It had the first commercial installation in December 1977 atChase Manhattan Bankin New York.[19]In 1979,[20]theelectronic voting system for the European Parliamentwas the first installation of a LAN connecting hundreds (420) of microprocessor-controlled voting terminals to a polling/selecting central unit with amultidrop buswithMaster/slave (technology)arbitration.[dubious–discuss]It used 10 kilometers of simpleunshielded twisted paircategory 3 cable—the same cable used for telephone systems—installed inside the benches of the European Parliament Hemicycles in Strasbourg and Luxembourg.[21] The development and proliferation ofpersonal computersusing theCP/Moperating system in the late 1970s, and laterDOS-based systems starting in 1981, meant that many sites grew to dozens or even hundreds of computers. The initial driving force for networking was to sharestorageandprinters, both of which were expensive at the time. There was much enthusiasm for the concept, and for several years, from about 1983 onward, computer industry pundits habitually declared the coming year to be, "The year of the LAN".[22][23][24] In practice, the concept was marred by the proliferation of incompatiblephysical layerandnetwork protocolimplementations, and a plethora of methods of sharing resources. Typically, each vendor would have its own type of network card, cabling, protocol, andnetwork operating system. A solution appeared with the advent ofNovell NetWarewhich provided even-handed support for dozens of competing card and cable types, and a much more sophisticated operating system than most of its competitors. Of the competitors to NetWare, onlyBanyan Vineshad comparable technical strengths, but Banyan never gained a secure base.3Comproduced3+Shareand Microsoft producedMS-Net. These then formed the basis for collaboration betweenMicrosoftand 3Com to create a simple network operating systemLAN Managerand its cousin, IBM'sLAN Server. None of these enjoyed any lasting success; Netware dominated the personal computer LAN business from early after its introduction in 1983 until the mid-1990s when Microsoft introducedWindows NT.[25] In 1983, TCP/IP was first shown capable of supporting actual defense department applications on a Defense Communication Agency LAN testbed located at Reston, Virginia.[26][27]The TCP/IP-based LAN successfully supportedTelnet,FTP, and a Defense Department teleconferencing application.[28]This demonstrated the feasibility of employing TCP/IP LANs to interconnectWorldwide Military Command and Control System(WWMCCS) computers at command centers throughout the United States.[29]However, WWMCCS was superseded by theGlobal Command and Control System(GCCS) before that could happen. During the same period,Unix workstationswere using TCP/IP networking. Although the workstation market segment is now much reduced, the technologies developed in the area continue to be influential on the Internet and in all forms of networking—and the TCP/IP protocol has replacedIPX,AppleTalk,NBF, and other protocols used by the early PC LANs. 
Econetwas Acorn Computers's low-cost local area network system, intended for use by schools and small businesses. It was first developed for theAcorn AtomandAcorn System 2/3/4computers in 1981.[30][31] In the 1980s, several token ring network implementations for LANs were developed.[32][33]IBM released their own implementation of token ring in 1985,[34][35]It ran at4Mbit/s.[36]IBM claimed that their token ring systems were superior to Ethernet, especially under load, but these claims were debated.[37][38]IBM's implementation of token ring was the basis of the IEEE 802.5 standard.[39]A 16 Mbit/s version of Token Ring was standardized by the 802.5 working group in 1989.[40]IBM had market dominance over Token Ring, for example, in 1990, IBM equipment was the most widely used for Token Ring networks.[41] Fiber Distributed Data Interface(FDDI), a LAN standard, was considered an attractive campusbackbone networktechnology in the early to mid 1990s since existing Ethernet networks only offered 10 Mbit/s data rates and Token Ring networks only offered 4 Mbit/s or 16 Mbit/s rates. Thus it was a relatively high-speed choice of that era, with speeds such as 100 Mbit/s. By 1994, vendors includedCisco Systems,National Semiconductor, Network Peripherals, SysKonnect (acquired byMarvell Technology Group), and3Com.[42]FDDI installations have largely been replaced by Ethernet deployments.[43]
https://en.wikipedia.org/wiki/Local_area_network
Mobile VoIPor simplymVoIPis an extension of mobility to avoice over IPnetwork. Two types of communication are generally supported:cordless telephonesusingDECTorPCSprotocols for short range or campus communications where all base stations are linked into the sameLAN, and wider area communications using3G,4G, or5Gprotocols. There are several methodologies that allow a mobile handset to be integrated into a VoIP network. One implementation turns the mobile device into a standardSIPclient, which then uses a data network to send and receive SIP messaging, and to send and receive RTP for the voice path. This methodology of turning a mobile handset into a standard SIP client requires that the mobile handset support, at minimum, high speed IP communications. In this application, standard VoIP protocols (typically SIP) are used over any broadband IP-capable wireless network connection such asEVDOrev A (which is symmetrical high speed — both high speed up and down),HSPA,Wi-FiorWiMAX. Another implementation of mobile integration uses a soft-switch like gateway to bridge SIP and RTP into the mobile network'sSS7infrastructure. In this implementation, the mobile handset continues to operate as it always has (as a GSM or CDMA based device), but now it can be controlled by a SIP application server which can now provide advanced SIP-based services to it. Several vendors offer this kind of capability today. Mobile VoIP will require a compromise between economy and mobility. For example, voice over Wi-Fi offers potentially free service but is only available within the coverage area of a single Wi-Fi access point. Cordless protocols offer excellent voice support and even support base station handoff, but require all base stations to communicate on one LAN as the handoff protocol is generally not supported by carriers or most devices. High speed services from mobile operators using EVDO rev A or HSPA may have better audio quality and capabilities for metropolitan-wide coverage including fast handoffs among mobile base stations, yet may cost more than Wi-Fi-based VoIP services. As device manufacturers exploited more powerful processors and less costly memory,smartphonesbecame capable of sending and receiving email, browsing the web (albeit at low rates) and allowing a user to watch TV. Mobile VoIP users were predicted to exceed 100 million by 2012 and InStat projects 288 million subscribers by 2013.[1][2] The mobile operator industry business model conflicts with the expectations of Internet users that access is free and fast without extra charges for visiting specific sites, however far away they may be hosted. Because of this, most innovations in mobile VoIP will likely come from campus and corporate networks,open sourceprojects likeAsterisk, and applications where the benefits are high enough to justify expensive experiments (medical, military, etc.). Mobile VoIP, like all VoIP, relies onSIP— the standard used by most VoIP services, and now being implemented on mobile handsets andsmartphonesand an increasing number ofcordlessphones. UMA— the Unlicensed Mobile AccessGeneric Access Networkallows VoIP to run over theGSMcellular backbone. 
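For the first methodology described above, in which the handset acts as a standard SIP client, the sketch below constructs the kind of SIP REGISTER request such a client sends so the registrar can route incoming calls to it. The account, domain, addresses, and tag values are placeholders (documentation addresses), and a real client would also handle authentication challenges and re-registration timers.

```python
# Minimal sketch of the SIP REGISTER request a mobile VoIP client sends so the
# registrar knows where to route incoming calls. The domain, user, addresses and
# tags below are placeholder examples, not real accounts.
def build_register(user: str, domain: str, client_ip: str, call_id: str) -> str:
    lines = [
        f"REGISTER sip:{domain} SIP/2.0",
        f"Via: SIP/2.0/UDP {client_ip}:5060;branch=z9hG4bK-0001",
        "Max-Forwards: 70",
        f"To: <sip:{user}@{domain}>",
        f"From: <sip:{user}@{domain}>;tag=12345",
        f"Call-ID: {call_id}",
        "CSeq: 1 REGISTER",
        f"Contact: <sip:{user}@{client_ip}:5060>",
        "Expires: 3600",
        "Content-Length: 0",
    ]
    return "\r\n".join(lines) + "\r\n\r\n"

print(build_register("alice", "example.com", "192.0.2.10", "a84b4c76e66710"))
```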
When moving between IP-based networks, as is typically the case for outdoor applications, two other protocols are required: For indoor or campus (cordless phoneequivalent) use, theIEEE P1905protocol establishes QoS guarantees forhome area networks:Wi-Fi,Bluetooth,3G,4G,5Gand wired backbones using ACpowerline networking/HomePlug/IEEE P1901,EthernetandPower over Ethernet/IEEE 802.3af/IEEE 802.3at,MoCAandG.hn. In combination withIEEE 802.21, P1905 permits a call to be initiated on a wired phone and transferred to a wireless one and then resumed on a wired one, perhaps with additional capabilities such asvideoconferencingin another room. In this case the use ofmobile VoIPenables a continuous conversation that originates, and ends with, a wired terminal device. An older technology,PCSbase station handoff, specifies equivalent capabilities forcordless phonesbased on 800, 900, 2.4, 5.8 andDECT. While these capabilities were not widely implemented, they did provide thefunctional specificationfor handoff for modern IP-based telephony. A phone can in theory offer both PCS cordless and mobile VoIP and permit calls to be handed off from traditional cordless to cell and back to cordless if both the PCS and UMA/SIP/IEEE standards suites are implemented. Some specialized long distance cordless vendors likeSenaoattempted this but it has not generally caught on. A more popular approach has been full-spectrum handsets that can communicate with any wireless network including mobile VoIP, DECT andsatellite phonenetworks, but which have limited handoff capabilities between networks. The intent ofIEEE 802.21andIEEE 802.11uis that they be added to such phones runningiPhone,QNX,Androidor othersmartphoneoperating systems, yielding a phone that is capable of communicating with literally any digital network and maintaining a continuous call at high reliability at a low access cost. Most VoIP vendors implement proprietary technologies that permit such handoff between equipment of their own manufacture, e.g. theVierasystem fromPanasonic. Typically providing mobility costs more, e.g., the Panasonic VoIP cordless phone system (KX-TGP) costs approximately three times more than its popular DECT PSTN equivalent (KX-TGA). Some companies, includingCisco, offer adapters for analog/DECT phones as alternatives to their expensive cordless. Early experiments proved that VoIP was practical and could be routed byAsteriskeven on low-end routers like theLinksys WRT54G series. Suggesting amesh network(e.g.WDS) composed of such cheap devices could similarly support roaming mobile VoIP phones. These experiments, and others for IP roaming such asSputnik, were the beginning of the5Gprotocol suite includingIEEE 802.21andIEEE 802.11u. At this time, some mobile operators attempted to restrictIP tetheringandVoIPuse on their networks, often by deliberately introducing highlatencyinto data communications making it useless for voice traffic. In the summer of 2006, a SIP (Session Initiation Protocol) stack was introduced and a VoIP client in Nokia E-series dual-mode Wi-Fi handsets (Nokia E60,Nokia E61,Nokia E70). The SIP stack and client have since been introduced in many more E and N-series dual-mode Wi-Fi handsets, most notably theNokia N95which has been very popular in Europe. Various services use these handsets. In spring 2008 Nokia introduced a built in SIP VoIP client for the very first time to the mass market device (Nokia 6300i) running Series 40 operating system. 
Later that year the Nokia 6260 Slide was introduced, with a slightly updated SIP VoIP client. Nokia maintains a list of all phones that have an integrated VoIP client in Forum Nokia.[3] Aircell's battle with some companies allowing VoIP calls on flights is another example of the growing conflict of interest between incumbent operators and new VoIP operators.[4] By January 2009 OpenWRT[1] was capable of supporting mobile VoIP applications via Asterisk running on a USB stick. As OpenWRT runs on most Wi-Fi routers, this radically expanded the potential reach of mobile VoIP applications. Users reported acceptable results using G.729 codecs and connections to a "main NAT/Firewall router with a NAT=yes and canreinvite=no.. As such, my asterisk will stay in the audio path and can't redirect the RTP media stream (audio) to go directly from the caller to the callee." Minor problems were also reported: "Whenever there is an I/O activities ... i.e. reading the Flash space (mtdblockd process), this will create some hick-ups (or temporarily losing audio signals)." The combination of OpenWRT and Asterisk is intended as an open source replacement for proprietary PBXes. The company xG Technology, Inc. had a mobile VoIP and data system operating in the license-free ISM 900 MHz band (902 MHz – 928 MHz). xMax is an end-to-end Internet Protocol (IP) system infrastructure that is currently deployed in Fort Lauderdale, Florida.[5] In January 2010 Apple Inc. updated the iPhone developer SDK to allow VoIP over cellular networks. iCall[2] became the first App Store app to enable VoIP on the iPhone and iPod Touch over cellular 3G networks. In the second half of 2010 Nokia introduced three new dual-mode Wi-Fi capable Series 40 handsets (Nokia X3-02, Nokia C3-01 and Nokia C3-01 Gold Edition) with integrated SIP VoIP that supports HD voice (AMR-WB). The mainstreaming of VoIP in the small business market led to the introduction of more devices extending VoIP to business cordless users. Panasonic introduced the KX-TGP base station supporting up to 6 cordless handsets,[3] essentially a VoIP complement to its popular KX-TGA analogue phones which likewise support up to 4 cordless handsets. However, unlike the analogue system, which supports only four handsets in one "conference" on one line, the TGP supports 3 simultaneous network conversations and up to 8 SIP registrations (e.g. up to 8 DID lines or extensions), as well as an Ethernet pass-through port to hook up computers on the same drop. In its publicity Panasonic specifically mentions Digium (founded by the creator of Asterisk), its product Switchvox and Asterisk itself. Several router manufacturers including TRENDnet and Netgear released sub-$300 Power over Ethernet switches aimed at the VoIP market. Unlike industry standard switches that provided the full 30 watts of power per port, these allowed under 50 watts of power to all four PoE ports combined. This made them entirely suitable for VoIP and other low-power uses (Motorola Canopy, security cameras or Wi-Fi APs) typical of a SOHO application, or for supporting an 8-line PBX, especially in combination with a multi-line handset such as the Panasonic KX-TGP (which does not require a powered port).
Accordingly, by the end of 2011, for under US$3000 it was possible to build an office VoIP system based entirely on cordless technology capable of several hundred metres reach and on Power over Ethernet dedicated wired phones, with up to 8 DID lines and 3 simultaneous conversations per base station, with 24 handsets each capable of communicating on any subset of the 8 lines, plus an unlimited number of softphones running on computers and laptops and smartphones. This compared favourably to proprietaryPBXtechnology especially as VoIP cordless was far cheaper than PBX cordless. Cisco also released the SPA112, an Analogue Telephone Adapter (ATA) to connect one or two standard RJ-11 telephones to an Ethernet, in November 2011, retailing for under US$50. This was a competitive response to major cordless vendors such as Panasonic moving into the business VoIP cordless market Cisco had long dominated, as it suppressed the market for the cordless makers' native VoIP phones and permitted Cisco to argue the business case to spend more on switches and less on terminal devices. However, this solution would not permit the analogue phones to access every line of a multi-linePBX, only one hardwired line per phone. As of late 2011, most cellular data networks were still extremely high latency and effectively useless for VoIP. IP-only providers such as Voipstream had begun to serve urban areas, and alternative approaches such asOpenBTS(open source GSM) were competing with mobile VoIP. In November 2011, Nokia introducedNokia Asha 303with integrated SIP VoIP client that can operate both over Wi-Fi and 3G networks. In February 2012, Nokia introducedNokia Asha 302and in JuneNokia Asha 311both with integrated SIP VoIP client that can operate both over Wi-Fi and 3G networks. By September 2014, mobile-enabled VoIP (VoLTE) had been launched byT-Mobile USacross its national network and byAT&T Mobilityin a few markets.[6]Verizonplans to launch its VoLTE service "in the coming weeks," according to media reports in August, 2014.[7]It providesHD Voice, which increases mobile voice quality, and permits optional use of video calling and front and rear-facing cameras. In the future, Verizon's VoLTE is expected to also permit video sharing, chat functionality, and file transfers.
https://en.wikipedia.org/wiki/Mobile_VoIP
Code-division multiple access (CDMA) is a channel access method used by various radio communication technologies. CDMA is an example of multiple access, where several transmitters can send information simultaneously over a single communication channel. This allows several users to share a band of frequencies (see bandwidth). To permit this without undue interference between the users, CDMA employs spread spectrum technology and a special coding scheme (where each transmitter is assigned a code).[1][2] CDMA optimizes the use of available bandwidth as it transmits over the entire frequency range and does not limit the user's frequency range. It is used as the access method in many mobile phone standards. IS-95, also called "cdmaOne", and its 3G evolution CDMA2000 are often simply referred to as "CDMA", but UMTS, the 3G standard used by GSM carriers, also uses "wideband CDMA" (W-CDMA), as well as TD-CDMA and TD-SCDMA, as its radio technologies. Many carriers (such as AT&T, US Cellular and Verizon) shut down 3G CDMA-based networks in 2022 and 2024, rendering handsets supporting only those protocols unusable for calls, even to 911.[3][4] It can also be used as a channel or medium access technology, like ALOHA, or as a permanent pilot/signalling channel to allow users to synchronize their local oscillators to a common system frequency, thereby also estimating the channel parameters continuously. In these schemes, the message is modulated onto a longer spreading sequence consisting of several chips (0s and 1s). Due to their very advantageous auto- and cross-correlation characteristics, such spreading sequences have also been used for radar applications for many decades, where they are called Barker codes (with a very short sequence length of typically 8 to 32). For space-based communication applications, CDMA has been used for many decades due to the large path loss and Doppler shift caused by satellite motion. CDMA is often used with binary phase-shift keying (BPSK) in its simplest form, but can be combined with any modulation scheme, such as (in advanced cases) quadrature amplitude modulation (QAM) or orthogonal frequency-division multiplexing (OFDM), which typically makes it very robust and efficient (and equips it with accurate ranging capabilities, which is difficult without CDMA). Other schemes use subcarriers based on binary offset carrier modulation (BOC modulation), which is inspired by Manchester codes and enables a larger gap between the virtual center frequency and the subcarriers, which is not the case for OFDM subcarriers. The technology of code-division multiple access channels has long been known.
In the US, one of the earliest descriptions of CDMA can be found in the summary report of Project Hartwell on "The Security of Overseas Transport", which was a summer research project carried out at the Massachusetts Institute of Technology from June to August 1950.[5] Further research in the context of jamming and anti-jamming was carried out in 1952 at Lincoln Lab.[6] In the Soviet Union (USSR), the first work devoted to this subject was published in 1935 by Dmitry Ageev.[7] It was shown that through the use of linear methods, there are three types of signal separation: frequency, time and compensatory.[clarification needed] The technology of CDMA was used in 1957, when the young military radio engineer Leonid Kupriyanovich in Moscow made an experimental model of a wearable automatic mobile phone, which he called LK-1, with a base station.[8] The LK-1 weighed 3 kg, had a 20–30 km operating distance, and 20–30 hours of battery life.[9][10] The base station, as described by the author, could serve several customers. In 1958, Kupriyanovich made a new experimental "pocket" model of the mobile phone, weighing 0.5 kg. To serve more customers, Kupriyanovich proposed a device which he called a "correlator".[11][12] In 1958, the USSR also started the development of the "Altai" national civil mobile phone service for cars, based on the Soviet MRT-1327 standard. The phone system weighed 11 kg (24 lb). It was placed in the trunk of the vehicles of high-ranking officials and used a standard handset in the passenger compartment. The main developers of the Altai system were VNIIS (Voronezh Science Research Institute of Communications) and GSPI (State Specialized Project Institute). In 1963 this service started in Moscow, and by 1970 the Altai service was used in 30 USSR cities.[13] CDMA is a spread-spectrum multiple-access technique. A spread-spectrum technique spreads the bandwidth of the data uniformly for the same transmitted power. A spreading code is a pseudo-random code in the time domain that has a narrow ambiguity function in the frequency domain, unlike other narrow pulse codes. In CDMA a locally generated code runs at a much higher rate than the data to be transmitted. Data for transmission is combined by bitwise XOR (exclusive OR) with the faster code. The data signal with pulse duration T_b (the symbol period) is XORed with the code signal with pulse duration T_c (the chip period). (Note: bandwidth is proportional to 1/T, where T is the bit time.) Therefore, the bandwidth of the data signal is 1/T_b and the bandwidth of the spread-spectrum signal is 1/T_c. Since T_c is much smaller than T_b, the bandwidth of the spread-spectrum signal is much larger than the bandwidth of the original signal. The ratio T_b/T_c is called the spreading factor or processing gain and determines, to a certain extent, the upper limit of the total number of users supported simultaneously by a base station.[1][2] Each user in a CDMA system uses a different code to modulate their signal. Choosing the codes used to modulate the signal is very important in the performance of CDMA systems. The best performance occurs when there is good separation between the signal of a desired user and the signals of other users.
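The spreading operation described above can be demonstrated in a few lines of Python. This is a minimal sketch with an arbitrary 8-chip code: each data bit is XORed with the faster chip sequence, multiplying the occupied bandwidth by the spreading factor T_b/T_c, and the receiver recovers the bits by reapplying the same code.

```python
# Sketch of direct-sequence spreading as described above: each data bit is XORed
# with a faster chip sequence, widening the bandwidth by the ratio T_b/T_c (the
# processing gain). The 8-chip code used here is an arbitrary illustrative value.
def spread(bits, chip_code):
    """XOR every data bit with each chip of the code (one code repetition per bit)."""
    return [b ^ c for b in bits for c in chip_code]

def despread(chips, chip_code):
    """Recover each bit by majority vote after XORing the chips with the code."""
    n = len(chip_code)
    out = []
    for i in range(0, len(chips), n):
        votes = [chips[i + j] ^ chip_code[j] for j in range(n)]
        out.append(1 if sum(votes) > n // 2 else 0)
    return out

chip_code = [1, 0, 1, 1, 0, 0, 1, 0]      # T_b / T_c = 8 chips per bit
data = [1, 0, 1, 1]
tx = spread(data, chip_code)               # 32 chips on air instead of 4 bits
assert despread(tx, chip_code) == data
print("processing gain:", len(chip_code))  # -> 8
```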
The separation of the signals is made by correlating the received signal with the locally generated code of the desired user. If the signal matches the desired user's code, then the correlation function will be high and the system can extract that signal. If the desired user's code has nothing in common with the signal, the correlation should be as close to zero as possible (thus eliminating the signal); this is referred to as cross-correlation. If the code is correlated with the signal at any time offset other than zero, the correlation should be as close to zero as possible. This is referred to as auto-correlation and is used to reject multi-path interference.[18][19] An analogy to the problem of multiple access is a room (channel) in which people wish to talk to each other simultaneously. To avoid confusion, people could take turns speaking (time division), speak at different pitches (frequency division), or speak in different languages (code division). CDMA is analogous to the last example, where people speaking the same language can understand each other, but other languages are perceived as noise and rejected. Similarly, in radio CDMA, each group of users is given a shared code. Many codes occupy the same channel, but only users associated with a particular code can communicate. In general, CDMA belongs to two basic categories: synchronous (orthogonal codes) and asynchronous (pseudorandom codes). The digital modulation method is analogous to those used in simple radio transceivers. In the analog case, a low-frequency data signal is time-multiplied with a high-frequency pure sine-wave carrier and transmitted. This is effectively a frequency convolution (Wiener–Khinchin theorem) of the two signals, resulting in a carrier with narrow sidebands. In the digital case, the sinusoidal carrier is replaced by Walsh functions. These are binary square waves that form a complete orthonormal set. The data signal is also binary, and the time multiplication is achieved with a simple XOR function. This is usually a Gilbert cell mixer in the circuitry. Synchronous CDMA exploits mathematical properties of orthogonality between vectors representing the data strings. For example, the binary string 1011 is represented by the vector (1, 0, 1, 1). Vectors can be multiplied by taking their dot product, by summing the products of their respective components (for example, if u = (a, b) and v = (c, d), then their dot product is u·v = ac + bd). If the dot product is zero, the two vectors are said to be orthogonal to each other. This property aids understanding of how W-CDMA works: if vectors a and b are orthogonal, then a·b = 0, so a signal built from one code leaves no trace when correlated against an orthogonal code. Each user in synchronous CDMA uses a code orthogonal to the others' codes to modulate their signal. An example of four mutually orthogonal digital signals is given by the rows of a 4×4 Walsh matrix. Orthogonal codes have a cross-correlation equal to zero; in other words, they do not interfere with each other. In the case of IS-95, 64-bit Walsh codes are used to encode the signal to separate different users. Since each of the 64 Walsh codes is orthogonal to all the others, the signals are channelized into 64 orthogonal signals. The following example demonstrates how each user's signal can be encoded and decoded. Start with a set of vectors that are mutually orthogonal. (Although mutual orthogonality is the only condition, these vectors are usually constructed for ease of decoding, for example columns or rows from Walsh matrices.)
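The mutually orthogonal codes referred to above can be generated with the Sylvester construction of Walsh–Hadamard matrices. The short sketch below builds a length-4 set (IS-95 uses length 64) and verifies that distinct codes have zero cross-correlation.

```python
# Sketch of how mutually orthogonal Walsh codes can be generated by the Sylvester
# Hadamard construction and checked for zero cross-correlation, as used (with
# length 64) to channelize users in IS-95. Shown here with length 4 for brevity.
def walsh_matrix(n: int):
    """Return the n x n Hadamard/Walsh matrix with +1/-1 entries (n a power of 2)."""
    h = [[1]]
    while len(h) < n:
        h = [row + row for row in h] + [row + [-x for x in row] for row in h]
    return h

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

codes = walsh_matrix(4)          # 4 mutually orthogonal codes of length 4
for i, ci in enumerate(codes):
    for j, cj in enumerate(codes):
        # Cross-correlation of distinct codes is 0; auto-correlation equals the length.
        assert dot(ci, cj) == (len(ci) if i == j else 0)
print(codes)  # [[1, 1, 1, 1], [1, -1, 1, -1], [1, 1, -1, -1], [1, -1, -1, 1]]
```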
These code vectors are assigned to individual users and are called the code, chip code, or chipping code. In the interest of brevity, the rest of this example uses codes v with only two bits. Each user is associated with a different code, say v. A 1 bit is represented by transmitting a positive code v, and a 0 bit is represented by a negative code −v. For example, if v = (v0, v1) = (1, −1) and the data that the user wishes to transmit is (1, 0, 1, 1), then the transmitted symbols would be (v, −v, v, v) = (1, −1, −1, 1, 1, −1, 1, −1). For the purposes of this article, we call this constructed vector the transmitted vector. Each sender has a different, unique vector v chosen from that set, but the construction method of the transmitted vector is identical. Now, due to physical properties of interference, if two signals at a point are in phase, they add to give twice the amplitude of each signal, but if they are out of phase, they subtract and give a signal that is the difference of the amplitudes. Digitally, this behaviour can be modelled by the addition of the transmission vectors, component by component. If sender0 has code (1, −1) and data (1, 0, 1, 1), and sender1 has code (1, 1) and data (0, 0, 1, 1), then sender0 transmits signal0 = (1, −1, −1, 1, 1, −1, 1, −1) and sender1 transmits signal1 = (−1, −1, −1, −1, 1, 1, 1, 1). Because signal0 and signal1 are transmitted at the same time into the air, they add to produce the raw signal (0, −2, −2, 0, 2, 0, 2, 0). This raw signal is called an interference pattern. The receiver then extracts an intelligible signal for any known sender by correlating the sender's code with the interference pattern: taking the dot product of each two-chip segment with sender0's code (1, −1) yields (2, −2, 2, 2), while sender1's code (1, 1) yields (−2, −2, 2, 2), showing that the signals do not interfere with one another. Further, after decoding, all values greater than 0 are interpreted as 1, while all values less than zero are interpreted as 0. For example, after decoding, data0 is (2, −2, 2, 2), but the receiver interprets this as (1, 0, 1, 1). Values of exactly 0 mean that the sender did not transmit any data, as in the following example: assume signal0 = (1, −1, −1, 1, 1, −1, 1, −1) is transmitted alone. When the receiver attempts to decode this signal using sender1's code (1, 1), each two-chip segment correlates to 0; therefore the cross-correlation is equal to zero and it is clear that sender1 did not transmit any data. When mobile-to-base links cannot be precisely coordinated, particularly due to the mobility of the handsets, a different approach is required. Since it is not mathematically possible to create signature sequences that are both orthogonal for arbitrarily random starting points and which make full use of the code space, unique "pseudo-random" or "pseudo-noise" sequences called spreading sequences are used in asynchronous CDMA systems. A spreading sequence is a binary sequence that appears random but can be reproduced in a deterministic manner by intended receivers. These spreading sequences are used to encode and decode a user's signal in asynchronous CDMA in the same manner as the orthogonal codes in synchronous CDMA (shown in the example above). These spreading sequences are statistically uncorrelated, and the sum of a large number of spreading sequences results in multiple access interference (MAI) that is approximated by a Gaussian noise process (following the central limit theorem in statistics). Gold codes are an example of a spreading sequence suitable for this purpose, as there is low correlation between the codes.
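The worked example can be reproduced directly in code. The sketch below encodes the two senders' data with the codes (1, −1) and (1, 1), sums the transmissions into the interference pattern, and recovers each data stream by correlation, matching the numbers quoted above.

```python
# The worked example above, reproduced in code: two senders share the channel
# using the orthogonal codes (1, -1) and (1, 1); the receiver recovers each data
# stream by correlating the summed "interference pattern" with the right code.
def encode(bits, code):
    # 1 -> +code, 0 -> -code, concatenated chip by chip
    return [c if b == 1 else -c for b in bits for c in code]

def decode(raw, code):
    n = len(code)
    # Correlate each n-chip segment with the code; >0 means 1, <0 means 0, ==0 means silence.
    return [sum(raw[i + j] * code[j] for j in range(n)) for i in range(0, len(raw), n)]

signal0 = encode([1, 0, 1, 1], [1, -1])          # (1, -1, -1, 1, 1, -1, 1, -1)
signal1 = encode([0, 0, 1, 1], [1, 1])           # (-1, -1, -1, -1, 1, 1, 1, 1)
raw = [a + b for a, b in zip(signal0, signal1)]  # interference pattern (0, -2, -2, 0, 2, 0, 2, 0)

print(decode(raw, [1, -1]))     # [2, -2, 2, 2]  -> interpreted as (1, 0, 1, 1)
print(decode(raw, [1, 1]))      # [-2, -2, 2, 2] -> interpreted as (0, 0, 1, 1)
print(decode(signal0, [1, 1]))  # [0, 0, 0, 0]   -> sender1 sent nothing
```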
If all of the users are received with the same power level, then the variance (e.g., the noise power) of the MAI increases in direct proportion to the number of users. In other words, unlike synchronous CDMA, the signals of other users will appear as noise to the signal of interest and interfere slightly with the desired signal in proportion to number of users. All forms of CDMA use thespread-spectrumspreading factorto allow receivers to partially discriminate against unwanted signals. Signals encoded with the specified spreading sequences are received, while signals with different sequences (or the same sequences but different timing offsets) appear as wideband noise reduced by the spreading factor. Since each user generates MAI, controlling the signal strength is an important issue with CDMA transmitters. A CDM (synchronous CDMA), TDMA, or FDMA receiver can in theory completely reject arbitrarily strong signals using different codes, time slots or frequency channels due to the orthogonality of these systems. This is not true for asynchronous CDMA; rejection of unwanted signals is only partial. If any or all of the unwanted signals are much stronger than the desired signal, they will overwhelm it. This leads to a general requirement in any asynchronous CDMA system to approximately match the various signal power levels as seen at the receiver. In CDMA cellular, the base station uses a fast closed-loop power-control scheme to tightly control each mobile's transmit power. In 2019, schemes to precisely estimate the required length of the codes in dependence of Doppler and delay characteristics have been developed.[20]Soon after, machine learning based techniques that generate sequences of a desired length and spreading properties have been published as well. These are highly competitive with the classic Gold and Welch sequences. These are not generated by linear-feedback-shift-registers, but have to be stored in lookup tables. In theory CDMA, TDMA and FDMA have exactly the same spectral efficiency, but, in practice, each has its own challenges – power control in the case of CDMA, timing in the case of TDMA, and frequency generation/filtering in the case of FDMA. TDMA systems must carefully synchronize the transmission times of all the users to ensure that they are received in the correct time slot and do not cause interference. Since this cannot be perfectly controlled in a mobile environment, each time slot must have a guard time, which reduces the probability that users will interfere, but decreases the spectral efficiency. Similarly, FDMA systems must use a guard band between adjacent channels, due to the unpredictableDoppler shiftof the signal spectrum because of user mobility. The guard bands will reduce the probability that adjacent channels will interfere, but decrease the utilization of the spectrum. Asynchronous CDMA offers a key advantage in the flexible allocation of resources i.e. allocation of spreading sequences to active users. In the case of CDM (synchronous CDMA), TDMA, and FDMA the number of simultaneous orthogonal codes, time slots, and frequency slots respectively are fixed, hence the capacity in terms of the number of simultaneous users is limited. There are a fixed number of orthogonal codes, time slots or frequency bands that can be allocated for CDM, TDMA, and FDMA systems, which remain underutilized due to the bursty nature of telephony and packetized data transmissions. 
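The fast closed-loop power control mentioned above can be illustrated with a toy simulation: the base station compares each mobile's received signal-to-interference ratio with a target and commands a one-step increase or decrease in transmit power every cycle. The 1 dB step and 6 dB target used here are illustrative values rather than parameters of any particular standard.

```python
# Sketch of fast closed-loop power control: the base station compares each
# mobile's received signal-to-interference ratio against a target and sends
# 1-bit "up"/"down" commands. The 1 dB step and 6 dB target are illustrative.
def power_control_step(received_sir_db: float, target_sir_db: float = 6.0,
                       step_db: float = 1.0) -> float:
    """Return the transmit-power adjustment (in dB) commanded for one control cycle."""
    return -step_db if received_sir_db > target_sir_db else +step_db

# Simulate a mobile whose path loss suddenly worsens by 10 dB.
tx_power_dbm, path_loss_db, interference_dbm = 0.0, 100.0, -110.0
for cycle in range(20):
    if cycle == 5:
        path_loss_db += 10.0                      # e.g. the user walks indoors
    sir_db = (tx_power_dbm - path_loss_db) - interference_dbm
    tx_power_dbm += power_control_step(sir_db)
print(round(tx_power_dbm, 1))  # -> 6.0: power has risen to oscillate around the 6 dB target
```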
There is no strict limit to the number of users that can be supported in an asynchronous CDMA system, only a practical limit governed by the desired bit error probability since the SIR (signal-to-interference ratio) varies inversely with the number of users. In a bursty traffic environment like mobile telephony, the advantage afforded by asynchronous CDMA is that the performance (bit error rate) is allowed to fluctuate randomly, with an average value determined by the number of users times the percentage of utilization. Suppose there are 2Nusers that only talk half of the time, then 2Nusers can be accommodated with the sameaveragebit error probability asNusers that talk all of the time. The key difference here is that the bit error probability forNusers talking all of the time is constant, whereas it is arandomquantity (with the same mean) for 2Nusers talking half of the time. In other words, asynchronous CDMA is ideally suited to a mobile network where large numbers of transmitters each generate a relatively small amount of traffic at irregular intervals. CDM (synchronous CDMA), TDMA, and FDMA systems cannot recover the underutilized resources inherent to bursty traffic due to the fixed number oforthogonalcodes, time slots or frequency channels that can be assigned to individual transmitters. For instance, if there areNtime slots in a TDMA system and 2Nusers that talk half of the time, then half of the time there will be more thanNusers needing to use more thanNtime slots. Furthermore, it would require significant overhead to continually allocate and deallocate the orthogonal-code, time-slot or frequency-channel resources. By comparison, asynchronous CDMA transmitters simply send when they have something to say and go off the air when they do not, keeping the same signature sequence as long as they are connected to the system. Most modulation schemes try to minimize the bandwidth of this signal since bandwidth is a limited resource. However, spread-spectrum techniques use a transmission bandwidth that is several orders of magnitude greater than the minimum required signal bandwidth. One of the initial reasons for doing this was military applications including guidance and communication systems. These systems were designed using spread spectrum because of its security and resistance to jamming. Asynchronous CDMA has some level of privacy built in because the signal is spread using a pseudo-random code; this code makes the spread-spectrum signals appear random or have noise-like properties. A receiver cannot demodulate this transmission without knowledge of the pseudo-random sequence used to encode the data. CDMA is also resistant to jamming. A jamming signal only has a finite amount of power available to jam the signal. The jammer can either spread its energy over the entire bandwidth of the signal or jam only part of the entire signal.[18][19] CDMA can also effectively reject narrow-band interference. Since narrow-band interference affects only a small portion of the spread-spectrum signal, it can easily be removed through notch filtering without much loss of information.Convolution encodingandinterleavingcan be used to assist in recovering this lost data. CDMA signals are also resistant to multipath fading. Since the spread-spectrum signal occupies a large bandwidth, only a small portion of this will undergo fading due to multipath at any given time. Like the narrow-band interference, this will result in only a small loss of data and can be overcome. 
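A small Monte Carlo experiment illustrates the statistical-multiplexing argument above: 2N users with a 50% duty cycle load the channel with the same average number of interferers as N always-on users, but the instantaneous load, and hence the bit error rate, fluctuates. This is a toy counting model, not a link-level simulation.

```python
# Toy Monte Carlo for the statistical-multiplexing argument above: 2N users who
# each transmit only half the time load the channel with the same *average*
# interference as N always-on users, but the instantaneous load fluctuates.
import random

random.seed(1)
N, trials = 16, 10_000

bursty = [sum(random.random() < 0.5 for _ in range(2 * N))  # 2N users, 50% duty cycle
          for _ in range(trials)]

mean = sum(bursty) / trials
var = sum((x - mean) ** 2 for x in bursty) / trials
print(f"always-on interferers: {N} (no fluctuation)")
print(f"bursty interferers: mean={mean:.2f}, std={var ** 0.5:.2f}")  # mean ~= N, std ~= sqrt(2N)/2
```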
Another reason CDMA is resistant to multipath interference is because the delayed versions of the transmitted pseudo-random codes will have poor correlation with the original pseudo-random code, and will thus appear as another user, which is ignored at the receiver. In other words, as long as the multipath channel induces at least one chip of delay, the multipath signals will arrive at the receiver such that they are shifted in time by at least one chip from the intended signal. The correlation properties of the pseudo-random codes are such that this slight delay causes the multipath to appear uncorrelated with the intended signal, and it is thus ignored. Some CDMA devices use arake receiver, which exploits multipath delay components to improve the performance of the system. A rake receiver combines the information from several correlators, each one tuned to a different path delay, producing a stronger version of the signal than a simple receiver with a single correlation tuned to the path delay of the strongest signal.[1][2] Frequency reuse is the ability to reuse the same radio channel frequency at other cell sites within a cellular system. In the FDMA and TDMA systems, frequency planning is an important consideration. The frequencies used in different cells must be planned carefully to ensure signals from different cells do not interfere with each other. In a CDMA system, the same frequency can be used in every cell, because channelization is done using the pseudo-random codes. Reusing the same frequency in every cell eliminates the need for frequency planning in a CDMA system; however, planning of the different pseudo-random sequences must be done to ensure that the received signal from one cell does not correlate with the signal from a nearby cell.[1] Since adjacent cells use the same frequencies, CDMA systems have the ability to perform soft hand-offs. Soft hand-offs allow the mobile telephone to communicate simultaneously with two or more cells. The best signal quality is selected until the hand-off is complete. This is different from hard hand-offs utilized in other cellular systems. In a hard-hand-off situation, as the mobile telephone approaches a hand-off, signal strength may vary abruptly. In contrast, CDMA systems use the soft hand-off, which is undetectable and provides a more reliable and higher-quality signal.[2] A novel collaborative multi-user transmission and detection scheme called collaborative CDMA[21]has been investigated for the uplink that exploits the differences between users' fading channel signatures to increase the user capacity well beyond the spreading length in the MAI-limited environment. The authors show that it is possible to achieve this increase at a low complexity and highbit error rateperformance in flat fading channels, which is a major research challenge for overloaded CDMA systems. In this approach, instead of using one sequence per user as in conventional CDMA, the authors group a small number of users to share the same spreading sequence and enable group spreading and despreading operations. The new collaborative multi-user receiver consists of two stages: group multi-user detection (MUD) stage to suppress the MAI between the groups and a low-complexity maximum-likelihood detection stage to recover jointly the co-spread users' data using minimal Euclidean-distance measure and users' channel-gain coefficients. 
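The rake-combining idea can be sketched numerically. In the toy example below, a single symbol arrives over a direct path and a weaker echo delayed by two chips; one correlator ("finger") per known path delay recovers part of the energy, and summing the finger outputs yields a stronger decision variable than either path alone. The chip code and path gains are arbitrary illustrative values.

```python
# Toy sketch of rake combining as described above: each "finger" correlates the
# received chips with the spreading code at a different path delay, and the
# finger outputs are summed so multipath energy adds up instead of being lost.
code = [1, -1, 1, 1, -1, 1, -1, -1]          # illustrative 8-chip spreading code
bit = 1                                       # the transmitted symbol (+1)

# Two propagation paths: a direct path and an echo delayed by 2 chips at half amplitude.
direct = [bit * c for c in code] + [0, 0]
echo = [0, 0] + [0.5 * bit * c for c in code]
received = [a + b for a, b in zip(direct, echo)]

def finger(rx, code, delay):
    """Correlate the received samples with the code shifted by `delay` chips."""
    return sum(rx[delay + i] * code[i] for i in range(len(code)))

outputs = [finger(received, code, d) for d in (0, 2)]   # one finger per known path delay
print(outputs, "combined:", sum(outputs))               # fingers ~ +8 and +4; combined ~ +12
```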
An enhanced CDMA version known as interleave-division multiple access (IDMA) uses orthogonal interleaving as the only means of user separation, in place of the signature sequences used in conventional CDMA systems.
https://en.wikipedia.org/wiki/Code-division_multiple_access
Time-division multiple access(TDMA) is achannel access methodforshared-medium networks. It allows several users to share the samefrequency channelby dividing the signal into different time slots.[1]The users transmit in rapid succession, one after the other, each using its own time slot. This allows multiple stations to share the same transmission medium (e.g. radio frequency channel) while using only a part of itschannel capacity.Dynamic TDMAis a TDMA variant that dynamically reserves a variable number of time slots in each frame to variable bit-rate data streams, based on the traffic demand of each data stream. TDMA is used in digital2Gcellular systemssuch asGlobal System for Mobile Communications(GSM),IS-136,Personal Digital Cellular(PDC) andiDEN, in the MaritimeAutomatic Identification System,[2]and in theDigital Enhanced Cordless Telecommunications(DECT) standard forportable phones. TDMA was first used insatellite communicationsystems byWestern Unionin itsWestar 3communications satellite in 1979. It is now used extensively in satellite communications,[3][4][5][6]combat-net radiosystems, andpassive optical network(PON) networks for upstream traffic from premises to the operator. TDMA is a type oftime-division multiplexing(TDM), with the special point that instead of having onetransmitterconnected to onereceiver, there are multiple transmitters. In the case of theuplinkfrom amobile phoneto abase stationthis becomes particularly difficult because the mobile phone can move around and vary thetiming advancerequired to make its transmission match the gap in transmission from its peers. Most 2G cellular systems, with the notable exception ofIS-95, are based on TDMA.GSM,D-AMPS,PDC,iDEN, andPHSare examples of TDMA cellular systems. In the GSM system, the synchronization of the mobile phones is achieved by sending timing advance commands from the base station which instruct the mobile phone to transmit earlier and by how much. This compensates for the speed-of-lightpropagation delay. The mobile phone is not allowed to transmit for its entire time slot; there is aguard intervalat the end of each time slot. As the transmission moves into the guard period, the mobile network adjusts the timing advance to synchronize the transmission. Initial synchronization of a phone requires even more care. Before a mobile transmits there is no way to know the offset required. For this reason, an entire time slot has to be dedicated to mobiles attempting to contact the network; this is known as therandom-access channel(RACH) in GSM. The mobile transmits at the beginning of the time slot as received from the network. If the mobile is near the base station, the propagation delay is short and the initiation can succeed. If, however, the mobile phone is just less than 35 km from the base station, the delay will mean the mobile's transmission arrives at the end of the time slot. In this case, the mobile will be instructed to transmit its messages starting nearly a whole time slot earlier so that it can be received at the proper time. Finally, if the mobile is beyond the 35 km cell range of GSM, the transmission will arrive in a neighbouring time slot and be ignored. It is this feature, rather than limitations of power, that limits the range of a GSM cell to 35 km when no special extension techniques are used. 
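The 35 km figure quoted above follows directly from the timing-advance arithmetic. The short Python calculation below uses the commonly quoted GSM values (a bit period of about 3.69 µs and a maximum timing advance of 63 bit periods); it is a back-of-the-envelope check rather than a statement of the GSM specification.

```python
# Rough check of the ~35 km GSM cell-range limit described above.
# Constants are the commonly quoted GSM values, used here for illustration.
C = 299_792_458           # speed of light, m/s
BIT_PERIOD = 1 / 270_833  # GSM bit duration, s (~3.69 us)
MAX_TIMING_ADVANCE = 63   # maximum timing-advance value, in bit periods

max_round_trip = MAX_TIMING_ADVANCE * BIT_PERIOD        # seconds
max_range_km = C * max_round_trip / 2 / 1000            # one-way distance

print(f"bit period            : {BIT_PERIOD * 1e6:.2f} us")
print(f"max compensable delay : {max_round_trip * 1e6:.1f} us (round trip)")
print(f"approximate cell range: {max_range_km:.1f} km")   # roughly 35 km
```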
By changing the synchronization between the uplink and downlink at the base station, however, this limitation can be overcome.[citation needed] In the context of 3G systems, the integration of time-division multiple access (TDMA) withcode-division multiple access(CDMA) and time-division duplexing (TDD) in theUniversal Mobile Telecommunications System(UMTS) represents a sophisticated approach to optimizing spectrum efficiency and network performance.[7] UTRA-FDD (frequency division duplex) employs CDMA and FDD, where separatefrequency bandsare allocated for uplink and downlink transmissions. This separation minimizes interference and allows for continuous data transmission in both directions, making it suitable for environments with balanced traffic loads.[8] UTRA-TDD (time division duplex), on the other hand, combines CDMA with TDMA and TDD. In this scheme, the same frequency band is used for both uplink and downlink, but at different times. This time-based separation is particularly advantageous in scenarios with asymmetric traffic loads, where the data rates for uplink and downlink differ significantly. By dynamically allocating time slots based on demand, UTRA-TDD can efficiently manage varying traffic patterns and enhance overall network capacity.[8][9] The combination of these technologies in UMTS allows for more flexible and efficient use of the available spectrum, catering to diverse user demands and improving the adaptability of 3G networks to different operational environments.[8] TheITU-TG.hnstandard, which provides high-speed local area networking over existing home wiring (power lines, phone lines and coaxial cables) is based on a TDMA scheme. InG.hn, a "master" device allocatescontention-free transmission opportunities(CFTXOP) to other "slave" devices in the network. Only one device can use a CFTXOP at a time, thus avoiding collisions.FlexRayprotocol which is also a wired network used forsafety-criticalcommunication in modern cars, uses the TDMA method for data transmission control. In radio systems, TDMA is usually used alongsidefrequency-division multiple access(FDMA) and frequency-division duplex (FDD); the combination is referred to as FDMA/TDMA/FDD. This is the case in both GSM and IS-136 for example. Exceptions to this include theDECTandPersonal Handy-phone System(PHS) micro-cellular systems,UMTS-TDDUMTS variant, and China'sTD-SCDMA, which use time-division duplexing, where different time slots are allocated for the base station and handsets on the same frequency. A major advantage of TDMA is that the radio part of the mobile-only needs to listen and broadcast for its own time slot. For the rest of the time, the mobile can carry out measurements on the network, detecting surrounding transmitters on different frequencies. This allows safe inter-frequencyhandovers, something which is difficult in CDMA systems, not supported at all inIS-95and supported through complex system additions inUniversal Mobile Telecommunications System(UMTS). This in turn allows for co-existence ofmicrocelllayers withmacrocelllayers. CDMA, by comparison, supports "soft hand-off" which allows a mobile phone to be in communication with up to 6 base stations simultaneously, a type of "same-frequency handover". The incoming packets are compared for quality, and the best one is selected. CDMA's "cell breathing" characteristic, where a terminal on the boundary of two congested cells will be unable to receive a clear signal, can often negate this advantage during peak periods. 
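As a sketch of the demand-driven slot allocation that the UTRA-TDD scheme described earlier performs, the toy function below splits the slots of one TDD frame between downlink and uplink in proportion to offered traffic. It is a simplified illustration under invented parameters (frame size, minimum slots per direction), not the algorithm used by any actual standard.

```python
def split_tdd_frame(total_slots, downlink_demand, uplink_demand, min_per_direction=1):
    """Divide a TDD frame's slots between downlink and uplink in proportion
    to offered traffic, keeping at least `min_per_direction` slots each way.
    A simplified illustration, not any standard's actual algorithm."""
    total_demand = downlink_demand + uplink_demand
    if total_demand == 0:
        dl = total_slots // 2
    else:
        dl = round(total_slots * downlink_demand / total_demand)
    dl = max(min_per_direction, min(total_slots - min_per_direction, dl))
    return dl, total_slots - dl

# A 15-slot frame with heavily downlink-biased traffic (e.g. web browsing)
print(split_tdd_frame(15, downlink_demand=900, uplink_demand=100))  # -> (14, 1)
# and with symmetric traffic (e.g. voice)
print(split_tdd_frame(15, downlink_demand=500, uplink_demand=500))  # -> (8, 7)
```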
A disadvantage of TDMA systems is that they createinterferenceat a frequency that is directly connected to the time slot length. This is the buzz that can sometimes be heard if a TDMA phone is left next to a radio or speakers.[10]Another disadvantage is that the "dead time" between time slots limits the potential bandwidth of a TDMA channel. These are implemented in part because of the difficulty in ensuring that different terminals transmit at exactly the times required. Handsets that are moving will need to constantly adjust their timings to ensure their transmission is received at precisely the right time because as they move further from the base station, their signal will take longer to arrive. This also means that the major TDMA systems have hard limits on cell sizes in terms of range, though in practice the power levels required to receive and transmit over distances greater than the supported range would be mostly impractical anyway. TDMA (time-division multiple access) is a communication method that allocates radio frequency (RF) bandwidth into discrete time slots, allowing multiple users to share the channel in a sequential manner. This approach not only improves spectrum efficiency compared to analog systems but also offers several specific advantages that enhance communication quality and system performance.[11] Indynamic time-division multiple access(dynamic TDMA), ascheduling algorithmdynamically reserves a variable number of time slots in each frame to variable bit-rate data streams, based on the traffic demand of each data stream. Dynamic TDMA is used in:
https://en.wikipedia.org/wiki/Time-division_multiple_access
Frequency-division multiple access(FDMA) is achannel access methodused in some multiple-access protocols. FDMA allows multiple users to send data through a singlecommunication channel, such as acoaxial cableormicrowavebeam, by dividing thebandwidthof the channel into separate non-overlappingfrequencysub-channels and allocating each sub-channel to a separate user. Users can send data through a subchannel by modulating it on acarrier waveat the subchannel's frequency. It is used insatellite communicationsystems and telephone trunklines. FDMA splits the total bandwidth into multiple channels. Each ground station on the earth is allocated a particular frequency group (or a range of frequencies). Within each group, the ground station can allocate different frequencies to individual channels, which are used by different stations connected to that ground station. Before the transmission begins, the transmitting ground station looks for an empty channel within the frequency range that is allocated to it and once it finds an empty channel, it allocates it to the particular transmitting station. Alternatives includetime-division multiple access(TDMA),code-division multiple access(CDMA), orspace-division multiple access(SDMA). These protocols are utilized differently, at different levels of the theoreticalOSI model. Disadvantage:Crosstalkmay cause interference among frequencies and disrupt the transmission. FDMA is distinct fromfrequency division duplexing(FDD). While FDMA allows multiple users simultaneous access to a transmission system, FDD refers to how the radio channel is shared between theuplinkanddownlink(for instance, the traffic going back and forth between a mobile-phone and amobile phone base station).Frequency-division multiplexing(FDM) is also distinct from FDMA. FDM is a physical layer technique that combines and transmits low-bandwidth channels through a high-bandwidth channel, like in acar radio. FDMA, on the other hand, is an access method in thedata link layer. FDMA also supportsdemand assignmentin addition to fixed assignment.Demand assignmentallows all users apparently continuous access of theradio spectrumby assigning carrier frequencies on a temporary basis using a statistical assignment process. The first FDMAdemand-assignmentsystem for satellite was developed byCOMSATfor use on theIntelsatseriesIVAandVsatellites. There are two main techniques:
https://en.wikipedia.org/wiki/Frequency-division_multiple_access
Crossband (cross-band, cross band) operation is a method of telecommunication in which a radio station receives signals on one frequency and simultaneously transmits on another for the purpose of full duplex communication or signal relay.[1] To avoid interference within the equipment at the station, the two frequencies used need to be separated, ideally on different bands. An unattended station working in this way is a radio repeater: it re-transmits the same information that it receives. This principle is used by telecommunications satellites and terrestrial mobile radio systems. Crossband operation is sometimes used by amateur radio operators.[2] Rather than taking turns to transmit on the same frequency, both operators can transmit at the same time but on different bands, each one listening to the frequency that the other is using to transmit. A variation on this procedure is to establish contact on one frequency and then change to a pair of other frequencies to exchange messages. Crossband operation is also used in communication between ships (inter-ship) with an HF installation. Frequencies that may be used can be found in the 'Manual for use by the Maritime Mobile and Maritime Mobile-Satellite Services'. Inter-ship communication is usually simplex only (VHF or MF); HF offers the possibility of working duplex, but the transmitter and receiver are usually so close to each other that this may cause problems. The solution is to work on frequencies that are far apart, e.g. sending on 8 MHz and receiving on 12 MHz. This mode is often used by amateur radio satellites, with the uplink on the VHF band and the downlink on the UHF band, as on IO-86, SO-50 and the ARISS repeater. Some satellites, such as PO-101 and AO-91, reverse that order, with a UHF uplink and a VHF downlink. Such operation requires a cross-band antenna arrangement that can transmit and receive on different bands.
https://en.wikipedia.org/wiki/Crossband_operation
Adouble-trackrailway usually involves running one track in each direction, compared to asingle-track railwaywhere trains in both directions share the same track. In the earliest days of railways in the United Kingdom, most lines were built as double-track because of the difficulty of co-ordinating operations before the invention of thetelegraph. The lines also tended to be busy enough to be beyond the capacity of a single track. In the early days theBoard of Tradedid not consider any single-track railway line to be complete. In the earliest days of railways in the United States most lines were built as single-track for reasons of cost, and very inefficient timetable working systems were used to prevent head-on collisions on single lines. This improved with the development of thetelegraphand thetrain ordersystem. In any given country, rail traffic generally runs to one side of a double-track line, not always the same side as road traffic. Thus inBelgium,China,France(apart from the classic lines of the former GermanAlsaceandLorraine),Sweden(apart fromMalmöand further south),Switzerland,ItalyandPortugalfor example, the railways use left-hand running, while the roads use right-hand running. However, there are many exceptions: Handedness of traffic can affect locomotive design. For the driver, visibility is usually good from both sides of the driving cab, so the choice of which side the driver should sit is less important. For example, the FrenchSNCF Class BB 7200is designed to use the left-hand track and therefore uses LHD. When the design was modified for use in the Netherlands asNS Class 1600, the driving cab was not completely redesigned, keeping the driver on the left even though trains use the right-hand track in the Netherlands.[6]Generally, the left/right principle in a country is followed mostly on double track. On steam trains, the steam boiler often obscured some of the view, so the driver was preferably placed nearest to the side of the railway, so that it was easier to see the signals. On single track, when trains meet, the train that does not stop often uses the straight path in the turnout, which can be left or right.[7] Double-track railways, especially older ones, may use each track exclusively in one direction. This arrangement simplifies thesignallingsystems, especially where the signalling is mechanical (e.g.semaphore signals). Where the signals andpoints(UK term) or rail switches (US) are power-operated, it can be worthwhile to provide signals for each line which cater for movement in either direction, so that the double line becomes a pair of single lines. This allows trains to use one track where the other track is out of service due to track maintenance work, or a train failure, or for a fast train to overtake a slow train. Mostcrossing loopsare not regarded as double-track even though they consist of multiple tracks. If the crossing loop is long enough to hold several trains, and to allow opposing trains to cross without slowing down or stopping, then that may be regarded as double-track. A more modern British term for such a layout is an extended loop. The distance between the tracks' centres makes a difference in cost and performance of a double-track line. The track centres can be as closely spaced and as cheap as possible, but maintenance must be done on the side. Signals for bi-directional working cannot be mounted between the tracks, so they must be mounted on the 'wrong' side of the line or on expensivesignal bridges. 
For standard gauge tracks the distance may be 4 metres (13 ft) or less. Track centres are usually further apart on high speed lines, as pressure waves knock each other as high-speed trains pass. Track centres are also usually further apart on sharp curves, and the length and width of trains is contingent on theminimum railway curve radiusof the railway. Increasing the width of track centres of 6 metres (20 ft) or more makes it much easier to mount signals and overhead wiring structures. Very widely spaced centres at major bridges can have military value.[clarification needed]It also makes it harder for rogue ships and barges to knock out both bridges in the same accident. Railway lines in desert areas affected by sand dunes are sometimes built with the two tracks separated, so that if one is covered by sand, the other(s) are still serviceable. If the standard track centre is changed, it can take a very long time for most or all tracks to be brought into line. On British lines, the space between the two running rails of a single railway track is called the "four foot" (owing to it being 'four foot something' in width), while the space between the different tracks is called the "six foot". It is not safe to stand in the gap between the tracks when trains pass by on both lines, as happened in theBere Ferrers accidentof 1917. When one track of a double-track railway is out of service for maintenance or a train breaks down, all trains may be concentrated on the one usable track. There may be bi-directional signalling and suitable crossovers to enable trains to move onto the other track expeditiously (such as theChannel Tunnel), or there may be some kind of manual safeworking to control trains on what is now a section of single track.Seesingle-line working. Accidents can occur if the temporary safeworking system is not implemented properly, as in: From time to time, railways are asked to transport exceptional loads such as massive electrical transformers that are too tall, too wide or too heavy to operate normally. Special measures must be carefully taken to plan successful and safe operation ofout-of-gauge trains. For example, adjacent tracks of a double line might have to be shut down to avoid collisions with trains on those adjacent tracks. These are a form of crossing loop, but are long enough to allow trains approaching each other from opposite directions on single-track lines to cross (or pass) each other without reducing speed. In order for passing lanes to operate safely and effectively, trains must be timetabled so that they arrive at and enter the loop with close time tolerances, otherwise they will need to slow or even be brought to a complete stop to allow the oncoming train to pass. They are suited to lines with light to moderate traffic. An example of where passing lanes have been installed in order to improve travel times and increase line capacity is the 160-kilometre (100-mile) section of theMain Southern railway linein Australia betweenJuneeandAlbury. This was built as a single track line in stages between 1878 and 1881, and was partially duplicated between 2005 and 2010 by the construction of four passing lanes each 6 km (4 mi) long. In this instance, this was accomplished by extending pre-existing crossing loops of either 900 metres (3,000 ft) or 1,500 metres (4,900 ft) in length. The process of expanding a single track to double track is calledduplicationordoubling, unless the expansion is to restore what was previously double track, in which case it is calledredoubling. 
The strongest evidence that a line was built as single-track and duplicated at a later date consists of major structures such as bridges and tunnels that are twinned. One example is the twin Slade tunnels on theIlfracombe Branch Linein the UK. Twinned structures may be identical in appearance, or like some tunnels betweenAdelaideandBelairinSouth Australia, substantially different in appearance, being built to differentstructure gauges. Tunnels are confined spaces and are difficult to duplicate while trains keep on running. Generally they are duplicated by building a second tunnel. An exception is theHoosac Tunnel, which was duplicated by enlarging the bore. To reduce initial costs of a line that is certain to see heavy traffic in the future, a line may be built as single-track but withearthworksand structures designed for ready duplication. An example is theStrathfieldtoHamiltonline inNew South Wales, which was constructed as mainly single-track in the 1880s, with full duplication completed around 1910. All bridges, tunnels, stations, and earthworks were built for double track. Stations with platforms with 11-foot (3.4 m) centres had to be widened later to 12-foot (3.7 m) centres, except forGosford. The formerBaltimore and Ohio Railroad(B&O) line betweenBaltimoreandJersey City, now owned byCSXandConrail Shared Assets Operations, is an example of a duplication line that was reduced to single-track in most locations, but has since undergone re-duplication in many places between Baltimore andPhiladelphiawhen CSX increased freight schedules in the late 1990s. Also: Some lines are built as single-track with provision for duplication, but the duplication is never carried out. Examples are: When the capacity of a double-track railway is in excess of requirements, the two tracks may be reduced to one, in order to reduce maintenance costs and property taxes. In some countries this is calledsingling. Notable examples of this in the United Kingdom occurred on the Oxford–Worcester–Hereford, Princes Risborough–Banbury and Salisbury–Exeter main lines during the 1970s and 1980s. In all these cases, increases in traffic from the late 1990s have led to the partial reinstatement of double track. In New Zealand theMelling Linewassingledto theWestern Hutt Railway StationinLower Huttin 1958 after it became a branch line rather than part of the mainHutt Valley Line.Kirkby railway station(until 1977) andOrmskirk railway station(until 1970) were double-track railway, when they were converted intosingle-track railwaywithcross-platform interchange. In New South Wales, Australia, theMain Western RailwaybetweenWallerawangandTarana, and betweenGreshamandNewbridgewere singled in the 1990s. A new passing loop was opened on part of the closed track atRydalin the Wallerawang–Tarana section during 2019.[13] A double-track tunnel with restricted clearances is sometimes singled to form a single track tunnel with more generous clearances, such as theConnaught Tunnelin Canada or the Tickhole Tunnel inNew South Wales, Australia. In the case of the Tickhole Tunnel a new single-track tunnel was built and the two tracks in the original tunnel were replaced by one track in the centreline of the tunnel. Another case where this was necessary was theHastings Linein the United Kingdom, where the tunnels were eventually singled to permit the passage of standardBritish-gaugerolling stock. Before the singling, narrow-bodied stock, specially constructed for the line, had to be used. 
As part of theRegional Fast Rail projectinVictoria, Australia, the rail line betweenKynetonandBendigowas converted from double- to single-track to provide additional clearance through tunnels and under bridges for trains travelling at up to 160 km/h (99 mph). A similar process can be followed on narrow bridges (like theBoyne Viaduct, a bridge just north ofDrogheda railway stationinIreland). The bridge over theMurray RiverbetweenAlburyandWodongais double-track, but because of insufficient strength in the bridge only one train is allowed on it at a time. The bridge has since been singled as part of theNorth East Line Standardisationwith the old broad gauge track now disconnected but remains in place on the bridge. Railways that become especially busy in wartime and are duplicated, especially in World War I, may revert to single track when peace returns and the extra capacity is no longer required. TheFlanders campaignsaw duplication of theHazebrouck–Ypresline, amongst other works. Severe gradients can make theheadwayin the uphill direction much worse than the headway in the downhill direction. BetweenWhittinghamandMaitland, New South Wales, a third track was opened between Whittingham and Branxton in 2011 and Branxton to Maitland in 2012 to equalize the headway in both directions for heavy coal traffic.[16]Triple track could be a compromise between double-track andquad-track; such a system was proposed south ofStockholm Central Station, but was cancelled in favor ofCitybanan. InMelbourneandBrisbaneseveral double track lines have a third track signalled in both directions, so that two tracks are available in the peak direction during rush hours. Triple track is used in some parts of theNew York City Subwayand on theNorristown High-Speed Lineto add supplemental rush-hour services. The center track, which serves express trains, is signalled in both directions to allow two tracks to be used in the peak direction during rush hours; the outer tracks use bi-directional running and serve local trains exclusively in one direction. During service disruptions on one of the two outer tracks, trains could also bypass the affected sections on the center track. The Union Pacific Railroad mainline through Nebraska has a 108-mile (174 km) stretch of triple track betweenNorth Platteand Gibbon Junction, due to a high traffic density of 150 trains per day. Portions of theCanadian Nationalmain line in theGreater Toronto AreaandSouthern Ontarioare triple track to facilitate high traffic density of freight services,intercity, andsuburbanpassenger trains sharing the same lines.[17] India, through its state-owned Indian Railways, has initiated the construction of a third track between Jhansi and Nagpur via Bhopal (approximately 590 kilometres (370 miles)) for reducing the traffic load and delays in passenger train arrivals.[18]The construction between Bina and Bhopal[19]and between Itarsi and Budhni had been completed by April 2020.[20] TheMelbournetoAlburyrailway originally consisted of separate1,600 mm(5 ft 3 in) gauge and1,435 mm(4 ft8+1⁄2in) gauge single track lines, but when traffic on the broad gauge declined, the lines were converted tobi-directionaldouble track1,435 mm(4 ft8+1⁄2in) gauge lines. Quadruple track consists of four parallel tracks. On a quad-track line, faster trains can overtake slower ones. Quadruple track is mostly used when there are "local" trains that stop often (or slow freight trains), and also faster inter-city or high-speed "express" trains. 
It can also be used incommuter railorrapid transit. The layout can vary, often with the two outer tracks carrying the local trains that stop at every station so one side of stations can be reached without staircase; this can also be reversed, with express trains on the outside and locals on the inside, for example if staffed ticket booths are wanted, allowing one person for both directions. At other places two tracks on one half of the railway carry local trains and the other half faster trains. At the local train stations, the express trains can pass through the station at full speed. For example on theNuremberg-Bamberg railway, which is quadruple track for most of its course, the inner two tracks are used by theS-Bahn Nurembergwhereas the outer tracks are used for regional express andIntercity Expresstrains. The section in northernFürthwhere the line is "only" double track creates a major bottleneck. For Berlin Stadtbahn the two northern tracks are local S-Bahn and the two other for faster trains. One notable example of quadruple track in the United States was thePennsylvania Railroad's main corridor through the heart ofPennsylvaniaaround the famousHorseshoe Curve. This line is now owned by Norfolk Southern. Other examples include theHudsonandNew HavenLines, both of which are shared betweenMetro-NorthandAmtrakin New York and Connecticut. The New Haven Line is quadruple track along its entire length, while the Hudson Line is only quadruple tracked along the shared portion fromRiverdaletoCroton–Harmonand along the shared track fromGrand Central TerminaltoYankees–East 153rd Street. Amtrak'sNortheast Corridoris quadruple tracked in most portions south of New Haven, but also has a few triple-track segments. TheMetra Electric DistrictandSouth Shore Lineis quadruple-tracked on most of the main line north ofKensington/115th Street station, with local trains running in the center two tracks, and express trains on the outer two tracks. Running parallel are two additional non-electrified tracks that carry freight rail and Amtrak trains, making the entire right of way a total of six tracks. Outside the United States theChūō Main Lineis an example of a modern, heavily utilized urban quadruple track railway. Quadruple track is used in rapid transit systems as well: throughout theNew York City Subway, theChicago "L"'sNorth Side Main Line, andSEPTA'sBroad Street Linein the United States, and on theLondon Undergroundin the United Kingdom. The two tracks of a double-track railway do not have to follow the same alignment if the terrain is difficult. AtFrampton, New South Wales, Australia, the uphill track follows something of a horseshoe curve at 1 in 75 gradient, while the shorter downhill track follows the original single track at 1 in 40 grades. A similar arrangement to Frampton could not be adopted betweenRydalandSodwallson theMain Western railway linebecause the 1 in 75 uphill track is on the wrong side of the 1 in 40 downhill track, so both tracks follow the 1 in 75 grade. Another example is atGunning. BetweenJuneeandMarinna, New South Wales, Australia the two tracks are at different levels, with the original southbound and downhill track following ground level with a steep gradient, while the newer northbound and uphill track has a gentler gradient at the cost of morecut and fill. At theBethungra Spiral, Australia, the downhill track follows the original short and steep alignment, while the uphill track follows a longer, more easily graded alignment including aspiral. 
AtSaunderton, England, what became the London-to-Birmingham main line of theGreat Western Railwayin 1909 was initially part of a single-track branch line fromMaidenhead. Down trains follow the route of the old branch line, while up trains follow a more gently graded new construction through a tunnel. This scheme avoided the cost of a new double-track tunnel. Directional running is two separate lines operationally combined to act as a double-track line by converting each line to unidirectional traffic. An example is in centralNevada, where theWestern PacificandSouthern Pacific Railroads, longtime rivals who each built and operated tracks betweennorthern CaliforniaandUtah, agreed to share their lines between meeting points nearWinnemuccaandWells, a distance of approximately 180 miles (290 km).[21]Westbound trains from both companies used the Southern Pacific'sOverland Route, and eastbound trains used the Western Pacific'sFeather River Route(now called theCentral Corridor).[22]Crossovers were constructed where the lines ran in close proximity to allow reverse movements. This was necessary as while for most of this run the tracks straddle opposite sides of theHumboldt River, at points the two tracks are several miles apart and some destinations and branch lines can only be accessed from one of the lines. There is a grade separated crossover of the two lines in the shared track area nearPalisade, Nevada, which results in trains followingright hand trafficin the eastern half of the shared track area, butleft hand trafficin the western half. TheUnion Pacific Railroadhas since acquired both of these lines, and continues to operate them as separate lines using directional running.Amtrakalso runs theCalifornia Zephyralong these routes.[23] A similar example exists in theFraser CanyoninBritish Columbia, whereCanadian NationalandCanadian Pacific Kansas Cityeach own a single-track line – often on either side of the river. The companies have a joint arrangement where they share resources and operate the canyon as a double-track line over a 155-mile distance (249 km) between meeting points nearMissionandAshcroft.[24]The agreement effectively increased capacity through the corridor from 30 trains per day to over 100 trains daily.[25] In other cases, where the shared lines already run in close proximity, the two companies may share facilities. InConshohocken, Pennsylvania, where the formerReading RailroadandPennsylvania Railroadshared lines, the lines even shared overhead electrical wire supports, for a 2-mile (3.2 km) stretch on the northern bank of theSchuylkill River. Both lines eventually came underConrailownership in 1976, with the former PRR line being abandoned and now used as a hiking and bicycle path.[citation needed] There are about 7,500 miles (12,100 km) of routes operated directionally in the United States and Canada, with about 2,000 miles (3,200 km) of those miles running inTexas.[26] An unusual example used to exist on theIsle of Wight, where until 1926 parallel tracks betweenSmallbrook JunctionandSt John's Roadexisted. TheSouthern Railwayinstalled the actual junction, but it was only used during heavily trafficked summer months. During the winter, the lines reverted to separate single-track routes.[27] Because double and single track may use different signalling systems, it may be awkward and confusing to mix double and single track too often. 
For example, intermediate mechanical signal boxes on a double-track line can be closed during periods of light traffic, but this cannot be done if there is a single-line section in between. This problem is less serious with electrical signalling such asCentralized traffic control.
https://en.wikipedia.org/wiki/Double-track_railway
On anEthernetconnection, aduplex mismatchis a condition where two connected devices operate in differentduplex modes, that is, one operates in half duplex while the other one operates in full duplex. The effect of a duplex mismatch is a link that operates inefficiently. Duplex mismatch may be caused by manually setting two connected network interfaces at different duplex modes or by connecting a device that performsautonegotiationto one that is manually set to a full duplex mode.[1] When a device set to autonegotiation is connected to a device that is not using autonegotiation, the autonegotiation process fails. The autonegotiating end of the connection is still able to correctly detect the speed of the other end, but cannot correctly detect the duplex mode. For backward compatibility withEthernet hubs, the standard requires the autonegotiating device to use half duplex in these conditions. Therefore, the autonegotiating end of the connection uses half duplex while the non-negotiating peer is locked at full duplex, and this is a duplex mismatch. The Ethernet standards and major Ethernet equipment manufacturers recommend enabling autonegotiation.[2][3][4]Nevertheless, network equipment allows autonegotiation to be disabled and on some networks, autonegotiation is disabled on all ports and a fixed modality of 100 Mbit/s and full duplex is used. That was often done by network administrators intentionally upon the introduction of autonegotiation, because ofinteroperability issueswith the initial autonegotiation specification. The fixed mode of operation works well if both ends of a connection are locked to the same settings. However, maintaining such a network and guaranteeing consistency is difficult. Since autonegotiation is generally the manufacturer’s default setting it is almost certain that, in an environment where the policy is to have fixed port settings, someone will sooner or later leave a port set to use autonegotiation by mistake.[5] Communicationispossible over a connection in spite of a duplex mismatch. Single packets are sent and acknowledged without problems. As a result, a simplepingcommand fails to detect a duplex mismatch because single packets and their resulting acknowledgments at 1-second intervals do not cause any problem on the network. A terminal session which sends data slowly (in very short bursts) can also communicate successfully. However, as soon as either end of the connection attempts to send any significant amount of data, the network suddenly slows to very low speed. Since the network is otherwise working, the cause is not so readily apparent. A duplex mismatch causes problems when both ends of the connection attempt to transfer data at the same time. This happens even if the channel is used (from a high-level or user's perspective) in one direction only, in case of large data transfers. Indeed, when a large data transfer is sent over aTCP, data is sent in multiple packets, some of which will trigger an acknowledgment packet back to the sender. This results in packets being sent in both directions at the same time. In such conditions, the full-duplex end of the connection sends its packets while receiving other packets; this is exactly the point of a full-duplex connection. Meanwhile, the half-duplex end cannot accept the incoming data while it is sending – it will sense it as acollision. The half-duplex device ceases its current data transmission, sends a jam signal instead and then retries later as perCSMA/CD. 
This results in the full-duplex side receiving an incomplete frame with a CRC error or a runt frame. It does not detect any collision since CSMA/CD is disabled on the full-duplex side. As a result, when both devices are attempting to transmit at (nearly) the same time, the packet sent by the full-duplex end will be discarded and lost due to an assumed collision, and the packet sent by the half-duplex device will be delayed or lost due to a CRC error in the frame.[6] The lost packets force the TCP protocol to perform error recovery, but the initial (streamlined) recovery attempts fail because the retransmitted packets are lost in exactly the same way as the original packets. Eventually, the TCP transmission window becomes full and the TCP protocol refuses to transmit any further data until the previously transmitted data is acknowledged. This, in turn, will quiesce the new traffic over the connection, leaving only the retransmissions and acknowledgments. Since the retransmission timer grows progressively longer between attempts, eventually a retransmission will occur when there is no reverse traffic on the connection, and the acknowledgments are finally received. This will restart the TCP traffic, which in turn immediately causes lost packets as streaming resumes. The end result is a connection that is working but performs extremely poorly because of the duplex mismatch. Symptoms of a duplex mismatch are connections that seem to work fine with a ping command, but "lock up" easily with very low throughput on data transfers; the effective data transfer rate is likely to be asymmetrical, performing much worse in the half-duplex to full-duplex direction than in the other. In normal half-duplex operation, late collisions do not occur. However, in a duplex mismatch the collisions seen on the half-duplex side of the link are often late collisions. The full-duplex side usually will register frame check sequence errors or runt frames.[7][8] Viewing these standard Ethernet statistics can help diagnose the problem. Contrary to what one might reasonably expect, both sides of a connection need to be identically configured for proper operation. In other words, setting one side to automatic (either speed or duplex or both) and setting the other to be fixed (either speed or duplex or both) will likely result in either a speed mismatch, a duplex mismatch or both. A duplex mismatch can be fixed by either enabling autonegotiation (if available and working) on both ends or by forcing the same settings on both ends (availability of a configuration interface permitting). If there is no option but to have a locked setting on one end and autonegotiation on the other (for example, an old device with broken autonegotiation connected to an unmanaged switch), half duplex must be used. All modern LAN equipment comes with autonegotiation enabled and the various compatibility issues have been resolved. The best way to avoid duplex mismatches is to use autonegotiation and to replace any legacy equipment that does not use autonegotiation or does not autonegotiate correctly.
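The interaction described above can be caricatured with a toy Monte-Carlo model. The Python sketch below is purely illustrative (the per-slot transmit probabilities are invented, and real Ethernet and TCP timing is far more involved): whenever the two ends transmit in the same time step, the half-duplex end counts a collision, the full-duplex end counts an FCS/runt error, and both frames are lost, which is why light ping-like traffic survives while bulk transfers collapse.

```python
import random

random.seed(42)

def simulate(slots, p_full_duplex_tx, p_half_duplex_tx):
    """Toy model of a duplex-mismatched link (not real Ethernet timing).
    Returns (delivered, lost, hd_collisions, fd_fcs_errors)."""
    delivered = lost = hd_collisions = fd_fcs_errors = 0
    for _ in range(slots):
        fd_sends = random.random() < p_full_duplex_tx   # full-duplex end
        hd_sends = random.random() < p_half_duplex_tx   # half-duplex end
        if fd_sends and hd_sends:
            # The half-duplex side senses incoming traffic while transmitting:
            # it declares a (late) collision and aborts, so its frame reaches
            # the peer as a runt/FCS error; the full-duplex side's frame is
            # dropped by the half-duplex side as part of the "collision".
            hd_collisions += 1
            fd_fcs_errors += 1
            lost += 2
        elif fd_sends or hd_sends:
            delivered += 1
    return delivered, lost, hd_collisions, fd_fcs_errors

# Light traffic (ping-like) rarely overlaps; a bulk transfer overlaps constantly.
print("light traffic :", simulate(10_000, 0.01, 0.01))
print("bulk transfer :", simulate(10_000, 0.90, 0.60))
```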
https://en.wikipedia.org/wiki/Duplex_mismatch
Aduplexeris an electronic device that allows bi-directional (duplex) communication over a single path. Inradarand radio communications systems, it isolates thereceiverfrom thetransmitterwhile permitting them to share a commonantenna. Mostradio repeatersystems include a duplexer. Duplexers can be based on frequency (often awaveguide filter), polarization (such as anorthomode transducer), or timing (as is typical in radar).[1] In radar, a transmit/receive (TR) switch alternately connects the transmitter and receiver to a shared antenna. In the simplest arrangement, the switch consists of agas-discharge tubeacross the input terminals of the receiver. When the transmitter is active, the resulting high voltage causes the tube to conduct, shorting together the receiver terminals to protect it, while its complementary, the anti-transmit/receive (ATR) switch, is a similar discharge tube which decouples the transmitter from the antenna while not operating, to prevent it from wasting received energy. Ahybrid, such as amagic T, may be used as a duplexer by terminating the fourth port in amatched load.[2]This arrangement suffers from the disadvantage that half of the transmitter power is lost in the matched load, whilethermal noisein the load is delivered to the receiver. In radio communications (as opposed to radar), the transmitted and received signals can occupy different frequency bands, and so may be separated by frequency-selective filters. These are effectively a higher-performance version of adiplexer, typically with a narrow split between the two frequencies in question (typically around 2%-5% for a commercial two-way radio system). With a duplexer the high- and low-frequency signals are traveling in opposite directions at the sharedportof the duplexer. Modern duplexers often use nearby frequency bands, so the frequency separation between the two ports is also much less. For example, the transition between the uplink and downlink bands in theGSM frequency bandsmay be about one percent (915 MHz to 925 MHz). Significant attenuation (isolation) is needed to prevent the transmitter's output from overloading the receiver's input, so such duplexers employ multi-pole filters. Duplexers are commonly made for use on the 30-50 MHz ("low band"), 136-174 MHz ("high band"), 380-520 MHz ("UHF"), plus the 790–862 MHz ("800"), 896-960 MHz ("900") and 1215-1300 MHz ("1200") bands. There are two predominant types of duplexer in use - "notch duplexers", which exhibit sharp notches at the "unwanted" frequencies and only pass through a narrow band of wanted frequencies and "bandpass duplexers", which have wide-pass frequency ranges and high out-of-band attenuation. On shared-antenna sites, the bandpass duplexer variety is greatly preferred because this virtually eliminates interference between transmitters and receivers by removing out-of-band transmit emissions and considerably improving the selectivity of receivers. Most professionally engineered sites ban the use of notch duplexers and insist on bandpass duplexers for this reason. Note 1:A duplexer must be designed for operation in thefrequencybandused by the receiver and transmitter, and must be capable of handling the outputpowerof the transmitter. Note 2:A duplexer must provide adequate rejection of transmitternoiseoccurring at the receive frequency, and must be designed to operate at, or less than, the frequency separation between the transmitter and receiver. Note 3:A duplexer must provide sufficient isolation to prevent receiver desensitization. 
Source: Federal Standard 1037C. The first duplexers were invented for use on the electric telegraph and were known as duplex rather than duplexer. They were an early form of the hybrid coil. The telegraph companies were keen to have such a device, since the ability to carry simultaneous traffic in both directions had the potential to save the cost of thousands of miles of telegraph wire. The first of these devices was designed in 1853 by Julius Wilhelm Gintl of the Austrian State Telegraph. Gintl's design was not very successful. Further attempts were made by Carl Frischen of Hanover, who used an artificial line to balance the real line, and by Siemens & Halske, who bought and modified Frischen's design. The first truly successful duplex was designed by Joseph Barker Stearns of Boston in 1872. This was further developed into the quadruplex telegraph by Thomas Edison. The device is estimated to have saved Western Union $500,000 per year in construction of new telegraph lines.[3][4] The first duplexers for radar, sometimes referred to as transmit/receive switches, were invented by Robert Morris Page and Leo C. Young of the United States Naval Research Laboratory in July 1936.[5]
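To put a number on the isolation requirement mentioned in Note 3 above, the following short calculation uses assumed figures (a 50 W transmitter and a receiver that desensitizes above roughly -80 dBm); the values are illustrative only, but they indicate why repeater duplexers need on the order of a hundred decibels of isolation.

```python
import math

def dbm(watts):
    """Convert a power in watts to dBm."""
    return 10 * math.log10(watts * 1000)

tx_power_w = 50                 # repeater transmitter output (assumed)
tx_power_dbm = dbm(tx_power_w)  # about +47 dBm

# Strongest signal the co-sited receiver can tolerate before it desensitizes
# (assumed figure for illustration).
rx_max_tolerable_dbm = -80

required_isolation_db = tx_power_dbm - rx_max_tolerable_dbm
print(f"TX power           : {tx_power_dbm:.1f} dBm")
print(f"RX tolerance       : {rx_max_tolerable_dbm} dBm")
print(f"Isolation required : {required_isolation_db:.1f} dB")  # about 127 dB,
# which is why multi-cavity bandpass or notch duplexers are used on repeaters.
```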
https://en.wikipedia.org/wiki/Duplexer
In telecommunications, a four-wire circuit is a two-way circuit using two paths so arranged that the respective signals are transmitted in one direction only by one path and in the other direction by the other path. The four-wire circuit gets its name from the fact that it uses four conductors to create two complete electrical circuits, one for each direction. The two separate circuits (channels) allow full-duplex operation with low crosstalk. In telephony, a four-wire circuit was historically used to transport and switch baseband audio signals in the phone company telephone exchange before the advent of digital modulation and the electronic switching system eliminated baseband audio from the telco plant except for the local loop. The local loop is a two-wire circuit for one reason only: to save copper. Using half the number of copper wire conductors per circuit means that the infrastructure cost for wiring each circuit is halved. Although a lower-quality circuit, the local loop allows full-duplex operation by using a telephone hybrid to keep near and far voice levels equivalent. As the public switched telephone network expanded in size and scope, using many individual wires inside the telco plant became so impractical and labor-intensive that in-office and inter-office signal wiring progressed to high-bandwidth coaxial cable (still a popular interconnection method in the 21st century, used with the Lucent 5ESS Class-5 telephone switch to the present day), microwave radio relay and ultimately fiber-optic communication for high-speed trunk circuits. At the end of the 20th century, four-wire circuits saw renewed growth in corporate local loop service, used in dedicated line service for computer modems to interconnect company computer networks and to connect networks to an Internet service provider for Internet connectivity, before commodity DSL and cable modem connectivity was widely available.
https://en.wikipedia.org/wiki/Four-wire_circuit
Intelecommunicationsandcomputer networking,multiplexing(sometimes contracted tomuxing) is a method by which multipleanalogordigital signalsare combined into one signal over ashared medium. The aim is to share a scarce resource—a physicaltransmission medium.[citation needed]For example, in telecommunications, severaltelephone callsmay be carried using one wire. Multiplexing originated intelegraphyin the 1870s, and is now widely applied in communications. Intelephony,George Owen Squieris credited with the development of telephone carrier multiplexing in 1910. The multiplexed signal is transmitted over a communication channel such as a cable. The multiplexing divides the capacity of the communication channel into several logical channels, one for each message signal or data stream to be transferred. A reverse process, known as demultiplexing, extracts the original channels on the receiver end. A device that performs the multiplexing is called amultiplexer(MUX), and a device that performs the reverse process is called ademultiplexer(DEMUX or DMX). Inverse multiplexing(IMUX) has the opposite aim as multiplexing, namely to break one data stream into several streams, transfer them simultaneously over several communication channels, and recreate the original data stream. Incomputing,I/O multiplexingcan also be used to refer to the concept of processing multipleinput/outputeventsfrom a singleevent loop, with system calls likepoll[1]andselect (Unix).[2] Multiplevariable bit ratedigitalbit streamsmay be transferred efficiently over a single fixedbandwidthchannel by means ofstatistical multiplexing. This is anasynchronousmode time-domain multiplexing which is a form of time-division multiplexing. Digital bit streams can be transferred over an analog channel by means of code-division multiplexing techniques such asfrequency-hopping spread spectrum(FHSS) anddirect-sequence spread spectrum(DSSS). Inwireless communications, multiplexing can also be accomplished through alternatingpolarization(horizontal/verticalorclockwise/counterclockwise) on eachadjacent channeland satellite, or throughphased multi-antenna arraycombined with amultiple-input multiple-output communications(MIMO) scheme. In wired communication,space-division multiplexing, also known as space-division multiple access (SDMA) is the use of separate point-to-point electrical conductors for each transmitted channel. Examples include an analog stereo audio cable, with one pair of wires for the left channel and another for the right channel, and a multi-pairtelephone cable, a switchedstar networksuch as a telephone access network, a switched Ethernet network, and amesh network. In wireless communication, space-division multiplexing is achieved with multiple antenna elements forming aphased array antenna. Examples aremultiple-input and multiple-output(MIMO), single-input and multiple-output (SIMO) and multiple-input and single-output (MISO) multiplexing. An IEEE 802.11g wireless router withkantennas makes it in principle possible to communicate withkmultiplexed channels, each with a peak bit rate of 54 Mbit/s, thus increasing the total peak bit rate by the factork. Different antennas would give differentmulti-path propagation(echo) signatures, making it possible fordigital signal processingtechniques to separate different signals from each other. These techniques may also be utilized forspace diversity(improved robustness to fading) orbeamforming(improved selectivity) rather than multiplexing. 
Frequency-division multiplexing (FDM) is inherently an analog technology. FDM achieves the combining of several signals into one medium by sending signals in several distinct frequency ranges over a single medium. In FDM the signals are electrical signals. One of the most common applications for FDM is traditional radio and television broadcasting from terrestrial, mobile or satellite stations, or cable television. Only one cable reaches a customer's residential area, but the service provider can send multiple television channels or signals simultaneously over that cable to all subscribers without interference. Receivers must tune to the appropriate frequency (channel) to access the desired signal.[3] A variant technology, called wavelength-division multiplexing (WDM), is used in optical communications. Time-division multiplexing (TDM) is a digital (or in rare cases, analog) technology that uses time, instead of space or frequency, to separate the different data streams. TDM involves sequencing groups of a few bits or bytes from each individual input stream, one after the other, and in such a way that they can be associated with the appropriate receiver. If done sufficiently quickly, the receiving devices will not detect that some of the circuit time was used to serve another logical communication path. Consider an application requiring four terminals at an airport to reach a central computer. Each terminal communicates at 2400 baud, so rather than acquire four individual circuits to carry such a low-speed transmission, the airline has installed a pair of multiplexers. A pair of 9600 baud modems and one dedicated analog communications circuit from the airport ticket desk back to the airline data center are also installed.[3] Some web proxy servers (e.g. polipo) use TDM in HTTP pipelining of multiple HTTP transactions onto the same TCP/IP connection.[4] Carrier-sense multiple access and multidrop communication methods are similar to time-division multiplexing in that multiple data streams are separated by time on the same medium, but because the signals have separate origins instead of being combined into a single signal, they are best viewed as channel access methods rather than a form of multiplexing. TDM is a legacy multiplexing technology still providing the backbone of most national fixed-line telephony networks in Europe, providing the 2 Mbit/s voice and signaling ports on narrow-band telephone exchanges such as the DMS100. Each E1 or 2 Mbit/s TDM port provides either 30 or 31 speech timeslots in the case of CCITT7 signaling systems, and 30 voice channels for customer-connected Q931, DASS2, DPNSS, V5 and CASS signaling systems.[5] Polarization-division multiplexing uses the polarization of electromagnetic radiation to separate orthogonal channels. It is in practical use in both radio and optical communications, particularly in 100 Gbit/s per channel fiber-optic transmission systems. Differential Cross-Polarized Wireless Communications is a novel method for polarized antenna transmission utilizing a differential technique.[6] Orbital angular momentum multiplexing is a relatively new and experimental technique for multiplexing multiple channels of signals carried using electromagnetic radiation over a single path.[7] It can potentially be used in addition to other physical multiplexing methods to greatly expand the transmission capacity of such systems.
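The airport-terminal example above can be made concrete with a minimal byte-interleaved TDM sketch. The Python snippet below is illustrative only (the frame format is invented, and each byte stands in for one tributary's time slot): one byte is taken from each of four low-speed streams per frame, and the receiver recovers each stream from its slot position, which also shows why four 2400-baud tributaries fit exactly into one 9600-baud aggregate.

```python
def tdm_multiplex(streams):
    """Round-robin (byte-interleaved) TDM: one byte from each input per frame.
    Inputs are assumed to be of equal length here for simplicity."""
    frames = zip(*streams)                       # each frame = one byte per stream
    return bytes(b for frame in frames for b in frame)

def tdm_demultiplex(muxed, n_streams):
    """Recover stream i by taking every n_streams-th byte, offset by i."""
    return [muxed[i::n_streams] for i in range(n_streams)]

terminals = [b"AAAA", b"BBBB", b"CCCC", b"DDDD"]   # four low-speed tributaries
line = tdm_multiplex(terminals)                    # one aggregate stream
print(line)                                        # b'ABCDABCDABCDABCD'
print(tdm_demultiplex(line, 4))                    # the original four streams

# Capacity check: 4 tributaries x 2400 baud fit exactly into one 9600-baud link
print(4 * 2400 <= 9600)                            # True (no statistical gain here)
```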
As of 2012[update], orbital angular momentum multiplexing is still in its early research phase, with small-scale laboratory demonstrations of bandwidths of up to 2.5 Tbit/s over a single light path.[8] This is a controversial subject in the academic community, with many claiming it is not a new method of multiplexing, but rather a special case of space-division multiplexing.[9] Code-division multiplexing (CDM), code-division multiple access (CDMA) or spread spectrum is a class of techniques where several channels simultaneously share the same frequency spectrum, and this spectral bandwidth is much higher than the bit rate or symbol rate. One form is frequency hopping; another is direct-sequence spread spectrum. In the latter case, each channel transmits its bits as a coded channel-specific sequence of pulses called chips. The number of chips per bit, or chips per symbol, is the spreading factor. This coded transmission typically is accomplished by transmitting a unique time-dependent series of short pulses, which are placed within chip times within the larger bit time. All channels, each with a different code, can be transmitted on the same fiber or radio channel or other medium, and asynchronously demultiplexed. Advantages over conventional techniques are that variable bandwidth is possible (just as in statistical multiplexing), that the wide bandwidth allows a poor signal-to-noise ratio according to the Shannon–Hartley theorem, and that multi-path propagation in wireless communication can be combated by rake receivers. A significant application of CDMA is the Global Positioning System (GPS). A multiplexing technique may be further extended into a multiple access method or channel access method, for example, TDM into time-division multiple access (TDMA) and statistical multiplexing into carrier-sense multiple access (CSMA). A multiple-access method makes it possible for several transmitters connected to the same physical medium to share their capacity. Multiplexing is provided by the physical layer of the OSI model, while multiple access also involves a media access control protocol, which is part of the data link layer. The transport layer in the OSI model, as well as in the TCP/IP model, provides statistical multiplexing of several application-layer data flows to/from the same computer. Code-division multiplexing (CDM) is a technique in which each channel transmits its bits as a coded channel-specific sequence of pulses. This coded transmission is typically accomplished by transmitting a unique time-dependent series of short pulses, which are placed within chip times within the larger bit time. All channels, each with a different code, can be transmitted on the same fiber and asynchronously demultiplexed. Other widely used multiple access techniques are time-division multiple access (TDMA) and frequency-division multiple access (FDMA). Code-division multiplex techniques are used as an access technology, namely code-division multiple access (CDMA), in the Universal Mobile Telecommunications System (UMTS) standard for third-generation (3G) mobile communication identified by the ITU.[citation needed] The earliest communication technology using electrical wires, and therefore sharing an interest in the economies afforded by multiplexing, was the electric telegraph. Early experiments allowed two separate messages to travel in opposite directions simultaneously, first using an electric battery at both ends, then at only one end. Émile Baudot developed a time-multiplexing system of multiple Hughes machines in the 1870s.
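The Shannon–Hartley point made above, that a wide spread-spectrum bandwidth permits operation at a very poor signal-to-noise ratio, can be checked numerically. The bandwidths and target bit rate in the snippet below are arbitrary illustrative choices.

```python
import math

def shannon_capacity(bandwidth_hz, snr_linear):
    """Shannon-Hartley channel capacity in bit/s: C = B * log2(1 + S/N)."""
    return bandwidth_hz * math.log2(1 + snr_linear)

def snr_required(bandwidth_hz, rate_bps):
    """Minimum linear SNR needed to carry `rate_bps` within `bandwidth_hz`."""
    return 2 ** (rate_bps / bandwidth_hz) - 1

rate = 100_000                      # target 100 kbit/s (illustrative)
narrow = 100_000                    # narrowband channel: 100 kHz
wide = 10_000_000                   # spread-spectrum channel: 10 MHz

for name, bw in (("narrowband", narrow), ("spread", wide)):
    snr = snr_required(bw, rate)
    print(f"{name:10s}: B = {bw / 1e6:5.1f} MHz, "
          f"required SNR = {10 * math.log10(snr):6.1f} dB")
# The narrowband channel needs an SNR of 0 dB, while the spread channel needs
# roughly -22 dB, i.e. the signal can sit well below the noise floor, as
# CDMA and other spread-spectrum signals do.
```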
In 1874, thequadruplex telegraphdeveloped byThomas Edisontransmitted two messages in each direction simultaneously, for a total of four messages transiting the same wire at the same time. Several researchers were investigatingacoustic telegraphy, afrequency-division multiplexingtechnique, which led to theinvention of the telephone. Intelephony, acustomer'stelephone linenow typically ends at theremote concentratorbox, where it is multiplexed along with othertelephone linesfor thatneighborhoodor other similar area. The multiplexed signal is then carried to thecentral switching officeon significantly fewer wires and for much further distances than a customer's line can practically go. This is likewise also true fordigital subscriber lines(DSL). Fiber in the loop(FITL) is a common method of multiplexing, which usesoptical fiberas thebackbone. It not only connectsPOTSphone lines with the rest of thePSTN, but also replaces DSL by connecting directly toEthernetwired into thehome.Asynchronous Transfer Modeis often thecommunications protocolused.[citation needed] Cable TVhas long carried multiplexedtelevision channels, and late in the 20th century began offering the same services astelephone companies.IPTValso depends on multiplexing. Invideoediting and processing systems, multiplexing refers to the process of interleaving audio and video into one coherent data stream. Indigital video, such a transport stream is normally a feature of acontainer formatwhich may includemetadataand other information, such assubtitles. The audio and video streams may have variable bit rate. Software that produces such a transport stream and/or container is commonly called a multiplexer ormuxer. Ademuxeris software that extracts or otherwise makes available for separate processing the components of such a stream or container. Indigital televisionsystems, several variable bit-rate data streams are multiplexed together to a fixed bit-rate transport stream by means ofstatistical multiplexing. This makes it possible to transfer several video and audio channels simultaneously over the same frequency channel, together with various services. This may involve severalstandard-definition television(SDTV) programs (particularly onDVB-T,DVB-S2,ISDBand ATSC-C), or oneHDTV, possibly with a single SDTV companion channel over one 6 to 8 MHz-wide TV channel. The device that accomplishes this is called astatistical multiplexer. In several of these systems, the multiplexing results in anMPEG transport stream. The newer DVB standards DVB-S2 andDVB-T2has the capacity to carry severalHDTVchannels in one multiplex.[citation needed] Indigital radio, a multiplex (also known as an ensemble) is a number of radio stations that are grouped together. A multiplex is a stream of digital information that includes audio and other data.[10] Oncommunications satelliteswhich carrybroadcasttelevision networksandradio networks, this is known asmultiple channel per carrierorMCPC. Where multiplexing is not practical (such as where there are different sources using a singletransponder),single channel per carriermode is used.[citation needed] InFM broadcastingand otheranalogradiomedia, multiplexing is a term commonly given to the process of addingsubcarriersto the audio signal before it enters thetransmitter, wheremodulationoccurs. 
(In fact, the stereo multiplex signal can be generated using time-division multiplexing, by switching between the two (left channel and right channel) input signals at an ultrasonic rate (the subcarrier), and then filtering out the higher harmonics.) Multiplexing in this sense is sometimes known as MPX, which in turn is also an old term for stereophonic FM, seen on stereo systems since the 1960s. In spectroscopy the term is used to indicate that the experiment is performed with a mixture of frequencies at once and their respective responses unraveled afterwards using the Fourier transform principle. In computer programming, it may refer to using a single in-memory resource (such as a file handle) to handle multiple external resources (such as on-disk files).[11] Some electrical multiplexing techniques do not require a physical "multiplexer" device; instead, they rely on a "keyboard matrix" or "Charlieplexing" design style. In high-throughput DNA sequencing, the term is used to indicate that some artificial sequences (often called barcodes or indexes) have been added to link given sequence reads to a given sample, and thus allow for the sequencing of multiple samples in the same reaction. In sociolinguistics, multiplexity is used to describe the number of distinct connections between individuals who are part of a social network. A multiplex network is one in which members share a number of ties stemming from more than one social context, such as workmates, neighbors, or relatives.
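As a concrete illustration of the code-division multiplexing described earlier, the sketch below spreads each data bit into a channel-specific chip sequence, adds the two spread signals on a shared medium, and recovers each channel by correlating against its own code. The two spreading codes, the bit patterns, and the noiseless channel are illustrative assumptions, not taken from any particular system.

```python
# Minimal sketch of direct-sequence code-division multiplexing (CDM): each
# channel spreads every data bit into a channel-specific chip sequence, the
# spread signals share one medium, and each receiver recovers its own bits by
# correlating against its own code. All values here are illustrative.

import numpy as np

# Orthogonal spreading codes (spreading factor 4), one per channel.
codes = {
    "A": np.array([+1, +1, +1, +1]),
    "B": np.array([+1, -1, +1, -1]),
}

def spread(bits, code):
    """Map each bit (0/1) to +/-1 and repeat it across the chip sequence."""
    symbols = np.array([1 if b else -1 for b in bits])
    return np.concatenate([s * code for s in symbols])

def despread(signal, code):
    """Correlate the combined signal with one code to recover that channel."""
    sf = len(code)
    chunks = signal.reshape(-1, sf)
    correlations = chunks @ code          # inner product per bit period
    return [1 if c > 0 else 0 for c in correlations]

bits_a = [1, 0, 1]
bits_b = [0, 0, 1]

# Both channels transmit simultaneously on the same medium: their signals add.
combined = spread(bits_a, codes["A"]) + spread(bits_b, codes["B"])

print(despread(combined, codes["A"]))  # [1, 0, 1]
print(despread(combined, codes["B"]))  # [0, 0, 1]
```

Because the two codes are orthogonal, each correlation cancels the other channel's contribution, which is exactly the property that the spreading factor and the choice of channel-specific codes are meant to provide.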
https://en.wikipedia.org/wiki/Multiplexing
Push-to-talk(PTT), also known aspress-to-transmit, is a method of having conversations or talking onhalf-duplexcommunication lines, includingtwo-way radio, using amomentary buttonto switch from voice reception mode to transmit mode. For example, anair traffic controllerusually supervises several aircraft and talks on one radio frequency to all of them. Those under the same frequency can hear others' transmissions while usingprocedure wordssuch as "break", "break break" to separate order during the conversation (ICAO doc 9432). In doing so, they are aware of each other's actions and intentions. Unlike in aconference call, they do not hear background noise from the ones who are not speaking. Similar considerations apply topolice radio, the use ofbusiness bandradios onconstructionsites, and other scenarios requiring coordination of several parties.Citizens Bandis another example of classic push-to-talk operation. The PTT switch is most commonly located on the radio's handheld microphone, or for small hand-held radios, directly on the radio. For heavy radio users, a PTT foot switch may be used, and also can be combined with either a boom-mounted microphone or a headset with integrated microphone. Less commonly, a separate hand-held PTT switch may be used. This type of switch was historically called apressel.[citation needed] In situations where a user may be too busy to handle a talk switch,voice operated switchesare sometimes employed. Some systems usePTT IDto identify the speaker. Push-to-talk over cellular(PTToC) is a service option for acellular phonenetwork that enables subscribers to use their phones aswalkie-talkieswith unlimited range. A typical push-to-talk connection connects almost instantly. A significant advantage of PTT is the ability for a single person to reach an active talk group with a single button press; users don't need to make severaltelephone callsto coordinate with a group. Push-to-talk cellular calls similarly providehalf-duplexcommunications – while one person transmits, the other(s) receive. This combines the operational advantages of PTT with the interference resistance and other virtues of mobile phones. Manufacturers of (POCorPoC) hardware include ToooAir[1]andHytera US Inc.[2] Mobile push-to-talk services, offered by some mobile carriers directly as well as by independent companies, adds PTT functionality to smartphones and specialized mobile handsets (hand portable and mobile/base station PTT Radio Terminals). In addition to mobile handsets, some services also work on a laptop, desktop, and tablet computers. Recent development in PTT communications is the appearance ofappsonsmartphones, some of which can function on multiple platforms. Wireless carrier-grade PTT systems have adapted to and adopted the smartphone platform by providing downloadable apps that support their PTT systems across many mobile platforms. Over-the-top (OTT) applications do not depend on a specific carrier or type of communication network,[3]and may be slower than carrier implementations.
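A minimal sketch of the half-duplex push-to-talk behaviour described above may help: holding the momentary switch keys the transmitter and mutes the receiver, so a station cannot hear others while it is transmitting. The class and method names are purely illustrative and do not correspond to any real radio API.

```python
# Minimal sketch of push-to-talk on a half-duplex channel: pressing the
# momentary PTT switch puts the radio in transmit mode (receiver muted);
# releasing it returns the radio to voice reception mode.

class PushToTalkRadio:
    def __init__(self, callsign):
        self.callsign = callsign
        self.transmitting = False      # half-duplex: transmit OR receive

    def press_ptt(self):
        self.transmitting = True       # transmitter on, receiver off

    def release_ptt(self):
        self.transmitting = False      # back to voice reception mode

    def hears(self, other):
        # A station hears another only while it is not transmitting itself,
        # and only if the other station is actually keyed up.
        return (not self.transmitting) and other.transmitting

controller = PushToTalkRadio("TWR")
aircraft = PushToTalkRadio("AB123")

controller.press_ptt()
print(aircraft.hears(controller))   # True  - aircraft receives the controller
print(controller.hears(aircraft))   # False - controller cannot hear while keyed
controller.release_ptt()
```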
https://en.wikipedia.org/wiki/Push-to-talk
Aduplexcommunication systemis apoint-to-pointsystem composed of two or more connected parties or devices that can communicate with one another in both directions. Duplex systems are employed in many communications networks, either to allow for simultaneous communication in both directions between two connected parties or to provide a reverse path for the monitoring and remote adjustment of equipment in the field. There are two types of duplex communication systems: full-duplex (FDX) and half-duplex (HDX). In afull-duplexsystem, both parties can communicate with each other simultaneously. An example of a full-duplex device isplain old telephone service; the parties at both ends of a call can speak and be heard by the other party simultaneously. The earphone reproduces the speech of the remote party as the microphone transmits the speech of the local party. There is a two-way communication channel between them, or more strictly speaking, there are two communication channels between them. In ahalf-duplexorsemiduplexsystem, both parties can communicate with each other, but not simultaneously; the communication is one direction at a time. An example of a half-duplex device is awalkie-talkie, atwo-way radiothat has apush-to-talkbutton. When the local user wants to speak to the remote person, they push this button, which turns on the transmitter and turns off the receiver, preventing them from hearing the remote person while talking. To listen to the remote person, they release the button, which turns on the receiver and turns off the transmitter. This terminology is not completely standardized, and some sources define this mode assimplex.[1][2] Systems that do not need duplex capability may instead usesimplex communication, in which one device transmits and the others can only listen. Examples arebroadcastradio and television,garage door openers,baby monitors,wireless microphones, andsurveillance cameras. In these devices, the communication is only in one direction. Simplex communicationis acommunication channelthat sends information in one direction only.[3] TheInternational Telecommunication Uniondefinition is a communications channel that operates in one direction at a time, but that may be reversible; this is termedhalf duplexin other contexts. For example, in TV and radiobroadcasting, information flows only from the transmitter site to multiple receivers. A pair ofwalkie-talkietwo-way radiosprovide a simplex circuit in the ITU sense; only one party at a time can talk, while the other listens until it can hear an opportunity to transmit. The transmission medium (the radio signal over the air) can carry information in only one direction. 
TheWestern Unioncompany used the termsimplexwhen describing the half-duplex and simplex capacity of their newtransatlantic telegraph cablecompleted betweenNewfoundlandand theAzoresin 1928.[4]The same definition for a simplex radio channel was used by theNational Fire Protection Associationin 2002.[5] Ahalf-duplex(HDX) system provides communication in both directions, but only one direction at a time, not simultaneously in both directions.[6][7][8]This terminology is not completely standardized between defining organizations, and in radio communication some sources classify this mode assimplex.[2][1][9]Typically, once one party begins a transmission, the other party on the channel must wait for the transmission to complete, before replying.[10] An example of a half-duplex system is a two-party system such as awalkie-talkie, wherein one must say "over" or another previously designated keyword to indicate the end of transmission, to ensure that only one party transmits at a time. A good analogy for a half-duplex system would be a one-lane road that allows two-way traffic, traffic can only flow in one direction at a time. Half-duplex systems are usually used to conservebandwidth, at the cost of reducing the overall bidirectional throughput, since only a singlecommunication channelis needed and is shared alternately between the two directions. For example, a walkie-talkie or a DECT phone or so-called TDD 4G or 5G phones requires only a singlefrequencyfor bidirectional communication, while acell phonein the so-called FDD mode is a full-duplex device, and generally requires two frequencies to carry the two simultaneous voice channels, one in each direction. In automatic communications systems such as two-way data-links,time-division multiplexingcan be used for time allocations for communications in a half-duplex system. For example, station A on one end of the data link could be allowed to transmit for exactly one second, then station B on the other end could be allowed to transmit for exactly one second, and then the cycle repeats. In this scheme, the channel is never left idle. In half-duplex systems, if more than one party transmits at the same time, acollisionoccurs, resulting in lost or distorted messages. Afull-duplex(FDX) system allows communication in both directions, and, unlike half-duplex, allows this to happen simultaneously.[6][7][8]Land-linetelephonenetworks are full-duplex since they allow both callers to speak and be heard at the same time. Full-duplex operation is achieved on atwo-wire circuitthrough the use of ahybrid coilin atelephone hybrid. Modern cell phones are also full-duplex.[11] There is a technical distinction between full-duplex communication, which uses a single physical communication channel for both directions simultaneously, anddual-simplexcommunication which uses two distinct channels, one for each direction. From the user perspective, the technical difference does not matter and both variants are commonly referred to asfull duplex. ManyEthernetconnections achieve full-duplex operation by making simultaneous use of two physicaltwisted pairsinside the same jacket, or two optical fibers which are directly connected to each networked device: one pair or fiber is for receiving packets, while the other is for sending packets. Other Ethernet variants, such as1000BASE-Tuse the same channels in each direction simultaneously. 
In any case, with full-duplex operation, the cable itself becomes a collision-free environment and doubles the maximum total transmission capacity supported by each Ethernet connection. Full-duplex also has several benefits over half-duplex. Since there is only one transmitter on each twisted pair, there is no contention and there are no collisions, so time is not wasted by having to wait or retransmit frames. Full transmission capacity is available in both directions because the send and receive functions are separate. Some computer-based systems of the 1960s and 1970s required full-duplex facilities, even for half-duplex operation, since their poll-and-response schemes could not tolerate the slight delays in reversing the direction of transmission in a half-duplex line.[citation needed] Full-duplex audio systems like telephones can create echo, which is distracting to users and impedes the performance of modems. Echo occurs when the sound originating from the far end comes out of the speaker at the near end and re-enters the microphone[a] there and is then sent back to the far end. The sound then reappears at the original source end, but delayed. Echo cancellation is a signal-processing operation that subtracts the far-end signal from the microphone signal before it is sent back over the network. Echo cancellation is an important technology allowing modems to achieve good full-duplex performance. The V.32, V.34, V.56, and V.90 modem standards require echo cancellation.[12] Echo cancelers are available as both software and hardware implementations. They can be independent components in a communications system or integrated into the communication system's central processing unit. Where channel access methods are used in point-to-multipoint networks (such as cellular networks) for dividing forward and reverse communication channels on the same physical communications medium, they are known as duplexing methods.[13] Time-division duplexing (TDD) is the application of time-division multiplexing to separate outward and return signals. It emulates full-duplex communication over a half-duplex communication link. Time-division duplexing is flexible in the case where there is asymmetry of the uplink and downlink data rates or utilization. As the amount of uplink data increases, more communication capacity can be dynamically allocated, and as the traffic load becomes lighter, capacity can be taken away. The same applies in the downlink direction. The transmit/receive transition gap (TTG) is the gap (time) between a downlink burst and the subsequent uplink burst. Similarly, the receive/transmit transition gap (RTG) is the gap between an uplink burst and the subsequent downlink burst.[14] Examples of time-division duplexing systems include DECT cordless phones and the TDD modes of 4G and 5G cellular networks. Frequency-division duplexing (FDD) means that the transmitter and receiver operate using different carrier frequencies. The method is frequently used in ham radio operation, where an operator is attempting to use a repeater station. The repeater station must be able to send and receive a transmission at the same time, and does so by slightly altering the frequency at which it sends and receives. This mode of operation is referred to as duplex mode or offset mode. Uplink and downlink sub-bands are said to be separated by the frequency offset. Frequency-division duplex systems can extend their range by using sets of simple repeater stations because the communications transmitted on any single frequency always travel in the same direction.
Frequency-division duplexing can be efficient in the case of symmetric traffic; in this case, time-division duplexing tends to waste bandwidth during the switch-over from transmitting to receiving, has greater inherent latency, and may require more complex circuitry. Another advantage of frequency-division duplexing is that it makes radio planning easier and more efficient, since base stations do not "hear" each other (as they transmit and receive in different sub-bands) and therefore will normally not interfere with each other. Conversely, with time-division duplexing systems, care must be taken to keep guard times between neighboring base stations (which decreases spectral efficiency) or to synchronize base stations so that they will transmit and receive at the same time (which increases network complexity and therefore cost, and reduces bandwidth allocation flexibility, as all base stations and sectors will be forced to use the same uplink/downlink ratio). Examples of frequency-division duplexing systems include FDD cellular networks and amateur radio repeater operation.
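To make the time-division duplexing scheme described above concrete, the following sketch lays out one TDD frame in which downlink and uplink bursts alternate on a single channel, separated by the TTG and RTG transition gaps, and the downlink share can be changed to follow traffic asymmetry. The frame length, gap durations, and split values are illustrative assumptions rather than figures from any particular standard.

```python
# Minimal sketch of a time-division duplexing (TDD) frame: downlink and uplink
# bursts alternate on a single channel, separated by the transmit/receive (TTG)
# and receive/transmit (RTG) transition gaps, and the downlink/uplink split can
# be adjusted to match traffic asymmetry. All durations are illustrative.

FRAME_MS = 10.0   # total frame duration
TTG_MS = 0.1      # gap after the downlink burst
RTG_MS = 0.1      # gap after the uplink burst

def build_tdd_frame(downlink_fraction):
    """Return (segment, start_ms, duration_ms) tuples for one TDD frame."""
    usable = FRAME_MS - TTG_MS - RTG_MS
    dl = usable * downlink_fraction
    ul = usable - dl
    t = 0.0
    schedule = []
    for name, dur in [("downlink", dl), ("TTG", TTG_MS),
                      ("uplink", ul), ("RTG", RTG_MS)]:
        schedule.append((name, round(t, 3), round(dur, 3)))
        t += dur
    return schedule

# Symmetric traffic: half of the usable frame in each direction.
for segment in build_tdd_frame(0.5):
    print(segment)

# Downlink-heavy traffic: dynamically allocate more capacity to the downlink.
for segment in build_tdd_frame(0.75):
    print(segment)
```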
https://en.wikipedia.org/wiki/Simplex_communication
Ad hoc Wireless Distribution Service(AWDS) is alayer 2routing protocol to connectmobile ad hoc networks, sometimes calledwireless mesh networks. It is based on alink-state routing protocol, similar toOLSR. AWDS uses alink-state routing protocolfor organizing the network. In contrast to other implementations likeOLSRit operates in layer 2. That means noIP addressesmust be assigned because the uniqueMAC addressesof theWLANhardware is used instead. Furthermore, all kinds of layer 3 protocols can be used, like IP, DHCP, IPv6, IPX, etc. The protocoldaemoncreates a virtual network interface, which can be used by thekernellike a typical LAN interface. Thelist of ad hoc routing protocolscontains a large set of alternatives. However, most of them are academic and do not exist as practical implementations.
https://en.wikipedia.org/wiki/Ad_hoc_wireless_distribution_service
Delay-tolerant networking(DTN) is an approach tocomputer networkarchitecture that seeks to address the technical issues inheterogeneous networksthat may lack continuous network connectivity. Examples of such networks are those operating in mobile or extreme terrestrial environments, or planned networks in space. Recently,[when?]the termdisruption-tolerant networkinghas gained currency in the United States due to support fromDARPA, which has funded many DTN projects. Disruption may occur because of the limits of wireless radio range, sparsity of mobile nodes, energy resources, attack, and noise. In the 1970s, spurred by thedecreasing size of computers, researchers began developing technology for routing between non-fixed locations of computers. While the field of ad hoc routing was inactive throughout the 1980s, the widespread use of wireless protocols reinvigorated the field in the 1990s asmobile ad hoc networking(MANET) andvehicular ad hoc networkingbecame areas of increasing interest. Concurrently with (but separate from) the MANET activities, DARPA had funded NASA, MITRE and others to develop a proposal for theInterplanetary Internet(IPN). Internet pioneerVint Cerfand others developed the initial IPN architecture, relating to the necessity of networking technologies that can cope with the significant delays and packet corruption of deep-space communications. In 2002,Kevin Fallstarted to adapt some of the ideas in the IPN design to terrestrial networks and coined the termdelay-tolerant networkingand the DTN acronym. A paper published in 2003 SIGCOMM conference gives the motivation for DTNs.[1]The mid-2000s brought about increased interest in DTNs, including a growing number ofacademic conferenceson delay and disruption-tolerant networking, and growing interest in combining work from sensor networks and MANETs with the work on DTN. This field saw many optimizations on classic ad hoc and delay-tolerant networking algorithms and began to examine factors such as security, reliability, verifiability, and other areas of research that are well understood in traditionalcomputer networking. The ability to transport, or route, data from a source to a destination is a fundamental ability all communication networks must have. Delay and disruption-tolerant networks (DTNs), are characterized by their lack of connectivity, resulting in a lack of instantaneous end-to-end paths. In these challenging environments, popular ad hoc routing protocols such asAODV[2]andDSR[3]fail to establish routes. This is due to these protocols trying to first establish a complete route and then, after the route has been established, forward the actual data. However, when instantaneous end-to-end paths are difficult or impossible to establish, routing protocols must take to a "store and forward" approach, where data is incrementally moved and stored throughout the network in hopes that it will eventually reach its destination.[4][5][6]A common technique used to maximize the probability of a message being successfully transferred is to replicate many copies of the message in the hope that one will succeed in reaching its destination.[7]This is feasible only on networks with large amounts of local storage and internode bandwidth relative to the expected traffic. In many common problem spaces, this inefficiency is outweighed by the increased efficiency and shortened delivery times made possible by taking maximum advantage of available unscheduled forwarding opportunities. 
In others, where available storage and internode throughput opportunities are more tightly constrained, a more discriminate algorithm is required. In efforts to provide a shared framework for algorithm and application development in DTNs,RFC4838and5050were published in 2007 to define a common abstraction to software running on disrupted networks. Commonly known as the Bundle Protocol, this protocol defines a series of contiguous data blocks as a bundle—where each bundle contains enough semantic information to allow the application to make progress where an individual block may not. Bundles areroutedin astore and forwardmanner between participatingnodesover varied network transport technologies (including bothIPand non-IPbased transports). The transport layers carrying the bundles across their local networks are calledbundle convergence layers.The bundle architecture therefore operates as anoverlay network, providing a new naming architecture based onEndpoint Identifiers(EIDs) and coarse-grainedclass of serviceofferings. Protocols using bundling must leverage application-level preferences for sending bundles across a network. Due to thestore and forwardnature of delay-tolerant protocols, routing solutions for delay-tolerant networks can benefit from exposure to application-layer information. For example, network scheduling can be influenced if application data must be received in its entirety, quickly, or without variation in packet delay. Bundle protocols collect application data into bundles that can be sent across heterogeneous network configurations with high-level service guarantees. The service guarantees are generally set by the application level, and theRFC5050Bundle Protocol specification includes "bulk", "normal", and "expedited" markings. In October 2014 the Internet Engineering Task Force (IETF) instantiated aDelay Tolerant Networking working groupto review and revise the protocol specified inRFC5050. The Bundle Protocol for CCSDS[8]is a profile of RFC 5050 specifically addressing the Bundle Protocol's utility for data communication in space missions. As of January 2022, the IETF published the following RFCs related to BPv7:RFC9171,9172,9173,9174. In January 2025,RFC9713was published, which updates RFC 9171. Addressing security issues has been a major focus of the bundle protocol. Possible attacks take the form of nodes behaving as a "black hole" or a "flooder".[9][10] Security concerns for delay-tolerant networks vary depending on the environment and application, thoughauthenticationandprivacyare often critical. These security guarantees are difficult to establish in a network without continuous bi-directional end-to-end paths between devices because the network hinders complicated cryptographic protocols, hinders key exchange, and each device must identify other intermittently visible devices.[11][12]Solutions have typically been modified frommobile ad hoc networkand distributed security research, such as the use of distributed certificate authorities[13]andPKIschemes. Original solutions from the delay-tolerant research community include: 1) the use ofidentity-based encryption, which allows nodes to receive information encrypted with their public identifier;[14]and 2) the use of tamper-evident tables with agossiping protocol;[15] There are a number of implementations of the Bundle Protocol: The main implementation of BPv6 are listed below. A number of other implementations exist. 
The draft of BPv7 lists the following implementations.[16] Various research efforts are currently investigating the issues involved with DTN: Some research efforts look at DTN for theInterplanetary Internetby examining use of the Bundle Protocol in space:
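As a rough illustration of the store-and-forward behaviour described above, the sketch below replicates a bundle opportunistically at each contact until a copy reaches the destination, even though no end-to-end path ever exists. This epidemic-style replication is only one simple strategy, and the node and bundle names are hypothetical; real Bundle Protocol implementations add custody, expiry, and routing policy on top of this idea.

```python
# Minimal sketch of store-and-forward bundle replication in a delay-tolerant
# network: without an instantaneous end-to-end path, each node stores bundles
# and copies them to nodes it happens to meet, so a copy eventually reaches
# the destination.

class Node:
    def __init__(self, name):
        self.name = name
        self.store = set()            # bundle identifiers held locally

    def create_bundle(self, bundle_id):
        self.store.add(bundle_id)

    def contact(self, other):
        """An opportunistic contact: both nodes exchange bundles they hold."""
        self.store |= other.store
        other.store |= self.store

# A bundle created at A must reach D, but A and D are never in contact.
a, b, c, d = (Node(n) for n in "ABCD")
a.create_bundle("bundle-42")

a.contact(b)          # A meets B: B now carries a copy
b.contact(c)          # later, B meets C
c.contact(d)          # later still, C meets the destination D

print("bundle-42" in d.store)   # True: delivered despite no end-to-end path
```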
https://en.wikipedia.org/wiki/Delay-tolerant_networking
In IEEE 802.11 wireless local area networking standards (includingWi‑Fi), aservice setis a group ofwireless networkdevices which share aservice set identifier(SSID)—typically the natural language label that users see as a network name. (For example, all of the devices that together form and use a Wi‑Fi network called "Foo" are a service set.) A service set forms a logical network ofnodesoperating with shared link-layer networking parameters; they form one logical network segment. A service set is either abasic service set(BSS) or anextended service set(ESS). Abasic service setis a subgroup, within a service set, of devices that share physical-layer medium access characteristics (e.g. radio frequency, modulation scheme, security settings) such that they are wirelessly networked. The basic service set is defined by abasic service set identifier(BSSID) shared by all devices within it. The BSSID is a 48-bit label that conforms to MAC-48 conventions. While a device may have multiple BSSIDs, usually each BSSID is associated with at most one basic service set at a time.[1] A basic service set should not be confused with the coverage area of an access point, known as thebasic service area(BSA).[2] Aninfrastructure BSSis created by an infrastructure device called anaccess point(AP) for other devices to join. (Note that the termIBSSisnotused for this type of BSS but refers to theindependenttype discussed below.) The operating parameters of the infrastructure BSS are defined by the AP.[3]The Wi‑Fi segments of common home and business networks are examples of this type. Each basic service set has a unique identifier, a BSSID, which is a 48-bit number that followsMAC addressconventions.[4]An infrastructure BSSID is usually non-configurable, in which case it is either preset during manufacture or mathematically derived from a preset value such as a serial number or a MAC address of another network interface. As with the MAC addresses used for Ethernet devices, an infrastructure BSSID is a combination of a 24-bit organizationally unique identifier (OUI, the manufacturer's identity) and a 24-bit serial number. A BSSID with a value of all 1s is used to indicate the wildcard BSSID, usable only during probe requests or for communications that take place outside the context of a BSS.[5] Anindependent BSS(IBSS), orad hoc network, is created by peer devices among themselves without network infrastructure.[6]A temporary network created by a cellular telephone to share its Internet access with other devices is a common example. In contrast to the stations in an infrastructure-mode network, the stations in awireless ad hoc networkcommunicate directly with one another, i.e. without a dependence on a distribution point to relay traffic between them.[7]In this form of peer-to-peer wireless networking, the peers form anindependent basic service set(IBSS).[8]Some of the responsibilities of a distribution point—such as defining network parameters and other "beaconing" functions—are established by the first station in an ad-hoc network. However, that station does not relay traffic between the other stations; instead, the peers communicate directly with one another. Like an infrastructure BSS, an independent BSS also has a 48-bit MAC-address-like identifier. 
But unlike infrastructure BSS identifiers, independent BSS identifiers are not necessarily unique: theindividual/groupbit of the address is always set to 0 (individual), theuniversal/localbit of the address is always set to 1 (local), and the remaining 46 bits are randomly generated.[5] Amesh basic service set(MBSS) is a self-contained network ofmesh stationsthat share amesh profile, defined in802.11s.[9]Each node may also be an access point hosting its own basic service set, for example using the mesh BSS to provide Internet access for local users. In such a system, the BSS created by the access point is distinct from the mesh network, and a wireless client of that BSS is not part of the MBSS. The formation of the mesh BSS, as well as wireless traffic management (including path selection and forwarding) is negotiated between thenodesof the mesh infrastructure. The mesh BSS is distinct from the networks (which may also be wireless) used by a mesh's redistribution points to communicate with one another.[citation needed] Theservice set identifier(SSID) defines or extends a service set. Normally it is broadcastin the clearby stations in beacon packets to announce the presence of a network and seen by users as a wireless network name. Unlike basic service set identifiers, SSIDs are usually customizable.[10]These SSIDs can be zero to 32octetslong,[11]and are, for convenience, usually in anatural language, such as English. The 802.11 standards prior to the 2012 edition did not define any particular encoding or representation for SSIDs, which were expected to be treated and handled as an arbitrary sequence of 0–32 octets that are not limited toprintable characters. IEEE Std 802.11-2012 defines a flag to express that the SSID isUTF-8-encoded and could contain anyUnicodetext.[12]Wireless network stacks must still be prepared to handle all possible values in the SSID field. Since the contents of an SSID field are arbitrary, the 802.11 standard permits devices to advertise the presence of a wireless network with beacon packets in which the SSID field is set to null.[13][n 1]A null SSID (the SSID element'slengthfield is set to zero[11]) is called awildcard SSIDin IEEE 802.11 standards documents,[14]and as ano broadcast SSIDorhidden SSIDin the context of beacon announcements,[13][15]and can be used, for example, in enterprise and mesh networks to steer a client to a particular (e.g. less utilized) access point.[13]A station may also likewise transmit packets in which the SSID field is set to null; this prompts an associated access point to send the station a list of supported SSIDs.[16]Once a device has associated with a basic service set, for efficiency, the SSID is not sent within packet headers; only BSSIDs are used for addressing. Apple'slocation servicesinterpret the SSID of aWi‑Fi access pointending in_nomapas anopt-outfrom being included in Apple's crowdsourced location databases.[17] Anextended service set(ESS) is a wireless network, created by multiple access points, which appears to users as a single, seamless network, such as a network covering a home or office that is too large for reliable coverage by a single access point. It is a set of one or more infrastructure basic service sets on a commonlogical network segment(i.e. 
same IP subnet and VLAN).[18]Key to the concept is that the participating basic service sets appear as a single network to thelogical link controllayer by using the same SSID.[18][19]Thus, from the perspective of the logical link control layer, stations within an ESS may communicate with one another, and mobile stations may move transparently from one participating basic service set to another (within the same ESS).[19]Extended service sets make possible distribution services such as centralized authentication. From the perspective of the link layer, all stations within an ESS are all on the same link, and transfer from one BSS to another is transparent to logical link control.[20] The basic service sets formed inwireless ad hoc networksare, by definition, independent from other BSSs, and an independent BSS cannot therefore be part of an extended infrastructure.[21]In that formal sense an independent BSS has no extended service set. However, the network packets of both independent BSSs and infrastructure BSSs have a logical network service set identifier, and the logical link control does not distinguish between the use of that field to name an ESS network, and the use of that field to name a peer-to-peer ad hoc network. The two are effectively indistinguishable at the logical link control layer level.[20]
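A small sketch of how the SSID element described above might be read from a beacon: the element carries a tag number, a length of 0 to 32, and that many arbitrary octets, where a zero-length value is the wildcard (hidden) SSID and a UTF-8 interpretation is only one possibility. The parsing below is deliberately simplified and assumes the element bytes have already been extracted from the frame.

```python
# Minimal sketch of reading an SSID information element: element ID, a length
# field (0-32), and that many octets of SSID, which may be arbitrary bytes.
# A zero-length SSID is the wildcard/"hidden" SSID. Real beacon frames contain
# many more fields that are ignored here.

def parse_ssid_element(element: bytes):
    if len(element) < 2 or element[0] != 0:      # element ID 0 = SSID
        raise ValueError("not an SSID element")
    length = element[1]
    if length > 32 or len(element) < 2 + length:
        raise ValueError("malformed SSID element")
    ssid = element[2:2 + length]
    if length == 0:
        return None                              # wildcard / hidden SSID
    try:
        return ssid.decode("utf-8")              # UTF-8-flagged SSIDs
    except UnicodeDecodeError:
        return ssid                              # arbitrary octets: keep raw bytes

print(parse_ssid_element(bytes([0, 3]) + b"Foo"))   # 'Foo'
print(parse_ssid_element(bytes([0, 0])))            # None (hidden SSID)
```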
https://en.wikipedia.org/wiki/Independent_basic_service_set
Anad hoc routing protocolis a convention, or standard, that controls hownodesdecide which way toroutepacketsbetween computing devices in amobile ad hoc network. In ad hoc networks, nodes are not familiar with thetopologyof their networks. Instead, they have to discover it: typically, a new node announces its presence and listens for announcements broadcast by its neighbors. Each node learns about others nearby and how to reach them, and may announce that it too can reach them. Note that in a wider sense,ad hoc protocolcan also be used literally, to mean an improvised and often impromptuprotocolestablished for a specific purpose. The following is a list of some ad hoc network routing protocols. This type of protocols maintains fresh lists of destinations and their routes by periodically distributing routing tables throughout the network. The main disadvantages of such algorithms are: Examples of proactive algorithms are: This type of protocol finds a route on demand by flooding the network with Route Request packets. The main disadvantages of such algorithms are: Examples of on-demand algorithms are: This type of protocol combines the advantages of proactive and reactive routing. The routing is initially established with some proactively prospected routes and then serves the demand from additionally activated nodes through reactive flooding. The choice of one or the other method requires predetermination for typical cases. The main disadvantages of such algorithms are: Examples of hybrid algorithms are: With this type of protocol the choice of proactive and of reactive routing depends on the hierarchic level in which a node resides. The routing is initially established with some proactively prospected routes and then serves the demand from additionally activated nodes through reactive flooding on the lower levels. The choice for one or the other method requires proper attributation for respective levels. The main disadvantages of such algorithms are: Examples of hierarchical routing algorithms are:
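To illustrate the on-demand route discovery described above, the sketch below floods a route request through a small assumed topology, lets each node rebroadcast it once, and returns the first path that reaches the destination. Real reactive protocols such as AODV add sequence numbers, route replies, and timeouts that are omitted here.

```python
# Minimal sketch of on-demand (reactive) route discovery: the source floods a
# Route Request through the ad hoc network, each node rebroadcasts it once,
# and the path recorded in the request that first reaches the destination
# becomes the route. The topology and node names are illustrative.

from collections import deque

def discover_route(neighbors, source, destination):
    """Flood a route request and return the first complete path found."""
    seen = {source}
    queue = deque([[source]])          # each entry is the path the RREQ took
    while queue:
        path = queue.popleft()
        node = path[-1]
        if node == destination:
            return path                # a route reply would travel back this path
        for nxt in neighbors.get(node, []):
            if nxt not in seen:        # nodes rebroadcast an RREQ only once
                seen.add(nxt)
                queue.append(path + [nxt])
    return None                        # destination unreachable

topology = {
    "A": ["B", "C"],
    "B": ["A", "D"],
    "C": ["A", "D"],
    "D": ["B", "C", "E"],
    "E": ["D"],
}

print(discover_route(topology, "A", "E"))   # ['A', 'B', 'D', 'E']
```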
https://en.wikipedia.org/wiki/List_of_ad_hoc_routing_protocols
A mobile wireless sensor network (MWSN)[1] can simply be defined as a wireless sensor network (WSN) in which the sensor nodes are mobile. MWSNs are a smaller, emerging field of research in contrast to their well-established predecessor. MWSNs are much more versatile than static sensor networks, as they can be deployed in any scenario and cope with rapid topology changes. However, many of their applications are similar, such as environment monitoring or surveillance. Commonly, the nodes consist of a radio transceiver and a microcontroller powered by a battery, as well as some kind of sensor for detecting light, heat, humidity, temperature, etc. Broadly speaking, there are two sets of challenges in MWSNs: hardware and environment. The main hardware constraints are limited battery power and low cost requirements. The limited power means that it is important for the nodes to be energy efficient. Price limitations often demand low-complexity algorithms for simpler microcontrollers and the use of only a simplex radio. The major environmental factors are the shared medium and the varying topology. The shared medium dictates that channel access must be regulated in some way. This is often done using a medium access control (MAC) scheme, such as carrier-sense multiple access (CSMA), frequency-division multiple access (FDMA) or code-division multiple access (CDMA). The varying topology of the network comes from the mobility of the nodes, which means that multihop paths from the sensors to the sink are not stable. Currently there is no standard for MWSNs, so protocols are often borrowed from MANETs, such as Associativity-Based Routing (AR), Ad hoc On-Demand Distance Vector Routing (AODV), Dynamic Source Routing (DSR) and Greedy Perimeter Stateless Routing (GPSR).[2] MANET protocols are preferred as they are able to work in mobile environments, whereas WSN protocols are often not suitable. Topology selection plays an important role in routing because the network topology determines the transmission path of the data packets to the destination. Not all topologies (flat/unstructured, cluster, tree, chain and hybrid) are feasible for reliable data transmission when the sensor nodes are mobile; hybrid topologies generally perform better for data collection than any single topology. Hybrid topology management schemes include the Cluster Independent Data Collection Tree (CIDT)[3] and the Velocity Energy-efficient and Link-aware Cluster-Tree (VELCT);[4] both have been proposed for mobile wireless sensor networks (MWSNs). Since there is no fixed topology in these networks, one of the greatest challenges is routing data from its source to the destination. Generally these routing protocols draw inspiration from two fields: WSNs and mobile ad hoc networks (MANETs). WSN routing protocols provide the required functionality but cannot handle the high frequency of topology changes, whereas MANET routing protocols can deal with mobility in the network but are designed for two-way communication, which in sensor networks is often not required.[5] Protocols designed specifically for MWSNs are almost always multihop and are sometimes adaptations of existing protocols. For example, Angle-based Dynamic Source Routing (ADSR)[6] is an adaptation of the wireless mesh network protocol Dynamic Source Routing (DSR) for MWSNs. ADSR uses location information to work out the angle between the node intending to transmit, potential forwarding nodes and the sink. This is then used to ensure that packets are always forwarded towards the sink.
Also, the Low Energy Adaptive Clustering Hierarchy (LEACH) protocol for WSNs has been adapted into LEACH-M (LEACH-Mobile)[7] for MWSNs. The main issue with hierarchical protocols is that mobile nodes are prone to frequently switching between clusters, which can cause large amounts of overhead from the nodes having to regularly re-associate themselves with different cluster heads. Another popular routing technique is to utilise location information from a GPS module attached to the nodes. This can be seen in protocols such as Zone Based Routing (ZBR),[8] which defines clusters geographically and uses the location information to keep nodes updated with the cluster they are in. In comparison, Geographically Opportunistic Routing (GOR)[9] is a flat protocol that divides the network area into grids and then uses the location information to opportunistically forward data as far as possible in each hop. Multipath protocols provide a robust mechanism for routing and therefore seem like a promising direction for MWSN routing protocols. One such protocol is the query-based Data Centric Braided Multipath (DCBM).[10] Furthermore, Robust Ad-hoc Sensor Routing (RASeR)[11] and Location Aware Sensor Routing (LASeR)[12] are two protocols designed specifically for high-speed MWSN applications, such as those that incorporate UAVs. Both take advantage of multipath routing, which is facilitated by a 'blind forwarding' technique. Blind forwarding simply allows the transmitting node to broadcast a packet to its neighbors; it is then the responsibility of the receiving nodes to decide whether they should forward the packet or drop it. The decision of whether or not to forward a packet is made using a network-wide gradient metric, such that the values of the transmitting and receiving nodes are compared to determine which is closer to the sink. The key difference between RASeR and LASeR is in the way they maintain their gradient metrics: RASeR uses the regular transmission of small beacon packets, in which nodes broadcast their current gradient, whereas LASeR relies on geographical location information that is already present on the mobile sensor node, as is likely the case in many applications. There are three types of medium access control (MAC) techniques: those based on time division, frequency division and code division. Due to the relative ease of implementation, the most common choice of MAC is time-division-based, closely related to the popular CSMA/CA MAC. The vast majority of MAC protocols that have been designed with MWSNs in mind are adapted from existing WSN MACs and focus on low power consumption and duty-cycled schemes. Protocols designed for MWSNs are usually validated with analytical, simulation or experimental results. Detailed analytical results are mathematical in nature and can provide good approximations of protocol behaviour. Simulations can be performed using software such as OPNET, NetSim and ns2; simulation is the most common method of validation and can provide close approximations of the real behaviour of a protocol under various scenarios. Physical experiments are the most expensive to perform and, unlike the other two methods, require no assumptions to be made. This makes them the most reliable form of information when determining how a protocol will perform under certain conditions. Allowing the sensors to be mobile increases the number of applications beyond those for which static WSNs are used.
Sensors can be attached to a number of platforms: In order to characterise the requirements of an application, it can be categorised as either constant monitoring, event monitoring, constant mapping or event mapping.[1]Constant type applications are time-based and as such data is generated periodically, whereas event type applications are event drive and so data is only generated when an event occurs. The monitoring applications are constantly running over a period of time, whereas mapping applications are usually deployed once in order to assess the current state of a phenomenon. Examples of applications include health monitoring, which may include heart rate, blood pressure etc.[13]This can be constant, in the case of a patient in a hospital, or event driven in the case of a wearable sensor that automatically reports your location to an ambulance team in the case of an emergency. Animals can have sensors attached to them in order to track their movements for migration patterns, feeding habits or other research purposes.[14]Sensors may also be attached tounmanned aerial vehicles(UAVs) for surveillance or environment mapping.[15]In the case of autonomous UAV aided search and rescue, this would be considered an event mapping application, since the UAVs are deployed to search an area but will only transmit data back when a person has been found.
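The gradient-based "blind forwarding" decision described above can be sketched as follows: the transmitter broadcasts a packet stamped with its own gradient metric (an assumed hop count to the sink), and each receiver forwards the packet only if its own gradient indicates it is closer to the sink. The gradient values and node names are illustrative assumptions.

```python
# Minimal sketch of a blind-forwarding decision: the transmitting node
# broadcasts a packet carrying its own gradient metric (here, an assumed
# hop count to the sink), and each receiving node relays the packet only
# if its own gradient shows it is strictly closer to the sink.

def should_forward(receiver_gradient, packet_gradient):
    """A receiver relays the packet only if it is strictly closer to the sink."""
    return receiver_gradient < packet_gradient

# The transmitter is 3 hops from the sink and broadcasts to three neighbours.
packet = {"payload": "temp=21.5C", "gradient": 3}
neighbours = {"n1": 2, "n2": 4, "n3": 3}   # each neighbour's own gradient

for name, gradient in neighbours.items():
    if should_forward(gradient, packet["gradient"]):
        # the forwarder re-stamps the packet with its own, smaller gradient
        relayed = dict(packet, gradient=gradient)
        print(f"{name} forwards {relayed}")
    else:
        print(f"{name} drops the packet")
```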
https://en.wikipedia.org/wiki/Mobile_wireless_sensor_network
Apersonal area network(PAN) is acomputer networkfor interconnectingelectronic deviceswithin an individual person's workspace.[1]A PAN providesdata transmissionamong devices such ascomputers,smartphones,tabletsandpersonal digital assistants. PANs can be used for communication among the personal devices themselves, or for connecting to a higher level network and the Internet where one master device takes up the role asgateway. A PAN may be carried over wired interfaces such asUSB, but is predominantly carried wirelessly, also called awireless personal area network(WPAN). A PAN is wirelessly carried over a low-powered, short-distancewireless networktechnology such asIrDA,Wireless USB,Bluetooth,NearLinkorZigbee. The reach of a WPAN varies from a few centimeters to a few meters.WPANs specifically tailored for low-power operation of the sensors are sometimes also calledlow-power personal area network(LPPAN) to better distinguish them fromlow-power wide-area network(LPWAN). Wired personal area networks provide short connections between peripherals. Example technologies includeUSB,IEEE 1394andThunderbolt.[citation needed] A wireless personal area network (WPAN) is a personal area network in which the connections are wireless.IEEE 802.15has produced standards for several types of PANs operating in theISM bandincludingBluetooth. TheInfrared Data Association(IrDA) has produced standards for WPANs that operate usinginfrared communications. Bluetooth uses short-range radio waves. Uses in a WPAN include, for example, Bluetooth devices such as keyboards, pointing devices, audio headsets, and printers that may connect tosmartwatches,cell phones, or computers. A Bluetooth WPAN is also called apiconet, and is composed of up to 8 active devices in a master-slave relationship (a very large number of additional devices can be connected inparkedmode). The first Bluetooth device in the piconet is the master, and all other devices are slaves that communicate with the master. A piconet typically has a range of 10 metres (33 ft), although ranges of up to 100 metres (330 ft) can be reached under ideal circumstances. Long-range Bluetooth routers with augmented antenna arrays connect Bluetooth devices up to 1,000 feet (300 m).[2] WithBluetooth mesh networkingthe range and number of devices is extended by usingmesh networkingtechniques to relay information from one device to another. Such a network doesn't have a master device and may or may not be treated as a WPAN.[3] IrDA usesinfraredlight, which has a frequency below the human eye's sensitivity. Infrared is used in other wireless communications applications, for instance, inremote controls. Typical WPAN devices that use IrDA include printers, keyboards, and otherserial communicationinterfaces.[4] Media related toPersonal area networks (PAN)at Wikimedia Commons
https://en.wikipedia.org/wiki/Personal_area_network
Asmart meteris anelectronicdevice that records information—such as consumption ofelectric energy, voltage levels, current, and power factor—andcommunicates the informationto the consumer andelectricity suppliers. Advanced metering infrastructure (AMI) differs fromautomatic meter reading(AMR) in that it enables two-way communication between the meter and the supplier. The termsmart meteroften refers to anelectricity meter, but it also may mean a device measuringnatural gas,waterordistrict heatingconsumption.[1][2]More generally, a smart meter is anelectronicdevice that records information such as consumption ofelectric energy, voltage levels, current, and power factor. Smart meterscommunicate the informationto the consumer for greater clarity of consumption behavior, andelectricity suppliersfor system monitoring and customer billing. Smart meters typically record energy near real-time, and report regularly, in short intervals throughout the day.[3]Smart meters enable two-way communication between the meter and the central system. Smart meters may be part of asmart grid, but do not themselves constitute a smart grid.[4] AMI differs from AMR in that it enables two-way communication between the meter and the supplier. Communications from the meter to the network may be wireless, or via fixed wired connections such aspower-line communication(PLC). Wireless communication options in common use include cellular communications, Wi-Fi (readily available),wireless ad hoc networksover Wi-Fi,wireless mesh networks,low power long-range wireless (LoRa),Wize(high radio penetration rate, open, using the frequency 169 MHz)Zigbee(low power, low data rate wireless), and Wi-SUN (Smart Utility Networks). Similar meters, usually referred to asintervalor time-of-use meters, have existed for years, but smart meters usually involve real-time or near real-time sensors,power outagenotification, and power quality monitoring. These additional features are more than simple AMR. They are similar in many respects to AMI meters. Interval and time-of-use meters historically have been installed to measure commercial and industrial customers, but may not have automatic reading.[citation needed]Research by the UK consumer groupWhich?, showed that as many as one in three confuse smart meters withenergy monitors, also known as in-home display monitors.[5][when?] In 1972,Theodore Paraskevakos, while working withBoeinginHuntsville, Alabama, developed a sensor monitoring system that used digital transmission for security, fire, and medical alarm systems as well as meter reading capabilities. This technology was a spin-off from the automatic telephone line identification system, now known asCaller ID. In 1974, Paraskevakos was awarded a U.S. patent for this technology.[6]In 1977, he launched Metretek, Inc.,[7]which developed and produced the first smart meters.[8]Since this system was developed pre-Internet, Metretek utilized the IBM series 1 mini-computer. For this approach, Paraskevakos and Metretek were awarded multiple patents.[9] The installed base of smart meters in Europe at the end of 2008 was about 39 million units, according to analyst firm Berg Insight.[10]Globally, Pike Research found that smart meter shipments were 17.4 million units for the first quarter of 2011.[11]Visiongain determined that the value of the global smart meter market would reachUS$7 billion in 2012.[12] H.M. Zahid Iqbal, M. Waseem, and Dr. 
Tahir Mahmood, researchers of University of Engineering & Technology Taxila, Pakistan, introduced the concept of Smart Energy Meters in 2013. Their article, "Automatic Energy Meter Reading using Smart Energy Meter" outlined the key features of Smart Energy Meter including Automatic remote meter reading via GSM for utility companies and customers, Real-time monitoring of a customer's running load, Remote disconnection and reconnection of customer connections by the utility company and Convenient billing, eliminating the need of meter readers to physically visit the customers for billing. As of January 2018,[update]over 99 million electricity meters were deployed across the European Union, with an estimated 24 million more to be installed by the end of 2020. The European CommissionDG Energyestimates the 2020 installed base to have required €18.8 billion in investment, growing to €40.7 billion by 2030, with a total deployment of 266 million smart meters.[13] By the end of 2018, the U.S. had over 86 million smart meters installed.[14]In 2017, there were 665 million smart meters installed globally.[15]Revenue generation is expected to grow from $12.8 billion in 2017 to $20 billion by 2022.[16] Since the inception of electricityderegulationand market-driven pricing throughout the world, utilities have been looking for a means to match consumption with generation. Non-smart electrical and gas meters only measure total consumption, providing no information of when the energy was consumed.[17]Smart meters provide a way of measuring electricity consumption in near real-time. This allows utility companies to charge different prices for consumption according to the time of day and the season.[18]It also facilitates more accurate cash-flow models for utilities. Since smart meters can be read remotely, labor costs are reduced for utilities. Smart metering offers potential benefits to customers. These include, a) an end to estimated bills, which are a major source of complaints for many customers b) a tool to help consumers better manage their energy purchases—smart meters with a display outside their homes could provide up-to-date information on gas and electricity consumption and in doing so help people to manage their energy use and reduce their energy bills. With regards to consumption reduction, this is critical for understanding the benefits of smart meters because the relatively small percentage benefits in terms of savings are multiplied by millions of users.[19]Smart meters for water consumption can also provide detailed and timely information about customer water use and early notification of possible water leaks in their premises.[20]Electricity pricing usually peaks at certain predictable times of the day and the season. In particular, if generation is constrained, prices can rise if power from other jurisdictions or more costly generation is brought online. 
Proponents assert that billing customers at a higher rate for peak times encourages consumers to adjust their consumption habits to be more responsive to market prices and assert further, that regulatory and market design agencies hope these "price signals" could delay the construction of additional generation or at least the purchase of energy from higher-priced sources, thereby controlling the steady and rapid increase of electricity prices.[citation needed] An academic study based on existing trials showed that homeowners' electricity consumption on average is reduced by approximately 3-5% when provided with real-time feedback.[21] Another advantage of smart meters that benefits both customers and the utility is the monitoring capability they provide for the whole electrical system. As part of an AMI, utilities can use the real-time data from smart meters measurements related to current, voltage, and power factor to detect system disruptions more quickly, allowing immediate corrective action to minimize customer impact such as blackouts. Smart meters also help utilities understand the power grid needs with more granularity than legacy meters. This greater understanding facilitates system planning to meet customer energy needs while reducing the likelihood of additional infrastructure investments, which eliminates unnecessary spending or energy cost increases.[22] Though the task of meeting national electricity demand with accurate supply is becoming ever more challenging as intermittent renewable generation sources make up a greater proportion of the energy mix, the real-time data provided by smart meters allow grid operators to integrate renewable energy onto the grid in order to balance the networks. As a result, smart meters are considered an essential technology to the decarbonisation of the energy system.[23] Advanced metering infrastructure(AMI) refers to systems that measure, collect, and analyze energy usage, and communicate with metering devices such as electricity meters, gas meters, heat meters, and water meters, either on request or on a schedule. These systems include hardware, software, communications, consumer energy displays and controllers, customer associated systems,meter data managementsoftware, and supplier business systems. Government agencies and utilities are turning toward advanced metering infrastructure (AMI) systems as part of larger "smart grid" initiatives. AMI extends automatic meter reading (AMR) technology by providing two-way meter communications, allowing commands to be sent toward the home for multiple purposes, includingtime-based pricinginformation,demand-responseactions, or remote service disconnects. Wireless technologies are critical elements of the neighborhood network, aggregating a mesh configuration of up to thousands of meters for back haul to the utility's IT headquarters. The network between the measurement devices and business systems allows the collection and distribution of information to customers, suppliers,utility companies, and service providers. This enables these businesses to participate in demand response services. Consumers can use the information provided by the system to change their normal consumption patterns to take advantage of lower prices. Pricing can be used to curb the growth ofpeak demandconsumption. AMI differs from traditionalautomatic meter reading(AMR) in that it enables two-way communications with the meter. 
Systems only capable of meter readings do not qualify as AMI systems.[24] AMI implementation relies on four key components: physical layer connectivity, which establishes connections between smart meters and networks; communication protocols, which ensure secure and efficient data transmission; server infrastructure, which consists of centralized or distributed servers to store, process, and manage data for billing, monitoring, and demand response; and data analysis, where analytical tools provide insights, load forecasting, and anomaly detection for optimized energy management. Together, these components help utilities and consumers monitor and manage energy use efficiently, supporting smarter grid management.[25] Communication is a cornerstone of smart meter technology, enabling reliable and secure data transmission to central systems. However, the diversity of environments in which smart meters operate presents significant challenges. Solutions to these challenges encompass a range of communication methods,[26] including power-line communication[27] (PLC), cellular networks,[28] wireless mesh networks,[29] short-range radio,[29] and satellite.[citation needed] Additional options, such as Wi-Fi[citation needed] and internet-based networks, are also in use. However, no single communication solution is universally optimal. The challenges faced by rural utilities differ significantly from those of urban counterparts or utilities in remote, mountainous, or poorly serviced areas. Smart meters often extend their functionality through integration into Home Area Networks (HANs), which enable communication within the household. Technologies used in HANs vary globally but typically include PLC, wireless ad hoc networks, and Zigbee. By leveraging appropriate connectivity solutions, smart meters can address diverse environmental and infrastructural needs while delivering seamless communication and enhanced functionality.[citation needed] Electricity smart meters are starting to be used as gateways for gas and water meters, creating integrated smart metering systems.[36] In this configuration, gas and water meters communicate with the electricity meter using Wireless M-Bus (Wireless Meter-Bus), a European standard (EN 13757-4) designed for secure and efficient data transmission between utility meters and data collectors. The electricity meter then aggregates this data and transmits it to the central utility network via power line communication (PLC), which leverages existing electrical wiring for data transfer. Smart meter communication protocols are essential for enabling reliable, efficient, and secure data exchange between meters, utilities, and other components of advanced metering infrastructure (AMI). These protocols address the diverse requirements of global markets, supporting various communication methods, from optical ports and serial connections to power line communication (PLC) and wireless networks. Below is an overview of key protocols, including ANSI standards widely used in North America, IEC protocols prevalent in Europe, the globally recognized OSGP for smart grid applications, and the PLC-focused Meters and More, each designed to meet specific needs in energy monitoring and management. "IEC 62056 is the most widely adopted protocol"[37] for smart meter communication, enabling reliable, two-way data exchange within advanced metering infrastructure (AMI) systems. It encompasses the DLMS/COSEM protocol for structuring and managing metering data.
"It is widely used because of its flexibility, scalability, and ability to support different communication media such as Power Line Communication (PLC), TCP/IP, and wireless networks.".[37]It also supports data transmission over serial connections using ASCII or binary formats, with physical media options such as modulated light (via LED and photodiode) or wired connections (typically EIA-485).[38] ANSI C12.18 is anANSIStandard that describes aprotocolused for two-way communications with a meter, mostly used in North American markets. The C12.18 Standard is written specifically for meter communications via an ANSI Type 2 Optical Port, and specifies lower-level protocol details.ANSI C12.19specifies the data tables that are used.ANSI C12.21is an extension of C12.18 written for modem instead of optical communications, so it is better suited toautomatic meter reading. ANSI C12.22 is the communication protocol for remote communications.[39] TheOpen Smart Grid Protocol(OSGP) is a family of specifications published by theEuropean Telecommunications Standards Institute(ETSI) used in conjunction with the ISO/IEC 14908 control networking standard for smart metering and smart grid applications. Millions of smart meters based on OSGP are deployed worldwide.[40]On July 15, 2015, the OSGP Alliance announced the release of a new security protocol (OSGP-AES-128-PSK) and its availability from OSGP vendors.[41]This deprecated the original OSGP-RC4-PSK security protocol which had been identified to be vulnerable.[42][43] "Meters and More was created in 2010 from the coordinated work between Enel and Endesa to adopt, maintain and evolve the field-proven Meters and More open communication protocol for smart grid solutions." .[44]In 2010, the Meters and More Association was established to promote the protocol globally, ensuring interoperability and efficiency in power line communication (PLC)-based smart metering systems. Meters and More is an open communication protocol designed for advanced metering infrastructure (AMI). It facilitates reliable, high-speed data exchange over PLC networks, focusing on energy monitoring, demand response, and secure two-way communication between utilities and consumers. Unlike DLMS/COSEM, which is a globally standardized and versatile protocol supporting multiple utilities (electricity, gas, and water), Meters and More is tailored specifically for PLC-based systems, emphasizing efficiency, reliability, and ease of deployment in electricity metering. There is a growing trend toward the use ofTCP/IPtechnology as a common communication platform for Smart Meter applications, so that utilities can deploy multiple communication systems, while using IP technology as a common management platform.[45][46]A universal metering interface would allow for development and mass production of smart meters and smart grid devices prior to the communication standards being set, and then for the relevant communication modules to be easily added or switched when they are. This would lower the risk of investing in the wrong standard as well as permit a single product to be used globally even if regional communication standards vary.[47] In Advanced Metering Infrastructure (AMI), the server infrastructure is crucial for managing, storing, and processing the large volumes of data generated by smart meters. This infrastructure ensures seamless communication between smart meters, utility providers, and end-users, supporting real-time monitoring, billing, and grid management. 
Key Components of AMI Server Infrastructure

Data analytics for smart meters leverages machine learning to extract insights from energy consumption data. Key applications include demand forecasting, dynamic pricing, energy disaggregation, and fault detection, enabling optimized grid performance and personalized energy management. These techniques drive efficiency, cost savings, and sustainability in modern energy systems. "Energy disaggregation, or the breakdown of your energy use based on specific appliances or devices",[51] is an exploratory technique for analyzing energy consumption in households, commercial buildings, and industrial settings. Using data from a single energy meter, it employs algorithms and machine learning to estimate individual appliance usage without separate monitors. Known as non-intrusive load monitoring (NILM), this emerging method offers insights into energy efficiency, helping users optimize usage and reduce costs. While promising, energy disaggregation is still being refined for accuracy and scalability as part of smart energy management innovations.[52] The other critical technology for smart meter systems is the information technology at the utility that integrates the smart meter networks with utility applications, such as billing and CIS. This includes the meter data management system. It is also essential for smart grid implementations that power line communication (PLC) technologies used within the home over a Home Area Network (HAN) are standardized and compatible. The HAN allows HVAC systems and other household appliances to communicate with the smart meter, and from there with the utility. Currently there are several broadband or narrowband standards in place, or being developed, that are not yet compatible. To address this issue, the National Institute of Standards and Technology (NIST) established the PAP15 group, which studies and recommends coexistence mechanisms with a focus on the harmonization of PLC standards for the HAN. The objective of the group is to ensure that, at a minimum, all PLC technologies selected for the HAN coexist. The two leading broadband PLC technologies selected are the HomePlug AV/IEEE 1901 and ITU-T G.hn technologies.[53] Technical working groups within these organizations are working to develop appropriate coexistence mechanisms. The HomePlug Powerline Alliance has developed a new standard for smart grid HAN communications called the HomePlug Green PHY specification. It is interoperable and coexistent with the widely deployed HomePlug AV technology and with the latest IEEE 1901 global standard, and is based on broadband OFDM technology. In 2010, ITU-T commissioned a new project called G.hnem to address the home networking aspects of energy management, built upon existing low-frequency narrowband OFDM technologies. Some groups have expressed concerns regarding the cost, health, fire risk,[54] security and privacy effects of smart meters[55] and the remote controllable "kill switch" that is included with most of them. Many of these concerns regard wireless-only smart meters with no home energy monitoring, control, or safety features. Metering-only solutions, while popular with utilities because they fit existing business models and have cheap up-front capital costs, often result in such "backlash". Often the entire smart grid and smart building concept is discredited in part by confusion about the difference between home control and home area network technology and AMI.
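To make the energy disaggregation (NILM) technique described earlier in this section more tangible, here is a deliberately simplified sketch of the underlying intuition: step changes in whole-home power are matched against known appliance signatures. Real NILM systems rely on far more sophisticated statistical or machine-learning models; the signature values, tolerance, and function names below are illustrative assumptions only.

```python
# Assumed appliance signatures: typical step change in watts when switching on/off.
SIGNATURES = {
    "kettle": 2000,
    "fridge compressor": 120,
    "television": 80,
}
TOLERANCE = 0.15  # accept a 15% mismatch between observed and expected step

def label_step(delta_watts: float):
    """Guess which appliance caused a power step, or return None if nothing matches."""
    for name, watts in SIGNATURES.items():
        if abs(abs(delta_watts) - watts) <= TOLERANCE * watts:
            return ("on" if delta_watts > 0 else "off", name)
    return None

def disaggregate(power_series):
    """Scan consecutive whole-home power samples and label matching step changes."""
    events = []
    for prev, curr in zip(power_series, power_series[1:]):
        event = label_step(curr - prev)
        if event:
            events.append(event)
    return events

# Example: baseline load, kettle switches on, then off, then the fridge compressor starts.
samples = [300, 310, 2310, 2305, 300, 425]
print(disaggregate(samples))
# [('on', 'kettle'), ('off', 'kettle'), ('on', 'fridge compressor')]
```

The same property that makes this useful for energy advice is what drives the privacy concerns discussed later in this article: appliance-level activity can be inferred from a single whole-home measurement stream.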
The now-former attorney general of Connecticut has stated that he does not believe smart meters provide any financial benefit to consumers,[56] yet the cost of installing the new system is absorbed by those same customers. Smart meters expose the power grid to cyberattacks that could lead to power outages, both by cutting off people's electricity[57] and by overloading the grid.[58] However, many cybersecurity experts state that smart meters in the UK and Germany have relatively high cybersecurity and that any such attack there would thus require extraordinarily high effort or financial resources.[59][60][61] The EU Cybersecurity Act took effect in June 2019, alongside the Directive on Security of Network and Information Systems, which establishes notification and security requirements for operators of essential services.[62] Through the Smartgrid Cybersecurity Committee, the U.S. Department of Energy published cybersecurity guidelines for grid operators in 2010 and updated them in 2014. The guidelines "...present an analytical framework that organizations can use to develop effective cybersecurity strategies..."[63] Implementing security protocols that protect these devices from malicious attacks has been problematic, due to their limited computational resources and long operational life.[64] The current version of IEC 62056 includes the possibility to encrypt, authenticate, or sign the meter data. One proposed smart meter data verification method involves analyzing the network traffic in real time to detect anomalies using an intrusion detection system (IDS). By identifying exploits as they are being leveraged by attackers, an IDS mitigates the suppliers' risks of energy theft by consumers and denial-of-service attacks by hackers.[65] Energy utilities must choose between a centralized IDS, an embedded IDS, or a dedicated IDS, depending on the individual needs of the utility. Researchers have found that for a typical advanced metering infrastructure, the centralized IDS architecture is superior in terms of cost efficiency and security gains.[64] In the United Kingdom, the Data Communication Company, which transports the commands from the supplier to the smart meter, performs an additional anomaly check on commands issued (and signed) by the energy supplier. Because smart meters are intelligent measurement devices that periodically record measured values and send the data, encrypted, to the service provider, in Switzerland these devices must be evaluated by an evaluation laboratory and certified by METAS from 1 January 2020 according to the Prüfmethodologie (Test Methodology for Execution of Data Security Evaluation of Swiss Smart Metering Components). According to a report published by Brian Krebs, in 2009 a Puerto Rico electricity supplier asked the FBI to investigate large-scale thefts of electricity related to its smart meters.
The FBI found that former employees of the power company and the company that made the meters were being paid by consumers to reprogram the devices to show incorrect results, as well as teaching people how to do it themselves.[66] Several hacking tools that allow security researchers and penetration testers to verify the security of electric utility smart meters have been released so far.[67] Most health concerns about the meters arise from the pulsed radiofrequency (RF) radiation emitted by wireless smart meters.[68] Members of the California State Assembly asked the California Council on Science and Technology (CCST) to study the issue of potential health impacts from smart meters, in particular whether current FCC standards are protective of public health.[69] The CCST report in April 2011 found no health impacts, based both on the lack of scientific evidence of harmful effects from radio frequency (RF) waves and on the fact that the RF exposure of people in their homes to smart meters is likely to be minuscule compared to RF exposure from cell phones and microwave ovens.[70] Daniel Hirsch, retired director of the Program on Environmental and Nuclear Policy at UC Santa Cruz, criticized the CCST report on the grounds that it did not consider studies that suggest the potential for non-thermal health effects such as latent cancers from RF exposure. Hirsch also stated that the CCST report failed to correct errors in its comparison to cell phones and microwave ovens and that, when these errors are corrected, smart meters "may produce cumulative whole-body exposures far higher than that of cell phones or microwave ovens."[71] The Federal Communications Commission (FCC) has adopted a recommended Permissible Exposure Limit (PEL) for all RF transmitters (including smart meters) operating at frequencies of 300 kHz to 100 GHz. These limits, based on field strength and power density, are below the levels of RF radiation that are hazardous to human health.[72] Other studies substantiate the finding of the California Council on Science and Technology (CCST). In 2011, the Electric Power Research Institute performed a study to gauge human exposure to smart meters as compared to the FCC PEL. The report found that most smart meters only transmit RF signals 1% of the time or less. At this rate, and at a distance of 1 foot from the meter, RF exposure would be at a rate of 0.14% of the FCC PEL.[73] An indirect potential for harm to health from smart meters is that they enable energy companies to disconnect consumers remotely, typically in response to difficulties with payment. This can cause health problems for vulnerable people in financial difficulty; in addition to denial of heat, lighting, and use of appliances, there are people who depend on power to use medical equipment essential for life. While there may be legal protections in place to protect the vulnerable, many people in the UK were disconnected in violation of the rules.[74] Issues surrounding smart meters causing fires have been reported, particularly involving the manufacturer Sensus. In 2012, PECO Energy Company replaced the Sensus meters it had deployed in the Philadelphia, US region after reports that a number of the units had overheated and caused fires. In July 2014, SaskPower, the province-run utility company of the Canadian province of Saskatchewan, halted its roll-out of Sensus meters after similar, isolated incidents were discovered.
Shortly afterward, Portland General Electric announced that it would replace 70,000 smart meters that had been deployed in the state of Oregon after similar reports. The company noted that it had been aware of the issues since at least 2013, and that they were limited to specific models it had installed between 2010 and 2012.[75] On July 30, 2014, after a total of eight recent fire incidents involving the meters, SaskPower was ordered by the Government of Saskatchewan to immediately end its smart meter program and remove the 105,000 smart meters it had installed.[76] One technical reason for privacy concerns is that these meters send detailed information about how much electricity is being used at each reporting interval. More frequent reports provide more detailed information. Infrequent reports may be of little benefit to the provider, as they do not allow as good demand management in response to changing needs for electricity. On the other hand, frequent reports would allow the utility company to infer behavioral patterns for the occupants of a house, such as when the members of the household are probably asleep or absent.[77] Furthermore, the fine-grained information collected by smart meters raises growing concerns about privacy invasion due to personal behavior exposure (private activity, daily routine, etc.).[20] Current trends are to increase the frequency of reports. A solution that benefits both the provider and user privacy would be to adapt the reporting interval dynamically.[78] Another solution involves energy storage installed at the household, used to reshape the energy consumption profile.[79][80] In British Columbia the electric utility is government-owned and as such must comply with privacy laws that prevent the sale of data collected by smart meters; many parts of the world are serviced by private companies that are able to sell their data.[81] In Australia, debt collectors can make use of the data to know when people are at home.[82] As came to light in a court case in Austin, Texas, police agencies secretly collected smart meter power usage data from thousands of residences to determine which used more power than "typical" in order to identify marijuana growing operations.[83] Smart meter power usage data patterns can reveal much more than how much power is being used. Research has demonstrated that smart meters sampling power levels at two-second intervals can reliably identify when different electrical devices are in use.[84][85][86][87][88][89][90][91] Ross Anderson wrote about privacy concerns: "It is not necessary for my meter to tell the power company, let alone the government, how much I used in every half-hour period last month"; that meters can provide "targeting information for burglars"; that detailed energy usage history can help energy companies to sell users exploitative contracts; and that there may be "a temptation for policymakers to use smart metering data to target any needed power cuts."[92] Reviews of smart meter programs, moratoriums, delays, and "opt-out" programs are some responses to the concerns of customers and government officials. In response to residents who did not want a smart meter, in June 2012 a utility in Hawaii changed its smart meter program to "opt out".[93] The utility said that once the smart grid installation project is nearing completion, KIUC may convert the deferral policy to an opt-out policy or program and may charge a fee to those members to cover the costs of servicing the traditional meters. Any fee would require approval from the Hawaii Public Utilities Commission.
After receiving numerous complaints about health, hacking, and privacy concerns with the wireless digital devices, the Public Utility Commission of theUSstate ofMainevoted to allow customers to opt-out of the meter change at the cost of $12 a month.[94]InConnecticut, another US state to consider smart metering, regulators declined a request by the state's largest utility,Connecticut Light & Power, to install 1.2 million of the devices, arguing that the potential savings in electric bills do not justify the cost. CL&P already offers its customers time-based rates. The state's Attorney GeneralGeorge Jepsenwas quoted as saying the proposal would cause customers to spend upwards of $500 million on meters and get few benefits in return, a claim that Connecticut Light & Power disputed.[95] Smart meters allow dynamic pricing; it has been pointed out that, while this allows prices to be reduced at times of low demand, it can also be used to increase prices at peak times if all consumers have smart meters.[96]Additionally smart meters allow energy suppliers to switch customers to expensive prepay tariffs instantly in case of difficulties paying. In the UK during a period of very high energy prices from 2022, companies were remotely switching smart meters from a credit tariff to an expensive prepay tariff which disconnects supplies unless credit has been purchased. While regulations do not permit this without appropriate precautions to help those in financial difficulties and to protect the vulnerable, the rules were often flouted.[74](Prepaid tariffs could also be levied without smart meters, but this required a dedicated prepay meter to be installed.) In 2022, 3.2 million people were left without power at some point after running out of prepay credit.[97] There are questions about whether electricity is or should be primarily a "when you need it" service where the inconvenience/cost-benefitratio of time-shifting of loads is poor. In the Chicago area, Commonwealth Edison ran a test installing smart meters on 8,000 randomly selected households together with variable rates and rebates to encourage cutting back during peak usage.[98]InCrain's Chicago Businessarticle "Smart grid test underwhelms. In the pilot, few power down to save money.", it was reported that fewer than 9% exhibited any amount of peak usage reduction and that the overall amount of reduction was "statistically insignificant".[98]This was from a report by the Electric Power Research Institute, a utility industry think tank who conducted the study and prepared the report. Susan Satter, senior assistant Illinois attorney general for public utilities said "It's devastating to their plan......The report shows zero statistically different result compared to business as usual."[98] By 2016, the 7 million smart meters in Texas had not persuaded many people to check their energy data as the process was too complicated.[99] A report from a parliamentary group in the UK suggests people who have smart meters installed are expected to save an average of £11 annually on their energy bills, much less than originally hoped.[100]The 2016 cost-benefit analysis was updated in 2019 and estimated a similar average saving.[101] The Australian Victorian Auditor-General found in 2015 that 'Victoria's electricity consumers will have paid an estimated $2.239 billion for metering services, including the rollout and connection of smart meters. 
In contrast, while a few benefits have accrued to consumers, benefits realisation is behind schedule and most benefits are yet to be realised'[102] Smart meters can allow real-time pricing, and in theory this could help smooth power consumption as consumers adjust their demand in response to price changes. However, modelling by researchers at the University of Bremen suggests that in certain circumstances, "power demand fluctuations are not dampened but amplified instead."[103] In 2013,Take Back Your Power, an independent Canadian documentary directed by Josh del Sol was released describing "dirty electricity" and the aforementioned issues with smart meters.[104]The film explores the various contexts of the health, legal, and economic concerns. It features narration from themayorofPeterborough, Ontario,Daryl Bennett, as well as American researcher De-Kun Li, journalist Blake Levitt,[105]and Dr. Sam Milham. It won aLeo Awardfor best feature-length documentary and the Annual Humanitarian Award from Indie Fest the following year. In a 2011 submission to the Public Accounts Committee,Ross Andersonwrote that Ofgem was "making all the classic mistakes which have been known for years to lead to public-sector IT project failures" and that the "most critical part of the project—how smart meters will talk to domestic appliances to facilitate demand response—is essentially ignored."[106] Citizens Advicesaid in August 2018 that 80% of people with smart meters were happy with them. Still, it had 3,000 calls in 2017 about problems. These related to first-generation smart meters losing their functionality, aggressive sales practices, and still having to send smart meter readings.[107] Ross Anderson of the Foundation for Information Policy Research has criticised the UK's program on the grounds that it is unlikely to lower energy consumption, is rushed and expensive, and does not promote metering competition. Anderson writes, "the proposed architecture ensures continued dominance of metering by energy industry incumbents whose financial interests are in selling more energy rather than less," and urged ministers "to kill the project and instead promote competition in domestic energy metering, as the Germans do – and as the UK already has in industrial metering. Every consumer should have the right to appoint the meter operator of their choice."[108] The high number of SMETS1 meters installed has been criticized by Peter Earl, head of energy at the price comparison website comparethemarket.com. He said, "The Government expected there would only be a small number of the first-generation of smart meters before Smets II came in, but the reality is there are now at least five million and perhaps as many as 10 million Smets I meters."[109] UK smart meters in southern England and the Midlands use the mobile phone network to communicate, so they do not work correctly when phone coverage is weak. 
A solution has been proposed, but was not operational as of March 2017.[109] In March 2018, the National Audit Office (NAO), which watches over public spending, opened an investigation into the smart meter program, which had cost £11bn by then, paid for by electricity users through higher bills.[110][111] The National Audit Office published the findings of its investigation in a report titled "Rolling out smart meters" in November 2018.[112] The report, amongst other findings, indicated that the number of smart meters installed in the UK would fall materially short of the Department for Business, Energy & Industrial Strategy's (BEIS) original ambition of all UK consumers having a smart meter installed by 2020. In September 2019, smart meter rollout in the UK was delayed for four years.[113] Ross Anderson and Alex Henney wrote that "Ed Miliband cooked the books" to make the case for smart meters appear economically viable. They say that the first three cost-benefit analyses of residential smart meters found that they would cost more than they would save, but "ministers kept on trying until they got a positive result... To achieve 'profitability' the previous government stretched the assumptions shamelessly".[114] A counter-fraud officer at Ofgem with oversight of the roll-out of the smart meter program, who raised concerns with his manager about many millions of pounds being misspent, was threatened in 2018 with imprisonment under section 105 of the Utilities Act 2000, which prohibits disclosure of some information relevant to the energy sector with the intention of protecting national security.[115][116] The Employment Appeal Tribunal found that the law was in contravention of the European Convention on Human Rights.[117] The identity of the top ten smart electricity meter suppliers depends on the ranking method used.[118]
https://en.wikipedia.org/wiki/Smart_meter
Wireless community networksorwireless community projectsor simplycommunity networks, are non-centralized, self-managed and collaborative networks organized in agrassrootsfashion by communities, non-governmental organizations and cooperatives in order to provide a viable alternative tomunicipal wireless networksforconsumers.[1][2][3] Many of these organizations set upwireless mesh networkswhich rely primarily on sharing of unmetered residential and businessDSLandcable Internet. This sort of usage might be non-compliant with theterms of serviceof localinternet service provider(ISPs) that deliver their service via the consumer phone and cableduopoly. Wireless community networks sometimes advocate complete freedom fromcensorship, and this position may be at odds with theacceptable use policiesof some commercial services used. Some ISPs do allow sharing or reselling of bandwidth.[4] The First Latin American Summit of Community Networks, held in Argentina in 2018, presented the following definition for the term "community network": "Community networks are networks collectively owned and managed by the community for non-profit and community purposes. They are constituted by collectives, indigenous communities or non-profit civil society organizations that exercise their right to communicate, under the principles of democratic participation of their members, fairness, gender equality, diversity and plurality".[5] According to the Declaration on Community Connectivity,[6]elaborated through a multistakeholder process organized by theInternet Governance Forum's Dynamic Coalition on Community Connectivity, community networks are recognised by a list of characteristics: Collective ownership; Social management; Open design; Open participation; Promotion of peering and transit; Promotion of the consideration of security and privacy concerns while designing and operating the network; and promotion of the development and circulation of local content in local languages. Wireless community networks started as projects that evolved fromamateur radiousingpacket radio, and from thefree softwarecommunity which substantially overlapped with the amateur radio community.[citation needed]Wireless neighborhood networks were established by technology enthusiasts in the early 2000s.[7]The Redbricks Intranet Collective (RIC) started 1999 inManchester,UK, to allow about 30 flats in the Bentley House Estate to share the subscription cost of one leased line fromBritish Telecom(BT).[8]Wi-Fiwas quickly adopted by technology enthusiasts and hobbyists, because it was anopen standardand consumer Wi-Fi hardware was comparatively cheap.[7] Wireless community networks started out by turningwireless access pointsdesigned for short-range use in homes into multi-kilometre long-range Wi-Fi by building high-gaindirectional antennas. Rather than buying commercially available units, some of the early groups advocated home-built antennas. Examples include thecantennaandRONJA, an optical link that can be made from a smokeflueandLEDs. 
The circuitry and instructions for such DIY networking antennas were released under the GNU Free Documentation License (GFDL).[9][10] Municipal wireless networks, funded by local governments, started being deployed from 2003 onward.[7] Regarding the international policy scenario, discussions on community networks have gained prominence over the last few years, especially since the creation of the Internet Governance Forum's Dynamic Coalition on Community Connectivity in 2016, providing "a much needed platform through which various individuals and entities interested in the advancement of CNs have the possibility to associate, organise and develop, in a bottom-up participatory fashion collective 'principles, rules, decision-making procedures and shared programs that give shape to the evolution and use of the Internet.'"[3] By 2003, a number of wireless community projects had established themselves in urban areas across North America, Europe and Australia. In June 2000, Melbourne Wireless Inc. was established in Melbourne, Australia, as a not-for-profit project to establish a metropolitan area wireless network using off-the-shelf 802.11 wireless equipment. By 2003, it had 1,200 hotspots.[11] In 2000, Seattle Wireless was founded with the stated aim of providing free WiFi access and sharing the cost of Internet connectivity in Seattle, USA. By April 2011, it had 80 free wireless access points all over Seattle and was steadily growing.[12] In August 2000, Consume was founded in London, England, as a "collaborative strategy for the self provisioning of a broadband telecommunications infrastructure". Founded by Ben Laurie and others, Consume aimed to build a wireless infrastructure as an alternative to the monopoly-held wired metropolitan area network.[11] Besides providing Wi-Fi access in East London, Consume installed a large antenna on the roof of the former Greenwich Town Hall and documented the state of wireless connections in London. Consume created political pressure on municipal authorities by staging public events and exhibitions, encouraging consumers to set up wireless equipment, and setting up temporary Wi-Fi hotspots at events in East London. While Consume generated sustained media attention, it did not establish a lasting wireless community network.[13] The Wireless Leiden hobbyist project was established in September 2001 and constituted as a non-profit foundation in 2003 with more than 300 active users. The Wireless Leiden foundation aimed to facilitate the cooperation of local government, businesses and residents to provide wireless networking in Leiden, Netherlands. The first wireless community network in Spain was RedLibre, founded in September 2001 in Madrid. By 2002, RedLibre coordinated the efforts of 15 local wireless groups and maintained free RedLibre Wi-Fi hotspots in five cities. RedLibre has been credited with facilitating the widespread availability of WLAN in the urban areas of Spain.[14] In Italy, Ninux.org was founded by students and hackers in 2001 to create a grassroots wireless network in Rome, similar to Seattle Wireless. A turning point for Ninux was the lowering of prices in 2008 for consumer wireless equipment, such as antennas and routers. Ninux volunteers installed an increasing number of antennas on the roofs of Rome. The network served as an example for other urban community wireless networks in Italy. By 2016, similar wireless networks had been installed in Florence, Bologna, Pisa and Cosenza.
While they share common technical and organizational frameworks, the working groups supporting these urban wireless community networks are driven by the different needs of the city in which they operate.[15] Houston Wireless was founded in summer 2001 as the Houston Wireless Users Group. The telecommunications providers were slow to roll out third-generation wireless (3G), so Houston Wireless was established to promote high-speed wireless access across Houston and its suburbs. Houston Wireless experimented with network protocols such as IPsec, mobile IP and IPv6, as well as wireless technologies, including 802.11a, 802.11g and ultra-wideband (UWB). By 2003, it had 30 WLAN hotspots, 100 people on its mailing lists, and its monthly meetings were attended by about 25 people.[16] NYCwireless was established in New York City in May 2001 to provide public hotspots and promote the use of consumer-owned, unlicensed, low-cost wireless networking equipment. In order to get more public Wi-Fi hotspots installed, NYCwireless contracted with the for-profit company Cloud Networks, which was staffed by some of the founding members of the NYCwireless community project. In the aftermath of the September 11 attacks in 2001, NYCwireless helped to provide emergency communication by quickly assembling and deploying free Wi-Fi hotspots in areas of New York City that had no other telecommunications. In summer 2002, the Bryant Park wireless network became the flagship project of NYCwireless, with about 50 users every day. By 2003, NYCwireless had more than 100 active hotspots throughout New York City.[17] In 2000, guifi.net was founded because commercial internet service providers did not build a broadband Internet infrastructure in rural Catalonia. Guifi.net was conceived as a wireless mesh network, where households can become a node in the network by operating a radio transmitter. Not every node needs to be a wireless router, but the network relies on some volunteers being connected to the Internet and sharing that access with others. In 2017, guifi.net had 23,000 nodes and was described as the biggest mesh network in the world.[19] In 2001, BCWireless was founded to help communities in British Columbia, Canada, set up local Wi-Fi networks. BCWireless hobbyists experimented with IEEE 802.11b wireless networks and antennas to extend the range and power of the signal, allow bandwidth sharing among local group members, and establish wireless mesh networks. The Lac Seul First Nation communities set up their own Wi-Fi network and constituted the non-profit K-Net to manage a wireless network based on IEEE 802.11g, providing the entire reserve with Wi-Fi using unlicensed spectrum in combination with licensed spectrum at 3.5 GHz.[20] For the most part, early wireless community projects had a local scope, but many still had a global awareness. In 2003, wireless community networks initiated the Pico Peering Agreement (PPA) and the Wireless Commons Manifesto. The two initiatives defined attempts to build an infrastructure, so that local wireless mesh networks could become extensive wireless ad hoc networks across local and national boundaries.[21] In 2004, Freifunk released the OpenWrt-based firmware FFF for Wi-Fi devices that participate in a community network, which included a PPA, so that the owner of the node agrees to provide free transit across the network.[22] There are at least three technical approaches to building a wireless community network. Wireless equipment, like many other consumer electronics, comes with hard-to-alter firmware that is preinstalled by the manufacturer.
When the Linksys WRT54G series was launched in 2003 with an open source Linux kernel as the basis of its firmware, it immediately became the subject of hacks and became the most popular hardware among community wireless volunteers. In 2005, Linksys released the WRT54GL version of the router to make it even easier for customers to modify its firmware. Community network hackers experimented with increasing the transmission power of the Linksys WRT54G or increasing the clock speed of the CPU to speed up data transmission.[24] Hobbyists got another boost when, in 2004, the OpenWrt firmware was released as an open source alternative to proprietary firmware.[24] The Linux-based embedded operating system could be used on embedded devices to route network traffic. Through successive versions, OpenWrt eventually could work on several hundred types of wireless devices and Wi-Fi routers.[25] OpenWrt was named in honor of the WRT54G. The OpenWrt developers provided extensive documentation and the ability to include one's own code in the OpenWrt source code and compile the firmware.[26] In 2004, Freifunk released the FFF firmware for wireless community projects, which modified OpenWrt so that the node could be configured via a web interface, and added features to better support a wireless ad hoc network, with traffic shaping, statistics, Internet gateway support and an implementation of the Optimized Link State Routing Protocol (OLSR). A Wi-Fi access point booting the FFF firmware joined the network by automatically announcing its Internet gateway capabilities to other nodes using OLSR HNA4. When a node disappeared, the other nodes registered the change in the network topology through the discontinuation of its HNA4 announcements. At the time, Freifunk in Berlin had 500 Wi-Fi access points, and about 2,200 Berlin residents used the network free of charge.[27] The Freifunk FFF firmware is among the oldest approaches to establishing a wireless mesh network at significant scale. Other early attempts at developing an operating system for wireless devices that supported large-scale wireless community projects were Open-Mesh and Netsukuku.[22] In 2006, Meraki Networks Inc was founded. The Meraki hardware and firmware had been developed as part of a PhD research project at the Massachusetts Institute of Technology to provide wireless access to graduate students. For years, the low-cost Meraki products fueled the growth of wireless mesh networks in 25 countries.[28] Early Meraki-based wireless community networks included the Free-the-Net Meraki mesh in Vancouver, Canada. Constituted in 2006 as a legal co-operative, members of the Vancouver Open Network Initiatives Cooperative paid five Canadian dollars per month to access the community wireless network provided by individuals who attached Meraki nodes to their home wireless connection, sharing bandwidth with any cooperative members nearby and participating in a meshed wireless network.[29] By 2003, the Sydney Wireless community project had launched the NodeDB software to facilitate the work of community networks by mapping the nodes participating in a wireless mesh network. Nodes needed to be registered in the database, and the software then generated a list of adjacent nodes.
When registering a node that participated in a community network, the maintainer of the node could leave a note on the hardware, antenna reach and firmware in operation and so find other network community members who were willing to participate in a mesh.[30] Organizationally, a wireless community network requires either a set of affordable commercial technical solutions or a critical mass of hobbyists willing to tinker to maintain operations. Mesh networks require that a high level of community participation and commitment be maintained for the network to be viable. The mesh approach currently requires uniform equipment. One market-driven aspect of the mesh approach is that users who receive a weak mesh signal can often convert it to a strong signal by obtaining and operating a repeater node, thus extending the network.[citation needed] Such volunteer organizations focusing on technology that is rapidly advancing sometimes have schisms and mergers.[citation needed]The Wi-Fi service provided by such groups is usually free and without the stigma ofpiggybacking. An alternative to the voluntary model is to use aco-operativestructure.[31] Wireless community projects made volunteer bandwidth-sharing technically feasible and have been credited with contributing to the emergence of alternative business models in the consumer Wi-Fi market. The commercial Wi-Fi providerFonwas established in 2006 in Spain. Fon customers were equipped with aLinksysWi-Fi access point that runs a modifiedOpenWrtfirmware so that Fon customers shared Wi-Fi access among each other. Public Wi-Fi provisioning through FON customers was broadened when FON entered a 50% revenue-sharing agreement with customers who made their entire unused bandwidth available for resale. In 2009, this business model gained broader acceptance whenBritish Telecomallowed its own home customers to sell unused bandwidth to BT and FON roamers.[28] Wireless community projects for the most providebest-effortWi-Fi coverage. However, since the mid-2000slocal authoritiesstarted to contract with wireless community networks to providemunicipal wireless networksor stable Wi-Fi access in a defined urban area, such as a park. Wireless community networks started to participate in a variety ofpublic-private partnerships. The non-profit community networkZAP Sherbrookehas partnered with public and private entities to provide Wi-Fi access and received financial support from theUniversity of SherbrookeandBishop's Universityto extend the coverage of its wireless mesh throughout the city ofSherbrooke,Canada.[32] Certain countries regulate the selling of internet access, requiring a license to sell internet access over a wireless network. InSouth Africait is regulated by theIndependent Communications Authority of South Africa(ICASA).[33]They require that WISP's apply for a VANS or ECNS/ECS license before being allowed to resell internet access over a wireless link. TheInternet Society's publication "Community Networks in Latin America: Challenges, Regulations and Solutions"[5]brings a summary of regulations regarding Community Networks among Latin American countries, the United States and Canada.
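As a rough illustration of the gateway-announcement behaviour described above for the Freifunk FFF firmware, the sketch below shows how a node might track which neighbours currently claim Internet-gateway capability and drop them once their announcements stop. This is a simplified model for illustration only, not an implementation of OLSR or of HNA4 messages; the class, method names, and timeout value are assumptions.

```python
import time

class GatewayTable:
    """Track which mesh nodes have recently announced Internet-gateway capability."""

    def __init__(self, timeout_s: float = 30.0):
        self.timeout_s = timeout_s
        self.last_seen = {}  # node_id -> timestamp of last gateway announcement

    def on_announcement(self, node_id: str, now: float = None):
        """Record a (simplified) gateway announcement from a node."""
        self.last_seen[node_id] = now if now is not None else time.time()

    def active_gateways(self, now: float = None):
        """Return nodes whose announcements have not timed out; prune the rest."""
        now = now if now is not None else time.time()
        expired = [n for n, t in self.last_seen.items() if now - t > self.timeout_s]
        for n in expired:
            del self.last_seen[n]  # the node has disappeared from the topology
        return sorted(self.last_seen)

table = GatewayTable(timeout_s=30)
table.on_announcement("node-berlin-01", now=0)
table.on_announcement("node-berlin-07", now=10)
print(table.active_gateways(now=20))   # both nodes announced recently
table.on_announcement("node-berlin-07", now=30)  # node-07 keeps announcing
print(table.active_gateways(now=45))   # node-berlin-01 has gone silent and is pruned
```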
https://en.wikipedia.org/wiki/Wireless_community_network
Awireless mesh network(WMN) is acommunications networkmade up ofradionodesorganized in ameshtopology. It can also be a form ofwireless ad hoc network.[1] Ameshrefers to rich interconnection among devices or nodes. Wireless mesh networks often consist of mesh clients, mesh routers and gateways. Mobility of nodes is less frequent. If nodes constantly or frequently move, the mesh spends more time updating routes than delivering data. In a wireless mesh network, topology tends to be more static, so that routes computation can converge and delivery of data to their destinations can occur. Hence, this is a low-mobility centralized form of wireless ad hoc network. Also, because it sometimes relies on static nodes to act as gateways, it is not a truly all-wireless ad hoc network.[citation needed] Mesh clients are often laptops, cell phones, and other wireless devices. Mesh routers forward traffic to and from the gateways, which may or may not be connected to the Internet. The coverage area of all radio nodes working as a single network is sometimes called a mesh cloud. Access to this mesh cloud depends on the radio nodes working together to create a radio network. A mesh network is reliable and offers redundancy. When one node can no longer operate, the rest of the nodes can still communicate with each other, directly or through one or more intermediate nodes. Wireless mesh networks can self form and self heal. Wireless mesh networks work with different wireless technologies including802.11,802.15,802.16, cellular technologies and need not be restricted to any one technology or protocol. Wireless mesh radio networks were originally developed for military applications, such that every node could dynamically serve as a router for every other node. In that way, even in the event of a failure of some nodes, the remaining nodes could continue to communicate with each other, and, if necessary, serve as uplinks for the other nodes. Early wireless mesh network nodes had a singlehalf-duplexradio that, at any one instant, could either transmit or receive, but not both at the same time. This was accompanied by the development ofshared meshnetworks. This was subsequently superseded by more complex radio hardware that could receive packets from an upstream node and transmit packets to a downstream node simultaneously (on a different frequency or a different CDMA channel). This allowed the development ofswitched meshnetworks. As the size, cost, and power requirements of radios declined further, nodes could be cost-effectively equipped with multiple radios. This, in turn, permitted each radio to handle a different function, for instance, one radio for client access, and another for backhaul services. Work in this field has been aided by the use ofgame theorymethods to analyze strategies for the allocation of resources and routing of packets.[2][3][4] Wireless mesh architecture is a first step towards providing cost effective and low mobility over a specific coverage area. Wireless mesh infrastructure is, in effect, a network of routers minus the cabling between nodes. It is built of peer radio devices that do not have to be cabled to a wired port like traditionalWLANaccess points (AP)do. Mesh infrastructure carries data over large distances by splitting the distance into a series of short hops. Intermediate nodes not only boost the signal, but cooperatively pass data from point A to point B by making forwarding decisions based on their knowledge of the network, i.e. 
perform routing by first deriving the topology of the network. A wireless mesh network is a relatively "stable-topology" network, except for the occasional failure of nodes or addition of new nodes. The path of traffic, being aggregated from a large number of end users, changes infrequently. Practically all the traffic in an infrastructure mesh network is either forwarded to or from a gateway, while in wireless ad hoc networks or client mesh networks the traffic flows between arbitrary pairs of nodes.[5] If the rate of mobility among nodes is high, i.e., link breaks happen frequently, wireless mesh networks start to break down and have low communication performance.[6] This type of infrastructure can be decentralized (with no central server) or centrally managed (with a central server).[7] Both are relatively inexpensive, and can be very reliable and resilient, as each node needs only to transmit as far as the next node. Nodes act as routers to transmit data from nearby nodes to peers that are too far away to reach in a single hop, resulting in a network that can span larger distances. The topology of a mesh network must be relatively stable, i.e., without too much mobility. If one node drops out of the network, due to hardware failure or any other reason, its neighbors can quickly find another route using a routing protocol. Mesh networks may involve either fixed or mobile devices. The solutions are as diverse as communication needs, for example in difficult environments such as emergency situations, tunnels, oil rigs, battlefield surveillance, high-speed mobile-video applications on board public transport, real-time racing-car telemetry, or self-organizing Internet access for communities.[8] An important possible application for wireless mesh networks is VoIP. By using a quality of service scheme, the wireless mesh may support routing local telephone calls through the mesh. Most applications in wireless mesh networks are similar to those in wireless ad hoc networks. The principle is similar to the way packets travel around the wired Internet – data hops from one device to another until it eventually reaches its destination. Dynamic routing algorithms implemented in each device allow this to happen. To implement such dynamic routing protocols, each device needs to communicate routing information to other devices in the network. Each device then determines what to do with the data it receives – either pass it on to the next device or keep it, depending on the protocol. The routing algorithm used should attempt to ensure that the data always takes the most appropriate (fastest) route to its destination. Multi-radio mesh refers to having different radios operating at different frequencies to interconnect nodes in a mesh. This means there is a unique frequency used for each wireless hop and thus a dedicated CSMA collision domain. With more radio bands, communication throughput is likely to increase as a result of more available communication channels. This is similar to providing dual or multiple radio paths to transmit and receive data. One of the more often cited papers on wireless mesh networks identified several open research problems in 2005. A number of wireless community networks have been started as grassroots projects across the world at various points in time; other projects are often proprietary or tied to a single institution. There are more than 70 competing schemes for routing packets across mesh networks.
The IEEE has developed a set of standards under the title 802.11s. A less thorough list can be found at the list of ad hoc routing protocols. Standard autoconfiguration protocols, such as DHCP or IPv6 stateless autoconfiguration, may be used over mesh networks. Mesh-network-specific autoconfiguration protocols also exist.
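As a toy illustration of the hop-by-hop forwarding principle described above, where each device either keeps a packet or passes it on toward its destination, the sketch below computes fewest-hop routes over a small mesh topology with a breadth-first search. Real mesh routing protocols exchange link-state or distance information and weigh link quality rather than counting hops alone; the topology and names here are invented for the example.

```python
from collections import deque

# Invented mesh topology: node -> set of neighbours within radio range.
TOPOLOGY = {
    "A": {"B", "C"},
    "B": {"A", "D"},
    "C": {"A", "D"},
    "D": {"B", "C", "E"},
    "E": {"D"},
}

def shortest_path(src: str, dst: str):
    """Breadth-first search: fewest-hop route from src to dst, or None if unreachable."""
    queue = deque([[src]])
    visited = {src}
    while queue:
        path = queue.popleft()
        node = path[-1]
        if node == dst:
            return path
        for neighbour in TOPOLOGY[node] - visited:
            visited.add(neighbour)
            queue.append(path + [neighbour])
    return None

def next_hop(current: str, dst: str) -> str:
    """What a node does with a packet: keep it if it is the destination, else forward."""
    if current == dst:
        return "deliver locally"
    path = shortest_path(current, dst)
    return f"forward to {path[1]}" if path else "drop (unreachable)"

print(shortest_path("A", "E"))  # e.g. ['A', 'B', 'D', 'E'] or ['A', 'C', 'D', 'E']
print(next_hop("A", "E"))       # forwards to B or C, depending on set iteration order
```

If node B fails, rerunning the search simply routes around it via C, which mirrors the self-healing property described above.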
https://en.wikipedia.org/wiki/Wireless_mesh_network
Wireless sensor networks (WSNs) refer to networks of spatially dispersed and dedicated sensors that monitor and record the physical conditions of the environment and forward the collected data to a central location. WSNs can measure environmental conditions such as temperature, sound, pollution levels, humidity and wind.[1] These are similar to wireless ad hoc networks in the sense that they rely on wireless connectivity and the spontaneous formation of networks so that sensor data can be transported wirelessly. WSNs monitor physical conditions, such as temperature, sound, and pressure. Modern networks are bi-directional, both collecting data[2] and enabling control of sensor activity.[3] The development of these networks was motivated by military applications such as battlefield surveillance.[4] Such networks are used in industrial and consumer applications, such as industrial process monitoring and control, machine health monitoring, and agriculture.[5] A WSN is built of "nodes" – from a few to hundreds or thousands, where each node is connected to other sensors. Each such node typically has several parts: a radio transceiver with an internal antenna or connection to an external antenna, a microcontroller, an electronic circuit for interfacing with the sensors, and an energy source, usually a battery or an embedded form of energy harvesting. A sensor node might vary in size from a shoebox to (theoretically) a grain of dust, although microscopic dimensions have yet to be realized. Sensor node cost is similarly variable, ranging from a few to hundreds of dollars, depending on node sophistication. Size and cost constraints restrict resources such as energy, memory, computational speed and communications bandwidth. The topology of a WSN can vary from a simple star network to an advanced multi-hop wireless mesh network. Propagation can employ routing or flooding.[6][7] In computer science and telecommunications, wireless sensor networks are an active research area supporting many workshops and conferences, including the International Workshop on Embedded Networked Sensors (EmNetS), IPSN, SenSys, MobiCom and EWSN. As of 2010, approximately 120 million remote sensor units had been deployed worldwide.[8] Area monitoring is a common application of WSNs. In area monitoring, the WSN is deployed over a region where some phenomenon is to be monitored. A military example is the use of sensors to detect enemy intrusion; a civilian example is the geo-fencing of gas or oil pipelines. There are several types of sensor networks for medical applications: implanted, wearable, and environment-embedded. Implantable medical devices are those that are inserted inside the human body. Wearable devices are used on the body surface of a human or in close proximity to the user. Environment-embedded systems employ sensors contained in the environment. Possible applications include body position measurement, location of persons, and overall monitoring of ill patients in hospitals and at home. Devices embedded in the environment track the physical state of a person for continuous health diagnosis, using as input the data from a network of depth cameras, a sensing floor, or other similar devices. Body-area networks can collect information about an individual's health, fitness, and energy expenditure.[9][10] In health care applications the privacy and authenticity of user data are of prime importance.
Especially due to the integration of sensor networks with the IoT, user authentication becomes more challenging; however, solutions have been presented in recent work.[11] Wireless sensor networks have been used to monitor various species and habitats, beginning with the Great Duck Island deployment, including marmots, cane toads in Australia and zebras in Kenya.[12] There are many applications in monitoring environmental parameters,[13] examples of which are given below. They share the extra challenges of harsh environments and reduced power supply. Experiments have shown that personal exposure to air pollution in cities can vary considerably.[14] Therefore, it is of interest to have higher temporal and spatial resolution of pollutants and particulates. For research purposes, wireless sensor networks have been deployed to monitor the concentration of gases dangerous to citizens (e.g., in London).[15] However, sensors for gases and particulate matter suffer from high unit-to-unit variability, cross-sensitivities, and (concept) drift.[16] Moreover, the quality of data is currently insufficient for trustworthy decision-making, as field calibration leads to unreliable measurement results, and frequent recalibration might be required. A possible solution could be blind calibration or the use of mobile references.[17][18] A network of sensor nodes can be installed in a forest to detect when a fire has started. The nodes can be equipped with sensors to measure the temperature, humidity and gases which are produced by fire in the trees or vegetation. Early detection is crucial for successful action by the firefighters; thanks to wireless sensor networks, the fire brigade is able to know when a fire has started and how it is spreading. A landslide detection system makes use of a wireless sensor network to detect the slight movements of soil and changes in various parameters that may occur before or during a landslide. Through the data gathered, it may be possible to know of the impending occurrence of a landslide long before it actually happens. Water quality monitoring involves analyzing water properties in dams, rivers, lakes and oceans, as well as underground water reserves. The use of many wireless distributed sensors enables the creation of a more accurate map of the water status, and allows the permanent deployment of monitoring stations in locations of difficult access, without the need for manual data retrieval.[19] Wireless sensor networks can be effective in preventing the adverse consequences of natural disasters, like floods. Wireless nodes have been deployed successfully in rivers, where changes in water levels must be monitored in real time. Wireless sensor networks have been developed for machinery condition-based maintenance (CBM), as they offer significant cost savings and enable new functionality.[20] Wireless sensors can be placed in locations difficult or impossible to reach with a wired system, such as rotating machinery and untethered vehicles. Wireless sensor networks are also used for the collection of data for monitoring of environmental information.[21] This can be as simple as monitoring the temperature in a fridge or the level of water in overflow tanks in nuclear power plants. The statistical information can then be used to show how systems have been working. The advantage of WSNs over conventional loggers is the "live" data feed that is possible.
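A minimal sketch of the kind of in-node logic such environmental deployments rely on is shown below: rather than streaming every raw sample, a node can summarise a window of readings and flag an alert only when thresholds are crossed, saving radio transmissions and therefore energy. The thresholds are invented values loosely inspired by the forest-fire example above, not calibrated figures, and all names are illustrative assumptions.

```python
from statistics import mean

# Invented alert thresholds for the forest-fire example (not calibrated values).
TEMP_ALERT_C = 60.0
HUMIDITY_ALERT_PCT = 15.0

def summarise(window):
    """Reduce a window of (temp_C, humidity_pct) samples to one compact report."""
    temps = [t for t, _ in window]
    hums = [h for _, h in window]
    report = {
        "temp_avg": round(mean(temps), 1),
        "temp_max": max(temps),
        "humidity_min": min(hums),
    }
    # Only flag an alert when both heat and dryness cross their thresholds.
    report["alert"] = (report["temp_max"] >= TEMP_ALERT_C
                       and report["humidity_min"] <= HUMIDITY_ALERT_PCT)
    return report

# Example: a short window of readings from one node.
window = [(24.5, 40.0), (26.0, 38.5), (61.2, 12.0), (63.0, 10.5)]
print(summarise(window))
# {'temp_avg': 43.7, 'temp_max': 63.0, 'humidity_min': 10.5, 'alert': True}
```

Sending only this summary instead of every sample is one simple way such networks trade a little on-node computation for much less radio traffic.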
Monitoring the quality and level of water includes many activities, such as checking the quality of underground or surface water and safeguarding a country's water infrastructure for the benefit of both humans and animals. It may also be used to prevent the wastage of water. A WSN can be used to monitor the condition of civil infrastructure and related geophysical processes in close to real time, and over long periods through data logging, using appropriately interfaced sensors. Wireless sensor networks are used to monitor wine production, both in the field and in the cellar.[22] The Wide Area Tracking System (WATS) is a prototype network for detecting a ground-based nuclear device,[23] such as a nuclear "briefcase bomb". WATS is being developed at the Lawrence Livermore National Laboratory (LLNL). WATS would be made up of wireless gamma and neutron sensors connected through a communications network. Data picked up by the sensors undergoes "data fusion", which converts the information into easily interpreted forms; this data fusion is the most important aspect of the system.[24] The data fusion process occurs within the sensor network rather than at a centralized computer and is performed by a specially developed algorithm based on Bayesian statistics.[25] WATS would not use a centralized computer for analysis because researchers found that factors such as latency and available bandwidth tended to create significant bottlenecks. Data processed in the field by the network itself (by transferring small amounts of data between neighboring sensors) is faster and makes the network more scalable.[25] An important factor in WATS development is ease of deployment, since more sensors both improve the detection rate and reduce false alarms.[25] WATS sensors could be deployed in permanent positions or mounted in vehicles for mobile protection of specific locations. One barrier to the implementation of WATS is the size, weight, energy requirements, and cost of currently available wireless sensors.[25] The development of improved sensors is a major component of current research at the Nonproliferation, Arms Control, and International Security (NAI) Directorate at LLNL. WATS was profiled to the U.S. House of Representatives' Military Research and Development Subcommittee on October 1, 1997, during a hearing on nuclear terrorism and countermeasures.[24] On August 4, 1998, in a subsequent meeting of that subcommittee, Chairman Curt Weldon stated that research funding for WATS had been cut by the Clinton administration to a subsistence level and that the program had been poorly re-organized.[26] Studies show that using sensors for incident monitoring improves the response of firefighters and police to unexpected situations.[27] For early detection of incidents, acoustic sensors can detect a spike in city noise caused by a possible accident,[28] and thermal sensors can detect a possible fire.[29] Using low-power electronics, WSNs can also be applied cost-efficiently in supply chains in various industries.[30] The main characteristics of a WSN include the resource constraints noted above: limited energy, memory, computational speed, and communications bandwidth. Cross-layer design is becoming an important area of study for wireless communications.[34] The traditional layered approach presents problems of its own, so cross-layer design can be used to choose the optimal modulation and improve transmission performance in terms of data rate, energy efficiency, quality of service (QoS), and so on.[34] Sensor nodes can be imagined as small computers that are extremely basic in terms of their interfaces and their components.
They usually consist of aprocessing unitwith limited computational power and limited memory,sensorsorMEMS(including specific conditioning circuitry), acommunication device(usually radio transceivers or alternativelyoptical), and a power source usually in the form of a battery. Other possible inclusions areenergy harvestingmodules,[36]secondaryASICs, and possibly secondary communication interface (e.g.RS-232orUSB). The base stations are one or more components of the WSN with much more computational, energy and communication resources. They act as a gateway between sensor nodes and the end user as they typically forward data from the WSN on to a server. Other special components inroutingbased networks are routers, designed to compute, calculate and distribute the routing tables.[37] One major challenge in a WSN is to producelow costandtinysensor nodes. There are an increasing number of small companies producing WSN hardware and the commercial situation can be compared to home computing in the 1970s. Many of the nodes are still in the research and development stage, particularly their software. Also inherent to sensor network adoption is the use of very low power methods for radio communication and data acquisition. In many applications, a WSN communicates with alocal area networkorwide area networkthrough a gateway. The Gateway acts as a bridge between the WSN and the other network. This enables data to be stored and processed by devices with more resources, for example, in a remotely locatedserver. A wireless wide area network used primarily for low-power devices is known as a Low-Power Wide-Area Network (LPWAN). There are several wireless standards and solutions for sensor node connectivity.ThreadandZigbeecan connect sensors operating at 2.4 GHz with a data rate of 250 kbit/s. Many use a lower frequency to increase radio range (typically 1 km), for exampleZ-waveoperates at 915 MHz and in the EU 868 MHz has been widely used but these have a lower data rate (typically 50 kbit/s). The IEEE 802.15.4 working group provides a standard for low power device connectivity and commonly sensors and smart meters use one of these standards for connectivity. With the emergence ofInternet of Things, many other proposals have been made to provide sensor connectivity.LoRa[38]is a form ofLPWANwhich provides long range low power wireless connectivity for devices, which has been used in smart meters and other long range sensor applications. Wi-SUN[39]connects devices at home.NarrowBand IOT[40]and LTE-M[41]can connect up to millions of sensors and devices using cellular technology. Energy is the scarcest resource of WSN nodes, and it determines the lifetime of WSNs. WSNs may be deployed in large numbers in various environments, including remote and hostile regions, where ad hoc communications are a key component. For this reason, algorithms and protocols need to address the following issues: Lifetime maximization: Energy/Power Consumption of the sensing device should be minimized and sensor nodes should be energy efficient since their limited energy resource determines their lifetime. To conserve power, wireless sensor nodes normally power off both the radio transmitter and the radio receiver when not in use.[34] Wireless sensor networks are composed of low-energy, small-size, and low-range unattended sensor nodes. 
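The passage above notes that nodes conserve energy by switching the radio off when idle. As a rough illustration of how strongly the radio duty cycle drives node lifetime, the following back-of-the-envelope sketch can be used; all numeric values (current draws, battery capacity) are illustrative assumptions, not figures from the text.

```python
# Rough estimate of node lifetime as a function of radio duty cycle.
# All numeric values below are illustrative assumptions, not measured data.

def estimated_lifetime_days(duty_cycle: float,
                            active_ma: float = 20.0,   # assumed current with radio on (mA)
                            sleep_ma: float = 0.02,    # assumed deep-sleep current (mA)
                            battery_mah: float = 2500.0) -> float:
    """Expected lifetime in days for a node whose radio is on for the
    given fraction of time and which sleeps otherwise."""
    avg_ma = duty_cycle * active_ma + (1.0 - duty_cycle) * sleep_ma
    return battery_mah / avg_ma / 24.0

if __name__ == "__main__":
    for dc in (1.0, 0.1, 0.01):
        print(f"duty cycle {dc:>5.0%}: ~{estimated_lifetime_days(dc):8.1f} days")
```

Even this crude model shows why duty cycling dominates lifetime: cutting the radio-on fraction from 100% to 1% improves lifetime by roughly two orders of magnitude, which is the effect the duty-cycling schemes discussed next try to exploit without introducing too much latency.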
Recently, it has been observed that by periodically turning the sensing and communication capabilities of sensor nodes on and off, the active time can be significantly reduced and the network lifetime prolonged.[45][46] However, this duty cycling may result in high network latency, routing overhead, and neighbor discovery delays due to asynchronous sleep and wake-up scheduling. These limitations call for a countermeasure for duty-cycled wireless sensor networks that minimizes routing information, routing traffic load, and energy consumption. Researchers from Sungkyunkwan University have proposed a lightweight non-increasing delivery-latency interval routing scheme referred to as LNDIR. This scheme can discover minimum-latency routes at each non-increasing delivery-latency interval instead of at each time slot. Simulation experiments demonstrated the validity of this approach in minimizing the routing information stored at each sensor. Furthermore, this routing scheme can also guarantee the minimum delivery latency from each source to the sink. Performance improvements of up to 12-fold and 11-fold are observed in terms of routing traffic load reduction and energy efficiency, respectively, compared with existing schemes.[47] Operating systems for wireless sensor network nodes are typically less complex than general-purpose operating systems. They more strongly resemble embedded systems, for two reasons. First, wireless sensor networks are typically deployed with a particular application in mind, rather than as a general platform. Second, a need for low cost and low power leads most wireless sensor nodes to have low-power microcontrollers, ensuring that mechanisms such as virtual memory are either unnecessary or too expensive to implement. It is therefore possible to use embedded operating systems such as eCos or uC/OS for sensor networks. However, such operating systems are often designed with real-time properties. TinyOS, developed by David Culler, is perhaps the first operating system specifically designed for wireless sensor networks. TinyOS is based on an event-driven programming model instead of multithreading. TinyOS programs are composed of event handlers and tasks with run-to-completion semantics. When an external event occurs, such as an incoming data packet or a sensor reading, TinyOS signals the appropriate event handler to handle the event. Event handlers can post tasks that are scheduled by the TinyOS kernel some time later. LiteOS is a more recently developed OS for wireless sensor networks, which provides UNIX-like abstractions and support for the C programming language. Contiki, developed by Adam Dunkels, is an OS which uses a simpler programming style in C while providing advances such as 6LoWPAN and Protothreads. RIOT is a more recent real-time OS with functionality similar to Contiki. PreonVM[48] is an OS for wireless sensor networks, which provides 6LoWPAN based on Contiki and support for the Java programming language. Online collaborative sensor data management platforms are online database services that allow sensor owners to register and connect their devices to feed data into an online database for storage, and also allow developers to connect to the database and build their own applications based on that data. Examples include Xively and the Wikisensing platform. Such platforms simplify online collaboration between users over diverse data sets ranging from energy and environment data to data collected from transport services.
Other services include allowing developers to embed real-time graphs & widgets in websites; analyse and process historical data pulled from the data feeds; send real-time alerts from any datastream to control scripts, devices and environments. The architecture of the Wikisensing system[49]describes the key components of such systems to include APIs and interfaces for online collaborators, a middleware containing the business logic needed for the sensor data management and processing and a storage model suitable for the efficient storage and retrieval of large volumes of data. At present, agent-based modeling and simulation is the only paradigm which allows thesimulationofcomplex behaviorin the environments of wireless sensors (such asflocking).[50]Agent-based simulation of wireless sensor and ad hoc networks is a relatively new paradigm.Agent-based modellingwas originally based on social simulation. Network simulatorslike Opnet, Tetcos NetSim and NS can be used to simulate a wireless sensor network. Network localization refers to the problem of estimating the location of wireless sensor nodes during deployments and in dynamic settings. For ultra-low power sensors, size, cost and environment precludes the use of Global Positioning System receivers on sensors. In 2000, Nirupama Bulusu,John HeidemannandDeborah Estrinfirst motivated and proposed a radio connectivity based system for localization of wireless sensor networks.[51]Subsequently, such localization systems have been referred to as range free localization systems, and many localization systems for wireless sensor networks have been subsequently proposed including AHLoS, APS, and Stardust. Sensors and devices used in wireless sensor networks are state-of-the-art technology with the lowest possible price. The sensor measurements we get from these devices are therefore often noisy, incomplete and inaccurate. Researchers studying wireless sensor networks hypothesize that much more information can be extracted from hundreds of unreliable measurements spread across a field of interest than from a smaller number of high-quality, high-reliability instruments with the same total cost. Macro-programming is a term coined by Matt Welsh.[52]It refers to programming the entire sensor network as an ensemble, rather than individual sensor nodes. Another way to macro-program a network is to view the sensor network as a database, which was popularized by the TinyDB system developed bySam Madden. Reprogramming is the process of updating the code on the sensor nodes. The most feasible form of reprogramming is remote reprogramming whereby the code is disseminated wirelessly while the nodes are deployed. Different reprogramming protocols exist that provide different levels of speed of operation, reliability, energy expenditure, requirement of code resident on the nodes, suitability to different wireless environments, resistance to DoS, etc. Popular reprogramming protocols are Deluge (2004), Trickle (2004), MNP (2005), Synapse (2008), and Zephyr (2009). Infrastructure-less architecture (i.e. no gateways are included, etc.) and inherent requirements (i.e. unattended working environment, etc.) of WSNs might pose several weak points that attract adversaries. Therefore,securityis a big concern when WSNs are deployed for special applications such as military and healthcare. Owing to their unique characteristics, traditional security methods ofcomputer networkswould be useless (or less effective) for WSNs. 
Hence, a lack of security mechanisms would allow intrusions into these networks. Such intrusions need to be detected, and mitigation methods should be applied. There have been important innovations in securing wireless sensor networks. Most wireless embedded networks use omni-directional antennas, so neighbors can overhear communication in and out of nodes. This observation was used to develop a primitive called "local monitoring",[53] which was used for the detection of sophisticated attacks, such as blackhole or wormhole attacks, that degrade the throughput of large networks to close to zero. This primitive has since been used by many researchers and in commercial wireless packet sniffers. It was subsequently refined to handle more sophisticated attacks, such as those involving collusion, mobility, and multi-antenna, multi-channel devices.[54] If a centralized architecture is used in a sensor network and the central node fails, the entire network collapses; the reliability of the sensor network can, however, be increased by using a distributed control architecture. Distributed control is also used in WSNs because there is no centralized body to allocate resources, so the nodes have to be self-organized. In distributed filtering over a distributed sensor network, the general setup is to observe the underlying process through a group of sensors organized according to a given network topology, so that each individual observer estimates the system state based not only on its own measurement but also on its neighbors'.[55] The data gathered from wireless sensor networks is usually saved in the form of numerical data at a central base station. Additionally, the Open Geospatial Consortium (OGC) is specifying standards for interoperability interfaces and metadata encodings that enable real-time integration of heterogeneous sensor webs into the Internet, allowing any individual to monitor or control wireless sensor networks through a web browser. To reduce communication costs, some algorithms remove or reduce nodes' redundant sensor information and avoid forwarding data that is of no use. This technique has been used, for instance, for distributed anomaly detection[56][57][58][59] and distributed optimization.[60] As nodes can inspect the data they forward, they can measure, for example, averages or the directionality of readings from other nodes. In sensing and monitoring applications, it is generally the case that neighboring sensor nodes monitoring an environmental feature register similar values. This kind of data redundancy, due to the spatial correlation between sensor observations, inspires techniques for in-network data aggregation and mining. Aggregation reduces the amount of network traffic, which helps to reduce energy consumption on sensor nodes.[61][62] It has also been found that network gateways play an important role in improving the energy efficiency of sensor nodes by scheduling more resources for the nodes with the most critical energy needs, and that advanced energy-efficient scheduling algorithms need to be implemented at network gateways to improve overall network energy efficiency.[34][63] Aggregation is a form of in-network processing in which sensor nodes are assumed to be unsecured, with limited available energy, while the base station is assumed to be secure, with unlimited available energy.
Aggregation complicates the already existing security challenges for wireless sensor networks[64] and requires new security techniques tailored specifically to these scenarios. Providing security for aggregated data in wireless sensor networks is known as secure data aggregation in WSNs.[62][64][65] These references were among the first works discussing techniques for secure data aggregation in wireless sensor networks. The two main security challenges in secure data aggregation are the confidentiality and integrity of data. While encryption is traditionally used to provide end-to-end confidentiality in wireless sensor networks, the aggregators in a secure data aggregation scenario need to decrypt the encrypted data to perform aggregation. This exposes the plaintext at the aggregators, making the data vulnerable to attacks from an adversary. Similarly, an aggregator can inject false data into the aggregate and make the base station accept it. Thus, while data aggregation improves the energy efficiency of a network, it complicates the existing security challenges.[66]
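As a concrete illustration of why in-network aggregation saves traffic, the sketch below simulates a small aggregation tree in which each node forwards a single (count, sum) pair instead of relaying every raw reading. The topology and readings are made-up example values, and the sketch deliberately ignores the encryption and integrity issues just described.

```python
# Minimal sketch of tree-based in-network aggregation (hypothetical topology).
# Each node forwards one (count, sum) pair upward instead of all raw readings,
# so the base station can still recover the exact network-wide average.

# parent -> children; node 0 is the base station (assumed example topology)
children = {0: [1, 2], 1: [3, 4], 2: [5, 6]}
readings = {1: 21.3, 2: 20.9, 3: 21.1, 4: 20.7, 5: 22.0, 6: 21.5}  # e.g. temperature

messages_sent = 0

def aggregate(node: int) -> tuple[int, float]:
    """Return (count, sum) of readings in the subtree rooted at `node`."""
    global messages_sent
    count, total = (1, readings[node]) if node in readings else (0, 0.0)
    for child in children.get(node, []):
        c, s = aggregate(child)
        messages_sent += 1          # one aggregated message per link
        count, total = count + c, total + s
    return count, total

count, total = aggregate(0)
print(f"average reading: {total / count:.2f}")
print(f"messages with aggregation: {messages_sent}")   # one per link: 6
# Without aggregation, each raw reading would be relayed hop by hop toward
# the sink, costing one message per reading per hop (10 messages here).
```

The saving grows with network depth and density, which is why the spatial correlation between neighboring readings is so attractive to exploit despite the security complications described above.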
https://en.wikipedia.org/wiki/Wireless_sensor_network
The ITU-T Recommendation E.212 defines mobile country codes (MCC) as well as mobile network codes (MNC). The mobile country code consists of three decimal digits and the mobile network code consists of two or three decimal digits (for example, an MNC of 001 is not the same as an MNC of 01). The first digit of the mobile country code identifies the geographic region (the digits 1 and 8 are not used): 0 is used for test networks, 2 for Europe, 3 for North America and the Caribbean, 4 for Asia and the Middle East, 5 for Australia and Oceania, 6 for Africa, 7 for South and Central America, and 9 for worldwide use. An MCC is used in combination with an MNC (a combination known as an "MCC/MNC tuple") to uniquely identify a mobile network operator (carrier) using the GSM (including GSM-R), UMTS, LTE, and 5G public land mobile networks. Some, but not all, CDMA, iDEN, and satellite mobile networks are identified with an MCC/MNC tuple as well. For WiMAX networks, a globally unique Broadband Operator ID can be derived from the MCC/MNC tuple.[1] TETRA networks use the mobile country code from ITU-T Recommendation E.212 together with a 14-bit binary mobile network code (T-MNC), of which only values between 0 and 9999 are used.[2] However, a TETRA network may be assigned an E.212 network code as well.[3] Some network operators do not have their own radio access network at all. These are called mobile virtual network operators (MVNOs) and are marked in the tables as such. Note that MVNOs without their own MCC/MNC (that is, those that share the MCC/MNC of their host network) are not listed here. The following tables attempt to provide a complete list of mobile network operators. Country information, including ISO 3166-1 alpha-2 country codes, is provided for completeness. Mostly for historical reasons, one E.212 MCC may correspond to multiple ISO country codes (e.g., MCC 362 corresponds to BQ, CW, and SX). Some operators also choose to use an MCC outside the geographic area to which it was assigned (e.g., Digicel uses the Jamaica MCC throughout the Caribbean). ITU-T updates the official list of mobile network codes in its Operational Bulletins, which are published twice a month.[4] ITU-T also publishes complete lists: as of January 2024, the list issued on 15 November 2023 was current, containing all MCC/MNC assignments made before that date.[5] The official list is often incomplete, as national MNC authorities do not forward changes to the ITU in a timely manner. The official list also does not provide additional details such as bands and technologies, and it may not list disputed territories such as Abkhazia or Kosovo.
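Because an MNC of "01" and an MNC of "001" are distinct codes, MCC/MNC tuples have to be handled as fixed-width strings rather than integers. The short sketch below splits the leading digits of an IMSI into its MCC and a candidate two- or three-digit MNC; the example IMSIs and the tiny lookup table are illustrative values only, not entries from the official ITU-T list.

```python
# Split the leading digits of an IMSI into MCC and MNC.
# MCC/MNC must stay as strings: "01" and "001" are different network codes.

# Tiny illustrative lookup table (example values, not the official ITU-T list).
KNOWN_PLMNS = {("262", "01"), ("262", "002"), ("310", "260")}

def split_imsi(imsi: str) -> tuple[str, str]:
    """Return the (MCC, MNC) tuple encoded at the start of an IMSI string."""
    if not imsi.isdigit() or len(imsi) < 5:
        raise ValueError("IMSI must be a decimal string of at least 5 digits")
    mcc = imsi[:3]                       # the MCC is always three digits
    for mnc_len in (3, 2):               # the MNC may be three or two digits
        mnc = imsi[3:3 + mnc_len]
        if (mcc, mnc) in KNOWN_PLMNS:
            return mcc, mnc
    # Fall back to a two-digit MNC when the PLMN is unknown (assumption).
    return mcc, imsi[3:5]

print(split_imsi("310260123456789"))   # ('310', '260')
print(split_imsi("262011234567890"))   # ('262', '01')
```

The lookup against a known-PLMN table is what resolves the ambiguity between two- and three-digit MNCs, which is exactly why the leading zero matters in the tuple.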
https://en.wikipedia.org/wiki/Mobile_network_code
TheUnited States 700 MHz FCC wirelessspectrum auction, officially known asAuction 73,[1]was started by theFederal Communications Commission(FCC) on January 24, 2008 for the rights to operate the 700 MHzradio frequencybandin theUnited States. The details of process were the subject of debate among severaltelecommunicationscompanies, includingVerizon Wireless,AT&T Mobility, as well as the Internet companyGoogle. Much of the debate swirled around the open access requirements set down by the Second Report and Order released by the FCC determining the process and rules for the auction. All bidding was required by law to commence by January 28.[2] Full-powerTV stationswere forcedto transition to digital broadcastingin order to free 108 MHz ofradio spectrumfor newerwirelessservices. Most analog broadcasts ceased on June 12, 2009. The 700 MHz spectrum was previously used for analogtelevision broadcasting, specificallyUHF channels 52 through 69. The FCC ruled that the 700 MHz spectrum would no longer be necessary for TV because of the improvedspectral efficiencyof digital broadcasts. Digital broadcasts allow TV channels to be broadcast onadjacent channelswithout having to leaveemptyTV channelsasguard bandsbetween them.[3]All broadcasters were required to move to the frequencies occupied by channels 2 through 51 as part of the digital TV transition. A similar reallocation was employed in 1989 to expandanalog cellphone service, having previously eliminated TV channels 70-83 at the uppermost UHF frequencies. This created an unusual situation where old TV tuning equipment was able tolisten tocellularphone calls, although such activity was made illegal and the FCC prohibited the sale of future devices with that capability. Some of the 700 MHzspectrum licenseswere already auctioned in Auctions 44 and 49. Paired channels 54/59 (lower-700 MHz block C) and unpaired channel 55 (block D) were sold and in some areas were already being used for broadcasting and Internet access. For example, QualcommMediaFLOin 2007 started using channel 55 for broadcastingmobile TVto cell phones in some markets.[4]Qualcomm later ended the service and sold (at a large profit) channel 55 nationwide to AT&T Mobility, along with channel 56 in theNortheast Corridorand much ofCalifornia.Dish Networkbought channel 56 (block E) licenses in the remainder of the nation'smedia markets, so far using it only for testingATSC-M/H. As of 2015[update], AT&T does not appear to be using block D or E (band class 29) yet, but plans to uselink aggregationfor increaseddownloadspeeds and capacity.[5] For the 700-MHz auction, the FCC designed a new multi-round process that limits the number of package bids that each bidder can submit (12 items and 12 package bids) and the prices at which they can be submitted, provides computationally intensive feedback prices similar to the pricing approach.[6]This package bidding process (which is often referred to ascombinatorial auctions) was the first of its kind to be used by the FCC in an actual auction. Bidders were allowed to bid on individual licenses or on an all-or-nothing bid which could be done up to twelve packages, which the bidder determined at any point in the auction. Doing the auction this way allowed the bidder to avoid the exposure problem when licenses are complements. The provisional winning bids are the set of consistent bids that maximize total revenues. The 700 MHz auction represented a good test-case for package bidding for two reasons. 
First, the 700 MHz auction only involves 12 licenses: 2 bands (one 10 MHz and one 20 MHz) in each of the 6 regions.[7]Secondly, prospective bidders had expressed interest in alternative packaging because some Internet service providers had different needs and the flexibility would benefit them. The FCC issued Public Notice DA00-1486 adopted and described the package bidding rules for the 700 MHz auction. The FCC's original proposal allowed only nine package bids: the six 30 MHz regional bids and three nationwide bids (10, 20, or 30 MHz). Although these nine packages were consistent with the expressed desires of many prospective bidders, others felt that the nine packages were too restrictive. The activity rule is unchanged, aside from a new definition of activity and a lower activity requirement of 50%. A bidder must be active on 50% of its current eligibility or its eligibility in the next round will be reduced to two times its activity. Bids made in different rounds were treated as mutually exclusive and a bidder wishing to add a license or package to its provisional winnings must renew the provisional winning bids in the current round. The FCC placed rules onpublic safetyfor the auction. 20 MHz of the valuable 700 MHz spectrum were set aside for the creation of a public/private partnership that will eventually roll out to a new nationwide broadband network tailored to the requirements of public safety. The FCC offered the commercial licensee extra spectrum adjacent to the public safety block that the licensee can use as it wants. The licensee is allowed to use whatever bandwidth that is available on the public safety side of the network to offer data services of their own.[8] In an effort to encouragenetwork neutrality, groups such asPublic Knowledge,MoveOn.org,Media Access Project, along with individuals such asCraigslistfounderCraig Newmark, and Harvard Law professorLawrence Lessigappealed to theFederal Communications Commissionto make the newly freed airways open access to the public.[9] Prior to the bidding process, Google asked that the spectrum be free to lease wholesale and the devices operating under the spectrum be open. At the time, many providers such as Verizon and AT&T used technological measures to block external applications. In return, Google guaranteed a minimum bid of $4.6 billion. Google's specific requests were the adoption of these policies: The result of the auction was that Google was outbid by others in the auction, triggering the open platform restrictions Google had asked for without having to actually purchase any licenses.[11]Google was actively involved in the bidding process although it had no intentions of actually winning any licenses.[12]The reason for this was that it could push up the price of the bidding process in order to reach the US$4.6B reserve price, therefore triggering the open source restrictions listed above. Had Google not been actively involved in the bidding process, it would have made sense for businesses to suppress their bidding strategies in order to trigger a new auction without the restrictions imposed by Google and the FCC.[11]Google's upfront payment of $287 million in order to participate in the bidding process was largely recovered after the auction since it had not actually purchased any licences. 
Despite this, Google ended paying interest costs, which resulted in an estimated loss of 13 million dollars.[11] The FCC ruled in favor of Google's requests.[13]Only two of the four requirements were put in place on the upper C-Block, open applications and open devices.[14]Google had wanted the purchaser to allow 'rental' of the blocks to different providers. In retaliation, on September 13, 2007, Verizon filed a lawsuit against the Federal Communications Commission to remove the provisions Google had asked for. Verizon called the rules "arbitrary and capricious, unsupported by substantial evidence and otherwise contrary to law."[15][16][17][18] On October 23, Verizon chose to drop the lawsuit after losing its appeal for a speedy resolution on October 3. However,CTIA - The Wireless Associationchallenged the same regulations in a lawsuit filed the same day.[19]On November 13, 2008, CTIA dropped its lawsuit against the FCC.[20] The auction divided UHF spectrum into five blocks:[21] The FCC placed very detailed rules about the process of this auction of the 698–806 MHz part of the wireless spectrum. Bids were anonymous and designed to promote competition. The aggregatereserve pricefor all block C licenses was approximately $4.6 billion.[22]The total reserve price for all five blocks being auctioned in Auction 73 was just over $10 billion.[22] Auction 73 generally went as planned by telecommunications analysts. In total, Auction 73 raised $19.592 billion.[23]Verizon WirelessandAT&T Mobilitytogether accounted for $16.3 billion of the total revenue.[24]Of the 214 approved applicants, 101 successfully purchased at least one license. Despite their heavy involvement with the auction,Googledid not purchase any licenses. However, Google did place the minimum bid on Block C licenses in order to ensure that the license would be required to be open-access.[25][26][27] The results for each of the five blocks: After the end of Auction 73, there remained some licenses that either went unsold or were defaulted on by the winning bidder from Blocks A and B. A new auction, Auction 92, was held on July 19, 2011 to sell the 700 MHz band licenses that were still available. The auction closed on July 28, 2011, with 7 bidders having won 16 licenses worth $19.8 million.[30] Six years after the end of the auction of 700 MHz spectrum, block A remained largely unused, althoughT-Mobile USAbegan to deploy its extended-range LTE in 2015 on licenses purchased from Verizon Wireless and cleared ofRF interferencein several areas by TV stations changing off of channel 51. This delay was caused by technical issues which wereregulatoryand possiblyanticompetitivein nature. After the March 2008 conclusion of Auction 73, Motorola initiated steps to have3GPPestablish a new industry standard (later designated as band class 17) that would be limited to the lower 700 MHz B and C blocks. In proposing band class 17, Motorola cited the need to address concerns about high-power transmissions of TV stations still broadcasting on channel 51 and the lower-700 MHz D and E blocks. As envisioned and ultimately adopted, the band class 17 standard allowsLTEoperations in only the lower-700 MHz B and C blocks using a specific signaling protocol that would filter out all other frequencies. Although band class 17 operates on two of the three blocks common to band class 12, band class 17 devices use more narrowelectronic filters, which have the effect of permitting a smaller range of frequencies topass throughthe filter. 
In addition, band class 12 and 17signalingprotocolsare not compatible.[31] The creation of two non-interoperable band classes has had numerous effects. Customers are unable to switch between a licensee deploying its service using band class 17 and a licensee that provides its service using band class 12 without purchasing a new device (even when the two operators use the same 2G and 3G technologies and bands), and band class 12 and 17 devices cannotroamon each other'scellular networks.[31] When deploying its LTE network,C Spire Wirelessdecided not to use A block because of the lack of band-12 support inmobile devices, issues with roaming, and the increased cost ofbase stationsdue to lack of supply.[32]US Cellular deployed a band class 12 LTE network, however not all of US Cellular's devices were able to access it. In particular, theiPhone 5SandiPhone 5Ccould not.[33]Other wireless telecommunication providers launched LTE band class 12 networks, but have not been able to offersmartphonesthat access them, instead resorting tofixedormobilewireless broadband modems.[34]As of April 2015, only three telecom providers were offering smartphones that use band 12: US Cellular, T-Mobile USA, and Nex-Tech Wireless. While smaller US telecommunication providers were upset at the lack of interoperability,AT&Tdefended the creation of band 17 and told the other carriers to seek interoperability withSprintandT-Mobileinstead.[35]However, in September 2013, AT&T changed its stance and committed to support and sell band-12 devices.[36] Following AT&T's commitment the Federal Communications Commission ruled:[31] Consistent with these commitments, AT&T anticipates that its focus and advocacy within the 3GPP standards setting process will shift to band-12-related projects and work streams. AT&T must place priority within the 3GPP RAN committee on the development of various band-12 carrier-aggregation scenarios. Upon completing implementation of the MFBI feature, AT&T anticipates that its focus on new standards related to the paired lower-700 MHz spectrum will be almost exclusively on band 12 configurations, features and capabilities.[31] Additionally,Dish Networkagreed to lower its maximumeffective radiated powerlevels on block E, which is on the loweradjacent channelto the downlink (tower-to-user transmissions) for block A. It did this in exchange for the FCC allowing it to operate the block as a one-way service, effectively making it a broadcast, although it could still be interactive through other means. Since Dish has already been experimentally operating it as asingle-frequency network, this should not have a significant effect on whatever service it might offer in the future.
https://en.wikipedia.org/wiki/United_States_2008_wireless_spectrum_auction
Intelecommunications,white spacesrefer toradio frequenciesallocated to abroadcastingservice but not used locally.[1]National and international bodies assign frequencies for specific uses and, in most cases, license the rights to broadcast over these frequencies. Thisfrequency allocationprocess creates abandplanwhich for technical reasons assigns white space between usedradio bandsorchannelsto avoidinterference. In this case, while the frequencies are unused, they have been specifically assigned for a purpose, such as aguard band. Most commonly however, these white spaces exist naturally between used channels, since assigning nearby transmissions to immediatelyadjacent channelswill cause destructive interference to both. In addition to white space assigned for technical reasons, there is also unusedradio spectrumwhich has either never been used, or is becoming free as a result of technical changes. In particular, theswitchovertodigital televisionfrees uplarge areas between about 50 MHz and 700 MHz. This is because digital transmissions can be packed into adjacent channels, while analog ones cannot. This means that the band can be compressed into fewer channels, while still allowing for more transmissions. In the United States, the abandoned television frequencies are primarily in the upperUHF700-megahertz band, coveringTV channels52 to 69 (698 to 806 MHz). U.S. television and its white spaces will continue to exist in UHF frequencies, as well asVHFfrequencies for which mobile users and white-space devices require larger antennas. In the rest of the world, the abandoned television channels are VHF, and the resulting large VHF white spaces are being reallocated for the worldwide (except the U.S.)digital radiostandardDABandDAB+, andDMB.[citation needed] Various proposals, includingIEEE 802.11af,IEEE 802.22[2][3]and those from the White Spaces Coalition, have advocated using white spaces left by the termination ofanalog TVto providewireless broadbandInternet access. A device intended to use these available channels is a white-spaces device (WSD). Such devices are designed to detect the presence of existing but unused areas of airwaves, such as those reserved foranalog television, and utilize them forWhite Space Internetsignals. Such technology is predicted to improve the availability ofbroadband InternetandWi-Fiin rural areas.[4][5] Early ideas proposed includingGNSSreceivers and programming each WSD with adatabaseof all TV stations in an area, however this would not have avoided other non-stationary or unlicensed users in the area, or any stations licensed or altered after the device was made. Additionally, these efforts may impactwireless microphones, medicaltelemetry, and other technologies that have historically relied on these open frequencies.[citation needed] Professional wireless microphones have used white space for decades previous to so-called white space devices.[1] LikeWi-Fi, TV whitespace is a wireless connection, but uses different frequency bands. TV white space operates in 470 MHz to 698 MHz, whilst Wi-Fi operates in 2.4 and 5 GHz bands. Data transfer speed depends on the model of the radio, the vendor, the antenna length, and other factors. New radios can support more than 50 Mbit/s. Wi-Fi speed similarly depends on several factors, such as range, line of sight, and so on, but may be as much as 1000 Mbit/s using theIEEE 802.11acstandard. Range is a crucial difference between Wi-Fi and TV white space. 
On average, TV white space range is 6 miles, but it can be less or more depending on factors such as noise and line of sight. One of the three main TV white space manufacturers, Carlson Wireless, advertises that its radios can reach up to 24.8 miles. Both technologies have low power consumption, 20 to 100 watts depending on the device, the antenna length, the vendor, and so on, and both meet government security standards such as FIPS 197 compliance (the Advanced Encryption Standard). While Wi-Fi works well in cities, TV white space works well in rural areas.[6] Microsoft, in a partnership with the communications authority of Argentina, Ente Nacional de Comunicaciones (ENACOM), planned to deliver wireless access to schools in the province of Mendoza on or around August 2017. Microsoft was to lend the white spaces hardware to ENACOM technicians, and national satellite operator ARSAT would act as the ISP. No further trial details have been released.[7] In August 2011, Industry Canada, the Canadian ministry for industry, launched a "Consultation on a Policy and Technical Framework for the Use of Non-Broadcasting Applications in the Television Broadcasting Bands Below 698 MHz".[8] The consultation closed on November 4, 2011. Submissions were received from a wide range of organisations in the telecoms and broadcast industries. A pilot project by Indigo Telecom/Microsoft and the Kenyan government is reportedly delivering bandwidth speeds of up to 16 Mbit/s to three rural communities that lack electricity, Male, Gakawa, and Laikipia, using a solar-powered network.[9] As of July 3, 2014, a pilot project called Citizen Connect, a collaboration between the Microsoft 4Afrika Initiative, the MyDigitalBridge Foundation, and the MCA-N (Millennium Challenge Account Namibia), was slated to deliver broadband Internet to "twenty-seven schools and seven circuit offices of the Ministry of Education in Omusati, Oshana and Ohangwena", using "TV White Space technology".[10][11] In 2014, Microsoft worked with the Philippine government to pilot a program for digitizing the management of remote fishermen.[12] After the FCC, Singapore's Info-communications Media Development Authority was the second regulator in the world to regulate TV white space, ahead of the UK and Canada. The Singapore efforts were driven mainly by the Singapore White Spaces Pilot Group (SWSPG),[13] founded by the Institute for Infocomm Research, Microsoft, and StarHub. The Institute for Infocomm Research subsequently spun off Whizpace[14] to commercialize TV white space radios using intellectual property developed at the institute since 2006. Google, in a partnership with the Independent Communications Authority of South Africa (ICASA), the CSIR Meraka Institute, the Wireless Access Providers Association (WAPA), and Carlson Wireless, delivered wireless access to 10 schools through 3 base stations at the campus of Stellenbosch University's Faculty of Medicine and Health Sciences in Tygerberg, Cape Town. An initial trial covered these 10 schools, with the aim of delivering affordable Internet to the selected schools in South Africa without TV interference and of spreading awareness about future TVWS technologies in South Africa.
The trial took place over 10 months, from March 25, 2013 to September 25, 2013.[15] A second trial involved providing point-to-point Internet connectivity to five rural secondary schools in Limpopo province, with equally good results.[16] ICASA subsequently issued regulations on the use of television white spaces in 2018.[17]Three temporary TV white space spectrum licenses were issued by ICASA in April 2020, response to the Covid-19 pandemic, in the 470–694 MHz band, to Mthinthe Communications, Levin Global & Morai Solutions.[18] Ofcom, the licensing body of spectrum in the UK, has made white-space free to use.[19][20] On June 29, 2011, one of the largest commercial tests of white space Wi-Fi was conducted in Cambridge, England. The trial was conducted by Microsoft using technology developed byAdaptrumand backed by a consortium of ISP's and tech companies includingNokia,BSkyB, theBBC, andBT, with the actual network hardware being provided byNeul. In the demonstration, the Adaptrum whitespace system provided the broadband IP connectivity allowing a client-side MicrosoftXboxto stream live HD videos from the Internet. Also as part of the demo, a live Xbox/Kinect video chat was established between two Xbox/Kinect units connected through the same TV whitespace connection. These applications were demonstrated under a highly challenging radio propagation environment with more than 120 dB link loss through buildings, foliage, walls, furniture, people etc. and with severe multipath effects.[21] In 2017, Microsoft further expanded their research to show that small cell LTE eNodeB's operating in TV White Space could be used to provide cost effective broadband to affordable housing residents.[22] Full poweranalog televisionbroadcasts, which operated between the 54MHzand 806 MHz (54–72, 76–88, 174–216, 470–608, and 614–806)[23]television frequencies (Channels 2-69), ceased operating on June 12, 2009 per a United Statesdigital switchovermandate. At that time, full power TV stations were required to switch to digital transmission and operate only between 54 MHz and 698 MHz. This is also the timetable that the White Spaces Coalition has set to begin offering wireless broadband services to consumers. The delay allows time for the United StatesFederal Communications Commission(FCC) to test the technology and make sure that it does not interfere with existing television broadcasts. Similar technologies could be used worldwide as much of the core technology is already in place.[24] Theatrical producersandsports franchiseshoped to derail or delay the decision, arguing that their own transmissions – whether from television signals or from wireless microphones used in live music performances – could face interference from new devices that use the white spaces. However, the FCC rejected their arguments, saying enough testing has been done, and through new regulations, possible interference will be minimized. 
More of the broadcast spectrum was needed for wireless broadband Internet access, and in March 2009, Massachusetts SenatorJohn Kerryintroduced a bill requiring a study of efficient use of the spectrum.[25]Academics have studied the matter and have promoted the idea of using computing technology to capture the benefits of the white space.[26] The White Spaces Coalition was formed in 2007 by eight large technology companies that planned to deliver high speed internet access beginning in February 2009 to United States consumers via existing white space in unused television frequencies between 54 MHz and 698 MHz (TV Channels 2-51). The coalition expected speeds of 80 Mbit/s and above, and 400 to 800 Mbit/s for white space short-range networking. The group includedMicrosoft,Google,Dell,HP,Intel,Philips,Earthlink, andSamsung Electro-Mechanics.[27] Many of the companies involved in the White Spaces Coalition were also involved in the Wireless Innovation Alliance.[28]Another group calling itself the White Space Alliance was formed in 2011.[29] Googlesponsored a campaign named Free the Airwaves with the purpose of switching over the white spaces that were cleared up in 2009 by theDTV conversionprocess by the FCC and converted to anun-licensed spectrumthat can be used byWi-Fi-like devices.[30][31]TheNational Association of Broadcastersdisapproved of the project because they claimed it would reduce the broadcast quality of their TV signals.[32] The Federal Communications Commission's Office of Engineering and Technology released a report dated July 31, 2007 with results from its investigation of two preliminary devices submitted. The report concluded that the devices did not reliably sense the presence of television transmissions or other incumbent users, hence are not acceptable for use in their current state and no further testing was deemed necessary.[33] However, on August 13, 2007, Microsoft filed a document with the FCC in which it described a meeting that its engineers had with FCC engineers from the Office of Engineering and Technology on August 9 and 10. At this meeting the Microsoft engineers showed results from their testing done with identical prototype devices and using identical testing methods that "detectedDTV signalsat a threshold of -114dBmin laboratory bench testing with 100 percent accuracy, performing exactly as expected." In the presence of FCC engineers, the Microsoft engineers took apart the device that the FCC had tested to find the cause of the poor performance. They found that "the scanner in the device had been damaged and operated at a severely degraded level" which explained the FCC unit's inability to detect when channels were occupied. It was also pointed out that the FCC was in possession of an identical backup prototype that was in perfect operating condition that they had not tested.[34] TV broadcasters and other incumbent users of this spectrum (both licensed and unlicensed, including makers of wireless audio systems) feared that their systems would no longer function properly if unlicensed devices were to operate in the same spectrum. However, the FCC's Office of Engineering and Technology released a report dated October 15, 2008, which evaluated prototype TV-band white spaces devices submitted by Adaptrum,The Institute for Infocomm Research, Motorola and Philips. 
The report concluded that these devices had met the burden of proof of concept in their ability to detect and avoid legacy transmissions,[35]although none of the tested devices adequately detected wireless microphone signals in the presence of a digital TV transmitter on an adjacent channel. On November 4, 2008, the FCC voted 5-0 to approve the unlicensed use of white space,[36]thereby silencing opposition from broadcasters. The actual Second Report and Order was released ten days later and contains some serious obstacles for the development and use ofTV Band Devicesas they are called by FCC. Devices must both consult anFCC-mandated databaseto determine which channels are available for use at a given location, and must also monitor the spectrum locally once every minute to confirm that no legacywireless microphones,video assistdevices or other emitters are present. If a single transmission is detected, the device may not transmit anywhere within the entire 6 MHz channel in which the transmission was received.[37]It was hoped that, within a year, this new access will lead to more reliableInternet accessand other technologies. On September 23, 2010, the FCC released a Memorandum Opinion and Order that determined the final rules for the use of white space for unlicensed wireless devices.[38]The new rules removed mandatory sensing requirements which greatly facilitates the use of the spectrum with geolocation based channel allocation. The final rules[39]adopt a proposal from the White Spaces Coalition for very strict emission rules that prevent the direct use of IEEE 802.11 (Wi-Fi) in a single channel effectively making the new spectrum unusable for Wi-Fi technologies.[citation needed] On February 27, 2009, theNational Association of Broadcasters(NAB) and the Association for Maximum Service Television asked a Federal court to shut down the FCC's authorization of white space wireless devices. The plaintiffs allege that portable, unlicensed personal devices operating in the same band as TV broadcasts have beenprovento cause interference despite FCC tests to the contrary. The lawsuit was filed in a United States Court of Appeals for the District of Columbia Circuit. Thepetition for reviewstates that the FCC's decision to allow white space personal devices "will have a direct adverse impact" on MSTV's and NAB's members, and that the Commission's decision is "arbitrary, capricious, and otherwise not in accordance with law.".[40]A Motion to Govern the case was due to be considered on February 7, 2011.[41]In May 2012, the NAB announced it was dropping its court challenge of rules that allow the unlicensed use of empty airwaves between existing broadcast channels.[42] On October 16, 2009, researchers atMicrosoft ResearchRedmond, Washington built and deployed a white space network calledWhiteFi.[43][44]In this network, multiple clients connected to a single access point over UHF frequencies. The deployment included experiments to test how much data could be sent before interference became audible to nearby wireless microphones. On February 24, 2010, officials inWilmington, North Carolina, which was the test market for thetransition to digital television, unveiled a new municipal wireless network, after a month of testing. The network used the white spaces made available by the end of analog TV. Spectrum Bridge was to work to make sure TV stations in the market do not receive interference ("no interference issues" have been reported). 
Thesmart citynetwork will not compete with cell phone companies but will instead be used for "national purposes", including government and energy monitoring. TV Band Service, made up of private investors, has put up cameras in parks, and along highways to show traffic. Other uses include water level and quality, turning off lights inball parks, and public Wi-Fi in certain areas. TV Band had an 18-month experimental license.[45] In 2011, theYurok TribeinHumboldt County, California began white space trials with telecommunications equipment providerCarlson WirelessofArcata, California.[46] In July 2013, West Virginia University became the first university in the United States to use vacant broadcast TV channels to provide the campus and nearby areas with wireless broadband Internet service.[47] Also in July 2013, the Port of Pittsburgh evaluated White Space spectrum for enhancing inland waterway safety and utility with telecommunications equipment providerMetric Systems CorporationofVista, California.[48]
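A hedged sketch of the two-step check described in the FCC's original 2008 rules: a TV-band device first asks a geolocation database which channels are free at its position and then senses the spectrum locally before transmitting, abandoning the whole 6 MHz channel if any protected signal is heard. The database contents, sensing values, threshold, and function names here are hypothetical placeholders, not a real FCC database API.

```python
# Hypothetical sketch of a TV-band white-space device selecting a channel:
# (1) consult a geolocation database of vacant channels, (2) sense locally,
# and refuse to use any 6 MHz channel where a protected signal is detected.
# The database, sensing values, and threshold below are invented placeholders.

import random

SENSING_THRESHOLD_DBM = -114.0   # assumed detection threshold for this sketch

def query_database(lat: float, lon: float) -> list[int]:
    """Stand-in for the FCC-mandated geolocation database lookup."""
    return [21, 27, 33, 41]       # made-up list of vacant TV channels here

def sense_channel(channel: int) -> float:
    """Stand-in for local spectrum sensing; returns measured power in dBm."""
    return random.uniform(-120.0, -100.0)

def pick_channel(lat: float, lon: float) -> int | None:
    for channel in query_database(lat, lon):
        if sense_channel(channel) < SENSING_THRESHOLD_DBM:
            return channel        # quiet: the whole 6 MHz channel may be used
        # otherwise a wireless microphone or TV signal may be present; skip it
    return None                   # no usable channel at this location

print(pick_channel(34.27, -77.95))
```

In practice the 2010 rules dropped the mandatory sensing step in favour of geolocation-based channel allocation alone, so a real device would rely mainly on the database query.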
https://en.wikipedia.org/wiki/White_spaces_(radio)
In telecommunications, Long-Term Evolution (LTE) is a standard for wireless broadband communication for cellular mobile devices and data terminals. It is considered a "transitional" 4G technology,[1] and is therefore also referred to as 3.95G, a step above 3G.[2] LTE is based on the 2G GSM/EDGE and 3G UMTS/HSPA standards. It improves on those standards' capacity and speed by using a different radio interface together with core network improvements.[3][4] LTE is the upgrade path for carriers with both GSM/UMTS networks and CDMA2000 networks. LTE has been succeeded by LTE Advanced, which is officially defined as a "true" 4G technology[5] and is also named "LTE+". The standard is developed by the 3GPP (3rd Generation Partnership Project) and is specified in its Release 8 document series, with minor enhancements described in Release 9. LTE has been marketed as 4G LTE and Advanced 4G, but the original version did not meet the technical criteria of a 4G wireless service, as specified in the 3GPP Release 8 and 9 document series for LTE Advanced. The requirements were set forth by the ITU-R organisation in the IMT Advanced specification; however, because of market pressure and the significant advances that WiMAX, Evolved High Speed Packet Access, and LTE bring to the original 3G technologies, ITU-R later decided that LTE and the aforementioned technologies can be called 4G technologies.[6] The LTE Advanced standard formally satisfies the ITU-R requirements for being considered IMT-Advanced.[7] To differentiate LTE Advanced and WiMAX-Advanced from earlier 4G technologies, ITU has defined them as "True 4G".[8][5] LTE stands for Long-Term Evolution[9] and is a registered trademark owned by ETSI (European Telecommunications Standards Institute) for the wireless data communications technology and the development of the GSM/UMTS standards; however, other nations and companies do play an active role in the LTE project. The goal of LTE was to increase the capacity and speed of wireless data networks using new DSP (digital signal processing) techniques and modulations that were developed around the turn of the millennium. A further goal was the redesign and simplification of the network architecture into an IP-based system with significantly reduced transfer latency compared with the 3G architecture. The LTE wireless interface is incompatible with 2G and 3G networks, so it must be operated on a separate radio spectrum. An LTE-like concept was first proposed in 1998, using the COFDM radio access technique to replace CDMA, with studies of its terrestrial use in the L band at 1428 MHz; LTE itself was proposed in 2004 by Japan's NTT Docomo, and studies on the standard officially commenced in 2005.[10] In May 2007, the LTE/SAE Trial Initiative (LSTI) alliance was founded as a global collaboration between vendors and operators with the goal of verifying and promoting the new standard in order to ensure the global introduction of the technology as quickly as possible.[11][12] The LTE standard was finalized in December 2008, and the first publicly available LTE service was launched by TeliaSonera in Oslo and Stockholm on December 14, 2009, as a data connection with a USB modem.
The LTE services were launched by major North American carriers as well, with the Samsung SCH-r900 being the world's first LTE Mobile phone starting on September 21, 2010,[13][14]and Samsung Galaxy Indulge being the world's first LTE smartphone starting on February 10, 2011,[15][16]both offered byMetroPCS, and theHTC ThunderBoltoffered by Verizon starting on March 17 being the second LTE smartphone to be sold commercially.[17][18]In Canada,Rogers Wirelesswas the first to launch LTE network on July 7, 2011, offering the Sierra Wireless AirCard 313U USB mobile broadband modem, known as the "LTE Rocket stick" then followed closely by mobile devices from both HTC and Samsung.[19]Initially, CDMA operators planned to upgrade to rival standards calledUMBandWiMAX, but major CDMA operators (such asVerizon,SprintandMetroPCSin the United States,BellandTelusin Canada,au by KDDIin Japan,SK Telecomin South Korea andChina Telecom/China Unicomin China) have announced instead they intend to migrate to LTE. The next version of LTE isLTE Advanced, which was standardized in March 2011.[20]Services commenced in 2013.[21]Additional evolution known asLTE Advanced Prohave been approved in year 2015.[22] The LTE specification provides downlink peak rates of 300 Mbit/s, uplink peak rates of 75 Mbit/s andQoSprovisions permitting a transferlatencyof less than 5msin theradio access network. LTE has the ability to manage fast-moving mobiles and supports multi-cast and broadcast streams. LTE supports scalable carrierbandwidths, from 1.4MHzto 20 MHz and supports bothfrequency division duplexing(FDD) andtime-division duplexing(TDD). The IP-based network architecture, called theEvolved Packet Core(EPC) designed to replace theGPRS Core Network, supports seamlesshandoversfor both voice and data to cell towers with older network technology such asGSM,UMTSandCDMA2000.[23]The simpler architecture results in lower operating costs (for example, eachE-UTRAcell will support up to four times the data and voice capacity supported by HSPA[24]). BecauseLTE frequencies and bandsdiffer from country to country, only multi-band phones can use LTE in all countries where it is supported. Most carriers supporting GSM or HSUPA networks can be expected to upgrade their networks to LTE at some stage. A complete list of commercial contracts can be found at:[61] The following is a list of the top 10 countries/territories by 4G LTE coverage as measured by OpenSignal.com in February/March 2019.[72][73] For the complete list of all the countries/territories, seelist of countries by 4G LTE penetration. Long-Term Evolution Time-Division Duplex(LTE-TDD), also referred to asTDD LTE, is a4Gtelecommunications technology and standard co-developed by an international coalition of companies, includingChina Mobile,Datang Telecom,Huawei,ZTE,Nokia Solutions and Networks,Qualcomm,Samsung, andST-Ericsson. It is one of the two mobile data transmission technologies of the Long-Term Evolution (LTE) technology standard, the other beingLong-Term Evolution Frequency-Division Duplex(LTE-FDD). While some companies refer to LTE-TDD as "TD-LTE" for familiarity withTD-SCDMA, there is no reference to that abbreviation anywhere in the 3GPP specifications.[74][75][76] There are two major differences between LTE-TDD and LTE-FDD: how data is uploaded and downloaded, and what frequency spectra the networks are deployed in. 
While LTE-FDD uses paired frequencies to upload and download data,[77] LTE-TDD uses a single frequency, alternating between uploading and downloading data over time.[78][79] The ratio between uploads and downloads on an LTE-TDD network can be changed dynamically, depending on whether more data needs to be sent or received.[80] LTE-TDD and LTE-FDD also operate on different frequency bands,[81] with LTE-TDD working better at higher frequencies, and LTE-FDD working better at lower frequencies.[82] Frequencies used for LTE-TDD range from 1850 MHz to 3800 MHz, with several different bands being used.[83] The LTE-TDD spectrum is generally cheaper to access, and has less traffic.[81] Further, the bands for LTE-TDD overlap with those used for WiMAX, which can easily be upgraded to support LTE-TDD.[81] Despite the differences in how the two types of LTE handle data transmission, LTE-TDD and LTE-FDD share 90 percent of their core technology, making it possible for the same chipsets and networks to use both versions of LTE.[81][84] A number of companies produce dual-mode chips or mobile devices, including Samsung and Qualcomm,[85][86] while the operators CMHK and Hi3G Access have developed dual-mode networks in Hong Kong and Sweden, respectively.[87] The creation of LTE-TDD involved a coalition of international companies that worked to develop and test the technology.[88] China Mobile was an early proponent of LTE-TDD,[81][89] along with other companies like Datang Telecom[88] and Huawei, which worked to deploy LTE-TDD networks and later developed technology allowing LTE-TDD equipment to operate in white spaces (frequency spectra between broadcast TV stations).[75][90] Intel also participated in the development, setting up an LTE-TDD interoperability lab with Huawei in China,[91] as did ST-Ericsson,[81] Nokia,[81] and Nokia Siemens (now Nokia Solutions and Networks),[75] which developed LTE-TDD base stations that increased capacity by 80 percent and coverage by 40 percent.[92] Qualcomm also participated, developing the world's first multi-mode chip, combining both LTE-TDD and LTE-FDD, along with HSPA and EV-DO.[86] Accelleran, a Belgian company, has also worked to build small cells for LTE-TDD networks.[93] Trials of LTE-TDD technology began as early as 2010, with Reliance Industries and Ericsson India conducting field tests of LTE-TDD in India, achieving 80 megabit-per-second download speeds and 20 megabit-per-second upload speeds.[94] By 2011, China Mobile had begun trials of the technology in six cities.[75] Although initially seen as a technology utilized by only a few countries, including China and India,[95] by 2011 international interest in LTE-TDD had expanded, especially in Asia, in part due to LTE-TDD's lower cost of deployment compared to LTE-FDD.[75] By the middle of that year, 26 networks around the world were conducting trials of the technology.[76] The Global TD-LTE Initiative (GTI) was also started in 2011, with founding partners China Mobile, Bharti Airtel, SoftBank Mobile, Vodafone, Clearwire, Aero2 and E-Plus.[96] In September 2011, Huawei announced it would partner with the Polish mobile provider Aero2 to develop a combined LTE-TDD and LTE-FDD network in Poland,[97] and by April 2012, ZTE Corporation had worked to deploy trial or commercial LTE-TDD networks for 33 operators in 19 countries.[87] In late 2012, Qualcomm worked extensively to deploy a commercial LTE-TDD network in India, and partnered with Bharti Airtel and Huawei to develop the first multi-mode LTE-TDD smartphone for India.[86]
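The uplink/downlink split described above can be made concrete with a short sketch. An LTE radio frame lasts 10 ms and consists of ten 1 ms subframes, each allocated to downlink, uplink, or a special switching role, and the operator's choice of pattern sets the effective download-to-upload ratio. The configuration table and helper function below are illustrative assumptions modelled on the shape of the 3GPP uplink-downlink configurations, not a normative copy of the specification.

    # Illustrative sketch of how an LTE-TDD frame divides its ten 1 ms subframes
    # between downlink (D), uplink (U) and special (S) subframes. The patterns
    # are modelled on the 3GPP TDD uplink-downlink configurations, but this
    # table is an illustrative approximation, not a normative copy of the spec.
    TDD_CONFIGS = {
        0: "DSUUUDSUUU",   # uplink-heavy
        1: "DSUUDDSUUD",
        2: "DSUDDDSUDD",
        5: "DSUDDDDDDD",   # downlink-heavy
    }

    def dl_ul_ratio(config_id: int) -> float:
        """Return the downlink:uplink subframe ratio for a given configuration."""
        pattern = TDD_CONFIGS[config_id]
        return pattern.count("D") / pattern.count("U")

    # A download-heavy deployment (e.g. video streaming) could pick config 5,
    # while an upload-heavy deployment could pick config 0.
    for cfg in sorted(TDD_CONFIGS):
        print(f"config {cfg}: {TDD_CONFIGS[cfg]}  DL:UL = {dl_ul_ratio(cfg):.1f}")

Because the split is just a choice of pattern, an operator can change it over time to follow traffic, which is the dynamic behaviour the text above attributes to LTE-TDD.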
In Japan, SoftBank Mobile launched LTE-TDD services in February 2012 under the name Advanced eXtended Global Platform (AXGP), marketed as SoftBank 4G. The AXGP band was previously used for Willcom's PHS service, and after PHS was discontinued in 2010 the PHS band was re-purposed for AXGP service.[98][99] In the U.S., Clearwire planned to implement LTE-TDD, with chip-maker Qualcomm agreeing to support Clearwire's frequencies on its multi-mode LTE chipsets.[100] With Sprint's acquisition of Clearwire in 2013,[77][101] the carrier began using these frequencies for LTE service on networks built by Samsung, Alcatel-Lucent, and Nokia.[102][103] As of March 2013, 156 commercial 4G LTE networks existed, including 142 LTE-FDD networks and 14 LTE-TDD networks.[88] As of November 2013, the South Korean government planned to allow a fourth wireless carrier in 2014, which would provide LTE-TDD services,[79] and in December 2013, LTE-TDD licenses were granted to China's three mobile operators, allowing commercial deployment of 4G LTE services.[104] In January 2014, Nokia Solutions and Networks indicated that it had completed a series of tests of voice over LTE (VoLTE) calls on China Mobile's TD-LTE network.[105] The next month, Nokia Solutions and Networks and Sprint announced that they had demonstrated throughput speeds of 2.6 gigabits per second using an LTE-TDD network, surpassing the previous record of 1.6 gigabits per second.[106] Much of the LTE standard addresses the upgrading of 3G UMTS to what will eventually be 4G mobile communications technology. A large amount of the work is aimed at simplifying the architecture of the system, as it transitions from the existing UMTS combined circuit- and packet-switched network to an all-IP flat architecture. E-UTRA is the air interface of LTE. The LTE standard supports only packet switching with its all-IP network. Voice calls in GSM, UMTS, and CDMA2000 are circuit switched, so with the adoption of LTE, carriers will have to re-engineer their voice call networks.[108] Four different approaches sprang up, several of which are discussed below. One additional approach that is not initiated by operators is the use of over-the-top content (OTT) services, using applications like Skype and Google Talk to provide LTE voice service.[109] Most major backers of LTE preferred and promoted VoLTE from the beginning. The lack of software support in initial LTE devices, as well as in core network devices, however, led a number of carriers to promote VoLGA (Voice over LTE Generic Access) as an interim solution.[110] The idea was to use the same principles as GAN (Generic Access Network, also known as UMA or Unlicensed Mobile Access), which defines the protocols through which a mobile handset can perform voice calls over a customer's private Internet connection, usually over wireless LAN. VoLGA, however, never gained much support, because VoLTE (IMS) promises much more flexible services, albeit at the cost of having to upgrade the entire voice call infrastructure. VoLTE may require Single Radio Voice Call Continuity (SRVCC) to be able to smoothly perform a handover to a 2G or 3G network in case of poor LTE signal quality.[111] While the industry has standardized on VoLTE, early LTE deployments required carriers to introduce circuit-switched fallback as a stopgap measure. When placing or receiving a voice call on a non-VoLTE-enabled network or device, LTE handsets fall back to older 2G or 3G networks for the duration of the call.
To ensure compatibility, 3GPP demands at least the AMR-NB codec (narrowband), but the recommended speech codec for VoLTE is Adaptive Multi-Rate Wideband, also known as HD Voice. This codec is mandated in 3GPP networks that support 16 kHz sampling.[112] Fraunhofer IIS has proposed and demonstrated "Full-HD Voice", an implementation of the AAC-ELD (Advanced Audio Coding – Enhanced Low Delay) codec for LTE handsets.[113] Where previous cell phone voice codecs only supported frequencies up to 3.5 kHz, and upcoming wideband audio services branded as HD Voice support up to 7 kHz, Full-HD Voice supports the entire bandwidth range from 20 Hz to 20 kHz. For end-to-end Full-HD Voice calls to succeed, however, both the caller's and recipient's handsets, as well as the networks, have to support the feature.[114] The LTE standard covers a range of many different bands, each of which is designated by both a frequency and a band number. As a result, phones from one country may not work in other countries. Users will need a multi-band capable phone for roaming internationally. According to the European Telecommunications Standards Institute's (ETSI) intellectual property rights (IPR) database, about 50 companies had declared, as of March 2012, holding essential patents covering the LTE standard.[121] However, ETSI has made no investigation into the correctness of the declarations,[121] so "any analysis of essential LTE patents should take into account more than ETSI declarations."[122] Independent studies have found that about 3.3 to 5 percent of all revenues from handset manufacturers are spent on standard-essential patents. This is less than the combined published rates, due to reduced-rate licensing agreements, such as cross-licensing.[123][124][125]
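To illustrate the band fragmentation mentioned above, the following sketch checks whether a handset and a visited network share at least one LTE band, which is the practical condition for a phone being usable when roaming. The band subset, the function name, and the example sets are illustrative assumptions; the full 3GPP band table is much longer.

    # Minimal sketch: roaming only works if the handset supports at least one
    # of the bands the visited network actually transmits on. The band list is
    # a small illustrative subset of the full 3GPP band table.
    COMMON_BANDS_MHZ = {
        1: 2100,   # widely used in Europe and Asia
        3: 1800,
        7: 2600,
        20: 800,   # common European low-band
    }

    def can_roam(handset_bands: set[int], network_bands: set[int]) -> bool:
        """True if the handset and the visited network share any LTE band."""
        return bool(handset_bands & network_bands)

    single_band_phone = {7}
    multi_band_phone = {1, 3, 7, 20}
    visited_network = {3, 20}

    print(can_roam(single_band_phone, visited_network))  # False - no shared band
    print(can_roam(multi_band_phone, visited_network))   # True

This is why the text above notes that only multi-band phones can use LTE in every country where it is deployed.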
https://en.wikipedia.org/wiki/3GPP_Long_Term_Evolution
SNOW is a family of word-based synchronous stream ciphers developed by Thomas Johansson and Patrik Ekdahl at Lund University. They have a 512-bit linear feedback shift register at their core, followed by a non-linear output state machine with a few additional words of state. SNOW 1.0, SNOW 2.0, and SNOW 3G use a shift register of 16 32-bit words and a 32-bit add-rotate-XOR (ARX) output transformation with 2 or 3 words of state. Each iteration advances the shift register by 32 bits and produces 32 bits of output. SNOW-V and SNOW-Vi use a shift register of 32 16-bit words (designed to be implemented as four 128-bit SIMD registers) which is advanced by 16 bits per iteration. Eight LFSR iterations can be performed simultaneously using SIMD operations, after which one output transformation step is performed, producing 128 bits of output. The output transformation uses the Advanced Encryption Standard (AES) round function (commonly implemented in hardware on recent processors) and maintains 2 additional 128-bit words of state. SNOW 1.0, originally simply SNOW, was submitted to the NESSIE project.[1] The cipher has no known intellectual property or other restrictions. The cipher works on 32-bit words and supports both 128- and 256-bit keys. The cipher consists of a combination of an LFSR and a finite-state machine (FSM), where the LFSR also feeds the next-state function of the FSM. The cipher has a short initialization phase and very good performance on both 32-bit processors and in hardware. During the evaluation, weaknesses were discovered, and as a result SNOW was not included in the NESSIE suite of algorithms. The authors developed a new version, version 2.0 of the cipher, that addresses the weaknesses and improves the performance.[2] During the ETSI SAGE evaluation, the design was further modified to increase its resistance against algebraic attacks, with the result named SNOW 3G.[3] It has been found that related keys exist for both SNOW 2.0 and SNOW 3G,[4] allowing attacks against SNOW 2.0 in the related-key model. SNOW has been used in the eSTREAM project as a reference cipher for performance evaluation. SNOW 2.0 is one of the stream ciphers chosen for the ISO/IEC standard ISO/IEC 18033-4.[5] SNOW 3G[6] is chosen as the stream cipher for the 3GPP encryption algorithms UEA2 and UIA2.[7] SNOW-V was an extensive redesign published in 2019,[8] designed to match 5G cellular network speeds by generating 128 bits of output per iteration. SNOW-Vi[9] was tweaked for even higher speed using small changes to the LFSR; the output transformation is identical.
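The word-oriented LFSR-plus-FSM construction described above can be sketched in a few lines. The toy cipher below mirrors the overall SNOW 2.0 shape (a 16-word, 512-bit shift register feeding a two-register finite-state machine that emits one 32-bit keystream word per step), but the feedback taps, the FSM update, and the key loading are simplified placeholders rather than the real SNOW 2.0 or SNOW 3G definitions.

    # Structural sketch of a SNOW-like word-based stream cipher: a 16-word
    # (512-bit) LFSR feeds a small finite-state machine, and each step emits
    # one 32-bit keystream word. Taps and update functions are simplified
    # placeholders, NOT the real SNOW 2.0 / SNOW 3G definitions.
    MASK32 = 0xFFFFFFFF

    def rotl32(x: int, r: int) -> int:
        return ((x << r) | (x >> (32 - r))) & MASK32

    class ToySnow:
        def __init__(self, key_words):
            # Load 16 32-bit words of state from the key material (simplified;
            # the real ciphers run a dedicated initialization phase).
            self.s = [w & MASK32 for w in key_words][:16]
            self.r1, self.r2 = 0, 0          # two FSM registers, as in SNOW 2.0

        def next_word(self) -> int:
            s = self.s
            # FSM output combines the newest LFSR word with both registers
            # (add-rotate-xor style, a stand-in for the real FSM).
            fsm_out = ((s[15] + self.r1) & MASK32) ^ self.r2
            z = fsm_out ^ s[0]               # keystream word
            # Update the FSM registers.
            self.r1, self.r2 = (s[5] + self.r2) & MASK32, rotl32(self.r1, 7)
            # Advance the LFSR by one 32-bit word (placeholder feedback taps).
            feedback = s[0] ^ rotl32(s[2], 5) ^ s[11]
            self.s = s[1:] + [feedback]
            return z

    cipher = ToySnow(range(1, 17))
    keystream = [cipher.next_word() for _ in range(4)]
    print([hex(w) for w in keystream])

In the actual ciphers the LFSR operates over GF(2^32) with a fixed feedback polynomial and the FSM uses S-box based substitutions; those details, which the sketch deliberately omits, are what give SNOW its cryptographic strength.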
https://en.wikipedia.org/wiki/SNOW