A race condition or race hazard is the condition of an electronics, software, or other system where the system's substantive behavior is dependent on the sequence or timing of other uncontrollable events, leading to unexpected or inconsistent results. It becomes a bug when one or more of the possible behaviors is undesirable.
The term race condition was already in use by 1954, for example in David A. Huffman's doctoral thesis "The synthesis of sequential switching circuits".[1]
Race conditions can occur especially in logic circuits or in multithreaded or distributed software programs. Using mutual exclusion can prevent race conditions in distributed software systems.
A typical example of a race condition may occur when a logic gate combines signals that have traveled along different paths from the same source. The inputs to the gate can change at slightly different times in response to a change in the source signal. The output may, for a brief period, change to an unwanted state before settling back to the designed state. Certain systems can tolerate such glitches, but if this output functions as a clock signal for further systems that contain memory, for example, the system can rapidly depart from its designed behaviour (in effect, the temporary glitch becomes a permanent glitch).
Consider, for example, a two-input AND gate fed with a logic signal A on one input and its negation, ¬A (the ¬ is a Boolean negation), on the other, so that output = A ∧ ¬A. In theory, such a gate never outputs a true value: A ∧ ¬A ≠ 1. If, however, changes in the value of A take longer to propagate to the second input than to the first when A changes from false to true, then a brief period will ensue during which both inputs are true, and so the gate's output will also be true.[2]
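For illustration, the following minimal Java sketch (a toy discrete-time model, not a circuit simulator; all names are invented for this example) delays the inverted input by one time step, producing a one-step true glitch on the output after A rises:

```java
public class AndGateGlitch {
    public static void main(String[] args) {
        boolean[] a = {false, false, true, true, true}; // A rises at t = 2
        boolean notA = true;  // inverter output, lags A by one time step
        for (int t = 0; t < a.length; t++) {
            boolean output = a[t] && notA;  // two-input AND gate
            System.out.printf("t=%d A=%b notA=%b output=%b%n",
                    t, a[t], notA, output);
            notA = !a[t];  // propagation delay: new value visible next step
        }
    }
}
```

At t = 2 the direct input is already true while the delayed inverter still outputs true, so the AND gate briefly outputs true before settling back to false.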
A practical example of a race condition can occur when logic circuitry is used to detect certain outputs of a counter. If all the bits of the counter do not change exactly simultaneously, there will be intermediate patterns that can trigger false matches.
A critical race condition occurs when the order in which internal variables are changed determines the eventual state that the state machine will end up in.
A non-critical race condition occurs when the order in which internal variables are changed does not determine the eventual state that the state machine will end up in.
A static race condition occurs when a signal and its complement are combined.
A dynamic race condition occurs when a single input transition results in multiple output transitions when only one is intended. Dynamic races are due to interaction between gates and can be eliminated by using no more than two levels of gating.
An essential race condition occurs when an input has two transitions in less than the total feedback propagation time. Essential races are sometimes cured using inductive delay line elements to effectively increase the time duration of an input signal.
Design techniques such as Karnaugh maps encourage designers to recognize and eliminate race conditions before they cause problems. Often logic redundancy can be added to eliminate some kinds of races.
As well as these problems, some logic elements can enter metastable states, which create further problems for circuit designers.
A race condition can arise in software when a computer program has multiple code paths that are executing at the same time. If the multiple code paths take a different amount of time than expected, they can finish in a different order than expected, which can cause software bugs due to unanticipated behavior. A race can also occur between two programs, resulting in security issues.
Critical race conditions cause invalid execution and software bugs. Critical race conditions often happen when the processes or threads depend on some shared state. Operations upon shared states are done in critical sections that must be mutually exclusive. Failure to obey this rule can corrupt the shared state.
A data race is a type of race condition. Data races are important parts of various formal memory models. The memory models defined in the C11 and C++11 standards specify that a C or C++ program containing a data race has undefined behavior.[3][4]
A race condition can be difficult to reproduce and debug because the end result is nondeterministic and depends on the relative timing between interfering threads. Problems of this nature can therefore disappear when running in debug mode, adding extra logging, or attaching a debugger. A bug that disappears like this during debugging attempts is often referred to as a "Heisenbug". It is therefore better to avoid race conditions by careful software design.
Assume that two threads each increment the value of a global integer variable by 1. Ideally, the following sequence of operations would take place: the first thread reads the value (0), increments it, and writes back 1; the second thread then reads the value (1), increments it, and writes back 2.
In the case shown above, the final value is 2, as expected. However, if the two threads run simultaneously without locking or synchronization (via semaphores), the outcome of the operation could be wrong. The alternative sequence of operations below demonstrates this scenario: the first thread reads the value (0); the second thread also reads the value (0); the first thread increments its copy and writes back 1; the second thread increments its own stale copy and also writes back 1.
In this case, the final value is 1 instead of the expected result of 2. This occurs because the increment operations are not mutually exclusive. Mutually exclusive operations are those that cannot be interrupted while accessing some resource such as a memory location.
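The scenario can be reproduced directly; the following is a minimal Java sketch (class and loop counts are illustrative). Each counter++ compiles to a separate read, add, and write, so interleavings routinely lose updates; making the increment synchronized, or using java.util.concurrent.atomic.AtomicInteger, restores mutual exclusion.

```java
public class IncrementRace {
    static int counter = 0;  // shared state, accessed without synchronization

    public static void main(String[] args) throws InterruptedException {
        Runnable work = () -> {
            for (int i = 0; i < 100_000; i++) {
                counter++;   // read-modify-write: NOT atomic
            }
        };
        Thread t1 = new Thread(work);
        Thread t2 = new Thread(work);
        t1.start();
        t2.start();
        t1.join();
        t2.join();
        // Typically prints less than the expected 200000.
        System.out.println(counter);
    }
}
```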
Not everyone regards data races as a subset of race conditions.[5] The precise definition of data race is specific to the formal concurrency model being used, but typically it refers to a situation where a memory operation in one thread could potentially attempt to access a memory location at the same time that a memory operation in another thread is writing to that memory location, in a context where this is dangerous. This implies that a data race is different from a race condition as it is possible to have nondeterminism due to timing even in a program without data races, for example, in a program in which all memory accesses use only atomic operations.
This can be dangerous because on many platforms, if two threads write to a memory location at the same time, it may be possible for the memory location to end up holding a value that is some arbitrary and meaningless combination of the bits representing the values that each thread was attempting to write; this could result in memory corruption if the resulting value is one that neither thread attempted to write (sometimes this is called a 'torn write'). Similarly, if one thread reads from a location while another thread is writing to it, it may be possible for the read to return a value that is some arbitrary and meaningless combination of the bits representing the value that the memory location held before the write, and of the bits representing the value being written.
On many platforms, special memory operations are provided for simultaneous access; in such cases, typically simultaneous access using these special operations is safe, but simultaneous access using other memory operations is dangerous. Sometimes such special operations (which are safe for simultaneous access) are called atomic or synchronization operations, whereas the ordinary operations (which are unsafe for simultaneous access) are called data operations. This is probably why the term is data race; on many platforms, where there is a race condition involving only synchronization operations, such a race may be nondeterministic but otherwise safe; but a data race could lead to memory corruption or undefined behavior.
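As a hedged Java sketch of this distinction (the names are illustrative): the method below contains no data race, since every access to the shared variable goes through an atomic operation, yet it still contains a race condition, because the check and the update are not one indivisible step.

```java
import java.util.concurrent.atomic.AtomicInteger;

public class RaceWithoutDataRace {
    static final AtomicInteger balance = new AtomicInteger(100);

    // Called concurrently by several threads. Each get() and set() is an
    // atomic (synchronization) operation, so no data race exists. But two
    // threads can both pass the check before either writes, overdrawing
    // the balance: a race condition without a data race.
    static void withdraw(int amount) {
        if (balance.get() >= amount) {             // check
            balance.set(balance.get() - amount);   // act, possibly on a stale value
        }
        // A correct version would loop on balance.compareAndSet(...).
    }
}
```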
The precise definition of data race differs across formal concurrency models. This matters because concurrent behavior is often non-intuitive and so formal reasoning is sometimes applied.
The C++ standard, in draft N4296 (2014-11-19), defines data race as follows in section 1.10.23 (page 14):[6]
Two actions are potentially concurrent if
- they are performed by different threads, or
- they are unsequenced, and at least one is performed by a signal handler.
The execution of a program contains adata raceif it contains two potentially concurrent conflicting actions, at least one of which is not atomic, and neither happens before the other, except for the special case for signal handlers described below [omitted]. Any such data race results in undefined behavior.
The parts of this definition relating to signal handlers are idiosyncratic to C++ and are not typical of definitions of data race.
The paper Detecting Data Races on Weak Memory Systems[7] provides a different definition:
"two memory operations conflict if they access the same location and at least one of them is a write operation ...
"Two memory operations, x and y, in a sequentially consistent execution form a race 〈x,y〉, iff x and y conflict, and they are not ordered by the hb1 relation of the execution. The race 〈x,y〉, is a data race iff at least one of x or y is a data operation.
Here we have two memory operations accessing the same location, one of which is a write.
The hb1 relation is defined elsewhere in the paper, and is an example of a typical "happens-before" relation; intuitively, if we can prove that we are in a situation where one memory operation X is guaranteed to be executed to completion before another memory operation Y begins, then we say that "X happens-before Y". If neither "X happens-before Y" nor "Y happens-before X", then we say that X and Y are "not ordered by the hb1 relation". So, the clause "... and they are not ordered by the hb1 relation of the execution" can be intuitively translated as "... and X and Y are potentially concurrent".
The paper considers dangerous only those situations in which at least one of the memory operations is a "data operation"; elsewhere, the paper also defines a class of "synchronization operations" which are safe for potentially simultaneous use, in contrast to "data operations".
The Java Language Specification[8] provides a different definition:
Two accesses to (reads of or writes to) the same variable are said to be conflicting if at least one of the accesses is a write ... When a program contains two conflicting accesses (§17.4.1) that are not ordered by a happens-before relationship, it is said to contain a data race ... a data race cannot cause incorrect behavior such as returning the wrong length for an array.
A critical difference between the C++ approach and the Java approach is that in C++, a data race is undefined behavior, whereas in Java, a data race merely affects "inter-thread actions".[8] This means that in C++, an attempt to execute a program containing a data race could (while still adhering to the spec) crash or could exhibit insecure or bizarre behavior, whereas in Java, an attempt to execute a program containing a data race may produce undesired concurrency behavior but is otherwise (assuming that the implementation adheres to the spec) safe.
An important facet of data races is that in some contexts, a program that is free of data races is guaranteed to execute in a sequentially consistent manner, greatly easing reasoning about the concurrent behavior of the program. Formal memory models that provide such a guarantee are said to exhibit an "SC for DRF" (Sequential Consistency for Data Race Freedom) property. This approach has been said to have achieved recent consensus (presumably compared to approaches which guarantee sequential consistency in all cases, or approaches which do not guarantee it at all).[9]
For example, in Java, this guarantee is directly specified:[8]
A program is correctly synchronized if and only if all sequentially consistent executions are free of data races.
If a program is correctly synchronized, then all executions of the program will appear to be sequentially consistent (§17.4.3).
This is an extremely strong guarantee for programmers. Programmers do not need to reason about reorderings to determine that their code contains data races. Therefore they do not need to reason about reorderings when determining whether their code is correctly synchronized. Once the determination that the code is correctly synchronized is made, the programmer does not need to worry that reorderings will affect his or her code.
A program must be correctly synchronized to avoid the kinds of counterintuitive behaviors that can be observed when code is reordered. The use of correct synchronization does not ensure that the overall behavior of a program is correct. However, its use does allow a programmer to reason about the possible behaviors of a program in a simple way; the behavior of a correctly synchronized program is much less dependent on possible reorderings. Without correct synchronization, very strange, confusing and counterintuitive behaviors are possible.
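A common Java illustration of these reordering hazards, as a hedged sketch (the field names are illustrative): without volatile, the writer's two plain writes may be reordered or become visible out of order, so the reader could observe ready as true while data is still 0; marking ready volatile creates a happens-before edge, making the program data-race-free and hence sequentially consistent.

```java
public class SafePublication {
    static int data = 0;
    static volatile boolean ready = false;  // without 'volatile': data race

    static void writer() {        // runs on thread 1
        data = 42;
        ready = true;             // volatile write publishes 'data' as well
    }

    static void reader() {        // runs on thread 2
        if (ready) {              // volatile read sees the writes above
            System.out.println(data);  // guaranteed to print 42
        }
    }
}
```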
By contrast, a draft C++ specification does not directly require an SC for DRF property, but merely observes that there exists a theorem providing it:
[Note: It can be shown that programs that correctly use mutexes and memory_order_seq_cst operations to prevent all data races and use no other synchronization operations behave as if the operations executed by their constituent threads were simply interleaved, with each value computation of an object being taken from the last side effect on that object in that interleaving. This is normally referred to as "sequential consistency". However, this applies only to data-race-free programs, and data-race-free programs cannot observe most program transformations that do not change single-threaded program semantics. In fact, most single-threaded program transformations continue to be allowed, since any program that behaves differently as a result must perform an undefined operation. — end note]
Note that the C++ draft specification admits the possibility of programs that are valid but use synchronization operations with a memory_order other than memory_order_seq_cst, in which case the result may be a program which is correct but for which no guarantee of sequential consistency is provided. In other words, in C++, some correct programs are not sequentially consistent. This approach is thought to give C++ programmers the freedom to choose faster program execution at the cost of giving up ease of reasoning about their program.[9]
There are various theorems, often provided in the form of memory models, that provide SC for DRF guarantees given various contexts. The premises of these theorems typically place constraints upon both the memory model (and therefore upon the implementation), and also upon the programmer; that is to say, typically it is the case that there are programs which do not meet the premises of the theorem and which could not be guaranteed to execute in a sequentially consistent manner.
The DRF1 memory model[10] provides SC for DRF and allows the optimizations of the WO (weak ordering), RCsc (release consistency with sequentially consistent special operations), VAX memory model, and data-race-free-0 memory models. The PLpc memory model[11] provides SC for DRF and allows the optimizations of the TSO (total store order), PSO (partial store order), PC (processor consistency), and RCpc (release consistency with processor consistency special operations) models. DRFrlx[12] provides a sketch of an SC for DRF theorem in the presence of relaxed atomics.
Many software race conditions have associated computer security implications. A race condition allows an attacker with access to a shared resource to cause other actors that utilize that resource to malfunction, resulting in effects including denial of service[13] and privilege escalation.[14][15]
A specific kind of race condition involves checking for a predicate (e.g. for authentication), then acting on the predicate, while the state can change between the time of check and the time of use. When this kind of bug exists in security-sensitive code, a security vulnerability called a time-of-check-to-time-of-use (TOCTTOU) bug is created.
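A hedged Java sketch of the pattern (the path is illustrative): the check and the use are separate steps, so an attacker who can replace the file, for example with a symbolic link to a sensitive file, in the window between them defeats the check.

```java
import java.io.File;
import java.io.FileWriter;
import java.io.IOException;

public class ToctouSketch {
    public static void main(String[] args) throws IOException {
        File file = new File("/tmp/app.log");   // illustrative path
        if (file.canWrite()) {                  // time of check
            // Window of vulnerability: another process may swap
            // /tmp/app.log for a link to a sensitive file here.
            try (FileWriter out = new FileWriter(file)) {  // time of use
                out.write("log entry\n");
            }
        }
    }
}
```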
Race conditions are also intentionally used to create hardware random number generators and physically unclonable functions (PUFs).[16][citation needed] PUFs can be created by designing circuit topologies with identical paths to a node and relying on manufacturing variations to randomly determine which paths will complete first. By measuring each manufactured circuit's specific set of race condition outcomes, a profile can be collected for each circuit and kept secret in order to later verify a circuit's identity.
Two or more programs may collide in their attempts to modify or access a file system, which can result in data corruption or privilege escalation.[14] File locking provides a commonly used solution. A more cumbersome remedy involves organizing the system in such a way that one unique process (running a daemon or the like) has exclusive access to the file, and all other processes that need to access the data in that file do so only via interprocess communication with that one process. This requires synchronization at the process level.
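A minimal Java sketch of the file-locking approach (the file name is illustrative): an exclusive lock obtained through java.nio makes a competing writer block rather than interleave its update.

```java
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.channels.FileLock;
import java.nio.charset.StandardCharsets;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class LockedWrite {
    public static void main(String[] args) throws Exception {
        Path path = Path.of("shared.dat");   // illustrative shared file
        try (FileChannel channel = FileChannel.open(path,
                StandardOpenOption.CREATE, StandardOpenOption.WRITE)) {
            try (FileLock lock = channel.lock()) {  // blocks until exclusive
                channel.write(ByteBuffer.wrap(
                        "update\n".getBytes(StandardCharsets.UTF_8)));
            }                                       // lock released here
        }
    }
}
```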
A different form of race condition exists in file systems where unrelated programs may affect each other by suddenly using up available resources such as disk space, memory space, or processor cycles. Software not carefully designed to anticipate and handle this race situation may then become unpredictable. Such a risk may be overlooked for a long time in a system that seems very reliable. But eventually enough data may accumulate or enough other software may be added to critically destabilize many parts of a system. An example of this occurred with the near loss of the Mars rover Spirit not long after landing, when deleted file entries caused the file system library to consume all available memory space.[17] A solution is for software to request and reserve all the resources it will need before beginning a task; if this request fails, then the task is postponed, avoiding the many points where failure could have occurred. Alternatively, each of those points can be equipped with error handling, or the success of the entire task can be verified afterwards, before continuing. A more common approach is to simply verify that enough system resources are available before starting a task; however, this may not be adequate because in complex systems the actions of other running programs can be unpredictable.
In networking, consider a distributed chat network like IRC, where a user who starts a channel automatically acquires channel-operator privileges. If two users on different servers, on different ends of the same network, try to start the same-named channel at the same time, each user's respective server will grant channel-operator privileges to each user, since neither server will yet have received the other server's signal that it has allocated that channel. (This problem has been largely solved by various IRC server implementations.)
In this case of a race condition, the concept of the "shared resource" covers the state of the network (what channels exist, as well as what users started them and therefore have what privileges), which each server can freely change as long as it signals the other servers on the network about the changes so that they can update their conception of the state of the network. However, the latency across the network makes possible the kind of race condition described. In this case, heading off race conditions by imposing a form of control over access to the shared resource—say, appointing one server to control who holds what privileges—would mean turning the distributed network into a centralized one (at least for that one part of the network operation).
Race conditions can also exist when a computer program is written with non-blocking sockets, in which case the performance of the program can be dependent on the speed of the network link.
Software flaws in life-critical systems can be disastrous. Race conditions were among the flaws in the Therac-25 radiation therapy machine, which led to the death of at least three patients and injuries to several more.[18]
Another example is the energy management system provided by GE Energy and used by Ohio-based FirstEnergy Corp (among other power facilities). A race condition existed in the alarm subsystem; when three sagging power lines were tripped simultaneously, the condition prevented alerts from being raised to the monitoring technicians, delaying their awareness of the problem. This software flaw eventually led to the North American blackout of 2003.[19] GE Energy later developed a software patch to correct the previously undiscovered error.
Many software tools exist to help detect race conditions in software. They can be largely categorized into two groups: static analysis tools and dynamic analysis tools.
Thread Safety Analysis is a static analysis tool for annotation-based intra-procedural static analysis, originally implemented as a branch of gcc, and now reimplemented in Clang, supporting PThreads.[20][non-primary source needed]
Dynamic analysis tools, which detect races at run time, include ThreadSanitizer and Helgrind.
There are several benchmarks designed to evaluate the effectiveness of data race detection tools.
Race conditions are a common concern in human-computer interaction design and software usability. Intuitively designed human-machine interfaces require that the user receives feedback on their actions that aligns with their expectations, but system-generated actions can interrupt a user's current action or workflow in unexpected ways, such as inadvertently answering or rejecting an incoming call on a smartphone while performing a different task.[citation needed]
In UK railway signalling, a race condition would arise in the carrying out of Rule 55. According to this rule, if a train was stopped on a running line by a signal, the locomotive fireman would walk to the signal box in order to remind the signalman that the train was present. In at least one case, at Winwick in 1934, an accident occurred because the signalman accepted another train before the fireman arrived. Modern signalling practice removes the race condition by making it possible for the driver to instantaneously contact the signal box by radio.
Race conditions are not confined to digital systems; neuroscience has demonstrated, for example, that race conditions can occur in mammalian brains as well.[25][26]
Source: https://en.wikipedia.org/wiki/Race_condition#Computing
.properties is a file extension for files mainly used in Java-related technologies to store the configurable parameters of an application. They can also be used for storing strings for internationalization and localization; these are known as Property Resource Bundles.
Each parameter is stored as a pair of strings, one storing the name of the parameter (called the key), and the other storing the value.
Unlike many popular file formats, there is no RFC for .properties files, and specification documents are not always clear, most likely due to the simplicity of the format.
Each line in a .properties file normally stores a single property. Several formats are possible for each line, including key=value, key = value, key:value, and key value. Single quotes or double quotes are considered part of the string. Trailing space is significant and presumed to be trimmed as required by the consumer.
Comment lines in .properties files are denoted by the number sign (#) or the exclamation mark (!) as the first non-blank character, in which case all remaining text on that line is ignored. The backslash is used to escape a character. An illustrative example of a properties file is provided below (the keys and values are arbitrary).
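```properties
# You are reading a comment in a .properties file.
! The exclamation mark can also be used for comments.
# The key and the value may be separated by =, :, or whitespace.
website = https://en.wikipedia.org/
language : English
# A backslash at the end of a line continues the value on the next line.
message = Welcome to \
          Wikipedia!
# A backslash before a space makes the space part of the key.
key\ with\ spaces = This value can be looked up by the key "key with spaces".
# Non-ASCII characters can be written as Unicode escapes.
tab : \u0009
```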
In the example above:
- website and language are keys, with https://en.wikipedia.org/ and English as their respective values; =, :, or whitespace (with or without surrounding space) may separate a key from its value.
- The backslash at the end of the message line continues the value onto the next line, and leading whitespace on the continuation line is ignored.
- In key\ with\ spaces, each backslash escapes the following space so that the spaces become part of the key itself.
- \u0009 is the Unicode escape for the tab character.
Before Java 9, the encoding of a .properties file was ISO-8859-1, also known as Latin-1. All non-ASCII characters had to be entered using Unicode escape characters, e.g. \uHHHH where HHHH is a hexadecimal index of the character in the Unicode character set. This allows for using .properties files as resource bundles for localization. A non-Latin-1 text file can be converted to a correct .properties file by using the native2ascii tool that is shipped with the JDK or by using a tool, such as po2prop,[1] that manages the transformation from a bilingual localization format into .properties escaping.
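A minimal Java sketch of reading such a file with java.util.Properties (the file name and keys are illustrative); passing a Reader lets the caller choose the encoding explicitly rather than relying on the ISO-8859-1 default of load(InputStream):

```java
import java.io.FileInputStream;
import java.io.InputStreamReader;
import java.io.Reader;
import java.nio.charset.StandardCharsets;
import java.util.Properties;

public class LoadProperties {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        // load(InputStream) assumes ISO-8859-1 with \uHHHH escapes;
        // a Reader makes the chosen encoding explicit.
        try (Reader reader = new InputStreamReader(
                new FileInputStream("config.properties"),
                StandardCharsets.UTF_8)) {
            props.load(reader);
        }
        System.out.println(props.getProperty("website"));
        // A default value is returned when the key is absent.
        System.out.println(props.getProperty("timeout", "30"));
    }
}
```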
An alternative to using Unicode escape characters for non-Latin-1 characters in ISO 8859-1-encoded Java .properties files is to use the JDK's XML properties file format, which is UTF-8 encoded by default and was introduced starting with Java 1.5.[2]
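As a hedged sketch, the pairs from the earlier example might be expressed in that XML format (readable with java.util.Properties#loadFromXML) as:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE properties SYSTEM "http://java.sun.com/dtd/properties.dtd">
<properties>
    <comment>Same key/value pairs in the XML properties format</comment>
    <entry key="website">https://en.wikipedia.org/</entry>
    <entry key="language">English</entry>
</properties>
```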
Another alternative is to create a custom control that provides a custom encoding.[3]
In Java 9 and newer, the default encoding specifically for property resource bundles is UTF-8, and if an invalid UTF-8 byte sequence is encountered it falls back to ISO-8859-1.[4][5]
.properties files can be edited with any text editor, such as Notepad on Windows or Emacs and Vim on Linux systems.
Third-party tools with additional functionality specific to editing .properties files are also available.
Apache Flex uses .properties files as well, but here they are UTF-8 encoded.[6]
In Apache mod_jk's uriworkermap.properties format, an exclamation mark ("!") denotes a negation operator when used as the first non-blank character in a line.[7]
Perl's CPAN contains Config::Properties for interfacing with .properties files.[8]
SAP uses .properties files for localization within its framework SAPUI5 and its open-source variant OpenUI5.[9]
There are many Node.js (JavaScript/TypeScript) options available through npm, the Node.js package manager.[10]
PHP also has many package options available.[11]
Source: https://en.wikipedia.org/wiki/.properties
Regulation of artificial intelligence is the development of public sector policies and laws for promoting and regulating artificial intelligence (AI). It is part of the broader regulation of algorithms.[1][2] The regulatory and policy landscape for AI is an emerging issue in jurisdictions worldwide, including for international organizations without direct enforcement power like the IEEE or the OECD.[3]
Since 2016, numerous AI ethics guidelines have been published in order to maintain social control over the technology.[4] Regulation is deemed necessary to both foster AI innovation and manage associated risks.
Furthermore, organizations deploying AI have a central role to play in creating and implementing trustworthy AI, adhering to established principles, and taking accountability for mitigating risks.[5]
Regulating AI through mechanisms such as review boards can also be seen as social means to approach the AI control problem.[6][7]
According to Stanford University's 2023 AI Index, the annual number of bills mentioning "artificial intelligence" passed in 127 surveyed countries jumped from one in 2016 to 37 in 2022.[8][9]
In 2017, Elon Musk called for regulation of AI development.[10] According to NPR, the Tesla CEO was "clearly not thrilled" to be advocating for government scrutiny that could impact his own industry, but believed the risks of going completely without oversight are high: "Normally the way regulations are set up is when a bunch of bad things happen, there's a public outcry, and after many years a regulatory agency is set up to regulate that industry. It takes forever. That, in the past, has been bad but not something which represented a fundamental risk to the existence of civilization."[10] In response, some politicians expressed skepticism about the wisdom of regulating a technology that is still in development.[11] Responding both to Musk and to February 2017 proposals by European Union lawmakers to regulate AI and robotics, Intel CEO Brian Krzanich argued that AI is in its infancy and that it is too early to regulate the technology.[12] Many tech companies oppose harsh regulation of AI: "[w]hile some of the companies have said they welcome rules around A.I., they have also argued against tough regulations akin to those being created in Europe".[13] Instead of trying to regulate the technology itself, some scholars suggested developing common norms including requirements for the testing and transparency of algorithms, possibly in combination with some form of warranty.[14]
In a 2022 Ipsos survey, attitudes towards AI varied greatly by country; 78% of Chinese citizens, but only 35% of Americans, agreed that "products and services using AI have more benefits than drawbacks".[8] A 2023 Reuters/Ipsos poll found that 61% of Americans agree, and 22% disagree, that AI poses risks to humanity.[15] In a 2023 Fox News poll, 35% of Americans thought it "very important", and an additional 41% thought it "somewhat important", for the federal government to regulate AI, versus 13% responding "not very important" and 8% responding "not at all important".[16][17]
The regulation of artificial intelligence is the development of public sector policies and laws for promoting and regulating AI.[18] Regulation is now generally considered necessary to both encourage AI and manage associated risks.[19][20][21] Public administration and policy considerations generally focus on the technical and economic implications and on trustworthy and human-centered AI systems,[22] although regulation of artificial superintelligences is also considered.[23] The basic approach to regulation focuses on the risks and biases of machine-learning algorithms, at the level of the input data, algorithm testing, and decision model. It also focuses on the explainability of the outputs.[20]
There have been both hard law and soft law proposals to regulate AI.[24] Some legal scholars have noted that hard law approaches to AI regulation have substantial challenges.[25][26] Among the challenges, AI technology is rapidly evolving, leading to a "pacing problem" where traditional laws and regulations often cannot keep up with emerging applications and their associated risks and benefits.[25][26] Similarly, the diversity of AI applications challenges existing regulatory agencies, which often have limited jurisdictional scope.[25] As an alternative, some legal scholars argue that soft law approaches to AI regulation are promising because soft laws can be adapted more flexibly to meet the needs of emerging and evolving AI technology and nascent applications.[25][26] However, soft law approaches often lack substantial enforcement potential.[25][27]
Cason Schmit, Megan Doerr, and Jennifer Wagner proposed the creation of a quasi-governmental regulator by leveraging intellectual property rights (i.e., copyleft licensing) in certain AI objects (i.e., AI models and training datasets) and delegating enforcement rights to a designated enforcement entity.[28] They argue that AI can be licensed under terms that require adherence to specified ethical practices and codes of conduct (e.g., soft law principles).[28]
Prominent youth organizations focused on AI, namely Encode Justice, have also issued comprehensive agendas calling for more stringent AI regulations and public-private partnerships.[29][30]
AI regulation could derive from basic principles. A 2020 Berkman Klein Center for Internet & Society meta-review of existing sets of principles, such as the Asilomar Principles and the Beijing Principles, identified eight such basic principles: privacy, accountability, safety and security, transparency and explainability, fairness and non-discrimination, human control of technology, professional responsibility, and respect for human values.[31] AI law and regulations have been divided into three main topics, namely governance of autonomous intelligence systems, responsibility and accountability for the systems, and privacy and safety issues.[19] A public administration approach sees a relationship between AI law and regulation, the ethics of AI, and 'AI society', defined as workforce substitution and transformation, social acceptance and trust in AI, and the transformation of human to machine interaction.[32] The development of public sector strategies for management and regulation of AI is deemed necessary at the local, national,[33] and international levels[34] and in a variety of fields, from public service management[35] and accountability[36] to law enforcement,[34][37] healthcare (especially the concept of a Human Guarantee),[38][39][40][41][42] the financial sector,[33] robotics,[43][44] autonomous vehicles,[43] the military[45] and national security,[46] and international law.[47][48]
Henry Kissinger, Eric Schmidt, and Daniel Huttenlocher published a joint statement in November 2021 entitled "Being Human in an Age of AI", calling for a government commission to regulate AI.[49]
Regulation of AI can be seen as positive social means to manage the AI control problem (the need to ensure long-term beneficial AI), with other social responses such as doing nothing or banning being seen as impractical, and approaches such as enhancing human capabilities through transhumanism techniques like brain-computer interfaces being seen as potentially complementary.[7][50] Regulation of research into artificial general intelligence (AGI) focuses on the role of review boards, from university or corporation to international levels, and on encouraging research into AI safety,[50] together with the possibility of differential intellectual progress (prioritizing protective strategies over risky strategies in AI development) or conducting international mass surveillance to perform AGI arms control.[7] For instance, the 'AGI Nanny' is a proposed strategy, potentially under the control of humanity, for preventing the creation of a dangerous superintelligence as well as for addressing other major threats to human well-being, such as subversion of the global financial system, until a true superintelligence can be safely created. It entails the creation of a smarter-than-human, but not superintelligent, AGI system connected to a large surveillance network, with the goal of monitoring humanity and protecting it from danger.[7] Regulation of conscious, ethically aware AGIs focuses on how to integrate them with existing human society and can be divided into considerations of their legal standing and of their moral rights.[7] Regulation of AI has been seen as restrictive, with a risk of preventing the development of AGI.[43]
The development of a global governance board to regulate AI development was suggested at least as early as 2017.[52] In December 2018, Canada and France announced plans for a G7-backed International Panel on Artificial Intelligence, modeled on the Intergovernmental Panel on Climate Change, to study the global effects of AI on people and economies and to steer AI development.[53] In 2019, the Panel was renamed the Global Partnership on AI.[54][55]
The Global Partnership on Artificial Intelligence (GPAI) was launched in June 2020, stating a need for AI to be developed in accordance with human rights and democratic values, to ensure public confidence and trust in the technology, as outlined in the OECD Principles on Artificial Intelligence (2019).[56] The 15 founding members of the Global Partnership on Artificial Intelligence are Australia, Canada, the European Union, France, Germany, India, Italy, Japan, the Republic of Korea, Mexico, New Zealand, Singapore, Slovenia, the United States and the UK. As of 2023, the GPAI has 29 members.[57] The GPAI Secretariat is hosted by the OECD in Paris, France. GPAI's mandate covers four themes, two of which are supported by the International Centre of Expertise in Montréal for the Advancement of Artificial Intelligence, namely, responsible AI and data governance. A corresponding centre of excellence in Paris will support the other two themes on the future of work, and on innovation and commercialization. GPAI also investigated how AI can be leveraged to respond to the COVID-19 pandemic.[56]
The OECD AI Principles[58] were adopted in May 2019, and the G20 AI Principles in June 2019.[55][59][60] In September 2019 the World Economic Forum issued ten 'AI Government Procurement Guidelines'.[61] In February 2020, the European Union published its draft strategy paper for promoting and regulating AI.[34]
At the United Nations (UN), several entities have begun to promote and discuss aspects of AI regulation and policy, including the UNICRI Centre for AI and Robotics.[46] In partnership with INTERPOL, UNICRI's Centre issued the report AI and Robotics for Law Enforcement in April 2019[62] and the follow-up report Towards Responsible AI Innovation in May 2020.[37] At the 40th session of UNESCO's General Conference in November 2019, the organization commenced a two-year process to achieve a "global standard-setting instrument on ethics of artificial intelligence". In pursuit of this goal, UNESCO forums and conferences on AI were held to gather stakeholder views. A draft text of a Recommendation on the Ethics of AI of the UNESCO Ad Hoc Expert Group was issued in September 2020 and included a call for legislative gaps to be filled.[63] UNESCO tabled the international instrument on the ethics of AI for adoption at its General Conference in November 2021;[56] this was subsequently adopted.[64] While the UN is making progress with the global management of AI, its institutional and legal capability to manage the AGI existential risk is more limited.[65]
An initiative of the International Telecommunication Union (ITU) in partnership with 40 UN sister agencies, AI for Good is a global platform which aims to identify practical applications of AI to advance the United Nations Sustainable Development Goals and scale those solutions for global impact. It is an action-oriented, global and inclusive United Nations platform fostering development of AI to positively impact health, climate, gender, inclusive prosperity, sustainable infrastructure, and other global development priorities.[citation needed]
Recent research has indicated that countries will also begin to use artificial intelligence as a tool for national cyberdefense. AI is a new factor in the cyber arms industry, as it can be used for defense purposes. Therefore, academics urge that nations should establish regulations for the use of AI, similar to how there are regulations for other military industries.[66]
The regulatory and policy landscape for AI is an emerging issue in regional and national jurisdictions globally, for example in the European Union[68] and Russia.[69] Since early 2016, many national, regional and international authorities have begun adopting strategies, action plans and policy papers on AI.[70][71] These documents cover a wide range of topics such as regulation and governance, as well as industrial strategy, research, talent and infrastructure.[22][72]
Different countries have approached the problem in different ways. Regarding the three largest economies, it has been said that "the United States is following a market-driven approach, China is advancing a state-driven approach, and the EU is pursuing a rights-driven approach."[73]
In October 2023, the Australian Computer Society, Business Council of Australia, Australian Chamber of Commerce and Industry, Ai Group (aka Australian Industry Group), Council of Small Business Organisations Australia, and Tech Council of Australia jointly published an open letter calling for a national approach to AI strategy.[74] The letter backs the federal government establishing a whole-of-government AI taskforce.[74]
Additionally, in August 2024, the Australian government set a Voluntary AI Safety Standard, which was followed by a Proposals Paper later in September of that year, outlining potential guardrails for high-risk AI that could become mandatory. These guardrails include areas such as model testing, transparency, human oversight, and record-keeping, all of which may be enforced through new legislation. As noted, however, Australia has not yet passed AI-specific laws, but existing statutes such as the Privacy Act 1988, Corporations Act 2001, and Online Safety Act 2021 all apply to AI use.[75]
In September 2024, a bill was also introduced that would grant the Australian Communications and Media Authority powers to regulate AI-generated misinformation. Several agencies, including the ACMA, ACCC, and Office of the Australian Information Commissioner, are expected to play roles in future AI regulation.[75]
On September 30, 2021, the Brazilian Chamber of Deputies approved the Brazilian Legal Framework for Artificial Intelligence, Marco Legal da Inteligência Artificial, in regulatory efforts for the development and usage of AI technologies and to further stimulate research and innovation in AI solutions aimed at ethics, culture, justice, fairness, and accountability. This 10-article bill outlines objectives including missions to contribute to the elaboration of ethical principles, promote sustained investments in research, and remove barriers to innovation. Specifically, in article 4, the bill emphasizes the avoidance of discriminatory AI solutions, plurality, and respect for human rights. Furthermore, this act emphasizes the importance of the equality principle in deliberate decision-making algorithms, especially for highly diverse and multiethnic societies like that of Brazil.
When the bill was first released to the public, it faced substantial criticism over critical provisions. The underlying issue is that this bill fails to thoroughly and carefully address accountability, transparency, and inclusivity principles. Article VI establishes subjective liability, meaning any individual who is damaged by an AI system and wishes to receive compensation must specify the stakeholder and prove that there was a mistake in the machine's life cycle. Scholars emphasize that it is legally unsound to make an individual responsible for proving algorithmic errors, given the high degree of autonomy, unpredictability, and complexity of AI systems. This also drew attention to the currently occurring issues with face recognition systems in Brazil leading to unjust arrests by the police, which would imply that when this bill is adopted, individuals would have to prove and justify these machine errors.
The main controversy of this draft bill was directed at three proposed principles. First, the non-discrimination principle suggests that AI must be developed and used in a way that merely mitigates the possibility of abusive and discriminatory practices. Second, the pursuit-of-neutrality principle lists recommendations for stakeholders to mitigate biases, but with no obligation to achieve this goal. Lastly, the transparency principle states that a system's transparency is only necessary when there is a high risk of violating fundamental rights. As is easily observed, the Brazilian Legal Framework for Artificial Intelligence lacks binding and obligatory clauses and is instead filled with relaxed guidelines. In fact, experts emphasize that this bill may make accountability for discriminatory AI biases even harder to achieve. Compared to the EU's proposal of extensive risk-based regulations, the Brazilian bill has 10 articles proposing vague and generic recommendations.
Compared to the multistakeholder participation approach taken previously in the 2000s when drafting the Brazilian Internet Bill of Rights, Marco Civil da Internet, the Brazilian bill is assessed to significantly lack perspective. Multistakeholderism, more commonly referred to as multistakeholder governance, is defined as the practice of bringing multiple stakeholders to participate in dialogue, decision-making, and implementation of responses to jointly perceived problems. In the context of regulatory AI, this multistakeholder perspective captures the trade-offs and varying perspectives of different stakeholders with specific interests, which helps maintain transparency and broader efficacy. On the contrary, the legislative proposal for AI regulation did not follow a similar multistakeholder approach.
Future steps may include expanding upon the multistakeholder perspective. There has been a growing concern about the inapplicability of the framework of the bill, which highlights that a one-size-fits-all solution may not be suitable for the regulation of AI, and calls for subjective and adaptive provisions.
The Pan-Canadian Artificial Intelligence Strategy (2017) is supported by federal funding of Can$125 million with the objectives of increasing the number of outstanding AI researchers and skilled graduates in Canada, establishing nodes of scientific excellence at the three major AI centres, developing 'global thought leadership' on the economic, ethical, policy and legal implications of AI advances and supporting a national research community working on AI.[56] The Canada CIFAR AI Chairs Program is the cornerstone of the strategy. It benefits from funding of Can$86.5 million over five years to attract and retain world-renowned AI researchers.[56] The federal government appointed an Advisory Council on AI in May 2019 with a focus on examining how to build on Canada's strengths to ensure that AI advancements reflect Canadian values, such as human rights, transparency and openness. The Advisory Council on AI has established a working group on extracting commercial value from Canadian-owned AI and data analytics.[56] In 2020, the federal government and Government of Quebec announced the opening of the International Centre of Expertise in Montréal for the Advancement of Artificial Intelligence, which will advance the cause of responsible development of AI.[56] In June 2022, the government of Canada started a second phase of the Pan-Canadian Artificial Intelligence Strategy.[76] In November 2022, Canada introduced the Digital Charter Implementation Act (Bill C-27), which proposes three acts that have been described as a holistic package of legislation for trust and privacy: the Consumer Privacy Protection Act, the Personal Information and Data Protection Tribunal Act, and the Artificial Intelligence & Data Act (AIDA).[77][78]
In September 2023, the Canadian government introduced a Voluntary Code of Conduct for the Responsible Development and Management of Advanced Generative AI Systems. The code, based initially on public consultations, seeks to provide interim guidance to Canadian companies on responsible AI practices. Ultimately, it is intended to serve as a stopgap until formal legislation, such as the Artificial Intelligence and Data Act (AIDA), is enacted.[79][80] Moreover, in November 2024, the Canadian government announced the creation of the Canadian Artificial Intelligence Safety Institute (CAISI) as part of a 2.4 billion CAD federal AI investment package. This includes 2 billion CAD to support a new AI Sovereign Computing Strategy and the AI Computing Access Fund, which aims to bolster Canada's advanced computing infrastructure. Further funding includes 700 million CAD for domestic AI development, 1 billion CAD for public supercomputing infrastructure, and 300 million CAD to assist companies in accessing new AI resources.[80]
In Morocco, a new legislative proposal has been put forward by a coalition of political parties in Parliament to establish the National Agency for Artificial Intelligence (AI). This agency is intended to regulate AI technologies, enhance collaboration with international entities in the field, and increase public awareness of both the possibilities and risks associated with AI.[81]
In recent years, Morocco has made efforts to advance its use of artificial intelligence in the legal sector, particularly through AI tools that assist with judicial prediction and document analysis, helping to streamline case law research and support legal practitioners with more complex tasks. Alongside these efforts to establish a national AI agency, AI is being gradually introduced into legislative and judicial processes in Morocco, with ongoing discussions emphasizing the benefits as well as the potential risks of these technologies.[82]
Generally speaking, Morocco's broader digital policy includes robust data governance measures, including the 2009 Personal Data Protection Law and the 2020 Cybersecurity Law, which establish requirements in areas such as privacy, breach notification, and data localization.[82] As of 2024, additional decrees have also expanded cybersecurity standards for cloud infrastructure and data audits within the nation. And while general data localization is not mandated, sensitive government and critical infrastructure data must be stored domestically. Oversight is led by the National Commission for the Protection of Personal Data (CNDP) and the General Directorate of Information Systems Security (DGSSI), though public enforcement actions in the country remain limited.[82]
The regulation of AI in China is mainly governed by the State Council of the People's Republic of China's July 8, 2017 "A Next Generation Artificial Intelligence Development Plan" (State Council Document No. 35), in which the Central Committee of the Chinese Communist Party and the State Council of the PRC urged the governing bodies of China to promote the development of AI up to 2030. Regulation of the issues of ethical and legal support for the development of AI is accelerating, and policy ensures state control of Chinese companies and over valuable data, including storage of data on Chinese users within the country and the mandatory use of the People's Republic of China's national standards for AI, including over big data, cloud computing, and industrial software.[83][84][85] In 2021, China published ethical guidelines for the use of AI in China which state that researchers must ensure that AI abides by shared human values, is always under human control, and is not endangering public safety.[86] In 2023, China introduced Interim Measures for the Management of Generative AI Services.[87]
On August 15, 2023, China's first Generative AI Measures officially came into force, becoming one of the first comprehensive national regulatory frameworks for generative AI. The measures apply to all providers offering generative AI services to the Chinese public, including foreign entities, ultimately setting the rules related to data protection, transparency, and algorithmic accountability.[88] In parallel, earlier regulations such as the Chinese government's Deep Synthesis Provisions (effective January 2023) and the Algorithm Recommendation Provisions (effective March 2022) continue to shape China's governance of AI-driven systems, including requirements for watermarking and algorithm filing with the Cyberspace Administration of China (CAC).[89] Additionally, in October 2023, China also implemented a set of Ethics Review Measures for science and technology, mandating certain ethical assessments of AI projects deemed socially sensitive or capable of negatively influencing public opinion.[88] As of mid-2024, over 1,400 AI algorithms had already been registered under the CAC's algorithm filing regime, which includes disclosure requirements and penalties for noncompliance.[88] This layered approach reflects a broader policy process shaped by not only central directives but also academic input, civil society concerns, and public discourse.[89]
The Council of Europe (CoE) is an international organization that promotes human rights, democracy and the rule of law. It comprises 46 member states, including all 29 signatories of the European Union's 2018 Declaration of Cooperation on Artificial Intelligence. The CoE has created a common legal space in which the members have a legal obligation to guarantee rights as set out in the European Convention on Human Rights. Specifically in relation to AI, "The Council of Europe's aim is to identify intersecting areas between AI and our standards on human rights, democracy and rule of law, and to develop relevant standard setting or capacity-building solutions". The large number of relevant documents identified by the CoE include guidelines, charters, papers, reports and strategies.[90] The authoring bodies of these AI regulation documents are not confined to one sector of society and include organizations, companies, bodies and nation-states.[63]
In 2019, the Council of Europe initiated a process to assess the need for legally binding regulation of AI, focusing specifically on its implications for human rights and democratic values. Negotiations on a treaty began in September 2022, involving the 46 member states of the Council of Europe, together with Argentina, Australia, Canada, Costa Rica, the Holy See, Israel, Japan, Mexico, Peru, the United States of America, and Uruguay, as well as the European Union. On 17 May 2024, the "Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law" was adopted. It was opened for signature on 5 September 2024. Although developed by a European organisation, the treaty is open for accession by states from other parts of the world. The first ten signatories were: Andorra, Georgia, Iceland, Norway, Moldova, San Marino, the United Kingdom, Israel, the United States, and the European Union.[91][92]
The EU is one of the largest jurisdictions in the world and plays an active role in the global regulation of digital technology through the GDPR,[93] the Digital Services Act, and the Digital Markets Act.[94][95] For AI in particular, the Artificial Intelligence Act was regarded in 2023 as the most far-reaching regulation of AI worldwide.[96][97]
Most European Union (EU) countries have their own national strategies towards regulating AI, but these are largely convergent.[63] The European Union is guided by a European Strategy on Artificial Intelligence,[98] supported by a High-Level Expert Group on Artificial Intelligence.[99][100] In April 2019, the European Commission published its Ethics Guidelines for Trustworthy Artificial Intelligence (AI),[101] following this with its Policy and investment recommendations for trustworthy Artificial Intelligence in June 2019.[102] The EU Commission's High-Level Expert Group on Artificial Intelligence carries out work on trustworthy AI, and the Commission has issued reports on the safety and liability aspects of AI and on the ethics of automated vehicles. In 2020, the EU Commission sought views on a proposal for AI-specific legislation, and that process is ongoing.[63]
On February 2, 2020, the European Commission published its White Paper on Artificial Intelligence – A European approach to excellence and trust.[103][104] The White Paper consists of two main building blocks, an 'ecosystem of excellence' and an 'ecosystem of trust'. The 'ecosystem of trust' outlines the EU's approach for a regulatory framework for AI. In its proposed approach, the Commission distinguishes AI applications based on whether they are 'high-risk' or not. Only high-risk AI applications should be in the scope of a future EU regulatory framework. An AI application is considered high-risk if it operates in a risky sector (such as healthcare, transport or energy) and is "used in such a manner that significant risks are likely to arise". For high-risk AI applications, the requirements mainly concern "training data", "data and record-keeping", "information to be provided", "robustness and accuracy", and "human oversight". There are also requirements specific to certain usages such as remote biometric identification. AI applications that do not qualify as 'high-risk' could be governed by a voluntary labeling scheme. As regards compliance and enforcement, the Commission considers prior conformity assessments which could include 'procedures for testing, inspection or certification' and/or 'checks of the algorithms and of the data sets used in the development phase'. A European governance structure on AI in the form of a framework for cooperation of national competent authorities could facilitate the implementation of the regulatory framework.[105]
A January 2021 draft was leaked online on April 14, 2021,[106] before the Commission presented its official "Proposal for a Regulation laying down harmonised rules on artificial intelligence" a week later.[107] Shortly after, the Artificial Intelligence Act (also known as the AI Act) was formally proposed on this basis.[108] This proposal includes a refinement of the 2020 risk-based approach with, this time, four risk categories: "minimal", "limited", "high" and "unacceptable".[109] The proposal has been severely critiqued in the public debate. Academics have expressed concerns about various unclear elements in the proposal – such as the broad definition of what constitutes AI – and feared unintended legal implications, especially for vulnerable groups such as patients and migrants.[110][111] The risk category "general-purpose AI" was added to the AI Act to account for versatile models like ChatGPT, which did not fit the application-based regulation framework.[112] Unlike the other risk categories, general-purpose AI models can be regulated based on their capabilities, not just their uses. Weaker general-purpose AI models are subject to transparency requirements, while those considered to pose "systemic risks" (notably those trained using computational capabilities exceeding 10^25 FLOPS) must also undergo a thorough evaluation process.[113] A subsequent version of the AI Act was finally adopted in May 2024.[114] The AI Act will be progressively enforced.[115] Recognition of emotions and real-time remote biometric identification will be prohibited, with some exemptions, such as for law enforcement.[116]
The European Union's AI Act has created a regulatory framework with significant implications globally. This legislation introduces a risk-based approach to categorizing AI systems, focusing on high-risk applications like healthcare, education, and public safety.[117]It requires organizations to ensure transparency, data governance, and human oversight in their AI solutions. While this aims to foster ethical AI use, the stringent requirements could increase compliance costs and delay technology deployment, impacting innovation-driven industries.[citation needed]
Observers have expressed concerns about the multiplication of legislative proposals under the von der Leyen Commission. The speed of the legislative initiatives is partly driven by the political ambitions of the EU and could put at risk the digital rights of European citizens, including rights to privacy,[118] especially in the face of uncertain guarantees of data protection through cyber security.[100] Among the stated guiding principles of the various legislative proposals in the area of AI under the von der Leyen Commission are the objectives of strategic autonomy[119] and the concept of digital sovereignty.[120] On May 29, 2024, the European Court of Auditors published a report stating that EU measures were not well coordinated with those of EU countries, that the monitoring of investments was not systematic, and that stronger governance was needed.[121]
In November 2020,[122] DIN, DKE and the German Federal Ministry for Economic Affairs and Energy published the first edition of the "German Standardization Roadmap for Artificial Intelligence" (NRM KI) and presented it to the public at the Digital Summit of the Federal Government of Germany.[123] NRM KI describes requirements for future regulations and standards in the context of AI. The implementation of its recommendations for action is intended to strengthen the German economy and science in the international competition in the field of artificial intelligence and to create innovation-friendly conditions for this emerging technology. The first edition is a 200-page document written by 300 experts. The second edition of the NRM KI was published to coincide with the German government's Digital Summit on December 9, 2022.[124] DIN coordinated more than 570 participating experts from a wide range of fields from science, industry, civil society and the public sector. The second edition is a 450-page document.
On the one hand, NRM KI covers the focus topics in terms of applications (e.g. medicine, mobility, energy & environment, financial services, industrial automation) and fundamental issues (e.g. AI classification, security, certifiability, socio-technical systems, ethics).[124]On the other hand, it provides an overview of the central terms in the field of AI and its environment across a wide range of interest groups and information sources. In total, the document covers 116 standardisation needs and provides six central recommendations for action.[125]
On 30 October 2023, members of the G7 subscribed to eleven guiding principles for the design, production and implementation of advanced artificial intelligence systems, as well as a voluntary Code of Conduct for artificial intelligence developers, in the context of the Hiroshima Process.[126]
The agreement was applauded by Ursula von der Leyen, who found in it the principles of the AI Directive, then being finalized.
New guidelines also aim to establish a coordinated global effort towards the responsible development and use of advanced AI systems. While non-binding, the G7 governments encourage organizations to voluntarily adopt the guidelines, which emphasize a risk-based approach across the AI lifecycle—from pre-deployment risk assessment to post-deployment incident reporting and mitigation.[127]
The AIP&CoC also highlight the importance of AI system security, internal adversarial testing ('red teaming'), public transparency about capabilities and limitations, and governance procedures that include privacy safeguards and content authentication tools. The guidelines additionally promote AI innovation directed at solving global challenges such as climate change and public health, and call for advancing international technical standards.[127]
Looking ahead, the G7 intends to further refine its principles and Code of Conduct in collaboration with other organizations like the OECD, GPAI, and broader stakeholders. Areas of further development include clearer AI terminology (e.g., "advanced AI systems"), the setting of risk benchmarks, and mechanisms for cross-border information sharing on potential AI risks. Despite general alignment on AI safety, analysts have noted that differing regulatory philosophies, such as the EU's prescriptive AI Act versus the U.S.'s sector-specific approach, may challenge global regulatory harmonization.[128]
On October 30, 2022, pursuant to government resolution 212 of August 2021, the Israeli Ministry of Innovation, Science and Technology released its "Principles of Policy, Regulation and Ethics in AI" white paper for public consultation.[129] By December 2023, the Ministry of Innovation and the Ministry of Justice had published a joint AI regulation and ethics policy paper outlining several AI ethical principles and a set of recommendations, including opting for sector-based regulation, a risk-based approach, a preference for "soft" regulatory tools, and maintaining consistency with existing global regulatory approaches to AI.[130]
In December 2023, Israel unveiled its first comprehensive national AI policy, developed jointly through ministerial and stakeholder consultation. The policy outlines ethical principles aligned with current OECD guidelines and recommends a sector-based, risk-driven regulatory framework focusing on areas such as transparency and accountability.[131] It also proposes the creation of a national AI Policy Coordination Center to support regulators and further develop the tools necessary for responsible AI deployment. In addition to domestic policy development, Israel, alongside 56 other nations, signed the world's first binding international treaty on artificial intelligence in March 2024. The treaty, led by the Council of Europe, obliges signatories to ensure that AI systems uphold democratic values, human rights, and the rule of law.[132]
In October 2023, the Italian privacy authority approved a regulation that provides three principles for therapeutic decisions taken by automated systems: transparency of decision-making processes, human supervision of automated decisions and algorithmic non-discrimination.[133]
In March 2024, the President of the Italian Data Protection Authority reaffirmed the agency's readiness to implement the European Union's newly introduced Artificial Intelligence Act, praising its framework of institutional competence and independence.[134] Italy has continued to develop guidance on AI applications through existing legal frameworks, including recent innovations in areas such as facial recognition for law enforcement, AI in healthcare, deepfakes, and smart assistants.[135] The Italian government's National AI Strategy (2022–2024) emphasizes responsible innovation and outlines goals for talent development, public and private sector adoption, and regulatory clarity, particularly in coordination with EU-level initiatives.[134] While Italy has not enacted standalone AI legislation, courts and regulators have begun interpreting existing laws to address transparency, non-discrimination, and human oversight in algorithmic decision-making.
As of July 2023, no AI-specific legislation exists in New Zealand, but AI usage is regulated by existing laws, including the Privacy Act, the Human Rights Act, the Fair Trading Act and the Harmful Digital Communications Act.[136]
In 2020, the New Zealand Government sponsored a World Economic Forum pilot project titled "Reimagining Regulation for the Age of AI", aimed at creating regulatory frameworks around AI.[137] The same year, the Privacy Act was updated to regulate the use of New Zealanders' personal information in AI.[138] In 2023, the Privacy Commissioner released guidance on using AI in accordance with information privacy principles.[139] In February 2024, the Attorney-General and Technology Minister announced the formation of a Parliamentary cross-party AI caucus, and that a framework for the Government's use of AI was being developed. She also announced that no extra regulation was planned at that stage.[140]
In 2023, a bill was filed in the Philippine House of Representatives proposing the establishment of an Artificial Intelligence Development Authority (AIDA) to oversee the development and research of artificial intelligence. AIDA was also proposed to serve as a watchdog against crimes committed using AI.[141]
In 2024, the Commission on Elections also considered banning the use of AI and deepfakes in campaigning, looking to implement regulations that would apply as early as the 2025 general elections.[142]
In 2018, the Spanish Ministry of Science, Innovation and Universities approved an R&D Strategy on Artificial Intelligence.[143]
With the formation of the second government of Pedro Sánchez in January 2020, the areas related to new technologies that, since 2018, had been in the Ministry of Economy were strengthened. Thus, in 2020 the Secretariat of State for Digitalization and Artificial Intelligence (SEDIA) was created.[144] From this higher body, following the recommendations made by the R&D Strategy on Artificial Intelligence of 2018,[145] the National Artificial Intelligence Strategy (2020) was developed, which already provided for actions concerning the governance of artificial intelligence and the ethical standards that should govern its use. This project was also included within the Recovery, Transformation and Resilience Plan (2021).
During 2021,[144] the Government revealed that these ideas would be developed through a new government agency, and the General State Budget for 2022 authorized its creation and allocated five million euros for its development.[146]
The Council of Ministers, at its meeting on 13 September 2022, began the process for choosing the headquarters of AESIA, the new agency.[147][148] Sixteen Spanish provinces presented candidatures, with the Government opting for A Coruña, which proposed the La Terraza building.[149]
Switzerland currently has no specific AI legislation, but on 12 February 2025 the Federal Council announced plans to ratify the Council of Europe's AI Convention and incorporate it into Swiss law. A draft bill and implementation plan are to be prepared by the end of 2026. The approach includes sector-specific regulation, limited cross-sector rules (such as data protection), and non-binding measures such as industry agreements. The goals are to support innovation, protect fundamental rights, and build public trust in AI.[152]
The UK supported the application and development of AI in business via the Digital Economy Strategy 2015–2018,[153] introduced at the beginning of 2015 by Innovate UK as part of the UK Digital Strategy.[153] In the public sector, the Department for Digital, Culture, Media and Sport advised on data ethics and the Alan Turing Institute provided guidance on responsible design and implementation of AI systems.[154][155] In terms of cyber security, in 2020 the National Cyber Security Centre issued guidance on 'Intelligent Security Tools'.[46][156] The following year, the UK published its 10-year National AI Strategy,[157] which describes actions to assess long-term AI risks, including AGI-related catastrophic risks.[158]
In March 2023, the UK released the white paper A pro-innovation approach to AI regulation.[159] This white paper presents general AI principles, but leaves significant flexibility to existing regulators in how they adapt these principles to specific areas such as transport or financial markets.[160] In November 2023, the UK hosted the first AI safety summit, with the prime minister Rishi Sunak aiming to position the UK as a leader in AI safety regulation.[161][162] During the summit, the UK created an AI Safety Institute, as an evolution of the Frontier AI Taskforce led by Ian Hogarth. The institute was notably assigned the responsibility of advancing the safety evaluations of the world's most advanced AI models, also called frontier AI models.[163]
The UK government indicated its reluctance to legislate early, arguing that doing so may reduce the sector's growth and that laws might be rendered obsolete by further technological progress.[164]
Discussions on regulation of AI in the United States have included topics such as the timeliness of regulating AI, the nature of the federal regulatory framework to govern and promote AI, including what agency should lead, the regulatory and governing powers of that agency, and how to update regulations in the face of rapidly changing technology, as well as the roles of state governments and courts.[165]
As early as 2016, the Obama administration had begun to focus on the risks and regulations for artificial intelligence. In a report titled Preparing For the Future of Artificial Intelligence,[166] the National Science and Technology Council set a precedent to allow researchers to continue to develop new AI technologies with few restrictions. It is stated within the report that "the approach to regulation of AI-enabled products to protect public safety should be informed by assessment of the aspects of risk...".[167] These risks would be the principal reason to create any form of regulation, granted that any existing regulation would not apply to AI technology.
The first main report was the National Strategic Research and Development Plan for Artificial Intelligence.[168] On August 13, 2018, Section 1051 of the Fiscal Year 2019 John S. McCain National Defense Authorization Act (P.L. 115-232) established the National Security Commission on Artificial Intelligence "to consider the methods and means necessary to advance the development of artificial intelligence, machine learning, and associated technologies to comprehensively address the national security and defense needs of the United States."[169] Steering on regulating security-related AI is provided by the National Security Commission on Artificial Intelligence.[170] The Artificial Intelligence Initiative Act (S.1558) is a proposed bill that would establish a federal initiative designed to accelerate research and development on AI for, inter alia, the economic and national security of the United States.[171][172]
On January 7, 2019, following an Executive Order on Maintaining American Leadership in Artificial Intelligence,[173] the White House's Office of Science and Technology Policy released a draft Guidance for Regulation of Artificial Intelligence Applications,[174] which includes ten principles for United States agencies when deciding whether and how to regulate AI.[175] In response, the National Institute of Standards and Technology released a position paper,[176] and the Defense Innovation Board issued recommendations on the ethical use of AI.[45] A year later, the administration called for comments on regulation in another draft of its Guidance for Regulation of Artificial Intelligence Applications.[177]
Other specific agencies working on the regulation of AI include the Food and Drug Administration,[39] which has created pathways to regulate the incorporation of AI in medical imaging.[38] The National Science and Technology Council also published the National Artificial Intelligence Research and Development Strategic Plan,[178] which received public scrutiny and recommendations to further improve it towards enabling Trustworthy AI.[179]
In March 2021, the National Security Commission on Artificial Intelligence released their final report.[180]In the report, they stated that "Advances in AI, including the mastery of more general AI capabilities along one or more dimensions, will likely provide new capabilities and applications. Some of these advances could lead to inflection points or leaps in capabilities. Such advances may also introduce new concerns and risks and the need for new policies, recommendations, and technical advances to assure that systems are aligned with goals and values, including safety, robustness and trustworthiness. The US should monitor advances in AI and make necessary investments in technology and give attention to policy so as to ensure that AI systems and their uses align with our goals and values."
In June 2022, Senators Rob Portman and Gary Peters introduced the Global Catastrophic Risk Mitigation Act. The bipartisan bill "would also help counter the risk of artificial intelligence... from being abused in ways that may pose a catastrophic risk".[181][182] On October 4, 2022, President Joe Biden unveiled a new AI Bill of Rights,[183] which outlines five protections Americans should have in the AI age: 1. Safe and Effective Systems, 2. Algorithmic Discrimination Protection, 3. Data Privacy, 4. Notice and Explanation, and 5. Human Alternatives, Consideration, and Fallback. The Bill had been introduced in October 2021 by the Office of Science and Technology Policy (OSTP), a US government department that advises the president on science and technology.[184]
The New York City Bias Audit Law (Local Law 144[185]) was enacted by the NYC Council in November 2021. Originally due to come into effect on 1 January 2023, the enforcement date for Local Law 144 was pushed back due to the high volume of comments received during the public hearing on the Department of Consumer and Worker Protection's (DCWP) proposed rules to clarify the requirements of the legislation. It eventually became effective on July 5, 2023.[186] From this date, companies operating and hiring in New York City are prohibited from using automated tools to hire candidates or promote employees, unless the tools have been independently audited for bias.
In July 2023, the Biden–Harris Administration secured voluntary commitments from seven companies – Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI – to manage the risks associated with AI. The companies committed to ensure AI products undergo both internal and external security testing before public release; to share information on the management of AI risks with the industry, governments, civil society, and academia; to prioritize cybersecurity and protect proprietary AI system components; to develop mechanisms to inform users when content is AI-generated, such as watermarking; to publicly report on their AI systems' capabilities, limitations, and areas of use; to prioritize research on societal risks posed by AI, including bias, discrimination, and privacy concerns; and to develop AI systems to address societal challenges, ranging from cancer prevention to climate change mitigation. In September 2023, eight additional companies – Adobe, Cohere, IBM, Nvidia, Palantir, Salesforce, Scale AI, and Stability AI – subscribed to these voluntary commitments.[187][188]
In October 2023, the Biden administration signaled that it would release an executive order leveraging the federal government's purchasing power to shape AI regulations, hinting at a proactive governmental stance in regulating AI technologies.[189] On October 30, 2023, President Biden released this Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence. The Executive Order addresses a variety of issues, such as standards for critical infrastructure, AI-enhanced cybersecurity, and federally funded biological synthesis projects.[190]
The Executive Order provides the authority to various agencies and departments of the US government, including the Energy and Defense departments, to apply existing consumer protection laws to AI development.[191]
The Executive Order builds on the Administration's earlier agreements with AI companies to instate new initiatives to "red-team" or stress-test AI dual-use foundation models, especially those that have the potential to pose security risks, with data and results shared with the federal government.
The Executive Order also recognizes AI's social challenges, and calls for companies building AI dual-use foundation models to be wary of these societal problems. For example, the Executive Order states that AI should not "worsen job quality", and should not "cause labor-force disruptions". Additionally, Biden's Executive Order mandates that AI must "advance equity and civil rights", and cannot disadvantage marginalized groups.[192]It also called for foundation models to include "watermarks" to help the public discern between human and AI-generated content, which has raised controversy and criticism from deepfake detection researchers.[193]
In February 2024, Senator Scott Wiener introduced the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act to the California legislature. The bill drew heavily on the Biden executive order.[194] It had the goal of reducing catastrophic risks by mandating safety tests for the most powerful AI models. If passed, the bill would also have established a publicly funded cloud computing cluster in California.[195] On September 29, 2024, Governor Gavin Newsom vetoed the bill. It is considered unlikely that the legislature will override the governor's veto with a two-thirds vote from both houses.[196]
On March 21, 2024, the State of Tennessee enacted legislation called the ELVIS Act, aimed specifically at audio deepfakes and voice cloning.[197] This was the first legislation enacted in the nation aimed at regulating AI simulation of image, voice and likeness.[198] The bill passed unanimously in the Tennessee House of Representatives and Senate.[199] Its supporters hoped that its success would inspire similar actions in other states, contributing to a unified approach to copyright and privacy in the digital age, and reinforcing the importance of safeguarding artists' rights against unauthorized use of their voices and likenesses.[200][201]
On March 13, 2024, Utah Governor Spencer Cox signed S.B. 149, the "Artificial Intelligence Policy Act", which went into effect on May 1, 2024. It establishes liability, notably for companies that do not disclose their use of generative AI when required by state consumer protection laws, or when users commit criminal offenses using generative AI. It also creates the Office of Artificial Intelligence Policy and the Artificial Intelligence Learning Laboratory Program.[202][203]
In January 2025, President Trump repealed the Biden executive order. This action reflects President Trump's preference for deregulating AI in support of innovation over safeguarding against risks.[204]
In early 2025, Congress began advancing bipartisan legislation targeting AI-generated deepfakes, including the "TAKE IT DOWN Act", which would prohibit nonconsensual disclosure of AI-generated "intimate imagery" and require all platforms to remove such content. Lawmakers also reintroduced the CREATE AI Act to codify the National AI Research Resource (NAIRR), which aimed to expand public access to computing resources, datasets, and AI testing environments. The Trump administration also signed Executive Order #14179 to initiate a national "AI Action Plan", focusing on securing U.S. global AI dominance, under which the White House can seek public input on AI safety and standards. At the state level, new laws have been passed or proposed to regulate AI-generated impersonations, chatbot disclosures, and even synthetic political content. Meanwhile, the Department of Commerce expanded export controls on AI technology, and NIST published an updated set of guidance on AI cybersecurity risks.[205]
Legal questions related to lethal autonomous weapons systems (LAWS), in particular compliance with the laws of armed conflict, have been under discussion at the United Nations since 2013, within the context of the Convention on Certain Conventional Weapons.[206] Notably, informal meetings of experts took place in 2014, 2015 and 2016, and a Group of Governmental Experts (GGE) was appointed to further deliberate on the issue in 2016. A set of guiding principles on LAWS affirmed by the GGE was adopted in 2018.[207]
In 2016, China published a position paper questioning the adequacy of existing international law to address the eventuality of fully autonomous weapons, becoming the first permanent member of the U.N. Security Council to broach the issue,[47] and leading to proposals for global regulation.[208] The possibility of a moratorium or preemptive ban on the development and use of LAWS has also been raised on several occasions by other national delegations to the Convention on Certain Conventional Weapons and is strongly advocated for by the Campaign to Stop Killer Robots – a coalition of non-governmental organizations.[209] The US government maintains that current international humanitarian law is capable of regulating the development or use of LAWS.[210] The Congressional Research Service indicated in 2023 that the US does not have LAWS in its inventory, but that its policy does not prohibit their development and employment.[211]
|
https://en.wikipedia.org/wiki/Regulation_of_artificial_intelligence
|
Cold start is a potential problem in computer-based information systems that involve a degree of automated data modelling. Specifically, it concerns the issue that the system cannot draw any inferences for users or items about which it has not yet gathered sufficient information.
The cold start problem is a well known and well researched problem for recommender systems. Recommender systems form a specific type of information filtering (IF) technique that attempts to present information items (e-commerce, films, music, books, news, images, web pages) that are likely of interest to the user. Typically, a recommender system compares the user's profile to some reference characteristics. These characteristics may be related to item characteristics (content-based filtering) or the user's social environment and past behavior (collaborative filtering).
Depending on the system, the user can be associated to various kinds of interactions: ratings, bookmarks, purchases, likes, number of page visits etc.
There are three cases of cold start:[1] the new community, the new item, and the new user.
The new community problem, or systemic bootstrapping, refers to the startup of the system, when virtually no information is present for the recommender to rely upon.[2] This case combines the disadvantages of both the new user and the new item cases, as all items and users are new. Due to this, some of the techniques developed to deal with those two cases are not applicable to system bootstrapping.
The item cold-start problem refers to items added to the catalogue that have few or no interactions. This constitutes a problem mainly for collaborative filtering algorithms, because they rely on an item's interactions to make recommendations. If no interactions are available, a pure collaborative algorithm cannot recommend the item. If only a few interactions are available, a collaborative algorithm will be able to recommend it, but the quality of those recommendations will be poor.[3] This raises a related issue, which no longer concerns new items, but rather unpopular items.
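To make the failure mode concrete, here is a minimal sketch (with made-up data, not from the cited papers) of an item-item collaborative scorer: a cold item has zero similarity to every other item, so it can never be ranked above items that do have interactions.

```python
import numpy as np

# Toy interaction matrix (rows: users, columns: items).
# The last item is brand new and has no interactions yet.
R = np.array([
    [5, 3, 0, 0],
    [4, 0, 4, 0],
    [0, 2, 5, 0],
], dtype=float)

def item_cosine_similarity(R):
    """Item-item cosine similarity computed from interactions alone."""
    norms = np.linalg.norm(R, axis=0)
    norms[norms == 0] = 1.0          # avoid division by zero for cold items
    X = R / norms
    return X.T @ X

S = item_cosine_similarity(R)
scores = R @ S                        # item-based collaborative scores
print(S[-1])          # the cold item's similarity row: all zeros
print(scores[:, -1])  # its predicted score for every user: all zeros
```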
In some cases (e.g. movie recommendations) it might happen that a handful of items receive an extremely high number of interactions, while most of the items only receive a fraction of them. This is referred to as popularity bias.[4]
In the context of cold-start items, popularity bias is important because it might happen that many items, even if they have been in the catalogue for months, have received only a few interactions. This creates a negative loop in which unpopular items will be poorly recommended, will therefore receive much less visibility than popular ones, and will struggle to receive interactions.[5] While it is expected that some items will be less popular than others, this issue specifically refers to the fact that the recommender does not have enough collaborative information to recommend them in a meaningful and reliable way.[6]
Content-based filtering algorithms, on the other hand, are in theory much less prone to the new item problem. Since content-based recommenders choose which items to recommend based on the features the items possess, even if no interactions exist for a new item, its features still allow a recommendation to be made.[7] This of course assumes that a new item is already described by its attributes, which is not always the case. Consider so-called editorial features (e.g. director, cast, title, year): these are always known when the item, in this case a movie, is added to the catalogue. Other kinds of attributes might not be, e.g. features extracted from user reviews and tags.[8] Content-based algorithms relying on user-provided features suffer from the cold-start item problem as well, since for new items with no (or very few) interactions, no (or very few) user reviews and tags will be available.
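In contrast to the collaborative case, a content-based scorer can rank a cold item immediately from its editorial attributes. A minimal sketch (hypothetical one-hot genre features):

```python
import numpy as np

# Editorial features known at catalogue time: [action, comedy, drama].
item_features = np.array([
    [1, 0, 0],   # item 0
    [0, 1, 0],   # item 1
    [1, 0, 1],   # item 2
    [1, 0, 1],   # item 3: new, zero interactions, but features are known
], dtype=float)

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)

# Build a user profile from the features of items the user liked (0 and 2),
# then score every item, including the cold one, against that profile.
user_profile = item_features[[0, 2]].mean(axis=0)
scores = [cosine(user_profile, f) for f in item_features]
print(scores)   # the cold item 3 scores as high as the warm item 2
```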
The new user case refers to when a new user enrolls in the system and, for a certain period of time, the recommender has to provide recommendations without relying on the user's past interactions, since none have occurred yet.[1] This problem is of particular importance when the recommender is part of the service offered to users, since a user who is faced with recommendations of poor quality might soon decide to stop using the system before providing enough interactions to allow the recommender to understand their interests.
The main strategy in dealing with new users is to ask them to provide some preferences to build an initial user profile. A threshold has to be found between the length of the user registration process, which if too long might induce too many users to abandon it, and the amount of initial data required for the recommender to work properly.[2]
Similarly to the new items case, not all recommender algorithms are affected in the same way. Item-item recommenders will be affected, as they rely on the user's profile to weight how relevant other users' preferences are. Collaborative filtering algorithms are the most affected, as without interactions no inference can be made about the user's preferences. User-user recommender algorithms[9] behave slightly differently. A user-user content-based algorithm will rely on users' features (e.g. age, gender, country) to find similar users and recommend the items they interacted with in a positive way, and is therefore robust to the new user case. Note that all this information is acquired during the registration process, either by asking the user to input the data directly, or by leveraging data already available, e.g. in the user's social media accounts.[10]
Due to the high number of recommender algorithms available as well as system type and characteristics, many strategies to mitigate the cold-start problem have been developed. The main approach is to rely on hybrid recommenders, in order to mitigate the disadvantages of one category or model by combining it with another.[11][12][13]
All three categories of cold start (new community, new item, and new user) have in common the lack of user interactions and present some commonalities in the strategies available to address them.
A common strategy when dealing with new items is to couple a collaborative filtering recommender, for warm items, with a content-based filtering recommender, for cold items, as sketched below. While the two algorithms can be combined in different ways, the main drawback of this method is the poor recommendation quality often exhibited by content-based recommenders in scenarios where it is difficult to provide a comprehensive description of the item characteristics.[14] In the case of new users, if no demographic features are present or their quality is too poor, a common strategy is to offer them non-personalized recommendations: they might simply be recommended the most popular items, either globally or for their specific geographical region or language.
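A minimal sketch of such a switching hybrid (the threshold and scorer names are illustrative, not taken from the cited works):

```python
# Route warm items to a collaborative scorer and cold items to a
# content-based scorer, based on a simple interaction-count threshold.
def hybrid_score(user, item, n_interactions, cf_score, cb_score, threshold=5):
    if n_interactions.get(item, 0) >= threshold:
        return cf_score(user, item)   # warm item: trust collaborative data
    return cb_score(user, item)       # cold item: fall back to content

# Usage with stand-in scorers:
counts = {"warm_movie": 120, "new_movie": 0}
cf = lambda u, i: 0.9                 # dummy collaborative prediction
cb = lambda u, i: 0.6                 # dummy content-based prediction
print(hybrid_score("alice", "warm_movie", counts, cf, cb))  # 0.9
print(hybrid_score("alice", "new_movie", counts, cf, cb))   # 0.6
```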
One of the available options when dealing with cold users or items is to rapidly acquire some preference data. There are various ways to do this, depending on the amount of information required. These techniques are called preference elicitation strategies.[15][16] This may be done either explicitly (by querying the user) or implicitly (by observing the user's behaviour). In both cases, the cold start problem implies that the user has to dedicate an amount of effort to using the system in its 'dumb' state – contributing to the construction of their user profile – before the system can start providing any intelligent recommendations.[17]
For example MovieLens, a web-based recommender system for movies, asks the user to rate some movies as part of the registration.
While preference elicitation strategies are a simple and effective way to deal with new users, the additional requirements during registration make the process more time-consuming for the user. Moreover, the quality of the obtained preferences might not be ideal: the user could rate items seen months or years ago, or could provide almost random ratings if rushing through the registration without paying attention.
The construction of the user's profile may also be automated by integrating information from other user activities, such as browsing histories or social media platforms. If, for example, a user has been reading information about a particular music artist from a media portal, then the associated recommender system would automatically propose that artist's releases when the user visits the music store.[18]
A variation of the previous approach is to automatically assign ratings to new items, based on the ratings assigned by the community to other similar items. Item similarity would be determined according to the items' content-based characteristics.[17]
It is also possible to create an initial profile of a user based on the personality characteristics of the user and use such a profile to generate personalized recommendations.[19][20] Personality characteristics of the user can be identified using a personality model such as the five factor model (FFM).
Another possible technique is to apply active learning (machine learning). The main goal of active learning is to guide the user in the preference elicitation process, asking them to rate only the items that, from the recommender's point of view, will be the most informative. This is done by analysing the available data and estimating the usefulness of the data points (e.g., ratings, interactions).[21] As an example, say that we want to build two clusters from a certain cloud of points. As soon as we have identified two points each belonging to a different cluster, which is the next most informative point? If we take a point close to one we already know, we can expect that it will likely belong to the same cluster. If we choose a point which is in between the two clusters, knowing which cluster it belongs to will help us find where the boundary is, allowing us to classify many other points with just a few observations.
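The cluster intuition can be made concrete with a toy uncertainty-sampling rule (hypothetical data; real active-learning criteria for recommenders are more elaborate):

```python
import numpy as np

# One labelled point per cluster, in a 2-D feature space.
centroids = np.array([[0.0, 0.0], [10.0, 10.0]])

candidates = np.array([
    [1.0, 0.5],    # close to cluster 0: labelling it tells us little
    [5.2, 4.9],    # near the midpoint: most informative about the boundary
    [9.5, 9.8],    # close to cluster 1
])

# Query the candidate whose distances to the two centroids are most
# similar, i.e. the point whose cluster membership is least clear.
d = np.linalg.norm(candidates[:, None, :] - centroids[None, :, :], axis=2)
uncertainty = -np.abs(d[:, 0] - d[:, 1])
print(candidates[np.argmax(uncertainty)])   # -> [5.2 4.9]
```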
The cold start problem is also exhibited by interface agents. Since such an agent typically learns the user's preferences implicitly, by observing patterns in the user's behaviour – "watching over the shoulder" – it would take time before the agent can perform any adaptations personalised to the user. Even then, its assistance would be limited to activities it has formerly observed the user engaging in.[22] The cold start problem may be overcome by introducing an element of collaboration amongst agents assisting various users. This way, novel situations may be handled by requesting other agents to share what they have already learnt from their respective users.[22]
In recent years, more advanced strategies have been proposed; they all rely on machine learning and attempt to merge the content and collaborative information in a single model.
One example of these approaches is called attribute to feature mapping,[23] which is tailored to matrix factorization algorithms.[24] The basic idea is the following. A matrix factorization model represents the user-item interactions as the product of two rectangular matrices whose content is learned from the known interactions via machine learning. Each user is associated to a row of the first matrix, and each item to a column of the second matrix. The row or column associated to a specific user or item is called its latent factors.[25] When a new item is added, it has no associated latent factors, and the lack of interactions does not allow them to be learned as was done for other items. If each item is associated to some features (e.g. author, year, publisher, actors), it is possible to define an embedding function which, given the item features, estimates the corresponding item latent factors. The embedding function can be designed in many ways and is trained with the data already available from warm items; a sketch follows this paragraph. Alternatively, one could apply a group-specific method.[26][27] A group-specific method further decomposes each latent factor into two additive parts: one part corresponds to each item (and/or each user), while the other part is shared among items within each item group (e.g., a group of movies could be movies of the same genre). Then once a new item arrives, we can assign a group label to it and approximate its latent factors by the group-specific part (of the corresponding item group). Therefore, although the individual part of the new item is not available, the group-specific part provides an immediate and effective solution. The same applies for a new user: if some information is available for them (e.g. age, nationality, gender), then their latent factors can be estimated via an embedding function or a group-specific latent factor.
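A minimal sketch of the embedding-function idea, assuming a linear map fitted by least squares (the random matrices below stand in for the factors and attributes a real system would learn and store):

```python
import numpy as np

rng = np.random.default_rng(0)
n_warm, n_attributes, n_factors = 100, 12, 8

# Latent factors of warm items, as learned beforehand by some matrix
# factorization model on the known interactions (stand-in data here).
warm_factors = rng.normal(size=(n_warm, n_factors))

# Editorial attributes of the same warm items (e.g. genre, year buckets).
warm_attributes = rng.normal(size=(n_warm, n_attributes))

# Fit the embedding function A: attributes -> latent factors on warm items.
A, *_ = np.linalg.lstsq(warm_attributes, warm_factors, rcond=None)

# A new item arrives with known attributes but no interactions:
cold_attributes = rng.normal(size=n_attributes)
cold_factors = cold_attributes @ A      # estimated latent factors

# The cold item can now be scored against any user's factor vector.
user_factors = rng.normal(size=n_factors)
print(user_factors @ cold_factors)
```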
Another recent approach, which bears similarities with feature mapping, is building a hybrid content-based filtering recommender in which features, either of the items or of the users, are weighted according to the user's perception of their importance. In order to identify a movie that the user could like, different attributes (e.g. the actors, director, country, title) will have different importance. As an example, consider the James Bond movie series: the main actor changed many times over the years, while some cast members did not, like Lois Maxwell. Therefore, her presence will probably be a better identifier of that kind of movie than the presence of one of the various main actors.[14][28] Although various techniques exist to apply feature weighting to user or item features in recommender systems, most of them come from the information retrieval domain, like tf–idf and Okapi BM25; only a few have been developed specifically for recommenders.[29]
Hybrid feature weighting techniques in particular are tailored to the recommender system domain. Some of them learn feature weights by directly exploiting the user's interactions with items, like FBSM.[28] Others rely on an intermediate collaborative model trained on warm items and attempt to learn the content feature weights that best approximate the collaborative model.[14]
Many of the hybrid methods can be considered special cases of factorization machines.[30][31]
The above methods rely on affiliated information from users or items. Recently, another approach mitigates the cold start problem by assigning lower constraints to the latent factors associated with the items or users that reveal more information (i.e., popular items and active users), and setting higher constraints for the others (i.e., less popular items and inactive users).[32] It has been shown that various recommendation models benefit from this strategy.[33] Differentiating the regularization weights can be integrated with the other cold start mitigating strategies.
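A sketch of how differentiated regularization might enter a matrix factorization objective (the inverse-popularity weighting below is one illustrative choice, not necessarily the exact scheme of the cited work):

```python
import numpy as np

def mf_loss(R, mask, U, V, base_lambda=0.1):
    """Squared error on observed entries plus L2 penalties that shrink
    with popularity: frequently observed users/items are constrained
    less, rarely observed ones more."""
    item_counts = mask.sum(axis=0)                  # interactions per item
    user_counts = mask.sum(axis=1)                  # interactions per user
    lam_item = base_lambda / (1.0 + item_counts)    # larger for cold items
    lam_user = base_lambda / (1.0 + user_counts)    # larger for cold users
    err = ((mask * (R - U @ V.T)) ** 2).sum()
    reg = (lam_user[:, None] * U ** 2).sum() + (lam_item[:, None] * V ** 2).sum()
    return err + reg
```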
|
https://en.wikipedia.org/wiki/Cold_start_(recommender_systems)
|
In physics and mathematics, a random field is a random function over an arbitrary domain (usually a multi-dimensional space such as $\mathbb{R}^{n}$). That is, it is a function $f(x)$ that takes on a random value at each point $x\in \mathbb{R}^{n}$ (or some other domain). It is also sometimes thought of as a synonym for a stochastic process with some restriction on its index set. That is, by modern definitions, a random field is a generalization of a stochastic process where the underlying parameter need no longer be real or integer valued "time" but can instead take values that are multidimensional vectors or points on some manifold.[1]
Given a probability space $(\Omega ,{\mathcal {F}},P)$, an $X$-valued random field is a collection of $X$-valued random variables indexed by elements in a topological space $T$. That is, a random field $F$ is a collection
$$\{F_{t}:t\in T\},$$
where each $F_{t}$ is an $X$-valued random variable.
In its discrete version, a random field is a list of random numbers whose indices are identified with a discrete set of points in a space (for example, $n$-dimensional Euclidean space). Suppose there are four random variables, $X_{1}$, $X_{2}$, $X_{3}$, and $X_{4}$, located in a 2D grid at (0,0), (0,2), (2,2), and (2,0), respectively. Suppose each random variable can take on the value of −1 or 1, and the probability of each random variable's value depends on its immediately adjacent neighbours. This is a simple example of a discrete random field.
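A minimal Gibbs-sampling simulation of this four-variable field (assuming, for illustration, Ising-style conditionals with a coupling β that favours agreement between neighbours):

```python
import numpy as np

rng = np.random.default_rng(42)

# Four +/-1 variables at the corners of a square; each variable's
# conditional distribution depends only on its two adjacent neighbours.
neighbours = {0: (1, 3), 1: (0, 2), 2: (1, 3), 3: (0, 2)}
beta = 0.8                       # coupling strength favouring agreement
x = rng.choice([-1, 1], size=4)

def gibbs_step(x):
    for i in range(4):
        field = sum(x[j] for j in neighbours[i])
        # P(x_i = +1 | neighbours) for an Ising-style conditional
        p_plus = 1.0 / (1.0 + np.exp(-2.0 * beta * field))
        x[i] = 1 if rng.random() < p_plus else -1
    return x

for _ in range(1000):
    x = gibbs_step(x)
print(x)    # one sample from this small Markov random field
```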
More generally, the values each $X_{i}$ can take on might be defined over a continuous domain. In larger grids, it can also be useful to think of the random field as a "function valued" random variable as described above. In quantum field theory the notion is generalized to a random functional, one that takes on random values over a space of functions (see Feynman integral).
Several kinds of random fields exist, among them the Markov random field (MRF), Gibbs random field, conditional random field (CRF), and Gaussian random field. In 1974, Julian Besag proposed an approximation method relying on the relation between MRFs and Gibbs RFs.[citation needed]
An MRF exhibits the Markov property
$$P(X_{i}=x_{i}\mid X_{j}=x_{j},\,j\neq i)=P(X_{i}=x_{i}\mid X_{j}=x_{j},\,j\in \partial _{i})$$
for each choice of values $(x_{j})_{j}$. Here each $\partial _{i}$ is the set of neighbors of $i$. In other words, the probability that a random variable assumes a value depends only on its immediate neighboring random variables. The conditional probability of a random variable in an MRF can be written in a Gibbs form,
$$P(X_{i}=k\mid X_{j}=x_{j},\,j\in \partial _{i})=\frac{\exp(-E(k,x_{\partial _{i}}))}{\sum _{k'}\exp(-E(k',x_{\partial _{i}}))},$$
where the sum (which can be an integral) is over the possible values $k'$ and $E$ is an energy function determined by the field. It is sometimes difficult to compute this quantity exactly.
When used in the natural sciences, values in a random field are often spatially correlated: for example, adjacent values (i.e. values with adjacent indices) do not differ as much as values that are further apart. This is an example of a covariance structure, many different types of which may be modeled in a random field. One example is the Ising model, where sometimes only nearest neighbor interactions are included, as a simplification to better understand the model.
A common use of random fields is in the generation of computer graphics, particularly those that mimic natural surfaces such as water and earth. Random fields have also been used in subsurface ground models.[2]
In neuroscience, particularly in task-related functional brain imaging studies using PET or fMRI, statistical analysis of random fields is one common alternative to correction for multiple comparisons for finding regions with truly significant activation.[3] More generally, random fields can be used to correct for the look-elsewhere effect in statistical testing, where the domain is the parameter space being searched.[4]
They are also used in machine learning applications (see graphical models).
Random fields are of great use in studying natural processes by the Monte Carlo method, in which the random fields correspond to naturally spatially varying properties. This leads to tensor-valued random fields, in which the key role is played by a statistical volume element (SVE), a spatial box over which properties can be averaged; when the SVE becomes sufficiently large, its properties become deterministic and one recovers the representative volume element (RVE) of deterministic continuum physics. The second type of random field that appears in continuum theories are those of dependent quantities (temperature, displacement, velocity, deformation, rotation, body and surface forces, stress, etc.).[5]
|
https://en.wikipedia.org/wiki/Random_field
|
Personal Digital Cellular (PDC) was a 2G mobile telecommunications standard used exclusively in Japan.[citation needed]
After a peak of nearly 80 million subscribers to PDC, it had 46 million subscribers in December 2005, and was slowly phased out in favor of 3G technologies like W-CDMA and CDMA2000. At the end of March 2012, the count had dwindled to almost 200,000 subscribers. NTT Docomo shut down its network, mova, at midnight on April 1, 2012.[1]
Like D-AMPS and GSM, PDC uses TDMA. The standard was defined by the RCR (which later became ARIB) in April 1991, and NTT DoCoMo launched its Digital mova service in March 1993. PDC uses a 25 kHz carrier and π/4-DQPSK modulation, with 3-timeslot 11.2 kbit/s (full-rate) or 6-timeslot 5.6 kbit/s (half-rate) voice codecs.
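The slot arithmetic is easy to check: halving the codec rate doubles the number of voice channels on the same carrier. A quick illustrative calculation:

```python
# Per-carrier voice payload on a 25 kHz PDC carrier:
full_rate = 3 * 11.2   # 3 timeslots x 11.2 kbit/s (full-rate codec)
half_rate = 6 * 5.6    # 6 timeslots x  5.6 kbit/s (half-rate codec)
print(round(full_rate, 1), "kbit/s across 3 full-rate calls")
print(round(half_rate, 1), "kbit/s across 6 half-rate calls")
# Both configurations carry the same 33.6 kbit/s of voice payload;
# half-rate trades voice quality for twice the call capacity.
```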
PDC is implemented in the 800 MHz (downlink 810–888 MHz, uplink 893–958 MHz) and 1.5 GHz (downlink 1477–1501 MHz, uplink 1429–1453 MHz) bands. The air interface is defined in RCR STD-27 and the core network MAP by JJ-70.10. NEC, Motorola, and Ericsson are the major network equipment manufacturers.[citation needed]
The services include voice (full and half-rate), supplementary services (call waiting, voice mail, three-way calling, call forwarding, and so on), data service (up to 9.6 kbit/s CSD), and packet-switched wireless data (up to 28.8 kbit/s PDC-P). Voice codecs are PDC-EFR and PDC-HR.
Compared to GSM, PDC's weak broadcast strength allows small, portable phones with light batteries, at the expense of substandard voice quality and problems maintaining the connection, particularly in enclosed spaces like elevators.
PDC Enhanced Full Rate is a speech coding standard that was developed by ARIB in Japan and used in PDC mobile networks in Japan. Carriers use one of the following codecs as PDC-EFR: CS-ACELP at 8 kbit/s (a.k.a. NTT DoCoMo Hypertalk) and ACELP at 6.7 kbit/s (a.k.a. J-PHONE Crystal Voice).[2][3]
The PDC-EFR CS-ACELP codec uses G.729. The PDC-EFR ACELP codec is compatible with the AMR mode AMR_6.70.
PDC Half Rate is a speech coding standard that was developed by ARIB in Japan and used in PDC mobile networks in Japan. It operates with a bit-rate of 3.45 kbit/s and is based on Pitch Synchronous Innovation CELP (PSI-CELP).[4]
|
https://en.wikipedia.org/wiki/Personal_Digital_Cellular
|
A graphing calculator is a class of hand-held calculator that is capable of plotting graphs and solving complex functions. While there are several companies that manufacture models of graphing calculators, Hewlett-Packard is a major manufacturer.
The following table compares general and technical information for Hewlett-Packard graphing calculators:
|
https://en.wikipedia.org/wiki/Comparison_of_HP_graphing_calculators
|
The bandwagon effect is a psychological phenomenon where people adopt certain behaviors, styles, or attitudes simply because others are doing so.[1] More specifically, it is a cognitive bias by which public opinion or behaviours can alter due to particular actions and beliefs rallying amongst the public.[2] The rate of uptake of beliefs, ideas, fads and trends increases with respect to the proportion of others who have already adopted them.[3] As more people come to believe in something, others also "hop on the bandwagon" regardless of the underlying evidence.[citation needed]
Following others' actions or beliefs can occur because of conformism or deriving information from others. Much of the influence of the bandwagon effect comes from the desire to 'fit in' with peers; making similar selections to other people is seen as a way to gain access to a particular social group.[4] An example of this is fashion trends, wherein the increasing popularity of a certain garment or style encourages more acceptance.[5] When individuals make rational choices based on the information they receive from others, economists have proposed that information cascades can quickly form, in which people ignore their personal information signals and follow the behaviour of others.[6] Cascades explain why behaviour is fragile, as people understand that their behaviour is based on a very limited amount of information. As a result, fads form easily but are also easily dislodged.[citation needed] The phenomenon is observed in various fields, such as economics, political science, medicine, and psychology.[7] In social psychology, people's tendency to align their beliefs and behaviors with a group is known as 'herd mentality' or 'groupthink'.[8] The reverse bandwagon effect (also known as the snob effect in certain contexts) is a cognitive bias that causes people to avoid doing something because they believe that other people are doing it.[9]
The phenomenon where ideas become adopted as a result of their popularity has been apparent for some time. However, the metaphorical use of the term bandwagon in reference to this phenomenon began in 1848.[10] A literal "bandwagon" is a wagon that carries a musical ensemble, or band, during a parade, circus, or other entertainment event.[11][12]
The phrase "jump on the bandwagon" first appeared in American politics in 1848 during the presidential campaign of Zachary Taylor. Dan Rice, a famous and popular circus clown of the time, invited Taylor to join his circus bandwagon. As Taylor gained more recognition and his campaign became more successful, people began saying that Taylor's political opponents ought to "jump on the bandwagon" themselves if they wanted to be associated with such success.
Later, during the time of William Jennings Bryan's 1900 presidential campaign, bandwagons had become standard in campaigns,[13] and the phrase "jump on the bandwagon" was used as a derogatory term[when?], implying that people were associating themselves with success without considering that with which they associated themselves.
Despite its emergence in the late 19th century, the theoretical background of bandwagon effects has been understood only rather recently.[12] One of the best-known experiments on the topic is the 1950s Asch conformity experiment, which illustrates the individual variation in the bandwagon effect.[14][9] Academic study of the bandwagon effect especially gained interest in the 1980s, as scholars studied the effect of public opinion polls on voter opinions.[10]
Individuals are highly influenced by the pressure and norms exerted by groups. As an idea or belief increases in popularity, people are more likely to adopt it; when seemingly everyone is doing something, there is strong pressure to conform.[1] Individuals' impressions of public opinion or preference can originate from several sources.
Some individual reasons behind the bandwagon effect include:
Another cause can come from distorted perceptions of mass opinion, known as 'false consensus' or 'pluralistic ignorance'.[failed verification] In politics, bandwagon effects can also come as a result of indirect processes that are mediated by political actors.[12]
The bandwagon effect works through a self-reinforcing mechanism and can spread quickly and on a large scale through a positive feedback loop, whereby the more people are affected by it, the more likely other people are to be affected by it too.[7][9]
A new concept that is originally promoted by only a single advocate or a minimal group of advocates can quickly grow and become widely popular, even when sufficient supporting evidence is lacking. The new concept gains a small following, which grows until it reaches a critical mass, until, for example, it begins being covered by mainstream media, at which point a large-scale bandwagon effect begins, causing more people to support the concept in increasingly large numbers. This can be seen as a result of the availability cascade, a self-reinforcing process through which a certain belief gains increasing prominence in public discourse.[9]
The bandwagon effect can take place in voting:[15] it occurs on an individual scale, where a voter's preference can be altered by the rising popularity of a candidate[16] or a policy position.[17] The aim of the change in preference is for the voter to end up picking the "winner's side" in the end.[18] Voters are more easily persuaded to do so in elections that are non-private or when the vote is highly publicised.[19]
The bandwagon effect has been applied to situations involving majority opinion, such as political outcomes, where people alter their opinions to the majority view.[20] Such a shift in opinion can occur because individuals draw inferences[clarification needed] from the decisions of others, as in an informational cascade.[21]
Perceptions of popular support may affect the choice of activists about which parties or candidates to support by donations or voluntary work in campaigns. They may strategically funnel these resources to contenders perceived as well supported and thus electorally viable, thereby enabling them to run more powerful, and thus more influential campaigns.[12]
American economist Gary Becker has argued that the bandwagon effect is powerful enough to flip the demand curve to be upward sloping. A typical demand curve is downward sloping: as prices rise, demand falls. An upward-sloping demand curve, by contrast, would imply that demand rises even as prices rise.[7]
The bandwagon effect comes about in two ways in financial markets.
First, through price bubbles: these often happen in financial markets when the price of a particularly popular security keeps rising. This occurs when many investors line up to buy a security, bidding up the price, which in turn attracts more investors. The price can rise beyond a certain point, causing the security to be highly overvalued.[7]
Second is liquidity holes: when unexpected news or events occur, market participants typically stop trading activity until the situation becomes clear. This reduces the number of buyers and sellers in the market, causing liquidity to decrease significantly. The lack of liquidity distorts price discovery and causes massive shifts in asset prices, which can lead to increased panic, which further increases uncertainty, and the cycle continues.[7]
In microeconomics, bandwagon effects may play out in interactions of demand and preference.[22] The bandwagon effect arises when people's preference for a commodity increases as the number of people buying it increases. Consumers may choose their product based on others' preferences, believing it to be the superior product. This choice can result from directly observing the purchase choices of others, or from observing the scarcity of a product compared to its competition as a result of the choices previous consumers have made. This scenario can also be seen in restaurants, where the number of customers in a restaurant can persuade potential diners to eat there, based on the perception that the food must be better than the competition's due to its popularity.[4] This interaction potentially disturbs the normal results of the theory of supply and demand, which assumes that consumers make buying decisions exclusively based on price and their own personal preference.[7]
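A minimal sketch of this feedback (an illustrative model, not taken from the cited sources): each consumer buys at price p if a private valuation plus a social term proportional to current adoption exceeds p, and equilibrium adoption is the resulting fixed point.

```python
import numpy as np

rng = np.random.default_rng(1)
v = rng.uniform(0, 1, size=100_000)   # private valuations

def equilibrium_adoption(p, s, iters=100):
    """Iterate Q -> share of consumers with v + s*Q >= p to a fixed point."""
    Q = 0.0
    for _ in range(iters):
        Q = np.mean(v + s * Q >= p)
    return Q

print(equilibrium_adoption(p=0.8, s=0.0))   # no social term: ~0.20 buy
print(equilibrium_adoption(p=0.8, s=0.5))   # bandwagon term: ~0.40 buy
```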
Decisions made by medical professionals can also be influenced by the bandwagon effect. In particular, the widespread use and support of now-disproven medical procedures throughout history can be attributed to their popularity at the time. Layton F. Rikkers (2002), professor emeritus of surgery at the University of Wisconsin–Madison,[23] calls these prevailing practices medical bandwagons, which he defines as "the overwhelming acceptance of unproved but popular [medical] ideas."[10]
Medical bandwagons have led to inappropriate therapies for numerous patients, and have impeded the development of more appropriate treatment.[24]
One paper from 1979 on the topic of bandwagons of medicine describes how a new medical concept or treatment can gain momentum and become mainstream, as a result of a large-scale bandwagon effect:[25]
One who supports a particular sports team, despite having shown no interest in that team until it started gaining success, can be considered a "bandwagon fan".[26]
As an increasing number of people begin to use a specific social networking site or application, other people become more likely to begin using those sites or applications as well. The bandwagon effect also influences which posts are viewed and shared.[27]
One study used bandwagon effects to examine the comparative impact of two separate bandwagon heuristic cues (quantitative vs. qualitative) on changes in news readers' attitudes in an online comments section. Study 1 demonstrated that qualitative cues had a greater influence on news readers' judgments than quantitative cues. Study 2 confirmed the results of Study 1 and showed that people's attitudes are influenced by apparent public opinion, offering concrete evidence of the influence that digital bandwagons exert.[28]
The bandwagon effect can also affect the way the masses dress and can be responsible for clothing trends. People tend to want to dress in a manner that suits the current trend and are influenced by those they see often, normally celebrities. Such publicised figures normally act as the catalyst for the style of the current period. Once a small group of consumers attempts to emulate a particular celebrity's dress choice, more people tend to copy the style due to the pressure or desire to fit in and be liked by their peers.
|
https://en.wikipedia.org/wiki/Bandwagon_effect
|
This is a list of relational database management systems.
|
https://en.wikipedia.org/wiki/List_of_relational_database_management_systems
|
A process flow diagram (PFD) is a diagram commonly used in chemical and process engineering to indicate the general flow of plant processes and equipment. The PFD displays the relationship between major equipment of a plant facility and does not show minor details such as piping details and designations. Another commonly used term for a PFD is process flowsheet. It is the key document in process design.[1]
Typically, process flow diagrams of a singleunit processinclude the following:
Process flow diagrams generally do not include:
Process flow diagrams of multiple process units within a large industrial plant will usually contain less detail and may be called block flow diagrams or schematic flow diagrams.
The process flow diagram below depicts a single chemical engineering unit process known as an amine treating plant:
The process flow diagram below is an example of a schematic or block flow diagram and depicts the various unit processes within a typical oil refinery:
A PFD can be computer generated from process simulators (see List of Chemical Process Simulators), CAD packages, or flow chart software using a library of chemical engineering symbols. Rules and symbols are available from standardization organizations such as DIN, ISO or ANSI. Often PFDs are produced on large sheets of paper.
PFDs of many commercial processes can be found in the literature, specifically in encyclopedias of chemical technology, although some might be outdated. To find recent ones, patent databases such as those available from the United States Patent and Trademark Office can be useful.
|
https://en.wikipedia.org/wiki/Process_flow_diagram
|
In mathematics, a product is the result of multiplication, or an expression that identifies objects (numbers or variables) to be multiplied, called factors. For example, 21 is the product of 3 and 7 (the result of multiplication), and $x \cdot (2+x)$ is the product of $x$ and $(2+x)$ (indicating that the two factors should be multiplied together).
When one factor is an integer, the product is called a multiple.
The order in which real or complex numbers are multiplied has no bearing on the product; this is known as the commutative law of multiplication. When matrices or members of various other associative algebras are multiplied, the product usually depends on the order of the factors. Matrix multiplication, for example, is non-commutative, as is multiplication in other algebras in general.
There are many different kinds of products in mathematics: besides being able to multiply just numbers, polynomials or matrices, one can also define products on many different algebraic structures.
Originally, a product was and is still the result of the multiplication of two or more numbers. For example, 15 is the product of 3 and 5. The fundamental theorem of arithmetic states that every composite number is a product of prime numbers, which is unique up to the order of the factors.
With the introduction of mathematical notation and variables at the end of the 15th century, it became common to consider the multiplication of numbers that are either unspecified (coefficients and parameters) or to be found (unknowns). These multiplications that cannot be effectively performed are called products. For example, in the linear equation $ax + b = 0$, the term $ax$ denotes the product of the coefficient $a$ and the unknown $x$.
Later, and essentially from the 19th century on, new binary operations have been introduced which do not involve numbers at all and have been called products; for example, the dot product. Most of this article is devoted to such non-numerical products.
The product operator for the product of a sequence is denoted by the capital Greek letter pi $\Pi$ (in analogy to the use of the capital sigma $\Sigma$ as the summation symbol).[1] For example, the expression $\prod_{i=1}^{6} i^{2}$ is another way of writing $1 \cdot 4 \cdot 9 \cdot 16 \cdot 25 \cdot 36$.[2]
The product of a sequence consisting of only one number is just that number itself; the product of no factors at all is known as the empty product, and is equal to 1.
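Python's standard library exposes this operator directly; a quick check of the sequence product above and of the one-element and empty-product conventions:

```python
from math import prod

# The product of the squares 1..6, i.e. the example above.
print(prod(i * i for i in range(1, 7)))  # 518400 = 1*4*9*16*25*36

# A one-element product is the element itself; an empty product is 1.
print(prod([7]))  # 7
print(prod([]))   # 1, the multiplicative identity
```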
Commutative rings have a product operation.
Residue classes in the rings $\mathbb{Z}/N\mathbb{Z}$ can be added, $(a + N\mathbb{Z}) + (b + N\mathbb{Z}) = (a+b) + N\mathbb{Z}$, and multiplied, $(a + N\mathbb{Z}) \cdot (b + N\mathbb{Z}) = ab + N\mathbb{Z}$.
Two functions from the reals to the reals can be multiplied in another way, called the convolution.
If $f$ and $g$ are two integrable functions, then the integral $(f * g)(t) = \int_{-\infty}^{\infty} f(\tau)\, g(t - \tau)\, d\tau$ is well defined and is called the convolution.
Under the Fourier transform, convolution becomes point-wise function multiplication.
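A discrete analogue of this fact is easy to verify numerically; a minimal NumPy sketch, zero-padding so that the DFT convolution theorem applies to the linear convolution:

```python
import numpy as np

f = np.array([1.0, 2.0, 3.0])
g = np.array([0.5, -1.0, 4.0])

# Linear convolution has length len(f) + len(g) - 1.
conv = np.convolve(f, g)
n = len(f) + len(g) - 1

# Under the discrete Fourier transform, convolution becomes
# point-wise multiplication of the (zero-padded) spectra.
print(np.allclose(np.fft.fft(conv), np.fft.fft(f, n) * np.fft.fft(g, n)))  # True
```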
The product of two polynomials is given by $\left(\sum_{i=0}^{n} a_i x^i\right) \cdot \left(\sum_{j=0}^{m} b_j x^j\right) = \sum_{k=0}^{n+m} c_k x^k$ with $c_k = \sum_{i+j=k} a_i b_j$.
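The coefficient rule $c_k = \sum_{i+j=k} a_i b_j$ is exactly a discrete convolution of the coefficient sequences; a short sketch in plain Python:

```python
def poly_mul(a, b):
    """Multiply polynomials given as coefficient lists (a[i] is the x**i coefficient)."""
    c = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            c[i + j] += ai * bj  # c_k accumulates a_i * b_j for i + j = k
    return c

# (1 + 2x) * (3 + x^2) = 3 + 6x + x^2 + 2x^3
print(poly_mul([1, 2], [3, 0, 1]))  # [3, 6, 1, 2]
```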
There are many different kinds of products in linear algebra. Some of these have confusingly similar names (outer product, exterior product) with very different meanings, while others have very different names (outer product, tensor product, Kronecker product) and yet convey essentially the same idea. A brief overview of these is given in the following sections.
By the very definition of a vector space, one can form the product of any scalar with any vector, giving a map $\mathbb{R} \times V \to V$.
A scalar product is a bilinear map $\cdot : V \times V \to \mathbb{R}$ with the condition that $v \cdot v > 0$ for all $0 \neq v \in V$.
From the scalar product, one can define a norm by letting $\|v\| := \sqrt{v \cdot v}$.
The scalar product also allows one to define an angle $\theta$ between two vectors, via $\cos \theta = \frac{v \cdot w}{\|v\|\,\|w\|}$.
In $n$-dimensional Euclidean space, the standard scalar product (called the dot product) is given by $(x_1, \ldots, x_n) \cdot (y_1, \ldots, y_n) = \sum_{i=1}^{n} x_i y_i$.
The cross product of two vectors in 3 dimensions is a vector perpendicular to the two factors, with length equal to the area of the parallelogram spanned by the two factors.
The cross product can also be expressed as the formal[a] determinant $v \times w = \det\begin{pmatrix} \mathbf{i} & \mathbf{j} & \mathbf{k} \\ v_1 & v_2 & v_3 \\ w_1 & w_2 & w_3 \end{pmatrix}$.
A linear mapping can be defined as a function $f$ between two vector spaces $V$ and $W$ with underlying field $F$, satisfying $f(t_1 x_1 + t_2 x_2) = t_1 f(x_1) + t_2 f(x_2)$ for all $x_1, x_2 \in V$ and $t_1, t_2 \in F$.[3]
If one only considers finite dimensional vector spaces, then $f(v) = f\left(v_i \mathbf{b}_V^i\right) = v_i f\left(\mathbf{b}_V^i\right)$, in which $\mathbf{b}_V$ and $\mathbf{b}_W$ denote the bases of $V$ and $W$, $v_i$ denotes the component of $v$ on $\mathbf{b}_V^i$, and the Einstein summation convention is applied.
Now we consider the composition of two linear mappings between finite dimensional vector spaces. Let the linear mapping $f$ map $V$ to $W$, and let the linear mapping $g$ map $W$ to $U$. Then one can form the composition $(g \circ f)(v) = g(f(v))$, or, in matrix form, $u = GFv$, in which the $i$-row, $j$-column element of $F$, denoted by $F_{ij}$, is $f_{ji}$, and $G_{ij} = g_{ji}$.
The composition of more than two linear mappings can be similarly represented by a chain of matrix multiplications.
Given two matrices $A = (a_{ij}) \in \mathbb{R}^{m \times n}$ and $B = (b_{jk}) \in \mathbb{R}^{n \times p}$, their product $AB \in \mathbb{R}^{m \times p}$ is given by $(AB)_{ik} = \sum_{j=1}^{n} a_{ij} b_{jk}$.
There is a relationship between the composition of linear functions and the product of two matrices. To see this, let $r = \dim(U)$, $s = \dim(V)$ and $t = \dim(W)$ be the (finite) dimensions of vector spaces $U$, $V$ and $W$. Let $\mathcal{U} = \{u_1, \ldots, u_r\}$ be a basis of $U$, $\mathcal{V} = \{v_1, \ldots, v_s\}$ be a basis of $V$ and $\mathcal{W} = \{w_1, \ldots, w_t\}$ be a basis of $W$. In terms of these bases, let $A = M_{\mathcal{V}}^{\mathcal{U}}(f) \in \mathbb{R}^{s \times r}$ be the matrix representing $f : U \to V$ and $B = M_{\mathcal{W}}^{\mathcal{V}}(g) \in \mathbb{R}^{t \times s}$ be the matrix representing $g : V \to W$. Then $B \cdot A = M_{\mathcal{W}}^{\mathcal{U}}(g \circ f) \in \mathbb{R}^{t \times r}$ is the matrix representing $g \circ f : U \to W$.
In other words: the matrix product is the description in coordinates of the composition of linear functions.
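A quick numerical check of this correspondence, sketched with NumPy:

```python
import numpy as np

rng = np.random.default_rng(0)

# f : R^2 -> R^3 and g : R^3 -> R^4, written in the standard bases.
A = rng.standard_normal((3, 2))  # matrix of f
B = rng.standard_normal((4, 3))  # matrix of g
v = rng.standard_normal(2)

# Applying f and then g agrees with applying the single matrix B @ A.
print(np.allclose(B @ (A @ v), (B @ A) @ v))  # True
```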
Given two finite dimensional vector spaces $V$ and $W$, the tensor product of them can be defined as a $(2,0)$-tensor satisfying:
where $V^*$ and $W^*$ denote the dual spaces of $V$ and $W$.[4]
For infinite-dimensional vector spaces, one also has the:
The tensor product, outer product and Kronecker product all convey the same general idea. The differences between these are that the Kronecker product is just a tensor product of matrices, with respect to a previously-fixed basis, whereas the tensor product is usually given in its intrinsic definition. The outer product is simply the Kronecker product, limited to vectors (instead of matrices).
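The relationship between these products is easy to see on small examples; a short NumPy sketch:

```python
import numpy as np

v = np.array([1, 2])
w = np.array([3, 4, 5])

# For vectors, the outer and Kronecker products carry the same numbers,
# arranged as a matrix and as a flat vector respectively.
print(np.outer(v, w))  # [[ 3  4  5] [ 6  8 10]]
print(np.kron(v, w))   # [ 3  4  5  6  8 10]

# The coordinate-free tensor product produces the same array of components.
print(np.tensordot(v, w, axes=0))  # identical to np.outer(v, w)
```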
In general, whenever one has two mathematical objects that can be combined in a way that behaves like a linear algebra tensor product, this can be most generally understood as the internal product of a monoidal category. That is, the monoidal category captures precisely the meaning of a tensor product: it captures exactly the notion of why tensor products behave the way they do. More precisely, a monoidal category is the class of all things (of a given type) that have a tensor product.
Other kinds of products in linear algebra include:
In set theory, a Cartesian product is a mathematical operation which returns a set (or product set) from multiple sets. That is, for sets A and B, the Cartesian product A × B is the set of all ordered pairs (a, b), where a ∈ A and b ∈ B.[5]
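In Python, for example, the standard library enumerates exactly this set of ordered pairs:

```python
from itertools import product

A = [1, 2]
B = ["x", "y"]

# The Cartesian product A x B: all ordered pairs (a, b) with a in A, b in B.
print(list(product(A, B)))
# [(1, 'x'), (1, 'y'), (2, 'x'), (2, 'y')]
```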
The class of all things (of a given type) that have Cartesian products is called a Cartesian category. Many of these are Cartesian closed categories. Sets are an example of such objects.
The empty product on numbers and most algebraic structures has the value of 1 (the identity element of multiplication), just like the empty sum has the value of 0 (the identity element of addition). However, the concept of the empty product is more general, and requires special treatment in logic, set theory, computer programming and category theory.
Products over other kinds of algebraic structures include:
A few of the above products are examples of the general notion of an internal product in a monoidal category; the rest are describable by the general notion of a product in category theory.
All of the previous examples are special cases or examples of the general notion of a product. For the general treatment of the concept of a product, see product (category theory), which describes how to combine two objects of some kind to create an object, possibly of a different kind. But also, in category theory, one has:
|
https://en.wikipedia.org/wiki/Product_(mathematics)
|
An enumerative definition of a concept or term is a special type of extensional definition that gives an explicit and exhaustive listing of all the objects that fall under the concept or term in question. Enumerative definitions are only possible for finite sets and only practical for relatively small sets.
An example of an enumerative definition for the set extant monotreme species (for which the intensional definition is "species of currently-living mammals that lay eggs") would be: the platypus, the short-beaked echidna, the eastern long-beaked echidna, the western long-beaked echidna, and Sir David's long-beaked echidna.
|
https://en.wikipedia.org/wiki/Enumerative_definition
|
Signal processing is an electrical engineering subfield that focuses on analyzing, modifying and synthesizing signals, such as sound, images, potential fields, seismic signals, altimetry processing, and scientific measurements.[1] Signal processing techniques are used to optimize transmissions and digital storage efficiency, to correct distorted signals, to improve subjective video quality, and to detect or pinpoint components of interest in a measured signal.[2]
According to Alan V. Oppenheim and Ronald W. Schafer, the principles of signal processing can be found in the classical numerical analysis techniques of the 17th century. They further state that the digital refinement of these techniques can be found in the digital control systems of the 1940s and 1950s.[3]
In 1948, Claude Shannon wrote the influential paper "A Mathematical Theory of Communication", which was published in the Bell System Technical Journal.[4] The paper laid the groundwork for the later development of information communication systems and the processing of signals for transmission.[5]
Signal processing matured and flourished in the 1960s and 1970s, and digital signal processing became widely used with specialized digital signal processor chips in the 1980s.[5]
A signal is a function $x(t)$, where this function is either[6]
Analog signal processing is for signals that have not been digitized, as in most 20th-century radio, telephone, and television systems. This involves linear electronic circuits as well as nonlinear ones. The former are, for instance, passive filters, active filters, additive mixers, integrators, and delay lines. Nonlinear circuits include compandors, multipliers (frequency mixers, voltage-controlled amplifiers), voltage-controlled filters, voltage-controlled oscillators, and phase-locked loops.
Continuous-time signal processing is for signals that vary over a continuous domain (without considering some individual interrupted points).
The methods of signal processing include the time domain, frequency domain, and complex frequency domain. This technology mainly discusses the modeling of linear time-invariant continuous systems, the integral of a system's zero-state response, setting up the system function, and the continuous-time filtering of deterministic signals. For example, in the time domain, a continuous-time signal $x(t)$ passing through a linear time-invariant filter/system denoted as $h(t)$ can be expressed at the output as
$y(t) = \int_{-\infty}^{\infty} h(\tau)\, x(t - \tau)\, d\tau$
In some contexts, $h(t)$ is referred to as the impulse response of the system. The above convolution operation is conducted between the input and the system.
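In discrete time the integral becomes a sum, and the same filtering operation is a one-liner; a minimal NumPy sketch of a signal passing through an LTI system:

```python
import numpy as np

h = np.array([1/3, 1/3, 1/3])            # impulse response: 3-tap moving average
x = np.array([0.0, 3.0, 6.0, 3.0, 0.0])  # input signal

# Discrete analogue of y(t) = integral of h(tau) x(t - tau) d tau.
y = np.convolve(x, h)
print(np.round(y, 2))  # smoothed version of x
```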
Discrete-time signal processing is for sampled signals, defined only at discrete points in time, and as such quantized in time, but not in magnitude.
Analog discrete-time signal processing is a technology based on electronic devices such as sample and hold circuits, analog time-division multiplexers, analog delay lines and analog feedback shift registers. This technology was a predecessor of digital signal processing (see below), and is still used in advanced processing of gigahertz signals.[7]
The concept of discrete-time signal processing also refers to a theoretical discipline that establishes a mathematical basis for digital signal processing, without taking quantization error into consideration.
Digital signal processing is the processing of digitized discrete-time sampled signals. Processing is done by general-purpose computers or by digital circuits such as ASICs, field-programmable gate arrays or specialized digital signal processors. Typical arithmetical operations include fixed-point and floating-point, real-valued and complex-valued, multiplication and addition. Other typical operations supported by the hardware are circular buffers and lookup tables. Examples of algorithms are the fast Fourier transform (FFT), finite impulse response (FIR) filter, infinite impulse response (IIR) filter, and adaptive filters such as the Wiener and Kalman filters.
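As an illustration of one such algorithm, a short NumPy sketch using the FFT to locate a tone in a sampled signal:

```python
import numpy as np

fs = 1000                       # sampling rate, Hz
t = np.arange(fs) / fs          # one second of samples
x = np.sin(2 * np.pi * 50 * t)  # a 50 Hz tone

spectrum = np.abs(np.fft.rfft(x))
freqs = np.fft.rfftfreq(len(x), d=1/fs)
print(freqs[np.argmax(spectrum)])  # 50.0
```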
Nonlinear signal processing involves the analysis and processing of signals produced from nonlinear systems and can be in the time, frequency, or spatiotemporal domains.[8][9] Nonlinear systems can produce highly complex behaviors including bifurcations, chaos, harmonics, and subharmonics which cannot be produced or analyzed using linear methods.
Polynomial signal processing is a type of nonlinear signal processing, where polynomial systems may be interpreted as conceptually straightforward extensions of linear systems to the nonlinear case.[10]
Statistical signal processing is an approach which treats signals as stochastic processes, utilizing their statistical properties to perform signal processing tasks.[11] Statistical techniques are widely used in signal processing applications. For example, one can model the probability distribution of noise incurred when photographing an image, and construct techniques based on this model to reduce the noise in the resulting image.
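For instance, under an additive Gaussian noise model, averaging $N$ independent noisy measurements reduces the noise standard deviation by a factor of $\sqrt{N}$; a small NumPy sketch:

```python
import numpy as np

rng = np.random.default_rng(1)
signal = np.ones(10_000)  # the true, noise-free signal
N = 100                   # number of independent noisy observations

frames = signal + rng.normal(0.0, 1.0, size=(N, signal.size))

print(round(np.std(frames[0] - signal), 2))            # about 1.0
print(round(np.std(frames.mean(axis=0) - signal), 2))  # about 0.1 = 1/sqrt(N)
```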
Graph signal processing generalizes signal processing tasks to signals living on non-Euclidean domains whose structure can be captured by a weighted graph.[12] Graph signal processing presents several key points such as sampling techniques,[13] recovery techniques[14] and time-varying techniques.[15] Graph signal processing has been applied with success in the fields of image processing, computer vision[16][17][18] and sound anomaly detection.[19]
In communication systems, signal processing may occur at:
|
https://en.wikipedia.org/wiki/Statistical_signal_processing
|
A countermeasure is a measure or action taken to counter or offset another one. As a general concept, it implies precision: any technological or tactical solution or system designed to prevent an undesirable outcome in the process. The first known use of the term, according to the Merriam-Webster dictionary, was in 1923.[1]
Countermeasures can refer to the following disciplinary spectrum:
Defense countermeasures are often divided into "active" and "passive".
"Active" countermeasures mean the system user or the defender takes an active position because the incoming incident is known so the system takes active approaches to deal with such possible damage. Such an approach may include setting up a security method for the incident or actively trying to stop or intersect such damage.
"Passive" countermeasures mean the system is not aware of the incoming incident or potential security issues. To mitigate the result of any security issues, the system sets up a set of passive approach which only activates when the system encounters security problems. Usually, "Passive" countermeasures include:
This includes information on security or defensive technology, usually a way to protect the system. For example, security software or firewall could also be thought of as an approach to defensive technology. These methods detect potential security issues and report back to the system or protect the system when the system is under a certain threat.
This means the system has damage control about the possible outcome of the security problem. For example, the system might have a backup in a remote area so even if the current system is damaged, the system could switch to the remote backup and works seamlessly.
This means the system sets up a security approach to separate the core of the system. This approach is commonly used in a modern server network, where the server user has to go through ajump serverto access the core server. The jump server works as a fortification to separate the core server and the outside, which the core server sometimes is not connected to the internet and only connects to the local network, so the user needs to access the jump server to access the core server
|
https://en.wikipedia.org/wiki/Countermeasure
|
In computer programming, code bloat is the production of program code (source code or machine code) that is unnecessarily long, slow, or otherwise wasteful of resources. Code bloat can be caused by inadequacies in the programming language in which the code is written, the compiler used to compile it, or the programmer writing it. Thus, while code bloat generally refers to source code size (as produced by the programmer), it can be used to refer instead to the generated code size or even the binary file size.
The following JavaScript algorithm has a large number of redundant variables, unnecessary logic and inefficient string concatenation.
The same logic can be stated more efficiently as follows:
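Neither listing survives in this copy of the article; as a stand-in, here is a minimal Python sketch of the same bloated-versus-compact contrast (the function and variable names are illustrative only, not the original example):

```python
# Bloated: redundant variables, unnecessary branching,
# and repeated string concatenation inside a loop.
def describe_numbers_bloated(numbers):
    result = ""
    index = 0
    count = len(numbers)
    while index < count:
        value = numbers[index]
        if value % 2 == 0:
            kind = "even"
        else:
            kind = "odd"
        result = result + str(value) + " is " + kind + "\n"
        index = index + 1
    return result

# Compact: the same logic stated directly.
def describe_numbers(numbers):
    return "".join(f"{n} is {'even' if n % 2 == 0 else 'odd'}\n" for n in numbers)
```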
The difference in code density between various computer languages is so great that often less memory is needed to hold both a program written in a "compact" language (such as a domain-specific programming language, Microsoft P-Code, or threaded code) plus an interpreter for that compact language (written in native code), than to hold that program written directly in native code.
Some techniques for reducing code bloat include:[1]
|
https://en.wikipedia.org/wiki/Code_bloat
|
Reification in knowledge representation is the process of turning a predicate[1] or statement[2] into an addressable object. Reification allows the representation of assertions so that they can be referred to or qualified by other assertions, i.e., meta-knowledge.[3]
The message "John is six feet tall" is an assertion involving truth that commits the speaker to its factuality, whereas the reified statement "Mary reports that John is six feet tall" defers such commitment to Mary. In this way, the statements can be incompatible without creating contradictions in reasoning. For example, the statements "John is six feet tall" and "John is five feet tall" are mutually exclusive (and thus incompatible), but the statements "Mary reports that John is six feet tall" and "Paul reports that John is five feet tall" are not incompatible: both can hold, with the conclusion that either Mary or Paul (or both) is, in fact, incorrect.
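A minimal sketch of the distinction in Python (the class names are illustrative, not drawn from any particular knowledge-representation system):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Statement:
    subject: str
    predicate: str
    value: str

@dataclass(frozen=True)
class Report:
    """A reified statement: the statement is an addressable object
    that another assertion (here, who reported it) can qualify."""
    reporter: str
    statement: Statement

s1 = Statement("John", "height", "six feet")
s2 = Statement("John", "height", "five feet")

# The bare statements conflict; the reports coexist without contradiction.
print(s1 == s2)  # False: mutually exclusive claims about John
reports = {Report("Mary", s1), Report("Paul", s2)}
print(len(reports))  # 2: both reports are held at once
```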
In linguistics, reporting, telling, and saying are recognized as verbal processes that project a wording (or locution). If a person says that "Paul told x" and "Mary told y", this person has stated only that the telling took place; in making these two statements, the person has not represented anyone inconsistently. In addition, if two people are talking to each other, say Paul and Mary, and Paul tells Mary "John is five feet tall" while Mary rejects Paul's statement by saying "No, he is actually six feet tall", the socially constructed model of John does not become inconsistent. The reason is that statements are to be understood as an attempt to convince the addressee of something (Austin's How to Do Things with Words), or alternatively as a request to add some attribute to the model. The response to a statement can be an acknowledgement, in which case the model is changed, or a rejection, in which case the model does not get changed. Finally, the example in which John is said to be "five feet tall" or "six feet tall" is incompatible only because John can be only a single number of feet tall. If the attribute were a possession, as in "he has a dog" or "he also has a cat", a model inconsistency would not arise. In other words, the issue of model inconsistency has to do with our model of the domain element (John), not with the ascription of different range elements (measurements such as "five feet tall" or "six feet tall").
|
https://en.wikipedia.org/wiki/Reification_(knowledge_representation)
|
"No Silver Bullet—Essence and Accident in Software Engineering" is a widely discussed paper onsoftware engineeringwritten byTuring AwardwinnerFred Brooksin 1986.[1]Brooks argues that "there is no single development, in either technology or management technique, which by itself promises even oneorder of magnitude[tenfold] improvement within a decade in productivity, in reliability, in simplicity." He also states that "we cannot expect ever to see two-fold gains every two years" in software development, as there is in hardware development (Moore's law).
Brooks distinguishes between two different types of complexity: accidental complexity and essential complexity. This is related toAristotle'sclassification. Accidental complexity relates to problems that engineers create and can fix. For example, modernprogramming languageshave abstracted away the details of writing and optimizingassembly languagesource codeand eliminated the delays caused bybatch processing, though other sources of accidental complexity remain. Essential complexity is caused by the problem to be solved, and nothing can remove it; if users want a program to do 30 different things, then those 30 things are essential and the program must do those 30 different things.
Brooks claims that accidental complexity has decreased substantially, and today's programmers spend most of their time addressing essential complexity. Brooks argues that this means shrinking all the accidental activities to zero will not give the same order-of-magnitude improvement as attempting to decrease essential complexity. While Brooks insists that there is no onesilver bullet, he believes that a series of innovations attacking essential complexity could lead to significant improvements. One technology that had made significant improvement in the area of accidental complexity was the invention ofhigh-level programming languages, such asAda.[1]
Brooks advocates "growing" software organically through incremental development. He suggests devising and implementing the main and subprograms right at the beginning, filling in the working sub-sections later. He believes thatcomputer programmingthis way excites the engineers and provides a working system at every stage of development.
Brooks goes on to argue that there is a difference between "good" designers and "great" designers. He postulates that as programming is a creative process, some designers are inherently better than others. He suggests that there is as much as a tenfold difference between an ordinary designer and a great one. He then advocates treating star designers equally well as star managers, providing them not just with equalremuneration, but also all the perks of higher status: large office, staff, travel funds, etc.
The article, and Brooks's later reflections on it, "'No Silver Bullet' Refired", can be found in the anniversary edition ofThe Mythical Man-Month.[2]
Brooks's paper has sometimes been cited in connection withWirth's law, to argue that "software systems grow faster in size and complexity than methods to handle complexity are invented."[3]
|
https://en.wikipedia.org/wiki/Accidental_complexity
|
A letter bank is a relative of the anagram where all the letters of one word (the "bank") can be used as many times as desired (minimum of once each) to make a new word or phrase. For example, IMPS is a bank of MISSISSIPPI and SPROUT is a bank of SUPPORT OUR TROOPS. As a convention, the bank should have no repeated letters within itself.
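The defining condition is easy to state in code; a small Python sketch:

```python
def is_bank_of(bank: str, phrase: str) -> bool:
    """True if `bank` is a letter bank of `phrase`: the bank has no repeated
    letters and the phrase uses exactly the bank's letters, each at least once."""
    bank_letters = set(bank.lower())
    phrase_letters = {c for c in phrase.lower() if c.isalpha()}
    return len(bank_letters) == len(bank) and bank_letters == phrase_letters

print(is_bank_of("IMPS", "MISSISSIPPI"))           # True
print(is_bank_of("SPROUT", "SUPPORT OUR TROOPS"))  # True
print(is_bank_of("IMPS", "MISSOURI"))              # False
```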
The term was coined by Will Shortz, whose first letter bank (BLUME -> BUMBLEBEE) appeared in his 1979 book Brain Games. In 1980, Shortz introduced letter banks to the National Puzzlers' League (of which he is the historian), in the form of a contest puzzle. In 1981, the letter bank was announced as an official puzzle type in the NPL's magazine The Enigma.[1]
Letter banks are the basis for the word game Alpha Blitz.
|
https://en.wikipedia.org/wiki/Letter_bank
|
In theoretical computer science, the circuit satisfiability problem (also known as CIRCUIT-SAT, CircuitSAT, CSAT, etc.) is the decision problem of determining whether a given Boolean circuit has an assignment of its inputs that makes the output true.[1] In other words, it asks whether the inputs to a given Boolean circuit can be consistently set to 1 or 0 such that the circuit outputs 1. If that is the case, the circuit is called satisfiable; otherwise, the circuit is called unsatisfiable.
CircuitSAT is closely related to the Boolean satisfiability problem (SAT), and likewise has been proven to be NP-complete.[2] It is a prototypical NP-complete problem; the Cook–Levin theorem is sometimes proved on CircuitSAT instead of on SAT, and then CircuitSAT can be reduced to the other satisfiability problems to prove their NP-completeness.[1][3] The satisfiability of a circuit containing $m$ arbitrary binary gates can be decided in time $O(2^{0.4058m})$.[4]
Given a circuit and a satisfying set of inputs, one can compute the output of each gate in constant time. Hence, the output of the circuit is verifiable in polynomial time, so CircuitSAT belongs to the complexity class NP. To show NP-hardness, it is possible to construct a reduction from 3SAT to CircuitSAT.
Suppose the original 3SAT formula has variables $x_1, x_2, \dots, x_n$ and operators (AND, OR, NOT) $y_1, y_2, \dots, y_k$. Design a circuit such that it has an input corresponding to every variable and a gate corresponding to every operator. Connect the gates according to the 3SAT formula. For instance, if the 3SAT formula is $(\lnot x_1 \land x_2) \lor x_3$, the circuit will have 3 inputs, one AND gate, one OR gate, and one NOT gate. The input corresponding to $x_1$ will be inverted before being sent to an AND gate with $x_2$, and the output of the AND gate will be sent to an OR gate with $x_3$.
Notice that the 3SAT formula is equivalent to the circuit designed above, hence their outputs are the same for the same input. Hence, if the 3SAT formula has a satisfying assignment, then the corresponding circuit will output 1, and vice versa. So this is a valid reduction, and CircuitSAT is NP-hard.
This completes the proof that CircuitSAT is NP-complete.
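A tiny sketch of this construction in Python, building the example circuit gate by gate and brute-forcing all input assignments:

```python
from itertools import product

def circuit(x1: bool, x2: bool, x3: bool) -> bool:
    """Circuit for the example formula (NOT x1 AND x2) OR x3."""
    not_gate = not x1
    and_gate = not_gate and x2
    return and_gate or x3

# CircuitSAT by exhaustive search over the 2^3 input assignments.
satisfying = [bits for bits in product([False, True], repeat=3) if circuit(*bits)]
print(satisfying[0])  # (False, False, True): the circuit is satisfiable
```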
Assume that we are given a planar Boolean circuit (i.e., a Boolean circuit whose underlying graph is planar) containing only NAND gates with exactly two inputs. Planar CircuitSAT is the decision problem of determining whether this circuit has an assignment of its inputs that makes the output true. This problem is NP-complete. Moreover, if the restrictions are changed so that any gate in the circuit is a NOR gate, the resulting problem remains NP-complete.[5]
Circuit UNSAT is the decision problem of determining whether a given Boolean circuit outputs false for all possible assignments of its inputs. This is the complement of the CircuitSAT problem, and is therefore co-NP-complete.
Reduction from CircuitSAT or its variants can be used to show the NP-hardness of certain problems, and provides us with an alternative to dual-rail and binary logic reductions. The gadgets that such a reduction needs to construct are:
This problem asks whether it is possible to locate all the bombs given a Minesweeper board. It has been proven to be co-NP-complete via a reduction from the Circuit UNSAT problem.[6] The gadgets constructed for this reduction are: wire, split, AND and NOT gates, and terminator.[7] There are three crucial observations regarding these gadgets. First, the split gadget can also be used as the NOT gadget and the turn gadget. Second, constructing AND and NOT gadgets is sufficient, because together they can simulate the universal NAND gate. Finally, since three NANDs can be composed intersection-free to implement an XOR, and since XOR is enough to build a crossover,[8] this gives us the needed crossover gadget.
The Tseytin transformation is a straightforward reduction from CircuitSAT to SAT. The transformation is easy to describe if the circuit is wholly constructed out of 2-input NAND gates (a functionally-complete set of Boolean operators): assign every net in the circuit a variable, then for each NAND gate, construct the conjunctive normal form clauses (v1 ∨ v3) ∧ (v2 ∨ v3) ∧ (¬v1 ∨ ¬v2 ∨ ¬v3), where v1 and v2 are the inputs to the NAND gate and v3 is the output. These clauses completely describe the relationship between the three variables. Conjoining the clauses from all the gates with an additional clause constraining the circuit's output variable to be true completes the reduction; an assignment of the variables satisfying all of the constraints exists if and only if the original circuit is satisfiable, and any solution is a solution to the original problem of finding inputs that make the circuit output 1.[1][9] The converse, that SAT is reducible to CircuitSAT, follows trivially by rewriting the Boolean formula as a circuit and solving it.
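A compact sketch of the transformation for a NAND-only circuit, in Python (nets are numbered with positive integers and a negative literal denotes negation, as in the DIMACS CNF convention):

```python
def tseytin_nand(gates, output_var):
    """Given NAND gates as (in1, in2, out) triples over integer net variables,
    return CNF clauses (lists of literals) asserting 'the circuit outputs 1'."""
    clauses = []
    for v1, v2, v3 in gates:
        # The three clauses encoding v3 <-> NAND(v1, v2).
        clauses.append([v1, v3])
        clauses.append([v2, v3])
        clauses.append([-v1, -v2, -v3])
    clauses.append([output_var])  # constrain the circuit's output to true
    return clauses

# Example: net 3 = NAND(1, 2), net 4 = NAND(3, 3), i.e. net 4 = AND(1, 2).
print(tseytin_nand([(1, 2, 3), (3, 3, 4)], output_var=4))
```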
|
https://en.wikipedia.org/wiki/Circuit_satisfiability
|
A solved game is a game whose outcome (win, lose or draw) can be correctly predicted from any position, assuming that both players play perfectly. This concept is usually applied to abstract strategy games, and especially to games with full information and no element of chance; solving such a game may use combinatorial game theory or computer assistance.
A two-player game can be solved on several levels:[1][2]
Despite their name, many game theorists believe that "ultra-weak" proofs are the deepest, most interesting and valuable. "Ultra-weak" proofs require a scholar to reason about the abstract properties of the game, and show how these properties lead to certain outcomes if perfect play is realized.
By contrast, "strong" proofs often proceed by brute force, using a computer to exhaustively search a game tree to figure out what would happen if perfect play were realized. The resulting proof gives an optimal strategy for every possible position on the board. However, these proofs are not as helpful in understanding the deeper reasons why some games are solvable as a draw, while other, seemingly very similar games are solvable as a win.
Given the rules of any two-person game with a finite number of positions, one can always trivially construct a minimax algorithm that would exhaustively traverse the game tree. However, since for many non-trivial games such an algorithm would require an infeasible amount of time to generate a move in a given position, a game is not considered to be solved weakly or strongly unless the algorithm can be run by existing hardware in a reasonable time. Many algorithms rely on a huge pre-generated database and are effectively nothing more than database lookups.
As a simple example of a strong solution, the game of tic-tac-toe is easily solvable as a draw for both players with perfect play (a result manually determinable). Games like nim also admit a rigorous analysis using combinatorial game theory.
Whether a game is solved is not necessarily the same as whether it remains interesting for humans to play. Even a strongly solved game can still be interesting if its solution is too complex to be memorized; conversely, a weakly solved game may lose its attraction if the winning strategy is simple enough to remember (e.g., Maharajah and the Sepoys). An ultra-weak solution (e.g., Chomp or Hex on a sufficiently large board) generally does not affect playability.
In game theory, perfect play is the behavior or strategy of a player that leads to the best possible outcome for that player regardless of the response by the opponent. Perfect play for a game is known when the game is solved.[1] Based on the rules of a game, every possible final position can be evaluated (as a win, loss or draw). By backward reasoning, one can recursively evaluate a non-final position as identical to the position that is one move away and best valued for the player whose move it is. Thus a transition between positions can never result in a better evaluation for the moving player, and a perfect move in a position would be a transition between positions that are equally evaluated. As an example, a perfect player in a drawn position would always get a draw or win, never a loss. If there are multiple options with the same outcome, perfect play is sometimes considered the fastest method leading to a good result, or the slowest method leading to a bad result.
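The backward reasoning described above can be sketched in a few lines of Python for a toy subtraction game (players alternately take 1, 2 or 3 from a pile; a player with no move loses):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def is_win(pile: int) -> bool:
    """Backward induction: a position is a win for the player to move
    if some move leads to a position that is a loss for the opponent."""
    return any(not is_win(pile - take) for take in (1, 2, 3) if take <= pile)

# The losing positions are exactly the multiples of 4.
print([pile for pile in range(13) if not is_win(pile)])  # [0, 4, 8, 12]
```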
Perfect play can be generalized to non-perfect information games, as the strategy that would guarantee the highest minimal expected outcome regardless of the strategy of the opponent. As an example, the perfect strategy for rock paper scissors would be to randomly choose each of the options with equal (1/3) probability. The disadvantage in this example is that this strategy will never exploit non-optimal strategies of the opponent, so the expected outcome of this strategy versus any strategy will always be equal to the minimal expected outcome.
Although the optimal strategy of a game may not (yet) be known, a game-playing computer might still benefit from solutions of the game from certain endgame positions (in the form of endgame tablebases), which will allow it to play perfectly after some point in the game. Computer chess programs are well known for doing this.
|
https://en.wikipedia.org/wiki/Solved_game
|
Java code coverage tools are of two types: first, tools that add statements to the Java source code and require its recompilation; second, tools that instrument the bytecode, either before or during execution. The goal is to find out which parts of the code are tested by registering the lines of code executed when running a test.
JaCoCo is an open-source toolkit for measuring and reporting Java code coverage. JaCoCo is distributed under the terms of the Eclipse Public License. It was developed as a replacement for EMMA,[1] under the umbrella of the EclEmma plug-in for Eclipse.
JaCoCo offers instruction, line and branch coverage.
In contrast to Atlassian Clover and OpenClover, which require instrumenting the source code, JaCoCo can instrument Java bytecode using two different approaches:
It can be configured to store the collected data in a file, or to send it via TCP. Files from multiple runs or code parts can be merged easily.[3] Unlike Cobertura and EMMA, it fully supports Java 7, Java 8,[4] Java 9, Java 10, Java 11, Java 12, Java 13, Java 14, Java 15, Java 16, Java 17, Java 18, Java 19 and Java 20.
JCov is a tool which has been developed and used with the Sun JDK (and later the Oracle JDK) from the very beginning of Java, starting with version 1.1. JCov is capable of measuring and reporting Java code coverage. JCov is distributed under the terms of the GNU General Public License (version 2, with the Classpath Exception). JCov became open-source as part of the OpenJDK code tools project in 2014.
JCov is capable of reporting the following types of code coverage:
JCov implements two different ways to save the collected data:
JCov works by instrumenting Java bytecode using two different approaches:
JCov has a few more distinctive features which include, but are not limited to:
OpenClover is a free and open-source successor of Atlassian Clover, created as a fork from the Clover code base published by Atlassian in 2017. It contains all the features of the original Clover (the server edition). The OpenClover project is led by developers who maintained Clover in the years 2012–2017.[15]
OpenClover uses a source code instrumentation technique and handles the Java, Groovy and AspectJ languages. Some of its features include: fine control over the scope of coverage measurement, test optimisation and sophisticated reports.
OpenClover integrates with Ant, Maven, Gradle, Grails, Eclipse, IntelliJ IDEA, Bamboo, Jenkins, Hudson, Griffon, SonarQube and AspectJ.
IntelliJ IDEA Code Coverage Agent is a code coverage tool integrated in the IntelliJ IDEA IDE and the TeamCity CI server. It supports branch coverage and per-test coverage tracking.
Testwell CTC++ is a code coverage tool for C, C++, Java and C#. The development of this tool started in 1989 at Testwell in Finland. Since 2013 support and development have been continued by Verifysoft Technology, a company from Offenburg, Germany. Testwell CTC++ analyses all code coverage levels up to modified condition/decision coverage and multicondition coverage.[16] The tool works with all compilers.[17]
Clover is a Java code coverage analysis utility bought and further developed by Atlassian. In April 2017 Atlassian announced the end-of-life of Clover and at the same time open-sourced it under the Apache 2.0 license.
Clover uses a source code instrumentation technique (as opposed to Cobertura and JaCoCo, which use bytecode instrumentation), which has its advantages (such as the ability to collect code metrics) and disadvantages (re-compilation of sources is necessary).[18] Some of its features include historical reporting, extensive control over the coverage gathering process, a command line toolset and an API for legacy integration, and more.
Clover also allows testing time to be reduced by running only the tests that cover the application code that was modified since the previous build. This is called Test Optimization[19] and can lead to large reductions in the amount of time spent waiting for automated tests to complete.
Clover comes with a number of integrations, both developed by Atlassian (Ant, Maven, Grails, Eclipse, IDEA, Bamboo) and by the open source community (Gradle, Griffon, Jenkins, Hudson, Sonar).
In April 2017, Atlassian announced that they would no longer release new versions of Clover after version 4.1.2, and its code was made available as open-source software hosted on Bitbucket.[20][21]
Cobertura is an open-source tool for measuring code coverage. It does so by instrumenting the bytecode. It was a predecessor to JaCoCo.
EMMA is an open-source toolkit for measuring and reporting Java code coverage. EMMA is distributed under the terms of the Common Public License v1.0.
EMMA is not currently under active development; the last stable release took place in mid-2005. JaCoCo was developed as its replacement.[22] EMMA works by wrapping each line of code and each condition with a flag, which is set when that line is executed.[23]
Serenity is an open-source tool for creating better-automated software acceptance tests in less time. It also measures and reports Java code coverage, and generates easy-to-understand reports that describe what the application does and how it works, including which tests were run and what requirements were met. It works with Selenium WebDriver, Appium, and BDD tools.
Major code metrics such as cyclomatic complexity, stability, abstractness, and distance from main are measured. The report data is persisted to an object database and made available via Jenkins/Hudson. The interface visually replicates the Eclipse IDE interface.
Serenity dynamically enhances the bytecode, making a post-compile step unnecessary. Ant and Maven projects are supported. Configuration is done in XML; an Ant example would be:
And a Maven configuration example would be:
For a full example of a configuration, please refer to the Jenkins wiki at https://wiki.jenkins-ci.org/display/JENKINS/Serenity+Plugin.
Jenkins slaves as well as Maven multi-module projects are supported.
|
https://en.wikipedia.org/wiki/Java_code_coverage_tools
|
Local rigidity theorems in the theory of discrete subgroups of Lie groups are results which show that small deformations of certain such subgroups are always trivial. It is different from Mostow rigidity and weaker (but holds more frequently) than superrigidity.
The first such theorem was proven by Atle Selberg for co-compact discrete subgroups of the unimodular groups $\mathrm{SL}_n(\mathbb{R})$.[1] Shortly afterwards a similar statement was proven by Eugenio Calabi in the setting of fundamental groups of compact hyperbolic manifolds. Finally, the theorem was extended to all co-compact subgroups of semisimple Lie groups by André Weil.[2][3] The extension to non-cocompact lattices was made later by Howard Garland and Madabusi Santanam Raghunathan.[4] The result is now sometimes referred to as Calabi–Weil (or just Weil) rigidity.
Let $\Gamma$ be a group generated by a finite number of elements $g_1, \ldots, g_n$ and let $G$ be a Lie group. Then the map $\mathrm{Hom}(\Gamma, G) \to G^n$ defined by $\rho \mapsto (\rho(g_1), \ldots, \rho(g_n))$ is injective, and this endows $\mathrm{Hom}(\Gamma, G)$ with a topology induced by that of $G^n$. If $\Gamma$ is a subgroup of $G$, then a deformation of $\Gamma$ is any element in $\mathrm{Hom}(\Gamma, G)$. Two representations $\phi, \psi$ are said to be conjugated if there exists a $g \in G$ such that $\phi(\gamma) = g\psi(\gamma)g^{-1}$ for all $\gamma \in \Gamma$. See also character variety.
The simplest statement is when $\Gamma$ is a lattice in a simple Lie group $G$ and the latter is not locally isomorphic to $\mathrm{SL}_2(\mathbb{R})$ or $\mathrm{SL}_2(\mathbb{C})$ (this means that its Lie algebra is not that of one of these two groups).
Whenever such a statement holds for a pair $G \supset \Gamma$ we will say that local rigidity holds.
Local rigidity holds for cocompact lattices in $\mathrm{SL}_2(\mathbb{C})$. A lattice $\Gamma$ in $\mathrm{SL}_2(\mathbb{C})$ which is not cocompact has nontrivial deformations coming from Thurston's hyperbolic Dehn surgery theory. However, if one adds the restriction that a representation must send parabolic elements in $\Gamma$ to parabolic elements, then local rigidity holds.
In this case local rigidity never holds (except for cocompact triangle groups). For cocompact lattices a small deformation remains a cocompact lattice, but it may not be conjugated to the original one (see Teichmüller space for more detail). Non-cocompact lattices are virtually free and hence have non-lattice deformations.
Local rigidity holds for lattices in semisimple Lie groups provided the latter have no factor of type A1 (i.e. locally isomorphic to $\mathrm{SL}_2(\mathbb{R})$ or $\mathrm{SL}_2(\mathbb{C})$) or that the former is irreducible.
There are also local rigidity results where the ambient group is changed, even in cases where superrigidity fails. For example, if $\Gamma$ is a lattice in the unitary group $\mathrm{SU}(n,1)$ and $n \geq 2$, then the inclusion $\Gamma \subset \mathrm{SU}(n,1) \subset \mathrm{SU}(n+1,1)$ is locally rigid.[5]
A uniform lattice $\Gamma$ in any compactly generated topological group $G$ is topologically locally rigid, in the sense that any sufficiently small deformation $\varphi$ of the inclusion $i : \Gamma \subset G$ is injective and $\varphi(\Gamma)$ is a uniform lattice in $G$. An irreducible uniform lattice in the isometry group of any proper geodesically complete $\mathrm{CAT}(0)$-space not isometric to the hyperbolic plane and without Euclidean factors is locally rigid.[6]
Weil's original proof works by relating deformations of a subgroup $\Gamma$ in $G$ to the first cohomology group of $\Gamma$ with coefficients in the Lie algebra of $G$, and then showing that this cohomology vanishes for cocompact lattices when $G$ has no simple factor of absolute type A1. A more geometric proof, which also works in the non-compact cases, uses Charles Ehresmann's (and William Thurston's) theory of $(G,X)$-structures.[7]
|
https://en.wikipedia.org/wiki/Local_rigidity
|
In mathematics, a factorisation of a free monoid is a sequence of subsets of words with the property that every word in the free monoid can be written as a concatenation of elements drawn from the subsets. The Chen–Fox–Lyndon theorem states that the Lyndon words furnish a factorisation. The Schützenberger theorem relates the definition in terms of a multiplicative property to an additive property.
Let $A^*$ be the free monoid on an alphabet $A$. Let $X_i$ be a sequence of subsets of $A^*$ indexed by a totally ordered index set $I$. A factorisation of a word $w$ in $A^*$ is an expression $w = x_{i_1} x_{i_2} \cdots x_{i_n}$ with $x_{i_j} \in X_{i_j}$ and $i_1 \geq i_2 \geq \ldots \geq i_n$. Some authors reverse the order of the inequalities.
A Lyndon word over a totally ordered alphabet $A$ is a word that is lexicographically less than all its rotations.[1] The Chen–Fox–Lyndon theorem states that every string may be formed in a unique way by concatenating a lexicographically non-increasing sequence of Lyndon words. Hence taking $X_l$ to be the singleton set $\{l\}$ for each Lyndon word $l$, with the index set $L$ of Lyndon words ordered lexicographically, we obtain a factorisation of $A^*$.[2] Such a factorisation can be found in linear time and constant space by Duval's algorithm.[3] The algorithm[4] in Python code is:
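The listing itself is not preserved in this copy; the standard formulation of Duval's algorithm is:

```python
def chen_fox_lyndon_factorization(s):
    """Factor s into a lexicographically non-increasing sequence of Lyndon words,
    in linear time and constant auxiliary space (Duval's algorithm)."""
    factorization = []
    k = 0
    while k < len(s):
        i, j = k, k + 1
        while j < len(s) and s[i] <= s[j]:
            i = i + 1 if s[i] == s[j] else k  # extend or restart the candidate word
            j += 1
        while k <= i:
            factorization.append(s[k:k + j - i])
            k += j - i
    return factorization

print(chen_fox_lyndon_factorization("bananas"))  # ['b', 'ananas']
```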
The Hall set provides a factorisation.[5] Indeed, Lyndon words are a special case of Hall words. The article on Hall words provides a sketch of all of the mechanisms needed to establish a proof of this factorisation.
A bisection of a free monoid is a factorisation with just two classes $X_0, X_1$.[6]
Examples:
If $X$, $Y$ are disjoint sets of non-empty words, then $(X, Y)$ is a bisection of $A^*$ if and only if[7]
As a consequence, for any partition $P, Q$ of $A^+$ there is a unique bisection $(X, Y)$ with $X$ a subset of $P$ and $Y$ a subset of $Q$.[8]
This theorem states that a sequence $X_i$ of subsets of $A^*$ forms a factorisation if and only if two of the following three statements hold:
|
https://en.wikipedia.org/wiki/Monoid_factorisation
|
Physician–patient privilege is a legal concept, related to medical confidentiality, that protects communications between a patient and their doctor from being used against the patient in court. It is a part of the rules of evidence in many common law jurisdictions. Almost every jurisdiction that recognizes physician–patient privilege not to testify in court, either by statute or through case law, limits the privilege to knowledge acquired during the course of providing medical services. In some jurisdictions, conversations between a patient and physician may be privileged in both criminal and civil courts.
The privilege may cover the situation where a patient confesses to a psychiatrist that they committed a particular crime. It may also cover normal inquiries regarding matters such as injuries that may result in civil action. For example, any defendant that the patient may be suing at the time cannot ask the doctor if the patient ever expressed the belief that their condition had improved. However, the rule generally does not apply to confidences shared with physicians when they are not serving in the role of medical providers.
The reasoning behind the rule is that a level of trust must exist in the doctor–patient relationship so that the physician can properly treat the patient. If the patient were fearful of telling the truth to the physician because they believed the physician would report such behavior to the authorities, the treatment process could be rendered far more difficult, or the physician could make an incorrect diagnosis.
For example, suppose a patient below the age of consent comes to a doctor with a sexually transmitted disease. The doctor is usually required to obtain a list of the patient's sexual contacts to inform them that they need treatment. This is an important health concern. However, the patient may be reluctant to divulge the names of their older sexual partners, for fear that those partners will be charged with statutory rape. In some jurisdictions, the doctor cannot be forced to reveal the information disclosed by their patient to anyone except particular organizations, as specified by law, and they too are required to keep that information confidential. If the police become aware of such information, they are not allowed to use it in court as proof of the sexual misconduct, except as provided by the express intent of the legislative body and formalized into law.[1]
The law in Ontario, Canada, requires that physicians report patients who, in the opinion of the physician, may be unfit to drive for medical reasons, as per Section 203 of the Highway Traffic Act.[2]
The law in New Hampshire places physician–patient communications on the same basis as attorney–client communications, except in cases where law enforcement officers seek blood or urine test samples and test results taken from a patient who is being investigated for driving while intoxicated.[3]
In the United States, the Federal Rules of Evidence do not recognize doctor–patient privilege.
At the state level, the extent of the privilege varies depending on the law of the applicable jurisdiction. For example, in Texas there is only a limited physician–patient privilege in criminal proceedings, and the privilege is limited in civil cases as well.[4]
In New South Wales, Australia, a privilege exists for "communication made by a person in confidence to another person .... in the course of a relationship in which the confidant was acting in a professional capacity".[5] This is often interpreted as applying between a health professional and their patient.
In some jurisdictions in Australia privilege may also extend to lawyers,[6] some victims,[7] journalists (shield laws),[8] and priests.[9] It may also be invoked in the public interest,[10] or in settlement negotiations,[11] which may also be privileged.[12]
|
https://en.wikipedia.org/wiki/Physician%E2%80%93patient_privilege
|
JIT spraying is a class of computer security exploit that circumvents the protection of address space layout randomization and data execution prevention by exploiting the behavior of just-in-time compilation.[1] It has been used to exploit the PDF format[2] and Adobe Flash.[3]
A just-in-time compiler (JIT) by definition produces code as its data. Since the purpose is to produce executable data, a JIT compiler is one of the few types of programs that cannot be run in a no-executable-data environment. Because of this, JIT compilers are normally exempt from data execution prevention. A JIT spray attack does heap spraying with the generated code.
To produce exploit code from the JIT, an idea from Dion Blazakis[4] is used. The input program, usually JavaScript or ActionScript, typically contains numerous constant values that can be erroneously executed as code. For example, the XOR operation could be used:[5]
The JIT will then transform the bytecode to native x86 code like:
The attacker then uses a suitable bug to redirect code execution into the newly generated code. For example, a buffer overflow or use-after-free bug could allow the attack to modify a function pointer or return address.
This causes the CPU to execute instructions in a way that was unintended by the JIT authors. The attacker is usually not even limited to the expected instruction boundaries; it is possible to jump into the middle of an intended instruction to have the CPU interpret it as something else. As with non-JIT ROP attacks, this may provide enough operations to usefully take control of the computer. Continuing the above example, jumping to the second byte of the "mov" instruction results in an "inc" instruction:
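The elided listing is not reproduced here, but the overlapping-decoding trick it illustrated can be reproduced with the Capstone disassembler (assuming its Python bindings are installed; the constant 0x41414141 is an illustrative attacker-chosen value):

```python
from capstone import Cs, CS_ARCH_X86, CS_MODE_32

code = b"\xb8\x41\x41\x41\x41"  # mov eax, 0x41414141 (immediate chosen by attacker)
md = Cs(CS_ARCH_X86, CS_MODE_32)

for start in (0, 1):  # decode at the intended boundary, then one byte in
    ops = "; ".join(f"{ins.mnemonic} {ins.op_str}".strip()
                    for ins in md.disasm(code[start:], 0x1000))
    print(f"offset {start}: {ops}")

# offset 0: mov eax, 0x41414141
# offset 1: inc ecx; inc ecx; inc ecx; inc ecx
```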
x86 and x86-64 allow jumping into the middle of an instruction; fixed-length architectures such as ARM do not.
To protect against JIT spraying, the JIT code can be disabled or made less predictable for the attacker.[4]
|
https://en.wikipedia.org/wiki/JIT_spraying
|
Conservation biology is the study of the conservation of nature and of Earth's biodiversity with the aim of protecting species, their habitats, and ecosystems from excessive rates of extinction and the erosion of biotic interactions.[1][2][3] It is an interdisciplinary subject drawing on the natural and social sciences, and on the practice of natural resource management.[4][5][6][7]
The conservation ethic is based on the findings of conservation biology.
The term conservation biology and its conception as a new field originated with the convening of "The First International Conference on Research in Conservation Biology", held at the University of California, San Diego in La Jolla, California, in 1978 and led by American biologists Bruce A. Wilcox and Michael E. Soulé with a group of leading university and zoo researchers and conservationists including Kurt Benirschke, Sir Otto Frankel, Thomas Lovejoy, and Jared Diamond. The meeting was prompted by concern over tropical deforestation, disappearing species, and eroding genetic diversity within species.[8] The conference and the proceedings that resulted[2] sought to initiate the bridging of the gap between theory in ecology and evolutionary genetics on the one hand and conservation policy and practice on the other.[9]
Conservation biology and the concept of biological diversity (biodiversity) emerged together, helping crystallize the modern era of conservation science and policy.[10] The inherently multidisciplinary basis of conservation biology has led to new subdisciplines including conservation social science, conservation behavior and conservation physiology.[11] It stimulated further development of conservation genetics, which Otto Frankel had originated first but which is now often considered a subdiscipline as well.[12]
The rapid decline of established biological systems around the world means that conservation biology is often referred to as a "discipline with a deadline".[13] Conservation biology is tied closely to ecology in researching the population ecology (dispersal, migration, demographics, effective population size, inbreeding depression, and minimum population viability) of rare or endangered species.[14][15] Conservation biology is concerned with phenomena that affect the maintenance, loss, and restoration of biodiversity and the science of sustaining evolutionary processes that engender genetic, population, species, and ecosystem diversity.[5][6][7][15] The concern stems from estimates suggesting that up to 50% of all species on the planet will disappear within the next 50 years,[16] which will increase poverty and starvation and reset the course of evolution on this planet.[17][18] Researchers acknowledge that projections are difficult, given the unknown potential impacts of many variables, including species introduction to new biogeographical settings and a non-analog climate.[19]
Conservation biologists research and educate on the trends and processes of biodiversity loss, species extinctions, and the negative effects these are having on our capability to sustain the well-being of human society. Conservation biologists work in the field and office, in government, universities, non-profit organizations and industry. The topics of their research are diverse, because conservation biology is an interdisciplinary network with professional alliances in the biological as well as the social sciences. Those dedicated to the cause and profession advocate for a global response to the current biodiversity crisis based on morals, ethics, and scientific reason. Organizations and citizens are responding to the biodiversity crisis through conservation action plans that direct research, monitoring, and education programs engaging concerns from local to global scales.[4][5][6][7] There is increasing recognition that conservation is not just about what is achieved but how it is done.[20]
The conservation of natural resources is the fundamental problem. Unless we solve that problem, it will avail us little to solve all others.
Conscious efforts to conserve and protect global biodiversity are a recent phenomenon.[7][22] Natural resource conservation, however, has a history that extends prior to the age of conservation. Resource ethics grew out of necessity through direct relations with nature. Regulation or communal restraint became necessary to prevent selfish motives from taking more than could be locally sustained, thereby compromising the long-term supply for the rest of the community.[7] This social dilemma with respect to natural resource management is often called the "tragedy of the commons".[23][24]
From this principle, conservation biologists can trace communal-resource-based ethics throughout cultures as a solution to communal resource conflict.[7] For example, the Alaskan Tlingit peoples and the Haida of the Pacific Northwest had resource boundaries, rules, and restrictions among clans with respect to the fishing of sockeye salmon. These rules were guided by clan elders who knew lifelong details of each river and stream they managed.[7][25] There are numerous examples in history where cultures have followed rules, rituals, and organized practice with respect to communal natural resource management.[26][27]
Around 250 BC the Mauryan emperor Ashoka issued edicts restricting the slaughter of animals and certain kinds of birds, and opened veterinary clinics.[citation needed]
Conservation ethics are also found in early religious and philosophical writings. There are examples in the Tao, Shinto, Hindu, Islamic, and Buddhist traditions.[7][28] In Greek philosophy, Plato lamented pastureland degradation: "What is left now is, so to say, the skeleton of a body wasted by disease; the rich, soft soil has been carried off and only the bare framework of the district left."[29] In the Bible, through Moses, God commanded to let the land rest from cultivation every seventh year.[7][30] Before the 18th century, however, much of European culture considered it a pagan view to admire nature. Wilderness was denigrated while agricultural development was praised.[31] However, as early as AD 680 a wildlife sanctuary was founded on the Farne Islands by St Cuthbert in response to his religious beliefs.[7]
Natural history was a major preoccupation in the 18th century, with grand expeditions and the opening of popular public displays in Europe and North America. By 1900 there were 150 natural history museums in Germany, 250 in Great Britain, 250 in the United States, and 300 in France.[32] Preservationist and conservationist sentiments are a development of the late 18th to early 20th centuries.
Before Charles Darwin set sail on HMS Beagle, most people in the world, including Darwin, believed in special creation and held that all species were unchanged.[33] Georges-Louis Leclerc was one of the first naturalists to question this belief, proposing in his 44-volume natural history that species evolve due to environmental influences.[33] Erasmus Darwin was another naturalist who suggested that species evolved. He noted that some species have vestigial structures: anatomical structures with no apparent function in the species at present, but which would have been useful to the species' ancestors.[33] The thinking of these 18th-century naturalists helped to change the mindset of early 19th-century naturalists.
By the early 19th century, biogeography was ignited through the efforts of Alexander von Humboldt, Charles Lyell, and Charles Darwin.[34] The 19th-century fascination with natural history engendered a fervor to be the first to collect rare specimens before they were driven extinct by other such collectors.[31][32] Although the work of many 18th- and 19th-century naturalists inspired nature enthusiasts and conservation organizations, their writings, by modern standards, showed insensitivity towards conservation, as they would kill hundreds of specimens for their collections.[32]
The modern roots of conservation biology can be found in the late 18th-century Enlightenment period, particularly in England and Scotland.[31][35] Thinkers including Lord Monboddo described the importance of "preserving nature"; much of this early emphasis had its origins in Christian theology.[35]
Scientific conservation principles were first practically applied to the forests of British India. The conservation ethic that began to evolve included three core principles: that human activity damaged the environment, that there was a civic duty to maintain the environment for future generations, and that scientific, empirically based methods should be applied to ensure this duty was carried out. Sir James Ranald Martin was prominent in promoting this ideology, publishing many medico-topographical reports that demonstrated the scale of damage wrought through large-scale deforestation and desiccation, and lobbying extensively for the institutionalization of forest conservation activities in British India through the establishment of Forest Departments.[36]
The Madras Board of Revenue started local conservation efforts in 1842, headed by Alexander Gibson, a professional botanist who systematically adopted a forest conservation program based on scientific principles. This was the first case of state conservation management of forests in the world.[37] Governor-General Lord Dalhousie introduced the first permanent and large-scale forest conservation program in the world in 1855, a model that soon spread to other colonies, as well as to the United States,[38][39][40] where Yellowstone National Park was opened in 1872 as the world's first national park.[41][page needed]
The term conservation came into widespread use in the late 19th century and referred to the management, mainly for economic reasons, of such natural resources as timber, fish, game, topsoil, pastureland, and minerals. In addition it referred to the preservation of forests (forestry), wildlife (wildlife refuges), parkland, wilderness, and watersheds. This period also saw the passage of the first conservation legislation and the establishment of the first nature conservation societies. The Sea Birds Preservation Act of 1869 was passed in Britain as the first nature protection law in the world,[42] after extensive lobbying from the Association for the Protection of Seabirds[43] and the respected ornithologist Alfred Newton.[44] Newton was also instrumental in the passage of the first Game laws from 1872, which protected animals during their breeding season so as to prevent the stock from being brought close to extinction.[45]
One of the first conservation societies was the Royal Society for the Protection of Birds, founded in 1889 in Manchester[46] as a protest group campaigning against the use of great crested grebe and kittiwake skins and feathers in fur clothing. Originally known as "the Plumage League",[47] the group gained popularity and eventually amalgamated with the Fur and Feather League in Croydon to form the RSPB.[48] The National Trust formed in 1895 with the manifesto to "...promote the permanent preservation, for the benefit of the nation, of lands, ... to preserve (so far practicable) their natural aspect." In May 1912, a month after the Titanic sank, banker and expert naturalist Charles Rothschild held a meeting at the Natural History Museum in London to discuss his idea for a new organisation to save the best places for wildlife in the British Isles. This meeting led to the formation of the Society for the Promotion of Nature Reserves, which later became the Wildlife Trusts.[citation needed]
In the United States, the Forest Reserve Act of 1891 gave the President power to set aside forest reserves from the land in the public domain. John Muir founded the Sierra Club in 1892, and the New York Zoological Society was set up in 1895. A series of national forests and preserves were established by Theodore Roosevelt from 1901 to 1909.[50][51] The 1916 National Parks Act included a "use without impairment" clause, sought by John Muir, which eventually resulted in the removal of a proposal to build a dam in Dinosaur National Monument in 1959.[52]
In the 20th century, Canadian civil servants, including Charles Gordon Hewitt[53] and James Harkin, spearheaded the movement toward wildlife conservation.[54][page needed]
In the 21st century, professional conservation officers have begun to collaborate with indigenous communities to protect wildlife in Canada.[55] Some conservation efforts have yet to fully take hold due to ecological neglect.[56][57][58] For example, in the USA, 21st-century bowfishing of native fishes, which amounts to killing wild animals for recreation and disposing of them immediately afterwards, remains unregulated and unmanaged.[49]
In the mid-20th century, efforts arose to target individual species for conservation, notably efforts in big cat conservation in South America led by the New York Zoological Society.[59] In the early 20th century the New York Zoological Society was instrumental in developing the concepts of establishing preserves for particular species and of conducting the conservation studies needed to determine which locations are most suitable as conservation priorities; the work of Henry Fairfield Osborn Jr., Carl E. Akeley, Archie Carr, and his son Archie Carr III is notable in this era.[60][61][62] Akeley, for example, having led expeditions to the Virunga Mountains and observed the mountain gorilla in the wild, became convinced that the species and the area were conservation priorities. He was instrumental in persuading Albert I of Belgium to act in defense of the mountain gorilla and establish Albert National Park (since renamed Virunga National Park) in what is now the Democratic Republic of Congo.[63]
By the 1970s, led primarily by work in the United States under the Endangered Species Act,[64] along with the Species at Risk Act (SARA) of Canada and Biodiversity Action Plans developed in Australia, Sweden, and the United Kingdom, hundreds of species-specific protection plans ensued. Notably, the United Nations acted to conserve sites of outstanding cultural or natural importance to the common heritage of mankind. The programme was adopted by the General Conference of UNESCO in 1972. As of 2006, a total of 830 sites were listed: 644 cultural and 162 natural. The first country to pursue aggressive biological conservation through national legislation was the United States, which passed back-to-back legislation in the Endangered Species Act[65] (1966) and the National Environmental Policy Act (1970),[66] which together injected major funding and protection measures into large-scale habitat protection and threatened-species research. Other conservation developments have also taken hold throughout the world. India, for example, passed the Wildlife Protection Act of 1972.[67]
In 1980, a significant development was the emergence of the urban conservation movement. A local organization was established in Birmingham, UK, a development followed in rapid succession in cities across the UK and then overseas. Although perceived as a grassroots movement, its early development was driven by academic research into urban wildlife. Initially perceived as radical, the movement's view of conservation as inextricably linked with other human activity has now become mainstream in conservation thought. Considerable research effort is now directed at urban conservation biology. The Society for Conservation Biology originated in 1985.[7]: 2
By 1992, most of the countries of the world had become committed to the principles of conservation of biological diversity through the Convention on Biological Diversity;[68] subsequently many countries began programmes of Biodiversity Action Plans to identify and conserve threatened species within their borders, as well as to protect associated habitats. The late 1990s saw increasing professionalism in the sector, with the maturing of organisations such as the Institute of Ecology and Environmental Management and the Society for the Environment.
Since 2000, the concept of landscape-scale conservation has risen to prominence, with less emphasis being given to single-species or even single-habitat focused actions. Instead an ecosystem approach is advocated by most mainstream conservationists, although concerns have been expressed by those working to protect some high-profile species.
Ecology has clarified the workings of the biosphere, i.e., the complex interrelationships among humans, other species, and the physical environment. The burgeoning human population and its associated agriculture and industry, and the ensuing pollution, have demonstrated how easily ecological relationships can be disrupted.[69]
The last word in ignorance is the man who says of an animal or plant: "What good is it?" If the land mechanism as a whole is good, then every part is good, whether we understand it or not. If the biota, in the course of aeons, has built something we like but do not understand, then who but a fool would discard seemingly useless parts? To keep every cog and wheel is the first precaution of intelligent tinkering.
Extinction rates are measured in a variety of ways. Conservation biologists apply statistical measures to fossil records,[1][70] rates of habitat loss, and a multitude of other variables, such as loss of biodiversity as a function of the rate of habitat loss and site occupancy,[71] to obtain such estimates.[72] The theory of island biogeography[73] is possibly the most significant contribution toward the scientific understanding of both the process of species extinction and how to measure its rate. The current background extinction rate is estimated to be one species every few years.[74] Actual extinction rates are estimated to be orders of magnitude higher.[75] No existing models, however, account for the complexity of unpredictable factors such as species movement, a non-analog climate, changing species interactions, evolutionary rates on finer time scales, and many other stochastic variables.[76][19]
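One widely used tool from island biogeography is the species–area relationship, which translates habitat loss into an expected eventual fraction of species lost (a first-order estimate; the exponent $z$ is empirical, commonly quoted in the range 0.15–0.35):

```latex
S = cA^{z}
\quad\Longrightarrow\quad
\frac{S_{\text{new}}}{S_{\text{old}}}
  = \left(\frac{A_{\text{new}}}{A_{\text{old}}}\right)^{\!z}
```

For example, halving habitat area with $z = 0.25$ predicts that about $1 - 0.5^{0.25} \approx 16\%$ of species will eventually be lost.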
The measure of ongoing species loss is made more complex by the fact that most of the Earth's species have not been described or evaluated. Estimates vary greatly on how many species actually exist (estimated range: 3,600,000–111,700,000)[77] and on how many have received a species binomial (estimated range: 1.5–8 million).[77] Less than 1% of all described species have been studied beyond simply noting their existence.[77] From these figures, the IUCN reports that 23% of vertebrates, 5% of invertebrates, and 70% of plants that have been evaluated are designated as endangered or threatened.[78][79] Better knowledge of the actual numbers of species is being constructed by The Plant List.
Systematic conservation planning is an effective way to seek and identify efficient and effective types of reserve design to capture or sustain the highest-priority biodiversity values and to work with communities in support of local ecosystems. Margules and Pressey identify six interlinked stages in the systematic planning approach:[80] compiling data on the biodiversity of the planning region; identifying conservation goals for the region; reviewing existing conservation areas; selecting additional conservation areas; implementing conservation actions; and maintaining the required values of conservation areas.
Conservation biologists regularly prepare detailed conservation plans for grant proposals or to effectively coordinate their plan of action and to identify best management practices (e.g.[81]). Systematic strategies generally employ the services of geographic information systems to assist in the decision-making process. The SLOSS debate is often considered in planning.
Conservation physiology was defined by Steven J. Cooke and colleagues as:[11]
An integrative scientific discipline applying physiological concepts, tools, and knowledge to characterizing biological diversity and its ecological implications; understanding and predicting how organisms, populations, and ecosystems respond to environmental change and stressors; and solving conservation problems across the broad range of taxa (i.e. including microbes, plants, and animals). Physiology is considered in the broadest possible terms to include functional and mechanistic responses at all scales, and conservation includes the development and refinement of strategies to rebuild populations, restore ecosystems, inform conservation policy, generate decision-support tools, and manage natural resources.
Conservation physiology is particularly relevant to practitioners in that it has the potential to generate cause-and-effect relationships and reveal the factors that contribute to population declines.
The Society for Conservation Biology is a global community of conservation professionals dedicated to advancing the science and practice of conserving biodiversity. Conservation biology as a discipline reaches beyond biology into subjects such as philosophy, law, economics, humanities, arts, anthropology, and education.[5][6] Within biology, conservation genetics and evolution are immense fields unto themselves, but these disciplines are of prime importance to the practice and profession of conservation biology.
Conservationists introduce bias when they support policies using qualitative descriptions, such as habitat degradation or healthy ecosystems. Conservation biologists advocate for reasoned and sensible management of natural resources and do so with a disclosed combination of science, reason, logic, and values in their conservation management plans.[5] This sort of advocacy is similar to the medical profession advocating for healthy lifestyle options; both are beneficial to human well-being yet remain scientific in their approach.
There is a movement in conservation biology suggesting that a new form of leadership is needed to mobilize conservation biology into a more effective discipline that is able to communicate the full scope of the problem to society at large.[82] The movement proposes an adaptive leadership approach that parallels the adaptive management approach. The concept is based on a new philosophy or leadership theory steering away from historical notions of power, authority, and dominance. Adaptive conservation leadership is reflective and more equitable, as it applies to any member of society who can mobilize others toward meaningful change using communication techniques that are inspiring, purposeful, and collegial. Adaptive conservation leadership and mentoring programs are being implemented by conservation biologists through organizations such as the Aldo Leopold Leadership Program.[83]
Conservation may be classified as either in-situ conservation, which protects an endangered species in its natural habitat, or ex-situ conservation, which occurs outside the natural habitat.[84] In-situ conservation involves protecting or restoring the habitat. Ex-situ conservation, on the other hand, involves protection outside an organism's natural habitat, such as on reservations or in gene banks, in circumstances where viable populations may not be present in the natural habitat.[84]
The conservation of habitats such as forest, water, or soil in their natural state is crucial for any species that depends on them. Creating an entirely new environment that merely resembles the original habitat of wild animals is less effective than preserving the original habitat itself. A reforestation campaign in Nepal, for example, has helped increase the density and area covered by original forests, which proved better than creating an entirely new environment after the original had been lost. Recent research indicates that old forests store more carbon than young ones, making their protection all the more crucial. The reforestation campaign launched by Himalayan Adventure Therapy in Nepal periodically visits old forests that are vulnerable to loss of density and area through unplanned urbanization, then plants saplings of the same tree families in areas where the old forest has been lost, as well as in adjoining barren areas, thereby maintaining the forest's density and extent.
Alternatively, non-interference may be used, which is termed the preservationist method. Preservationists advocate for giving areas of nature and species a protected existence that halts interference from humans.[5] In this regard, conservationists differ from preservationists in the social dimension, as conservation biology engages society and seeks equitable solutions for both society and ecosystems. Some preservationists emphasize the potential of biodiversity in a world without humans.
Ecological monitoring is the systematic collection of data relevant to the ecology of a species or habitat at repeating intervals with defined methods.[85] Long-term monitoring of environmental and ecological metrics is an important part of any successful conservation initiative, but long-term data for many species and habitats are simply not available.[86] A lack of historical data on species populations, habitats, and ecosystems means that any current or future conservation work will have to make assumptions to determine whether the work is having any effect on population or ecosystem health. Ecological monitoring can provide early warning signals of deleterious effects (from human activities or natural changes in an environment) on an ecosystem and its species.[85] For signs of negative trends in ecosystem or species health to be detected, monitoring methods must be carried out at appropriate time intervals, and the metric must be able to capture the trend of the population or habitat as a whole.
Long-term monitoring can include the continued measurement of many biological, ecological, and environmental metrics, including annual breeding success, population size estimates, water quality, and biodiversity (which can be measured in many ways, e.g. with the Shannon index). When determining which metrics to monitor for a conservation project, it is important to understand how an ecosystem functions and what roles different species and abiotic factors play within the system.[87] It is important to have a precise reason for why ecological monitoring is implemented; within the context of conservation, this reasoning is often to track changes before, during, or after conservation measures are put in place to help a species or habitat recover from degradation and/or maintain its integrity.[85]
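The Shannon index mentioned above condenses species counts into a single diversity figure, where $p_i$ is the proportion of individuals belonging to species $i$ among $S$ species:

```latex
H' = -\sum_{i=1}^{S} p_i \ln p_i
```

Tracked over time, a falling $H'$ can flag a community becoming dominated by fewer species even before any single species disappears.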
Another benefit of ecological monitoring is the hard evidence it gives scientists for advising policy makers and funding bodies about conservation efforts. Ecological monitoring data are important not only for convincing politicians, funders, and the public that a conservation program is worth implementing, but also for keeping them convinced that the program should continue to be supported.[86]
There is plenty of debate on how conservation resources can be used most efficiently; even within ecological monitoring, there is debate on which metrics money, time, and personnel should be dedicated to for the best chance of making a positive impact. One general discussion topic is whether monitoring should happen where there is little human impact (to understand a system that has not been degraded by humans), where there is human impact (so the effects of humans can be investigated), or where there are data deserts and little is known about how habitats and communities respond to human perturbations.[85]
The concept of bioindicators/indicator species can be applied to ecological monitoring as a way to investigate how pollution is affecting an ecosystem.[88] Species such as amphibians and birds are highly susceptible to pollutants in their environment because their behaviours and physiological features cause them to absorb pollutants at a faster rate than other species. Amphibians spend parts of their time in the water and on land, making them susceptible to changes in both environments.[89] They also have very permeable skin through which they breathe and take in water, which means they also take in airborne and water-soluble pollutants. Birds often cover a wide range of habitat types annually and generally revisit the same nesting site each year, which makes it easier for researchers to track ecological effects at both the individual and the population level.[90]
Many conservation researchers believe that having a long-term ecological monitoring program should be a priority for conservation projects, protected areas, and regions where environmental harm mitigation is used.[91]
Conservation biologists are interdisciplinary researchers who practice ethics in the biological and social sciences. Chan states[92] that conservationists must advocate for biodiversity and can do so in a scientifically ethical manner by not simultaneously advocating against other competing values.
A conservationist may be inspired by the resource conservation ethic,[7]: 15 which seeks to identify what measures will deliver "the greatest good for the greatest number of people for the longest time."[5]: 13 In contrast, some conservation biologists argue that nature has an intrinsic value that is independent of anthropocentric usefulness or utilitarianism.[7]: 3, 12, 16–17 Aldo Leopold was a classical thinker and writer on such conservation ethics whose philosophy, ethics, and writings are still valued and revisited by modern conservation biologists.[7]: 16–17
The International Union for Conservation of Nature (IUCN) has organized a global assortment of scientists and research stations across the planet to monitor the changing state of nature in an effort to tackle the extinction crisis. The IUCN provides annual updates on the status of species conservation through its Red List.[93] The IUCN Red List serves as an international conservation tool, identifying those species most in need of conservation attention and providing a global index of the status of biodiversity.[94] Beyond the dramatic rates of species loss, however, conservation scientists note that the sixth mass extinction is a biodiversity crisis requiring far more action than a priority focus on rare, endemic, or endangered species. Concerns about biodiversity loss cover a broader conservation mandate that looks at ecological processes, such as migration, and a holistic examination of biodiversity at levels beyond the species, including genetic, population, and ecosystem diversity.[95] Extensive, systematic, and rapid rates of biodiversity loss threaten the sustained well-being of humanity by limiting the supply of ecosystem services that are otherwise regenerated by the complex and evolving holistic network of genetic and ecosystem diversity. While the conservation status of species is employed extensively in conservation management,[94] some scientists highlight that it is the common species that are the primary source of exploitation and habitat alteration by humanity. Moreover, common species are often undervalued despite their role as the primary source of ecosystem services.[96][97]
While most in the community of conservation science "stress the importance" of sustaining biodiversity,[98] there is debate on how to prioritize genes, species, or ecosystems, which are all components of biodiversity (e.g. Bowen, 1999). While the predominant approach to date has been to focus efforts on endangered species by conserving biodiversity hotspots, some scientists (e.g.[99]) and conservation organizations, such as the Nature Conservancy, argue that it is more cost-effective, logical, and socially relevant to invest in biodiversity coldspots.[100] The costs of discovering, naming, and mapping out the distribution of every species, they argue, make this an ill-advised conservation venture. They reason it is better to understand the significance of the ecological roles of species.[95]
Biodiversity hotspots and coldspots are a way of recognizing that the spatial concentration of genes, species, and ecosystems is not uniformly distributed on the Earth's surface.[101] For example, "... 44% of all species of vascular plants and 35% of all species in four vertebrate groups are confined to 25 hotspots comprising only 1.4% of the land surface of the Earth."[102]
Those arguing in favor of setting priorities for coldspots point out that there are other measures to consider beyond biodiversity. They point out that emphasizing hotspots downplays the importance of the social and ecological connections to vast areas of the Earth's ecosystems where biomass, not biodiversity, reigns supreme.[103] It is estimated that 36% of the Earth's surface, encompassing 38.9% of the world's vertebrates, lacks the endemic species needed to qualify as a biodiversity hotspot.[104] Moreover, measures show that maximizing protections for biodiversity does not capture ecosystem services any better than targeting randomly chosen regions.[105] Population-level biodiversity (mostly in coldspots) is disappearing at a rate ten times that at the species level.[99][106] The importance of addressing biomass versus endemism as a concern for conservation biology is highlighted in literature measuring the level of threat to global ecosystem carbon stocks, which do not necessarily reside in areas of endemism.[107][108] A hotspot-priority approach[109] would not invest heavily in places such as steppes, the Serengeti, the Arctic, or taiga. These areas contribute a great abundance of population-level (not species-level) biodiversity[106] and ecosystem services, including cultural value and planetary nutrient cycling.[100]
Those in favor of the hotspot approach point out that species are irreplaceable components of the global ecosystem, that they are concentrated in the places most threatened, and that they should therefore receive maximal strategic protections.[110] This is a hotspot approach because the priority is set to target species-level concerns over population-level concerns or biomass.[106][failed verification] Species richness and genetic biodiversity contribute to and engender ecosystem stability, ecosystem processes, evolutionary adaptability, and biomass.[111] Both sides agree, however, that conserving biodiversity is necessary to reduce the extinction rate and identify an inherent value in nature; the debate hinges on how to prioritize limited conservation resources in the most cost-effective way.
Conservation biologists have started to collaborate with leading global economists to determine how to measure the wealth and services of nature and to make these values apparent in global market transactions.[112] This system of accounting is called natural capital and would, for example, register the value of an ecosystem before it is cleared to make way for development.[113] The WWF publishes its Living Planet Report, which provides a global index of biodiversity by monitoring approximately 5,000 populations in 1,686 vertebrate species (mammals, birds, fish, reptiles, and amphibians) and reporting on the trends in much the same way that the stock market is tracked.[114]
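The underlying Living Planet Index aggregates population trends roughly as follows (a sketch of the published method, not an official specification): each monitored population contributes its year-over-year change on a logarithmic scale, and the index chains the average of those changes from a 1970 baseline of 1:

```latex
d_t = \log_{10}\!\left(\frac{N_t}{N_{t-1}}\right),
\qquad
\mathrm{LPI}_t = \mathrm{LPI}_{t-1}\cdot 10^{\,\bar{d}_t}
```

where $N_t$ is a population's size in year $t$ and $\bar{d}_t$ averages $d_t$ over populations and species.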
This method of measuring the global economic benefit of nature has been endorsed by the G8+5 leaders and the European Commission.[112] Nature sustains many ecosystem services[115] that benefit humanity.[116] Many of the Earth's ecosystem services are public goods without a market and therefore without a price or value.[112] When the stock market registers a financial crisis, traders on Wall Street are not in the business of trading stocks in much of the planet's living natural capital stored in ecosystems. There is no natural stock market with investment portfolios in sea horses, amphibians, insects, and other creatures that provide a sustainable supply of ecosystem services valuable to society.[116] The ecological footprint of society has exceeded the bio-regenerative capacity limits of the planet's ecosystems by about 30 percent, which is the same percentage by which vertebrate populations declined from 1970 through 2005.[114]
The ecological credit crunch is a global challenge. TheLiving Planet Report 2008tells us that more than three-quarters of the world's people live in nations that are ecological debtors – their national consumption has outstripped their country's biocapacity. Thus, most of us are propping up our current lifestyles, and our economic growth, by drawing (and increasingly overdrawing) upon the ecological capital of other parts of the world.
The inherent natural economy plays an essential role in sustaining humanity,[117] including the regulation of global atmospheric chemistry, pollinating crops, pest control,[118] cycling soil nutrients, purifying our water supply,[119] supplying medicines and health benefits,[120] and providing unquantifiable quality-of-life improvements. There is a relationship, a correlation, between markets and natural capital, and between social income inequity and biodiversity loss: rates of biodiversity loss are greater in places where the inequity of wealth is greatest.[121] An example is the Perdido Key beach mouse, an endangered species whose decline began with continued development along beaches. These mice live in sand dunes and play an important role in that ecosystem: they eat the dune grass and thereby spread its seeds across the beach, helping more grass grow. Sand dunes may not seem important, but they act as a barrier against storms coming from the ocean, such as hurricanes.[122][123]
Although a direct market comparison of natural capital is likely insufficient in terms of human value, one measure of ecosystem services suggests the contribution amounts to trillions of dollars yearly.[124][125][126][127] For example, one segment of North American forests has been assigned an annual value of 250 billion dollars;[128] as another example, honey bee pollination is estimated to provide between 10 and 18 billion dollars of value yearly.[129] The value of ecosystem services on one New Zealand island has been imputed to be as great as the GDP of that region.[130] This planetary wealth is being lost at an incredible rate as the demands of human society exceed the bio-regenerative capacity of the Earth. While biodiversity and ecosystems are resilient, the danger of losing them is that humans cannot recreate many ecosystem functions through technological innovation.
Some species, called keystone species, form a central supporting hub unique to their ecosystem.[131] The loss of such a species results in a collapse of ecosystem function, as well as the loss of coexisting species.[5] Keystone species are usually predators, owing to their ability to control the population of prey in their ecosystem.[131] The importance of a keystone species was shown by the extinction of the Steller's sea cow (Hydrodamalis gigas) through its interaction with sea otters, sea urchins, and kelp. Kelp beds grow and form nurseries in shallow waters to shelter creatures that support the food chain. Sea urchins feed on kelp, while sea otters feed on sea urchins. With the rapid decline of sea otters due to overhunting, sea urchin populations grazed unrestricted on the kelp beds and the ecosystem collapsed. Left unchecked, the urchins destroyed the shallow-water kelp communities that supported the Steller's sea cow's diet and hastened its demise.[132] The sea otter was thought to be a keystone species because the coexistence of many ecological associates in the kelp beds relied upon otters for their survival. However, this was later questioned by Turvey and Risley,[133] who showed that hunting alone would have driven the Steller's sea cow extinct.
An indicator species has a narrow set of ecological requirements, so such species become useful targets for observing the health of an ecosystem. Some animals, such as amphibians with their semi-permeable skin and linkages to wetlands, have an acute sensitivity to environmental harm and thus may serve as a miner's canary. Indicator species are monitored in an effort to capture environmental degradation through pollution or some other link to proximate human activities.[5] Monitoring an indicator species is a measure to determine whether there is a significant environmental impact that can serve to advise or modify practice, for instance in different forest silviculture treatments and management scenarios, or to measure the degree of harm that a pesticide may impart on the health of an ecosystem.
Government regulators, consultants, or NGOs regularly monitor indicator species; however, there are limitations, coupled with many practical considerations, that must be observed for the approach to be effective.[134] It is generally recommended that multiple indicators (genes, populations, species, communities, and landscapes) be monitored for effective conservation measurement that prevents harm to the complex, and often unpredictable, responses of ecosystem dynamics (Noss, 1997[135]: 88–89).
An example of an umbrella species is the monarch butterfly, because of its lengthy migrations and aesthetic value. The monarch migrates across North America, covering multiple ecosystems, and so requires a large area to exist. Any protections afforded to the monarch butterfly will at the same time umbrella many other species and habitats. An umbrella species is often used as a flagship species: a species, such as the giant panda, the blue whale, the tiger, the mountain gorilla, or the monarch butterfly, that captures the public's attention and attracts support for conservation measures.[5] Paradoxically, however, conservation bias towards flagship species sometimes threatens other species of chief concern.[136]
Conservation biologists study trends and processes from the paleontological past to the ecological present as they gain an understanding of the context of species extinction.[1] It is generally accepted that five major global mass extinctions register in Earth's history: the Ordovician (440 mya), Devonian (370 mya), Permian–Triassic (245 mya), Triassic–Jurassic (200 mya), and Cretaceous–Paleogene (66 mya) extinction spasms. Within the last 10,000 years, human influence over the Earth's ecosystems has been so extensive that scientists have difficulty estimating the number of species lost;[137] that is to say, the rates of deforestation, reef destruction, wetland draining, and other human acts are proceeding much faster than human assessment of species. The latest Living Planet Report by the World Wide Fund for Nature estimates that we have exceeded the bio-regenerative capacity of the planet, requiring 1.6 Earths to support the demands placed on our natural resources.[138]
Conservation biologists are dealing with, and have published, evidence from all corners of the planet indicating that humanity may be causing the sixth and fastest planetary extinction event.[139][140][141] It has been suggested that an unprecedented number of species is becoming extinct in what is known as the Holocene extinction event.[142] The global extinction rate may be approximately 1,000 times higher than the natural background extinction rate.[143] It is estimated that two-thirds of all mammal genera and one-half of all mammal species weighing at least 44 kilograms (97 lb) have gone extinct in the last 50,000 years.[133][144][145][146] The Global Amphibian Assessment[147] reports that amphibians are declining on a global scale faster than any other vertebrate group, with over 32% of all surviving species threatened with extinction; of those threatened species, 43% have populations in continual decline. Since the mid-1980s, actual rates of extinction have exceeded rates measured from the fossil record by a factor of 211.[148] However, "The current amphibian extinction rate may range from 25,039 to 45,474 times the background extinction rate for amphibians."[148] The global extinction trend occurs in every major vertebrate group being monitored. For example, 23% of all mammals and 12% of all birds are Red Listed by the International Union for Conservation of Nature (IUCN), meaning they too are threatened with extinction. Even though extinction is natural, the decline in species is happening at a rate that evolution simply cannot match, leading to the greatest continual mass extinction on Earth.[149] Humans have dominated the planet, and our high consumption of resources, along with the pollution generated, is affecting the environments in which other species live.[149][150] There is a wide variety of species that humans are working to protect, such as the Hawaiian crow and the whooping crane of Texas.[151] People can also act to preserve species by advocating and voting for global and national policies that improve the climate, under the concepts of climate mitigation and climate restoration. The Earth's oceans demand particular attention, as climate change continues to alter pH levels, making them uninhabitable for organisms with shells, which dissolve as a result.[143]
Global assessments of the world's coral reefs continue to report drastic and rapid rates of decline. By 2000, 27% of the world's coral reef ecosystems had effectively collapsed. The largest period of decline occurred in a dramatic "bleaching" event in 1998, in which approximately 16% of all the coral reefs in the world disappeared in less than a year. Coral bleaching is caused by a mixture of environmental stresses, including increases in ocean temperature and acidity, causing both the release of symbiotic algae and the death of corals.[152] Decline and extinction risk in coral reef biodiversity have risen dramatically in the past ten years. The loss of coral reefs, which are predicted to go extinct in the next century, threatens the balance of global biodiversity, will have huge economic impacts, and endangers food security for hundreds of millions of people.[153] Conservation biology plays an important role in international agreements covering the world's oceans[152] and other issues pertaining to biodiversity.
These predictions will undoubtedly appear extreme, but it is difficult to imagine how such changes will not come to pass without fundamental changes in human behavior.
The oceans are threatened by acidification due to increased CO2 levels. This is a most serious threat to societies that rely heavily upon oceanic natural resources. A concern is that the majority of all marine species will not be able to evolve or acclimate in response to the changes in ocean chemistry.[154]
The prospect of averting mass extinction seems unlikely when "90% of all of the large (average approximately ≥50 kg), open ocean tuna, billfishes, and sharks in the ocean"[18] are reportedly gone. Given the scientific review of current trends, the ocean is predicted to have few surviving multi-cellular organisms, with only microbes left to dominate marine ecosystems.[18]
Serious concerns are also being raised about taxonomic groups that do not receive the same degree of social attention or attract the same funds as the vertebrates. These include fungal (including lichen-forming),[155] invertebrate (particularly insect[16][156][157]), and plant communities,[158] in which the vast majority of biodiversity is represented. Conservation of fungi and conservation of insects, in particular, are both of pivotal importance for conservation biology. As mycorrhizal symbionts, and as decomposers and recyclers, fungi are essential for the sustainability of forests.[155] The value of insects in the biosphere is enormous because they outnumber all other living groups in measure of species richness. The greatest bulk of biomass on land is found in plants, which is sustained by insect relations. This great ecological value of insects is countered by a society that often reacts negatively toward these aesthetically "unpleasant" creatures.[159][160]
One area of concern in the insect world that has caught the public eye is the mysterious case of the missing honey bees (Apis mellifera). Honey bees provide indispensable ecological services through their acts of pollination, supporting a huge variety of agricultural crops, and honey and wax have come to be used extensively throughout the world.[161] The sudden disappearance of bees, leaving empty hives, known as colony collapse disorder (CCD), is not uncommon. However, in a 16-month period from 2006 through 2007, 29% of 577 beekeepers across the United States reported CCD losses in up to 76% of their colonies. This sudden demographic loss in bee numbers is placing a strain on the agricultural sector. The cause behind the massive declines puzzles scientists. Pests, pesticides, and global warming are all being considered as possible causes.[162][163]
Another highlight linking conservation biology to insects, forests, and climate change is the mountain pine beetle (Dendroctonus ponderosae) epidemic of British Columbia, Canada, which has infested 470,000 km2 (180,000 sq mi) of forested land since 1999.[107] An action plan has been prepared by the Government of British Columbia to address this problem.[164][165]
This impact [pine beetle epidemic] converted the forest from a small net carbon sink to a large net carbon source both during and immediately after the outbreak. In the worst year, the impacts resulting from the beetle outbreak in British Columbia were equivalent to 75% of the average annual direct forest fire emissions from all of Canada during 1959–1999.
A large proportion of parasite species are threatened by extinction. A few are being eradicated as pests of humans or domestic animals; most, however, are harmless. Parasites also make up a significant amount of global biodiversity, given that they constitute a large proportion of all species on Earth,[166] making them of increasing conservation interest. Threats include the decline or fragmentation of host populations[167] and the extinction of host species. Parasites are intricately woven into ecosystems and food webs, thereby occupying valuable roles in ecosystem structure and function.[168][166]
Many threats to biodiversity exist today. The acronym H.I.P.P.O. summarizes the top present-day threats: Habitat Loss, Invasive Species, Pollution, Human Population, and Overharvesting.[169] The primary threats to biodiversity are habitat destruction (such as deforestation, agricultural expansion, and urban development) and overexploitation (such as the wildlife trade).[137][170][171][172][173][174][175][176][177] Habitat fragmentation also poses challenges, because the global network of protected areas covers only 11.5% of the Earth's surface.[178] A significant consequence of fragmentation and the lack of linked protected areas is the reduction of animal migration on a global scale.[179] Considering that billions of tonnes of biomass are responsible for nutrient cycling across the Earth, the reduction of migration is a serious matter for conservation biology.[180][181]
Human activities are associated directly or indirectly with nearly every aspect of the current extinction spasm.
However, human activities need not necessarily cause irreparable harm to the biosphere. With conservation management and planning for biodiversity at all levels, from genes to ecosystems, there are examples where humans sustainably coexist with nature.[182] Even with the current threats to biodiversity there are ways we can improve the current condition and start anew.
Many of the threats to biodiversity, including disease and climate change, are reaching inside the borders of protected areas, leaving them "not-so-protected" (e.g. Yellowstone National Park).[183] Climate change, for example, is often cited as a serious threat in this regard, because there is a feedback loop between species extinction and the release of carbon dioxide into the atmosphere.[107][108] Ecosystems store and cycle large amounts of carbon, which regulates global conditions.[184] Major climate shifts and temperature changes are already making survival difficult for some species.[169] The effects of global warming add a catastrophic threat toward a mass extinction of global biological diversity.[185] Numerous species are predicted to face unprecedented levels of extinction risk due to population increase, climate change, and economic development in the future.[186] Conservationists have claimed that not all species can be saved and that they must decide which ones their efforts should be used to protect, a concept known as conservation triage.[169] The extinction threat is estimated to range from 15 to 37 percent of all species by 2050,[185] or 50 percent of all species over the next 50 years.[16] The current extinction rate is 100 to 100,000 times more rapid than at any time in the last several billion years.[169]
|
https://en.wikipedia.org/wiki/Conservation_biology
|
The NX bit (no-execute bit) is a processor feature that separates areas of a virtual address space (the memory layout a program uses) into sections for storing data or program instructions. An operating system supporting the NX bit can mark certain areas of the virtual address space as non-executable, preventing the processor from running any code stored there. This technique, known as executable space protection or Write XOR Execute, protects computers from malicious software that attempts to insert harmful code into another program's data storage area and execute it, as in a buffer overflow attack.
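A minimal sketch of how a program cooperates with this protection on a POSIX system: memory is first mapped writable but not executable, code is copied in, and only then is the mapping flipped to executable-but-not-writable with mprotect. The machine code bytes are illustrative x86-64 (`mov eax, 42; ret`), so this only runs as written on that architecture:

```c
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

int main(void) {
    /* x86-64 encoding of: mov eax, 42; ret */
    static const unsigned char body[] = {0xB8, 0x2A, 0x00, 0x00, 0x00, 0xC3};

    /* Step 1: map a page readable and writable, but NOT executable. */
    unsigned char *page = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                               MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (page == MAP_FAILED) return 1;
    memcpy(page, body, sizeof body);

    /* Calling into the page here would fault: the NX bit is set. */

    /* Step 2: flip to executable-but-not-writable (the W^X discipline). */
    if (mprotect(page, 4096, PROT_READ | PROT_EXEC) != 0) return 1;

    int (*fn)(void) = (int (*)(void))page;
    printf("%d\n", fn());  /* prints 42 */
    return 0;
}
```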
The term "NX bit" was introduced byAdvanced Micro Devices(AMD) as a marketing term.Intelmarkets this feature as theXD bit(execute disable), while theMIPS architecturerefers to it as theXI bit(execute inhibit). In theARM architecture, introduced inARMv6, it is known asXN(execute never).[1]The term NX bit is often used broadly to describe similar executable space protection technologies in other processors.
x86processors, since the80286, included a similar capability implemented at thesegmentlevel. However, almost all operating systems for the80386and later x86 processors implement theflat memory model, so they cannot use this capability. There was no "Executable" flag in the page table entry (page descriptor) in those processors, until, to make this capability available to operating systems using the flat memory model, AMD added a "no-execute" or NX bit to the page table entry in itsAMD64architecture, providing a mechanism that can control execution perpagerather than per whole segment.
Intel implemented a similar feature in itsItanium(Merced) processor—havingIA-64architecture—in 2001, but did not bring it to the more popular x86 processor families (Pentium,Celeron,Xeon, etc.). In the x86 architecture it was first implemented by AMD, as theNX bit, for use by itsAMD64line of processors, such as theAthlon 64andOpteron.[2]
After AMD's decision to include this functionality in its AMD64 instruction set, Intel implemented the similar XD bit feature in x86 processors beginning with the Pentium 4 processors based on later iterations of the Prescott core.[3] The NX bit specifically refers to bit number 63 (i.e. the most significant bit) of a 64-bit entry in the page table. If this bit is set to 0, code can be executed from that page; if set to 1, code cannot be executed from that page, and anything residing there is assumed to be data. The bit is only available in the long-mode (64-bit) and legacy Physical Address Extension (PAE) page-table formats, not in x86's original 32-bit page-table format, because page table entries in that format are only 32 bits wide and therefore lack a 64th bit to disable and enable execution.
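A small illustration of that layout (`pte` here is just a 64-bit integer standing in for a long-mode page table entry, not an operating-system API):

```c
#include <stdint.h>
#include <stdio.h>

#define PTE_NX ((uint64_t)1 << 63)  /* bit 63: no-execute */

int main(void) {
    uint64_t pte = 0x0000000012345025ULL;  /* hypothetical entry: present, writable */
    pte |= PTE_NX;                         /* forbid instruction fetches from this page */
    printf("executable: %s\n", (pte & PTE_NX) ? "no" : "yes");
    return 0;
}
```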
Windows XP SP2 and later support Data Execution Prevention (DEP).
ARMv6 introduced a new page table entry format that includes an "execute never" bit.[1] In ARMv8-A, VMSAv8-64 block and page descriptors, and VMSAv8-32 long-descriptor block and page descriptors, for stage 1 translations have "execute never" bits for both privileged and unprivileged modes, while block and page descriptors for stage 2 translations have a single "execute never" bit (two bits with the ARMv8.2-TTS2UXN feature). VMSAv8-32 short-descriptor translation table descriptors at level 1 have "execute never" bits for both privileged and unprivileged modes, and at level 2 have a single "execute never" bit.[4]
As of the Fourth Edition of the Alpha Architecture manual, DEC (now HP) Alpha has a Fault on Execute bit in page table entries with the OpenVMS, Tru64 UNIX, and Alpha Linux PALcode.[5]
The SPARC Reference MMU for Sun SPARC version 8 has permission values of Read Only, Read/Write, Read/Execute, and Read/Write/Execute in page table entries,[6] although not all SPARC processors have a SPARC Reference MMU.
A SPARC version 9 MMU may provide, but is not required to provide, any combination of read/write/execute permissions.[7] A Translation Table Entry in a Translation Storage Buffer in Oracle SPARC Architecture 2011, Draft D1.0.0, has separate Executable and Writable bits.[8]
Page table entries for IBM PowerPC's hashed page tables have a no-execute page bit.[9] Page table entries for radix-tree page tables in the Power ISA have separate permission bits granting read/write and execute access.[10]
Translation lookaside buffer (TLB) entries and page table entries in PA-RISC 1.1 and PA-RISC 2.0 support read-only, read/write, read/execute, and read/write/execute pages.[11][12]
TLB entries in Itanium support read-only, read/write, read/execute, and read/write/execute pages.[13]
As of the twelfth edition of the z/Architecture Principles of Operation, z/Architecture processors may support the Instruction-Execution Protection facility, which adds a bit in page table entries that controls whether instructions from a given region, segment, or page can be executed.[14]
|
https://en.wikipedia.org/wiki/NX_Bit
|
In group theory, a word metric on a discrete group $G$ is a way to measure distance between any two elements of $G$. As the name suggests, the word metric is a metric on $G$, assigning to any two elements $g$, $h$ of $G$ a distance $d(g,h)$ that measures how efficiently their difference $g^{-1}h$ can be expressed as a word whose letters come from a generating set for the group. The word metric on $G$ is very closely related to the Cayley graph of $G$: the word metric measures the length of the shortest path in the Cayley graph between two elements of $G$.
A generating set for $G$ must first be chosen before a word metric on $G$ is specified. Different choices of a generating set will typically yield different word metrics. While this seems at first to be a weakness in the concept of the word metric, it can be exploited to prove theorems about geometric properties of groups, as is done in geometric group theory.
The group ofintegersZ{\displaystyle \mathbb {Z} }is generated by the set {-1,+1}. The integer -3 can be expressed as -1-1-1+1-1, a word of length 5 in these generators. But the word that expresses -3 most efficiently is -1-1-1, a word of length 3. The distance between 0 and -3 in the word metric is therefore equal to 3. More generally, the distance between two integersmandnin the word metric is equal to |m-n|, because the shortest word representing the differencem-nhas length equal to |m-n|.
For a more illustrative example, the elements of the groupZ⊕Z{\displaystyle \mathbb {Z} \oplus \mathbb {Z} }can be thought of asvectorsin theCartesian planewith integer coefficients. The groupZ⊕Z{\displaystyle \mathbb {Z} \oplus \mathbb {Z} }is generated by the standard unit vectorse1=⟨1,0⟩{\displaystyle e_{1}=\langle 1,0\rangle },e2=⟨0,1⟩{\displaystyle e_{2}=\langle 0,1\rangle }and their inverses−e1=⟨−1,0⟩{\displaystyle -e_{1}=\langle -1,0\rangle },−e2=⟨0,−1⟩{\displaystyle -e_{2}=\langle 0,-1\rangle }. TheCayley graphofZ⊕Z{\displaystyle \mathbb {Z} \oplus \mathbb {Z} }is the so-calledtaxicab geometry. It can be pictured in the plane as an infinite square grid of city streets, where each horizontal and vertical line with integer coordinates is a street, and each point ofZ⊕Z{\displaystyle \mathbb {Z} \oplus \mathbb {Z} }lies at the intersection of a horizontal and a vertical street. Each horizontal segment between two vertices represents the generating vectore1{\displaystyle e_{1}}or−e1{\displaystyle -e_{1}}, depending on whether the segment is travelled in the forward or backward direction, and each vertical segment representse2{\displaystyle e_{2}}or−e2{\displaystyle -e_{2}}. A car starting from⟨1,2⟩{\displaystyle \langle 1,2\rangle }and travelling along the streets to⟨−2,4⟩{\displaystyle \langle -2,4\rangle }can make the trip by many different routes. But no matter what route is taken, the car must travel at least |1 - (-2)| = 3 horizontal blocks and at least |2 - 4| = 2 vertical blocks, for a total trip distance of at least 3 + 2 = 5. If the car goes out of its way the trip may be longer, but the minimal distance travelled by the car, equal in value to the word metric between⟨1,2⟩{\displaystyle \langle 1,2\rangle }and⟨−2,4⟩{\displaystyle \langle -2,4\rangle }is therefore equal to 5.
In general, given two elementsv=⟨i,j⟩{\displaystyle v=\langle i,j\rangle }andw=⟨k,l⟩{\displaystyle w=\langle k,l\rangle }ofZ⊕Z{\displaystyle \mathbb {Z} \oplus \mathbb {Z} }, the distance betweenv{\displaystyle v}andw{\displaystyle w}in the word metric is equal to|i−k|+|j−l|{\displaystyle |i-k|+|j-l|}.
LetGbe a group, letSbe agenerating setforG, and suppose thatSis closed under the inverse operation onG. Awordover the setSis just a finite sequencew=s1…sL{\displaystyle w=s_{1}\ldots s_{L}}whose entriess1,…,sL{\displaystyle s_{1},\ldots ,s_{L}}are elements ofS. The integerLis called the length of the wordw{\displaystyle w}. Using the group operation inG, the entries of a wordw=s1…sL{\displaystyle w=s_{1}\ldots s_{L}}can be multiplied in order, remembering that the entries are elements ofG. The result of this multiplication is an elementw¯{\displaystyle {\bar {w}}}in the groupG, which is called theevaluationof the wordw. As a special case, the empty wordw=∅{\displaystyle w=\emptyset }has length zero, and its evaluation is the identity element ofG.
Given an elementgofG, itsword norm|g| with respect to the generating setSis defined to be the shortest length of a wordw{\displaystyle w}overSwhose evaluationw¯{\displaystyle {\bar {w}}}is equal tog. Given two elementsg,hinG, the distance d(g,h) in the word metric with respect toSis defined to be|g−1h|{\displaystyle |g^{-1}h|}. Equivalently, d(g,h) is the shortest length of a wordwoverSsuch thatgw¯=h{\displaystyle g{\bar {w}}=h}.
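The definition translates directly into a breadth-first search over the Cayley graph. A minimal sketch, assuming the group elements are hashable, `multiply` implements the group operation, and `generators` is closed under inverses (all names are illustrative):

```python
from collections import deque

def word_norm(g, generators, identity, multiply):
    """Breadth-first search for the shortest word over `generators` evaluating to g.

    Assumes `generators` is closed under inverses, as in the definition above.
    """
    if g == identity:
        return 0
    seen = {identity}
    frontier = deque([(identity, 0)])
    while frontier:
        h, length = frontier.popleft()
        for s in generators:
            hs = multiply(h, s)
            if hs == g:
                return length + 1
            if hs not in seen:
                seen.add(hs)
                frontier.append((hs, length + 1))
    raise ValueError("g is not in the subgroup generated by `generators`")

# d(g, h) = |g^-1 h|; for (Z, +) with generators {-1, +1} this recovers |m - n|.
add = lambda a, b: a + b
assert word_norm(-3, [-1, 1], 0, add) == 3
```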
The word metric onGsatisfies the axioms for ametric, and it is not hard to prove this. The proof of the symmetry axiom d(g,h) = d(h,g) for a metric uses the assumption that the generating setSis closed under inverse.
The word metric has an equivalent definition formulated in more geometric terms using theCayley graphofGwith respect to the generating setS. When each edge of the Cayley graph is assigned a metric of length 1, the distance between two group elementsg,hinGis equal to the shortest length of a path in the Cayley graph from the vertexgto the vertexh.
The word metric onGcan also be defined without assuming that the generating setSis closed under inverse. To do this, first symmetrizeS, replacing it by a larger generating set consisting of eachs{\displaystyle s}inSas well as its inverses−1{\displaystyle s^{-1}}. Then define the word metric with respect toSto be the word metric with respect to the symmetrization ofS.
Suppose that F is the free group on the two-element set {a, b}. A word w in the symmetric generating set {a, b, a⁻¹, b⁻¹} is said to be reduced if the letters a, a⁻¹ do not occur next to each other in w, nor do the letters b, b⁻¹. Every element g ∈ F is represented by a unique reduced word, and this reduced word is the shortest word representing g. For example, since the word w = b⁻¹a is reduced and has length 2, the word norm of w equals 2, so the distance in the word metric between b and a equals 2. This can be visualized in terms of the Cayley graph, where the shortest path between b and a has length 2.
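In the free group, the word norm can therefore be computed by cancelling adjacent inverse pairs. A small sketch, assuming inverses are encoded as uppercase letters (a⁻¹ as 'A', b⁻¹ as 'B'):

```python
def reduce_word(word):
    """Cancel adjacent inverse pairs such as ('a', 'A'), where 'A' stands for a^-1.

    The reduced word is the unique shortest representative, so its length
    is the word norm in the free group.
    """
    inverse = {'a': 'A', 'A': 'a', 'b': 'B', 'B': 'b'}
    stack = []
    for letter in word:
        if stack and stack[-1] == inverse[letter]:
            stack.pop()          # cancel s s^-1
        else:
            stack.append(letter)
    return ''.join(stack)

# b^-1 a, written here as "Ba", is already reduced: norm 2.
assert len(reduce_word("Ba")) == 2
# a b b^-1 a^-1, written "abBA", reduces to the empty word: norm 0.
assert reduce_word("abBA") == ""
```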
The groupGactson itself by left multiplication: the action of eachk∈G{\displaystyle k\in G}takes eachg∈G{\displaystyle g\in G}tokg{\displaystyle kg}. This action is anisometryof the word metric. The proof is simple: the distance betweenkg{\displaystyle kg}andkh{\displaystyle kh}equals|(kg)−1(kh)|=|g−1h|{\displaystyle |(kg)^{-1}(kh)|=|g^{-1}h|}, which equals the distance betweeng{\displaystyle g}andh{\displaystyle h}.
In general, the word metric on a group G is not unique, because different symmetric generating sets give different word metrics. However, finitely generated word metrics are unique up to bilipschitz equivalence: if S, T are two symmetric, finite generating sets for G with corresponding word metrics d_S, d_T, then there is a constant K ≥ 1 such that for any g, h ∈ G, (1/K) d_T(g,h) ≤ d_S(g,h) ≤ K d_T(g,h).
This constantKis just the maximum of thedS{\displaystyle d_{S}}word norms of elements ofT{\displaystyle T}and thedT{\displaystyle d_{T}}word norms of elements ofS{\displaystyle S}. This proof is also easy: any word overScan be converted by substitution into a word overT, expanding the length of the word by a factor of at mostK, and similarly for converting words overTinto words overS.
The bilipschitz equivalence of word metrics implies in turn that thegrowth rateof a finitely generated group is a well-defined isomorphism invariant of the group, independent of the choice of a finite generating set. This implies in turn that various properties of growth, such as polynomial growth, the degree of polynomial growth, and exponential growth, are isomorphism invariants of groups. This topic is discussed further in the article on thegrowth rateof a group.
Ingeometric group theory, groups are studied by theiractionson metric spaces. A principle that generalizes the bilipschitz invariance of word metrics says that any finitely generated word metric onGisquasi-isometricto anyproper,geodesic metric spaceon whichGacts,properly discontinuouslyandcocompactly. Metric spaces on whichGacts in this manner are calledmodel spacesforG.
It follows in turn that any quasi-isometrically invariant property satisfied by the word metric ofGor by any model space ofGis an isomorphism invariant ofG. Moderngeometric group theoryis in large part the study of quasi-isometry invariants.
|
https://en.wikipedia.org/wiki/Word_metric
|
Incomputational complexity theory, aprobabilistically checkable proof(PCP) is a type ofproofthat can be checked by arandomized algorithmusing a bounded amount of randomness and reading a bounded number of bits of the proof. The algorithm is then required to accept correct proofs and reject incorrect proofs with very high probability. A standard proof (orcertificate), as used in theverifier-based definition of thecomplexity classNP, also satisfies these requirements, since the checking procedure deterministically reads the whole proof, always accepts correct proofs and rejects incorrect proofs. However, what makes them interesting is the existence of probabilistically checkable proofs that can be checked by reading only a few bits of the proof using randomness in an essential way.
Probabilistically checkable proofs give rise to many complexity classes depending on the number of queries required and the amount of randomness used. The classPCP[r(n),q(n)]refers to the set ofdecision problemsthat have probabilistically checkable proofs that can be verified in polynomial time using at mostr(n) random bits and by reading at mostq(n) bits of the proof.[1]Unless specified otherwise, correct proofs should always be accepted, and incorrect proofs should be rejected with probability greater than 1/2. ThePCP theorem, a major result in computational complexity theory, states thatPCP[O(logn),O(1)] =NP.
Given a decision problem L (or a language L with its alphabet set Σ), a probabilistically checkable proof system for L with completeness c(n) and soundness s(n), where 0 ≤ s(n) ≤ c(n) ≤ 1, consists of a prover and a verifier. Given a claimed solution x of length n, which might be false, the prover produces a proof π (a string over Σ*) stating that x solves L (i.e., x ∈ L). The verifier is a randomized oracle Turing machine V that checks the proof π for the statement that x solves L (or x ∈ L) and decides whether to accept the statement. The system has the following properties:
As for the computational complexity of the verifier: the verifier runs in polynomial time; the randomness complexity r(n) measures the maximum number of random bits that V uses over all x of length n; and the query complexity q(n) is the maximum number of queries that V makes to π over all x of length n.
In the above definition, the length of proof is not mentioned since usually it includes the alphabet set and all the witnesses. For the prover, we do not care how it arrives at the solution to the problem; we care only about the proof it gives of the solution's membership in the language.
The verifier is said to benon-adaptiveif it makes all its queries before it receives any of the answers to previous queries.
The complexity classPCPc(n),s(n)[r(n),q(n)]is the class of all decision problems having probabilistically checkable proof systems over binary alphabet of completenessc(n) and soundnesss(n), where the verifier is non-adaptive, runs in polynomial time, and it has randomness complexityr(n) and query complexityq(n).
The shorthand notationPCP[r(n),q(n)]is sometimes used forPCP1, 1/2[r(n),q(n)]. The complexity classPCPis defined asPCP1, 1/2[O(logn),O(1)].
The theory of probabilistically checkable proofs studies the power of probabilistically checkable proof systems under various restrictions of the parameters (completeness, soundness, randomness complexity, query complexity, and alphabet size). It has applications tocomputational complexity(in particularhardness of approximation) andcryptography.
The definition of a probabilistically checkable proof was explicitly introduced by Arora and Safra in 1992,[2]although their properties were studied earlier. In 1990 Babai, Fortnow, and Lund proved thatPCP[poly(n), poly(n)] =NEXP, providing the first nontrivial equivalence between standard proofs (NEXP) and probabilistically checkable proofs.[3]ThePCP theoremproved in 1992 states thatPCP[O(logn),O(1)] =NP.[2][4]
The theory ofhardness of approximationrequires a detailed understanding of the role of completeness, soundness, alphabet size, and query complexity in probabilistically checkable proofs.
From a computational complexity point of view, for extreme settings of the parameters, the definition of probabilistically checkable proofs is easily seen to be equivalent to standard complexity classes. For example, PCP[0, 0] = P (the verifier uses no randomness and reads no proof) and PCP[0, poly(n)] = NP (a deterministic polynomial-time verifier reading a polynomial-length proof).
The PCP theorem and MIP = NEXP can be characterized as follows: PCP[O(log n), O(1)] = NP, and PCP[poly(n), O(1)] = NEXP.
It is also known that PCP[r(n), q(n)] ⊆ NTIME(poly(n, 2^{O(r(n))} q(n))).
In particular,PCP[O(logn), poly(n)] =NP.
On the other hand, ifNP⊆PCP[o(logn),o(logn)]thenP = NP.[2]
A Linear PCP is a PCP in which the proof is a vector of elements of a finite fieldπ∈Fn{\displaystyle \pi \in \mathbb {F} ^{n}}, and such that the PCP oracle is only allowed to do linear operations on the proof. Namely, the response from the oracle to a verifier queryq∈Fn{\displaystyle q\in \mathbb {F} ^{n}}is a linear functionf(q,π){\displaystyle f(q,\pi )}. Linear PCPs have important applications in proof systems that can be compiled into SNARKs.
|
https://en.wikipedia.org/wiki/Probabilistically_checkable_proof
|
Incomputing, aroundoff error,[1]also calledrounding error,[2]is the difference between the result produced by a givenalgorithmusing exactarithmeticand the result produced by the same algorithm using finite-precision,roundedarithmetic.[3]Rounding errors are due to inexactness in the representation ofreal numbersand the arithmetic operations done with them. This is a form ofquantization error.[4]When using approximationequationsor algorithms, especially when using finitely many digits to represent real numbers (which in theory have infinitely many digits), one of the goals ofnumerical analysisis toestimatecomputation errors.[5]Computation errors, also callednumerical errors, include bothtruncation errorsand roundoff errors.
When a sequence of calculations with an input involving any roundoff error is made, errors may accumulate, sometimes dominating the calculation. In ill-conditioned problems, significant error may accumulate.[6]
In short, there are two major facets of roundoff errors involved in numerical calculations:[7]
The error introduced by attempting to represent a number using a finite string of digits is a form of roundoff error calledrepresentation error.[8]Here are some examples of representation error in decimal representations:
Increasing the number of digits allowed in a representation reduces the magnitude of possible roundoff errors, but any representation limited to finitely many digits will still cause some degree of roundoff error foruncountably manyreal numbers. Additional digits used for intermediary steps of a calculation are known asguard digits.[9]
Rounding multiple times can cause error to accumulate.[10]For example, if 9.945309 is rounded to two decimal places (9.95), then rounded again to one decimal place (10.0), the total error is 0.054691. Rounding 9.945309 to one decimal place (9.9) in a single step introduces less error (0.045309). This can occur, for example, when software performs arithmetic inx86 80-bit floating-pointand then rounds the result toIEEE 754 binary64 floating-point.
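This double-rounding effect can be reproduced directly with decimal arithmetic. A small sketch using Python's standard decimal module with half-up rounding:

```python
from decimal import Decimal, ROUND_HALF_UP

def round_to(x: Decimal, places: int) -> Decimal:
    return x.quantize(Decimal(1).scaleb(-places), rounding=ROUND_HALF_UP)

x = Decimal("9.945309")
twice = round_to(round_to(x, 2), 1)   # 9.95 first, then 10.0
once = round_to(x, 1)                 # 9.9 in a single step
print(twice, abs(x - twice))          # 10.0, error 0.054691
print(once, abs(x - once))            # 9.9,  error 0.045309
```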
Compared with thefixed-point number system, thefloating-point number systemis more efficient in representing real numbers so it is widely used in modern computers. While the real numbersR{\displaystyle \mathbb {R} }are infinite and continuous, a floating-point number systemF{\displaystyle F}is finite and discrete. Thus, representation error, which leads to roundoff error, occurs under the floating-point number system.
A floating-point number system F is characterized by 4 integers: the base (or radix) β, the precision p (the number of significand digits), and the exponent range [L, U].
Anyx∈F{\displaystyle x\in F}has the following form:x=±(d0.d1d2…dp−1⏟significand)β×βE⏞exponent=±d0×βE+d1×βE−1+…+dp−1×βE−(p−1){\displaystyle x=\pm (\underbrace {d_{0}.d_{1}d_{2}\ldots d_{p-1}} _{\text{significand}})_{\beta }\times \beta ^{\overbrace {E} ^{\text{exponent}}}=\pm d_{0}\times \beta ^{E}+d_{1}\times \beta ^{E-1}+\ldots +d_{p-1}\times \beta ^{E-(p-1)}}wheredi{\displaystyle d_{i}}is an integer such that0≤di≤β−1{\displaystyle 0\leq d_{i}\leq \beta -1}fori=0,1,…,p−1{\displaystyle i=0,1,\ldots ,p-1}, andE{\displaystyle E}is an integer such thatL≤E≤U{\displaystyle L\leq E\leq U}.
In theIEEEstandard the base is binary, i.e.β=2{\displaystyle \beta =2}, and normalization is used. The IEEE standard stores the sign, exponent, and significand in separate fields of a floating point word, each of which has a fixed width (number of bits). The two most commonly used levels of precision for floating-point numbers are single precision and double precision.
Machine epsiloncan be used to measure the level of roundoff error in the floating-point number system. Here are two different definitions.[3]
There are two common rounding rules, round-by-chop and round-to-nearest. The IEEE standard uses round-to-nearest.
Suppose that round-to-nearest is used with IEEE double precision, and consider the real number 9.4. Its binary expansion is infinitely repeating: 9.4 = 1.0010110011001100…₂ × 2³, where the block 1100 repeats forever.
Since the 53rd bit to the right of the binary point is a 1 and is followed by other nonzero bits, the round-to-nearest rule requires rounding up, that is, add 1 bit to the 52nd bit. Thus, the normalized floating-point representation in IEEE standard of 9.4 isfl(9.4)=1.0010110011001100110011001100110011001100110011001101×23.{\displaystyle fl(9.4)=1.0010110011001100110011001100110011001100110011001101\times 2^{3}.}
This representation is obtained by discarding the infinite tail 0.1100¯ × 2^−52 × 2³ = 0.0110¯ × 2^−51 × 2³ = 0.4 × 2^−48 and then adding 1 × 2^−52 × 2³ = 2^−49 in the rounding step.
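The representation error for 9.4 can be observed directly, since converting a float to a Decimal exposes the exact stored binary64 value. A small sketch:

```python
from decimal import Decimal, getcontext

getcontext().prec = 60  # enough decimal digits to subtract exactly

exact = Decimal(9.4)    # the exact binary64 value stored for the literal 9.4
print(exact)            # 9.4000000000000003552713678800500929355621337890625
print(exact - Decimal("9.4"))   # representation error: exactly 0.2 * 2**-49
```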
The machine epsilonϵmach{\displaystyle \epsilon _{\text{mach}}}can be used to measure the level of roundoff error when using the two rounding rules above. Below are the formulas and corresponding proof.[3]The first definition of machine epsilon is used here.
Letx=d0.d1d2…dp−1dp…×βn∈R{\displaystyle x=d_{0}.d_{1}d_{2}\ldots d_{p-1}d_{p}\ldots \times \beta ^{n}\in \mathbb {R} }wheren∈[L,U]{\displaystyle n\in [L,U]}, and letfl(x){\displaystyle fl(x)}be the floating-point representation ofx{\displaystyle x}.
Since round-by-chop is being used, it is|x−fl(x)||x|=|d0.d1d2…dp−1dpdp+1…×βn−d0.d1d2…dp−1×βn||d0.d1d2…×βn|=|dp.dp+1…×βn−p||d0.d1d2…×βn|=|dp.dp+1dp+2…||d0.d1d2…|×β−p{\displaystyle {\begin{aligned}{\frac {|x-fl(x)|}{|x|}}&={\frac {|d_{0}.d_{1}d_{2}\ldots d_{p-1}d_{p}d_{p+1}\ldots \times \beta ^{n}-d_{0}.d_{1}d_{2}\ldots d_{p-1}\times \beta ^{n}|}{|d_{0}.d_{1}d_{2}\ldots \times \beta ^{n}|}}\\&={\frac {|d_{p}.d_{p+1}\ldots \times \beta ^{n-p}|}{|d_{0}.d_{1}d_{2}\ldots \times \beta ^{n}|}}\\&={\frac {|d_{p}.d_{p+1}d_{p+2}\ldots |}{|d_{0}.d_{1}d_{2}\ldots |}}\times \beta ^{-p}\end{aligned}}}In order to determine the maximum of this quantity, there is a need to find the maximum of the numerator and the minimum of the denominator. Sinced0≠0{\displaystyle d_{0}\neq 0}(normalized system), the minimum value of the denominator is1{\displaystyle 1}. The numerator is bounded above by(β−1).(β−1)(β−1)¯=β{\displaystyle (\beta -1).(\beta -1){\overline {(\beta -1)}}=\beta }. Thus,|x−fl(x)||x|≤β1×β−p=β1−p{\displaystyle {\frac {|x-fl(x)|}{|x|}}\leq {\frac {\beta }{1}}\times \beta ^{-p}=\beta ^{1-p}}. Therefore,ϵ=β1−p{\displaystyle \epsilon =\beta ^{1-p}}for round-by-chop.
The proof for round-to-nearest is similar.
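For IEEE double precision (β = 2, p = 53, round-to-nearest), the machine epsilon is 2^−52. A small sketch confirming this, both from the standard library and by direct search:

```python
import sys

# binary64 has beta = 2 and p = 53 significand bits, so the round-by-chop
# bound above gives eps = beta**(1 - p) = 2**-52.
print(sys.float_info.epsilon == 2.0 ** -52)   # True

# eps can also be found empirically: halve until adding the half to 1.0
# is no longer distinguishable from 1.0.
eps = 1.0
while 1.0 + eps / 2 != 1.0:
    eps /= 2
print(eps)                                    # 2.220446049250313e-16
```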
Even though some numbers can be represented exactly by floating-point numbers (such numbers are called machine numbers), performing floating-point arithmetic on them may still lead to roundoff error in the final result.
Machine addition consists of lining up the decimal points of the two numbers to be added, adding them, and then storing the result again as a floating-point number. The addition itself can be done in higher precision but the result must be rounded back to the specified precision, which may lead to roundoff error.[3]
Roundoff error can be introduced when adding a large number and a small number: the shifting of the decimal points in the significands to make the exponents match causes the loss of some of the less significant digits. The loss of precision may be described as absorption.[11]
Note that the addition of two floating-point numbers can produce roundoff error when their sum is an order of magnitude greater than that of the larger of the two.
This kind of error can occur alongside an absorption error in a single operation.
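Both effects are easy to reproduce in binary64 arithmetic. A small sketch:

```python
# binary64 has a 53-bit significand: 2**53 + 1 is not representable, so the
# small addend is rounded away ("absorbed") and the sum compares equal.
big = 2.0 ** 53
print(big + 1.0 == big)         # True

# Rounding also occurs when a sum crosses a power of two:
print((2.0 ** 53 - 1.0) + 2.0)  # 9007199254740992.0 (ties-to-even), not ...93
```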
In general, the product of two p-digit significands contains up to 2p digits, so the result might not fit in the significand.[3]Thus roundoff error will be involved in the result.
In general, the quotient of two p-digit significands may contain more than p digits. Thus roundoff error will be involved in the result.
Absorption also applies to subtraction.
The subtracting of two nearly equal numbers is calledsubtractive cancellation.[3]When the leading digits are cancelled, the result may be too small to be represented exactly and it will just be represented as0{\displaystyle 0}.
Even with a somewhat larger ϵ, the result is still significantly unreliable in typical cases. There is little faith in the accuracy of the value because the greatest uncertainty in any floating-point number lies in its rightmost digits.
This is closely related to the phenomenon ofcatastrophic cancellation, in which the two numbers areknownto be approximations.
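A small sketch of subtractive cancellation, using 1 − cos(x) for tiny x together with an algebraically equivalent form that avoids the cancellation (via the identity 1 − cos x = 2 sin²(x/2)):

```python
import math

x = 1e-8
# Mathematically 1 - cos(x) ≈ x**2 / 2 = 5.0e-17, but cos(x) rounds to 1.0,
# so the subtraction cancels catastrophically and returns 0.0.
print(1.0 - math.cos(x))            # 0.0
print(2.0 * math.sin(x / 2) ** 2)   # about 5.0e-17 (stable rewrite)
```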
Errors can be magnified or accumulated when a sequence of calculations is applied on an initial input with roundoff error due to inexact representation.
An algorithm or numerical process is calledstableif small changes in the input only produce small changes in the output, andunstableif large changes in the output are produced.[12]For example, the computation off(x)=1+x−1{\displaystyle f(x)={\sqrt {1+x}}-1}using the "obvious" method is unstable nearx=0{\displaystyle x=0}due to the large error introduced in subtracting two similar quantities, whereas the equivalent expressionf(x)=x1+x+1{\displaystyle \textstyle {f(x)={\frac {x}{{\sqrt {1+x}}+1}}}}is stable.[12]
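A small sketch contrasting the two forms of this function near x = 0 (the printed digits are approximate and may vary slightly by platform):

```python
import math

def f_unstable(x):
    # subtracting two nearly equal quantities cancels most significant digits
    return math.sqrt(1.0 + x) - 1.0

def f_stable(x):
    # algebraically equivalent form with no cancellation near x = 0
    return x / (math.sqrt(1.0 + x) + 1.0)

x = 1e-15
print(f_unstable(x))   # about 4.4e-16: off by roughly 11% from the true value
print(f_stable(x))     # about 5.0e-16: accurate to nearly full precision
```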
Even if a stable algorithm is used, the solution to a problem may still be inaccurate due to the accumulation of roundoff error when the problem itself isill-conditioned.
The condition number of a problem is the ratio of the relative change in the solution to the relative change in the input.[3] A problem is well-conditioned if small relative changes in input result in small relative changes in the solution. Otherwise, the problem is ill-conditioned.[3] In other words, a problem is ill-conditioned if its condition number is "much larger" than 1.
The condition number is introduced as a measure of the roundoff errors that can result when solving ill-conditioned problems.[7]
|
https://en.wikipedia.org/wiki/Round-off_error
|
Instatistical theory, aU-statisticis a class of statistics defined as the average over the application of a given function applied to all tuples of a fixed size. The letter "U" stands for unbiased.[citation needed]In elementary statistics, U-statistics arise naturally in producingminimum-variance unbiased estimators.
The theory of U-statistics allows aminimum-variance unbiased estimatorto be derived from eachunbiased estimatorof anestimable parameter(alternatively,statisticalfunctional) for large classes ofprobability distributions.[1][2]An estimable parameter is ameasurable functionof the population'scumulative probability distribution: For example, for every probability distribution, the population median is an estimable parameter. The theory of U-statistics applies to general classes of probability distributions.
Many statistics originally derived for particular parametric families have been recognized as U-statistics for general distributions. In non-parametric statistics, the theory of U-statistics is used to establish the asymptotic normality and the finite-sample variance of statistical procedures such as estimators and tests.[3] The theory has been used to study more general statistics as well as stochastic processes, such as random graphs.[4][5][6]
Suppose that a problem involvesindependent and identically-distributed random variablesand that estimation of a certain parameter is required. Suppose that a simple unbiased estimate can be constructed based on only a few observations: this defines the basic estimator based on a given number of observations. For example, a single observation is itself an unbiased estimate of the mean and a pair of observations can be used to derive an unbiased estimate of the variance. The U-statistic based on this estimator is defined as the average (across all combinatorial selections of the given size from the full set of observations) of the basic estimator applied to the sub-samples.
Pranab K. Sen(1992) provides a review of the paper byWassily Hoeffding(1948), which introduced U-statistics and set out the theory relating to them, and in doing so Sen outlines the importance U-statistics have in statistical theory. Sen says,[7]“The impact of Hoeffding (1948) is overwhelming at the present time and is very likely to continue in the years to come.” Note that the theory of U-statistics is not limited to[8]the case ofindependent and identically-distributed random variablesor to scalar random-variables.[9]
The term U-statistic, due to Hoeffding (1948), is defined as follows.
LetK{\displaystyle K}be either the real or complex numbers, and letf:(Kd)r→K{\displaystyle f\colon (K^{d})^{r}\to K}be aK{\displaystyle K}-valued function ofr{\displaystyle r}d{\displaystyle d}-dimensional variables.
For eachn≥r{\displaystyle n\geq r}the associated U-statisticfn:(Kd)n→K{\displaystyle f_{n}\colon (K^{d})^{n}\to K}is defined to be the average of the valuesf(xi1,…,xir){\displaystyle f(x_{i_{1}},\dotsc ,x_{i_{r}})}over the setIr,n{\displaystyle I_{r,n}}ofr{\displaystyle r}-tuples of indices from{1,2,…,n}{\displaystyle \{1,2,\dotsc ,n\}}with distinct entries.
Formally, f_n(x_1, …, x_n) = ((n − r)!/n!) Σ_{(i_1, …, i_r) ∈ I_{r,n}} f(x_{i_1}, …, x_{i_r}).
In particular, if f is symmetric the above is simplified to f_n(x_1, …, x_n) = (1/C(n,r)) Σ_{(i_1, …, i_r) ∈ J_{r,n}} f(x_{i_1}, …, x_{i_r}),
where nowJr,n{\displaystyle J_{r,n}}denotes the subset ofIr,n{\displaystyle I_{r,n}}ofincreasingtuples.
Each U-statisticfn{\displaystyle f_{n}}is necessarily asymmetric function.
U-statistics are very natural in statistical work, particularly in Hoeffding's context ofindependent and identically distributed random variables, or more generally forexchangeable sequences, such as insimple random samplingfrom a finite population, where the defining property is termed ‘inheritance on the average’.
Fisher'sk-statistics and Tukey'spolykaysare examples ofhomogeneous polynomialU-statistics (Fisher, 1929; Tukey, 1950).
For a simple random sampleφof sizentaken from a population of sizeN, the U-statistic has the property that the average over sample valuesƒn(xφ) is exactly equal to the population valueƒN(x).[clarification needed]
Some examples:
Iff(x)=x{\displaystyle f(x)=x}the U-statisticfn(x)=x¯n=(x1+⋯+xn)/n{\displaystyle f_{n}(x)={\bar {x}}_{n}=(x_{1}+\cdots +x_{n})/n}is the sample mean.
Iff(x1,x2)=|x1−x2|{\displaystyle f(x_{1},x_{2})=|x_{1}-x_{2}|}, the U-statistic is the mean pairwise deviationfn(x1,…,xn)=2/(n(n−1))∑i>j|xi−xj|{\displaystyle f_{n}(x_{1},\ldots ,x_{n})=2/(n(n-1))\sum _{i>j}|x_{i}-x_{j}|}, defined forn≥2{\displaystyle n\geq 2}.
Iff(x1,x2)=(x1−x2)2/2{\displaystyle f(x_{1},x_{2})=(x_{1}-x_{2})^{2}/2}, the U-statistic is thesample variancefn(x)=∑(xi−x¯n)2/(n−1){\displaystyle f_{n}(x)=\sum (x_{i}-{\bar {x}}_{n})^{2}/(n-1)}with divisorn−1{\displaystyle n-1}, defined forn≥2{\displaystyle n\geq 2}.
The third k-statistic k_{3,n}(x) = Σ(x_i − x̄_n)³ · n/((n − 1)(n − 2)), the sample skewness defined for n ≥ 3, is a U-statistic.
The following case highlights an important point. Iff(x1,x2,x3){\displaystyle f(x_{1},x_{2},x_{3})}is themedianof three values,fn(x1,…,xn){\displaystyle f_{n}(x_{1},\ldots ,x_{n})}is not the median ofn{\displaystyle n}values. However, it is a minimum variance unbiased estimate of the expected value of the median of three values, not the median of the population. Similar estimates play a central role where the parameters of a family ofprobability distributionsare being estimated by probability weighted moments orL-moments.
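The construction is mechanical: average a kernel over all size-r subsets of the observations. A minimal sketch, checked against the sample-variance example above:

```python
from itertools import combinations
from statistics import variance

def u_statistic(f, r, xs):
    """Average of the kernel f over all size-r combinations (symmetric f)."""
    combos = list(combinations(xs, r))
    return sum(f(*c) for c in combos) / len(combos)

xs = [2.0, 4.0, 7.0, 1.0, 9.0]

# The kernel f(x1, x2) = (x1 - x2)**2 / 2 recovers the sample variance
# with divisor n - 1.
u_var = u_statistic(lambda a, b: (a - b) ** 2 / 2, 2, xs)
assert abs(u_var - variance(xs)) < 1e-12
```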
|
https://en.wikipedia.org/wiki/U-statistic
|
Inmathematics,generalised means(orpower meanorHölder meanfromOtto Hölder)[1]are a family of functions for aggregating sets of numbers. These include as special cases thePythagorean means(arithmetic,geometric, andharmonicmeans).
Ifpis a non-zeroreal number, andx1,…,xn{\displaystyle x_{1},\dots ,x_{n}}are positive real numbers, then thegeneralized meanorpower meanwith exponentpof these positive real numbers is[2][3]
Mp(x1,…,xn)=(1n∑i=1nxip)1/p.{\displaystyle M_{p}(x_{1},\dots ,x_{n})=\left({\frac {1}{n}}\sum _{i=1}^{n}x_{i}^{p}\right)^{{1}/{p}}.}
(Seep-norm). Forp= 0we set it equal to the geometric mean (which is the limit of means with exponents approaching zero, as proved below):
M0(x1,…,xn)=(∏i=1nxi)1/n.{\displaystyle M_{0}(x_{1},\dots ,x_{n})=\left(\prod _{i=1}^{n}x_{i}\right)^{1/n}.}
Furthermore, for asequenceof positive weightswiwe define theweighted power meanas[2]Mp(x1,…,xn)=(∑i=1nwixip∑i=1nwi)1/p{\displaystyle M_{p}(x_{1},\dots ,x_{n})=\left({\frac {\sum _{i=1}^{n}w_{i}x_{i}^{p}}{\sum _{i=1}^{n}w_{i}}}\right)^{{1}/{p}}}and whenp= 0, it is equal to theweighted geometric mean:
M0(x1,…,xn)=(∏i=1nxiwi)1/∑i=1nwi.{\displaystyle M_{0}(x_{1},\dots ,x_{n})=\left(\prod _{i=1}^{n}x_{i}^{w_{i}}\right)^{1/\sum _{i=1}^{n}w_{i}}.}
The unweighted means correspond to setting allwi= 1.
A few particular values of p yield special cases with their own names:[4] the minimum (p → −∞), the harmonic mean (p = −1), the geometric mean (p → 0), the arithmetic mean (p = 1), the quadratic mean or root mean square (p = 2), the cubic mean (p = 3), and the maximum (p → +∞).
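A minimal sketch of the (weighted) power mean, treating p = 0 and p = ±∞ as the limits derived below (the function and variable names are illustrative):

```python
import math

def power_mean(xs, p, ws=None):
    """Weighted power mean M_p; p = 0, inf and -inf are handled as limits."""
    ws = ws or [1.0] * len(xs)
    total = sum(ws)
    if p == math.inf:
        return max(xs)
    if p == -math.inf:
        return min(xs)
    if p == 0:
        return math.exp(sum(w * math.log(x) for w, x in zip(ws, xs)) / total)
    return (sum(w * x ** p for w, x in zip(ws, xs)) / total) ** (1.0 / p)

xs = [1.0, 4.0, 4.0]
print(power_mean(xs, 1))    # arithmetic mean: 3.0
print(power_mean(xs, 0))    # geometric mean: 16**(1/3) ≈ 2.5198
print(power_mean(xs, -1))   # harmonic mean: 3 / (1 + 1/4 + 1/4) = 2.0
```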
For the purpose of the proof, we will assume without loss of generality thatwi∈[0,1]{\displaystyle w_{i}\in [0,1]}and∑i=1nwi=1.{\displaystyle \sum _{i=1}^{n}w_{i}=1.}
We can rewrite the definition ofMp{\displaystyle M_{p}}using the exponential function as
Mp(x1,…,xn)=exp(ln[(∑i=1nwixip)1/p])=exp(ln(∑i=1nwixip)p){\displaystyle M_{p}(x_{1},\dots ,x_{n})=\exp {\left(\ln {\left[\left(\sum _{i=1}^{n}w_{i}x_{i}^{p}\right)^{1/p}\right]}\right)}=\exp {\left({\frac {\ln {\left(\sum _{i=1}^{n}w_{i}x_{i}^{p}\right)}}{p}}\right)}}
In the limitp→ 0, we can applyL'Hôpital's ruleto the argument of the exponential function. We assume thatp∈R{\displaystyle p\in \mathbb {R} }butp≠ 0, and that the sum ofwiis equal to 1 (without loss in generality);[7]Differentiating the numerator and denominator with respect top, we havelimp→0ln(∑i=1nwixip)p=limp→0∑i=1nwixiplnxi∑j=1nwjxjp1=limp→0∑i=1nwixiplnxi∑j=1nwjxjp=∑i=1nwilnxi∑j=1nwj=∑i=1nwilnxi=ln(∏i=1nxiwi){\displaystyle {\begin{aligned}\lim _{p\to 0}{\frac {\ln {\left(\sum _{i=1}^{n}w_{i}x_{i}^{p}\right)}}{p}}&=\lim _{p\to 0}{\frac {\frac {\sum _{i=1}^{n}w_{i}x_{i}^{p}\ln {x_{i}}}{\sum _{j=1}^{n}w_{j}x_{j}^{p}}}{1}}\\&=\lim _{p\to 0}{\frac {\sum _{i=1}^{n}w_{i}x_{i}^{p}\ln {x_{i}}}{\sum _{j=1}^{n}w_{j}x_{j}^{p}}}\\&={\frac {\sum _{i=1}^{n}w_{i}\ln {x_{i}}}{\sum _{j=1}^{n}w_{j}}}\\&=\sum _{i=1}^{n}w_{i}\ln {x_{i}}\\&=\ln {\left(\prod _{i=1}^{n}x_{i}^{w_{i}}\right)}\end{aligned}}}
By the continuity of the exponential function, we can substitute back into the above relation to obtainlimp→0Mp(x1,…,xn)=exp(ln(∏i=1nxiwi))=∏i=1nxiwi=M0(x1,…,xn){\displaystyle \lim _{p\to 0}M_{p}(x_{1},\dots ,x_{n})=\exp {\left(\ln {\left(\prod _{i=1}^{n}x_{i}^{w_{i}}\right)}\right)}=\prod _{i=1}^{n}x_{i}^{w_{i}}=M_{0}(x_{1},\dots ,x_{n})}as desired.[2]
Assume (possibly after relabeling and combining terms together) thatx1≥⋯≥xn{\displaystyle x_{1}\geq \dots \geq x_{n}}. Then
limp→∞Mp(x1,…,xn)=limp→∞(∑i=1nwixip)1/p=x1limp→∞(∑i=1nwi(xix1)p)1/p=x1=M∞(x1,…,xn).{\displaystyle {\begin{aligned}\lim _{p\to \infty }M_{p}(x_{1},\dots ,x_{n})&=\lim _{p\to \infty }\left(\sum _{i=1}^{n}w_{i}x_{i}^{p}\right)^{1/p}\\&=x_{1}\lim _{p\to \infty }\left(\sum _{i=1}^{n}w_{i}\left({\frac {x_{i}}{x_{1}}}\right)^{p}\right)^{1/p}\\&=x_{1}=M_{\infty }(x_{1},\dots ,x_{n}).\end{aligned}}}
The formula forM−∞{\displaystyle M_{-\infty }}follows fromM−∞(x1,…,xn)=1M∞(1/x1,…,1/xn)=xn.{\displaystyle M_{-\infty }(x_{1},\dots ,x_{n})={\frac {1}{M_{\infty }(1/x_{1},\dots ,1/x_{n})}}=x_{n}.}
Letx1,…,xn{\displaystyle x_{1},\dots ,x_{n}}be a sequence of positive real numbers, then the following properties hold:[1]
In general, ifp<q, thenMp(x1,…,xn)≤Mq(x1,…,xn){\displaystyle M_{p}(x_{1},\dots ,x_{n})\leq M_{q}(x_{1},\dots ,x_{n})}and the two means are equal if and only ifx1=x2= ... =xn.
The inequality is true for real values ofpandq, as well as positive and negative infinity values.
It follows from the fact that, for all realp,∂∂pMp(x1,…,xn)≥0{\displaystyle {\frac {\partial }{\partial p}}M_{p}(x_{1},\dots ,x_{n})\geq 0}which can be proved usingJensen's inequality.
In particular, forpin{−1, 0, 1}, the generalized mean inequality implies thePythagorean meansinequality as well as theinequality of arithmetic and geometric means.
We will prove the weighted power mean inequality. For the purpose of the proof we will assume the following without loss of generality:wi∈[0,1]∑i=1nwi=1{\displaystyle {\begin{aligned}w_{i}\in [0,1]\\\sum _{i=1}^{n}w_{i}=1\end{aligned}}}
The proof for unweighted power means can be easily obtained by substitutingwi= 1/n.
Suppose an inequality between power means with exponents p and q holds: (Σ_{i=1}^{n} w_i x_i^p)^{1/p} ≥ (Σ_{i=1}^{n} w_i x_i^q)^{1/q}. Applying this to the reciprocals 1/x_i, we then have: (Σ_{i=1}^{n} w_i / x_i^p)^{1/p} ≥ (Σ_{i=1}^{n} w_i / x_i^q)^{1/q}.
We raise both sides to the power of −1 (strictly decreasing function in positive reals):(∑i=1nwixi−p)−1/p=(1∑i=1nwi1xip)1/p≤(1∑i=1nwi1xiq)1/q=(∑i=1nwixi−q)−1/q{\displaystyle \left(\sum _{i=1}^{n}w_{i}x_{i}^{-p}\right)^{-1/p}=\left({\frac {1}{\sum _{i=1}^{n}w_{i}{\frac {1}{x_{i}^{p}}}}}\right)^{1/p}\leq \left({\frac {1}{\sum _{i=1}^{n}w_{i}{\frac {1}{x_{i}^{q}}}}}\right)^{1/q}=\left(\sum _{i=1}^{n}w_{i}x_{i}^{-q}\right)^{-1/q}}
We get the inequality for means with exponents−pand−q, and we can use the same reasoning backwards, thus proving the inequalities to be equivalent, which will be used in some of the later proofs.
For anyq> 0and non-negative weights summing to 1, the following inequality holds:(∑i=1nwixi−q)−1/q≤∏i=1nxiwi≤(∑i=1nwixiq)1/q.{\displaystyle \left(\sum _{i=1}^{n}w_{i}x_{i}^{-q}\right)^{-1/q}\leq \prod _{i=1}^{n}x_{i}^{w_{i}}\leq \left(\sum _{i=1}^{n}w_{i}x_{i}^{q}\right)^{1/q}.}
The proof follows fromJensen's inequality, making use of the fact thelogarithmis concave:log∏i=1nxiwi=∑i=1nwilogxi≤log∑i=1nwixi.{\displaystyle \log \prod _{i=1}^{n}x_{i}^{w_{i}}=\sum _{i=1}^{n}w_{i}\log x_{i}\leq \log \sum _{i=1}^{n}w_{i}x_{i}.}
By applying theexponential functionto both sides and observing that as a strictly increasing function it preserves the sign of the inequality, we get∏i=1nxiwi≤∑i=1nwixi.{\displaystyle \prod _{i=1}^{n}x_{i}^{w_{i}}\leq \sum _{i=1}^{n}w_{i}x_{i}.}
Takingq-th powers of thexiyields∏i=1nxiq⋅wi≤∑i=1nwixiq∏i=1nxiwi≤(∑i=1nwixiq)1/q.{\displaystyle {\begin{aligned}&\prod _{i=1}^{n}x_{i}^{q{\cdot }w_{i}}\leq \sum _{i=1}^{n}w_{i}x_{i}^{q}\\&\prod _{i=1}^{n}x_{i}^{w_{i}}\leq \left(\sum _{i=1}^{n}w_{i}x_{i}^{q}\right)^{1/q}.\end{aligned}}}
Thus, we are done for the inequality with positiveq; the case for negatives is identical but for the swapped signs in the last step:
∏i=1nxi−q⋅wi≤∑i=1nwixi−q.{\displaystyle \prod _{i=1}^{n}x_{i}^{-q{\cdot }w_{i}}\leq \sum _{i=1}^{n}w_{i}x_{i}^{-q}.}
Of course, taking each side to the power of a negative number-1/qswaps the direction of the inequality.
∏i=1nxiwi≥(∑i=1nwixi−q)−1/q.{\displaystyle \prod _{i=1}^{n}x_{i}^{w_{i}}\geq \left(\sum _{i=1}^{n}w_{i}x_{i}^{-q}\right)^{-1/q}.}
We are to prove that for anyp<qthe following inequality holds:(∑i=1nwixip)1/p≤(∑i=1nwixiq)1/q{\displaystyle \left(\sum _{i=1}^{n}w_{i}x_{i}^{p}\right)^{1/p}\leq \left(\sum _{i=1}^{n}w_{i}x_{i}^{q}\right)^{1/q}}ifpis negative, andqis positive, the inequality is equivalent to the one proved above:(∑i=1nwixip)1/p≤∏i=1nxiwi≤(∑i=1nwixiq)1/q{\displaystyle \left(\sum _{i=1}^{n}w_{i}x_{i}^{p}\right)^{1/p}\leq \prod _{i=1}^{n}x_{i}^{w_{i}}\leq \left(\sum _{i=1}^{n}w_{i}x_{i}^{q}\right)^{1/q}}
The proof for positivepandqis as follows: Define the following function:f:R+→R+f(x)=xqp{\displaystyle f(x)=x^{\frac {q}{p}}}.fis a power function, so it does have a second derivative:f″(x)=(qp)(qp−1)xqp−2{\displaystyle f''(x)=\left({\frac {q}{p}}\right)\left({\frac {q}{p}}-1\right)x^{{\frac {q}{p}}-2}}which is strictly positive within the domain off, sinceq>p, so we knowfis convex.
Using this, and the Jensen's inequality we get:f(∑i=1nwixip)≤∑i=1nwif(xip)(∑i=1nwixip)q/p≤∑i=1nwixiq{\displaystyle {\begin{aligned}f\left(\sum _{i=1}^{n}w_{i}x_{i}^{p}\right)&\leq \sum _{i=1}^{n}w_{i}f(x_{i}^{p})\\[3pt]\left(\sum _{i=1}^{n}w_{i}x_{i}^{p}\right)^{q/p}&\leq \sum _{i=1}^{n}w_{i}x_{i}^{q}\end{aligned}}}after raising both side to the power of1/q(an increasing function, since1/qis positive) we get the inequality which was to be proven:
(∑i=1nwixip)1/p≤(∑i=1nwixiq)1/q{\displaystyle \left(\sum _{i=1}^{n}w_{i}x_{i}^{p}\right)^{1/p}\leq \left(\sum _{i=1}^{n}w_{i}x_{i}^{q}\right)^{1/q}}
Using the previously shown equivalence we can prove the inequality for negativepandqby replacing them with−qand−p, respectively.
The power mean could be generalized further to thegeneralizedf-mean:
Mf(x1,…,xn)=f−1(1n⋅∑i=1nf(xi)){\displaystyle M_{f}(x_{1},\dots ,x_{n})=f^{-1}\left({{\frac {1}{n}}\cdot \sum _{i=1}^{n}{f(x_{i})}}\right)}
This covers the geometric mean without using a limit withf(x) = log(x). The power mean is obtained forf(x) =xp. Properties of these means are studied in de Carvalho (2016).[3]
A power mean serves as a non-linear moving average which is shifted towards small signal values for small p and emphasizes big signal values for big p. Given an efficient implementation of a moving arithmetic mean called smooth, one can implement a moving power mean as sketched below.
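A minimal sketch, assuming p > 0 and using a simple sliding-window mean as the `smooth` routine (the names and window size are illustrative): raise each sample to the p-th power, smooth, then take the p-th root.

```python
def smooth(xs, window=3):
    """A plain sliding-window arithmetic mean, purely for illustration."""
    return [sum(xs[i:i + window]) / window for i in range(len(xs) - window + 1)]

def moving_power_mean(xs, p, window=3):
    """Moving power mean: p-th powers in, moving mean, p-th root out (p > 0)."""
    return [m ** (1.0 / p) for m in smooth([x ** p for x in xs], window)]

signal = [1.0, 2.0, 9.0, 2.0, 1.0]
print(moving_power_mean(signal, p=1))    # the plain moving average
print(moving_power_mean(signal, p=10))   # emphasizes the peak at 9
```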
|
https://en.wikipedia.org/wiki/Generalized_mean
|
Krippendorff's alpha coefficient,[1]named after academicKlaus Krippendorff, is a statistical measure of the agreement achieved when coding a set of units of analysis. Since the 1970s,alphahas been used incontent analysiswhere textual units are categorized by trained readers, in counseling andsurvey researchwhere experts code open-ended interview data into analyzable terms, in psychological testing where alternative tests of the same phenomena need to be compared, or inobservational studieswhere unstructured happenings are recorded for subsequent analysis.
Krippendorff's alpha generalizes several known statistics, often called measures of inter-coder agreement,inter-rater reliability, reliability of coding given sets of units (as distinct from unitizing) but it also distinguishes itself from statistics that are called reliability coefficients but are unsuitable to the particulars of coding data generated for subsequent analysis.
Krippendorff's alpha is applicable to any number of coders, each assigning one value to one unit of analysis, to incomplete (missing) data, to any number of values available for coding a variable, to binary, nominal, ordinal, interval, ratio, polar, and circular metrics (note that this is not a metric in the mathematical sense, but often the square of amathematical metric, seelevels of measurement), and it adjusts itself to small sample sizes of the reliability data. The virtue of a single coefficient with these variations is that computed reliabilities are comparable across any numbers of coders, values, different metrics, and unequal sample sizes.
Software for calculating Krippendorff's alpha is available.[2][3][4][5][6][7][8][9]
Reliability data are generated in a situation in whichm≥ 2 jointly instructed (e.g., by acode book) but independently working coders assign any one of a set of values 1,...,Vto a common set ofNunits of analysis. In their canonical form, reliability data are tabulated in anm-by-Nmatrix containingNvaluesvijthat codercihas assigned to unituj. Definemjas the number of values assigned to unitjacross all codersc. When data are incomplete,mjmay be less thanm. Reliability data require that values be pairable, i.e.,mj≥ 2. The total number of pairable values is∑j=1Nmj={\displaystyle \sum _{j=1}^{N}m_{j}=}n≤mN.
To help clarify, here is what the canonical form looks like, in the abstract:

units u:    1     2     …     N
coder c_1:  v_11  v_12  …     v_1N
coder c_2:  v_21  v_22  …     v_2N
⋮
coder c_m:  v_m1  v_m2  …     v_mN
We denote by R the set of all possible responses an observer can give. The responses of all observers for a given example are called a unit (it forms a multiset). We denote the multiset with these units as its items by U.
Alpha is given by: α = 1 − D_o/D_e, where D_o is the disagreement observed and D_e is the disagreement expected by chance.
The observed disagreement is D_o = (1/n) Σ_{u∈U} m_u [Σ_{c,k∈R} δ(c,k) n_{cku} / P(m_u, 2)], where δ is a metric function (note that this is not a metric in the mathematical sense, but often the square of a mathematical metric, see below), n is the total number of pairable elements, m_u is the number of items in a unit, n_{cku} is the number of (c,k) pairs in unit u, and P is the permutation function, P(m, 2) = m(m − 1). Rearranging terms, the sum can be interpreted in a conceptual way as the weighted average of the disagreements of the individual units, weighted by the number of coders assigned to unit j:
Do=1n∑j=1NmjE(δj){\displaystyle D_{o}={\frac {1}{n}}\sum _{j=1}^{N}m_{j}\,\mathbb {E} (\delta _{j})}
whereE(δj){\displaystyle \mathbb {E} (\delta _{j})}is the mean of the(mj2){\displaystyle m_{j} \choose 2}numbersδ(vij,vi′j){\displaystyle \delta (v_{ij},v_{i'j})}(herei>i′{\displaystyle i>i'}and define pairable elements). Note that in the casemj=m{\displaystyle m_{j}=m}for allj{\displaystyle j},Do{\displaystyle D_{o}}is just the average all the numbersδ(vij,vi′j){\displaystyle \delta (v_{ij},v_{i'j})}withi>i′{\displaystyle i>i'}. There is also an interpretation ofDo{\displaystyle D_{o}}as the (weighted) average observed distance from the diagonal.
The expected disagreement is D_e = (1/P(n, 2)) Σ_{c,k∈R} δ(c,k) P_{ck}, where P_ck is the number of ways the pair (c,k) can be made. This can be seen to be the average distance from the diagonal of all possible pairs of responses that could be derived from the multiset of all observations.
The above is equivalent to the usual form ofα{\displaystyle \alpha }once it has been simplified algebraically.[10]
One interpretation of Krippendorff'salphais:α=1−Dwithin units=in errorDwithin and between units=in total{\displaystyle \alpha =1-{\frac {D_{{\text{within units}}={\text{in error}}}}{D_{{\text{within and between units}}={\text{in total}}}}}}
In this general form, disagreementsDoandDemay be conceptually transparent but are computationally inefficient. They can be simplified algebraically, especially when expressed in terms of the visually more instructive coincidence matrix representation of the reliability data.
A coincidence matrix cross tabulates thenpairable values from the canonical form of the reliability data into av-by-vsquare matrix, wherevis the number of values available in a variable. Unlike contingency matrices, familiar in association and correlation statistics, which tabulatepairsof values (cross tabulation), a coincidence matrix tabulates all pairablevalues. A coincidence matrix omits references to coders and is symmetrical around its diagonal, which contains all perfect matches,viu=vi'ufor two codersiandi', across all unitsu. The matrix of observed coincidences contains frequencies:
o_{ck} = Σ_{u=1}^{N} [Σ_{i≠i′} I(v_{iu} = c) I(v_{i′u} = k)] / (m_u − 1), omitting unpaired values, where I(∘) = 1 if ∘ is true, and 0 otherwise.
Because a coincidence matrix tabulates all pairable values and its contents sum to the totaln, when four or more coders are involved,ockmay be fractions.
The matrix of expected coincidences contains frequencies: e_{ck} = n_c n_k / (n − 1) for c ≠ k, and e_{cc} = n_c (n_c − 1) / (n − 1), which sum to the same n_c, n_k, and n as does o_ck. In terms of these coincidences, Krippendorff's alpha becomes: α = 1 − (n − 1) [Σ_c Σ_{k>c} o_{ck} δ(c,k)] / [Σ_c Σ_{k>c} n_c n_k δ(c,k)].
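A minimal sketch computing alpha from reliability data via the coincidence matrix, with a pluggable difference function (nominal δ by default). The data layout and names are illustrative, and degenerate inputs (e.g., data with no variation at all) are not handled:

```python
from collections import Counter
from itertools import permutations

def krippendorff_alpha(units, delta=lambda c, k: float(c != k)):
    """Alpha from reliability data; `units` maps a unit id to its coded values."""
    # Build the coincidence matrix: each ordered pair within a unit
    # contributes 1/(m_u - 1), so every unit adds m_u pairable values.
    o = Counter()
    for values in units.values():
        m = len(values)
        if m < 2:
            continue                      # unpairable units carry no information
        for c, k in permutations(values, 2):
            o[(c, k)] += 1.0 / (m - 1)
    n_c = Counter()
    for (c, _k), w in o.items():
        n_c[c] += w                       # marginal totals of the matrix
    n = sum(n_c.values())                 # total number of pairable values
    d_o = sum(w * delta(c, k) for (c, k), w in o.items()) / n
    d_e = sum(n_c[c] * n_c[k] * delta(c, k)
              for c in n_c for k in n_c) / (n * (n - 1))
    return 1.0 - d_o / d_e

units = {1: ["a", "a", "b"], 2: ["a", "a"], 3: ["b", "b", "b"]}
print(krippendorff_alpha(units))          # 0.5625 for this toy data
```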
Difference functionsδ(v,v′){\displaystyle \delta (v,v')}[11]between valuesvandv'reflect the metric properties (levels of measurement) of their variable.
In general: δ(v,v) = 0 (perfect agreement carries no difference), δ(v,v′) ≥ 0, and δ(v,v′) = δ(v′,v).
In particular: δ_nominal(v,v′) = 1 for all v ≠ v′, and δ_interval(v,v′) = (v − v′)², as used in the example below.
Inasmuch as mathematical statements of the statistical distribution ofalphaare always only approximations, it is preferable to obtainalpha’sdistribution bybootstrapping.[12][13]Alpha'sdistribution gives rise to two indices:
The minimum acceptablealphacoefficient should be chosen according to the importance of the conclusions to be drawn from imperfect data. When the costs of mistaken conclusions are high, the minimumalphaneeds to be set high as well. In the absence of knowledge of the risks of drawing false conclusions from unreliable data, social scientists commonly rely on data with reliabilitiesα≥ 0.800, consider data with 0.800 >α≥ 0.667 only to draw tentative conclusions, and discard data whose agreement measures α < 0.667.[14]
Let the canonical form of reliability data be a 3-coder-by-15 unit matrix with 45 cells:
Suppose “*” indicates a default category like “cannot code,” “no answer,” or “lacking an observation.” Then, * provides no information about the reliability of data in the four values that matter. Note that units 2 and 14 contain no information and unit 1 contains only one value, which is not pairable within that unit. Thus, these reliability data consist not of mN = 45 but of n = 26 pairable values, not in N = 15 but in 12 multiply coded units.
The coincidence matrix for these data would be constructed as follows:
In terms of the entries in this coincidence matrix, Krippendorff'salphamay be calculated from:
For convenience, because products withδ(v,v)=0{\displaystyle \delta (v,v)=0}andδ(v,v′)=δ(v′,v){\displaystyle \delta (v,v')=\delta (v',v)}, only the entries in one of the off-diagonal triangles of the coincidence matrix are listed in the following:
Considering that allδnominal(v,v′)=1{\displaystyle \delta _{\text{nominal}}(v,v')=1}whenv≠v′{\displaystyle v{\neq }v'}for nominal data the above expression yields:
Withδinterval(1,2)=δinterval(2,3)=δinterval(3,4)=12,δinterval(1,3)=δinterval(2,4)=22,andδinterval(1,4)=32,{\displaystyle \delta _{\text{interval}}(1,2)=\delta _{\text{interval}}(2,3)=\delta _{\text{interval}}(3,4)=1^{2},\qquad \delta _{\text{interval}}(1,3)=\delta _{\text{interval}}(2,4)=2^{2},{\text{ and }}\delta _{\text{interval}}(1,4)=3^{2},}for interval data the above expression yields:
Here, α_interval > α_nominal because disagreements happen to occur largely among neighboring values, visualized by their occurring closer to the diagonal of the coincidence matrix, a condition that α_interval takes into account but α_nominal does not. When the observed frequencies o_{v≠v′} are on average proportional to the expected frequencies e_{v≠v′}, α_interval = α_nominal.
Comparingalphacoefficients across different metrics can provide clues to how coders conceptualize the metric of a variable.
Krippendorff'salphabrings several known statistics under a common umbrella, each of them has its own limitations but no additional virtues.
Krippendorff'salphais more general than any of these special purpose coefficients. It adjusts to varying sample sizes and affords comparisons across a wide variety of reliability data, mostly ignored by the familiar measures.
Semantically, reliability is the ability to rely on something, here on coded data for subsequent analysis. When a sufficiently large number of coders agree perfectly on what they have read or observed, relying on their descriptions is a safe bet. Judgments of this kind hinge on the number of coders duplicating the process and how representative the coded units are of the population of interest. Problems of interpretation arise when agreement is less than perfect, especially when reliability is absent.
Naming a statistic as one of agreement, reproducibility, or reliability does not make it a valid index of whether one can rely on coded data in subsequent decisions. Its mathematical structure must fit the process of coding units into a system of analyzable terms.
|
https://en.wikipedia.org/wiki/Krippendorff%27s_alpha
|
Inmathematics, arational functionis anyfunctionthat can be defined by arational fraction, which is analgebraic fractionsuch that both thenumeratorand thedenominatorarepolynomials. Thecoefficientsof the polynomials need not berational numbers; they may be taken in anyfieldK. In this case, one speaks of a rational function and a rational fractionoverK. The values of thevariablesmay be taken in any fieldLcontainingK. Then thedomainof the function is the set of the values of the variables for which the denominator is not zero, and thecodomainisL.
The set of rational functions over a fieldKis a field, thefield of fractionsof theringof thepolynomial functionsoverK.
A functionf{\displaystyle f}is called a rational function if it can be written in the form[1]
whereP{\displaystyle P}andQ{\displaystyle Q}arepolynomial functionsofx{\displaystyle x}andQ{\displaystyle Q}is not thezero function. Thedomainoff{\displaystyle f}is the set of all values ofx{\displaystyle x}for which the denominatorQ(x){\displaystyle Q(x)}is not zero.
However, ifP{\displaystyle \textstyle P}andQ{\displaystyle \textstyle Q}have a non-constantpolynomial greatest common divisorR{\displaystyle \textstyle R}, then settingP=P1R{\displaystyle \textstyle P=P_{1}R}andQ=Q1R{\displaystyle \textstyle Q=Q_{1}R}produces a rational function
which may have a larger domain thanf{\displaystyle f}, and is equal tof{\displaystyle f}on the domain off.{\displaystyle f.}It is a common usage to identifyf{\displaystyle f}andf1{\displaystyle f_{1}}, that is to extend "by continuity" the domain off{\displaystyle f}to that off1.{\displaystyle f_{1}.}Indeed, one can define a rational fraction as anequivalence classof fractions of polynomials, where two fractionsA(x)B(x){\displaystyle \textstyle {\frac {A(x)}{B(x)}}}andC(x)D(x){\displaystyle \textstyle {\frac {C(x)}{D(x)}}}are considered equivalent ifA(x)D(x)=B(x)C(x){\displaystyle A(x)D(x)=B(x)C(x)}. In this caseP(x)Q(x){\displaystyle \textstyle {\frac {P(x)}{Q(x)}}}is equivalent toP1(x)Q1(x).{\displaystyle \textstyle {\frac {P_{1}(x)}{Q_{1}(x)}}.}
Aproper rational functionis a rational function in which thedegreeofP(x){\displaystyle P(x)}is less than the degree ofQ(x){\displaystyle Q(x)}and both arereal polynomials, named by analogy to aproper fractioninQ.{\displaystyle \mathbb {Q} .}[2]
Incomplex analysis, a rational function
is the ratio of two polynomials with complex coefficients, whereQis not the zero polynomial andPandQhave no common factor (this avoidsftaking the indeterminate value 0/0).
The domain offis the set of complex numbers such thatQ(z)≠0{\displaystyle Q(z)\neq 0}.
Every rational function can be naturally extended to a function whose domain and range are the wholeRiemann sphere(complex projective line).
A complex rational function with degree one is aMöbius transformation.
Rational functions are representative examples ofmeromorphic functions.[3]
Iteration of rational functions on theRiemann sphere(i.e. arational mapping) createsdiscrete dynamical systems.[4]
There are several non-equivalent definitions of the degree of a rational function.
Most commonly, thedegreeof a rational function is the maximum of thedegreesof its constituent polynomialsPandQ, when the fraction is reduced tolowest terms. If the degree offisd, then the equation
hasddistinct solutions inzexcept for certain values ofw, calledcritical values, where two or more solutions coincide or where some solution is rejectedat infinity(that is, when the degree of the equation decreases after havingcleared the denominator).
Thedegreeof thegraphof a rational function is not the degree as defined above: it is the maximum of the degree of the numerator and one plus the degree of the denominator.
In some contexts, such as inasymptotic analysis, thedegreeof a rational function is the difference between the degrees of the numerator and the denominator.[5]: §13.6.1[6]: Chapter IV
Innetwork synthesisandnetwork analysis, a rational function of degree two (that is, the ratio of two polynomials of degree at most two) is often called abiquadratic function.[7]
The rational function f(x) = (x³ − 2x) / (2(x² − 5)) is not defined at x² = 5, that is, at x = ±√5. It is asymptotic to x/2 as x → ∞.
The rational function f(x) = 1/(x² + 1) is defined for all real numbers, but not for all complex numbers, since if x were a square root of −1 (i.e. the imaginary unit or its negative), then formal evaluation would lead to division by zero: f(i) = 1/(i² + 1) = 1/(−1 + 1) = 1/0, which is undefined.
Aconstant functionsuch asf(x) = πis a rational function since constants are polynomials. The function itself is rational, even though thevalueoff(x)is irrational for allx.
Everypolynomial functionf(x)=P(x){\displaystyle f(x)=P(x)}is a rational function withQ(x)=1.{\displaystyle Q(x)=1.}A function that cannot be written in this form, such asf(x)=sin(x),{\displaystyle f(x)=\sin(x),}is not a rational function. However, the adjective "irrational" isnotgenerally used for functions.
EveryLaurent polynomialcan be written as a rational function while the converse is not necessarily true, i.e., the ring of Laurent polynomials is asubringof the rational functions.
The rational functionf(x)=xx{\displaystyle f(x)={\tfrac {x}{x}}}is equal to 1 for allxexcept 0, where there is aremovable singularity. The sum, product, or quotient (excepting division by the zero polynomial) of two rational functions is itself a rational function. However, the process of reduction to standard form may inadvertently result in the removal of such singularities unless care is taken. Using the definition of rational functions as equivalence classes gets around this, sincex/xis equivalent to 1/1.
The coefficients of aTaylor seriesof any rational function satisfy alinear recurrence relation, which can be found by equating the rational function to a Taylor series with indeterminate coefficients, and collectinglike termsafter clearing the denominator.
For example, consider the expansion 1/(1 − x − x²) = Σ_{k=0}^{∞} a_k x^k. Multiplying through by the denominator and distributing, 1 = Σ_{k=0}^{∞} a_k x^k − Σ_{k=0}^{∞} a_k x^{k+1} − Σ_{k=0}^{∞} a_k x^{k+2}. After adjusting the indices of the sums to get the same powers of x, we get 1 = Σ_{k=0}^{∞} a_k x^k − Σ_{k=1}^{∞} a_{k−1} x^k − Σ_{k=2}^{∞} a_{k−2} x^k. Combining like terms gives 1 = a_0 + (a_1 − a_0) x + Σ_{k=2}^{∞} (a_k − a_{k−1} − a_{k−2}) x^k. Since this holds true for all x in the radius of convergence of the original Taylor series, we can compute as follows. Since the constant term on the left must equal the constant term on the right it follows that a_0 = 1. Then, since there are no powers of x on the left, all of the coefficients on the right must be zero, from which it follows that a_1 = a_0 = 1 and a_k = a_{k−1} + a_{k−2} for k ≥ 2; that is, the coefficients are the Fibonacci numbers.
Conversely, any sequence that satisfies a linear recurrence determines a rational function when used as the coefficients of a Taylor series. This is useful in solving such recurrences, since by usingpartial fraction decompositionwe can write any proper rational function as a sum of factors of the form1 / (ax+b)and expand these asgeometric series, giving an explicit formula for the Taylor coefficients; this is the method ofgenerating functions.
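A small sketch generating the Taylor coefficients of the example above directly from its recurrence:

```python
# Coefficients of the Taylor series of 1/(1 - x - x^2) via the linear
# recurrence derived above: a_0 = a_1 = 1, a_k = a_{k-1} + a_{k-2}.
def taylor_coefficients(n):
    a = [1, 1]
    while len(a) < n:
        a.append(a[-1] + a[-2])
    return a[:n]

print(taylor_coefficients(8))   # [1, 1, 2, 3, 5, 8, 13, 21]: Fibonacci numbers
```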
Inabstract algebrathe concept of a polynomial is extended to include formal expressions in which the coefficients of the polynomial can be taken from anyfield. In this setting, given a fieldFand some indeterminateX, arational expression(also known as arational fractionor, inalgebraic geometry, arational function) is any element of thefield of fractionsof thepolynomial ringF[X]. Any rational expression can be written as the quotient of two polynomialsP/QwithQ≠ 0, although this representation isn't unique.P/Qis equivalent toR/S, for polynomialsP,Q,R, andS, whenPS=QR. However, sinceF[X] is aunique factorization domain, there is aunique representationfor any rational expressionP/QwithPandQpolynomials of lowest degree andQchosen to bemonic. This is similar to how afractionof integers can always be written uniquely in lowest terms by canceling out common factors.
The field of rational expressions is denotedF(X). This field is said to be generated (as a field) overFby (atranscendental element)X, becauseF(X) does not contain any proper subfield containing bothFand the elementX.
Likepolynomials, rational expressions can also be generalized tonindeterminatesX1,...,Xn, by taking the field of fractions ofF[X1,...,Xn], which is denoted byF(X1,...,Xn).
An extended version of the abstract idea of rational function is used in algebraic geometry. There thefunction field of an algebraic varietyVis formed as the field of fractions of thecoordinate ringofV(more accurately said, of aZariski-denseaffine open set inV). Its elementsfare considered as regular functions in the sense of algebraic geometry on non-empty open setsU, and also may be seen as morphisms to theprojective line.
Rational functions are used innumerical analysisforinterpolationandapproximationof functions, for example thePadé approximantsintroduced byHenri Padé. Approximations in terms of rational functions are well suited forcomputer algebra systemsand other numericalsoftware. Like polynomials, they can be evaluated straightforwardly, and at the same time they express more diverse behavior than polynomials.
Rational functions are used to approximate or model more complex equations in science and engineering includingfieldsandforcesin physics,spectroscopyin analytical chemistry, enzyme kinetics in biochemistry, electronic circuitry, aerodynamics, medicine concentrations in vivo,wave functionsfor atoms and molecules, optics and photography to improve image resolution, and acoustics and sound.[citation needed]
In signal processing, the Laplace transform (for continuous systems) or the z-transform (for discrete-time systems) of the impulse response of a commonly used linear time-invariant system (filter) with infinite impulse response is a rational function over the complex numbers.
|
https://en.wikipedia.org/wiki/Rational_function
|
There are many differentnumeral systems, that is,writing systemsfor expressingnumbers.
"Abaseis a natural number B whosepowers(B multiplied by itself some number of times) are specially designated within a numerical system."[1]: 38The term is not equivalent toradix, as it applies to all numerical notation systems (not just positional ones with a radix) and most systems of spoken numbers.[1]Some systems have two bases, a smaller (subbase) and a larger (base); an example is Roman numerals, which are organized by fives (V=5, L=50, D=500, the subbase) and tens (X=10, C=100, M=1,000, the base).
零一二三四五六七八九十百千萬億 (Default,Traditional Chinese)〇一二三四五六七八九十百千万亿 (Default,Simplified Chinese)
Bengali০ ১ ২ ৩ ৪ ৫ ৬ ৭ ৮ ৯
Devanagari० १ २ ३ ४ ५ ६ ७ ८ ९
Gujarati૦ ૧ ૨ ૩ ૪ ૫ ૬ ૭ ૮ ૯
Kannada೦ ೧ ೨ ೩ ೪ ೫ ೬ ೭ ೮ ೯
Malayalam൦ ൧ ൨ ൩ ൪ ൫ ൬ ൭ ൮ ൯
Odia୦ ୧ ୨ ୩ ୪ ୫ ୬ ୭ ୮ ୯
Punjabi੦ ੧ ੨ ੩ ੪ ੫ ੬ ੭ ੮ ੯
Tamil௦ ௧ ௨ ௩ ௪ ௫ ௬ ௭ ௮ ௯
Telugu౦ ౧ ౨ ౩ ౪ ౫ ౬ ౭ ౮ ౯
Tibetan༠ ༡ ༢ ༣ ༤ ༥ ༦ ༧ ༨ ༩
Urdu۰ ۱ ۲ ۳ ۴ ۵ ۶ ۷ ۸ ۹
Numeral systems are classified here as to whether they usepositional notation(also known as place-value notation), and further categorized byradixor base.
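As a small illustration of positional (place-value) notation, the sketch below writes a non-negative integer in an arbitrary radix by repeated division; the digit alphabet used is an arbitrary choice made for this example.

```python
# Minimal illustration of positional (place-value) notation: express a
# non-negative integer in an arbitrary radix by repeated division.
DIGITS = "0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ"  # illustrative digit alphabet

def to_radix(n, base):
    if not 2 <= base <= len(DIGITS):
        raise ValueError("unsupported base")
    if n == 0:
        return "0"
    digits = []
    while n > 0:
        n, remainder = divmod(n, base)
        digits.append(DIGITS[remainder])
    return "".join(reversed(digits))  # most significant digit first

# Example: 2023 is "7E7" in base 16 and "11111100111" in base 2.
print(to_radix(2023, 16), to_radix(2023, 2))
```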
The common names are derivedsomewhat arbitrarilyfrom a mix ofLatinandGreek, in some cases including roots from both languages within a single name.[27]There have been some proposals for standardisation.[28]
Someemailspam filterstag messages with a number ofasterisksin ane-mail headersuch asX-Spam-BarorX-SPAM-LEVEL. The larger the number, the more likely the email is considered spam.
All known numeral systems developed before theBabylonian numeralsare non-positional,[65]as are many developed later, such as theRoman numerals. The French Cistercian monks createdtheir own numeral system.
|
https://en.wikipedia.org/wiki/List_of_numeral_systems
|
TCP Gender Changeris a method in computer networking for making an internalTCP/IPbasednetwork serveraccessible beyond its protectivefirewall.
It consists of two nodes: one resides inside the local area network, where it can access the desired server, and the other runs outside of the local area network, where the client can access it. These nodes are respectively called CC (Connect-Connect) and LL (Listen-Listen).
The nodes are named for the connections they make: the Connect-Connect node initiates two connections, one to the Listen-Listen node and one to the actual server, while the Listen-Listen node passively listens on two TCP/IP ports, one to receive a connection from CC and the other for an incoming connection from the client.
The CC node, which runs inside the network, establishes a control connection to LL and waits for LL's signal to open a connection to the internal server. Upon receiving a client connection, LL signals the CC node to connect to the server; once done, CC reports the result back to LL, and if the connection succeeded LL keeps the client connection, so that the client and server can communicate while CC and LL relay the data back and forth.
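The following is a minimal single-connection sketch of the two nodes; the host names, port numbers, and the one-line CONNECT/OK signalling are assumptions made for this example, and the real tool handles multiple simultaneous connections and more robust error handling.

```python
# Illustrative single-connection sketch of the LL (outside) and CC (inside) nodes.
# Host names, ports, and the "CONNECT"/"OK" signalling are assumptions made here.
import socket
import threading

def pump(src, dst):
    """Copy bytes from one socket to the other until the source closes."""
    try:
        while (data := src.recv(4096)):
            dst.sendall(data)
    finally:
        dst.close()

def relay(a, b):
    """Relay data in both directions between two connected sockets."""
    threading.Thread(target=pump, args=(a, b), daemon=True).start()
    pump(b, a)

def ll_node(control_port=9000, client_port=8000):
    """Listen-Listen node, run outside the firewall."""
    cc, _ = socket.create_server(("", control_port)).accept()    # control link from CC
    client, _ = socket.create_server(("", client_port)).accept()  # incoming client
    cc.sendall(b"CONNECT\n")                 # ask CC to open the server connection
    if cc.recv(16).strip() == b"OK":
        relay(client, cc)                    # shuttle client <-> CC (and thus the server)

def cc_node(ll_host="ll.example.org", control_port=9000,
            server_host="internal-server", server_port=5900):
    """Connect-Connect node, run inside the firewall next to the server."""
    cc = socket.create_connection((ll_host, control_port))
    if cc.recv(16).strip() == b"CONNECT":
        try:
            server = socket.create_connection((server_host, server_port))
        except OSError:
            cc.sendall(b"FAIL\n")
            return
        cc.sendall(b"OK\n")
        relay(cc, server)                    # shuttle LL <-> internal server
```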
One of the cases where it can be very useful is to connect to a desktop machine behind a firewall runningVNC, which would make the desktop remotely accessible over the network and beyond the firewall. Another useful scenario would be to create aVPNusingPPPoverSSH, or even simply using SSH to connect to an internalUnixbased server.
|
https://en.wikipedia.org/wiki/TCP_Gender_Changer
|
Clojure(/ˈkloʊʒər/, likeclosure)[17][18]is adynamicandfunctionaldialectof theprogramming languageLispon theJavaplatform.[19][20]
Like most other Lisps, Clojure'ssyntaxis built onS-expressionsthat are firstparsedintodata structuresby aLisp readerbefore beingcompiled.[21][17]Clojure's reader supports literal syntax formaps, sets, andvectorsalong with lists, and these are compiled to the mentioned structures directly.[21]Clojure treatscode as dataand has aLisp macrosystem.[22]Clojure is aLisp-1and is not intended to be code-compatible with other dialects of Lisp, since it uses its own set of data structures incompatible with other Lisps.[22]
Clojure advocatesimmutabilityandimmutable data structuresand encourages programmers to be explicit about managing identity and its states.[23]This focus on programming with immutable values and explicit progression-of-time constructs is intended to facilitate developing more robust, especiallyconcurrent, programs that are simple and fast.[24][25][17]While its type system is entirelydynamic, recent efforts have also sought the implementation of adependent type system.[26]
The language was created byRich Hickeyin the mid-2000s, originally for the Java platform; the language has since been ported to other platforms, such as theCommon Language Runtime(.NET). Hickey continues to lead development of the language as itsbenevolent dictator for life.
Rich Hickeyis the creator of the Clojure language.[19]Before Clojure, he developeddotLisp, a similar project based on the.NETplatform,[27]and three earlier attempts to provide interoperability between Lisp andJava: aJava foreign language interface forCommon Lisp(jfli),[28]AForeign Object Interface for Lisp(FOIL),[29]and aLisp-friendly interface to Java Servlets(Lisplets).[30]
Hickey spent about two and a half years working on Clojure before releasing it publicly in October 2007,[31]much of that time working exclusively on Clojure with no outside funding. At the end of this time, Hickey sent an email announcing the language to some friends in the Common Lisp community.
Clojure's name, according to Hickey, is aword playon the programming concept "closure" incorporating the letters C, L, and J forC#,Lisp, andJavarespectively—three languages which had a major influence on Clojure's design.[18]
Rich Hickey developed Clojure because he wanted a modernLispforfunctional programming, symbiotic with the establishedJavaplatform, and designed forconcurrency.[24][25][32][17]He has also stressed the importance of simplicity in programming language design and software architecture, advocating forloose coupling,polymorphismviaprotocols and type classesinstead ofinheritance,stateless functionsthat arenamespacedinstead ofmethodsorreplacing syntax with data.[33][34][35]
Clojure's approach tostateis characterized by the concept of identities,[23]which are represented as a series of immutable states over time. Since states are immutable values, any number of workers can operate on them in parallel, and concurrency becomes a question of managing changes from one state to another. For this purpose, Clojure provides several mutablereference types, each having well-defined semantics for the transition between states.[23]
Clojure runs on theJavaplatform and as a result, integrates withJavaand fully supports calling Java code from Clojure,[36][17]and Clojure code can be called from Java, too.[37]The community uses tools such as Clojurecommand-line interface(CLI)[38]orLeiningenfor project automation, providing support forMavenintegration. These tools handle project package management and dependencies and are configured using Clojure syntax.
As a Lisp dialect, Clojure supportsfunctionsasfirst-class objects, aread–eval–print loop(REPL), and a macro system.[6]Clojure'sLisp macrosystem is very similar to that ofCommon Lispwith the exception that Clojure's version of thebackquote(termed "syntax quote") qualifies symbols with theirnamespace. This helps prevent unintended name capture, as binding to namespace-qualified names is forbidden. It is possible to force a capturing macro expansion, but it must be done explicitly. Clojure does not allow user-defined reader macros, but the reader supports a more constrained form of syntactic extension.[39]Clojure supportsmultimethods[40]and forinterface-like abstractions has aprotocol[41]based polymorphism and data type system usingrecords,[42]providing high-performance and dynamic polymorphism designed to avoid theexpression problem.
Clojure has support forlazy sequencesand encourages the principle ofimmutabilityandpersistent data structures. As afunctional language, emphasis is placed onrecursionandhigher-order functionsinstead of side-effect-based looping. Automatictail calloptimization is not supported as the JVM does not support it natively;[43][44][45]it is possible to do so explicitly by using therecurkeyword.[46]Forparallelandconcurrentprogramming Clojure providessoftware transactional memory,[47]a reactiveagent system,[1]andchannel-based concurrent programming.[48]
Clojure 1.7 introduced reader conditionals by allowing the embedding of Clojure, ClojureScript and ClojureCLR code in the same namespace.[49][21]Transducers were added as a method for composing transformations. Transducers enable higher-order functions such asmapandfoldto generalize over any source of input data. While traditionally these functions operate onsequences, transducers allow them to work on channels and let the user define their own models for transduction.[50][51][52]
Extensible Data Notation, oredn,[53]is a subset of the Clojure language intended as a data transfer format. It can be used to serialize and deserialize Clojure data structures, and Clojure itself uses a superset of edn to represent programs.
edn is used in a similar way to JSON or XML, but has a relatively large set of built-in elements, including nil, booleans, characters, strings, symbols, keywords, integers, floating-point numbers, lists, vectors, maps, and sets.
In addition to those elements, it supports extensibility through the use oftags, which consist of the character#followed by a symbol. When encountering a tag, the reader passes the value of the next element to the corresponding handler, which returns a data value. For example, this could be a tagged element:#myapp/Person {:first "Fred" :last "Mertz"}, whose interpretation will depend on the appropriate handler of the reader.
This definition of extension elements in terms of the others avoids relying on either convention or context to convey elements not included in the base set.
The primary platform of Clojure isJava,[20][36]but other target implementations exist. The most notable of these is ClojureScript,[54]which compiles toECMAScript3,[55]and ClojureCLR,[56]a full port on the.NETplatform, interoperable with its ecosystem.
Other implementations of Clojure on different platforms include:
Tooling for Clojure development has seen significant improvement over the years. The following is a list of some popularIDEsandtext editorswith plug-ins that add support for programming in Clojure:[69]
In addition to the tools provided by the community, the official Clojurecommand-line interface(CLI) tools[38]have also become available onLinux,macOS, andWindowssince Clojure 1.9.[83]
The development process is restricted to the Clojure core team, though issues are publicly visible at the ClojureJIRAproject page.[84]Anyone can ask questions or submit issues and ideas at ask.clojure.org.[85]If it's determined that a new issue warrants a JIRA ticket, a core team member will triage it and add it. JIRA issues are processed by a team of screeners and finally approved by Rich Hickey.[86][87]
With continued interest in functional programming, Clojure's adoption by software developers using the Java platform has continued to increase.[88]The language has also been recommended by software developers such as Brian Goetz,[89][90][91]Eric Evans,[92][93]James Gosling,[94]Paul Graham,[95]andRobert C. Martin.[96][97][98][99]ThoughtWorks, while assessing functional programming languages for their Technology Radar,[100]described Clojure as "a simple, elegant implementation of Lisp on the JVM" in 2010 and promoted its status to "ADOPT" in 2012.[101]
In the "JVM Ecosystem Report 2018" (which was claimed to be "the largest survey ever of Java developers"), that was prepared in collaboration by Snyk and Java Magazine, ranked Clojure as the 2nd most used programming language on the JVM for "main applications".[102]Clojure is used in industry by firms[103]such asApple,[104][105]Atlassian,[106]Funding Circle,[107]Netflix,[108]Nubank,[109]Puppet,[110]andWalmart[111]as well as government agencies such asNASA.[112]It has also been used for creative computing, including visual art, music, games, and poetry.[113]
In the 2023 edition of the Stack Overflow Developer Survey, Clojure was the fourth most admired in the category of programming and scripting languages, with 68.51% of the respondents who had worked with it in the previous year saying they would like to continue using it. In the desired category, however, it was marked as such by only 2.2% of those surveyed, whereas the highest-scoring language, JavaScript, was desired by 40.15% of the developers participating in the survey.[114]
|
https://en.wikipedia.org/wiki/Clojure
|
Clusteringcan refer to the following:
Incomputing:
Ineconomics:
Ingraph theory:
|
https://en.wikipedia.org/wiki/Clustering
|
Incomputer science,2-satisfiability,2-SATor just2SATis acomputational problemof assigning values to variables, each of which has two possible values, in order to satisfy a system ofconstraintson pairs of variables. It is a special case of the generalBoolean satisfiability problem, which can involve constraints on more than two variables, and ofconstraint satisfaction problems, which can allow more than two choices for the value of each variable. But in contrast to those more general problems, which areNP-complete, 2-satisfiability can be solved inpolynomial time.
Instances of the 2-satisfiability problem are typically expressed asBoolean formulasof a special type, calledconjunctive normal form(2-CNF) orKrom formulas. Alternatively, they may be expressed as a special type ofdirected graph, theimplication graph, which expresses the variables of an instance and their negations as vertices in a graph, and constraints on pairs of variables as directed edges. Both of these kinds of inputs may be solved inlinear time, either by a method based onbacktrackingor by using thestrongly connected componentsof the implication graph.Resolution, a method for combining pairs of constraints to make additional valid constraints, also leads to a polynomial time solution. The 2-satisfiability problems provide one of two major subclasses of the conjunctive normal form formulas that can be solved in polynomial time; the other of the two subclasses isHorn-satisfiability.
2-satisfiability may be applied to geometry and visualization problems in which a collection of objects each have two potential locations and the goal is to find a placement for each object that avoids overlaps with other objects. Other applications include clustering data to minimize the sum of the diameters of the clusters, classroom and sports scheduling, and recovering shapes from information about their cross-sections.
Incomputational complexity theory, 2-satisfiability provides an example of anNL-completeproblem, one that can be solved non-deterministically using a logarithmic amount of storage and that is among the hardest of the problems solvable in this resource bound. The set of all solutions to a 2-satisfiability instance can be given the structure of amedian graph, but counting these solutions is#P-completeand therefore not expected to have a polynomial-time solution. Random instances undergo a sharp phase transition from solvable to unsolvable instances as the ratio of constraints to variables increases past 1, a phenomenon conjectured but unproven for more complicated forms of the satisfiability problem. A computationally difficult variation of 2-satisfiability, finding a truth assignment that maximizes the number of satisfied constraints, has anapproximation algorithmwhose optimality depends on theunique games conjecture, and another difficult variation, finding a satisfying assignment minimizing the number of true variables, is an important test case forparameterized complexity.
A 2-satisfiability problem may be described using aBoolean expressionwith a special restricted form. It is aconjunction(a Booleanandoperation) ofclauses, where each clause is adisjunction(a Booleanoroperation) of two variables or negated variables. The variables or their negations appearing in this formula are known asliterals.[1]For example, the following formula is in conjunctive normal form, with seven variables, eleven clauses, and 22 literals:(x0∨x2)∧(x0∨¬x3)∧(x1∨¬x3)∧(x1∨¬x4)∧(x2∨¬x4)∧(x0∨¬x5)∧(x1∨¬x5)∧(x2∨¬x5)∧(x3∨x6)∧(x4∨x6)∧(x5∨x6).{\displaystyle {\begin{aligned}&(x_{0}\lor x_{2})\land (x_{0}\lor \lnot x_{3})\land (x_{1}\lor \lnot x_{3})\land (x_{1}\lor \lnot x_{4})\land {}\\&(x_{2}\lor \lnot x_{4})\land {}(x_{0}\lor \lnot x_{5})\land (x_{1}\lor \lnot x_{5})\land (x_{2}\lor \lnot x_{5})\land {}\\&(x_{3}\lor x_{6})\land (x_{4}\lor x_{6})\land (x_{5}\lor x_{6}).\end{aligned}}}
The 2-satisfiability problem is to find atruth assignmentto these variables that makes the whole formula true. Such an assignment chooses whether to make each of the variables true or false, so that at least one literal in every clause becomes true. For the expression shown above, one possible satisfying assignment is the one that sets all seven of the variables to true. Every clause has at least one non-negated variable, so this assignment satisfies every clause. There are also 15 other ways of setting all the variables so that the formula becomes true. Therefore, the 2-satisfiability instance represented by this expression is satisfiable.
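As a sanity check of the example above, a brute-force enumeration over all 2^7 assignments can confirm its satisfiability; this is only an illustration, since exhaustive search is exponential and not how 2-SAT is solved in practice. The clause encoding, pairs of (variable index, negated?) pairs, is an assumption used throughout these sketches.

```python
# Brute-force check (for illustration only) of the example 2-CNF formula above:
# enumerate all 2^7 assignments and count how many satisfy every clause.
from itertools import product

# Clauses copied from the formula above; a literal is (variable index, negated?).
clauses = [
    ((0, False), (2, False)), ((0, False), (3, True)), ((1, False), (3, True)),
    ((1, False), (4, True)),  ((2, False), (4, True)), ((0, False), (5, True)),
    ((1, False), (5, True)),  ((2, False), (5, True)), ((3, False), (6, False)),
    ((4, False), (6, False)), ((5, False), (6, False)),
]

def satisfies(assignment, clauses):
    # a literal (v, neg) is true when the variable's value differs from its negation flag
    return all(any(assignment[v] != neg for v, neg in clause) for clause in clauses)

count = sum(satisfies(a, clauses) for a in product([False, True], repeat=7))
print(count)  # the text above states that there are 16 satisfying assignments
```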
Formulas in this form are known as 2-CNF formulas. The "2" in this name stands for the number of literals per clause, and "CNF" stands forconjunctive normal form, a type of Boolean expression in the form of a conjunction of disjunctions.[1]They are also called Krom formulas, after the work ofUC Davismathematician Melven R. Krom, whose 1967 paper was one of the earliest works on the 2-satisfiability problem.[2]
Each clause in a 2-CNF formula islogically equivalentto an implication from one variable or negated variable to the other. For example, the second clause in the example may be written in any of three equivalent ways:(x0∨¬x3)≡(¬x0⇒¬x3)≡(x3⇒x0).{\displaystyle (x_{0}\lor \lnot x_{3})\;\equiv \;(\lnot x_{0}\Rightarrow \lnot x_{3})\;\equiv \;(x_{3}\Rightarrow x_{0}).}Because of this equivalence between these different types of operation, a 2-satisfiability instance may also be written inimplicative normal form, in which we replace eachorclause in the conjunctive normal form by the two implications to which it is equivalent.[3]
A third, more graphical way of describing a 2-satisfiability instance is as animplication graph. An implication graph is adirected graphin which there is onevertexper variable or negated variable, and an edge connecting one vertex to another whenever the corresponding variables are related by an implication in the implicative normal form of the instance. An implication graph must be askew-symmetric graph, meaning that it has asymmetrythat takes each variable to its negation and reverses the orientations of all of the edges.[4]
Several algorithms are known for solving the 2-satisfiability problem. The most efficient of them takelinear time.[2][4][5]
Krom (1967)described the followingpolynomial timedecision procedure for solving 2-satisfiability instances.[2]
Suppose that a 2-satisfiability instance contains two clauses that both use the same variablex, but thatxis negated in one clause and not in the other. Then the two clauses may be combined to produce a third clause, having the two other literals in the two clauses; this third clause must also be satisfied whenever the first two clauses are both satisfied. This is calledresolution. For instance, we may combine the clauses(a∨b){\displaystyle (a\lor b)}and(¬b∨¬c){\displaystyle (\lnot b\lor \lnot c)}in this way to produce the clause(a∨¬c){\displaystyle (a\lor \lnot c)}. In terms of the implicative form of a 2-CNF formula, this rule amounts to finding two implications¬a⇒b{\displaystyle \lnot a\Rightarrow b}andb⇒¬c{\displaystyle b\Rightarrow \lnot c}, and inferring bytransitivitya third implication¬a⇒¬c{\displaystyle \lnot a\Rightarrow \lnot c}.[2]
Krom writes that a formula isconsistentif repeated application of this inference rule cannot generate both the clauses(x∨x){\displaystyle (x\lor x)}and(¬x∨¬x){\displaystyle (\lnot x\lor \lnot x)}, for any variablex{\displaystyle x}. As he proves, a 2-CNF formula is satisfiable if and only if it is consistent. For, if a formula is not consistent, it is not possible to satisfy both of the two clauses(x∨x){\displaystyle (x\lor x)}and(¬x∨¬x){\displaystyle (\lnot x\lor \lnot x)}simultaneously. And, if it is consistent, then the formula can be extended by repeatedly adding one clause of the form(x∨x){\displaystyle (x\lor x)}or(¬x∨¬x){\displaystyle (\lnot x\lor \lnot x)}at a time, preserving consistency at each step, until it includes such a clause for every variable. At each of these extension steps, one of these two clauses may always be added while preserving consistency, for if not then the other clause could be generated using the inference rule. Once all variables have a clause of this form in the formula, a satisfying assignment of all of the variables may be generated by setting a variablex{\displaystyle x}to true if the formula contains the clause(x∨x){\displaystyle (x\lor x)}and setting it to false if the formula contains the clause(¬x∨¬x){\displaystyle (\lnot x\lor \lnot x)}.[2]
Krom was concerned primarily withcompletenessof systems of inference rules, rather than with the efficiency of algorithms. However, his method leads to apolynomial timebound for solving 2-satisfiability problems. By grouping together all of the clauses that use the same variable, and applying the inference rule to each pair of clauses, it is possible to find all inferences that are possible from a given 2-CNF instance, and to test whether it is consistent, in total timeO(n3), wherenis the number of variables in the instance. This formula comes from multiplying the number of variables by theO(n2)number of pairs of clauses involving a given variable, to which the inference rule may be applied. Thus, it is possible to determine whether a given 2-CNF instance is satisfiable in timeO(n3). Because finding a satisfying assignment using Krom's method involves a sequence ofO(n)consistency checks, it would take timeO(n4).Even, Itai & Shamir (1976)quote a faster time bound ofO(n2)for this algorithm, based on more careful ordering of its operations. Nevertheless, even this smaller time bound was greatly improved by the later linear time algorithms ofEven, Itai & Shamir (1976)andAspvall, Plass & Tarjan (1979).
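A rough sketch of this resolution-based decision procedure follows; it is not Krom's original formulation and is written for clarity rather than for the stated time bounds. Encoding literals as signed integers is an assumption made for this example.

```python
# Sketch of Krom's resolution-based consistency test: repeatedly resolve pairs of
# clauses that contain a variable with opposite signs, and declare the instance
# unsatisfiable if the closure yields a contradiction.
# Literals are integers: +v for x_v, -v for its negation (variables numbered from 1).
def krom_consistent(clauses):
    derived = {frozenset(c) for c in clauses}
    changed = True
    while changed:
        changed = False
        for c1 in list(derived):
            for c2 in list(derived):
                for lit in c1:
                    if -lit in c2:
                        resolvent = (c1 - {lit}) | (c2 - {-lit})
                        if not resolvent:
                            return False          # derived the empty clause
                        if resolvent not in derived:
                            derived.add(resolvent)
                            changed = True
    # inconsistent iff some variable occurs as both unit clauses (x or x) and (not-x or not-x)
    units = {next(iter(c)) for c in derived if len(c) == 1}
    return not any(-u in units for u in units)

# Example: (x1 or x2), (not x1 or x2), (x1 or not x2), (not x1 or not x2)
print(krom_consistent([(1, 2), (-1, 2), (1, -2), (-1, -2)]))  # False: unsatisfiable
```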
In terms of the implication graph of the 2-satisfiability instance, Krom's inference rule can be interpreted as constructing thetransitive closureof the graph. AsCook (1971)observes, it can also be seen as an instance of theDavis–Putnam algorithmfor solving satisfiability problems using the principle ofresolution. Its correctness follows from the more general correctness of the Davis–Putnam algorithm. Its polynomial time bound follows from the fact that each resolution step increases the number of clauses in the instance, which is upper bounded by a quadratic function of the number of variables.[6]
Even, Itai & Shamir (1976)describe a technique involving limitedbacktrackingfor solving constraint satisfaction problems withbinary variablesand pairwise constraints. They apply this technique to a problem of classroom scheduling, but they also observe that it applies to other problems including 2-SAT.[5]
The basic idea of their approach is to build a partial truth assignment, one variable at a time. Certain steps of the algorithms are "choice points", points at which a variable can be given either of two different truth values, and later steps in the algorithm may cause it to backtrack to one of these choice points. However, only the most recent choice can be backtracked over. All choices made earlier than the most recent one are permanent.[5]
Initially, there is no choice point, and all variables are unassigned. At each step, the algorithm chooses the variable whose value to set, as follows: if some clause has one literal already set to false and its other variable still unassigned, the remaining literal is forced, so the algorithm sets it to the value that satisfies the clause and then follows any further assignments forced by that setting; if no assignment is forced but unassigned variables remain, the algorithm creates a choice point and gives one such variable an arbitrarily chosen value; and if a setting would falsify both literals of some clause, the algorithm backtracks to the most recent choice point, undoing the assignments made since that point and trying the opposite value there, or, if both values at that choice point lead to a contradiction or no choice point remains, reports that the instance is unsatisfiable.
Intuitively, the algorithm follows all chains of inference after making each of its choices. This either leads to a contradiction and a backtracking step, or, if no contradiction is derived, it follows that the choice was a correct one that leads to a satisfying assignment. Therefore, the algorithm either correctly finds a satisfying assignment or it correctly determines that the input is unsatisfiable.[5]
Even et al. did not describe in detail how to implement this algorithm efficiently. They state only that by "using appropriate data structures in order to find the implications of any decision", each step of the algorithm (other than the backtracking) can be performed quickly. However, some inputs may cause the algorithm to backtrack many times, each time performing many steps before backtracking, so its overall complexity may be nonlinear. To avoid this problem, they modify the algorithm so that, after reaching each choice point, it begins simultaneously testing both of the two assignments for the variable set at the choice point, spending equal numbers of steps on each of the two assignments. As soon as the test for one of these two assignments would create another choice point, the other test is stopped, so that at any stage of the algorithm there are only two branches of the backtracking tree that are still being tested. In this way, the total time spent performing the two tests for any variable is proportional to the number of variables and clauses of the input formula whose values are permanently assigned. As a result, the algorithm takeslinear timein total.[5]
Aspvall, Plass & Tarjan (1979)found a simpler linear time procedure for solving 2-satisfiability instances, based on the notion ofstrongly connected componentsfromgraph theory.[4]
Two vertices in a directed graph are said to be strongly connected to each other if there is a directed path from one to the other and vice versa. This is anequivalence relation, and the vertices of the graph may be partitioned into strongly connected components, subsets within which every two vertices are strongly connected. There are several efficient linear time algorithms for finding the strongly connected components of a graph, based ondepth-first search:Tarjan's strongly connected components algorithm[7]and thepath-based strong component algorithm[8]each perform a single depth-first search.Kosaraju's algorithmperforms two depth-first searches, but is very simple.
In terms of the implication graph, two literals belong to the same strongly connected component whenever there exist chains of implications from one literal to the other and vice versa. Therefore, the two literals must have the same value in any satisfying assignment to the given 2-satisfiability instance. In particular, if a variable and its negation both belong to the same strongly connected component, the instance cannot be satisfied, because it is impossible to assign both of these literals the same value. As Aspvall et al. showed, this is anecessary and sufficient condition: a 2-CNF formula is satisfiable if and only if there is no variable that belongs to the same strongly connected component as its negation.[4]
This immediately leads to a linear time algorithm for testing satisfiability of 2-CNF formulae: simply perform a strong connectivity analysis on the implication graph and check that each variable and its negation belong to different components. However, as Aspvall et al. also showed, it leads to a linear time algorithm for finding a satisfying assignment, when one exists. Their algorithm performs the following steps: construct the implication graph of the instance and find its strongly connected components; check whether any component contains both a variable and its negation, and if so report that the instance is unsatisfiable; otherwise, form the condensation of the implication graph, a smaller graph with one vertex for each strongly connected component and an edge between two components whenever the implication graph contains an edge between their members, and order its vertices topologically; finally, considering the components in reverse topological order, assign every literal in a component the value true unless the complementary component has already been assigned, in which case assign the component's literals the value false.
Due to the reverse topological ordering and the skew-symmetry, when a literal is set to true, all literals that can be reached from it via a chain of implications will already have been set to true. Symmetrically, when a literalxis set to false, all literals that lead to it via a chain of implications will themselves already have been set to false. Therefore, the truth assignment constructed by this procedure satisfies the given formula, which also completes the proof of correctness of the necessary and sufficient condition identified by Aspvall et al.[4]
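The following is a minimal sketch of this procedure (not the authors' original implementation): it encodes each clause as its two implications, computes strongly connected components with Kosaraju's algorithm, rejects the instance if any variable shares a component with its negation, and otherwise sets a variable to true exactly when its component comes later in the topological order than the component of its negation. The clause format, pairs of (variable index, negated?) pairs, is an assumption carried over from the earlier sketches.

```python
# SCC-based linear-time 2-SAT sketch. Literals are encoded as integers:
# variable i is 2*i, its negation is 2*i + 1.
from collections import defaultdict

def solve_2sat(num_vars, clauses):
    """clauses: list of pairs of literals, each literal = (variable index, negated?)."""
    def lit(v, neg): return 2 * v + (1 if neg else 0)
    def neg(l): return l ^ 1

    n = 2 * num_vars
    graph, rgraph = defaultdict(list), defaultdict(list)
    for (a, na), (b, nb) in clauses:
        u, v = lit(a, na), lit(b, nb)
        # (u or v) is equivalent to (not u -> v) and (not v -> u)
        graph[neg(u)].append(v); rgraph[v].append(neg(u))
        graph[neg(v)].append(u); rgraph[u].append(neg(v))

    # First pass: record vertices in order of DFS finishing time (iterative DFS).
    visited, order = [False] * n, []
    for s in range(n):
        if visited[s]:
            continue
        visited[s] = True
        stack = [(s, iter(graph[s]))]
        while stack:
            node, it = stack[-1]
            advanced = False
            for w in it:
                if not visited[w]:
                    visited[w] = True
                    stack.append((w, iter(graph[w])))
                    advanced = True
                    break
            if not advanced:
                order.append(node)
                stack.pop()

    # Second pass on the reversed graph: label strongly connected components.
    comp, label = [-1] * n, 0
    for s in reversed(order):
        if comp[s] != -1:
            continue
        comp[s] = label
        stack = [s]
        while stack:
            node = stack.pop()
            for w in rgraph[node]:
                if comp[w] == -1:
                    comp[w] = label
                    stack.append(w)
        label += 1

    # Kosaraju labels components in topological order of the condensation, so a
    # larger label means later in that order; set a variable true when its literal's
    # component comes after the component of its negation.
    assignment = []
    for v in range(num_vars):
        if comp[lit(v, False)] == comp[lit(v, True)]:
            return None            # x and not-x are strongly connected: unsatisfiable
        assignment.append(comp[lit(v, False)] > comp[lit(v, True)])
    return assignment
```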
As Aspvall et al. show, a similar procedure involving topologically ordering the strongly connected components of the implication graph may also be used to evaluatefully quantified Boolean formulaein which the formula being quantified is a 2-CNF formula.[4]
A number of exact and approximate algorithms for theautomatic label placementproblem are based on 2-satisfiability. This problem concerns placing textual labels on the features of a diagram or map. Typically, the set of possible locations for each label is highly constrained, not only by the map itself (each label must be near the feature it labels, and must not obscure other features), but by each other: every two labels should avoid overlapping each other, for otherwise they would become illegible. In general, finding a label placement that obeys these constraints is anNP-hardproblem. However, if each feature has only two possible locations for its label (say, extending to the left and to the right of the feature) then label placement may be solved in polynomial time. For, in this case, one may create a 2-satisfiability instance that has a variable for each label and that has a clause for each pair of labels that could overlap, preventing them from being assigned overlapping positions. If the labels are all congruent rectangles, the corresponding 2-satisfiability instance can be shown to have only linearly many constraints, leading to near-linear time algorithms for finding a labeling.[10]Poon, Zhu & Chin (1998)describe a map labeling problem in which each label is a rectangle that may be placed in one of three positions with respect to a line segment that it labels: it may have the segment as one of its sides, or it may be centered on the segment. They represent these three positions using two binary variables in such a way that, again, testing the existence of a valid labeling becomes a 2-satisfiability problem.[11]
Formann & Wagner (1991)use 2-satisfiability as part of anapproximation algorithmfor the problem of finding square labels of the largest possible size for a given set of points, with the constraint that each label has one of its corners on the point that it labels. To find a labeling with a given size, they eliminate squares that, if doubled, would overlap another point, and they eliminate points that can be labeled in a way that cannot possibly overlap with another point's label. They show that these elimination rules cause the remaining points to have only two possible label placements per point, allowing a valid label placement (if one exists) to be found as the solution to a 2-satisfiability instance. By searching for the largest label size that leads to a solvable 2-satisfiability instance, they find a valid label placement whose labels are at least half as large as the optimal solution. That is, theapproximation ratioof their algorithm is at most two.[10][12]Similarly, if each label is rectangular and must be placed in such a way that the point it labels is somewhere along its bottom edge, then using 2-satisfiability to find the largest label size for which there is a solution in which each label has the point on a bottom corner leads to an approximation ratio of at most two.[13]
Similar applications of 2-satisfiability have been made for other geometric placement problems. Ingraph drawing, if the vertex locations are fixed and each edge must be drawn as a circular arc with one of two possible locations (for instance as anarc diagram), then the problem of choosing which arc to use for each edge in order to avoid crossings is a 2-satisfiability problem with a variable for each edge and a constraint for each pair of placements that would lead to a crossing. However, in this case it is possible to speed up the solution, compared to an algorithm that builds and then searches an explicit representation of the implication graph, by searching the graphimplicitly.[14]InVLSIintegrated circuit design, if a collection of modules must be connected by wires that can each bend at most once, then again there are two possible routes for the wires, and the problem of choosing which of these two routes to use, in such a way that all wires can be routed in a single layer of the circuit, can be solved as a 2-satisfiability instance.[15]
Boros et al. (1999)consider another VLSI design problem: the question of whether or not to mirror-reverse each module in a circuit design. This mirror reversal leaves the module's operations unchanged, but it changes the order of the points at which the input and output signals of the module connect to it, possibly changing how well the module fits into the rest of the design. Boroset al.consider a simplified version of the problem in which the modules have already been placed along a single linear channel, in which the wires between modules must be routed, and there is a fixed bound on the density of the channel (the maximum number of signals that must pass through any cross-section of the channel). They observe that this version of the problem may be solved as a 2-satisfiability instance, in which the constraints relate the orientations of pairs of modules that are directly across the channel from each other. As a consequence, the optimal density may also be calculated efficiently, by performing a binary search in which each step involves the solution of a 2-satisfiability instance.[16]
One way ofclustering a set of data pointsin ametric spaceinto two clusters is to choose the clusters in such a way as to minimize the sum of thediametersof the clusters, where the diameter of any single cluster is the largest distance between any two of its points. This is preferable to minimizing the maximum cluster size, which may lead to very similar points being assigned to different clusters. If the target diameters of the two clusters are known, a clustering that achieves those targets may be found by solving a 2-satisfiability instance. The instance has one variable per point, indicating whether that point belongs to the first cluster or the second cluster. Whenever any two points are too far apart from each other for both to belong to the same cluster, a clause is added to the instance that prevents this assignment.
The same method also can be used as a subroutine when the individual cluster diameters are unknown. To test whether a given sum of diameters can be achieved without knowing the individual cluster diameters, one may try all maximal pairs of target diameters that add up to at most the given sum, representing each pair of diameters as a 2-satisfiability instance and using a 2-satisfiability algorithm to determine whether that pair can be realized by a clustering. To find the optimal sum of diameters one may perform a binary search in which each step is a feasibility test of this type. The same approach also works to find clusterings that optimize other combinations than sums of the cluster diameters, and that use arbitrary dissimilarity numbers (rather than distances in a metric space) to measure the size of a cluster.[17]The time bound for this algorithm is dominated by the time to solve a sequence of 2-satisfiability instances that are closely related to each other, andRamnath (2004)shows how to solve these related instances more quickly than if they were solved independently from each other, leading to a total time bound ofO(n3)for the sum-of-diameters clustering problem.[18]
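A small sketch of the feasibility test for a single pair of target diameters, reusing the clause format and the solve_2sat sketch given earlier; the dist argument is an arbitrary dissimilarity function supplied by the caller.

```python
# Sketch of the diameter-based 2-clustering test described above: one variable per
# point (true = "in cluster 1"); forbid pairs that are too far apart to share a cluster.
def two_cluster(points, d1, d2, dist):
    n = len(points)
    clauses = []
    for i in range(n):
        for j in range(i + 1, n):
            d = dist(points[i], points[j])
            if d > d1:   # too far apart to both lie in cluster 1
                clauses.append(((i, True), (j, True)))    # (not x_i) or (not x_j)
            if d > d2:   # too far apart to both lie in cluster 2
                clauses.append(((i, False), (j, False)))  # x_i or x_j
    return solve_2sat(n, clauses)   # None if no clustering meets the targets
```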
Even, Itai & Shamir (1976)consider a model of classroom scheduling in which a set ofnteachers must be scheduled to teach each ofmcohorts of students. The number of hours per week that teacheri{\displaystyle i}spends with cohortj{\displaystyle j}is described by entryRij{\displaystyle R_{ij}}of a matrixR{\displaystyle R}given as input to the problem, and each teacher also has a set of hours during which he or she is available to be scheduled. As they show, the problem isNP-complete, even when each teacher has at most three available hours, but it can be solved as an instance of 2-satisfiability when each teacher only has two available hours. (Teachers with only a single available hour may easily be eliminated from the problem.) In this problem, each variablevij{\displaystyle v_{ij}}corresponds to an hour that teacheri{\displaystyle i}must spend with cohortj{\displaystyle j}, the assignment to the variable specifies whether that hour is the first or the second of the teacher's available hours, and there is a 2-satisfiability clause preventing any conflict of either of two types: two cohorts assigned to a teacher at the same time as each other, or one cohort assigned to two teachers at the same time.[5]
Miyashiro & Matsui (2005)apply 2-satisfiability to a problem of sports scheduling, in which the pairings of around-robin tournamenthave already been chosen and the games must be assigned to the teams' stadiums. In this problem, it is desirable to alternate home and away games to the extent possible, avoiding "breaks" in which a team plays two home games in a row or two away games in a row. At most two teams can avoid breaks entirely, alternating between home and away games; no other team can have the same home-away schedule as these two, because then it would be unable to play the team with which it had the same schedule. Therefore, an optimal schedule has two breakless teams and a single break for every other team. Once one of the breakless teams is chosen, one can set up a 2-satisfiability problem in which each variable represents the home-away assignment for a single team in a single game, and the constraints enforce the properties that any two teams have a consistent assignment for their games, that each team have at most one break before and at most one break after the game with the breakless team, and that no team has two breaks. Therefore, testing whether a schedule admits a solution with the optimal number of breaks can be done by solving a linear number of 2-satisfiability problems, one for each choice of the breakless team. A similar technique also allows finding schedules in which every team has a single break, and maximizing rather than minimizing the number of breaks (to reduce the total mileage traveled by the teams).[19]
Tomographyis the process of recovering shapes from their cross-sections. Indiscrete tomography, a simplified version of the problem that has been frequently studied, the shape to be recovered is apolyomino(a subset of the squares in the two-dimensionalsquare lattice), and the cross-sections provide aggregate information about the sets of squares in individual rows and columns of the lattice. For instance, in the popularnonogrampuzzles, also known as paint by numbers or griddlers, the set of squares to be determined represents the darkpixelsin abinary image, and the input given to the puzzle solver tells him or her how many consecutive blocks of dark pixels to include in each row or column of the image, and how long each of those blocks should be. In other forms of digital tomography, even less information about each row or column is given: only the total number of squares, rather than the number and length of the blocks of squares. An equivalent version of the problem is that we must recover a given0-1 matrixgiven only the sums of the values in each row and in each column of the matrix.
Although there exist polynomial time algorithms to find a matrix having given row and column sums,[20]the solution may be far from unique: any submatrix in the form of a 2 × 2identity matrixcan be complemented without affecting the correctness of the solution. Therefore, researchers have searched for constraints on the shape to be reconstructed that can be used to restrict the space of solutions. For instance, one might assume that the shape is connected; however, testing whether there exists a connected solution is NP-complete.[21]An even more constrained version that is easier to solve is that the shape isorthogonally convex: having a single contiguous block of squares in each row and column.
Improving several previous solutions,Chrobak & Dürr (1999)showed how to reconstruct connected orthogonally convex shapes efficiently, using 2-SAT.[22]The idea of their solution is to guess the indexes of rows containing the leftmost and rightmost cells of the shape to be reconstructed, and then to set up a 2-satisfiability problem that tests whether there exists a shape consistent with these guesses and with the given row and column sums. They use four 2-satisfiability variables for each square that might be part of the given shape, one to indicate whether it belongs to each of four possible "corner regions" of the shape, and they use constraints that force these regions to be disjoint, to have the desired shapes, to form an overall shape with contiguous rows and columns, and to have the desired row and column sums. Their algorithm takes timeO(m3n)wheremis the smaller of the two dimensions of the input shape andnis the larger of the two dimensions. The same method was later extended to orthogonally convex shapes that might be connected only diagonally instead of requiring orthogonal connectivity.[23]
A part of a solver for full nonogram puzzles, Batenburg and Kosters (2008,2009) used 2-satisfiability to combine information obtained from several otherheuristics. Given a partial solution to the puzzle, they usedynamic programmingwithin each row or column to determine whether the constraints of that row or column force any of its squares to be white or black, and whether any two squares in the same row or column can be connected by an implication relation. They also transform the nonogram into a digital tomography problem by replacing the sequence of block lengths in each row and column by its sum, and use amaximum flowformulation to determine whether this digital tomography problem combining all of the rows and columns has any squares whose state can be determined or pairs of squares that can be connected by an implication relation. If either of these two heuristics determines the value of one of the squares, it is included in the partial solution and the same calculations are repeated. However, if both heuristics fail to set any squares, the implications found by both of them are combined into a 2-satisfiability problem and a 2-satisfiability solver is used to find squares whose value is fixed by the problem, after which the procedure is again repeated. This procedure may or may not succeed in finding a solution, but it is guaranteed to run in polynomial time. Batenburg and Kosters report that, although most newspaper puzzles do not need its full power, both this procedure and a more powerful but slower procedure which combines this 2-satisfiability approach with the limited backtracking ofEven, Itai & Shamir (1976)[5]are significantly more effective than the dynamic programming and flow heuristics without 2-satisfiability when applied to more difficult randomly generated nonograms.[24]
Next to 2-satisfiability, the other major subclass of satisfiability problems that can be solved in polynomial time isHorn-satisfiability. In this class of satisfiability problems, the input is again a formula in conjunctive normal form. It can have arbitrarily many literals per clause but at most one positive literal.Lewis (1978)found a generalization of this class,renamable Horn satisfiability, that can still be solved in polynomial time by means of an auxiliary 2-satisfiability instance. A formula isrenamable Hornwhen it is possible to put it into Horn form by replacing some variables by their negations. To do so, Lewis sets up a 2-satisfiability instance with one variable for each variable of the renamable Horn instance, where the 2-satisfiability variables indicate whether or not to negate the corresponding renamable Horn variables.
In order to produce a Horn instance, no two variables that appear in the same clause of the renamable Horn instance should appear positively in that clause; this constraint on a pair of variables is a 2-satisfiability constraint. By finding a satisfying assignment to the resulting 2-satisfiability instance, Lewis shows how to turn any renamable Horn instance into a Horn instance in polynomial time.[25]By breaking up long clauses into multiple smaller clauses, and applying a linear-time 2-satisfiability algorithm, it is possible to reduce this to linear time.[26]
2-satisfiability has also been applied to problems of recognizingundirected graphsthat can be partitioned into anindependent setand a small number ofcomplete bipartite subgraphs,[27]inferring business relationships among autonomous subsystems of the internet,[28]and reconstruction ofevolutionary trees.[29]
A nondeterministic algorithm for determining whether a 2-satisfiability instance isnotsatisfiable, using only alogarithmicamount of writable memory, is easy to describe: simply choose (nondeterministically) a variablevand search (nondeterministically) for a chain of implications leading fromvto its negation and then back tov. If such a chain is found, the instance cannot be satisfiable. By theImmerman–Szelepcsényi theorem, it is also possible in nondeterministic logspace to verify that a satisfiable 2-satisfiability instance is satisfiable.
2-satisfiability isNL-complete,[30]meaning that it is one of the "hardest" or "most expressive" problems in thecomplexity classNLof problems solvable nondeterministically in logarithmic space. Completeness here means that a deterministic Turing machine using only logarithmic space can transform any other problem inNLinto an equivalent 2-satisfiability problem. Analogously to similar results for the more well-known complexity classNP, this transformation together with the Immerman–Szelepcsényi theorem allow any problem in NL to be represented as asecond order logicformula with a single existentially quantified predicate with clauses limited to length 2. Such formulae are known as SO-Krom.[31]Similarly, theimplicative normal formcan be expressed infirst order logicwith the addition of an operator fortransitive closure.[31]
The set of all solutions to a 2-satisfiability instance has the structure of amedian graph, in which an edge corresponds to the operation of flipping the values of a set of variables that are all constrained to be equal or unequal to each other. In particular, by following edges in this way one can get from any solution to any other solution. Conversely, any median graph can be represented as the set of solutions to a 2-satisfiability instance in this way. The median of any three solutions is formed by setting each variable to the value it holds in themajorityof the three solutions. This median always forms another solution to the instance.[32]
Feder (1994)describes an algorithm for efficiently listing all solutions to a given 2-satisfiability instance, and for solving several related problems.[33]There also exist algorithms for finding two satisfying assignments that have the maximalHamming distancefrom each other.[34]
#2SATis the problem of counting the number of satisfying assignments to a given 2-CNF formula. Thiscounting problemis#P-complete,[35]which implies that it is not solvable inpolynomial timeunlessP = NP. Moreover, there is nofully polynomial randomized approximation schemefor #2SAT unlessNP=RPand this even holds when the input is restricted to monotone 2-CNF formulas, i.e., 2-CNF formulas in which eachliteralis a positive occurrence of a variable.[36]
The fastest known algorithm for computing the exact number of satisfying assignments to a 2SAT formula runs in timeO(1.2377n){\displaystyle O(1.2377^{n})}.[37][38][39]
One can form a 2-satisfiability instance at random, for a given numbernof variables andmof clauses, by choosing each clause uniformly at random from the set of all possible two-variable clauses. Whenmis small relative ton, such an instance will likely be satisfiable, but larger values ofmhave smaller probabilities of being satisfiable. More precisely, ifm/nis fixed as a constant α ≠ 1, the probability of satisfiability tends to alimitasngoes to infinity: if α < 1, the limit is one, while if α > 1, the limit is zero. Thus, the problem exhibits aphase transitionat α = 1.[40]
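The phase transition can be observed experimentally with a sketch along these lines, again reusing the solve_2sat sketch from above; sampling two distinct variables per clause is an assumption of this particular random model.

```python
# Sketch of the random 2-SAT phase-transition experiment: estimate the probability
# of satisfiability as the clause/variable ratio alpha varies.
import random

def random_instance(num_vars, num_clauses):
    clauses = []
    for _ in range(num_clauses):
        a, b = random.sample(range(num_vars), 2)          # two distinct variables
        clauses.append(((a, random.random() < 0.5),
                        (b, random.random() < 0.5)))
    return clauses

def satisfiable_fraction(num_vars, alpha, trials=200):
    m = int(alpha * num_vars)
    hits = sum(solve_2sat(num_vars, random_instance(num_vars, m)) is not None
               for _ in range(trials))
    return hits / trials

for alpha in (0.5, 0.9, 1.0, 1.1, 1.5):
    print(alpha, satisfiable_fraction(200, alpha))
```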
In the maximum-2-satisfiability problem (MAX-2-SAT), the input is a formula inconjunctive normal formwith twoliteralsper clause, and the task is to determine the maximum number of clauses that can be simultaneously satisfied by an assignment. Like the more generalmaximum satisfiability problem, MAX-2-SAT isNP-hard. The proof is by reduction from3SAT.[41]
By formulating MAX-2-SAT as a problem of finding acut(that is, a partition of the vertices into two subsets) maximizing the number of edges that have one endpoint in the first subset and one endpoint in the second, in a graph related to the implication graph, and applyingsemidefinite programmingmethods to this cut problem, it is possible to find in polynomial time an approximate solution that satisfies at least 0.940... times the optimal number of clauses.[42]AbalancedMAX 2-SAT instance is an instance of MAX 2-SAT where every variable appears positively and negatively with equal weight. For this problem, Austrin has improved the approximation ratio tomin{(3−cosθ)−1(2+(2/π)θ):π/2≤θ≤π}=0.943...{\displaystyle \min \left\{(3-\cos \theta )^{-1}(2+(2/\pi )\theta )\,:\,\pi /2\leq \theta \leq \pi \right\}=0.943...}.[43]
If theunique games conjectureis true, then it is impossible to approximate MAX 2-SAT, balanced or not, with anapproximation constantbetter than 0.943... in polynomial time.[44]Under the weaker assumption thatP ≠ NP, the problem is only known to be inapproximable within a constant better than 21/22 = 0.95454...[45]
Various authors have also explored exponential worst-case time bounds for exact solution of MAX-2-SAT instances.[46]
In the weighted 2-satisfiability problem (W2SAT), the input is ann{\displaystyle n}-variable 2SAT instance and an integerk, and the problem is to decide whether there exists a satisfying assignment in which exactlykof the variables are true.[47]
The W2SAT problem includes as a special case thevertex cover problem, of finding a set ofkvertices that together touch all the edges of a given undirected graph. For any given instance of the vertex cover problem, one can construct an equivalent W2SAT problem with a variable for each vertex of a graph. Each edgeuvof the graph may be represented by a 2SAT clauseu∨vthat can be satisfied only by including eitheruorvamong the true variables of the solution. Then the satisfying instances of the resulting 2SAT formula encode solutions to the vertex cover problem, and there is a satisfying assignment withktrue variables if and only if there is a vertex cover withkvertices. Therefore, like vertex cover, W2SAT isNP-complete.
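The reduction in the preceding paragraph is short enough to state directly in code; the function below reuses the clause format of the earlier sketches.

```python
# Sketch: encode Vertex Cover as weighted 2-SAT (W2SAT). One variable per vertex;
# each edge (u, v) becomes the clause (x_u or x_v), so a satisfying assignment with
# exactly k true variables corresponds to a vertex cover of size k.
def vertex_cover_to_w2sat(edges):
    return [((u, False), (v, False)) for u, v in edges]

# Example: on the path 0-1-2, choosing vertex 1 alone covers both edges.
clauses = vertex_cover_to_w2sat([(0, 1), (1, 2)])
```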
Moreover, inparameterized complexityW2SAT provides a naturalW[1]-completeproblem,[47]which implies that W2SAT is notfixed-parameter tractableunless this holds for all problems inW[1]. That is, it is unlikely that there exists an algorithm for W2SAT whose running time takes the formf(k)·nO(1). Even more strongly, W2SAT cannot be solved in timeno(k)unless theexponential time hypothesisfails.[48]
As well as finding the first polynomial-time algorithm for 2-satisfiability,Krom (1967)also formulated the problem of evaluatingfully quantified Boolean formulaein which the formula being quantified is a 2-CNF formula. The 2-satisfiability problem is the special case of this quantified 2-CNF problem, in which all quantifiers areexistential. Krom also developed an effective decision procedure for these formulae.Aspvall, Plass & Tarjan (1979)showed that it can be solved in linear time, by an extension of their technique of strongly connected components and topological ordering.[2][4]
The 2-satisfiability problem can also be asked for propositionalmany-valued logics. The algorithms are not usually linear, and for some logics the problem is even NP-complete. See Hähnle (2001,2003) for surveys.[49]
|
https://en.wikipedia.org/wiki/2-satisfiability
|
Bluetooth Meshis a computermesh networkingstandardbased onBluetooth Low Energythat allows for many-to-many communication over Bluetooth radio. The Bluetooth Mesh specifications were defined in the Mesh Profile[1]and Mesh Model[2]specifications by theBluetooth Special Interest Group(Bluetooth SIG). Bluetooth Mesh was conceived in 2014[3]and adopted on July 13, 2017(2017-07-13).[4]
Bluetooth Mesh is a mesh networking standard that operates on a flood network principle. It is based on nodes relaying the messages: every relay node that receives a network packet may retransmit it with the TTL decremented by one. Message caching is used to prevent relaying recently seen messages.
Communication is carried in the messages that may be up to 384 bytes long, when using Segmentation and Reassembly (SAR) mechanism, but most of the messages fit in one segment, that is 11 bytes. Each message starts with an opcode, which may be a single byte (for special messages), 2 bytes (for standard messages), or 3 bytes (for vendor-specific messages).
Every message has a source and a destination address, determining which devices process messages. Devices publish messages to destinations which can be single things / groups of things / everything.
Each message has a sequence number that protects the network against replay attacks.
Each message is encrypted and authenticated. Two keys are used to secure messages: (1) network keys – allocated to a single mesh network, (2) application keys – specific for a given application functionality, e.g. turning the light on vs reconfiguring the light.
Messages have atime to live(TTL). Each time message is received and retransmitted, TTL is decremented which limits the number of "hops", eliminating endless loops.
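A toy sketch of this relay behaviour (illustrative only, not the actual Bluetooth Mesh network layer) combines the message cache with the TTL check; the message fields and cache size are assumptions made for this example.

```python
# Illustrative sketch of flood relaying with a TTL and a message cache
# (not the real Bluetooth Mesh network layer).
from collections import deque

class RelayNode:
    def __init__(self, cache_size=128):
        self.cache = deque(maxlen=cache_size)   # recently seen (src, seq) pairs

    def handle(self, message):
        key = (message["src"], message["seq"])
        if key in self.cache:
            return None                          # already seen: do not relay again
        self.cache.append(key)
        if message["ttl"] <= 1:
            return None                          # no hops left: process locally only
        relayed = dict(message)
        relayed["ttl"] = message["ttl"] - 1      # decrement TTL on each hop
        return relayed                           # would be retransmitted on the bearer

node = RelayNode()
msg = {"src": 0x0001, "dst": 0xC000, "seq": 42, "ttl": 5, "payload": b"\x82\x02"}
print(node.handle(msg))   # relayed copy with ttl == 4
print(node.handle(msg))   # None: cached, not relayed again
```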
Bluetooth Mesh has a layered architecture, with multiple layers as below.
Nodes that support the various optional features can be combined to form a particular mesh network topology. Relay nodes retransmit received messages over the advertising bearer to enable larger networks. Proxy nodes allow communication over the GATT bearer in addition to advertising bearers. Low Power nodes operate at significantly reduced duty cycles only in conjunction with a node supporting the Friend feature. Friend nodes store messages destined for those nodes.
The practical limits of Bluetooth Mesh technology are unknown. Some limits that are built into the specification include:
The number of virtual groups is 2^128.
As of version 1.0 of Bluetooth Mesh specification,[2]the following standard models and model groups have been defined:
Foundation models have been defined in the core specification. Two of them are mandatory for all mesh nodes.
Provisioning is a process of installing the device into a network. It is a mandatory step to build a Bluetooth Mesh network.
In the provisioning process, a provisioner securely distributes a network key and a unique address space for a device. The provisioning protocol uses P256 Elliptic CurveDiffie-HellmanKey Exchange to create a temporary key to encrypt network key and other information. This provides security from a passive eavesdropper.
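The key-agreement step can be illustrated with the generic cryptography package as below; this is only a conceptual sketch of P-256 ECDH followed by a generic key-derivation function, not the confirmation and key-derivation procedure defined by the Mesh provisioning protocol.

```python
# Illustrative P-256 ECDH key agreement, as used conceptually during provisioning.
# NOT the actual Mesh provisioning key derivation, which defines its own KDFs.
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

provisioner_key = ec.generate_private_key(ec.SECP256R1())
device_key = ec.generate_private_key(ec.SECP256R1())

# Each side combines its own private key with the peer's public key.
shared_1 = provisioner_key.exchange(ec.ECDH(), device_key.public_key())
shared_2 = device_key.exchange(ec.ECDH(), provisioner_key.public_key())
assert shared_1 == shared_2

# Derive a temporary symmetric key from the shared secret (illustrative KDF choice).
session_key = HKDF(algorithm=hashes.SHA256(), length=16,
                   salt=None, info=b"prov-example").derive(shared_1)
```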
It also provides various authentication mechanisms to protect network information, from an active eavesdropper who usesman-in-the-middle attack, during provisioning process.
A key unique to a device known as "Device Key" is derived from elliptic curve shared secret on provisioner and device during the provisioning process. This device key is used by the provisioner to encrypt messages for that specific device.
The security of the provisioning process has been analyzed in a paper presented during theIEEE CNS2018 conference.[5]
The provisioning can be performed using a Bluetooth GATT connection or advertising using the specific bearer.[1]
Free softwareandopen source softwareimplementations include the following:
|
https://en.wikipedia.org/wiki/Bluetooth_mesh#Implementations
|
Criminal Reduction Utilising Statistical History (CRUSH) is an IBM predictive analytics system that attempts to predict the location of future crimes.[1] It was developed as part of the Blue CRUSH program in conjunction with the Memphis Police Department and the University of Memphis Criminology and Research department.[2] In Memphis it was "credited as a key factor behind a 31 per cent fall in crime and 15 per cent drop in violent crime."[3]
As of July 2010, it was being trialed by two British police forces.[1]
In 2014 a modified version of the system, called CRASH (Crash Reduction Analysing Statistical History), became operational in Tennessee, aimed at preventing vehicle accidents.[4]
|
https://en.wikipedia.org/wiki/Criminal_Reduction_Utilising_Statistical_History
|
The following tables provide a comparison of computer algebra systems (CAS).[1][2][3] A CAS is a package comprising a set of algorithms for performing symbolic manipulations on algebraic objects, a language to implement them, and an environment in which to use the language.[4][5] A CAS may include a user interface and graphics capability; and to be effective may require a large library of algorithms, efficient data structures and a fast kernel.[6]
These computer algebra systems are sometimes combined with "front end" programs that provide a better user interface, such as the general-purpose GNU TeXmacs.
Below is a summary of significantly developed symbolic functionality in each of the systems.
Those which do not "edit equations" may have a GUI, plotting, ASCII graphic formulae and math font printing. The ability to generate plaintext files is also a sought-after feature because it allows a work to be understood by people who do not have a computer algebra system installed.
The software can run under their respectiveoperating systemsnatively withoutemulation. Some systems must be compiled first using an appropriate compiler for the source language and target platform. For some platforms, only older releases of the software may be available.
Somegraphing calculatorshave CAS features.
|
https://en.wikipedia.org/wiki/Comparison_of_computer_algebra_systems
|
A software-defined perimeter (SDP), sometimes referred to as a black cloud, is a method of enhancing computer security. The SDP framework was developed by the Cloud Security Alliance to control access to resources based on identity. In an SDP, connectivity follows a need-to-know model, where both device posture and identity are verified before access to application infrastructure is granted.[1] The application infrastructure in a software-defined perimeter is effectively "black" (a term used by the Department of Defense to describe an undetectable infrastructure), lacking visible DNS information or IP addresses. Proponents of these systems claim that an SDP mitigates many common network-based attacks, including server scanning, denial-of-service, SQL injection, operating system and application vulnerability exploits, man-in-the-middle attacks, pass-the-hash, pass-the-ticket, and other attacks by unauthorized users.[2]
An SDP is a security methodology that controls access to resources based on user identity and device posture. It follows a zero-trust model, verifying both factors before granting access to applications. This approach aims to make internal infrastructure invisible to the internet, reducing the attack surface for threats like denial-of-service (DoS) and server scanning.[1]
Traditional network security relies on a fixed perimeter, typically protected by firewalls. While this isolates internal services, it becomes vulnerable as cloud-hosted applications, remote and mobile workforces, and personally owned devices move users and resources outside that fixed boundary.
SDPs address these issues by hiding infrastructure from unauthenticated parties and by granting access per user and per device only after identity and posture have been verified.
An SDP consists of two main components: controllers, which authenticate participants and broker authorization, and hosts, which either initiate connections to protected services or accept them.
The workflow involves hosts first authenticating to a controller; the controller then informs accepting hosts which authorized initiating hosts may connect, and only afterwards are encrypted connections established directly between the hosts.
There are several ways to deploy SDPs, each suited to specific scenarios, including client-to-gateway, client-to-server, server-to-server, and client-to-server-to-client configurations.
SDPs offer security benefits in various situations, such as securing remote and third-party access, isolating high-value applications, and reducing the exposed attack surface of cloud and hybrid workloads.
Software-defined perimeters offer a dynamic approach to network security, aligning with zero-trust principles. They can enhance security for on-premise, cloud, and hybrid environments.
|
https://en.wikipedia.org/wiki/Software-defined_perimeter
|
Automatic vectorization, in parallel computing, is a special case of automatic parallelization, where a computer program is converted from a scalar implementation, which processes a single pair of operands at a time, to a vector implementation, which processes one operation on multiple pairs of operands at once. For example, modern conventional computers, including specialized supercomputers, typically have vector operations that simultaneously perform operations such as the following four additions (via SIMD or SPMD hardware):
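    c[0] = a[0] + b[0];
    c[1] = a[1] + b[1];
    c[2] = a[2] + b[2];
    c[3] = a[3] + b[3];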
However, in most programming languages one typically writes loops that sequentially perform additions of many numbers. Here is an example of such a loop, written in C:
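    for (i = 0; i < n; i++)
        c[i] = a[i] + b[i];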
A vectorizing compiler transforms such loops into sequences of vector operations. These vector operations perform additions on blocks of elements from the arrays a, b and c. Automatic vectorization is a major research topic in computer science.
Early computers usually had one logic unit, which executed one instruction on one pair of operands at a time.Computer languagesand programs therefore were designed to execute in sequence. Modern computers, though, can do many things at once. So, many optimizing compilers perform automatic vectorization, where parts of sequential programs are transformed into parallel operations.
Loop vectorization transforms procedural loops by assigning a processing unit to each pair of operands. Programs spend most of their time within such loops. Therefore, vectorization can significantly accelerate them, especially over large data sets. Loop vectorization is implemented in Intel's MMX, SSE, and AVX, in Power ISA's AltiVec, in ARM's NEON, SVE and SVE2, and in RISC-V's Vector Extension instruction sets.
Many constraints prevent or hinder vectorization. Sometimes vectorization can slow down execution, for example because of pipeline synchronization or data-movement timing. Loop dependence analysis identifies loops that can be vectorized, relying on the data dependence of the instructions inside loops.
Automatic vectorization, like anyloop optimizationor other compile-time optimization, must exactly preserve program behavior.
All dependencies must be respected during execution to prevent incorrect results.
In general, loop-invariant dependencies and lexically forward dependencies can be easily vectorized, and lexically backward dependencies can be transformed into lexically forward dependencies. However, these transformations must be done safely, in order to ensure that the dependences between all statements remain true to the original.
Cyclic dependencies must be processed independently of the vectorized instructions.
Integerprecision(bit-size) must be kept during vector instruction execution. The correct vector instruction must be chosen based on the size and behavior of the internal integers. Also, with mixed integer types, extra care must be taken to promote/demote them correctly without losing precision. Special care must be taken withsign extension(because multiple integers are packed inside the same register) and during shift operations, or operations withcarry bitsthat would otherwise be taken into account.
Floating-point precision must be kept as well, unless IEEE-754 compliance is turned off, in which case operations will be faster but the results may vary slightly. Large variations, even with IEEE-754 compliance ignored, usually signify programmer error.
To vectorize a program, the compiler's optimizer must first understand the dependencies between statements and re-align them, if necessary. Once the dependencies are mapped, the optimizer must properly arrange the implementing instructions changing appropriate candidates to vector instructions, which operate on multiple data items.
The first step is to build the dependency graph, identifying which statements depend on which other statements. This involves examining each statement and identifying every data item that the statement accesses, mapping array access modifiers to functions and checking every access's dependency to all others in all statements. Alias analysis can be used to certify that different variables access (or intersect) the same region in memory.
The dependency graph contains all local dependencies with distance not greater than the vector size. So, if the vector register is 128 bits, and the array type is 32 bits, the vector size is 128/32 = 4. All other non-cyclic dependencies should not invalidate vectorization, since there won't be any concurrent access in the same vector instruction.
Suppose the vector size is the same as 4 ints:
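    for (i = 0; i < 128; i++) {
        a[i] = a[i - 16];   /* distance 16 exceeds the vector size of 4: safe */
        b[i] = b[i - 1];    /* distance 1 falls inside one vector: cyclic,
                               must be handled outside the vector instructions */
    }

(An illustrative fragment; the array names and the iteration count of 128 are assumptions.)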
Using the graph, the optimizer can then cluster thestrongly connected components(SCC) and separate vectorizable statements from the rest.
For example, consider a program fragment containing three statement groups inside a loop: (SCC1+SCC2), SCC3 and SCC4, in that order, in which only the second group (SCC3) can be vectorized. The final program will then contain three loops, one for each group, with only the middle one vectorized. The optimizer cannot join the first with the last without violating statement execution order, which would invalidate the necessary guarantees.
Some non-obvious dependencies can be further optimized based on specific idioms.
For instance, the following self-data-dependence can be vectorized because the values on the right-hand side (RHS) are fetched before they are stored on the left-hand side, so there is no way the data will change within the assignment.
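    for (i = 0; i < 128; i++)
        a[i] = a[i] + b[i];   /* a[i] is read before it is written: vectorizable */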
Self-dependence by scalars can be vectorized byvariable elimination.
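One simple form of this, sketched below with an illustrative temporary x, is the elimination of a scalar that is redefined on every iteration:

    for (i = 0; i < 128; i++) {
        x = a[i];           /* scalar redefined on every iteration */
        b[i] = x + 1;
    }

    /* after eliminating x, the loop vectorizes directly: */
    for (i = 0; i < 128; i++)
        b[i] = a[i] + 1;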
The general framework for loop vectorization is split into four stages: a prelude, where loop-independent variables are prepared and any run-time checks are made; the vectorized loop (or loops) themselves; a postlude, where results needed after the loop are finalized; and a cleanup pass, which handles the remaining iterations that do not fill a complete vector.
Some vectorizations cannot be fully checked at compile time. For example, library functions can defeat optimization if the data they process is supplied by the caller. Even in these cases, run-time optimization can still vectorize loops on-the-fly.
This run-time check is made in the prelude stage; it directs the flow to vectorized instructions if the check passes, and otherwise reverts to standard scalar processing of the variables being passed in registers or as scalar variables.
The following code can easily be vectorized at compile time, as it doesn't have any dependence on external parameters. Also, the language guarantees that neither array will occupy the same region in memory as any other variable, as they are local variables and live only in the execution stack.
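    int a[128];
    int b[128];
    /* ... initialize b ... */

    for (i = 0; i < 128; i++)
        a[i] = b[i] + 5;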
On the other hand, the code below has no information on memory positions, because the references are pointers and the memory they point to may overlap.
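    void compute(int *a, int *b)
    {
        int i;
        for (i = 0; i < 128; i++)
            a[i] = b[i] + 5;
    }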
A quick run-time check on the addresses of both a and b, plus the loop iteration space (128), is enough to tell if the arrays overlap or not, thus revealing any dependencies. (Note that from C99, qualifying the parameters with the restrict keyword (here: int *restrict a, int *restrict b) tells the compiler that the memory ranges pointed to by a and b do not overlap, leading to the same outcome as the example above.)
There exist some tools to dynamically analyze existing applications to assess the inherent latent potential for SIMD parallelism, exploitable through further compiler advances and/or via manual code changes.[1]
An example would be a program to multiply two vectors of numeric data. A scalar approach would be something like:
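    for (i = 0; i < 1024; i++)
        c[i] = a[i] * b[i];

(The array names and the trip count of 1024 are illustrative.)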
This could be vectorized to look something like:
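    for (i = 0; i < 1024; i += 4)
        c[i:i+3] = a[i:i+3] * b[i:i+3];   /* array-slice pseudo-notation, not standard C */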
Here, c[i:i+3] represents the four array elements from c[i] to c[i+3] and the vector processor can perform four operations for a single vector instruction. Since the four vector operations complete in roughly the same time as one scalar instruction, the vector approach can run up to four times faster than the original code.
There are two distinct compiler approaches: one based on the conventional vectorization technique and the other based onloop unrolling.
This technique, used for conventional vector machines, tries to find and exploit SIMD parallelism at the loop level. It consists of two major steps as follows.
In the first step, the compiler looks for obstacles that can prevent vectorization. A major obstacle for vectorization is true data dependency shorter than the vector length. Other obstacles include function calls and short iteration counts.
Once the loop is determined to be vectorizable, the loop is stripmined by the vector length and each scalar instruction within the loop body is replaced with the corresponding vector instruction. Below, the component transformations for this step are shown using the above example.
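After strip-mining by the vector length (assuming, for simplicity, that the trip count divides evenly):

    for (i = 0; i < 1024; i += 4)
        for (j = 0; j < 4; j++)
            c[i + j] = a[i + j] * b[i + j];

After replacing the inner loop with vector operations (load4, mul4 and store4 are pseudo-operations standing in for whatever the target instruction set provides):

    for (i = 0; i < 1024; i += 4) {
        vA = load4(&a[i]);      /* load four elements        */
        vB = load4(&b[i]);
        vC = mul4(vA, vB);      /* one vector multiplication */
        store4(&c[i], vC);      /* store four results        */
    }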
This relatively new technique specifically targets modern SIMD architectures with short vector lengths.[2]Although loops can be unrolled to increase the amount of SIMD parallelism in basic blocks, this technique exploits SIMD parallelism within basic blocks rather than loops. The two major steps are as follows.
To show step-by-step transformations for this approach, the same example is used again.
Here, sA1, sB1, ... represent scalar variables and vA, vB, and vC represent vector variables.
Most automatically vectorizing commercial compilers use the conventional loop-level approach, except the IBM XL Compiler,[3] which uses both.
The presence of if-statements in the loop body requires the execution of instructions in all control paths to merge the multiple values of a variable. One general approach is to go through a sequence of code transformations: predication → vectorization (using one of the above methods) → remove vector predicates → remove scalar predicates.[4] The following code is used as an example to show these transformations:
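    for (i = 0; i < 1024; i++)
        if (a[i] > 0)
            b[i] = a[i] * 2;

After predication, every statement in the body is guarded:

    for (i = 0; i < 1024; i++) {
        P = a[i] > 0;
        (P) b[i] = a[i] * 2;
    }

(The arrays, trip count and the doubling operation are illustrative stand-ins for the transformed loop.)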
where (P) denotes a predicate guarding the statement.
Having to execute the instructions in all control paths in vector code has been one of the major factors that slow down the vector code with respect to the scalar baseline. The more complex the control flow becomes and the more instructions are bypassed in the scalar code, the larger the vectorization overhead becomes. To reduce this vectorization overhead, vector branches can be inserted to bypass vector instructions similar to the way scalar branches bypass scalar instructions.[5]Below, AltiVec predicates are used to show how this can be achieved.
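A simplified sketch of the idea follows, using real AltiVec intrinsics but only a single condition; the discussion below refers to an example with nested predicates vPA and vPB, so variable names here are illustrative:

    #include <altivec.h>

    void guarded(vector signed int *pa, vector signed int *pb) {
        vector signed int vA = *pa, vB = *pb;
        vector signed int vZero = vec_splat_s32(0);
        vector bool int vPA = vec_cmpgt(vA, vZero);  /* predicate: a[i] > 0  */
        if (vec_any_gt(vA, vZero)) {     /* vector branch: skip the guarded */
            vector signed int vT = vec_add(vA, vA);  /* work entirely when  */
            vB = vec_sel(vB, vT, vPA);   /* vPA is false in all fields;     */
        }                                /* otherwise merge results under   */
        *pb = vB;                        /* the predicate vPA               */
    }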
There are two things to note in the final code with vector branches; First, the predicate defining instruction for vPA is also included within the body of the outer vector branch by using vec_any_gt. Second, the profitability of the inner vector branch for vPB depends on the conditional probability of vPB having false values in all fields given vPA has false values in all fields.
Consider an example where the outer branch in the scalar baseline is always taken, bypassing most instructions in the loop body. The intermediate case above, without vector branches, executes all vector instructions. The final code, with vector branches, executes both the comparison and the branch in vector mode, potentially gaining performance over the scalar baseline.
In most C and C++ compilers, it is possible to use intrinsic functions to manually vectorise, at the expense of programmer effort and maintainability.
|
https://en.wikipedia.org/wiki/Automatic_vectorization
|
In mathematics, a Laurent polynomial (named after Pierre Alphonse Laurent) in one variable over a field F is a linear combination of positive and negative powers of the variable with coefficients in F. Laurent polynomials in X form a ring denoted F[X, X^{-1}].[1] They differ from ordinary polynomials in that they may have terms of negative degree. The construction of Laurent polynomials may be iterated, leading to the ring of Laurent polynomials in several variables. Laurent polynomials are of particular importance in the study of complex variables.
A Laurent polynomial with coefficients in a field F is an expression of the form
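p = \sum_k p_k X^k,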
where X is a formal variable, the summation index k is an integer (not necessarily positive) and only finitely many coefficients p_k are non-zero. Two Laurent polynomials are equal if their coefficients are equal. Such expressions can be added, multiplied, and brought back to the same form by reducing similar terms. Formulas for addition and multiplication are exactly the same as for the ordinary polynomials, with the only difference that both positive and negative powers of X can be present:
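\left( \sum_i a_i X^i \right) + \left( \sum_i b_i X^i \right) = \sum_i (a_i + b_i) X^i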
and
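\left( \sum_i a_i X^i \right) \cdot \left( \sum_j b_j X^j \right) = \sum_k \left( \sum_{i+j=k} a_i b_j \right) X^k.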
Since only finitely many coefficients a_i and b_j are non-zero, all sums in effect have only finitely many terms, and hence represent Laurent polynomials.
|
https://en.wikipedia.org/wiki/Laurent_polynomial
|
Amobile broadband modem, also known aswireless modemorcellular modem, is a type ofmodemthat allows apersonal computeror arouterto receivewirelessInternet accessvia amobile broadbandconnection instead of usingtelephoneorcable televisionlines. A mobile Internet user can connect using a wireless modem to a wirelessInternet Service Provider(ISP) to getInternet access.[1][2]
While someanaloguemobile phones provided a standardRJ11telephone socket into which a normal landline modem could be plugged, this only provided slowdial-upconnections, usually 2.4 kilobit per second (kbit/s) or less. The next generation of phones, known as 2G (for 'second generation'), were digital, and offered faster dial-up speeds of 9.6 kbit/s or 14.4 kbit/s without the need for a separate modem. A further evolution calledHSCSDused multiple GSM channels (two or three in each direction) to support up to 43.2 kbit/s. All of these technologies still required their users to have a dial-upISPto connect to and provide the Internet access - it was not provided by the mobile phone network itself.
The release of2.5Gphones with support forpacketdata changed this. The 2.5G networks break both digital voice and data into small chunks, and mix both onto the network simultaneously in a process calledpacket switching. This allows the phone to have a voice connection and a data connection at the same time, rather than a single channel that has to be used for one or the other. The network can link the data connection into a company network, but for most users the connection is to the Internet. This allows web browsing on the phone, but a PC can also tap into this service if it connects to the phone. The PC needs to send a special telephone number to the phone to get access to the packet data connection. From the PC's viewpoint, the connection still looks like a normal PPP dial-up link, but it is all terminating on the phone, which then handles the exchange of data with the network. Speeds on 2.5G networks are usually in the 30–50 kbit/s range.
The firstpersonal computerwith a built-in mobile broadband modem was the ITC 286 CAT, a laptop byIntelligence Technology Corporation. Released in 1988, it featured aHayes-compatibleAMPSmodem capable of transmitting data at 1.2 kbit/s.[3][4]
3G networks have taken this approach to a higher level, using different underlying technology but the same principles. They routinely provide speeds over 300 kbit/s. Due to the increased internet speed, internet connection sharing via WLAN has become a workable reality. Devices which allow internet connection sharing or other types of routing on cellular networks are also called cellular routers.
A further evolution is the3.5GtechnologyHSDPA, which provides speeds of multipleMegabits per second. Several of themobile network operatorsthat provide 3G or faster wireless internet access offer plans and wireless modems that enable computers to connect to and access the internet. These wireless modems are typically in the form of a small USB based device or a small, portable mobile hotspot that acts as a WiFi access point (hotspot) to enable multiple devices to connect to the internet.WiMAXbased services that provide high speed wireless internet access are available in some countries and also rely on wireless modems that connect to the provider's wireless network. Wireless USB modems are nicknamed as "dongles".
Early 3G mobile broadband modems used the PCMCIA or ExpressCard ports, commonly found on legacy laptops. The expression "connect card" (instead of "connection card") was first registered and used by Vodafone as a brand for its products, but it has since become a genericized trademark used in colloquial or commercial speech for similar products made by different manufacturers. Major producers are Huawei, Option N.V. and Novatel Wireless. More recently, the expression "connect card" is also used to identify internet USB keys. Vodafone brands this type of device as a Vodem.[5]
Often a mobile network operator will supply a 'locked' modem or other wireless device that can only be used on their network. It is possible to use online unlocking services that will remove the 'lock' so the device accepts SIM cards from any network.
Standalone mobile broadband modems are designed to be connected directly to one computer. In the past thePCMCIAandExpressCardstandards were used to connect to the computer. AsUSBconnectivity became almost universal, these various standards were largely superseded by USB modems in the early 21st century. Some models haveGPSsupport, providing geographical location information.[6]
Many mobile broadband modems sold nowadays also have built-in routing capabilities. They provide traditional networking interfaces such asEthernet,USBandWi-Fi.[7]
Numeroussmartphonessupport theHayes command setand therefore can be used as a mobile broadband modem. Somemobile network operatorscharge a fee for this facility,[8]if able to detect the tethering. Other networks have an allowance for full speed mobile broadband access, which—if exceeded—can result in overage charges or slower speeds.[9]
An Internet-accessing smartphone may have the same capabilities as a standalone modem, and, when connected via a USB cable to a computer, can serve as a modem for the computer. Smartphones with built-in Wi-Fi also typically provide routing andwireless access pointfacilities. This method of connecting is commonly referred to as "tethering."[9]
In most countries, there are competing common carriers broadcasting signal.
|
https://en.wikipedia.org/wiki/Mobile_broadband_modem
|
In theanalysis of algorithms, themaster theorem for divide-and-conquer recurrencesprovides anasymptotic analysisfor manyrecurrence relationsthat occur in theanalysisofdivide-and-conquer algorithms. The approach was first presented byJon Bentley,Dorothea Blostein(née Haken), andJames B. Saxein 1980, where it was described as a "unifying method" for solving such recurrences.[1]The name "master theorem" was popularized by the widely used algorithms textbookIntroduction to AlgorithmsbyCormen,Leiserson,Rivest, andStein.
Not all recurrence relations can be solved by this theorem; its generalizations include theAkra–Bazzi method.
Consider a problem that can be solved using arecursive algorithmsuch as the following:
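    /* a sketch of the divide-and-conquer skeleton that the recurrence
       T(n) = a T(n/b) + f(n) models; A, B, K and the two helper
       functions are illustrative placeholders */
    enum { A = 2, B = 2, K = 2 };

    static void solve_directly(int n)    { (void)n; /* constant-time base case      */ }
    static void split_and_combine(int n) { (void)n; /* the f(n) splitting/combining */ }

    void p(int n) {
        if (n < K) {                     /* base case: no recursion       */
            solve_directly(n);
            return;
        }
        for (int i = 0; i < A; i++)      /* a recursive calls...          */
            p(n / B);                    /* ...on subproblems of size n/b */
        split_and_combine(n);            /* plus f(n) extra work          */
    }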
The above algorithm divides the problem into a number (a) of subproblems recursively, each subproblem being of sizen/b. The factor by which the size of subproblems is reduced (b) need not, in general, be the same as the number of subproblems (a). Its solutiontreehas a node for each recursive call, with the children of that node being the other calls made from that call. The leaves of the tree are the base cases of the recursion, the subproblems (of size less thank) that do not recurse. The above example would haveachild nodes at each non-leaf node. Each node does an amount of work that corresponds to the size of the subproblemnpassed to that instance of the recursive call and given byf(n){\displaystyle f(n)}. The total amount of work done by the entire algorithm is the sum of the work performed by all the nodes in the tree.
The runtime of an algorithm such as the procedure p above on an input of size n, usually denoted T(n), can be expressed by the recurrence relation
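T(n) = a T(n/b) + f(n),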
wheref(n){\displaystyle f(n)}is the time to create the subproblems and combine their results in the above procedure. This equation can be successively substituted into itself and expanded to obtain an expression for the total amount of work done.[2]The master theorem allows many recurrence relations of this form to be converted toΘ-notationdirectly, without doing an expansion of the recursive relation.
The master theorem always yields asymptotically tight bounds to recurrences from divide and conquer algorithms that partition an input into smaller subproblems of equal sizes, solve the subproblems recursively, and then combine the subproblem solutions to give a solution to the original problem. The time for such an algorithm can be expressed by adding the work that they perform at the top level of their recursion (to divide the problems into subproblems and then combine the subproblem solutions) together with the time taken by the recursive calls of the algorithm. If T(n) denotes the total time for the algorithm on an input of size n, and f(n) denotes the amount of time taken at the top level of the recurrence, then the time can be expressed by a recurrence relation that takes the form:
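T(n) = a T(n/b) + f(n)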
Here n is the size of an input problem, a is the number of subproblems in the recursion, and b is the factor by which the subproblem size is reduced in each recursive call (b > 1). Crucially, a and b must not depend on n. The theorem below also assumes that, as a base case for the recurrence, T(n) = Θ(1) when n is less than some bound κ > 0, the smallest input size that will lead to a recursive call.
Recurrences of this form often satisfy one of the three following regimes, based on how the work to split and recombine the problem, f(n), relates to the critical exponent c_crit = log_b a. (The cases below use standard big O notation.) Throughout, (log n)^k is used for clarity, though in textbooks this is usually rendered log^k n.
Case 1: the work to split and recombine a problem is dwarfed by the subproblems, i.e. the recursion tree is leaf-heavy. If f(n) = O(n^c) for some c < c_crit (f is upper-bounded by a lesser-exponent polynomial), then T(n) = Θ(n^{c_crit}). (The splitting term does not appear; the recursive tree structure dominates.)
Case 2: the work to split and recombine a problem is comparable to the subproblems. If f(n) = Θ(n^{c_crit} (log n)^k) for some k ≥ 0 (f is rangebound by the critical-exponent polynomial, times zero or more optional logs), then T(n) = Θ(n^{c_crit} (log n)^{k+1}). (The bound is the splitting term, where the log is augmented by a single power.)
For example, if b = a^2 and f(n) = Θ(n^{1/2} log n), then c_crit = log_b a = 1/2, k = 1, and T(n) = Θ(n^{1/2} (log n)^2).
Case 3: the work to split and recombine a problem dominates the subproblems, i.e. the recursion tree is root-heavy. If f(n) = Ω(n^c) for some c > c_crit (f is lower-bounded by a greater-exponent polynomial), and f additionally satisfies the regularity condition a f(n/b) ≤ k f(n) for some constant k < 1 and all sufficiently large n, then the total is dominated by the splitting term f(n): T(n) = Θ(f(n)).
A useful extension of Case 2 handles all values of k:[3]
If f(n) = Θ(n^{c_crit} (log n)^k) for any k > −1, then T(n) = Θ(n^{c_crit} (log n)^{k+1}). (The bound is the splitting term, where the log is augmented by a single power.)
If k = −1, then T(n) = Θ(n^{c_crit} log log n). (The bound is the splitting term, where the log reciprocal is replaced by an iterated log.)
If k < −1, then T(n) = Θ(n^{c_crit}). (The bound is the splitting term, where the log disappears.)
Suppose that T(n) = 8T(n/2) + 1000n^2. As one can see from the formula above: a = 8, b = 2, f(n) = 1000n^2, so c_crit = log_b a = log_2 8 = 3.
Next, we see that we satisfy the case 1 condition: f(n) = 1000n^2 = O(n^c) with c = 2, and c < c_crit.
It follows from the first case of the master theorem that T(n) = Θ(n^{c_crit}) = Θ(n^3).
(This result is confirmed by the exact solution of the recurrence relation, which is T(n) = 1001n^3 − 1000n^2, assuming T(1) = 1.)
T(n) = 2T(n/2) + 10n
As we can see in the formula above, the variables get the following values: a = 2, b = 2, f(n) = 10n, c_crit = log_2 2 = 1.
Next, we see that we satisfy the case 2 condition: f(n) = Θ(n^{c_crit} (log n)^k) with k = 0.
So it follows from the second case of the master theorem: T(n) = Θ(n^{c_crit} (log n)^{k+1}) = Θ(n log n).
Thus the given recurrence relation T(n) is in Θ(n log n).
(This result is confirmed by the exact solution of the recurrence relation, which is T(n) = n + 10n log_2 n, assuming T(1) = 1.)
Suppose that T(n) = 2T(n/2) + n^2. As we can see in the formula above, the variables get the following values: a = 2, b = 2, f(n) = n^2, c_crit = log_2 2 = 1.
Next, we see that we satisfy the case 3 condition: f(n) = Ω(n^c) with c = 2, and c > c_crit.
The regularity condition also holds: a f(n/b) = 2(n/2)^2 = n^2/2 ≤ k f(n) for k = 1/2 < 1.
So it follows from the third case of the master theorem: T(n) = Θ(f(n)) = Θ(n^2).
Thus the given recurrence relation T(n) is in Θ(n^2), which matches the f(n) of the original formula.
(This result is confirmed by the exact solution of the recurrence relation, which is T(n) = 2n^2 − n, assuming T(1) = 1.)
The following equations cannot be solved using the master theorem:[4]
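T(n) = 2^n T(n/2) + n^n (a is not a constant; the number of subproblems must be fixed)
T(n) = 2T(n/2) + n/log n (the difference between f(n) and n^{log_b a} is not polynomial; see below)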
In the second inadmissible example above, the difference between f(n) and n^{log_b a} can be expressed with the ratio f(n) / n^{log_b a} = (n / log n) / n^{log_2 2} = n / (n log n) = 1 / log n. It is clear that 1 / log n < n^ε for any constant ε > 0. Therefore, the difference is not polynomial and the basic form of the master theorem does not apply. The extended form (case 2b) does apply, giving the solution T(n) = Θ(n log log n).
|
https://en.wikipedia.org/wiki/Master_theorem_(analysis_of_algorithms)
|
In computing, direct or immediate mode[1][2] in an interactive programming system is the immediate execution of commands, statements, or expressions. In many interactive systems, most of these can both be included in programs or executed directly in a read–eval–print loop (REPL).
Most interactive systems also offer the possibility of defining programs in the REPL, either with explicit declarations, such as Python's def, or by labelling them with line numbers. Programs can then be run by calling a named or numbered procedure or by running a main program.
Many programming systems, from Lisp and JOSS to Python and Perl, have interactive REPLs which also allow defining programs. Most integrated development environments offer a direct mode where, during debugging and while program execution is suspended, commands can be executed directly in the current scope and the result is displayed.
|
https://en.wikipedia.org/wiki/Direct_mode
|
Semiconductor device modeling creates models for the behavior of semiconductor devices based on fundamental physics, such as the doping profiles of the devices. It may also include the creation of compact models (such as the well known SPICE transistor models), which try to capture the electrical behavior of such devices but do not generally derive them from the underlying physics. Normally it starts from the output of a semiconductor process simulation.
The figure to the right provides a simplified conceptual view of "the big picture". This figure shows two inverter stages and the resulting input-output voltage-time plot of the circuit. From the digital systems point of view the key parameters of interest are: timing delays, switching power, leakage current and cross-coupling (crosstalk) with other blocks. The voltage levels and transition speed are also of concern.
The figure also shows schematically the importance of Ionversus Ioff, which in turn is related to drive-current (and mobility) for the "on" device and several leakage paths for the "off" devices. Not shown explicitly in the figure are the capacitances—both intrinsic and parasitic—that affect dynamic performance.
The power scaling which is now a major driving force in the industry is reflected in the simplified equation shown in the figure—critical parameters are capacitance, power supply and clocking frequency. Key parameters that relate device behavior to system performance include thethreshold voltage, driving current and subthreshold characteristics.
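The simplified equation referred to is presumably the standard CMOS dynamic-power relation

P \approx \alpha \, C \, V_{DD}^{2} \, f,

where α is the switching activity factor, C the switched capacitance, V_DD the supply voltage and f the clock frequency.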
It is the confluence of system performance issues with the underlying technology and device design variables that results in the ongoing scaling laws that we now codify asMoore's law.
The physics and modeling of devices inintegrated circuitsis dominated by MOS and bipolar transistor modeling. However, other devices are important, such as memory devices, that have rather different modeling requirements. There are of course also issues ofreliability engineering—for example, electro-static discharge (ESD) protection circuits and devices—where substrate and parasitic devices are of pivotal importance. These effects and modeling are not considered by most device modeling programs; the interested reader is referred to several excellent monographs in the area of ESD and I/O modeling.[1][2][3]
Physics driven device modeling is intended to be accurate, but it is not fast enough for higher level tools, includingcircuit simulatorssuch asSPICE. Therefore, circuit simulators normally use more empirical models (often called compact models) that do not directly model the underlying physics. For example,inversion-layer mobility modeling, or the modeling of mobility and its dependence on physical parameters, ambient and operating conditions is an important topic both forTCAD(technology computer aided design) physical models and for circuit-level compact models. However, it is not accurately modeled from first principles, and so resort is taken to fitting experimental data. For mobility modeling at the physical level the electrical variables are the various scattering mechanisms, carrier densities, and local potentials and fields, including their technology and ambient dependencies.
By contrast, at the circuit-level, models parameterize the effects in terms of terminal voltages and empirical scattering parameters. The two representations can be compared, but it is unclear in many cases how the experimental data is to be interpreted in terms of more microscopic behavior.
The evolution of technology computer-aided design (TCAD)—the synergistic combination of process, device and circuit simulation and modeling tools—finds its roots inbipolartechnology, starting in the late 1960s, and the challenges of junction isolated, double-and triple-diffused transistors. These devices and technology were the basis of the first integrated circuits; nonetheless, many of the scaling issues and underlying physical effects are integral toIC design, even after four decades of IC development. With these early generations of IC, process variability and parametric yield were an issue—a theme that will reemerge as a controlling factor in future IC technology as well.
Process control issues—both for the intrinsic devices and all the associated parasitics—presented formidable challenges and mandated the development of a range of advanced physical models for process and device simulation. Starting in the late 1960s and into the 1970s, the modeling approaches exploited were dominantly one- and two-dimensional simulators. While TCAD in these early generations showed exciting promise in addressing the physics-oriented challenges of bipolar technology, the superior scalability and power consumption of MOS technology revolutionized the IC industry. By the mid-1980s, CMOS became the dominant driver for integrated electronics. Nonetheless, these early TCAD developments[4][5]set the stage for their growth and broad deployment as an essential toolset that has leveraged technology development through the VLSI and ULSI eras which are now the mainstream.
IC development for more than a quarter-century has been dominated by the MOS technology. In the 1970s and 1980s NMOS was favored owing to speed and area advantages, coupled with technology limitations and concerns related to isolation, parasitic effects and process complexity. During that era of NMOS-dominated LSI and the emergence of VLSI, the fundamental scaling laws of MOS technology were codified and broadly applied.[6] It was also during this period that TCAD reached maturity in terms of realizing robust process modeling (primarily one-dimensional), which then became an integral technology design tool, used universally across the industry.[7] At the same time device simulation, dominantly two-dimensional owing to the nature of MOS devices, became the workhorse of technologists in the design and scaling of devices.[8][9] The transition from NMOS to CMOS technology resulted in the necessity of tightly coupled and fully 2D simulators for process and device simulations. This third generation of TCAD tools became critical to address the full complexity of twin-well CMOS technology, including issues of design rules and parasitic effects such as latchup.[10][11] An abbreviated perspective of this period, through the mid-1980s, is given in [12]; and from the point of view of how TCAD tools were used in the design process, see [13].
|
https://en.wikipedia.org/wiki/Semiconductor_device_modeling
|
Incomputer programming,instrumentationis the act of modifying software so thatanalysiscan be performed on it.[1]
Generally, instrumentation either modifiessource codeorbinary code. Execution environments like the JVM provide separate interfaces to add instrumentation to program executions, such as theJVMTI, which enables instrumentation during program start.
Instrumentation enablesprofiling:[2]measuring dynamic behavior during a test run. This is useful for properties of a program that cannot beanalyzed staticallywith sufficient precision, such asperformanceandalias analysis.
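A minimal sketch of source-level instrumentation in C follows; the work() function and the hand-inserted timing probes are illustrative, and real profilers insert such probes automatically:

    #include <stdio.h>
    #include <time.h>

    static void work(void) {            /* the code under measurement   */
        for (volatile int i = 0; i < 1000000; i++) { }
    }

    int main(void) {
        clock_t start = clock();        /* inserted probe: start timer  */
        work();
        clock_t end = clock();          /* inserted probe: stop timer   */
        printf("work() took %.3f s\n",
               (double)(end - start) / CLOCKS_PER_SEC);
        return 0;
    }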
Instrumentation can include code tracing, debugging and exception reporting, performance counters, and event logging.
Instrumentation is limited by execution coverage. If the program never reaches a particular point of execution, then instrumentation at that point collects no data. For instance, if a word processor application is instrumented, but the user never activates the print feature, then the instrumentation can say nothing about the routines which are used exclusively by the printing feature.
Instrumentation increases the execution time of a program. In some contexts, this increase might be dramatic and hence limit the application of instrumentation to debugging contexts. The instrumentation overhead differs depending on the instrumentation technology used.[4]
|
https://en.wikipedia.org/wiki/Instrumentation_(computer_programming)
|
Cell Broadcast(CB) is a method of simultaneously sendingshort messagesto multiplemobile telephoneusers in a defined area. It is defined by theETSI's GSM committee and3GPPand is part of the2G,3G,4Gand5Gstandards.[1]It is also known as Short Message Service-Cell Broadcast (SMS-CB or CB SMS).[2][3]
Unlike Short Message Service-Point to Point (SMS-PP), Cell Broadcast is aone-to-manygeo-targeted andgeo-fencedmessaging service. Cell Broadcast technology is widely used forpublic warning systems.[4]
Cell Broadcast messaging was first demonstrated in Paris in 1997. Some mobile operators used Cell Broadcast for communicating thearea codeof the antenna cell to the mobile user (via channel 050),[5]for nationwide or citywide alerting, weather reports, mass messaging,location-basednews, etc. Cell broadcast has been widely deployed since 2008 by major Asian, US, Canadian, South American and European network operators. Not all operators have the Cell Broadcast messaging function activated in their network yet, but most of the currently used handsets support cell broadcast, however on many devices it is disabled by default and there isn't a standardised interface to enable the feature.[1]
One Cell Broadcast message can reach a large number of telephones at once. Cell Broadcast messages are directed to specificradio cellsof a mobile phone network, rather than to a specific telephone.[6]The latest generation of Cell Broadcast Systems (CBS) can send to the whole mobile network (e.g. 1,000,000 cells) in less than 10 seconds, reaching millions of mobile subscribers at the same time. A Cell Broadcast message is an unconfirmedpushservice, meaning that the originators of the messages do not know who has received the message, allowing for services based on anonymity.[1]Cell Broadcast is compliant with the latestEU General Data Protection Regulation (GDPR)as mobile phone numbers are not required by CB. The originator (alerting authority) of the Cell Broadcast message can request the success rate of a message. In such a case the Cell Broadcast System will respond with the number of addressed cells and the number of cells that have broadcast the Cell Broadcast (alert) message.
Each radio cell covers a certain geographic area, typically a few kilometers in diameter, so by only sending the Cell Broadcast message to specific radio cells, the broadcast can be limited to a specific area (geotargeting). This is useful for messages that are only relevant in a specific area, such as flood warnings.
The CB message parameters contain the broadcasting schedule. If the start-time is left open, the CBC system will assume an immediate start, which will be the case for Public Warning messages. If the end-time is left open, the message will be repeated indefinitely; a subsequent cancel message shall be used to stop it. The repetition rate can be set from 2 seconds up to values beyond 30 minutes. Each repeated CB message has the same message identifier (indicating the source of the message) and the same serial number. Using this information, the mobile telephone is able to identify and ignore broadcasts of already received messages.
A Cell Broadcast message page is composed of 82 octets, which, using the default character set, can encode 93 characters. Up to 15 of these pages may be concatenated to form a Cell Broadcast message[1] (hence the maximum length of one Cell Broadcast message is 1395 characters).[3]
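These figures follow from the GSM 7-bit default alphabet: each page carries ⌊82 × 8 / 7⌋ = ⌊93.7⌋ = 93 characters, and 15 × 93 = 1395 characters fit in a fully concatenated message.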
A Cell Broadcast Centre (CBC), a system which is the source of SMS-CB message, is connected to aBase Station Controller (BSC)inGSMnetworks, to aRadio Network Controller (RNC)inUMTSnetworks, to aMobility Management Entity (MME)inLTE (telecommunication)networks or to a core Access and Mobility management Function (AMF) in5Gnetworks.
The technical implementation of the Cell Broadcast service is described in the3GPPspecification TS 23.041[7]
A CBC sends CB messages, a list of cells where the messages are to be broadcast, and the requested repetition rate and number of times they shall be broadcast to the BSC/RNC/MME/AMF. The BSC/RNC/MME/AMF is responsible for delivering the CB messages to the base stations (BTSs), Node Bs, eNodeBs and gNodeBs which handle the requested cells.
Cell Broadcast is not affected by traffic load; therefore, it is very suitable during a disaster when load spikes of data (social mediaandmobile apps), regular SMS and voice calls usage (mass call events) tend to significantly congest mobile networks, as multiple events have shown.
Public Warning Systems, otherwise known as Emergency Alert Systems, implemented through Cell Broadcast technology vary by country, but are broadly the same. Technical standards are outlined in the 3GPP TS 23.041 standard. Large implementations mentioned in 3GPP standards are Wireless Emergency Alerts (CMAS) in the United States and EU-Alert in Europe (set out in ETSI standards, but national implementation varies). Alerts can be geo-targeted, so that only phones in a defined geographical area receive an alert.[8] When an alert is received, a notification is shown in a unique format and a dedicated sound is played even if the phone is set to silent: a two-tone attention sound of 853 Hz and 960 Hz sine waves, as prescribed by both WEA (CMAS) and ETSI standards.[9][8] Cell Broadcast emergency alerts can be broadcast in a local language and an additional language, which will be displayed depending on the user's device language setting.[10] Most phone manufacturers adhere to these standards but have slightly different user interfaces.[11] Similar to emergency calls, devices do not usually need a SIM card to receive alerts.[12]
Emergency Alerts in most implementations of Cell Broadcast have distinct alert categories or levels, using a message identifier outlined in 3GPP standards. The alert category or level is defined by the severity of the warning, e.g. threat to life, imminent danger or advisory message. Depending on national implementation, users may be able to opt-out of receiving lower level alerts. However, the highest level of alert will usually always be displayed on a user's device.[13][10]
Below is a comparison table on alert categories/levels across systems (based on the common 3GPP message identifiers):[8]
Whenroaming, if the user's home carrier supports Cell Broadcast emergency alerts, alerts will be displayed if the category/level of alert is enabled and equivalent to their home carrier's system.[10][8]
Cell Broadcast messages can use a CAP (Common Alerting Protocol) message as an input as specified byOASIS (organization)or theWireless Emergency Alerts(WEA) C-interface protocol, which has been specified jointly by theAlliance for Telecommunications Industry Solutions(ATIS) and theTelecommunications Industry Association(TIA).
Advantages of using Cell Broadcast for Public warning are:
A point of criticism in the past on Cell Broadcast was that there was no uniform user experience on all mobile devices in a country.[1]
Wireless Emergency Alerts and Government alerts using Cell Broadcast are supported in most models of mobile telephones. Some smart phones have a configuration menu that offer opt-out capabilities for certain public warning severity levels.[5][14]
In case a national civil defence organisation is adopting one of the 3GPP's Public Warning System standards, PWS - also known as CMAS in North America,EU-Alertin Europe, LAT-Alert in South America, Earthquake and Tsunami Warning System in Japan, each subscriber in that country either making use of the home network or its roaming automatically makes use of the embedded Public warning Cell Broadcast feature present in everyAndroid (operating system)[5]andiOSmobile device.[14]
In countries that have selected Cell Broadcast to transmit public warning messages, up to 99% of the handsets receive the cell broadcast message (reaching between 85 and 95% of the entire population, as not all people have a mobile phone) within seconds after the government authorities have submitted the message; see as examples Emergency Mobile Alert (New Zealand), Wireless Emergency Alerts (USA) and NL-Alert (Netherlands).
Many countries and regions have implemented location-based alert systems based on cell broadcast. The alert messages to the population, already broadcast by various media, are relayed over the mobile network using cell broadcast.
The following countries and regions have selected Cell Broadcast to use for their national public warning system but are currently in the process of implementing it.
|
https://en.wikipedia.org/wiki/Cell_Broadcast
|
TheMotorola 68000 series(also known as680x0,m68000,m68k, or68k) is a family of32-bitcomplex instruction set computer(CISC)microprocessors. During the 1980s and early 1990s, they were popular inpersonal computersandworkstationsand were the primary competitors ofIntel'sx86microprocessors. They were best known as the processors used in the early AppleMacintosh, the SharpX68000, the CommodoreAmiga, theSinclair QL, theAtari STandFalcon, theAtari Jaguar, theSega Genesis(Mega Drive) andSega CD, thePhilips CD-i, theCapcom System I(Arcade), theAT&T UNIX PC, the TandyModel 16/16B/6000, the Sun MicrosystemsSun-1,Sun-2andSun-3, theNeXT Computer,NeXTcube,NeXTstation, andNeXTcube Turbo, earlySilicon GraphicsIRIS workstations, theAesthedes, computers fromMASSCOMP, theTexas InstrumentsTI-89/TI-92calculators, thePalm Pilot(all models running Palm OS 4.x or earlier), theControl Data CorporationCDCNETDevice Interface, theVTechPrecomputer Unlimited and theSpace Shuttle. Although no modern desktop computers are based on processors in the 680x0 series, derivative processors are still widely used inembedded systems.
Motorolaceased development of the 680x0 series architecture in 1994, replacing it with thePowerPCRISCarchitecture, which was developed in conjunction withIBMandApple Computeras part of theAIM alliance.
68010: added virtual memory support (instruction restart after a bus fault) and a "loop mode" that sped up small loops.
68020: full 32-bit ALU and 32-bit external data and address buses, a 256-byte instruction cache, a coprocessor interface and additional addressing modes.
68030: added an on-chip MMU and a 256-byte data cache.
68040: added an on-chip FPU, 4 KB instruction and data caches and a deeper pipeline.
68060: a superscalar design capable of issuing two instructions per clock cycle.
The 680x0 line of processors has been used in a variety of systems, from high-endTexas Instrumentscalculators (theTI-89,TI-92, andVoyage 200lines) to all of the members of thePalm Pilotseries that run Palm OS 1.x to 4.x (OS 5.x isARM-based), and evenradiation-hardenedversions in the critical control systems of theSpace Shuttle.
The 680x0 CPU family became most well known for poweringdesktop computersandvideo game consolessuch as theMacintosh 128K,Amiga,Sinclair QL,Atari ST,Genesis / Mega Drive,NG AES/Neo Geo CD,CDTV. They were the processors of choice in the 1980s forUnixworkstationsandserverssuch as AT&T'sUNIX PC, Tandy'sModel 16/16B/6000, Sun Microsystems'Sun-1,Sun-2,Sun-3,NeXT Computer,Silicon Graphics(SGI), and numerous others.
The Saturn uses the 68000 for audio processing and other I/O tasks, while the Jaguar includes a 68000 intended for basic system control and input processing, but was frequently used for running game logic. Many arcade boards also use 68000 processors, including those from Capcom, SNK, and Sega.
The first several versions of Adobe's PostScript interpreters were 68000-based. The 68000 in the Apple LaserWriter and LaserWriter Plus was clocked faster than the version then used in Macintosh computers. A fast 68030 powered later PostScript interpreters, including the standard-resolution LaserWriter IIntx, IIf and IIg (also 300 dpi), the higher-resolution LaserWriter Pro 600 series (usually 600 dpi, but limited to 300 dpi with minimum RAM installed) and the very high resolution Linotronic imagesetters, the 200PS (1500+ dpi) and 300PS (2500+ dpi). Thereafter, Adobe generally preferred RISC processors, as its competitors, with their PostScript clones, had already gone with RISCs, often an AMD 29000-series. The early 68000-based Adobe PostScript interpreters and their hardware were named for Cold War-era U.S. rockets and missiles: Atlas, Redstone, etc.
Microcontrollersderived from the 68000 family have been used in a huge variety of applications.CPU32andColdFiremicrocontrollers have been manufactured in the millions as automotive engine controllers.
Many proprietary video editing systems used 68000 processors, such as the MacroSystem Casablanca, which was a black box with an easy to use graphic interface (1997). It was intended for the amateur and hobby videographer market. Also worth noting is its earlier, bigger and more professional counterpart, the "DraCo" (1995). The groundbreaking Quantel Paintbox, an early 68000-based 24-bit paint and effects system, was originally released in 1981, and during its lifetime it used nearly the entire range of 68000 family processors, with the sole exception of the 68060, which was never implemented in its design. Another contender in the video arena, the Abekas 8150 DVE system, used the 680EC30, and the Play Trinity, later renamed Globecaster, uses several 68030s. The Bosch FGS-4000/4500 Video Graphics System manufactured by Robert Bosch Corporation, later BTS (1983), used a 68000 as its main processor; it drove several others to perform 3D animation in a computer that could easily apply Gouraud and Phong shading. It ran a modified Motorola VERSAdos operating system.
People who are familiar with thePDP-11orVAXusually feel comfortable with the 68000 series. With the exception of the split of general-purpose registers into specialized data and address registers, the 68000 architecture is in many ways a 32-bit PDP-11.
It had a more orthogonal instruction set than those of many processors that came before (e.g., 8080) and after (e.g., x86). That is, it was typically possible to combine operations freely with operands, rather than being restricted to using certain addressing modes with certain instructions. This property made programming relatively easy for humans, and also made it easier to write code generators for compilers.
The 68000 series has eight 32-bit general-purpose dataregisters(D0-D7), and eight address registers (A0-A7). The last address register is thestack pointer, and assemblers accept the label SP as equivalent to A7.
In addition, it has a 16-bit status register. The upper 8 bits form the system byte, and modification of it is privileged. The lower 8 bits form the user byte, also known as the condition code register (CCR), and modification of it is not privileged. The 68000 comparison, arithmetic, and logic operations modify condition codes to record their results for use by later conditional jumps. The condition code bits are "zero" (Z), "carry" (C), "overflow" (V), "extend" (X), and "negative" (N). The "extend" (X) flag deserves special mention, because it is separate from the carry flag. This permits the extra bit from arithmetic, logic, and shift operations to be separated from the carry for flow-of-control and linkage.
While the 68000 had a 'supervisor mode', it did not meet thePopek and Goldberg virtualization requirementsdue to the single instruction 'MOVE from SR', which copies the status register to another register, being unprivileged but sensitive. In theMotorola 68010and later, this was made privileged, to better support virtualization software.
The 68000 seriesinstruction setcan be divided into the following broad categories:
The Motorola 68020 added some new instructions that include some minor improvements and extensions to the supervisor state, several instructions for software management of a multiprocessing system (which were removed in the 68060), some support for high-level languages which did not get used much (and was removed from future 680x0 processors), bigger multiply (32×32→64 bits) and divide (64÷32→32 bits quotient and 32 bits remainder) instructions, and bit field manipulations.
The standardaddressing modesare:
Plus: access to thestatus register, and, in later models, other special registers.
The Motorola 68020 added ascaled indexingaddress mode, and added another level ofindirectionto many of the pre-existing modes.
Most instructions have dot-letter suffixes, permitting operations to occur on 8-bit bytes (".b"), 16-bit words (".w"), and 32-bit longs (".l").
Most instructions aredyadic, that is, the operation has a source, and a destination, and the destination is changed. Notable instructions were:
Motorola mainly used even numbers for major revisions to the CPU core such as 68000, 68020, 68040 and 68060. The 68010 was a revised version of the 68000 with minor modifications to the core, and likewise the 68030 was a revised 68020 with some more powerful features, none of them significant enough to classify as a major upgrade to the core.
The 68050 was reportedly "a minor upgrade of the 68040" that lost a battle for resources within Motorola, competing against projects that had been scheduled to succeed it: the 0.5μm, low-power, low-cost "LP040", and the superscalar, superpipelined "Q", borrowing from the 88110 and anticipated as the 68060.[19]Subsequent reports indicated that Motorola had considered the 68050 as not meriting the necessary investment in production of the part.[20]Odd-numbered releases had always been reactions to issues raised within the prior even numbered part; hence, it was generally expected that the 68050 would have reduced the 68040's power consumption (and thus heat dissipation), improved exception handling in the FPU, used a smaller feature size and optimized the microcode in line with program use of instructions. Many of these optimizations were included with the 68060 and were part of its design goals. For any number of reasons, likely that the 68060 was in development, that the Intel 80486 was not progressing as quickly as Motorola assumed it would, and that 68060 was a demanding project, the 68050 was cancelled early in development.
There is also no revision of the68060, as Motorola was in the process of shifting away from the 68000 and88kprocessor lines into its newPowerPCbusiness, so the 68070 was never developed. Had it been, it would have been a revised 68060, likely with a superior FPU (pipelining was widely speculated upon on Usenet).
There was a CPU with the68070designation, which was a licensed and somewhat slower version of the 16/32-bit 68000 with a basic DMA controller,I²Chost and an on-chip serial port. This 68070 was used as the main CPU in thePhilipsCD-i. This CPU was, however, produced byPhilipsand not officially part of Motorola's 680x0 lineup.
Motorola had announced a product roadmap beyond the 68060 featuring the 68080 rated at 200-350 MIPS, due by 1995, and a product rated at 800 MIPS, possibly with the name 68100, by 2000.[20]
The 4th-generation 68060 provided equivalent functionality (though not instruction-set-architecture compatibility) to most of the features of the Intel P5 microarchitecture.
The Personal Computer XT/370 and AT/370 PC-based IBM-compatible mainframes each included two modified Motorola 68000 processors with custom microcode to emulate S/370 mainframe instructions.[21][22]
An Arizona-based company, Edge Computer Corp, reportedly founded by former Honeywell designers, produced processors compatible with the 68000 series, these being claimed as having "a three to five times performance – and 18 to 24 months' time – advantage" over Motorola's own products.[23] In 1987, the company introduced the Edge 1000 range of "32-bit superminicomputers implementing the Motorola instruction set in the Edge mainframe architecture", employing two independent pipelines, an instruction fetch pipeline (IFP) and an operand execution pipeline (OEP), relying on a branch prediction unit featuring a 4096-entry branch cache, and retrieving instructions and operands over multiple buses.[24] An agreement between Edge Computer and Olivetti subsequently led to the latter introducing products in its own "Linea Duo" range based on Edge Computer's machines.[25] The company was subsequently renamed Edgcore Technology Inc.[26]: 12 (also reported as Edgecore Technology Inc.[27]). Edgcore's deal with Olivetti in 1987 to supply the company's E1000 processor was followed in 1989 by another deal with Philips Telecommunications Data Systems to supply the E2000 processor, which supported the 68030 instruction set and reportedly offered a performance rating of 16 VAX MIPS.[28] Similar deals with Nixdorf Computer and Hitachi were also signed in 1989.[29][30]
Edge Computer reportedly had an agreement with Motorola.[27] Despite increasing competition from RISC products, Edgcore sought to distinguish its products in the market by emphasising its "alliance" with Motorola, employing a marketing campaign drawing from Aesop's fables with "the fox (Edgecore) who climbs on the back of the stallion (Motorola) to pluck fruit off the higher branches of the tree".[31] Other folktale advertising themes, such as Little Red Riding Hood, were employed.[32] With the company's investors having declined to finance it further, and after discussions involving a number of other parties, Arix Corp. announced the acquisition of Edgcore in July 1989.[30] Arix was reportedly able to renew its deal with Hitachi in 1990, whereas the future of previous deals with Olivetti and Philips remained in some doubt after the acquisition of Edgcore.[33]
In 1992, a company called International Meta Systems (IMS) announced a RISC-based CPU, the IMS 3250, that could reportedly emulate the "Intel 486 or Motorola 68040 at full native speeds and at a fraction of their cost". Clocked at 100 MHz, emulations had supposedly been developed of a 25 MHz 486 and a 30 MHz 68040, including floating-point unit support, with the product aiming for mid-1993 production at a per-unit cost of $50 to $60.[34] Amidst the apparent proliferation of emulation support in processors such as the PowerPC 615, in 1994 IMS had reportedly filed a patent on its emulation technology but had not found any licensees.[35] Repeated delays to the introduction of this product, blamed on one occasion on "a need to improve the chip's speech-processing capabilities",[36] apparently led the company to seek to introduce another chip, the Meta 6000, aiming to compete with Intel's P6 products.[37] Ultimately, IMS entered bankruptcy, having sold patents to a litigator, TechSearch, which in 1998 attempted to sue Intel for infringement of an IMS patent.[38] TechSearch reportedly lost its case but sought to appeal, also seeking to sue Intel for "libel and slander" on the basis of comments made by an Intel representative who had characterised TechSearch's business model unfavourably in remarks to the press.[39]
After the mainline 68000 processors' demise, the 68000 family has been used to some extent in microcontroller and embedded microprocessor versions. These chips include the ones listed under "other" above, i.e. the CPU32 (aka 68330), the ColdFire, the QUICC and the DragonBall.
With the advent of FPGA technology, an international team of hardware developers has re-created the 68000, with many enhancements, as an FPGA core. Their core is known as the 68080 and is used in Vampire-branded Amiga accelerators.[40]
Magnetic Scrolls used a subset of the 68000's instructions as a base for the virtual machine in their text adventures.
During the 1980s and early 1990s, when the 68000 was widely used in desktop computers, it mainly competed against Intel's x86 architecture used in IBM PC compatibles. Generation 1 68000 CPUs competed mainly against the 16-bit 8086, 8088, and 80286. Generation 2 competed against the 80386 (the first 32-bit x86 processor), and generation 3 against the 80486. The fourth generation competed with the P5 Pentium line, but it was not nearly as widely used as its predecessors, since much of the old 68000 marketplace was either defunct or nearly so (as was the case with Atari and NeXT), or converting to newer architectures (PowerPC for the Macintosh and Amiga, SPARC for Sun, and MIPS for Silicon Graphics (SGI)).
There are dozens of processor architectures that are successful in embedded systems. Some are microcontrollers which are much simpler, smaller, and cheaper than the 68000, while others are relatively sophisticated and can run complex software. Embedded versions of the 68000 often compete with processor architectures based on PowerPC, ARM, MIPS, SuperH, and others.
https://en.wikipedia.org/wiki/680x0
In mathematics, a metric space is a set together with a notion of distance between its elements, usually called points. The distance is measured by a function called a metric or distance function.[1] Metric spaces are a general setting for studying many of the concepts of mathematical analysis and geometry.
The most familiar example of a metric space is 3-dimensional Euclidean space with its usual notion of distance. Other well-known examples are a sphere equipped with the angular distance and the hyperbolic plane. A metric may correspond to a metaphorical, rather than physical, notion of distance: for example, the set of 100-character Unicode strings can be equipped with the Hamming distance, which measures the number of characters that need to be changed to get from one string to another.
Since they are very general, metric spaces are a tool used in many different branches of mathematics. Many types of mathematical objects have a natural notion of distance and therefore admit the structure of a metric space, including Riemannian manifolds, normed vector spaces, and graphs. In abstract algebra, the p-adic numbers arise as elements of the completion of a metric structure on the rational numbers. Metric spaces are also studied in their own right in metric geometry[2] and analysis on metric spaces.[3]
Many of the basic notions of mathematical analysis, including balls, completeness, as well as uniform, Lipschitz, and Hölder continuity, can be defined in the setting of metric spaces. Other notions, such as continuity, compactness, and open and closed sets, can be defined for metric spaces, but also in the even more general setting of topological spaces.
To see the utility of different notions of distance, consider the surface of the Earth as a set of points. We can measure the distance between two such points by the length of the shortest path along the surface, "as the crow flies"; this is particularly useful for shipping and aviation. We can also measure the straight-line distance between two points through the Earth's interior; this notion is, for example, natural in seismology, since it roughly corresponds to the length of time it takes for seismic waves to travel between those two points.
The notion of distance encoded by the metric space axioms has relatively few requirements. This generality gives metric spaces a lot of flexibility. At the same time, the notion is strong enough to encode many intuitive facts about what distance means. This means that general results about metric spaces can be applied in many different contexts.
Like many fundamental mathematical concepts, the metric on a metric space can be interpreted in many different ways. A particular metric may not be best thought of as measuring physical distance, but, instead, as the cost of changing from one state to another (as with Wasserstein metrics on spaces of measures) or the degree of difference between two objects (for example, the Hamming distance between two strings of characters, or the Gromov–Hausdorff distance between metric spaces themselves).
Formally, a metric space is an ordered pair (M, d) where M is a set and d is a metric on M, i.e., a function $d\colon M\times M\to \mathbb{R}$ satisfying the following axioms for all points $x,y,z\in M$:[4][5] (1) the distance from a point to itself is zero, $d(x,x)=0$; (2) (positivity) the distance between two distinct points is always positive: if $x\neq y$, then $d(x,y)>0$; (3) (symmetry) $d(x,y)=d(y,x)$; and (4) (triangle inequality) $d(x,z)\leq d(x,y)+d(y,z)$.
If the metric d is unambiguous, one often refers by abuse of notation to "the metric space M".
By taking all axioms except the second, one can show that distance is always non-negative: $0=d(x,x)\leq d(x,y)+d(y,x)=2d(x,y)$. Therefore the second axiom can be weakened to "if $x\neq y$, then $d(x,y)\neq 0$" and combined with the first to give $d(x,y)=0\iff x=y$.[6]
The real numbers with the distance function $d(x,y)=|y-x|$ given by the absolute difference form a metric space. Many properties of metric spaces and functions between them are generalizations of concepts in real analysis and coincide with those concepts when applied to the real line.
The Euclidean plane $\mathbb{R}^2$ can be equipped with many different metrics. The Euclidean distance familiar from school mathematics can be defined by $d_2((x_1,y_1),(x_2,y_2))=\sqrt{(x_2-x_1)^2+(y_2-y_1)^2}$.
The taxicab or Manhattan distance is defined by $d_1((x_1,y_1),(x_2,y_2))=|x_2-x_1|+|y_2-y_1|$ and can be thought of as the distance you need to travel along horizontal and vertical lines to get from one point to the other, as illustrated at the top of the article.
The maximum, $L^\infty$, or Chebyshev distance is defined by $d_\infty((x_1,y_1),(x_2,y_2))=\max\{|x_2-x_1|,|y_2-y_1|\}$. This distance does not have an easy explanation in terms of paths in the plane, but it still satisfies the metric space axioms. It can be thought of as the number of moves a king would have to make on a chessboard to travel from one point to another on the given space.
In fact, these three distances, while they have distinct properties, are similar in some ways. Informally, points that are close in one are close in the others, too. This observation can be quantified with the formula $d_\infty(p,q)\leq d_2(p,q)\leq d_1(p,q)\leq 2d_\infty(p,q)$, which holds for every pair of points $p,q\in\mathbb{R}^2$.
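This chain of inequalities is easy to check numerically. A small Python sketch (the function names are illustrative):

```python
# Verify d_inf <= d_2 <= d_1 <= 2*d_inf on random points in the plane.
import math, random

def d1(p, q):    # taxicab / Manhattan
    return abs(p[0] - q[0]) + abs(p[1] - q[1])

def d2(p, q):    # Euclidean
    return math.hypot(p[0] - q[0], p[1] - q[1])

def dinf(p, q):  # maximum / Chebyshev
    return max(abs(p[0] - q[0]), abs(p[1] - q[1]))

for _ in range(10_000):
    p = (random.uniform(-5, 5), random.uniform(-5, 5))
    q = (random.uniform(-5, 5), random.uniform(-5, 5))
    assert dinf(p, q) <= d2(p, q) + 1e-12
    assert d2(p, q) <= d1(p, q) + 1e-12
    assert d1(p, q) <= 2 * dinf(p, q) + 1e-12
```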
A radically different distance can be defined by setting $d(p,q)=0$ if $p=q$ and $d(p,q)=1$ otherwise; using Iverson brackets, $d(p,q)=[p\neq q]$. In this discrete metric, all distinct points are 1 unit apart: none of them are close to each other, and none of them are very far away from each other either. Intuitively, the discrete metric no longer remembers that the set is a plane, but treats it just as an undifferentiated set of points.
All of these metrics make sense on $\mathbb{R}^n$ as well as $\mathbb{R}^2$.
Given a metric space (M, d) and a subset $A\subseteq M$, we can consider A to be a metric space by measuring distances the same way we would in M. Formally, the induced metric on A is a function $d_A\colon A\times A\to\mathbb{R}$ defined by $d_A(x,y)=d(x,y)$. For example, if we take the two-dimensional sphere $S^2$ as a subset of $\mathbb{R}^3$, the Euclidean metric on $\mathbb{R}^3$ induces the straight-line metric on $S^2$ described above. Two more useful examples are the open interval (0, 1) and the closed interval [0, 1] thought of as subspaces of the real line.
Arthur Cayley, in his article "On Distance", extended metric concepts beyond Euclidean geometry into domains bounded by a conic in a projective space. His distance was given by the logarithm of a cross-ratio. Any projectivity leaving the conic stable also leaves the cross-ratio constant, so isometries are implicit. This method provides models for elliptic geometry and hyperbolic geometry, and Felix Klein, in several publications, established the field of non-Euclidean geometry through the use of the Cayley–Klein metric.
The idea of an abstract space with metric properties was addressed in 1906 by René Maurice Fréchet,[7] and the term metric space was coined by Felix Hausdorff in 1914.[8][9][10]
Fréchet's work laid the foundation for understanding convergence, continuity, and other key concepts in non-geometric spaces. This allowed mathematicians to study functions and sequences in a broader and more flexible way, which was important for the growing field of functional analysis. Mathematicians like Hausdorff and Stefan Banach further refined and expanded the framework of metric spaces: Hausdorff introduced topological spaces as a generalization of metric spaces, and Banach's work in functional analysis relied heavily on the metric structure. Over time, metric spaces became a central part of modern mathematics and have influenced various fields including topology, geometry, and applied mathematics. Metric spaces continue to play a crucial role in the study of abstract mathematical concepts.
A distance function is enough to define notions of closeness and convergence that were first developed in real analysis. Properties that depend on the structure of a metric space are referred to as metric properties. Every metric space is also a topological space, and some metric properties can also be rephrased without reference to distance in the language of topology; that is, they are really topological properties.
For any point x in a metric space M and any real number r > 0, the open ball of radius r around x is defined to be the set of points that are strictly less than distance r from x: $B_r(x)=\{y\in M : d(x,y)<r\}$. This is a natural way to define a set of points that are relatively close to x. Therefore, a set $N\subseteq M$ is a neighborhood of x (informally, it contains all points "close enough" to x) if it contains an open ball of radius r around x for some r > 0.
An open set is a set which is a neighborhood of all its points. It follows that the open balls form a base for a topology on M. In other words, the open sets of M are exactly the unions of open balls. As in any topology, closed sets are the complements of open sets. Sets may be both open and closed as well as neither open nor closed.
This topology does not carry all the information about the metric space. For example, the distances d1, d2, and d∞ defined above all induce the same topology on $\mathbb{R}^2$, although they behave differently in many respects. Similarly, $\mathbb{R}$ with the Euclidean metric and its subspace the interval (0, 1) with the induced metric are homeomorphic but have very different metric properties.
Conversely, not every topological space can be given a metric. Topological spaces which are compatible with a metric are called metrizable and are particularly well-behaved in many ways: in particular, they are paracompact[11] Hausdorff spaces (hence normal) and first-countable.[a] The Nagata–Smirnov metrization theorem gives a characterization of metrizability in terms of other topological properties, without reference to metrics.
Convergence of sequences in Euclidean space is defined as follows: a sequence $(x_n)$ converges to the limit x if for every ε > 0 there is an integer N such that $|x_n-x|<\varepsilon$ for all n > N.
Convergence of sequences in a topological space is defined as follows: a sequence $(x_n)$ converges to x if for every neighborhood U of x there is an integer N such that $x_n\in U$ for all n > N.
In metric spaces, both of these definitions make sense and they are equivalent. This is a general pattern for topological properties of metric spaces: while they can be defined in a purely topological way, there is often a way that uses the metric which is easier to state or more familiar from real analysis.
Informally, a metric space is complete if it has no "missing points": every sequence that looks like it should converge to something actually converges.
To make this precise: a sequence $(x_n)$ in a metric space M is Cauchy if for every ε > 0 there is an integer N such that for all m, n > N, $d(x_m,x_n)<\varepsilon$. By the triangle inequality, any convergent sequence is Cauchy: if $x_m$ and $x_n$ are both less than ε away from the limit, then they are less than 2ε away from each other. If the converse is true, that is, if every Cauchy sequence in M converges, then M is complete.
Euclidean spaces are complete, as is $\mathbb{R}^2$ with the other metrics described above. Two examples of spaces which are not complete are (0, 1) and the rationals, each with the metric induced from $\mathbb{R}$. One can think of (0, 1) as "missing" its endpoints 0 and 1. The rationals are missing all the irrationals, since any irrational has a sequence of rationals converging to it in $\mathbb{R}$ (for example, its successive decimal approximations). These examples show that completeness is not a topological property, since $\mathbb{R}$ is complete but the homeomorphic space (0, 1) is not.
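The decimal-approximation example can be made concrete with exact rational arithmetic. The sketch below (helper name is invented) builds the truncated decimal expansions of √2, a Cauchy sequence inside the rationals whose would-be limit is irrational:

```python
# Truncations of sqrt(2) to n decimal places, as exact rationals.
from fractions import Fraction
from math import isqrt

def approx_sqrt2(n: int) -> Fraction:
    """floor(sqrt(2) * 10**n) / 10**n, computed with integer arithmetic."""
    return Fraction(isqrt(2 * 10**(2 * n)), 10**n)

xs = [approx_sqrt2(n) for n in range(1, 8)]
# Consecutive terms agree to within 10**-n, so the sequence is Cauchy in Q ...
print([float(abs(xs[i + 1] - xs[i])) for i in range(len(xs) - 1)])
# ... but its limit, sqrt(2), is irrational, so it does not converge in Q.
```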
This notion of "missing points" can be made precise. In fact, every metric space has a unique completion, which is a complete space that contains the given space as a dense subset. For example, [0, 1] is the completion of (0, 1), and the real numbers are the completion of the rationals.
Since complete spaces are generally easier to work with, completions are important throughout mathematics. For example, in abstract algebra, the p-adic numbers are defined as the completion of the rationals under a different metric. Completion is particularly common as a tool in functional analysis. Often one has a set of nice functions and a way of measuring distances between them. Taking the completion of this metric space gives a new set of functions which may be less nice, but nevertheless useful because they behave similarly to the original nice functions in important ways. For example, weak solutions to differential equations typically live in a completion (a Sobolev space) rather than the original space of nice functions for which the differential equation actually makes sense.
A metric space M is bounded if there is an r such that no pair of points in M is more than distance r apart.[b] The least such r is called the diameter of M.
The space M is called precompact or totally bounded if for every r > 0 there is a finite cover of M by open balls of radius r. Every totally bounded space is bounded. To see this, start with a finite cover by r-balls for some arbitrary r. Since the subset of M consisting of the centers of these balls is finite, it has finite diameter, say D. By the triangle inequality, the diameter of the whole space is at most D + 2r. The converse does not hold: an example of a metric space that is bounded but not totally bounded is $\mathbb{R}^2$ (or any other infinite set) with the discrete metric.
Compactness is a topological property which generalizes the properties of a closed and bounded subset of Euclidean space. There are several equivalent definitions of compactness in metric spaces: a space is compact if every open cover has a finite subcover, if every sequence has a convergent subsequence, or if it is complete and totally bounded.
One example of a compact space is the closed interval[0, 1].
Compactness is important for similar reasons to completeness: it makes it easy to find limits. Another important tool is Lebesgue's number lemma, which shows that for any open cover of a compact space, every point is relatively deep inside one of the sets of the cover.
Unlike in the case of topological spaces or algebraic structures such as groups or rings, there is no single "right" type of structure-preserving function between metric spaces. Instead, one works with different types of functions depending on one's goals. Throughout this section, suppose that $(M_1,d_1)$ and $(M_2,d_2)$ are two metric spaces. The words "function" and "map" are used interchangeably.
One interpretation of a "structure-preserving" map is one that fully preserves the distance function: a map $f\colon M_1\to M_2$ is distance-preserving if $d_2(f(x),f(y))=d_1(x,y)$ for all $x,y\in M_1$.
It follows from the metric space axioms that a distance-preserving function is injective. A bijective distance-preserving function is called an isometry.[13] One perhaps non-obvious example of an isometry between spaces described in this article is the map $f\colon(\mathbb{R}^2,d_1)\to(\mathbb{R}^2,d_\infty)$ defined by $f(x,y)=(x+y,x-y)$.
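This can be sanity-checked numerically; the underlying identity is that $\max\{|a+b|,|a-b|\}=|a|+|b|$. A short Python sketch (names are illustrative):

```python
# Check that f(x, y) = (x + y, x - y) carries d1 exactly to d_inf.
import random

d1   = lambda p, q: abs(p[0] - q[0]) + abs(p[1] - q[1])
dinf = lambda p, q: max(abs(p[0] - q[0]), abs(p[1] - q[1]))
f    = lambda p: (p[0] + p[1], p[0] - p[1])

for _ in range(1000):
    p = (random.uniform(-9, 9), random.uniform(-9, 9))
    q = (random.uniform(-9, 9), random.uniform(-9, 9))
    # max(|a+b|, |a-b|) == |a| + |b|, so the two distances agree exactly
    assert abs(dinf(f(p), f(q)) - d1(p, q)) < 1e-9
```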
If there is an isometry between the spaces M1 and M2, they are said to be isometric. Metric spaces that are isometric are essentially identical.
On the other end of the spectrum, one can forget entirely about the metric structure and study continuous maps, which only preserve topological structure. There are several equivalent definitions of continuity for metric spaces. The most important are the topological definition (preimages of open sets are open), sequential continuity, and the ε–δ definition familiar from real analysis.
A homeomorphism is a continuous bijection whose inverse is also continuous; if there is a homeomorphism between M1 and M2, they are said to be homeomorphic. Homeomorphic spaces are the same from the point of view of topology, but may have very different metric properties. For example, $\mathbb{R}$ is unbounded and complete, while (0, 1) is bounded but not complete.
A function $f\colon M_1\to M_2$ is uniformly continuous if for every real number ε > 0 there exists δ > 0 such that for all points x and y in M1 with $d_1(x,y)<\delta$, we have $d_2(f(x),f(y))<\varepsilon$.
The only difference between this definition and the ε–δ definition of continuity is the order of quantifiers: the choice of δ must depend only on ε and not on the point x. However, this subtle change makes a big difference. For example, uniformly continuous maps take Cauchy sequences in M1 to Cauchy sequences in M2. In other words, uniform continuity preserves some metric properties which are not purely topological.
On the other hand, the Heine–Cantor theorem states that if M1 is compact, then every continuous map is uniformly continuous. In other words, uniform continuity cannot distinguish any non-topological features of compact metric spaces.
A Lipschitz map is one that stretches distances by at most a bounded factor. Formally, given a real number K > 0, the map $f\colon M_1\to M_2$ is K-Lipschitz if $d_2(f(x),f(y))\leq K\,d_1(x,y)$ for all $x,y\in M_1$. Lipschitz maps are particularly important in metric geometry, since they provide more flexibility than distance-preserving maps, but still make essential use of the metric.[14] For example, a curve in a metric space is rectifiable (has finite length) if and only if it has a Lipschitz reparametrization.
A 1-Lipschitz map is sometimes called a nonexpanding or metric map. Metric maps are commonly taken to be the morphisms of the category of metric spaces.
A K-Lipschitz map for K < 1 is called a contraction. The Banach fixed-point theorem states that if M is a complete metric space, then every contraction $f\colon M\to M$ admits a unique fixed point. If the metric space M is compact, the result holds for a slightly weaker condition on f: a map $f\colon M\to M$ admits a unique fixed point if $d(f(x),f(y))<d(x,y)$ for all $x\neq y\in M$.
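A minimal sketch of the fixed-point iteration behind the theorem, using the cosine function on the complete space [0, 1], where it is a contraction since $|\cos'| \leq \sin 1 < 1$ there:

```python
# Iterating a contraction converges to its unique fixed point
# (here the so-called Dottie number, ~0.739085).
import math

x = 0.5
for _ in range(100):
    x = math.cos(x)   # each step shrinks the distance to the fixed point
print(x)              # cos(x) == x to double precision
```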
A quasi-isometry is a map that preserves the "large-scale structure" of a metric space. Quasi-isometries need not be continuous. For example, $\mathbb{R}^2$ and its subspace $\mathbb{Z}^2$ are quasi-isometric, even though one is connected and the other is discrete. The equivalence relation of quasi-isometry is important in geometric group theory: the Švarc–Milnor lemma states that all spaces on which a group acts geometrically are quasi-isometric.[15]
Formally, the map $f\colon M_1\to M_2$ is a quasi-isometric embedding if there exist constants A ≥ 1 and B ≥ 0 such that $\frac{1}{A}d_2(f(x),f(y))-B\leq d_1(x,y)\leq A\,d_2(f(x),f(y))+B$ for all $x,y\in M_1$. It is a quasi-isometry if in addition it is quasi-surjective, i.e. there is a constant C ≥ 0 such that every point in $M_2$ is at distance at most C from some point in the image $f(M_1)$.
Given two metric spaces $(M_1,d_1)$ and $(M_2,d_2)$:
A normed vector space is a vector space equipped with a norm, which is a function that measures the length of vectors. The norm of a vector v is typically denoted by $\lVert v\rVert$. Any normed vector space can be equipped with a metric in which the distance between two vectors x and y is given by $d(x,y):=\lVert x-y\rVert$. The metric d is said to be induced by the norm $\lVert\cdot\rVert$. Conversely,[16] if a metric d on a vector space X is translation invariant ($d(x,y)=d(x+a,y+a)$ for all vectors x, y, a) and absolutely homogeneous ($d(\alpha x,\alpha y)=|\alpha|\,d(x,y)$ for all vectors x, y and scalars α),
then it is the metric induced by the norm $\lVert x\rVert:=d(x,0)$. A similar relationship holds between seminorms and pseudometrics.
Among examples of metrics induced by a norm are the metrics d1, d2, and d∞ on $\mathbb{R}^2$, which are induced by the Manhattan norm, the Euclidean norm, and the maximum norm, respectively. More generally, the Kuratowski embedding allows one to see any metric space as a subspace of a normed vector space.
Infinite-dimensional normed vector spaces, particularly spaces of functions, are studied in functional analysis. Completeness is particularly important in this context: a complete normed vector space is known as a Banach space. An unusual property of normed vector spaces is that linear transformations between them are continuous if and only if they are Lipschitz. Such transformations are known as bounded operators.
A curve in a metric space (M, d) is a continuous function $\gamma\colon[0,T]\to M$. The length of γ is measured by $L(\gamma)=\sup_{0=x_0<x_1<\cdots<x_n=T}\left\{\sum_{k=1}^{n} d(\gamma(x_{k-1}),\gamma(x_k))\right\}$. In general, this supremum may be infinite; a curve of finite length is called rectifiable.[17] Suppose that the length of the curve γ is equal to the distance between its endpoints, that is, it is the shortest possible path between its endpoints. After reparametrization by arc length, γ becomes a geodesic: a curve which is a distance-preserving function.[15] A geodesic is a shortest possible path between any two of its points.[c]
A geodesic metric space is a metric space which admits a geodesic between any two of its points. The spaces $(\mathbb{R}^2,d_1)$ and $(\mathbb{R}^2,d_2)$ are both geodesic metric spaces. In $(\mathbb{R}^2,d_2)$, geodesics are unique, but in $(\mathbb{R}^2,d_1)$, there are often infinitely many geodesics between two points, as shown in the figure at the top of the article.
The space M is a length space (or the metric d is intrinsic) if the distance between any two points x and y is the infimum of lengths of paths between them. Unlike in a geodesic metric space, the infimum does not have to be attained. An example of a length space which is not geodesic is the Euclidean plane minus the origin: the points (1, 0) and (−1, 0) can be joined by paths of length arbitrarily close to 2, but not by a path of length 2. An example of a metric space which is not a length space is given by the straight-line metric on the sphere: the straight line between two points through the center of the Earth is shorter than any path along the surface.
Given any metric space (M, d), one can define a new, intrinsic distance function d_intrinsic on M by setting the distance between points x and y to be the infimum of the d-lengths of paths between them. For instance, if d is the straight-line distance on the sphere, then d_intrinsic is the great-circle distance. However, in some cases d_intrinsic may have infinite values. For example, if M is the Koch snowflake with the subspace metric d induced from $\mathbb{R}^2$, then the resulting intrinsic distance is infinite for any pair of distinct points.
A Riemannian manifold is a space equipped with a Riemannian metric tensor, which determines lengths of tangent vectors at every point. This can be thought of as defining a notion of distance infinitesimally. In particular, a differentiable path $\gamma\colon[0,T]\to M$ in a Riemannian manifold M has length defined as the integral of the length of the tangent vector to the path: $L(\gamma)=\int_0^T |\dot\gamma(t)|\,dt$. On a connected Riemannian manifold, one then defines the distance between two points as the infimum of lengths of smooth paths between them. This construction generalizes to other kinds of infinitesimal metrics on manifolds, such as sub-Riemannian and Finsler metrics.
The Riemannian metric is uniquely determined by the distance function; this means that, in principle, all information about a Riemannian manifold can be recovered from its distance function. One direction in metric geometry is finding purely metric ("synthetic") formulations of properties of Riemannian manifolds. For example, a Riemannian manifold is a CAT(k) space (a synthetic condition which depends purely on the metric) if and only if its sectional curvature is bounded above by k.[20] Thus CAT(k) spaces generalize upper curvature bounds to general metric spaces.
Real analysis makes use of both the metric on $\mathbb{R}^n$ and the Lebesgue measure. Therefore, generalizations of many ideas from analysis naturally reside in metric measure spaces: spaces that have both a measure and a metric which are compatible with each other. Formally, a metric measure space is a metric space equipped with a Borel regular measure such that every ball has positive measure.[21] For example, Euclidean spaces of dimension n, and more generally n-dimensional Riemannian manifolds, naturally have the structure of a metric measure space, equipped with the Lebesgue measure. Certain fractal metric spaces such as the Sierpiński gasket can be equipped with the α-dimensional Hausdorff measure, where α is the Hausdorff dimension. In general, however, a metric space may not have an "obvious" choice of measure.
One application of metric measure spaces is generalizing the notion of Ricci curvature beyond Riemannian manifolds. Just as CAT(k) and Alexandrov spaces generalize sectional curvature bounds, RCD spaces are a class of metric measure spaces which generalize lower bounds on Ricci curvature.[22]
A metric space is discrete if its induced topology is the discrete topology. Although many concepts, such as completeness and compactness, are not interesting for such spaces, they are nevertheless an object of study in several branches of mathematics. In particular, finite metric spaces (those having a finite number of points) are studied in combinatorics and theoretical computer science.[23] Embeddings in other metric spaces are particularly well-studied. For example, not every finite metric space can be isometrically embedded in a Euclidean space or in Hilbert space. On the other hand, in the worst case the required distortion (bilipschitz constant) is only logarithmic in the number of points.[24][25]
For any undirected connected graph G, the set V of vertices of G can be turned into a metric space by defining the distance between vertices x and y to be the length of the shortest edge path connecting them. This is also called shortest-path distance or geodesic distance. In geometric group theory this construction is applied to the Cayley graph of a (typically infinite) finitely-generated group, yielding the word metric. Up to a bilipschitz homeomorphism, the word metric depends only on the group and not on the chosen finite generating set.[15]
An important area of study in finite metric spaces is the embedding of complex metric spaces into simpler ones while controlling the distortion of distances. This is particularly useful in computer science and discrete mathematics, where algorithms often perform more efficiently on simpler structures like tree metrics.
A significant result in this area is that any finite metric space can be probabilistically embedded into a tree metric with an expected distortion of $O(\log n)$, where n is the number of points in the metric space.[26]
This embedding is notable because it achieves the best possible asymptotic bound on distortion, matching the lower bound of $\Omega(\log n)$. The tree metrics produced in this embedding dominate the original metrics, meaning that distances in the tree are greater than or equal to those in the original space. This property is particularly useful for designing approximation algorithms, as it allows for the preservation of distance-related properties while simplifying the underlying structure.
The result has significant implications for various computational problems:
The technique involves constructing a hierarchical decomposition of the original metric space and converting it into a tree metric via a randomized algorithm. The $O(\log n)$ distortion bound has led to improved approximation ratios in several algorithmic problems, demonstrating the practical significance of this theoretical result.
In modern mathematics, one often studies spaces whose points are themselves mathematical objects. A distance function on such a space generally aims to measure the dissimilarity between two objects. Here are some examples:
The idea of spaces of mathematical objects can also be applied to subsets of a metric space, as well as metric spaces themselves. The Hausdorff and Gromov–Hausdorff distances define metrics on the set of compact subsets of a metric space and the set of compact metric spaces, respectively.
Suppose (M, d) is a metric space, and let S be a subset of M. The distance from S to a point x of M is, informally, the distance from x to the closest point of S. However, since there may not be a single closest point, it is defined via an infimum: $d(x,S)=\inf\{d(x,s):s\in S\}$. In particular, $d(x,S)=0$ if and only if x belongs to the closure of S. Furthermore, distances between points and sets satisfy a version of the triangle inequality: $d(x,S)\leq d(x,y)+d(y,S)$, and therefore the map $d_S\colon M\to\mathbb{R}$ defined by $d_S(x)=d(x,S)$ is continuous. Incidentally, this shows that metric spaces are completely regular.
Given two subsets S and T of M, their Hausdorff distance is $d_H(S,T)=\max\{\sup\{d(s,T):s\in S\},\,\sup\{d(t,S):t\in T\}\}$. Informally, two sets S and T are close to each other in the Hausdorff distance if no element of S is too far from T and vice versa. For example, if S is an open set in Euclidean space and T is an ε-net inside S, then $d_H(S,T)<\varepsilon$. In general, the Hausdorff distance $d_H(S,T)$ can be infinite or zero. However, the Hausdorff distance between two distinct compact sets is always positive and finite. Thus the Hausdorff distance defines a metric on the set of compact subsets of M.
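For finite point sets, the Hausdorff distance can be computed directly from the definition. A short Python sketch (a brute-force O(|S||T|) scan; for large sets one would use a spatial index instead):

```python
# Hausdorff distance between two finite subsets of the plane.
import math

def hausdorff(S, T):
    d = math.dist                                   # Euclidean distance between points
    d_point_set = lambda p, A: min(d(p, a) for a in A)
    return max(max(d_point_set(s, T) for s in S),   # how far S strays from T
               max(d_point_set(t, S) for t in T))   # how far T strays from S

S = [(0, 0), (1, 0), (0, 1)]
T = [(0, 0), (2, 0)]
print(hausdorff(S, T))  # 1.0: every point of each set is within 1 of the other
```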
The Gromov–Hausdorff metric defines a distance between (isometry classes of) compact metric spaces. The Gromov–Hausdorff distance between compact spaces X and Y is the infimum of the Hausdorff distance over all metric spaces Z that contain X and Y as subspaces. While the exact value of the Gromov–Hausdorff distance is rarely useful to know, the resulting topology has found many applications.
If $(M_1,d_1),\ldots,(M_n,d_n)$ are metric spaces, and N is the Euclidean norm on $\mathbb{R}^n$, then $(M_1\times\cdots\times M_n,d_\times)$ is a metric space, where the product metric is defined by $d_\times((x_1,\ldots,x_n),(y_1,\ldots,y_n))=N(d_1(x_1,y_1),\ldots,d_n(x_n,y_n))$, and the induced topology agrees with the product topology. By the equivalence of norms in finite dimensions, a topologically equivalent metric is obtained if N is the taxicab norm, a p-norm, the maximum norm, or any other norm which is non-decreasing as the coordinates of a positive n-tuple increase (yielding the triangle inequality).
Similarly, a metric on the topological product of countably many metric spaces can be obtained using the metric $d(x,y)=\sum_{i=1}^{\infty}\frac{1}{2^i}\frac{d_i(x_i,y_i)}{1+d_i(x_i,y_i)}$.
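Since the i-th summand is at most $2^{-i}$, truncating the series after N terms changes the value by at most $2^{-N}$, so the distance can be evaluated to any desired accuracy. A small Python sketch (names are illustrative; sequences are modeled as functions $i\mapsto x_i$):

```python
# Truncated evaluation of the metric on a countable product of copies of R.
def product_metric(x, y, terms=50):
    total = 0.0
    for i in range(1, terms + 1):
        di = abs(x(i) - y(i))            # metric on the i-th factor
        total += (di / (1 + di)) / 2**i  # bounded summand, geometric weight
    return total

# Example: distance between the sequences x_i = 1/i and y_i = 0.
print(product_metric(lambda i: 1 / i, lambda i: 0.0))
```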
The topological product of uncountably many metric spaces need not be metrizable. For example, an uncountable product of copies of $\mathbb{R}$ is not first-countable and thus is not metrizable.
If M is a metric space with metric d, and $\sim$ is an equivalence relation on M, then we can endow the quotient set $M/{\sim}$ with a pseudometric. The distance between two equivalence classes [x] and [y] is defined as $d'([x],[y])=\inf\{d(p_1,q_1)+d(p_2,q_2)+\cdots+d(p_n,q_n)\}$, where the infimum is taken over all finite sequences $(p_1,\ldots,p_n)$ and $(q_1,\ldots,q_n)$ with $p_1\sim x$, $q_n\sim y$, and $q_i\sim p_{i+1}$ for $i=1,2,\ldots,n-1$.[30] In general this will only define a pseudometric, i.e. $d'([x],[y])=0$ does not necessarily imply that $[x]=[y]$. However, for some equivalence relations (e.g., those given by gluing together polyhedra along faces), d' is a metric.
The quotient metric d' is characterized by the following universal property. If $f\colon(M,d)\to(X,\delta)$ is a metric (i.e. 1-Lipschitz) map between metric spaces satisfying f(x) = f(y) whenever $x\sim y$, then the induced function $\overline{f}\colon M/{\sim}\to X$, given by $\overline{f}([x])=f(x)$, is a metric map $\overline{f}\colon(M/{\sim},d')\to(X,\delta)$.
The quotient metric does not always induce the quotient topology. For example, the topological quotient of the metric space $\mathbb{N}\times[0,1]$ identifying all points of the form (n, 0) is not metrizable since it is not first-countable, but the quotient metric is a well-defined metric on the same set which induces a coarser topology. Moreover, different metrics on the original topological space (a disjoint union of countably many intervals) lead to different topologies on the quotient.[31]
A topological space is sequential if and only if it is a (topological) quotient of a metric space.[32]
There are several notions of spaces which have less structure than a metric space, but more than a topological space.
There are also numerous ways of relaxing the axioms for a metric, giving rise to various notions of generalized metric spaces. These generalizations can also be combined. The terminology used to describe them is not completely standardized. Most notably, in functional analysis pseudometrics often come from seminorms on vector spaces, and so it is natural to call them "semimetrics". This conflicts with the use of the term in topology.
Some authors define metrics so as to allow the distance function d to attain the value ∞, i.e. distances are non-negative numbers on the extended real number line.[4] Such a function is also called an extended metric or "∞-metric". Every extended metric can be replaced by a real-valued metric that is topologically equivalent. This can be done using a subadditive monotonically increasing bounded function which is zero at zero, e.g. $d'(x,y)=d(x,y)/(1+d(x,y))$ or $d''(x,y)=\min(1,d(x,y))$.
The requirement that the metric take values in $[0,\infty)$ can be relaxed to consider metrics with values in other structures, including:
These generalizations still induce a uniform structure on the space.
A pseudometric on X is a function $d\colon X\times X\to\mathbb{R}$ which satisfies the axioms for a metric, except that instead of the second (identity of indiscernibles) only $d(x,x)=0$ for all x is required.[34] In other words, the axioms for a pseudometric are: $d(x,x)=0$; symmetry, $d(x,y)=d(y,x)$; and the triangle inequality, $d(x,z)\leq d(x,y)+d(y,z)$.
In some contexts, pseudometrics are referred to as semimetrics[35] because of their relation to seminorms.
Occasionally, a quasimetric is defined as a function that satisfies all axioms for a metric with the possible exception of symmetry.[36] The name of this generalisation is not entirely standardized.[37]
Quasimetrics are common in real life. For example, given a set X of mountain villages, the typical walking times between elements of X form a quasimetric because travel uphill takes longer than travel downhill. Another example is the length of car rides in a city with one-way streets: here, a shortest path from point A to point B goes along a different set of streets than a shortest path from B to A and may have a different length.
A quasimetric on the reals can be defined by setting $d(x,y)=x-y$ if $x\geq y$, and $d(x,y)=1$ otherwise. The 1 may be replaced, for example, by infinity, by $1+\sqrt{y-x}$, or by any other subadditive function of y − x. This quasimetric describes the cost of modifying a metal stick: it is easy to reduce its size by filing it down, but it is difficult or impossible to grow it.
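A direct Python transcription of this quasimetric, showing its asymmetry:

```python
# The "stick-filing" quasimetric: shrinking costs the amount removed,
# growing costs a flat 1, and d(x, x) = 0.
def d(x: float, y: float) -> float:
    return x - y if x >= y else 1.0

print(d(5.0, 3.0))  # 2.0 (filing down: proportional to the amount removed)
print(d(3.0, 5.0))  # 1.0 (growing: uniformly expensive)
print(d(4.0, 4.0))  # 0.0 (no modification needed)
```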
Given a quasimetric on X, one can define an R-ball around x to be the set $\{y\in X : d(x,y)\leq R\}$. As in the case of a metric, such balls form a basis for a topology on X, but this topology need not be metrizable. For example, the topology induced by the quasimetric on the reals described above is the (reversed) Sorgenfrey line.
In a metametric, all the axioms of a metric are satisfied except that the distance between identical points is not necessarily zero. In other words, the axioms for a metametric are: $d(x,y)\geq 0$; $d(x,y)=0$ implies $x=y$ (but not conversely); symmetry; and the triangle inequality.
Metametrics appear in the study of Gromov hyperbolic metric spaces and their boundaries. The visual metametric on such a space satisfies $d(x,x)=0$ for points x on the boundary, but otherwise $d(x,x)$ is approximately the distance from x to the boundary. Metametrics were first defined by Jussi Väisälä.[38] In other work, a function satisfying these axioms is called a partial metric[39][40] or a dislocated metric.[34]
A semimetric on X is a function $d\colon X\times X\to\mathbb{R}$ that satisfies the first three axioms ($d(x,x)=0$; positivity; symmetry), but not necessarily the triangle inequality.
Some authors work with a weaker form of the triangle inequality, such as the ρ-relaxed triangle inequality, $d(x,z)\leq\rho\,(d(x,y)+d(y,z))$, or the ρ-inframetric inequality, $d(x,z)\leq\rho\max\{d(x,y),d(y,z)\}$.
The ρ-inframetric inequality implies the ρ-relaxed triangle inequality (assuming the first axiom), and the ρ-relaxed triangle inequality implies the 2ρ-inframetric inequality. Semimetrics satisfying these equivalent conditions have sometimes been referred to as quasimetrics,[41] nearmetrics[42] or inframetrics.[43]
The ρ-inframetric inequalities were introduced to model round-trip delay times in the internet.[43] The triangle inequality implies the 2-inframetric inequality, and the ultrametric inequality is exactly the 1-inframetric inequality.
Relaxing the last three axioms leads to the notion of a premetric, i.e. a function satisfying the following conditions: $d(x,y)\geq 0$ for all x, y, and $d(x,x)=0$ for all x.
This is not a standard term. Sometimes it is used to refer to other generalizations of metrics such as pseudosemimetrics[44] or pseudometrics;[45] in translations of Russian books it sometimes appears as "prametric".[46] A premetric that satisfies symmetry, i.e. a pseudosemimetric, is also called a distance.[47]
Any premetric gives rise to a topology as follows. For a positive real r, the r-ball centered at a point p is defined as $B_r(p)=\{x : d(x,p)<r\}$.
A set is called open if for any point p in the set there is an r-ball centered at p which is contained in the set. Every premetric space is a topological space, and in fact a sequential space.
In general, the r-balls themselves need not be open sets with respect to this topology.
As for metrics, the distance between two sets A and B is defined as $d(A,B)=\inf\{d(x,y) : x\in A,\,y\in B\}$.
This defines a premetric on the power set of a premetric space. If we start with a (pseudosemi-)metric space, we get a pseudosemimetric, i.e. a symmetric premetric.
Any premetric gives rise to a preclosure operator cl as follows: $cl(A)=\{x : d(x,A)=0\}$.
The prefixes pseudo-, quasi- and semi- can also be combined, e.g., a pseudoquasimetric (sometimes called a hemimetric) relaxes both the indiscernibility axiom and the symmetry axiom and is simply a premetric satisfying the triangle inequality. For pseudoquasimetric spaces the open r-balls form a basis of open sets. A very basic example of a pseudoquasimetric space is the set {0, 1} with the premetric given by $d(0,1)=1$ and $d(1,0)=0$. The associated topological space is the Sierpiński space.
Sets equipped with an extended pseudoquasimetric were studied by William Lawvere as "generalized metric spaces".[48] From a categorical point of view, the extended pseudometric spaces and the extended pseudoquasimetric spaces, along with their corresponding nonexpansive maps, are the best behaved of the metric space categories. One can take arbitrary products and coproducts and form quotient objects within the given category. If one drops "extended", one can only take finite products and coproducts. If one drops "pseudo", one cannot take quotients.
Lawvere also gave an alternate definition of such spaces as enriched categories. The ordered set $(\mathbb{R},\geq)$ can be seen as a category with one morphism $a\to b$ if $a\geq b$ and none otherwise. Using + as the tensor product and 0 as the identity makes this category into a monoidal category $R^*$.
Every (extended pseudoquasi-)metric space (M, d) can now be viewed as a category $M^*$ enriched over $R^*$:
The notion of a metric can be generalized from a distance between two elements to a number assigned to a multiset of elements. A multiset is a generalization of the notion of a set in which an element can occur more than once. Define the multiset union $U=XY$ as follows: if an element x occurs m times in X and n times in Y then it occurs m + n times in U. A function d on the set of nonempty finite multisets of elements of a set M is a metric[49] if
By considering the cases of axioms 1 and 2 in which the multisetXhas two elements and the case of axiom 3 in which the multisetsX,Y, andZhave one element each, one recovers the usual axioms for a metric. That is, every multiset metric yields an ordinary metric when restricted to sets of two elements.
A simple example is the set of all nonempty finite multisets X of integers with $d(X)=\max(X)-\min(X)$. More complex examples are information distance in multisets[49] and normalized compression distance (NCD) in multisets.[50]
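A direct transcription of this simple multiset metric in Python, with multisets represented as plain lists (repetition allowed):

```python
# The max-minus-min metric on nonempty finite multisets of integers.
def d(X: list) -> int:
    return max(X) - min(X)

print(d([3]))           # 0: a singleton multiset has zero "spread"
print(d([1, 1, 4]))     # 3
print(d([2, 7, 7, 2]))  # 5
```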
https://en.wikipedia.org/wiki/Metric_space
In mathematics, the essential spectrum of a bounded operator (or, more generally, of a densely defined closed linear operator) is a certain subset of its spectrum, defined by a condition of the type that says, roughly speaking, "fails badly to be invertible".
In formal terms, let X be a Hilbert space and let T be a self-adjoint operator on X.
The essential spectrum of T, usually denoted $\sigma_{\mathrm{ess}}(T)$, is the set of all real numbers $\lambda\in\mathbb{R}$ such that $T-\lambda I_X$ is not a Fredholm operator, where $I_X$ denotes the identity operator on X, so that $I_X(x)=x$ for all $x\in X$.
(An operator is Fredholm if its kernel and cokernel are finite-dimensional.)
The definition of the essential spectrum $\sigma_{\mathrm{ess}}(T)$ will remain unchanged if we allow it to consist of all those complex numbers $\lambda\in\mathbb{C}$ (instead of just real numbers) such that the above condition holds. This is because the spectrum of a self-adjoint operator consists only of real numbers.
The essential spectrum is always closed, and it is a subset of the spectrum $\sigma(T)$. As mentioned above, since T is self-adjoint, the spectrum is contained in the real axis.
The essential spectrum is invariant under compact perturbations. That is, if K is a compact self-adjoint operator on X, then the essential spectra of T and of T + K coincide, i.e. $\sigma_{\mathrm{ess}}(T)=\sigma_{\mathrm{ess}}(T+K)$. This explains why it is called the essential spectrum: Weyl (1910) originally defined the essential spectrum of a certain differential operator to be the spectrum independent of boundary conditions.
Weyl's criterion is as follows. First, a number λ is in the spectrum $\sigma(T)$ of the operator T if and only if there exists a sequence $\{\psi_k\}_{k\in\mathbb{N}}\subseteq X$ in the Hilbert space X such that $\Vert\psi_k\Vert=1$ and $\lim_{k\to\infty}\Vert T\psi_k-\lambda\psi_k\Vert=0$.
Furthermore, λ is in the essential spectrum if there is a sequence satisfying this condition which contains no convergent subsequence (this is the case if, for example, $\{\psi_k\}_{k\in\mathbb{N}}$ is an orthonormal sequence); such a sequence is called a singular sequence. Equivalently, λ is in the essential spectrum $\sigma_{\mathrm{ess}}(T)$ if there exists a sequence satisfying the above condition which also converges weakly to the zero vector $\mathbf{0}_X$ in X.
The essential spectrum $\sigma_{\mathrm{ess}}(T)$ is a subset of the spectrum $\sigma(T)$ and its complement is called the discrete spectrum, so $\sigma_{\mathrm{disc}}(T)=\sigma(T)\setminus\sigma_{\mathrm{ess}}(T)$.
If T is self-adjoint, then, by definition, a number λ is in the discrete spectrum $\sigma_{\mathrm{disc}}(T)$ of T if it is an isolated eigenvalue of finite multiplicity, meaning that the eigenspace $\{\psi\in X : T\psi=\lambda\psi\}$ has finite but non-zero dimension and that there is an ε > 0 such that $\mu\in\sigma(T)$ and $|\mu-\lambda|<\varepsilon$ imply that μ and λ are equal.
(For general, non-self-adjoint operators S on Banach spaces, by definition, a complex number $\lambda\in\mathbb{C}$ is in the discrete spectrum $\sigma_{\mathrm{disc}}(S)$ if it is a normal eigenvalue; or, equivalently, if it is an isolated point of the spectrum and the rank of the corresponding Riesz projector is finite.)
Let X be a Banach space and let $T\colon D(T)\to X$ be a closed linear operator on X with dense domain D(T). There are several definitions of the essential spectrum, which are not equivalent.[1]
Each of the above-defined essential spectra $\sigma_{\mathrm{ess},k}(T)$, $1\leq k\leq 5$, is closed. Furthermore, $\sigma_{\mathrm{ess},1}(T)\subseteq\sigma_{\mathrm{ess},2}(T)\subseteq\sigma_{\mathrm{ess},3}(T)\subseteq\sigma_{\mathrm{ess},4}(T)\subseteq\sigma_{\mathrm{ess},5}(T)$,
and any of these inclusions may be strict. For self-adjoint operators, all the above definitions of the essential spectrum coincide.
Define the radius of the essential spectrum by $r_{\mathrm{ess},k}(T)=\max\{|\lambda| : \lambda\in\sigma_{\mathrm{ess},k}(T)\}$.
Even though the spectra may be different, the radius is the same for all $k=1,2,3,4,5$.
The definition of the set $\sigma_{\mathrm{ess},2}(T)$ is equivalent to Weyl's criterion: $\sigma_{\mathrm{ess},2}(T)$ is the set of all λ for which there exists a singular sequence.
The essential spectrum $\sigma_{\mathrm{ess},k}(T)$ is invariant under compact perturbations for $k=1,2,3,4$, but not for $k=5$.
The set $\sigma_{\mathrm{ess},4}(T)$ gives the part of the spectrum that is independent of compact perturbations, that is, $\sigma_{\mathrm{ess},4}(T)=\bigcap_{K\in B_0(X)}\sigma(T+K)$, where $B_0(X)$ denotes the set of compact operators on X (D.E. Edmunds and W.D. Evans, 1987).
The spectrum of a closed, densely defined operator T can be decomposed into a disjoint union $\sigma(T)=\sigma_{\mathrm{ess},5}(T)\sqcup\sigma_{\mathrm{disc}}(T)$, where $\sigma_{\mathrm{disc}}(T)$ is the discrete spectrum of T.
https://en.wikipedia.org/wiki/Essential_spectrum
The following is a list of mobile telecommunications networks using third-generation Universal Mobile Telecommunications System (UMTS) technology. This list does not aim to cover all networks; instead it focuses on networks deployed on frequencies other than 2100 MHz, which is commonly deployed around the globe, and on multiband deployments.
Networks in Europe, the Middle East and Africa are exclusively deployed on 2100 MHz (Band 1) and/or 900 MHz (Band 8).
Networks in this region are commonly deployed on 850 MHz (Band 5) and/or 1900 MHz (Band 2) unless denoted otherwise.
Networks in Asia are commonly deployed on 2100 MHz (Band 1) unless denoted otherwise.
https://en.wikipedia.org/wiki/List_of_UMTS_networks
In theoretical computer science and mathematics, computational complexity theory focuses on classifying computational problems according to their resource usage, and explores the relationships between these classifications. A computational problem is a task solved by a computer; it is solvable by mechanical application of mathematical steps, such as an algorithm.
A problem is regarded as inherently difficult if its solution requires significant resources, whatever the algorithm used. The theory formalizes this intuition by introducing mathematical models of computation to study these problems and quantifying their computational complexity, i.e., the amount of resources needed to solve them, such as time and storage. Other measures of complexity are also used, such as the amount of communication (used in communication complexity), the number of gates in a circuit (used in circuit complexity) and the number of processors (used in parallel computing). One of the roles of computational complexity theory is to determine the practical limits on what computers can and cannot do. The P versus NP problem, one of the seven Millennium Prize Problems,[1] is part of the field of computational complexity.
Closely related fields in theoretical computer science are analysis of algorithms and computability theory. A key distinction between analysis of algorithms and computational complexity theory is that the former is devoted to analyzing the amount of resources needed by a particular algorithm to solve a problem, whereas the latter asks a more general question about all possible algorithms that could be used to solve the same problem. More precisely, computational complexity theory tries to classify problems that can or cannot be solved with appropriately restricted resources. In turn, imposing restrictions on the available resources is what distinguishes computational complexity from computability theory: the latter theory asks what kinds of problems can, in principle, be solved algorithmically.
Acomputational problemcan be viewed as an infinite collection ofinstancestogether with a set (possibly empty) ofsolutionsfor every instance. The input string for a computational problem is referred to as a problem instance, and should not be confused with the problem itself. In computational complexity theory, a problem refers to the abstract question to be solved. In contrast, an instance of this problem is a rather concrete utterance, which can serve as the input for a decision problem. For example, consider the problem ofprimality testing. The instance is a number (e.g., 15) and the solution is "yes" if the number is prime and "no" otherwise (in this case, 15 is not prime and the answer is "no"). Stated another way, theinstanceis a particular input to the problem, and thesolutionis the output corresponding to the given input.
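To make the instance/solution distinction concrete, here is a minimal Python decider for the primality-testing problem; trial division and the function name is_prime are chosen here for brevity, not efficiency:

```python
def is_prime(n: int) -> bool:
    """Decide the primality-testing problem for a single instance n."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:          # trial division up to sqrt(n)
        if n % d == 0:
            return False
        d += 1
    return True

print(is_prime(15))  # False: 15 = 3 * 5, so the answer to this instance is "no"
```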
To further highlight the difference between a problem and an instance, consider the following instance of the decision version of the travelling salesman problem: Is there a route of at most 2000 kilometres passing through all of Germany's 14 largest cities? The quantitative answer to this particular problem instance is of little use for solving other instances of the problem, such as asking for a round trip through all sites in Milan whose total length is at most 10 km. For this reason, complexity theory addresses computational problems and not particular problem instances.
When considering computational problems, a problem instance is a string over an alphabet. Usually, the alphabet is taken to be the binary alphabet (i.e., the set {0,1}), and thus the strings are bitstrings. As in a real-world computer, mathematical objects other than bitstrings must be suitably encoded. For example, integers can be represented in binary notation, and graphs can be encoded directly via their adjacency matrices, or by encoding their adjacency lists in binary.
Even though some proofs of complexity-theoretic theorems regularly assume some concrete choice of input encoding, one tries to keep the discussion abstract enough to be independent of the choice of encoding. This can be achieved by ensuring that different representations can be transformed into each other efficiently.
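As an illustration of such an encoding, the following Python sketch (the helper name adjacency_matrix_bits is invented for this example) turns an undirected graph into the row-major bitstring of its adjacency matrix:

```python
import itertools

def adjacency_matrix_bits(n: int, edges: set[tuple[int, int]]) -> str:
    """Encode an undirected graph on vertices 0..n-1 as the row-major
    bitstring of its adjacency matrix."""
    def bit(i: int, j: int) -> str:
        return "1" if (i, j) in edges or (j, i) in edges else "0"
    return "".join(bit(i, j) for i, j in itertools.product(range(n), repeat=2))

# A triangle on 3 vertices becomes a 9-bit string.
print(adjacency_matrix_bits(3, {(0, 1), (1, 2), (0, 2)}))  # 011101110
```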
Decision problems are one of the central objects of study in computational complexity theory. A decision problem is a type of computational problem where the answer is either yes or no (alternatively, 1 or 0). A decision problem can be viewed as a formal language, where the members of the language are instances whose output is yes, and the non-members are those instances whose output is no. The objective is to decide, with the aid of an algorithm, whether a given input string is a member of the formal language under consideration. If the algorithm deciding this problem returns the answer yes, the algorithm is said to accept the input string, otherwise it is said to reject the input.
An example of a decision problem is the following. The input is an arbitrary graph. The problem consists in deciding whether the given graph is connected or not. The formal language associated with this decision problem is then the set of all connected graphs — to obtain a precise definition of this language, one has to decide how graphs are encoded as binary strings.
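A sketch of one such decider in Python, using breadth-first search over an edge-list representation (one convenient encoding among several):

```python
from collections import deque

def is_connected(n: int, edges: list[tuple[int, int]]) -> bool:
    """Decide whether an undirected graph on vertices 0..n-1 is connected."""
    if n == 0:
        return True
    adj = [[] for _ in range(n)]
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    seen = {0}
    queue = deque([0])
    while queue:                     # breadth-first search from vertex 0
        u = queue.popleft()
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                queue.append(v)
    return len(seen) == n            # "yes" iff every vertex was reached

print(is_connected(4, [(0, 1), (1, 2)]))          # False: vertex 3 is isolated
print(is_connected(4, [(0, 1), (1, 2), (2, 3)]))  # True
```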
A function problem is a computational problem where a single output (of a total function) is expected for every input, but the output is more complex than that of a decision problem — that is, the output is not just yes or no. Notable examples include the traveling salesman problem and the integer factorization problem.
It is tempting to think that the notion of function problems is much richer than the notion of decision problems. However, this is not really the case, since function problems can be recast as decision problems. For example, the multiplication of two integers can be expressed as the set of triples (a, b, c) such that the relation a × b = c holds. Deciding whether a given triple is a member of this set corresponds to solving the problem of multiplying two numbers.
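A minimal Python sketch of this recasting (the name multiplies_to is invented for illustration):

```python
def multiplies_to(a: int, b: int, c: int) -> bool:
    """Decide membership of (a, b, c) in the language {(a, b, c) : a * b = c}."""
    return a * b == c

print(multiplies_to(6, 7, 42))  # True: (6, 7, 42) is in the language
print(multiplies_to(6, 7, 41))  # False
```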
To measure the difficulty of solving a computational problem, one may wish to see how much time the best algorithm requires to solve the problem. However, the running time may, in general, depend on the instance. In particular, larger instances will require more time to solve. Thus the time required to solve a problem (or the space required, or any measure of complexity) is calculated as a function of the size of the instance. The input size is typically measured in bits. Complexity theory studies how algorithms scale as input size increases. For instance, in the problem of finding whether a graph is connected, how much more time does it take to solve a problem for a graph with 2n vertices compared to the time taken for a graph with n vertices?
If the input size is n, the time taken can be expressed as a function of n. Since the time taken on different inputs of the same size can be different, the worst-case time complexity T(n) is defined to be the maximum time taken over all inputs of size n. If T(n) is a polynomial in n, then the algorithm is said to be a polynomial time algorithm. Cobham's thesis argues that a problem can be solved with a feasible amount of resources if it admits a polynomial-time algorithm.
A Turing machine is a mathematical model of a general computing machine. It is a theoretical device that manipulates symbols contained on a strip of tape. Turing machines are not intended as a practical computing technology, but rather as a general model of a computing machine, anything from an advanced supercomputer to a mathematician with a pencil and paper. It is believed that if a problem can be solved by an algorithm, there exists a Turing machine that solves the problem. Indeed, this is the statement of the Church–Turing thesis. Furthermore, it is known that everything that can be computed on other models of computation known to us today, such as a RAM machine, Conway's Game of Life, cellular automata, lambda calculus or any programming language, can be computed on a Turing machine. Since Turing machines are easy to analyze mathematically, and are believed to be as powerful as any other model of computation, the Turing machine is the most commonly used model in complexity theory.
Many types of Turing machines are used to define complexity classes, such as deterministic Turing machines, probabilistic Turing machines, non-deterministic Turing machines, quantum Turing machines, symmetric Turing machines and alternating Turing machines. They are all equally powerful in principle, but when resources (such as time or space) are bounded, some of these may be more powerful than others.
A deterministic Turing machine is the most basic Turing machine, which uses a fixed set of rules to determine its future actions. A probabilistic Turing machine is a deterministic Turing machine with an extra supply of random bits. The ability to make probabilistic decisions often helps algorithms solve problems more efficiently. Algorithms that use random bits are called randomized algorithms. A non-deterministic Turing machine is a deterministic Turing machine with an added feature of non-determinism, which allows a Turing machine to have multiple possible future actions from a given state. One way to view non-determinism is that the Turing machine branches into many possible computational paths at each step, and if it solves the problem in any of these branches, it is said to have solved the problem. Clearly, this model is not meant to be a physically realizable model; it is just a theoretically interesting abstract machine that gives rise to particularly interesting complexity classes. For examples, see non-deterministic algorithm.
Many machine models different from the standard multi-tape Turing machines have been proposed in the literature, for example random-access machines. Perhaps surprisingly, each of these models can be converted to another without providing any extra computational power. The time and memory consumption of these alternate models may vary.[2] What all these models have in common is that the machines operate deterministically.
However, some computational problems are easier to analyze in terms of more unusual resources. For example, a non-deterministic Turing machine is a computational model that is allowed to branch out to check many different possibilities at once. The non-deterministic Turing machine has very little to do with how we physically want to compute algorithms, but its branching exactly captures many of the mathematical models we want to analyze, so that non-deterministic time is a very important resource in analyzing computational problems.
For a precise definition of what it means to solve a problem using a given amount of time and space, a computational model such as the deterministic Turing machine is used. The time required by a deterministic Turing machine M on input x is the total number of state transitions, or steps, the machine makes before it halts and outputs the answer ("yes" or "no"). A Turing machine M is said to operate within time f(n) if the time required by M on each input of length n is at most f(n). A decision problem A can be solved in time f(n) if there exists a Turing machine operating in time f(n) that solves the problem. Since complexity theory is interested in classifying problems based on their difficulty, one defines sets of problems based on some criteria. For instance, the set of problems solvable within time f(n) on a deterministic Turing machine is then denoted by DTIME(f(n)).
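In set-builder form, the class just described can be written as follows (a standard formulation, restated here in the notation above):

DTIME(f(n)) = { L : L is a decision problem solvable by some deterministic Turing machine operating within time f(n) }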
Analogous definitions can be made for space requirements. Although time and space are the most well-known complexity resources, any complexity measure can be viewed as a computational resource. Complexity measures are very generally defined by the Blum complexity axioms. Other complexity measures used in complexity theory include communication complexity, circuit complexity, and decision tree complexity.
The complexity of an algorithm is often expressed using big O notation.
The best, worst and average case complexity refer to three different ways of measuring the time complexity (or any other complexity measure) of different inputs of the same size. Since some inputs of size n may be faster to solve than others, we define the following complexities:
The order from cheap to costly is: best, average (of discrete uniform distribution), amortized, worst.
For example, the deterministic sorting algorithm quicksort addresses the problem of sorting a list of integers. The worst case is when the pivot is always the largest or smallest value in the list (so the list is never divided). In this case, the algorithm takes time O(n²). If we assume that all possible permutations of the input list are equally likely, the average time taken for sorting is O(n log n). The best case occurs when each pivoting divides the list in half, also needing O(n log n) time.
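The cases can be observed directly; the following is a minimal Python sketch, written as a simple list-comprehension quicksort for clarity rather than in-place efficiency:

```python
import random

def quicksort(xs: list[int]) -> list[int]:
    """Deterministic quicksort using the first element as pivot."""
    if len(xs) <= 1:
        return xs
    pivot, rest = xs[0], xs[1:]
    left = [x for x in rest if x < pivot]
    right = [x for x in rest if x >= pivot]
    return quicksort(left) + [pivot] + quicksort(right)

# Worst case: on an already-sorted list the first element is the minimum at
# every step, so one side of each partition is empty -> O(n^2) behaviour.
worst = list(range(100))
# Average case: a random permutation splits roughly in half -> O(n log n).
average = random.sample(range(100), 100)
print(quicksort(worst) == quicksort(average) == sorted(worst))  # True
```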
To classify the computation time (or similar resources, such as space consumption), it is helpful to demonstrate upper and lower bounds on the maximum amount of time required by the most efficient algorithm to solve a given problem. The complexity of an algorithm is usually taken to be its worst-case complexity unless specified otherwise. Analyzing a particular algorithm falls under the field of analysis of algorithms. To show an upper bound T(n) on the time complexity of a problem, one needs to show only that there is a particular algorithm with running time at most T(n). However, proving lower bounds is much more difficult, since lower bounds make a statement about all possible algorithms that solve a given problem. The phrase "all possible algorithms" includes not just the algorithms known today, but any algorithm that might be discovered in the future. To show a lower bound of T(n) for a problem requires showing that no algorithm can have time complexity lower than T(n).
Upper and lower bounds are usually stated using the big O notation, which hides constant factors and smaller terms. This makes the bounds independent of the specific details of the computational model used. For instance, if T(n) = 7n² + 15n + 40, in big O notation one would write T(n) ∈ O(n²).
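To verify the claim, bound each term by a multiple of n²: for all n ≥ 1,

7n² + 15n + 40 ≤ 7n² + 15n² + 40n² = 62n²,

so T(n) ≤ 62n² for n ≥ 1, which is exactly the statement T(n) ∈ O(n²) with witness constants c = 62 and n₀ = 1.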
A complexity class is a set of problems of related complexity. Simpler complexity classes are defined by the following factors:
Thus, a typical complexity class has a definition like the following (some complexity classes have complicated definitions that do not fit into this framework):
But bounding the computation time above by some concrete function f(n) often yields complexity classes that depend on the chosen machine model. For instance, the language {xx | x is any binary string} can be solved in linear time on a multi-tape Turing machine, but necessarily requires quadratic time in the model of single-tape Turing machines. If we allow polynomial variations in running time, the Cobham–Edmonds thesis states that "the time complexities in any two reasonable and general models of computation are polynomially related" (Goldreich 2008, Chapter 1.2). This forms the basis for the complexity class P, which is the set of decision problems solvable by a deterministic Turing machine within polynomial time. The corresponding set of function problems is FP.
Many important complexity classes can be defined by bounding the time or space used by the algorithm. Some important complexity classes of decision problems defined in this manner are the following:
Logarithmic-space classes do not account for the space required to represent the problem.
It turns out that PSPACE = NPSPACE and EXPSPACE = NEXPSPACE by Savitch's theorem.
Other important complexity classes include BPP, ZPP and RP, which are defined using probabilistic Turing machines; AC and NC, which are defined using Boolean circuits; and BQP and QMA, which are defined using quantum Turing machines. #P is an important complexity class of counting problems (not decision problems). Classes like IP and AM are defined using interactive proof systems. ALL is the class of all decision problems.
For the complexity classes defined in this way, it is desirable to prove that relaxing the requirements on (say) computation time indeed defines a bigger set of problems. In particular, although DTIME(n) is contained in DTIME(n²), it would be interesting to know if the inclusion is strict. For time and space requirements, the answer to such questions is given by the time and space hierarchy theorems respectively. They are called hierarchy theorems because they induce a proper hierarchy on the classes defined by constraining the respective resources. Thus there are pairs of complexity classes such that one is properly included in the other. Having deduced such proper set inclusions, we can proceed to make quantitative statements about how much more additional time or space is needed in order to increase the number of problems that can be solved.
More precisely, the time hierarchy theorem states that DTIME(o(f(n))) ⊊ DTIME(f(n) · log(f(n))).
The space hierarchy theorem states that DSPACE(o(f(n))) ⊊ DSPACE(f(n)).
The time and space hierarchy theorems form the basis for most separation results of complexity classes. For instance, the time hierarchy theorem tells us that P is strictly contained in EXPTIME, and the space hierarchy theorem tells us that L is strictly contained in PSPACE.
Many complexity classes are defined using the concept of a reduction. A reduction is a transformation of one problem into another problem. It captures the informal notion of a problem being at most as difficult as another problem. For instance, if a problem X can be solved using an algorithm for Y, X is no more difficult than Y, and we say that X reduces to Y. There are many different types of reductions, based on the method of reduction, such as Cook reductions, Karp reductions and Levin reductions, and the bound on the complexity of reductions, such as polynomial-time reductions or log-space reductions.
The most commonly used reduction is a polynomial-time reduction. This means that the reduction process takes polynomial time. For example, the problem of squaring an integer can be reduced to the problem of multiplying two integers. This means an algorithm for multiplying two integers can be used to square an integer. Indeed, this can be done by giving the same input to both inputs of the multiplication algorithm. Thus we see that squaring is not more difficult than multiplication, since squaring can be reduced to multiplication.
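A sketch of this reduction in Python, with multiply standing in for any multiplication algorithm (both names are illustrative):

```python
def multiply(a: int, b: int) -> int:
    """Stand-in for any algorithm that multiplies two integers."""
    return a * b

def square(x: int) -> int:
    """Reduce squaring to multiplication: feed x to both inputs."""
    return multiply(x, x)

print(square(12))  # 144
```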
This motivates the concept of a problem being hard for a complexity class. A problem X is hard for a class of problems C if every problem in C can be reduced to X. Thus no problem in C is harder than X, since an algorithm for X allows us to solve any problem in C. The notion of hard problems depends on the type of reduction being used. For complexity classes larger than P, polynomial-time reductions are commonly used. In particular, the set of problems that are hard for NP is the set of NP-hard problems.
If a problem X is in C and hard for C, then X is said to be complete for C. This means that X is the hardest problem in C. (Since many problems could be equally hard, one might say that X is one of the hardest problems in C.) Thus the class of NP-complete problems contains the most difficult problems in NP, in the sense that they are the ones most likely not to be in P. Because the problem P = NP is not solved, being able to reduce a known NP-complete problem, Π₂, to another problem, Π₁, would indicate that there is no known polynomial-time solution for Π₁. This is because a polynomial-time solution to Π₁ would yield a polynomial-time solution to Π₂. Similarly, because all NP problems can be reduced to the set, finding an NP-complete problem that can be solved in polynomial time would mean that P = NP.[3]
The complexity class P is often seen as a mathematical abstraction modeling those computational tasks that admit an efficient algorithm. This hypothesis is called the Cobham–Edmonds thesis. The complexity class NP, on the other hand, contains many problems that people would like to solve efficiently, but for which no efficient algorithm is known, such as the Boolean satisfiability problem, the Hamiltonian path problem and the vertex cover problem. Since deterministic Turing machines are special non-deterministic Turing machines, it is easily observed that each problem in P is also a member of the class NP.
The question of whether P equals NP is one of the most important open questions in theoretical computer science because of the wide implications of a solution.[3] If the answer is yes, many important problems can be shown to have more efficient solutions. These include various types of integer programming problems in operations research, many problems in logistics, protein structure prediction in biology,[5] and the ability to find formal proofs of pure mathematics theorems.[6] The P versus NP problem is one of the Millennium Prize Problems proposed by the Clay Mathematics Institute. There is a US$1,000,000 prize for resolving the problem.[7]
It was shown by Ladner that if P ≠ NP then there exist problems in NP that are neither in P nor NP-complete.[4] Such problems are called NP-intermediate problems. The graph isomorphism problem, the discrete logarithm problem and the integer factorization problem are examples of problems believed to be NP-intermediate. They are some of the very few NP problems not known to be in P or to be NP-complete.
The graph isomorphism problem is the computational problem of determining whether two finite graphs are isomorphic. An important unsolved problem in complexity theory is whether the graph isomorphism problem is in P, NP-complete, or NP-intermediate. The answer is not known, but it is believed that the problem is at least not NP-complete.[8] If graph isomorphism is NP-complete, the polynomial time hierarchy collapses to its second level.[9] Since it is widely believed that the polynomial hierarchy does not collapse to any finite level, it is believed that graph isomorphism is not NP-complete. The best algorithm for this problem, due to László Babai and Eugene Luks, has run time O(2^√(n log n)) for graphs with n vertices, although some recent work by Babai offers some potentially new perspectives on this.[10]
The integer factorization problem is the computational problem of determining the prime factorization of a given integer. Phrased as a decision problem, it is the problem of deciding whether the input has a prime factor less than k. No efficient integer factorization algorithm is known, and this fact forms the basis of several modern cryptographic systems, such as the RSA algorithm. The integer factorization problem is in NP and in co-NP (and even in UP and co-UP[11]). If the problem is NP-complete, the polynomial time hierarchy will collapse to its first level (i.e., NP will equal co-NP). The best known algorithm for integer factorization is the general number field sieve, which takes time O(exp((64/9)^(1/3) · (log n)^(1/3) · (log log n)^(2/3)))[12] to factor an odd integer n. However, the best known quantum algorithm for this problem, Shor's algorithm, does run in polynomial time. Unfortunately, this fact doesn't say much about where the problem lies with respect to non-quantum complexity classes.
Many known complexity classes are suspected to be unequal, but this has not been proved. For instance P ⊆ NP ⊆ PP ⊆ PSPACE, but it is possible that P = PSPACE. If P is not equal to NP, then P is not equal to PSPACE either. Since there are many known complexity classes between P and PSPACE, such as RP, BPP, PP, BQP, MA, PH, etc., it is possible that all these complexity classes collapse to one class. Proving that any of these classes are unequal would be a major breakthrough in complexity theory.
Along the same lines, co-NP is the class containing the complement problems (i.e. problems with the yes/no answers reversed) of NP problems. It is believed[13] that NP is not equal to co-NP; however, it has not yet been proven. It is clear that if these two complexity classes are not equal then P is not equal to NP, since P = co-P. Thus if P = NP we would have co-P = co-NP, whence NP = P = co-P = co-NP.
Similarly, it is not known if L (the set of all problems that can be solved in logarithmic space) is strictly contained in P or equal to P. Again, there are many complexity classes between the two, such as NL and NC, and it is not known if they are distinct or equal classes.
It is suspected that P and BPP are equal. However, it is currently open if BPP = NEXP.
A problem that can theoretically be solved, but that requires impractically large (though finite) resources such as time to do so, is known as an intractable problem.[14] Conversely, a problem that can be solved in practice is called a tractable problem, literally "a problem that can be handled". The term infeasible (literally "cannot be done") is sometimes used interchangeably with intractable,[15] though this risks confusion with a feasible solution in mathematical optimization.[16]
Tractable problems are frequently identified with problems that have polynomial-time solutions (P, PTIME); this is known as the Cobham–Edmonds thesis. Problems that are known to be intractable in this sense include those that are EXPTIME-hard. If NP is not the same as P, then NP-hard problems are also intractable in this sense.
However, this identification is inexact: a polynomial-time solution with large degree or large leading coefficient grows quickly, and may be impractical for problems of practical size; conversely, an exponential-time solution that grows slowly may be practical on realistic input, and a solution that takes a long time in the worst case may take a short time in most cases or in the average case, and thus still be practical. Saying that a problem is not in P does not imply that all large cases of the problem are hard, or even that most of them are. For example, the decision problem in Presburger arithmetic has been shown not to be in P, yet algorithms have been written that solve the problem in reasonable times in most cases. Similarly, algorithms can solve the NP-complete knapsack problem over a wide range of sizes in less than quadratic time, and SAT solvers routinely handle large instances of the NP-complete Boolean satisfiability problem.
To see why exponential-time algorithms are generally unusable in practice, consider a program that makes 2^n operations before halting. For small n, say 100, and assuming for the sake of example that the computer does 10^12 operations each second, the program would run for about 4 × 10^10 years, which is the same order of magnitude as the age of the universe. Even with a much faster computer, the program would only be useful for very small instances, and in that sense the intractability of a problem is somewhat independent of technological progress. However, an exponential-time algorithm that takes 1.0001^n operations is practical until n gets relatively large.
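The arithmetic behind the quoted figure is easy to reproduce; a short Python check, using the assumptions stated in the text:

```python
ops = 2**100                 # operations performed by the program for n = 100
rate = 10**12                # assumed operations per second
seconds_per_year = 3.15e7    # roughly 365 * 24 * 3600
years = ops / rate / seconds_per_year
print(f"{years:.1e}")        # about 4.0e+10 years, as stated in the text
```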
Similarly, a polynomial time algorithm is not always practical. If its running time is, say, n^15, it is unreasonable to consider it efficient and it is still useless except on small instances. Indeed, in practice even n³ or n² algorithms are often impractical on realistic sizes of problems.
Continuous complexity theory can refer to complexity theory of problems that involve continuous functions that are approximated by discretizations, as studied in numerical analysis. One approach to complexity theory of numerical analysis[17] is information-based complexity.
Continuous complexity theory can also refer to complexity theory of the use of analog computation, which uses continuous dynamical systems and differential equations.[18] Control theory can be considered a form of computation, and differential equations are used in the modelling of continuous-time and hybrid discrete-continuous-time systems.[19]
An early example of algorithm complexity analysis is the running time analysis of the Euclidean algorithm done by Gabriel Lamé in 1844.
Before the actual research explicitly devoted to the complexity of algorithmic problems started off, numerous foundations were laid out by various researchers. Most influential among these was the definition of Turing machines by Alan Turing in 1936, which turned out to be a very robust and flexible simplification of a computer.
The beginning of systematic studies in computational complexity is attributed to the seminal 1965 paper "On the Computational Complexity of Algorithms" by Juris Hartmanis and Richard E. Stearns, which laid out the definitions of time complexity and space complexity, and proved the hierarchy theorems.[20] In addition, in 1965 Edmonds suggested to consider a "good" algorithm to be one with running time bounded by a polynomial of the input size.[21]
Earlier papers studying problems solvable by Turing machines with specific bounded resources include[20] John Myhill's definition of linear bounded automata (Myhill 1960), Raymond Smullyan's study of rudimentary sets (1961), as well as Hisao Yamada's paper[22] on real-time computations (1962). Somewhat earlier, Boris Trakhtenbrot (1956), a pioneer in the field from the USSR, studied another specific complexity measure.[23] As he remembers:
However, [my] initial interest [in automata theory] was increasingly set aside in favor of computational complexity, an exciting fusion of combinatorial methods, inherited from switching theory, with the conceptual arsenal of the theory of algorithms. These ideas had occurred to me earlier in 1955 when I coined the term "signalizing function", which is nowadays commonly known as "complexity measure".[24]
In 1967, Manuel Blum formulated a set of axioms (now known as Blum axioms) specifying desirable properties of complexity measures on the set of computable functions and proved an important result, the so-called speed-up theorem. The field began to flourish in 1971 when Stephen Cook and Leonid Levin proved the existence of practically relevant problems that are NP-complete. In 1972, Richard Karp took this idea a leap forward with his landmark paper, "Reducibility Among Combinatorial Problems", in which he showed that 21 diverse combinatorial and graph theoretical problems, each infamous for its computational intractability, are NP-complete.[25]
|
https://en.wikipedia.org/wiki/Computational_intractability
|
The sociology of the Internet (or the social psychology of the internet) involves the application of sociological or social psychological theory and method to the Internet as a source of information and communication. The overlapping field of digital sociology focuses on understanding the use of digital media as part of everyday life, and how these various technologies contribute to patterns of human behavior, social relationships, and concepts of the self. Sociologists are concerned with the social implications of the technology: new social networks, virtual communities and ways of interaction that have arisen, as well as issues related to cyber crime.
The Internet, the newest in a series of major information breakthroughs, is of interest for sociologists in various ways: as a tool for research, for example, in using online questionnaires instead of paper ones, as a discussion platform, and as a research topic. The sociology of the Internet in the stricter sense concerns the analysis of online communities (e.g. as found in newsgroups), virtual communities and virtual worlds, organizational change catalyzed through new media such as the Internet, and social change at large in the transformation from industrial to informational society (or to information society). Online communities can be studied statistically through network analysis and at the same time interpreted qualitatively, such as through virtual ethnography. Social change can be studied through statistical demographics or through the interpretation of changing messages and symbols in online media studies.
The Internet is a relatively new phenomenon. As Robert Darnton wrote, it is a revolutionary change that "took place yesterday, or the day before, depending on how you measure it."[1] The Internet developed from the ARPANET, dating back to 1969; as a term it was coined in 1974. The World Wide Web as we know it was shaped in the mid-1990s, when graphical interfaces and services like email became popular and reached wider (non-scientific and non-military) audiences and commerce.[1][2] Internet Explorer was first released in 1995; Netscape a year earlier. Google was founded in 1998.[1][2] Wikipedia was founded in 2001; Facebook, MySpace, and YouTube in the mid-2000s. Web 2.0 is still emerging. The amount of information available on the net and the number of Internet users worldwide has continued to grow rapidly.[2] The term 'digital sociology' is now increasingly used to denote new directions in sociological research into digital technologies since Web 2.0.
The first scholarly article to have the term digital sociology in the title appeared in 2009.[3] The author reflects on the ways in which digital technologies may influence both sociological research and teaching. In 2010, 'digital sociology' was described, by Richard Neal, in terms of bridging the growing academic focus with the increasing interest from global business.[4] It was not until 2013 that the first purely academic book tackling the subject of 'digital sociology' was published.[5] The first sole-authored book entitled Digital Sociology was published in 2015,[6] and the first academic conference on "Digital Sociology" was held in New York, NY in the same year.[7]
Although the term digital sociology has not yet fully entered the cultural lexicon, sociologists have engaged in research related to the Internet since its inception. These sociologists have addressed many social issues relating to online communities, cyberspace and cyber-identities. This and similar research has attracted many different names such as cyber-sociology, the sociology of the internet, the sociology of online communities, the sociology of social media, the sociology of cyberculture, or something else again.
Digital sociology differs from these terms in that it is wider in its scope, addressing not only the Internet or cyberculture but also the impact of the other digital media and devices that have emerged since the first decade of the twenty-first century. Since the Internet has become more pervasive and linked with everyday life, references to the 'cyber' in the social sciences seem now to have been replaced by the 'digital'. 'Digital sociology' is related to other sub-disciplines such as digital humanities and digital anthropology. It is beginning to supersede and incorporate the other titles above, as well as including the newest Web 2.0 digital technologies into its purview, such as wearable technology, augmented reality, smart objects, the Internet of Things and big data.
According to DiMaggio et al. (1999),[2] research tends to focus on the Internet's implications in five domains:
Early on, there were predictions that the Internet would change everything (or nothing); over time, however, a consensus emerged that the Internet, at least in the current phase of development, complements rather than displaces previously implemented media.[2] This has meant a rethinking of the 1990s ideas of "convergence of new and old media". Further, the Internet offers a rare opportunity to study changes caused by the newly emerged (and likely still evolving) information and communication technology (ICT).[2]
The Internet has created social network services, forums of social interaction and social relations, such as Facebook, MySpace, Meetup, and CouchSurfing, which facilitate both online and offline interaction.
Though virtual communities were once thought to be composed of strictly virtual social ties, researchers often find that even those social ties formed in virtual spaces are often maintained both online and offline.[8][9]
There are ongoing debates about the impact of the Internet on strong and weak ties, whether the Internet is creating more or less social capital,[10][11] the Internet's role in trends towards social isolation,[12] and whether it creates a more or less diverse social environment.
It is often said the Internet is a new frontier, and there is a line of argument to the effect that social interaction, cooperation and conflict among users resembles the anarchistic and violent American frontier of the early 19th century.[13]
In March 2014, researchers from the Benedictine University at Mesa in Arizona studied how online interactions affect face-to-face meetings, in a study titled "Face to Face Versus Facebook: Does Exposure to Social Networking Web Sites Augment or Attenuate Physiological Arousal Among the Socially Anxious?", published in Cyberpsychology, Behavior, and Social Networking.[14] They measured the social anxiety of 26 female students using electrodes. Prior to meeting people, the students were shown pictures of the person they were expected to meet. Researchers found that meeting someone face-to-face after looking at their photo increases arousal, which the study linked to an increase in social anxiety. These findings confirm previous studies that found that socially anxious people prefer online interactions. The study also recognized that the stimulated arousal can be associated with positive emotions and could lead to positive feelings.
Recent research has taken the Internet of Things within its purview, as global networks of interconnected everyday objects are said to be the next step in technological advancement.[15] Certainly, global space- and earth-based networks are expanding coverage of the IoT at a fast pace. This has a wide variety of consequences, with current applications in the health, agriculture, traffic and retail fields.[16] Companies such as Samsung and Sigfox have invested heavily in said networks, and their social impact will have to be measured accordingly, with some sociologists suggesting the formation of socio-technical networks of humans and technical systems.[17][18] Issues of privacy, right to information, legislation and content creation will come into public scrutiny in light of these technological changes.[16][19]
Digital sociology is also connected with data and data emotions.[20] Data emotions arise when people use digital technologies that can affect their decision-making or emotional state. Social media platforms collect users' data while also affecting their emotional state of mind, which produces either solidarity or social engagement among users. Platforms such as Instagram and Twitter can evoke emotions of love, affection, and empathy. Viral challenges such as the 2014 Ice Bucket Challenge[20] and viral memes have brought people together through mass participation, displaying cultural knowledge and understanding of self. Mass participation in viral events prompts users to spread information (data) to one another, affecting their psychological state of mind and emotions. The link between digital sociology and data emotions is formed through the integration of technological devices within everyday life and activities.
Researchers have investigated the use of technology (as opposed to the Internet) by children and how it can be used excessively, which can cause medical and psychological issues.[21] The use of technological devices by children can cause them to become addicted and can lead them to experience negative effects such as depression, attention problems, loneliness, anxiety, aggression and solitude.[21] Obesity is another result of the use of technology by children, since children may prefer to use their technological devices rather than doing any form of physical activity.[22] Parents can take control and implement restrictions on the use of technological devices by their children, which will decrease the negative results technology can have as well as help limit its excessive use.[22]
Children can use technology to enhance their learning skills, for example by using online programs to improve the way they learn how to read or do math. The resources technology provides for children may enhance their skills, but children should be cautious about what they get themselves into, since cyber bullying may occur. Cyber bullying can cause academic and psychological effects because children are suppressed by people who bully them through the Internet.[23] When technology is introduced to children they are not forced to accept it; instead, children are permitted to have an input on whether they decide to use their technological device or not.[24] The routines of children have changed due to the increasing popularity of internet-connected devices, with social policy researcher Janet Heaton concluding that, "while the children's health and quality of life benefited from the technology, the time demands of the care routines and lack of compatibility with other social and institutional timeframes had some negative implications".[25] Children's frequent use of technology commonly leads to decreased time available to pursue meaningful friendships, hobbies and potential career options.
While technology can have negative impacts on the lives of children, it can also be used as a valuable learning tool that can encourage cognitive, linguistic and social development. In a 2010 study by the University of New Hampshire, children that used technological devices exhibited greater improvements in problem-solving, intelligence, language skills and structural knowledge in comparison to those children who did not incorporate the use of technology in their learning.[26] In a 1999 paper, it was concluded that "studies did find improvements in student scores on tests closely related to material covered in computer-assisted instructional packages", which demonstrates how technology can have positive influences on children by improving their learning capabilities.[27] Problems have also arisen between children and their parents when parents limit what children can use their technological devices for, specifically what they can and cannot watch on their devices, making children frustrated.[28]
The Internet has achieved new relevance as a political tool. The presidential campaign of Howard Dean in 2004 in the United States became famous for its ability to generate donations via the Internet, and the 2008 campaign of Barack Obama became even more so. Increasingly, social movements and other organizations use the Internet to carry out both traditional and new Internet activism.
Some governments are also getting online. Some countries, such as Cuba, Iran, North Korea, Myanmar, the People's Republic of China, and Saudi Arabia, use filtering and censoring software to restrict what people in their countries can access on the Internet. In the United Kingdom, authorities also use software to locate and arrest various individuals they perceive as a threat. Other countries, including the United States, have enacted laws making the possession or distribution of certain material, such as child pornography, illegal but do not use filtering software. In some countries Internet service providers have agreed to restrict access to sites listed by police.
While much has been written of the economic advantages of Internet-enabled commerce, there is also evidence that some aspects of the Internet such as maps and location-aware services may serve to reinforce economic inequality and the digital divide.[29] Electronic commerce may be responsible for consolidation and the decline of mom-and-pop, brick-and-mortar businesses, resulting in increases in income inequality.[30]
The spread of low-cost Internet access in developing countries has opened up new possibilities for peer-to-peer charities, which allow individuals to contribute small amounts to charitable projects for other individuals. Websites such as Donors Choose and Global Giving now allow small-scale donors to direct funds to individual projects of their choice.
A popular twist on Internet-based philanthropy is the use of peer-to-peer lending for charitable purposes. Kiva pioneered this concept in 2005, offering the first web-based service to publish individual loan profiles for funding. Kiva raises funds for local intermediary microfinance organizations which post stories and updates on behalf of the borrowers. Lenders can contribute as little as $25 to loans of their choice, and receive their money back as borrowers repay. Kiva falls short of being a pure peer-to-peer charity, in that loans are disbursed prior to being funded by lenders and borrowers do not communicate with lenders themselves.[31][32] However, the recent spread of cheap Internet access in developing countries has made genuine peer-to-peer connections increasingly feasible. In 2009 the US-based nonprofit Zidisha tapped into this trend to offer the first peer-to-peer microlending platform to link lenders and borrowers across international borders without local intermediaries. Inspired by interactive websites such as Facebook and eBay, Zidisha's microlending platform facilitates direct dialogue between lenders and borrowers and a performance rating system for borrowers. Web users worldwide can fund loans for as little as a dollar.[33]
The Internet has been a major source of leisure since before the World Wide Web, with entertaining social experiments such as MUDs and MOOs being conducted on university servers, and humor-related Usenet groups receiving much of the main traffic. Today, many Internet forums have sections devoted to games and funny videos; short cartoons in the form of Flash movies are also popular. Over 6 million people use blogs or message boards as a means of communication and for the sharing of ideas.
The pornography and gambling industries have both taken full advantage of the World Wide Web, and often provide a significant source of advertising revenue for other websites. Although governments have made attempts to censor Internet porn, Internet service providers have told governments that these plans are not feasible.[34] Many governments have also attempted to put restrictions on both industries' use of the Internet, but this has generally failed to stop their widespread popularity.
One area of leisure on the Internet is online gaming. This form of leisure creates communities, bringing people of all ages and origins to enjoy the fast-paced world of multiplayer games. These range from MMORPGs to first-person shooters, from role-playing video games to online gambling. This has revolutionized the way many people interact and spend their free time on the Internet.
While online gaming has been around since the 1970s, modern modes of online gaming began with services such as GameSpy and MPlayer, to which players of games would typically subscribe. Non-subscribers were limited to certain types of gameplay or certain games.
Many use the Internet to access and download music, movies and other works for their enjoyment and relaxation. As discussed above, there are paid and unpaid sources for all of these, using centralized servers and distributed peer-to-peer technologies. Discretion is needed as some of these sources take more care over the original artists' rights and over copyright laws than others.
Many use the World Wide Web to access news, weather and sports reports, to plan and book holidays and to find out more about their random ideas and casual interests.
People use chat, messaging and e-mail to make and stay in touch with friends worldwide, sometimes in the same way as some previously had pen pals. Social networking websites like MySpace, Facebook and many others like them also put and keep people in contact for their enjoyment.
The Internet has seen a growing number of Web desktops, where users can access their files, folders, and settings via the Internet.
Cyberslacking has become a serious drain on corporate resources; the average UK employee spends 57 minutes a day surfing the Web at work, according to a study by Peninsula Business Services.[35]
Four aspects of digital sociology have been identified by Lupton (2012):[36]
Although they have been reluctant to use social and other digital media for professional academic purposes, sociologists are slowly beginning to adopt them for teaching and research.[37] An increasing number of sociological blogs are beginning to appear and more sociologists are joining Twitter, for example. Some are writing about the best ways for sociologists to employ social media as part of academic practice and the importance of self-archiving and making sociological research open access, as well as writing for Wikipedia.
Digital sociologists have begun to write about the use of wearable technologies as part of quantifying the body[38] and the social dimensions of big data and the algorithms that are used to interpret these data.[39] Others have directed attention at the role of digital technologies as part of the surveillance of people's activities, via such technologies as CCTV cameras and customer loyalty schemes,[40] as well as the mass surveillance of the Internet that is being conducted by secret services such as the NSA.
The 'digital divide', or the differences in access to digital technologies experienced by certain social groups such as the socioeconomically disadvantaged, those of lower education levels, women and the elderly, has preoccupied many researchers in the social scientific study of digital media. However, several sociologists have pointed out that while it is important to acknowledge and identify the structural inequalities inherent in differentials in digital technology use, this concept is rather simplistic and fails to incorporate the complexities of access to and knowledge about digital technologies.[41]
There is a growing interest in the ways in which social media contributes to the development of intimate relationships and concepts of the self. One of the best-known sociologists who has written about social relationships, selfhood and digital technologies is Sherry Turkle.[42][43] In her most recent book, Turkle addresses the topic of social media.[44] She argues that relationships conducted via these platforms are not as authentic as those encounters that take place "in real life".
Visual media allows the viewer to be a more passive consumer of information.[45] Viewers are more likely to develop online personas that differ from their personas in the real world. This contrast between the digital world (or 'cyberspace') and the 'real world', however, has been critiqued as 'digital dualism', a concept similar to the 'aura of the digital'.[46] Other sociologists have argued that relationships conducted through digital media are inextricably part of the 'real world'.[47] Augmented reality is an interactive experience in which reality is altered in some way by the use of digital media but not replaced.
The use of social media for social activism has also provided a focus for digital sociology. For example, numerous sociological articles[48][49] and at least one book[50] have appeared on the use of such social media platforms as Twitter, YouTube and Facebook as a means of conveying messages about activist causes and organizing political movements.
Research has also been done on the use of technology by racial minorities and other groups. These "digital practice" studies explore the ways in which the practices that groups adopt when using new technologies mitigate or reproduce social inequalities.[51][52]
Digital sociologists use varied approaches to investigating people's use of digital media, both qualitative and quantitative. These include ethnographic research, interviews and surveys with users of technologies, and also the analysis of the data produced from people's interactions with technologies: for example, their posts on social media platforms such as Facebook, Reddit, 4chan, Tumblr and Twitter, or their consuming habits on online shopping platforms. Such techniques as data scraping, social network analysis, time series analysis and textual analysis are employed to analyze both the data produced as a byproduct of users' interactions with digital media and those that they create themselves.
As an example of content analysis, in 2008 Yukihiko Yoshida did a study called[53] "Leni Riefenstahl and German expressionism: research in Visual Cultural Studies using the trans-disciplinary semantic spaces of specialized dictionaries." The study took databases of images tagged with connotative and denotative keywords (a search engine) and found that Riefenstahl's imagery had the same qualities as imagery tagged "degenerate" in the title of the exhibition "Degenerate Art", held in Germany in 1937.
The emergence of social media has provided sociologists with a new way of studying social phenomena. Social media networks, such as Facebook and Twitter, are increasingly being mined for research. For example, Twitter data is easily available to researchers through the Twitter API. Twitter provides researchers with demographic data, time and location data, and connections between users. From these data, researchers gain insight into user moods and how they communicate with one another. Furthermore, social networks can be graphed and visualized.[54]
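As a sketch of how such graphing might look in practice, the following Python fragment builds a small directed mention graph with the networkx library; the records are invented for illustration, standing in for data collected from a platform API or an exported dataset:

```python
import networkx as nx

# Hypothetical, pre-collected records of the form (author, mentioned_user).
mentions = [
    ("alice", "bob"), ("alice", "carol"),
    ("bob", "carol"), ("dave", "alice"),
]

g = nx.DiGraph()
g.add_edges_from(mentions)

# Simple structural summaries of who communicates with whom.
print(nx.degree_centrality(g))
print(list(nx.weakly_connected_components(g)))
```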
Using large data sets, like those obtained from Twitter, can be challenging. First of all, researchers have to figure out how to store the data effectively in a database. Several tools commonly used in Big Data analytics are at their disposal.[54] Since large data sets can be unwieldy and contain numerous types of data (i.e. photos, videos, GIF images), researchers have the option of storing their data in non-relational databases, such as MongoDB and Hadoop.[54] Processing and querying this data is an additional challenge. However, there are several options available to researchers. One common option is to use a querying language, such as Hive, in conjunction with Hadoop to analyze large data sets.[54]
The Internet and social media have allowed sociologists to study how controversial topics are discussed over time, otherwise known as issue mapping.[55] Sociologists can search social networking sites (e.g. Facebook or Twitter) for posts related to a hotly debated topic, then parse through and analyze the text.[55] Sociologists can then use a number of easily accessible tools to visualize this data, such as MentionMapp or TwitterStreamgraph. MentionMapp shows how popular a hashtag is, and TwitterStreamgraph depicts how often certain words are paired together and how their relationship changes over time.[55]
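A rough Python sketch of the word-pairing idea behind such visualizations, run here on invented (month, text) records standing in for scraped posts on one topic:

```python
from collections import Counter
from itertools import combinations

# Hypothetical timestamped posts; real inputs would come from scraped data.
posts = [
    ("2023-01", "climate policy debate"),
    ("2023-01", "climate debate heats up"),
    ("2023-02", "policy debate continues"),
]

# Count, per month, how often each pair of words appears in the same post.
pair_counts: dict[str, Counter] = {}
for month, text in posts:
    words = sorted(set(text.split()))
    pair_counts.setdefault(month, Counter()).update(combinations(words, 2))

for month, counts in sorted(pair_counts.items()):
    print(month, counts.most_common(2))
```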
Digital surveillance occurs when digital devices record people's daily activities, collecting and storing personal data and invading privacy.[6] With the advancement of new technologies, the monitoring and watching of people online increased between 2010 and 2020. The invasion of privacy and the recording of people without consent lead people to doubt the very technologies that are supposed to secure and protect personal information. The storage of data and the intrusiveness of digital surveillance affect human behavior: the psychological implications of digital surveillance can cause people to have concern, worry, or fear about feeling monitored all the time. Digital data is stored within security technologies, apps, social media platforms, and other technological devices, and can be used in various ways for various reasons. Data collected from people using the internet can be monitored and viewed by private and public companies, friends, and other known or unknown entities.
This aspect of digital sociology is perhaps what makes it distinctive from other approaches to studying the digital world. In adopting a critical reflexive approach, sociologists are able to address the implications of the digital for sociological practice itself. It has been argued that digital sociology offers a way of addressing the changing relations between social relations and the analysis of these relations, putting into question what social research is, and indeed, what sociology is now as social relations and society have become in many respects mediated via digital technologies.[56]
How should sociology respond to the emergent forms of both 'small data' and 'big data' that are collected in vast amounts as part of people's interactions with digital technologies, and the development of data industries using these data to conduct their own social research? Does this suggest that a "coming crisis in empirical sociology" might be on the horizon?[57] How are the identities and work practices of sociologists themselves becoming implicated within and disciplined by digital technologies such as citation metrics?[58]
These questions are central to critical digital sociology, which reflects upon the role of sociology itself in the analysis of digital technologies as well as the impact of digital technologies upon sociology.[59]
To these four aspects may be added the following subfields of digital sociology:
Public sociology using digital media is a form of public sociology that involves publishing sociological materials in online accessible spaces and subsequent interaction with publics in these spaces. This has been referred to as "e-public sociology".[60]
Social media has changed the way public sociology is perceived and has given rise to a digital evolution in the field. The vast open platform of communication has provided opportunities for sociologists to move beyond small-group publics and reach a vast audience.
Blogging was the initial social media platform utilized by sociologists. Sociologists such as Eszter Hargittai, Chris Bertram, and Kieran Healy were among the first to use blogging for sociology. New discussion groups about sociology and related philosophy were a consequence of this social media impact, and the vast number of comments and discussions thus became a part of understanding sociology. One such famous group was Crooked Timber. Getting feedback on such social sites is faster and impactful. Disintermediation, visibility, and measurement are the major effects of e-public sociology. Other social media tools like Twitter and Facebook have also become tools for sociologists.[61]
Information and communication technology, as well as the proliferation of digital data, is revolutionizing sociological research. Whereas there is already much methodological innovation in digital humanities and computational social sciences, theory development in the social sciences and humanities still consists mainly of print theories of computer cultures or societies. These analogue theories of the digital transformation, however, fail to account for how profoundly the digital transformation of the social sciences and humanities is changing the epistemic core of these fields. Digital methods constitute more than providers of ever-bigger digital datasets for testing analogue theories; they also require new forms of digital theorising.[62] The ambition of research programmes on the digital transformation of social theory is therefore to translate analogue into digital social theories, so as to complement traditional analogue social theories of the digital transformation with digital theories of digital societies.[63]
|
https://en.wikipedia.org/wiki/Digital_sociology
|
In mathematics, the Hilbert symbol or norm-residue symbol is a function (–, –) from K× × K× to the group of nth roots of unity in a local field K such as the fields of reals or p-adic numbers. It is related to reciprocity laws, and can be defined in terms of the Artin symbol of local class field theory. The Hilbert symbol was introduced by David Hilbert (1897, sections 64, 131; 1998, English translation) in his Zahlbericht, with the slight difference that he defined it for elements of global fields rather than for the larger local fields.
The Hilbert symbol has been generalized to higher local fields.
Over a local field K with multiplicative group of non-zero elements K×, the quadratic Hilbert symbol is the function (–, –): K× × K× → {±1} defined by
(a, b) = 1 if z² = ax² + by² has a non-zero solution (x, y, z) in K³, and (a, b) = −1 otherwise.
Equivalently, (a, b) = 1 if and only if b is equal to the norm of an element of the quadratic extension K[√a].[1]
The following three properties follow directly from the definition, by choosing suitable solutions of the Diophantine equation above: (i) if a is a square, then (a, b) = 1 for all b; (ii) for all a, b in K×, (a, b) = (b, a); (iii) for any a in K× such that a − 1 is also in K×, (a, 1 − a) = 1.
The (bi)multiplicativity, i.e., (a, b₁b₂) = (a, b₁)(a, b₂) for any a, b₁ and b₂ in K×, is, however, more difficult to prove, and requires the development of local class field theory.
The third property shows that the Hilbert symbol is an example of a Steinberg symbol and thus factors over the second Milnor K-group K₂ᴹ(K), which is by definition
K× ⊗ K× / ⟨a ⊗ (1 − a) : a ∈ K×, a ≠ 1⟩.
By the first property it even factors over K₂ᴹ(K)/2. This is the first step towards the Milnor conjecture.
The Hilbert symbol can also be used to denote the central simple algebra over K with basis 1, i, j, k and multiplication rules i² = a, j² = b, ij = −ji = k. In this case the algebra represents an element of order 2 in the Brauer group of K, which is identified with −1 if it is a division algebra and +1 if it is isomorphic to the algebra of 2 by 2 matrices.
For a place v of the rational number field and rational numbers a, b we let (a, b)v denote the value of the Hilbert symbol in the corresponding completion Qv. As usual, if v is the valuation attached to a prime number p then the corresponding completion is the p-adic field, and if v is the infinite place then the completion is the real number field.
Over the reals, the Hilbert symbol (a, b)∞ is +1 if at least one of a or b is positive, and −1 if both are negative.
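For instance, a quick check of these values against the defining equation z² = ax² + by² (the witnessing solutions here are our own choices):

(-1,\,-1)_\infty = -1, \quad \text{since } z^2 = -x^2 - y^2 \text{ has only the trivial solution over } \mathbb{R};
(a,\,b)_\infty = +1 \text{ for } a > 0, \quad \text{witnessed by } (x, y, z) = (1/\sqrt{a},\; 0,\; 1).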
Over the p-adics with p odd, writing a = p^α·u and b = p^β·v, where u and v are integers coprime to p, we have
(a, b)_p = (−1)^{αβ·ε(p)} (u/p)^β (v/p)^α, where ε(p) = (p − 1)/2,
and the expression involves the two Legendre symbols (u/p) and (v/p).
Over the 2-adics, again writing a = 2^α·u and b = 2^β·v, where u and v are odd numbers, we have
(a, b)₂ = (−1)^{ε(u)ε(v) + α·ω(v) + β·ω(u)}, where ε(u) = (u − 1)/2 and ω(u) = (u² − 1)/8.
It is known that if v ranges over all places, (a, b)_v is 1 for almost all places. Therefore, the following product formula
∏_v (a, b)_v = 1
makes sense. It is equivalent to the law of quadratic reciprocity.
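Since only the real place and the primes dividing 2ab can contribute a −1, the product formula can be checked mechanically. A minimal Python sketch of the closed forms above (helper names are ours, not from the source):

# Quadratic Hilbert symbol (a, b)_v over Q, from the formulas above.
def legendre(a, p):
    # Legendre symbol (a/p) for an odd prime p, via Euler's criterion.
    r = pow(a % p, (p - 1) // 2, p)
    return -1 if r == p - 1 else r

def split(n, p):
    # Write n = p**alpha * u with u coprime to p; return (alpha, u).
    alpha = 0
    while n % p == 0:
        n //= p
        alpha += 1
    return alpha, n

def hilbert(a, b, p=None):
    # p=None denotes the real place: -1 iff both arguments are negative.
    if p is None:
        return -1 if (a < 0 and b < 0) else 1
    alpha, u = split(a, p)
    beta, v = split(b, p)
    if p == 2:
        e = ((u - 1) // 2) * ((v - 1) // 2)      # eps(u) * eps(v)
        e += alpha * ((v * v - 1) // 8)          # alpha * omega(v)
        e += beta * ((u * u - 1) // 8)           # beta  * omega(u)
        return -1 if e % 2 else 1
    sign = -1 if (alpha * beta * ((p - 1) // 2)) % 2 else 1
    return sign * legendre(u, p) ** beta * legendre(v, p) ** alpha

# Product over all relevant places (2, 3, 5 and the real place here).
a, b = -6, 10
assert hilbert(a, b) * hilbert(a, b, 2) * hilbert(a, b, 3) * hilbert(a, b, 5) == 1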
The Hilbert symbol on a field F defines a map
F×/F×² × F×/F×² → Br(F),
sending (a, b) to the class of the corresponding quaternion algebra, where Br(F) is the Brauer group of F. The kernel of this mapping, the elements a such that (a, b) = 1 for all b, is the Kaplansky radical of F.[2]
The radical is a subgroup of F×/F×², identified with a subgroup of F×. The radical is equal to F× if and only if F has u-invariant at most 2.[3] In the opposite direction, a field with radical F×² is termed a Hilbert field.[4]
If K is a local field containing the group of nth roots of unity for some positive integer n prime to the characteristic of K, then the Hilbert symbol (–, –) is a function from K× × K× to μₙ. In terms of the Artin symbol it can be defined by[5]
(a, b) · b^{1/n} = θ(a)(b^{1/n}),
where θ(a) is the image of a under the Artin map.
Hilbert originally defined the Hilbert symbol before the Artin symbol was discovered, and his definition (for n prime) used the power residue symbol when K has residue characteristic coprime to n, and was rather complicated when K has residue characteristic dividing n.
The Hilbert symbol is (multiplicatively) bilinear: (ab, c) = (a, c)(b, c) and (a, bc) = (a, b)(a, c);
skew symmetric: (a, b) = (b, a)^{−1};
nondegenerate: (a, b) = 1 for all b if and only if a lies in (K×)ⁿ.
It detects norms (hence the name norm residue symbol): (a, b) = 1 if and only if b is a norm of an element of K(a^{1/n}).
It has the "symbol" properties: (a, −a) = 1 and (a, 1 − a) = 1.
Hilbert's reciprocity law states that if a and b are in an algebraic number field containing the nth roots of unity, then[6]
∏_p (a, b)_p = 1,
where the product is over the finite and infinite primes p of the number field, and where (–, –)_p is the Hilbert symbol of the completion at p. Hilbert's reciprocity law follows from the Artin reciprocity law and the definition of the Hilbert symbol in terms of the Artin symbol.
If K is a number field containing the nth roots of unity, p is a prime ideal not dividing n, π is a prime element of the local field of p, and a is coprime to p, then the power residue symbol (a/p) is related to the Hilbert symbol by[7]
(a/p) = (π, a)_p,
the Hilbert symbol being taken in the completion at p.
The power residue symbol is extended to fractional ideals by multiplicativity, and defined for elements of the number field by putting (a/b) = (a/(b)), where (b) is the principal ideal generated by b.
Hilbert's reciprocity law then implies the following reciprocity law for the residue symbol, for a and b prime to each other and to n:
|
https://en.wikipedia.org/wiki/Hilbert_symbol
|
Metropolis light transport (MLT) is a global illumination application of a Monte Carlo method called the Metropolis–Hastings algorithm to the rendering equation for generating images from detailed physical descriptions of three-dimensional scenes.[1][2]
The procedure constructs paths from the eye to a light source using bidirectional path tracing, then constructs slight modifications to the path. Some careful statistical calculation (the Metropolis algorithm) is used to compute the appropriate distribution of brightness over the image. This procedure has the advantage, relative to bidirectional path tracing, that once a path has been found from light to eye, the algorithm can then explore nearby paths; thus difficult-to-find light paths can be explored more thoroughly with the same number of simulated photons. In short, the algorithm generates a path and stores the path's 'nodes' in a list. It can then modify the path by adding extra nodes and creating a new light path. While creating this new path, the algorithm decides how many new 'nodes' to add and whether or not these new nodes will actually create a new path.
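The accept-or-reject core of that mutation loop is just the Metropolis rule. The Python sketch below is a minimal illustration under the simplifying assumption of symmetric mutation proposals, with luminance(path) and mutate(path) as hypothetical stand-ins for the renderer's path-evaluation and path-perturbation routines:

import random

def metropolis_step(path, luminance, mutate):
    # Propose a slightly perturbed light path, then accept or reject it:
    # brighter proposals are always kept, dimmer ones are kept with
    # probability f_new / f_old (the standard Metropolis acceptance rule).
    candidate = mutate(path)
    f_old, f_new = luminance(path), luminance(candidate)
    if f_old <= 0 or random.random() < min(1.0, f_new / f_old):
        return candidate
    return path

Iterating this step distributes the retained paths in proportion to their brightness contribution, which is why bright but hard-to-find paths, once discovered, are explored thoroughly.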
Metropolis light transport is an unbiased method that, in some cases (but not always), converges to a solution of the rendering equation faster than other unbiased algorithms such as path tracing or bidirectional path tracing.[citation needed]
Energy Redistribution Path Tracing (ERPT) uses Metropolis sampling-like mutation strategies instead of an intermediate probability distribution step.[3]
Renderers using MLT:
|
https://en.wikipedia.org/wiki/Metropolis_light_transport
|
A pseudonym (/ˈsjuːdənɪm/; from Ancient Greek ψευδώνυμος (pseudṓnumos) 'falsely named') or alias (/ˈeɪli.əs/) is a fictitious name that a person assumes for a particular purpose, which differs from their original or true name (orthonym).[1][2] It also differs from a new name that entirely or legally replaces an individual's own. Many pseudonym holders use them because they wish to remain anonymous and maintain privacy, though this may be difficult to achieve as a result of legal issues.[3]
Pseudonyms include stage names, user names, ring names, pen names, aliases, superhero or villain identities and code names, gamertags, and regnal names of emperors, popes, and other monarchs. In some cases, they may also include nicknames. Historically, they have sometimes taken the form of anagrams, Graecisms, and Latinisations.[4]
Pseudonyms should not be confused with new names that replace old ones and become the individual's full-time name. Pseudonyms are "part-time" names, used only in certain contexts: to provide a more clear-cut separation between one's private and professional lives, to showcase or enhance a particular persona, or to hide an individual's real identity, as with writers' pen names, graffiti artists' tags, resistance fighters' or terrorists' noms de guerre, computer hackers' handles, and other online identities for services such as social media, online gaming, and internet forums. Actors, musicians, and other performers sometimes use stage names for a degree of privacy, to better market themselves, and for other reasons.[5]
In some cases, pseudonyms are adopted because they are part of a cultural or organisational tradition; for example, devotional names are used by members of some religious institutes,[6] and "cadre names" are used by Communist party leaders such as Trotsky and Lenin.
A collective name or collective pseudonym is one shared by two or more persons, for example, the co-authors of a work, such as Carolyn Keene, Erin Hunter, Ellery Queen, Nicolas Bourbaki, or James S. A. Corey.
The term pseudonym is derived from the Greek word ψευδώνυμον (pseudṓnymon),[7] literally "false name", from ψεῦδος (pseûdos) 'lie, falsehood'[8] and ὄνομα (ónoma) 'name'.[9] The term alias is a Latin adverb meaning "at another time, elsewhere".[10]
Sometimes people change their names in such a manner that the new name becomes permanent and is used by all who know the person. This is not an alias or pseudonym, but in fact a new name. In many countries, including common law countries, a name change can be ratified by a court and become a person's new legal name.
Pseudonymous authors may still have their various identities linked together through stylometric analysis of their writing style. The precise degree of this unmasking ability and its ultimate potential is uncertain, but the privacy risks are expected to grow with improved analytic techniques and text corpora. Authors may practice adversarial stylometry to resist such identification.[11]
Businesspersons of ethnic minorities in some parts of the world are sometimes advised by an employer to use a pseudonym that is common or acceptable in that area when conducting business, to overcome racial or religious bias.[12]
Criminals may use aliases, fictitious business names, and dummy corporations (corporate shells) to hide their identity, or to impersonate other persons or entities in order to commit fraud. Aliases and fictitious business names used for dummy corporations may become so complex that, in the words of The Washington Post, "getting to the truth requires a walk down a bizarre labyrinth" and multiple government agencies may become involved to uncover the truth.[13] Giving a false name to a law enforcement officer is a crime in many jurisdictions.
A pen name is a pseudonym (sometimes a particular form of the real name) adopted by an author (or on the author's behalf by their publishers). English usage also includes the French-language phrase nom de plume (which in French literally means "pen name").[14]
The concept of pseudonymity has a long history. In ancient literature it was common to write in the name of a famous person, not for concealment or with any intention of deceit; in the New Testament, the second letter of Peter is probably such. A more modern example is all of The Federalist Papers, which were signed by Publius, a pseudonym representing the trio of James Madison, Alexander Hamilton, and John Jay. The papers were written partially in response to several Anti-Federalist Papers, also written under pseudonyms. As a result of this pseudonymity, historians know that the papers were written by Madison, Hamilton, and Jay, but have not been able to discern with certainty which of the three authored a few of the papers. There are also examples of modern politicians and high-ranking bureaucrats writing under pseudonyms.[15][16]
Some female authors have used male pen names, in particular in the 19th century, when writing was a highly male-dominated profession. The Brontë sisters used pen names for their early work, so as not to reveal their gender (see below) and so that local residents would not suspect that the books related to people of their neighbourhood. Anne Brontë's The Tenant of Wildfell Hall (1848) was published under the name Acton Bell, while Charlotte Brontë used the name Currer Bell for Jane Eyre (1847) and Shirley (1849), and Emily Brontë adopted Ellis Bell as cover for Wuthering Heights (1847). Other examples from the nineteenth century are novelist Mary Ann Evans (George Eliot) and French writer Amandine Aurore Lucile Dupin (George Sand). Pseudonyms may also be used because of cultural, organisational, or political prejudices.
Similarly, some 20th- and 21st-century male romance novelists – a field dominated by women – have used female pen names.[17] A few examples are Brindle Chase, Peter O'Donnell (as Madeline Brent), Christopher Wood (as Penny Sutton and Rosie Dixon), and Hugh C. Rae (as Jessica Sterling).[17]
A pen name may be used if a writer's real name is likely to be confused with the name of another writer or notable individual, or if the real name is deemed unsuitable.
Authors who write both fiction and non-fiction, or in different genres, may use different pen names to avoid confusing their readers. For example, the romance writer Nora Roberts writes mystery novels under the name J. D. Robb.
In some cases, an author may become better known by a pen name than by their real name. Famous examples include Samuel Clemens, writing as Mark Twain; Theodor Geisel, better known as Dr. Seuss; and Eric Arthur Blair (George Orwell). The British mathematician Charles Dodgson wrote fantasy novels as Lewis Carroll and mathematical treatises under his own name.
Some authors, such as Harold Robbins, use several literary pseudonyms.[18]
Some pen names have been used for long periods, even decades, without the author's true identity being discovered, as with Elena Ferrante and Torsten Krol.
Joanne Rowling[19] published the Harry Potter series as J. K. Rowling. Rowling also published the Cormoran Strike series of detective novels, including The Cuckoo's Calling, under the pseudonym Robert Galbraith.
Winston Churchill wrote as Winston S. Churchill (from his full surname Spencer Churchill, which he did not otherwise use) in an attempt to avoid confusion with an American novelist of the same name. The attempt was not wholly successful – the two are still sometimes confused by booksellers.[20][21]
A pen name may be used specifically to hide the identity of the author, as with exposé books about espionage or crime, or explicit erotic fiction. Erwin von Busse used a pseudonym when he published short stories about sexually charged encounters between men in Germany in 1920.[22] Some prolific authors adopt a pseudonym to disguise the extent of their published output, e.g. Stephen King writing as Richard Bachman. Co-authors may choose to publish under a collective pseudonym, e.g., P. J. Tracy and Perri O'Shaughnessy. Frederic Dannay and Manfred Lee used the name Ellery Queen as a pen name for their collaborative works and as the name of their main character.[23] Asa Earl Carter, a Southern white segregationist affiliated with the KKK, wrote Western books under a fictional Cherokee persona to imply legitimacy and conceal his history.[24]
A famous case in French literature was Romain Gary. Already a well-known writer, he started publishing books as Émile Ajar to test whether his new books would be well received on their own merits, without the aid of his established reputation. They were: Émile Ajar, like Romain Gary before him, was awarded the prestigious Prix Goncourt by a jury unaware that the two were the same person. Similarly, TV actor Ronnie Barker submitted comedy material under the name Gerald Wiley.
A collective pseudonym may represent an entire publishing house, or any contributor to a long-running series, especially with juvenile literature. Examples include Watty Piper, Victor Appleton, Erin Hunter, and Kamiru M. Xhan.
Another use of a pseudonym in literature is to present a story as being written by the fictional characters in the story. The series of novels known as A Series of Unfortunate Events is written by Daniel Handler under the pen name of Lemony Snicket, a character in the series. This applies also to some of the several 18th-century English and American writers who used the name Fidelia.
An anonymity pseudonym or multiple-use name is a name used by many different people to protect anonymity.[25] It is a strategy that has been adopted by many unconnected radical groups and by cultural groups, where the construct of personal identity has been criticised. This has led to the idea of the "open pop star", such as Monty Cantsin.[clarification needed]
Pseudonyms and acronyms are often employed in medical research to protect subjects' identities through a process known as de-identification.
Nicolaus Copernicus put forward his theory of heliocentrism in the manuscript Commentariolus anonymously, in part because of his employment as a law clerk for a church-government organization.[26]
Sophie Germain and William Sealy Gosset used pseudonyms to publish their work in the field of mathematics – Germain, to avoid rampant 19th-century academic misogyny, and Gosset, to avoid revealing brewing practices of his employer, the Guinness Brewery.[27][28]
Satoshi Nakamoto is a pseudonym of a still unknown author or authors' group behind a white paper about bitcoin.[29][30][31][32]
While taking part in military activities, such as fighting in a war, the pseudonym might be known as a nom de guerre. It is chosen by the person involved in the activity.[33][34]
Individuals using a computer online may adopt or be required to use a form of pseudonym known as a "handle" (a term deriving from CB slang), "username", "login name", "avatar", or, sometimes, "screen name", "gamertag", "IGN (in-game name)" or "nickname". On the Internet, pseudonymous remailers use cryptography that achieves persistent pseudonymity, so that two-way communication can be achieved, and reputations can be established, without linking physical identities to their respective pseudonyms. Aliasing is the use of multiple names for the same data location.
More sophisticated cryptographic systems, such as anonymous digital credentials, enable users to communicate pseudonymously (i.e., by identifying themselves by means of pseudonyms). In well-defined abuse cases, a designated authority may be able to revoke the pseudonyms and reveal the individuals' real identity.[citation needed]
Use of pseudonyms is common among professional eSports players, despite the fact that many professional games are played on LAN.[35]
Pseudonymity has become an important phenomenon on the Internet and other computer networks. In computer networks, pseudonyms possess varying degrees of anonymity,[36] ranging from highly linkable public pseudonyms (the link between the pseudonym and a human being is publicly known or easy to discover), through potentially linkable non-public pseudonyms (the link is known to system operators but is not publicly disclosed), to unlinkable pseudonyms (the link is not known to system operators and cannot be determined).[37] For example, a true anonymous remailer enables Internet users to establish unlinkable pseudonyms; those that employ non-public pseudonyms (such as the now-defunct Penet remailer) are called pseudonymous remailers.
The continuum of unlinkability can also be seen, in part, on Wikipedia. Some registered users make no attempt to disguise their real identities (for example, by placing their real name on their user page). The pseudonym of unregistered users is their IP address, which can, in many cases, easily be linked to them. Other registered users prefer to remain anonymous, and do not disclose identifying information. However, in certain cases, Wikipedia's privacy policy permits system administrators to consult the server logs to determine the IP address, and perhaps the true name, of a registered user. It is possible, in theory, to create an unlinkable Wikipedia pseudonym by using an open proxy, a Web server that disguises the user's IP address. But most open proxy addresses are blocked indefinitely due to their frequent use by vandals. Additionally, Wikipedia's public record of a user's interest areas, writing style, and argumentative positions may still establish an identifiable pattern.[38][39]
System operators (sysops) at sites offering pseudonymity, such as Wikipedia, are not likely to build unlinkability into their systems, as this would render them unable to obtain information about abusive users quickly enough to stop vandalism and other undesirable behaviors. Law enforcement personnel, fearing an avalanche of illegal behavior, are equally unenthusiastic.[40] Still, some users and privacy activists like the American Civil Liberties Union believe that Internet users deserve stronger pseudonymity so that they can protect themselves against identity theft, illegal government surveillance, stalking, and other unwelcome consequences of Internet use (including unintentional disclosures of their personal information and doxing, as discussed in the next section). Their views are supported by laws in some nations (such as Canada) that guarantee citizens a right to speak using a pseudonym.[41] This right does not, however, give citizens the right to demand publication of pseudonymous speech on equipment they do not own.
Most Web sites that offer pseudonymity retain information about users. These sites are often susceptible to unauthorized intrusions into their non-public database systems. For example, in 2000, a Welsh teenager obtained information about more than 26,000 credit card accounts, including that of Bill Gates.[42][43] In 2003, VISA and MasterCard announced that intruders obtained information about 5.6 million credit cards.[44] Sites that offer pseudonymity are also vulnerable to confidentiality breaches. In a study of a Web dating service and a pseudonymous remailer, University of Cambridge researchers discovered that the systems used by these Web sites to protect user data could be easily compromised, even if the pseudonymous channel is protected by strong encryption. Typically, the protected pseudonymous channel exists within a broader framework in which multiple vulnerabilities exist.[45] Pseudonym users should bear in mind that, given the current state of Web security engineering, their true names may be revealed at any time.
Pseudonymity is an important component of the reputation systems found in online auction services (such as eBay), discussion sites (such as Slashdot), and collaborative knowledge development sites (such as Wikipedia). A pseudonymous user who has acquired a favorable reputation gains the trust of other users. When users believe that they will be rewarded by acquiring a favorable reputation, they are more likely to behave in accordance with the site's policies.[46]
If users can obtain new pseudonymous identities freely or at a very low cost, reputation-based systems are vulnerable to whitewashing attacks,[47] also called serial pseudonymity, in which abusive users continuously discard their old identities and acquire new ones in order to escape the consequences of their behavior: "On the Internet, nobody knows that yesterday you were a dog, and therefore should be in the doghouse today."[48] Users of Internet communities who have been banned only to return with new identities are called sock puppets. Whitewashing is one specific form of a Sybil attack on distributed systems.
The social cost of cheaply discarded pseudonyms is that experienced users lose confidence in new users,[51] and may subject new users to abuse until they establish a good reputation.[48] System operators may need to remind experienced users that most newcomers are well-intentioned (see, for example, Wikipedia's policy about biting newcomers). Concerns have also been expressed about sock puppets exhausting the supply of easily remembered usernames. In addition, a recent research paper demonstrated that people behave in a potentially more aggressive manner when using pseudonyms/nicknames (due to the online disinhibition effect) as opposed to being completely anonymous.[52][53] In contrast, research by the blog comment hosting service Disqus found pseudonymous users contributed the "highest quantity and quality of comments", where "quality" is based on an aggregate of likes, replies, flags, spam reports, and comment deletions,[49][50] and found that users trusted pseudonyms and real names equally.[54]
Researchers at the University of Cambridge showed that pseudonymous comments tended to be more substantive and engaged with other users in explanations, justifications, and chains of argument, and were less likely to use insults, than either fully anonymous or real-name comments.[55] Proposals have been made to raise the costs of obtaining new identities, such as by charging a small fee or requiring e-mail confirmation. Academic research has proposed cryptographic methods to pseudonymize social media identities[56] or government-issued identities,[57] to accrue and use anonymous reputation in online forums,[58] or to obtain one-per-person and hence less readily discardable pseudonyms periodically at physical-world pseudonym parties.[59] Others point out that Wikipedia's success is attributable in large measure to its nearly non-existent initial participation costs.
People seeking privacy often use pseudonyms to make appointments and reservations.[60] Those writing to advice columns in newspapers and magazines may use pseudonyms.[61] Steve Wozniak used a pseudonym when attending the University of California, Berkeley after co-founding Apple Computer, because "[he] knew [he] wouldn't have time enough to be an A+ student."[62]
When used by an actor, musician, radio disc jockey, model, or other performer or "show business" personality, a pseudonym is called a stage name, or, occasionally, a professional name or screen name.
Members of a marginalized ethnic or religious group have often adopted stage names, typically changing their surname or entire name to mask their original background.
Stage names are also used to create a more marketable name, as in the case of Creighton Tull Chaney, who adopted the pseudonym Lon Chaney Jr., a reference to his famous father Lon Chaney.
Chris Curtis of Deep Purple fame was christened Christopher Crummey ("crummy" is UK slang for poor quality). In this and similar cases a stage name is adopted simply to avoid an unfortunate pun.
Pseudonyms are also used to comply with the rules of performing-arts guilds (Screen Actors Guild (SAG), Writers Guild of America, East (WGA), AFTRA, etc.), which do not allow performers to use an existing name, in order to avoid confusion. For example, these rules required film and television actor Michael Fox to add a middle initial and become Michael J. Fox, to avoid being confused with another actor named Michael Fox. This was also true of author and actress Fannie Flagg, who shared her real name, Patricia Neal, with another well-known actress; Rick Copp, who chose the pseudonym Richard Hollis, which is also the name of a character in the anthology TV series Femme Fatales; and British actor Stewart Granger, whose real name was James Stewart. The film-making team of Joel and Ethan Coen, for instance, share credit for editing under the alias Roderick Jaynes.[63]
Some stage names are used to conceal a person's identity, such as the pseudonym Alan Smithee, which was used by directors in the Directors Guild of America (DGA) to remove their name from a film they felt was edited or modified beyond their artistic satisfaction. In theatre, the pseudonyms George or Georgina Spelvin and Walter Plinge are used to hide the identity of a performer, usually when he or she is "doubling" (playing more than one role in the same play).
David Agnew was a name used by the BBC to conceal the identity of a scriptwriter, such as for the Doctor Who serial City of Death, which had three writers, including Douglas Adams, who was at the time of writing the show's script editor.[64] In another Doctor Who serial, The Brain of Morbius, writer Terrance Dicks demanded the removal of his name from the credits, saying it could go out under a "bland pseudonym".[citation needed][65] This ended up as "Robin Bland".[65][66]
Pornographic actors regularly use stage names.[67][68][69] Sometimes these are referred to as a nom de porn (like nom de plume, an English-language coinage of a French-sounding phrase). Having acted in pornographic films can be a serious detriment to finding another career.[70][71]
Musicians and singers can use pseudonyms to allow them to collaborate with artists on other labels while avoiding the need to gain permission from their own labels, such as the artist Jerry Samuels, who made songs under the name Napoleon XIV. Rock singer-guitarist George Harrison, for example, played guitar on Cream's song "Badge" using a pseudonym.[72] In classical music, some record companies issued recordings under a nom de disque in the 1950s and 1960s to avoid paying royalties. A number of popular budget LPs of piano music were released under the pseudonym Paul Procopolis.[73] Another example is that Paul McCartney used the fictional name "Bernard Webb" for Peter and Gordon's song "Woman".[74]
Pseudonyms are used as stage names in heavy metal bands, such as Tracii Guns in LA Guns, Axl Rose and Slash in Guns N' Roses, Mick Mars in Mötley Crüe, Dimebag Darrell in Pantera, or C.C. DeVille in Poison. Some such names have additional meanings, like that of Brian Hugh Warner, more commonly known as Marilyn Manson: Marilyn coming from Marilyn Monroe and Manson from convicted serial killer Charles Manson. Jacoby Shaddix of Papa Roach went under the name "Coby Dick" during the Infest era. He changed back to his birth name when lovehatetragedy was released.
David Johansen, front man for the hard rock band New York Dolls, recorded and performed pop and lounge music under the pseudonym Buster Poindexter in the late 1980s and early 1990s. The music video for Poindexter's debut single, "Hot Hot Hot", opens with a monologue from Johansen where he notes his time with the New York Dolls and explains his desire to create more sophisticated music.
Ross Bagdasarian Sr., creator of Alvin and the Chipmunks, wrote original songs, arranged, and produced the records under his real name, but performed on them as David Seville. He also wrote songs as Skipper Adams. Danish pop pianist Bent Fabric, whose full name is Bent Fabricius-Bjerre, wrote his biggest instrumental hit "Alley Cat" as Frank Bjorn.
For a time, the musician Prince used an unpronounceable "Love Symbol" as a pseudonym ("Prince" is his actual first name rather than a stage name). He wrote the song "Sugar Walls" for Sheena Easton as "Alexander Nevermind" and "Manic Monday" for the Bangles as "Christopher Tracy". (He also produced albums early in his career as "Jamie Starr".)
Many Italian-American singers have used stage names, as their birth names were difficult to pronounce or considered too ethnic for American tastes. Singers who changed their names include Dean Martin (born Dino Paul Crocetti), Connie Francis (born Concetta Franconero), Frankie Valli (born Francesco Castelluccio), Tony Bennett (born Anthony Benedetto), and Lady Gaga (born Stefani Germanotta).
In 2009, the British rock band Feeder briefly changed their name to Renegades so they could play a whole show featuring a set list in which 95 per cent of the songs played were from their forthcoming new album of the same name, with none of their singles included. Front man Grant Nicholas felt that if they played as Feeder, there would be uproar over his not playing any of the singles, so he used the pseudonym as a hint. A series of small shows were played in 2010, at 250- to 1,000-capacity venues, with the plan not to say who the band really were and just announce the shows as if they were a new band.
In many cases, hip-hop and rap artists prefer to use pseudonyms that represent some variation of their name, personality, or interests. Examples include Iggy Azalea (her stage name is a combination of her dog's name, Iggy, and her home street in Mullumbimby, Azalea Street), Ol' Dirty Bastard (known under at least six aliases), Diddy (previously known at various times as Puffy, P. Diddy, and Puff Daddy), Ludacris, Flo Rida (whose stage name is a tribute to his home state, Florida), British-Jamaican hip-hop artist Stefflon Don (real name Stephanie Victoria Allen), LL Cool J, and Chingy. Black metal artists also adopt pseudonyms, usually symbolizing dark values, such as Nocturno Culto, Gaahl, Abbath, and Silenoz. In punk and hardcore punk, singers and band members often replace real names with tougher-sounding stage names, such as Sid Vicious of the late 1970s band Sex Pistols and "Rat" of the early 1980s band The Varukers and the 2000s re-formation of Discharge. The punk rock band The Ramones had every member take the last name of Ramone.[citation needed]
Henry John Deutschendorf Jr., an American singer-songwriter, used the stage name John Denver. The Australian country musician born Robert Lane changed his name to Tex Morton. Reginald Kenneth Dwight legally changed his name in 1972 to Elton John.
|
https://en.wikipedia.org/wiki/Pseudonym
|
In the mathematical discipline of linear algebra, a matrix decomposition or matrix factorization is a factorization of a matrix into a product of matrices. There are many different matrix decompositions; each finds use among a particular class of problems.
In numerical analysis, different decompositions are used to implement efficient matrix algorithms.
For example, when solving a system of linear equations Ax = b, the matrix A can be decomposed via the LU decomposition. The LU decomposition factorizes a matrix into a lower triangular matrix L and an upper triangular matrix U. The systems L(Ux) = b and Ux = L⁻¹b require fewer additions and multiplications to solve, compared with the original system Ax = b, though one might require significantly more digits in inexact arithmetic such as floating point.
Similarly, the QR decomposition expresses A as QR with Q an orthogonal matrix and R an upper triangular matrix. The system Q(Rx) = b is solved by Rx = Qᵀb = c, and the system Rx = c is solved by 'back substitution'. The number of additions and multiplications required is about twice that of using the LU solver, but no more digits are required in inexact arithmetic because the QR decomposition is numerically stable.
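A brief sketch of both solution routes in Python, using standard NumPy and SciPy routines (the 2×2 system is an invented example):

import numpy as np
from scipy.linalg import lu_factor, lu_solve

A = np.array([[4.0, 3.0], [6.0, 3.0]])
b = np.array([10.0, 12.0])

# LU route: factor once, then solve by forward and back substitution.
lu, piv = lu_factor(A)
x_lu = lu_solve((lu, piv), b)

# QR route: A = QR, then solve the triangular system R x = Q^T b.
Q, R = np.linalg.qr(A)
x_qr = np.linalg.solve(R, Q.T @ b)

assert np.allclose(A @ x_lu, b) and np.allclose(A @ x_qr, b)

Factoring once also pays off when the same A must be solved against many right-hand sides, since only the cheap substitution steps are repeated.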
The Jordan normal form and the Jordan–Chevalley decomposition are examples of decompositions based on eigenvalues.
Scale-invariant decompositions refer to variants of existing matrix decompositions, such as the SVD, that are invariant with respect to diagonal scaling.
Analogous scale-invariant decompositions can be derived from other matrix decompositions; for example, to obtain scale-invariant eigenvalues.[3][4]
There exist analogues of the SVD, QR, LU and Cholesky factorizations for quasimatrices and cmatrices or continuous matrices.[13] A 'quasimatrix' is, like a matrix, a rectangular scheme whose elements are indexed, but one discrete index is replaced by a continuous index. Likewise, a 'cmatrix' is continuous in both indices. As an example of a cmatrix, one can think of the kernel of an integral operator.
These factorizations are based on early work by Fredholm (1903), Hilbert (1904) and Schmidt (1907). For an account, and a translation to English of the seminal papers, see Stewart (2011).
|
https://en.wikipedia.org/wiki/Matrix_decomposition
|
Adaptive management, also known as adaptive resource management or adaptive environmental assessment and management, is a structured, iterative process of robust decision making in the face of uncertainty, with the aim of reducing uncertainty over time via system monitoring. In this way, decision making simultaneously meets one or more resource management objectives and, either passively or actively, accrues the information needed to improve future management. Adaptive management is a tool which should be used not only to change a system, but also to learn about the system.[1] Because adaptive management is based on a learning process, it improves long-run management outcomes. The challenge in using the adaptive management approach lies in finding the correct balance between gaining knowledge to improve management in the future and achieving the best short-term outcome based on current knowledge.[2] This approach has more recently been employed in implementing international development programs.
There are a number of scientific and social processes which are vital components of adaptive management, including:
The achievement of these objectives requires an open management process which seeks to include past, present and future stakeholders. Adaptive management needs to at least maintain political openness, but usually aims to create it. Adaptive management must therefore be a scientific and social process. It must focus on the development of new institutions and institutional strategies in balance with scientific hypotheses and experimental frameworks (resilience.org).
Adaptive management can proceed as either passive or active adaptive management, depending on how learning takes place. Passive adaptive management values learning only insofar as it improves decision outcomes (i.e. passively), as measured by the specified utility function. In contrast, active adaptive management explicitly incorporates learning as part of the objective function, and hence, decisions which improve learning are valued over those which do not.[1][3] In both cases, as new knowledge is gained, the models are updated and optimal management strategies are derived accordingly. Thus, while learning occurs in both cases, it is treated differently. Often, deriving actively adaptive policies is technically very difficult, which prevents them from being more commonly applied.[4]
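To make the distinction concrete, here is a deliberately simplified toy sketch in Python; all payoff and information-gain numbers are invented for illustration and are not from the adaptive management literature. A passive policy optimizes expected payoff under the current belief alone, while an active policy also credits an action for what it will teach the manager:

# Two competing models of the managed system ("A" and "B") and a belief
# p that model A is correct. All numbers are illustrative.
payoff = {"conservative": {"A": 5.0, "B": 5.0},
          "aggressive":   {"A": 8.0, "B": 1.0}}
info_gain = {"conservative": 0.0, "aggressive": 0.8}  # how much each action teaches

def expected(action, p):
    return p * payoff[action]["A"] + (1 - p) * payoff[action]["B"]

def passive_choice(p):
    # Passive: learning happens only as a by-product; payoff alone is optimized.
    return max(payoff, key=lambda a: expected(a, p))

def active_choice(p, learning_weight=2.0):
    # Active: learning enters the objective function in its own right.
    return max(payoff, key=lambda a: expected(a, p) + learning_weight * info_gain[a])

p = 0.5
print(passive_choice(p))  # 'conservative' (5.0 beats 4.5)
print(active_choice(p))   # 'aggressive'   (4.5 + 1.6 beats 5.0 + 0.0)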
Key features of both passive and active adaptive management are:
However, a number of process failures related to information feedback can prevent effective adaptive management decision making:[5]
The use of adaptive management techniques can be traced back to peoples from ancient civilisations. For example, the Yap people of Micronesia have been using adaptive management techniques to sustain high population densities in the face of resource scarcity for thousands of years (Falanruw 1984). In using these techniques, the Yap people have altered their environment, creating, for example, coastal mangrove depressions and seagrass meadows to support fishing and termite-resistant wood (Stankey and Shinder 1997).
The origin of the adaptive management concept can be traced back to ideas of scientific management pioneered by Frederick Taylor in the early 1900s (Haber 1964), while the term "adaptive management" itself evolved in natural resource management workshops through decision makers, managers and scientists focussing on building simulation models to uncover key assumptions and uncertainties (Bormann et al. 1999).
Two ecologists at the University of British Columbia, C. S. Holling[1] and C. J. Walters,[3] further developed the adaptive management approach as they distinguished between passive and active adaptive management practice. Kai Lee, a Princeton-educated physicist, expanded upon the approach in the late 1970s and early 1980s while pursuing post-doctoral work at UC Berkeley. The approach was further developed at the International Institute for Applied Systems Analysis (IIASA) in Vienna, Austria, while C. S. Holling was director of the institute. In 1992, Hilbourne described three learning models for federal land managers, around which adaptive management approaches could be developed; these are reactive, passive and active.
Adaptive management has probably been most frequently applied in Yap, Australia and North America, initially in fishery management, but received broader application in the 1990s and 2000s. One of the most successful applications of adaptive management has been in the area of waterfowl harvest management in North America, most notably for the mallard.[6]
Adaptive management in a conservation project and program context can trace its roots back to at least the early 1990s, with the establishment of the Biodiversity Support Program (BSP)[7] in 1989. BSP was a USAID-funded consortium of WWF,[8] The Nature Conservancy (TNC),[9] and the World Resources Institute (WRI).[10] Its Analysis and Adaptive Management Program sought to understand the conditions under which certain conservation strategies were most effective and to identify lessons learned across conservation projects. When BSP ended in 2001, TNC and Foundations of Success[11] (FOS, a non-profit which grew out of BSP) continued to work actively in promoting adaptive management for conservation projects and programs. The approaches used included Conservation by Design[12] (TNC) and Measures of Success[13] (FOS).
In 2004, the Conservation Measures Partnership (CMP)[14]– which includes several former BSP members – developed a common set of standards and guidelines[15]for applying adaptive management to conservation projects and programs.
Applying adaptive management in a conservation or ecosystem management project involves the integration of project/program design, management, and monitoring to systematically test assumptions in order to adapt and learn. The three components of adaptive management in environmental practice are:
Open Standards for the Practice of Conservation[18] lays out five main steps to an adaptive management project cycle. The Open Standards represent a compilation and adaptation of best practices and guidelines across several fields and across several organizations within the conservation community. Since the release of the initial Open Standards (updated in 2007 and 2013), thousands of project teams from conservation organizations (e.g., TNC, Rare, and WWF), local conservation groups, and donors alike have begun applying these Open Standards to their work. In addition, several CMP members have developed training materials and courses to help apply the Standards.
Some recent write-ups of adaptive management in conservation include wildlife protection (SWAP, 2008), forest ecosystem protection (CMER, 2010), coastal protection and restoration (LACPR, 2009), natural resource management (water, land and soil), species conservation – especially fish conservation from overfishing (FOS, 2007) – and climate change (DFG, 2010). In addition, some other examples follow:
The concept of adaptive management is not restricted to natural resources or ecosystem management, as similar concepts have been applied to international development programming.[20][21] This has often been a recognition of the "wicked" nature of many development challenges and the limits of traditional planning processes.[22][23][24] One of the principal changes facing international development organizations is the need to be more flexible, adaptable and focused on learning.[25] This is reflected in international development approaches such as Doing Development Differently, Politically Informed Programming and Problem Driven Iterative Adaptation.[26][27][28]
One recent example of the use of adaptive management by international development donors is the planned Global Learning for Adaptive Management (GLAM) programme to support adaptive management in the Department for International Development and USAID. The programme is establishing a centre for learning about adaptive management to support the utilization and accessibility of adaptive management.[29][30] In addition, donors have been focused on amending their own programmatic guidance to reflect the importance of learning within programs: for instance, USAID's recent focus in their ADS guidance on the importance of collaborating, learning and adapting.[31][32] This is also reflected in the Department for International Development's Smart Rules that provide the operating framework for their programs, including the use of evidence to inform their decisions.[33] There are a variety of tools used to operationalize adaptive management in programs, such as learning agendas and decision cycles.[34]
Collaborating, learning and adapting (CLA) is a concept related to the operationalizing of adaptive management in international development that describes a specific way of designing, implementing, adapting and evaluating programs.[35][36] CLA involves three concepts:
CLA integrates three closely connected concepts within the organizational theory literature: namely collaborating, learning and adapting. There is evidence of the benefits of collaborating internally within an organization and externally with other organizations.[38] Much of the production and transmission of knowledge – both explicit knowledge and tacit knowledge – occurs through collaboration.[39] There is evidence for the importance of collaboration among individuals and groups for innovation, knowledge production, and diffusion – for example, the benefits of staff interacting with one another and transmitting knowledge.[40][41][42] The importance of collaboration is closely linked to the ability of organizations to collectively learn from each other, a concept noted in the literature on learning organizations.[43][44][45]
CLA, an adaptive management practice, is being employed by implementing partners[46][47] that receive funding from the federal government of the United States,[48][49][50] but it is primarily a framework for internal change efforts that aim at incorporating collaboration, learning, and adaptation within the United States Agency for International Development (USAID), including its missions located around the world.[51] CLA has been linked to a part of USAID's commitment to becoming a learning organization.[52] CLA represents an approach to combining strategic collaboration, continuous learning, and adaptive management.[53] Part of integrating the CLA approach is providing tools and resources, such as the Learning Lab, to staff and partner organizations.[54] The CLA approach is detailed for USAID staff in the recently revised program policy guidance.[31]
Adaptive management as a systematic process for improving environmental management policies and practices is the traditional application; however, the adaptive management framework can also be applied to other sectors seeking sustainability solutions, such as business and community development. Adaptive management as a strategy emphasizes the need to change with the environment and to learn from doing. Adaptive management applied to ecosystems makes overt sense when considering ever-changing environmental conditions. The flexibility and constant learning of an adaptive management approach is also a logical application for organizations seeking sustainability methodologies.
Businesses pursuing sustainability strategies would employ an adaptive management framework to ensure that the organization is prepared for the unexpected and geared for change. By applying an adaptive management approach, the business begins to function as an integrated system, adjusting and learning from a multi-faceted network of influences – not just environmental but also economic and social (Dunphy, Griffiths, & Benn, 2007). The goal of any sustainable organization guided by adaptive management principles must be to engage in active learning to direct change towards sustainability (Verine, 2008). This "learning to manage by managing to learn" (Bormann BT, 1993) will be at the core of a sustainable business strategy.
Sustainable community development requires recognition of the relationship between environment, economics and social instruments within the community. An adaptive management approach to creating sustainable community policy and practice also emphasizes the connection and confluence of those elements. Looking into the cultural mechanisms which contribute to a community value system often highlights the parallel to adaptive management practices, "with [an] emphasis on feedback learning, and its treatment of uncertainty and unpredictability" (Berkes, Colding, & Folke, 2000). Often this is the result of indigenous knowledge and historical decisions of societies deeply rooted in ecological practices (Berkes, Colding, & Folke, 2000). By applying an adaptive management approach to community development, the resulting systems can develop built-in sustainable practice, as explained by the Environmental Advisory Council (2002): "active adaptive management views policy as a set of experiments designed to reveal processes that build or sustain resilience. It requires, and facilitates, a social context with flexible and open institutions and multi-level governance systems that allow for learning and increase adaptive capacity without foreclosing future development options" (p. 1121). A practical example of adaptive management as a tool for sustainability was the application of a modified variation of adaptive management using artvoice, photovoice, and agent-based modeling in a participatory social framework of action. This application was used in field research on tribal lands, first to identify the environmental issue and impact of illegal trash dumping, and then to discover a solution through iterative agent-based modeling using NetLogo on a theoretical "regional cooperative clean-energy economy". This cooperative economy incorporated a mixed application of traditional trash recycling and a waste-to-fuels process of carbon recycling of non-recyclable trash into ethanol fuel. The waste-to-fuels application was inspired by the pioneering work of the Canadian company Enerkem (see Bruss, 2012, PhD dissertation: Human Environment Interactions and Collaborative Adaptive Capacity Building in a Resilience Framework, GDPE Colorado State University).
In an ever-changing world, adaptive management appeals to many practices seeking sustainable solutions by offering a framework for decision making that proposes to support a sustainable future which "conserves and nurtures the diversity – of species, of human opportunity, of learning institutions and of economic options" (The Environmental Advisory Council, 2002, p. 1121).
It is difficult to test the effectiveness of adaptive management in comparison to other management approaches. One challenge is that once a system is managed using one approach it is difficult to determine how another approach would have performed in exactly the same situation.[55]One study tested the effectiveness of formal passive adaptive management in comparison to human intuition by having natural resource management students make decisions about how to harvest a hypothetical fish population in an online computer game. The students on average performed poorly in comparison to the computer programs implementing passive adaptive management.[55][56]
Collaborative adaptive management is often celebrated as an effective way to deal with natural resource management under high levels of conflict, uncertainty and complexity.[57] The effectiveness of these efforts can be constrained by both social and technical barriers. As the case of the Glen Canyon Dam Adaptive Management Program in the US illustrates, effective collaborative adaptive management efforts require clear and measurable goals and objectives, incentives and tools to foster collaboration, long-term commitment to monitoring and adaptation, and straightforward joint fact-finding protocols.[58] In Colorado, USA, a ten-year, ranch-scale (2590 ha) experiment began in 2012 at the Agricultural Research Service (ARS) Central Plains Experimental Range to evaluate the effectiveness and process of collaborative adaptive management[57] on rangelands. The Collaborative Adaptive Rangeland Management ("CARM") project monitors outcomes from yearling steer grazing management on ten 130 ha pastures, conducted by a group of conservationists, ranchers, public employees, and researchers. This team compares ecological monitoring data tracking profitability and conservation outcomes with outcomes from a "traditional" management treatment: a second set of ten pastures managed without adaptive decision making but with the same stocking rate. Early evaluations of the project by social scientists offer insights for more effective adaptive management.[59] First, trust is primary and essential to learning in adaptive management, not a side benefit. Second, practitioners cannot assume that extensive monitoring data or large-scale efforts will automatically facilitate successful collaborative adaptive management. Active, long-term efforts to build trust among scientists and stakeholders are also important. Finally, explicit efforts to understand, share and respect multiple types of manager knowledge, including place-based ecological knowledge practiced by local managers, are necessary to manage adaptively for multiple conservation and livelihood goals on rangelands.[59] Practitioners can expect adaptive management to be a complex, non-linear process shaped by social, political and ecological processes, as well as by data collection and interpretation.
Information and guidance on the entire adaptive management process is available from CMP members' websites and other online sources:
|
https://en.wikipedia.org/wiki/Adaptive_management
|
A virtual community is a social network of individuals who connect through specific social media, potentially crossing geographical and political boundaries in order to pursue mutual interests or goals. Some of the most pervasive virtual communities are online communities operating under social networking services.
Howard Rheingolddiscussed virtual communities in his book,The Virtual Community, published in 1993. The book's discussion ranges from Rheingold's adventures onThe WELL,computer-mediated communication, social groups and information science. Technologies cited includeUsenet,MUDs(Multi-User Dungeon) and their derivativesMUSHesandMOOs,Internet Relay Chat(IRC),chat roomsandelectronic mailing lists. Rheingold also points out the potential benefits for personal psychological well-being, as well as for society at large, of belonging to a virtual community. At the same time, it showed that job engagement positively influences virtual communities of practice engagement.[1]
Virtual communities all encourage interaction, whether focused on a particular shared interest, on communication for its own sake, or on both. Community members interact over a shared passion through various means: message boards, chat rooms, social networking websites, or virtual worlds.[2] Members often become attached to the community, logging in and out of its sites throughout the day, a pattern that can become addictive.[3]
The traditional definition of a community is of a geographically circumscribed entity (neighborhoods, villages, etc.). Virtual communities are usually dispersed geographically, and therefore are not communities under the original definition. Some online communities are linked geographically, and are known as community websites. However, if one considers communities simply to possess boundaries of some sort between their members and non-members, then a virtual community is certainly a community.[4] Virtual communities resemble real-life communities in the sense that both provide support, information, friendship and acceptance between strangers.[5] Within a virtual community space, users may be expected to feel a sense of belonging and a mutual attachment among the members who share that space.
One of the most influential aspects of virtual communities is the opportunity to communicate across several media platforms or networks; virtual communities have largely displaced earlier channels such as postal mail, fax machines, and even the telephone. Early research into the existence of media-based communities was concerned with the nature of reality, and with whether communities actually could exist through the media, which could place virtual community research within the social sciences' definition of ontology. In the seventeenth century, scholars associated with the Royal Society of London formed a community through the exchange of letters.[4] "Community without propinquity", coined by urban planner Melvin Webber in 1963, and "community liberated", analyzed by Barry Wellman in 1979, began the modern era of thinking about non-local community.[6] Likewise, Benedict Anderson's Imagined Communities (1983) described how different technologies, such as national newspapers, contributed to the development of national and regional consciousness among early nation-states.[7] Some authors who built their theories on Anderson's imagined communities have been critical of the concept, claiming that all communities are based on communication and that the virtual/real dichotomy is disintegrating, making use of the word "virtual" problematic or even obsolete.[8]
Virtual communities are used by a variety of social and professional groups; interaction between community members varies from personal to purely formal. For example, an email distribution list can serve as a personal means of communicating with family and friends, and also as a formal channel for coordinating with coworkers.
User experience is the ultimate goal for the program or software used by an internet community, because user experience will determine the software's success.[9] The software for social media pages or virtual communities is structured around the users' experience and designed specifically for online use.
User experience testing is utilized to reveal something about the personal experience of the human being using a product or system.[10] When testing user experience in a software interface, three main characteristics are needed: a user who is engaged, a user who is interacting with a product or interface, and a definition of the user's experience in terms that are observable or measurable.[10] User experience metrics are based on reliability and repeatability, using a consistent set of measurements to yield comparable outcomes and to collect data on user retention.
The widespread use of the Internet and virtual communities by millions of diverse users for socializing is a phenomenon that raises new issues for researchers and developers. The vast number and diversity of individuals participating in virtual communities worldwide makes it a challenge to test usability across platforms to ensure the best overall user experience. Some well-established measures applied to the usability framework for online communities are speed of learning, productivity, user satisfaction, how much people remember about using the software, and how many errors they make.[11] The human-computer interactions measured during a usability test focus on individuals rather than on their social interactions in the online community. The success of online communities, however, depends on the integration of usability and social semiotics. Social codes are established and reinforced by the regular repetition of behavioral patterns.[12] People communicate their social identities or culture code through the work they do, the way they talk, the clothes they wear, their eating habits, domestic environments and possessions, and use of leisure time. Usability testing metrics can be used to determine social codes by evaluating a user's habits when interacting with a program, and the information gathered can establish demographic factors and help define the semiotic social code. Dialogue and social interaction support, information design, navigation support, and accessibility are integral components specific to online communities. Efficient communication requires a common set of signs in the minds of those seeking to communicate.[12] As virtual communities grow, so does the diversity of their users, and as technologies evolve and mature, this increasing complexity does not necessarily mean that the technologies become easier to use.[10] Usability testing in virtual communities can ensure that users communicate effectively through social and semiotic codes while maintaining their social realities and identities.[11][12]
Recent studies have looked into the development of health-related communities and their impact on people already suffering from health issues. These forms of social networks allow for open conversation between individuals who are going through similar experiences, whether their own or a family member's.[13] Such sites have grown so popular that many health care providers now form groups for their patients, providing web areas where patients may direct questions to doctors. These sites prove especially useful for rare medical conditions. People with rare or debilitating disorders may not be able to access support groups in their physical community, so online communities act as their primary means of support. Online health communities can serve as supportive outlets, as they facilitate connecting with others who truly understand the disease, and they offer more practical support as well, such as help in adjusting to life with the disease.[14] Patients come to online health communities for different reasons: some need quick answers to questions, others someone to talk to. Involvement in social communities of similar health interests has helped patients develop a better understanding of, and better behavior towards, treatment and health practices.[15][16] For users with serious, life-threatening and complex conditions, this kind of personal context can be especially helpful.[17] Patients increasingly use such outlets because they provide personalized, emotional support and information that improve their experience.[17] The extent to which these practices affect health is still being studied.
Studies on health networks have mostly been conducted on groups which suffer from extreme forms of disease, for example cancer patients, HIV patients, or patients with other life-threatening diseases. It is general knowledge that one participates in online communities to interact with society and develop relationships.[18] Individuals who suffer from rare or severe illnesses may be unable to meet physically because of distance or because leaving a secure environment could be a risk to their health. Thus, they have turned to the internet.
Some studies have indicated that virtual communities can provide valuable benefits to their users. Online health-focused communities were shown to offer a unique form of emotional support that differed from event-based realities and informational support networks. A growing body of material shows how online communities affect the health of their users. The creation of health communities appears to have a positive impact on those who are ill or in need of medical information.[19]
Studies have found that young people are often bored by politics and history and more interested in celebrity news and drama. Young people report that "voicing what you feel" does not mean "being heard", so they see little point in participating in such engagements, believing they are not being listened to anyway.[20] Over the years this has changed, as new forms of civic engagement and citizenship have emerged from the rise of social networking sites. Networking sites act as a medium for expression and discourse about issues in specific user communities. Online content-sharing sites have made it easy for youth, as well as others, not only to express themselves and their ideas through digital media but also to connect with large networked communities. Within these spaces, young people are pushing the boundaries of traditional forms of engagement, such as voting and joining political organizations, and creating their own ways to discuss, connect, and act in their communities.[21]
Civic engagement through online volunteering has been shown to have positive effects on personal satisfaction and development. Some 84 percent of online volunteers found that their online volunteering experience had contributed to their personal development and learning.[22]
In his 2006 book The Wealth of Networks, Yochai Benkler suggests that virtual communities would "come to represent a new form of human communal existence, providing new scope for building a shared experience of human interaction".[23] Although Benkler's prediction has not become entirely true, clearly communications and social relations are extremely complex within a virtual community. The two main effects that can be seen, according to Benkler, are a "thickening of preexisting relations with friends, family and neighbours" and the beginnings of the "emergence of greater scope for limited-purpose, loose relationships".[23] Despite being characterized as "loose", Benkler argues that these relationships remain meaningful.
Previous concerns about the effects of Internet use on community and family fell into two categories: 1) sustained, intimate human relations "are critical to well-functioning human beings as a matter of psychological need" and 2) people with "social capital" are better off than those who lack it, including in terms of political participation.[23] However, Benkler argues that unless Internet connections actually displace direct, unmediated human contact, there is no basis to think that using the Internet will lead to a decline in those nourishing connections we need psychologically, or in the useful connections we make socially. Benkler further suggests that the nature of an individual changes over time, based on social practices and expectations: there is a shift from individuals who depend upon locally embedded, unmediated and stable social relationships to networked individuals who depend more upon their own combination of strong and weak ties across boundaries and who weave their own fluid relationships. Manuel Castells calls this the "networked society".[23]
In 1997, MCI Communications released the "Anthem" advertisement, heralding the internet as a utopia without age, race, or gender. Lisa Nakamura argues in chapter 16 of her 2002 book After/image of identity: Gender, Technology, and Identity Politics that technology gives us iterations of our age, race and gender in virtual spaces, rather than extinguishing them. Nakamura uses a metaphor of "after-images" to describe the cultural phenomenon of expressing identity on the internet. The idea is that any performance of identity on the internet is simultaneously present and past-tense, "posthuman and projectionary", due to its immortality.[24]
Sherry Turkle, professor of Social Studies of Science and Technology at MIT, believes the internet is a place where acts of discrimination are less likely to occur. In her 1995 book Life on the Screen: Identity in the Age of the Internet, she argues that discrimination is easier in person, where it is easier to identify, at face value, what deviates from one's norms. The internet allows for a more fluid expression of identity, and thus people become more accepting of inconsistent personae within themselves and others. For these reasons, Turkle argues, users in online spaces are less compelled to judge or compare themselves to their peers, giving people in virtual settings an opportunity to develop a greater capacity for acknowledging diversity.[25]
Nakamura argues against this view, coining the term identity tourism in her 1999 article "Race In/For Cyberspace: Identity Tourism and Racial Passing on the Internet". Identity tourism, in the context of cyberspace, describes the phenomenon of users donning and doffing other-race and other-gender personae. Nakamura finds that the performed behavior of these identity tourists often perpetuates stereotypes.[26]
In the 1998 book Communities in Cyberspace, authors Marc A. Smith and Peter Kollock argue that interactions with strangers are shaped by whom we are speaking or interacting with. People use everything from clothes, voice, body language, gestures, and power to identify others, which plays a role in how they will speak or interact with them. Smith and Kollock contend that online interaction strips away the face-to-face gestures and signs that people tend to display in front of one another. Although this makes identification difficult online, it also provides space to play with one's identity.[27]
The gaming community is extremely vast and accessible to a wide variety of people. However, there are negative effects on the relationships "gamers" have with the medium when expressing identity of gender. Adrienne Shaw notes in her 2012 article "Do you identify as a gamer? Gender, race, sexuality, and gamer identity" that gender, perhaps subconsciously, plays a large role in identifying oneself as a "gamer".[28] According to Lisa Nakamura, representation in video games has become a problem, as players from minority backgrounds who do not fit the stereotype of the white teenage male gamer are not represented.[29]
The explosive diffusion[30] of the Internet since the mid-1990s fostered the proliferation of virtual communities in the form of social networking services and online communities. Virtual communities may synthesize Web 2.0 technologies with the community, and have therefore been described as Community 2.0, although strong community bonds have been forged online since the early 1970s on timeshare systems like PLATO and later on Usenet. Online communities depend upon social interaction and exchange between users online. This interaction emphasizes the reciprocity element of the unwritten social contract between community members.
An online message board is a forum where people can discuss thoughts or ideas on various topics or simply express an idea. Users may choose which thread, or board of discussion, they would like to read or contribute to. A user starts a discussion by making a post.[31] Other users who choose to respond can follow the discussion by adding their own posts to that thread at any time. Unlike in spoken conversations, message boards do not usually have instantaneous responses; users actively go to the website to check for responses.
Anyone can register to participate in an online message board. People can choose to participate in the virtual community even when they choose not to contribute their own thoughts and ideas. Unlike chat rooms, message boards can in practice accommodate an almost unlimited number of users.
Internet users' readiness to talk to and reach out to strangers online is unlike their behavior in real-life encounters, where people are hesitant and often unwilling to step in to help strangers. Studies have shown that people are more likely to intervene when they are the only one in a situation. With Internet message boards, users at their computers are alone, which might contribute to their willingness to reach out. Another possible explanation is that people can withdraw from a situation much more easily online than offline: they can simply click exit or log off, whereas leaving a real-life situation means finding a physical exit and dealing with the repercussions. The lack of status attached to an online identity may also encourage people, because, if one chooses to keep it private, there is no associated label of gender, age, ethnicity or lifestyle.[32]
Shortly after the rise of interest in message boards and forums, people began to want a way of communicating with their "communities" in real time. The downside of message boards was that people had to wait until another user replied to their posting, which, with users spread across time zones around the world, could take a while. The development of online chat rooms allowed people to talk to whoever was online at the same time they were: messages were sent and online users could immediately respond.
The original development by CompuServe, CB, hosted forty channels in which users could talk to one another in real time. The idea of forty different channels led to the idea of chat rooms that were specific to different topics. Users could choose to join an existing chat room they found interesting, or start a new "room" if they found nothing to their liking. Real-time chatting was also brought into virtual games, where people could play against one another and also talk to one another through text. Chat rooms can now be found on all sorts of topics, so that people can talk with others who share similar interests. Chat rooms are provided by Internet Relay Chat (IRC) and by individual websites such as Yahoo, MSN, and AOL.
Chat room users communicate through text-based messaging. Most chat room providers are similar, offering an input box, a message window, and a participant list. The input box is where users type the text-based message to be sent to the providing server. The server then transmits the message to the computers of everyone in the chat room so that it can be displayed in the message window. The message window allows the conversation to be tracked and usually adds a time stamp once a message is posted. There is usually a list of the users who are currently in the room, so that people can see who is in their virtual community.
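The transmit-and-relay loop just described is easy to sketch. The following minimal Python server is illustrative only (the port and message framing are invented, not any provider's actual protocol): it keeps a participant list, accepts lines of text from any connected client, timestamps them, and broadcasts them to every participant.

```python
import socket
import threading
import time

clients = {}                      # socket -> nickname: the participant list
lock = threading.Lock()

def broadcast(text):
    """Relay one timestamped message to every connected participant."""
    stamped = time.strftime("[%H:%M:%S] ") + text
    with lock:
        for sock in list(clients):
            try:
                sock.sendall((stamped + "\n").encode())
            except OSError:
                del clients[sock]          # drop dead connections

def handle(sock, addr):
    nick = sock.recv(1024).decode().strip() or str(addr)
    with lock:
        clients[sock] = nick
    broadcast(f"{nick} joined the room")
    try:
        for line in sock.makefile():       # each line is one "input box" send
            broadcast(f"{nick}: {line.strip()}")
    finally:
        with lock:
            clients.pop(sock, None)
        broadcast(f"{nick} left the room")

def serve(port=9999):
    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    server.bind(("", port))
    server.listen()
    while True:
        sock, addr = server.accept()
        threading.Thread(target=handle, args=(sock, addr), daemon=True).start()

if __name__ == "__main__":
    serve()
```

A client can be as simple as a telnet session: the first line sent is taken as the nickname, and every subsequent line appears, timestamped, in every participant's message window.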
Users can communicate as if they are speaking to one another in real life. This "simulated reality" attribute makes it easy for users to form a virtual community, because chat rooms allow users to get to know one another as if they were meeting in real life. The individual "room" feature also makes it more likely that the people within a chat room share a similar interest; an interest that allows them to bond with one another and be willing to form a friendship.[33][34]
Virtual worlds are the most interactive of all virtual community forms. In this type of virtual community, people are connected by living as an avatar in a computer-based world. Users create their own avatar character (from choosing the avatar's outfits to designing the avatar's house) and control their character's life and interactions with other characters in the 3D virtual world. It is similar to a computer game; however, there is no objective for the players. A virtual world simply gives users the opportunity to build and operate a fantasy life in the virtual realm. Characters within the world can talk to one another and have almost the same interactions people would have in reality. For example, characters can socialize with one another and hold intimate relationships online.
This type of virtual community allows people not only to hold conversations with others in real time, but also to engage and interact with others. The avatars that users create are like humans: users can choose to make avatars resembling themselves, or take on an entirely different personality. When characters interact, they can get to know one another through text-based talking and virtual experience (such as having avatars go on a date in the virtual world). A chat room may offer real-time conversation, but people can only talk to one another there. In a virtual world, characters can do activities together, just as friends could in reality. Communities in virtual worlds are most similar to real-life communities because the characters are physically in the same place, even if the users operating them are not.[35] Second Life is one of the most popular virtual worlds on the Internet. Whyville offers an alternative for younger audiences, where safety and privacy are a concern. In Whyville, players use the virtual world's simulation aspect to experiment with and learn about various phenomena.
Another use for virtual worlds has been in business communications. Virtual world technology such as photorealistic avatars and positional sound creates an atmosphere for participants that provides a less fatiguing sense of presence, and enterprise controls allow the meeting host to dictate the permissions of attendees, such as who can speak or who can move about. Zoom, a platform whose use grew rapidly during the COVID-19 pandemic, similarly lets meeting hosts dictate who can or cannot speak, by muting or unmuting them, and who is able to join. Several companies are creating business-based virtual worlds, including Second Life. These business-based worlds have stricter controls and offer functionality such as muting individual participants, desktop sharing, or access lists to provide a highly interactive and controlled virtual world for a specific business or group. Business-based virtual worlds may also provide various enterprise features such as single sign-on with third-party providers, or content encryption.[citation needed]
Social networking services are the most prominent type of virtual community. They are websites or software platforms that focus on creating and maintaining relationships. Facebook, Twitter, and Instagram are all virtual communities. On these sites, one typically creates a profile or account and adds or follows friends, allowing people to connect and look for support, using the social networking service as a gathering place. These websites often allow people to keep up to date with their friends' and acquaintances' activities without making much of an effort.[36] Several of these sites support video chat with multiple people at once, making the connections feel more like being together in person. On Facebook, for example, one can upload photos and videos, chat, make friends, reconnect with old ones, and join groups or causes.[37]
Participatory culture plays a large role in online and virtual communities. In participatory culture, users feel that their contributions are important and that by contributing, they are forming meaningful connections with other users. The line between producing content on a website and consuming it becomes blurred. According to Henry Jenkins, "Members believe their contributions matter and feel some degree of social connection with one another" (Jenkins, et al. 2005). The exchange and consumption of information requires a degree of "digital literacy", such that users are able to "archive, annotate, appropriate, transform and recirculate media content" (Jenkins). Specialized information communities centralize a specific group of users who are all interested in the same topic. For example, TasteofHome.com, the website of the magazine Taste of Home, is a specialized information community that focuses on baking and cooking. Its users contribute consumer information relating to their hobby and participate in further specialized groups and forums. Specialized information communities are places where people with similar interests can discuss and share their experiences and interests.
Howard Rheingold's The Virtual Community could be compared with Mark Granovetter's ground-breaking "strength of weak ties" article, published twenty years earlier in the American Journal of Sociology. Rheingold translated, practiced and published Granovetter's conjectures about strong and weak ties in the online world. His comment on the first page even illustrates the social networks in the virtual society: "My seven year old daughter knows that her father congregates with a family of invisible friends who seem to gather in his computer. Sometimes he talks to them, even if nobody else can see them. And she knows that these invisible friends sometimes show up in the flesh, materializing from the next block or the other side of the world" (page 1). Indeed, in his revised version of The Virtual Community, Rheingold goes so far as to say that had he read Barry Wellman's work earlier, he would have called his book "online social networks".
Rheingold's definition contains the terms "social aggregation and personal relationships" (page 3). Lipnack and Stamps (1997)[38] and Mowshowitz (1997) point out how virtual communities can work across space, time and organizational boundaries; Lipnack and Stamps (1997)[38] mention a common purpose; and Lee, Eom, Jung and Kim (2004) introduce "desocialization", meaning less frequent interaction with humans in traditional settings, i.e. an increase in virtual socialization. Calhoun (1991) presents a dystopian argument, asserting the impersonality of virtual networks. He argues that IT has a negative influence on offline interaction between individuals because virtual life takes over our lives. He believes that it also creates different personalities in people, which can cause frictions in offline and online communities and groups and in personal contacts (Wellman & Haythornthwaite, 2002). More recently, Mitch Parsell (2008) has suggested that virtual communities, particularly those that leverage Web 2.0 resources, can be pernicious by leading to attitude polarization and increased prejudices, and by enabling sick individuals to deliberately indulge in their diseases.[39]
Internet communities offer the advantage of instant information exchange that is not possible in a real-life community. This interaction allows people to engage in many activities from their home, such as shopping, paying bills, and searching for specific information. Users of online communities also have access to thousands of specific discussion groups where they can form specialized relationships and access information in categories such as politics, technical assistance, social activities, health (see above) and recreational pleasures. Virtual communities provide an ideal medium for these types of relationships because information can easily be posted and response times can be very fast. Another benefit is that these types of communities can give users a feeling of membership and belonging. Users can give and receive support, and the communities are simple and cheap to use.[40]
Economically, virtual communities can be commercially successful, making money through membership fees, subscriptions, usage fees, and advertising commissions. Consumers generally feel comfortable making transactions online provided that the seller has a good reputation throughout the community. Virtual communities also provide the advantage of disintermediation in commercial transactions, which eliminates vendors and connects buyers directly to suppliers. Disintermediation eliminates pricey mark-ups and allows for a more direct line of contact between the consumer and the manufacturer.[41]
While instant communication means fast access, it also means that information is posted without being reviewed for correctness. It is difficult to choose reliable sources because there is no editor who reviews each post and makes sure it is up to a certain degree of quality.[42]
In theory, online identities can be kept anonymous, which enables people to use the virtual community for fantasy role-playing, as in the case of Second Life's use of avatars. Some professionals urge caution with users of online communities because predators also frequent them, looking for victims who are vulnerable to online identity theft or online predation.[43]
There are also issues surrounding bullying in internet communities. Because users do not have to show their faces, people may engage in threatening and discriminatory behavior towards others in the belief that they will face no consequences.[44]
There are ongoing issues with gender and race in online communities as well, where only the majority is represented on screen and those of different backgrounds and genders are underrepresented.[29]
|
https://en.wikipedia.org/wiki/Online_Community
|
A collaborative innovation network (CoIN) is a collaborative innovation practice that uses internet platforms to promote communication and innovation within self-organizing virtual teams.
CoINs work across hierarchies and boundaries, so that members can exchange ideas and information directly and openly. This collaborative and transparent environment fosters innovation. Peter Gloor describes the phenomenon as "swarm creativity"; he says, "CoINs are the best engines to drive innovation."[1]
CoINs existed well before the advent of modern communication technology. However, the Internet and instant communication improved productivity and enabled a global reach. Today, CoINs rely on the Internet, e-mail, and other communication vehicles for information sharing.[1]
According to Gloor, CoINs have five main characteristics:[1]
There are also five essential elements of collaborative innovation networks (which Gloor calls "genetic code"):[1]
CoINs have produced many disruptive innovations, such as the Internet, Linux, the Web and Wikipedia. Students with little or no budget created these inventions in universities or labs; they were focused not on money but on the sense of accomplishment.[1]
Faced with creations like the Internet, large companies such as IBM and Intel have learned to use the principles of open innovation to enhance their research learning curve. They increased or established collaborations with universities, agencies, and small companies to accelerate their processes and launch new services faster.[1]
Asheim and Isaksen (2002)[2] conclude that innovation networks contribute to the optimal allocation of resources and promote knowledge-transfer performance. However, four factors of collaborative innovation networks affect the performance of CoINs differently:[3]
Collaborative innovation still needs to be empowered. A more collaborative approach involving stakeholders such as governments, corporations, entrepreneurs, and scholars is critical to tackling today's main challenges.[according to whom?]
|
https://en.wikipedia.org/wiki/Collaborative_innovation_network
|
The Scottish Book (Polish: Księga Szkocka) was a thick notebook used by mathematicians of the Lwów School of Mathematics in Poland for jotting down problems meant to be solved. The notebook was named after the "Scottish Café" where it was kept.
Originally, the mathematicians who gathered at the cafe would write down the problems and equations directly on the cafe's marble table tops, but these would be erased at the end of each day, and so the record of the preceding discussions would be lost. The idea for the book was most likely suggested by Stefan Banach's wife, Łucja Banach. Stefan or Łucja Banach purchased a large notebook and left it with the proprietor of the cafe.[1][2]
The Scottish Café (Polish: Kawiarnia Szkocka) was the café in Lwów (now Lviv, Ukraine) where, in the 1930s and 1940s, mathematicians from the Lwów School collaboratively discussed research problems, particularly in functional analysis and topology.
Stanislaw Ulam recounts that the tables of the café had marble tops, so they could write in pencil, directly on the table, during their discussions. To keep the results from being lost, and after becoming annoyed with their writing directly on the table tops, Stefan Banach's wife provided the mathematicians with a large notebook, which was used for writing the problems and answers and eventually became known as the Scottish Book. The book, a collection of solved, unsolved, and even probably unsolvable problems, could be borrowed by any of the guests of the café. Solving any of the problems was rewarded with prizes, with the most difficult and challenging problems having expensive prizes (during the Great Depression and on the eve of World War II), such as a bottle of fine brandy.[3]
For problem 153, which was later recognized as being closely related to Stefan Banach's "basis problem", Stanisław Mazur offered the prize of a live goose. This problem was solved only in 1972 by Per Enflo, who was presented with the live goose in a ceremony that was broadcast throughout Poland.[4]
The café building used to house the Universal Bank at the street address of 27 Taras Shevchenko Prospekt. The original cafe was renovated in May 2014 and contains a copy of the Scottish Book.
A total of 193 problems were written down in the book.[1] Stanisław Mazur contributed a total of 43 problems, 24 of them as a single author and 19 together with Stefan Banach.[5] Banach himself wrote 14, plus another 11 with Stanisław Ulam and Mazur. Ulam wrote 40 problems and an additional 15 with others.[1]
During the Soviet occupation of Lwów, several Russian mathematicians visited the city and also added problems to the book.[2]
Hugo Steinhaus contributed the last problem on 31 May 1941, shortly before the German attack on the Soviet Union;[6][7] this problem involved a question about the likely distribution of matches within a matchbox, a problem motivated by Banach's habit of chain smoking cigarettes.[1]
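Steinhaus's question is now known as the Banach matchbox problem: a smoker carries one box of $N$ matches in each of two pockets and reaches into a random pocket for each match; when a box is first discovered to be empty, how many matches remain in the other box? The classical answer is $P(K = k) = \binom{2N-k}{N} 2^{-(2N-k)}$, which is easy to check by simulation, as in the illustrative Python sketch below.

```python
import math
import random

def simulate(N=50, trials=200_000):
    """Empirical distribution of matches left when a box is first found empty."""
    counts = [0] * (N + 1)
    for _ in range(trials):
        boxes = [N, N]
        while True:
            pick = random.randrange(2)      # reach into a random pocket
            if boxes[pick] == 0:            # the box is discovered empty
                counts[boxes[1 - pick]] += 1
                break
            boxes[pick] -= 1
    return [c / trials for c in counts]

def exact(N, k):
    """Banach's answer: P(K = k) = C(2N - k, N) / 2^(2N - k)."""
    return math.comb(2 * N - k, N) / 2 ** (2 * N - k)

empirical = simulate()
for k in (0, 10, 20):
    print(k, round(empirical[k], 4), round(exact(50, k), 4))
```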
After World War II, an English translation annotated by Ulam was published by Los Alamos National Laboratory in 1957.[8] Steinhaus, then at the University of Wrocław, revived the tradition of the Scottish Book by initiating The New Scottish Book, which ran from 1945 to 1958.
The tradition of the Scottish Book continues to inspire not only mathematicians but also educators in other fields. Piotr Kowzan proposed a "goose method" as a pedagogical tool for marking open problems and encouraging future research. Inspired by the eccentric rewards in the Scottish Book, this approach aims to foster curiosity and knowledge-building across generations.[9]
The following mathematicians were associated with the Lwów School of Mathematics or contributed to The Scottish Book:
49°50′09″N 24°01′57″E
|
https://en.wikipedia.org/wiki/Scottish_Book
|
In machine learning, grokking, or delayed generalization, is a transition to generalization that occurs many training iterations after the interpolation threshold, after many iterations of seemingly little progress, as opposed to the usual process where generalization occurs slowly and progressively once the interpolation threshold has been reached.[2][3][4]
Grokking was introduced in January 2022 by OpenAI researchers investigating how neural networks perform calculations. The term is derived from the word grok, coined by Robert Heinlein in his novel Stranger in a Strange Land.[1]
Grokking can be understood as a phase transition during the training process.[5] While grokking has been thought of as largely a phenomenon of relatively shallow models, it has also been observed in deep neural networks and non-neural models and is the subject of active research.[6][7][8][9]
One potential explanation is that weight decay (a component of the loss function that penalizes higher values of the neural network parameters, also called regularization) slightly favors the general solution that involves lower weight values, but that is also harder to find. According to Neel Nanda, the process of learning the general solution may be gradual, even though the transition to the general solution occurs more suddenly later.[1]
Recent theories[10][11] have hypothesized that grokking occurs when neural networks transition from a "lazy training"[12] regime, where the weights do not deviate far from initialization, to a "rich" regime, where weights abruptly begin to move in task-relevant directions. Follow-up empirical and theoretical work[13] has accumulated evidence in support of this perspective, and it offers a unifying view of earlier work, as the transition from lazy to rich training dynamics is known to arise from properties of adaptive optimizers,[14] weight decay,[15] initial parameter weight norm,[8] and more.
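The setting in which grokking is typically reported, small algorithmic datasets such as modular addition, is simple enough to sketch. The PyTorch code below is a minimal illustration, not any paper's exact setup: the architecture, learning rate and weight-decay strength are hypothetical choices. The signature of grokking would be the train accuracy saturating early while the test accuracy jumps only many thousands of steps later.

```python
import torch
import torch.nn as nn

P = 97                                    # learn (a + b) mod P from examples
pairs = torch.cartesian_prod(torch.arange(P), torch.arange(P))
labels = (pairs[:, 0] + pairs[:, 1]) % P
perm = torch.randperm(len(pairs))
train_idx, test_idx = perm[: len(perm) // 2], perm[len(perm) // 2:]

model = nn.Sequential(                    # small MLP over embedded operands
    nn.Embedding(P, 64), nn.Flatten(),
    nn.Linear(128, 256), nn.ReLU(),
    nn.Linear(256, P),
)
# Strong weight decay: the regularization pressure discussed above.
opt = torch.optim.AdamW(model.parameters(), lr=1e-3, weight_decay=1.0)
loss_fn = nn.CrossEntropyLoss()

def accuracy(idx):
    with torch.no_grad():
        return (model(pairs[idx]).argmax(-1) == labels[idx]).float().mean().item()

for step in range(50_000):
    opt.zero_grad()
    loss = loss_fn(model(pairs[train_idx]), labels[train_idx])
    loss.backward()
    opt.step()
    if step % 1_000 == 0:
        # Watch for train accuracy ~1.0 long before test accuracy moves.
        print(step, accuracy(train_idx), accuracy(test_idx))
```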
|
https://en.wikipedia.org/wiki/Grokking_(machine_learning)
|
This is a list of notable lexer generators and parser generators for various language classes.
Regular languages are a category of languages (sometimes termed Chomsky Type 3) which can be matched by a state machine (more specifically, by a deterministic finite automaton or a nondeterministic finite automaton) constructed from a regular expression. In particular, a regular language can match constructs like "A follows B", "Either A or B", "A, followed by zero or more instances of B", but cannot match constructs which require consistency between non-adjacent elements, such as "some instances of A followed by the same number of instances of B", and also cannot express the concept of recursive "nesting" ("every A is eventually followed by a matching B"). A classic example of a problem which a regular grammar cannot handle is the question of whether a given string contains correctly nested parentheses. (This is typically handled by a Chomsky Type 2 grammar, also termed a context-free grammar.)
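The parenthesis example can be made concrete in a few lines. A regular expression can check local properties of a string, such as its alphabet, but deciding correct nesting requires memory beyond a fixed set of states; a single counter suffices. A small illustrative Python contrast, for strings over the alphabet { (, ) }:

```python
import re

# A regular expression can only check *local* structure, e.g. that the
# string consists of parentheses at all:
only_parens = re.compile(r"^[()]*$")

def balanced(s):
    """Correct nesting needs extra memory: here, a single counter."""
    depth = 0
    for ch in s:
        depth += 1 if ch == "(" else -1
        if depth < 0:              # a ')' with no matching '(' before it
            return False
    return depth == 0

for s in ["(()())", "(()", ")("]:
    print(s, bool(only_parens.match(s)), balanced(s))
```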
Context-free languages are a category of languages (sometimes termed Chomsky Type 2) which can be matched by a sequence of replacement rules, each of which essentially maps each non-terminal element to a sequence of terminal elements and/or other nonterminal elements. Grammars of this type can match anything that can be matched by a regular grammar, and furthermore can handle the concept of recursive "nesting" ("every A is eventually followed by a matching B"), such as the question of whether a given string contains correctly nested parentheses. The rules of context-free grammars are purely local, however, and therefore cannot handle questions that require non-local analysis, such as "Does a declaration exist for every variable that is used in a function?". Doing so technically would require a more sophisticated grammar, like a Chomsky Type 1 grammar, also termed a context-sensitive grammar. However, parser generators for context-free grammars often support the ability for user-written code to introduce limited amounts of context-sensitivity. (For example, upon encountering a variable declaration, user-written code could save the name and type of the variable into an external data structure, so that these could be checked against later variable references detected by the parser; a sketch of this idea follows below.)
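A toy illustration of that hook (the one-keyword-per-line language below is invented for the example): the line structure is handled by purely context-free rules, while an ordinary symbol table maintained by user code enforces the context-sensitive requirement that every variable be declared before use.

```python
# Toy language, one statement per line:
#   var x     -- declaration
#   use x     -- reference; must name a previously declared variable
def parse(program):
    declared = set()                   # user-maintained symbol table
    for lineno, line in enumerate(program.splitlines(), 1):
        keyword, name = line.split()   # the "grammar": KEYWORD IDENT
        if keyword == "var":
            declared.add(name)         # semantic action on a declaration
        elif keyword == "use":
            if name not in declared:   # context-sensitive check, outside the CFG
                raise SyntaxError(f"line {lineno}: undeclared variable {name!r}")
        else:
            raise SyntaxError(f"line {lineno}: unknown keyword {keyword!r}")
    return "ok"

print(parse("var x\nuse x"))   # -> ok
parse("use y")                 # raises SyntaxError: undeclared variable 'y'
```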
The deterministic context-free languages are a proper subset of the context-free languages which can be efficiently parsed by deterministic pushdown automata.
This table compares parser generators with parsing expression grammars and deterministic Boolean grammars.
This table compares parser generator languages with a general context-free grammar, a conjunctive grammar, or a Boolean grammar.
This table compares parser generators with context-sensitive grammars.
|
https://en.wikipedia.org/wiki/Comparison_of_parser_generators
|
In mathematics, the canonical bundle of a non-singular algebraic variety $V$ of dimension $n$ over a field is the line bundle $\Omega^n = \omega$, which is the $n$th exterior power of the cotangent bundle $\Omega$ on $V$.
Over the complex numbers, it is the determinant bundle of the holomorphic cotangent bundle $T^*V$. Equivalently, it is the line bundle of holomorphic $n$-forms on $V$.
This is the dualising object for Serre duality on $V$. It may equally well be considered as an invertible sheaf.
The canonical class is the divisor class of a Cartier divisor $K$ on $V$ giving rise to the canonical bundle; it is an equivalence class for linear equivalence on $V$, and any divisor in it may be called a canonical divisor. An anticanonical divisor is any divisor $-K$ with $K$ canonical.
The anticanonical bundle is the corresponding inverse bundle $\omega^{-1}$. When the anticanonical bundle of $V$ is ample, $V$ is called a Fano variety.
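For orientation, a standard example (a well-known computation, not drawn from the surrounding text): for projective space the Euler sequence gives
$$\omega_{\mathbb{P}^n} \cong \mathcal{O}_{\mathbb{P}^n}(-n-1), \qquad \omega_{\mathbb{P}^n}^{-1} \cong \mathcal{O}_{\mathbb{P}^n}(n+1),$$
and since $\mathcal{O}_{\mathbb{P}^n}(n+1)$ is ample, $\mathbb{P}^n$ is a Fano variety.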
Suppose that $X$ is a smooth variety and that $D$ is a smooth divisor on $X$. The adjunction formula relates the canonical bundles of $X$ and $D$. It is a natural isomorphism
$$\omega_D = i^*\left(\omega_X \otimes \mathcal{O}_X(D)\right),$$
where $i : D \to X$ is the inclusion.
In terms of canonical classes, it is
$$K_D = (K_X + D)|_D.$$
This formula is one of the most powerful formulas in algebraic geometry. An important tool of modern birational geometry is inversion of adjunction, which allows one to deduce results about the singularities of $X$ from the singularities of $D$.
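A standard illustration of the formula's use: for a smooth plane curve $C \subset \mathbb{P}^2$ of degree $d$, adjunction gives
$$K_C = \left(K_{\mathbb{P}^2} + C\right)\big|_C = \mathcal{O}_{\mathbb{P}^2}(d-3)\big|_C,$$
so $\deg K_C = d(d-3) = 2g - 2$, recovering the genus formula $g = \tfrac{1}{2}(d-1)(d-2)$.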
Let $X$ be a normal surface. A genus $g$ fibration $f : X \to B$ of $X$ is a proper flat morphism $f$ to a smooth curve such that $f_*\mathcal{O}_X \cong \mathcal{O}_B$ and all fibers of $f$ have arithmetic genus $g$. If $X$ is a smooth projective surface and the fibers of $f$ do not contain rational curves of self-intersection $-1$, then the fibration is called minimal. For example, if $X$ admits a (minimal) genus 0 fibration, then $X$ is birationally ruled, that is, birational to $\mathbb{P}^1 \times B$.
For a minimal genus 1 fibration (also called an elliptic fibration) $f : X \to B$, all but finitely many fibers of $f$ are geometrically integral and all fibers are geometrically connected (by Zariski's connectedness theorem). In particular, for a fiber $F = \sum_{i=1}^n a_i E_i$ of $f$, we have that $F \cdot E_i = K_X \cdot E_i = 0$, where $K_X$ is a canonical divisor of $X$; so, setting $m = \gcd(a_i)$, the fiber $F$ is geometrically integral if $m = 1$ and is a multiple fiber (with $m > 1$) otherwise.
Consider a minimal genus 1 fibration $f : X \to B$. Let $F_1, \dots, F_r$ be the finitely many fibers that are not geometrically integral and write $F_i = m_i F_i'$, where $m_i > 1$ is the greatest common divisor of the coefficients of the expansion of $F_i$ into integral components; these are called multiple fibers. By cohomology and base change one has that $R^1 f_* \mathcal{O}_X = \mathcal{L} \oplus \mathcal{T}$, where $\mathcal{L}$ is an invertible sheaf and $\mathcal{T}$ is a torsion sheaf ($\mathcal{T}$ is supported on the points $b \in B$ such that $h^0(X_b, \mathcal{O}_{X_b}) > 1$). Then, one has that
$$\omega_X \cong f^*\left(\omega_B \otimes \mathcal{L}^{-1}\right) \otimes \mathcal{O}_X\left(\sum_{i=1}^r a_i F_i'\right),$$
where $0 \leq a_i < m_i$ for each $i$ and $\deg\left(\mathcal{L}^{-1}\right) = \chi(\mathcal{O}_X) + \operatorname{length}(\mathcal{T})$.[1]
For example, for the minimal genus 1 fibration of a (quasi-)bielliptic surface induced by the Albanese morphism, the canonical bundle formula gives that this fibration has no multiple fibers. A similar deduction can be made for any minimal genus 1 fibration of a K3 surface. On the other hand, a minimal genus 1 fibration of an Enriques surface will always admit multiple fibers, and so such a surface will not admit a section.
On a singular variety $X$, there are several ways to define the canonical divisor. If the variety is normal, it is smooth in codimension one; in particular, we can define the canonical divisor on the smooth locus. This gives us a unique Weil divisor class on $X$. It is this class, denoted by $K_X$, that is referred to as the canonical divisor on $X$.
Alternately, again on a normal variety $X$, one can consider $h^{-d}(\omega_X^{\bullet})$, the $(-d)$th cohomology of the normalized dualizing complex of $X$. This sheaf corresponds to a Weil divisor class, which is equal to the divisor class $K_X$ defined above. In the absence of the normality hypothesis, the same result holds if $X$ is S2 and Gorenstein in dimension one.
If the canonical class is effective, then it determines a rational map from $V$ into projective space. This map is called the canonical map. The rational map determined by the $n$th multiple of the canonical class is the $n$-canonical map. The $n$-canonical map sends $V$ into a projective space of dimension one less than the dimension of the space of global sections of the $n$th multiple of the canonical class. $n$-canonical maps may have base points, meaning that they are not defined everywhere (i.e., they may not be a morphism of varieties). They may have positive-dimensional fibers, and even if they have zero-dimensional fibers, they need not be local analytic isomorphisms.
The best studied case is that of curves. Here, the canonical bundle is the same as the (holomorphic) cotangent bundle. A global section of the canonical bundle is therefore the same as an everywhere-regular differential form. Classically, these were called differentials of the first kind. The degree of the canonical class is $2g - 2$ for a curve of genus $g$.[2]
Suppose that $C$ is a smooth algebraic curve of genus $g$. If $g$ is zero, then $C$ is $\mathbb{P}^1$, and the canonical class is the class of $-2P$, where $P$ is any point of $C$. This follows from the calculus formula $d(1/t) = -dt/t^2$, for example, a meromorphic differential with double pole at the origin on the Riemann sphere. In particular, $K_C$ and its multiples are not effective. If $g$ is one, then $C$ is an elliptic curve, and $K_C$ is the trivial bundle. The global sections of the trivial bundle form a one-dimensional vector space, so the $n$-canonical map for any $n$ is the map to a point.
If $C$ has genus two or more, then the canonical class is big, so the image of any $n$-canonical map is a curve. The image of the 1-canonical map is called a canonical curve. A canonical curve of genus $g$ always sits in a projective space of dimension $g - 1$.[3] When $C$ is a hyperelliptic curve, the canonical curve is a rational normal curve, and $C$ a double cover of its canonical curve. For example, if $P$ is a polynomial of degree 6 (without repeated roots) then
$$y^2 = P(x)$$
is an affine curve representation of a genus 2 curve, necessarily hyperelliptic, and a basis of the differentials of the first kind is given in the same notation by
$$\frac{dx}{y}, \quad \frac{x\,dx}{y}.$$
This means that the canonical map is given by homogeneous coordinates $[1 : x]$ as a morphism to the projective line. The rational normal curve for higher-genus hyperelliptic curves arises in the same way with higher-power monomials in $x$.
Otherwise, for non-hyperelliptic $C$, which means $g$ is at least 3, the morphism is an isomorphism of $C$ with its image, which has degree $2g - 2$. Thus for $g = 3$ the canonical curves (non-hyperelliptic case) are quartic plane curves. All non-singular plane quartics arise in this way. There is explicit information for the case $g = 4$, when a canonical curve is an intersection of a quadric and a cubic surface; and for $g = 5$, when it is an intersection of three quadrics.[3] There is a converse, which is a corollary to the Riemann–Roch theorem: a non-singular curve $C$ of genus $g$ embedded in projective space of dimension $g - 1$ as a linearly normal curve of degree $2g - 2$ is a canonical curve, provided its linear span is the whole space. In fact the relationship between canonical curves $C$ (in the non-hyperelliptic case of $g$ at least 3), Riemann–Roch, and the theory of special divisors is rather close. Effective divisors $D$ on $C$ consisting of distinct points have a linear span in the canonical embedding with dimension directly related to that of the linear system in which they move; and with some more discussion this applies also to the case of points with multiplicities.[4][5]
More refined information is available for larger values of $g$, but in these cases canonical curves are not generally complete intersections, and the description requires more consideration of commutative algebra. The field started with Max Noether's theorem: the dimension of the space of quadrics passing through $C$ as embedded as canonical curve is $(g - 2)(g - 3)/2$.[6] Petri's theorem, often cited under this name and published in 1923 by Karl Petri (1881–1955), states that for $g$ at least 4 the homogeneous ideal defining the canonical curve is generated by its elements of degree 2, except for the cases of (a) trigonal curves and (b) non-singular plane quintics when $g = 6$. In the exceptional cases, the ideal is generated by the elements of degrees 2 and 3. Historically speaking, this result was largely known before Petri, and has been called the theorem of Babbage–Chisini–Enriques (for Dennis Babbage who completed the proof, Oscar Chisini and Federigo Enriques). The terminology is confused, since the result is also called the Noether–Enriques theorem. Outside the hyperelliptic cases, Noether proved that (in modern language) the canonical bundle is normally generated: the symmetric powers of the space of sections of the canonical bundle map onto the sections of its tensor powers.[7][8] This implies for instance the generation of the quadratic differentials on such curves by the differentials of the first kind; and this has consequences for the local Torelli theorem.[9] Petri's work actually provided explicit quadratic and cubic generators of the ideal, showing that apart from the exceptions the cubics could be expressed in terms of the quadratics. In the exceptional cases the intersection of the quadrics through the canonical curve is respectively a ruled surface and a Veronese surface.
These classical results were proved over the complex numbers, but modern discussion shows that the techniques work over fields of any characteristic.[10]
The canonical ring of $V$ is the graded ring
$$R = \bigoplus_{d=0}^{\infty} H^0(V, dK_V).$$
If the canonical class of $V$ is an ample line bundle, then the canonical ring is the homogeneous coordinate ring of the image of the canonical map. This can be true even when the canonical class of $V$ is not ample. For instance, if $V$ is a hyperelliptic curve, then the canonical ring is again the homogeneous coordinate ring of the image of the canonical map. In general, if the ring above is finitely generated, then it is elementary to see that it is the homogeneous coordinate ring of the image of a $k$-canonical map, where $k$ is any sufficiently divisible positive integer.
The minimal model program proposed that the canonical ring of every smooth or mildly singular projective variety was finitely generated. In particular, this was known to imply the existence of a canonical model, a particular birational model of $V$ with mild singularities that could be constructed by blowing down $V$. When the canonical ring is finitely generated, the canonical model is Proj of the canonical ring. If the canonical ring is not finitely generated, then Proj $R$ is not a variety, and so it cannot be birational to $V$; in particular, $V$ admits no canonical model. One can show that if the canonical divisor $K$ of $V$ is a nef divisor and the self-intersection of $K$ is greater than zero, then $V$ will admit a canonical model (more generally, this is true for normal complete Gorenstein algebraic spaces[11]).[12]
A fundamental theorem of Birkar–Cascini–Hacon–McKernan from 2006[13] is that the canonical ring of a smooth or mildly singular projective algebraic variety is finitely generated.
The Kodaira dimension of $V$ is the dimension of the canonical ring minus one. Here the dimension of the canonical ring may be taken to mean Krull dimension or transcendence degree.
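For example, for a smooth projective curve $C$ of genus $g$ (standard facts, with the usual convention that the Kodaira dimension is $-\infty$ when all plurigenera vanish):
$$\kappa(C) = \begin{cases} -\infty & g = 0, \\ 0 & g = 1, \\ 1 & g \geq 2. \end{cases}$$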
|
https://en.wikipedia.org/wiki/Canonical_class
|
HarmonyOS (HMOS) (Chinese: 鸿蒙; pinyin: Hóngméng; trans. "Vast Mist") is a distributed operating system developed by Huawei for smartphones, tablets, smart TVs, smart watches, personal computers and other smart devices. It has a microkernel design with a single framework: the operating system selects suitable kernels from the abstraction layer for devices that use diverse resources.[5][6][7]
HarmonyOS was officially launched by Huawei, and first used in Honor smart TVs, in August 2019.[8][9] It was later used in Huawei wireless routers and IoT devices in 2020, followed by smartphones, tablets and smartwatches from June 2021.[10]
From 2019 to 2024, versions 1 to 4 of the operating system were based on code from the Android Open Source Project (AOSP) and the Linux kernel; many Android apps could be sideloaded on HarmonyOS.[11]
The next iteration of HarmonyOS became known as HarmonyOS NEXT. HarmonyOS NEXT was announced on August 4, 2023, and officially launched on October 22, 2024.[12] It replaced the OpenHarmony multi-kernel system with its own HarmonyOS microkernel at its core and removed all Android code. Since version 5, HarmonyOS only supports apps in its native "App" format.[13][14]
In May 2025, the first notebook with the HarmonyOS operating system was launched by Huawei, featuring "HarmonyOS PC", i.e. HarmonyOS NEXT 5 for the personal computer form factor.[15]
HarmonyOS is designed with a layered architecture consisting of four layers: the kernel layer at the bottom provides the upper three layers (the system service layer, framework layer and application layer) with basic kernel capabilities, such as process and thread management, memory management, file system, network management, and peripheral management.[16]
The kernel layer incorporates a subsystem that accommodates the microkernel-based HarmonyOS kernel as a Rich Execution Environment (REE), catering to diverse smart devices. Depending on the device type, different kernels can be selected: lightweight systems are chosen for low-power devices such as watches and IoT devices to execute lightweight HarmonyOS apps, whereas large-memory devices such as mobile phones, tablets, and PCs use the standard system. The dual-app framework was replaced with a single-app framework in HarmonyOS NEXT, which supports only native HarmonyOS apps in the APP format.[17]
The system includes a communication base called DSoftBus for integrating physically separate devices into a virtual Super Device, allowing one device to control others and devices to share data through distributed communication capabilities.[18][19][20] To address security concerns arising from heterogeneous devices, the system provides a hardware-based Trusted Execution Environment (TEE) microkernel to prevent leakage of sensitive personal data during storage and processing.[21]
It supports several forms of apps, including native apps that can be installed from AppGallery, installation-free Quick apps, and lightweight Meta Services accessible by users on various devices.[22][23][24][25]
When it launched the operating system, Huawei stated that HarmonyOS was planned to become a microkernel-based, distributed OS, completely different from Android and iOS in its target market: the Internet of things.[26] A Huawei spokesperson subsequently stated that HarmonyOS supported multiple kernels and used a Linux kernel if a device had a large amount of RAM, and that the company had taken advantage of a large number of third-party open-source resources, including the Linux kernel with POSIX APIs on the OpenHarmony base, as a foundation to accelerate the development of its unified system stack into a future-proof, microkernel-based, distributed OS running on multiple devices.[27][28][29]
At its launch as an operating system for smartphones in 2021, however, HarmonyOS was described by Ars Technica as a "rebranded version of Android and EMUI" with nearly "identical code bases".[30] Following the release of the HarmonyOS 2.0 beta, Ars Technica and XDA Developers suggested that the smartphone version of the OS had been forked from Android 10. Ars Technica alleged that it resembled the existing EMUI software used on Huawei devices, but with all references to "Android" replaced by "HarmonyOS". It was also noted that the DevEco Studio software, based on JetBrains' open-source IntelliJ IDEA IDE, "shared components and tool chains" with Android Studio.
When testing the new MatePad Pro in June 2021, Android Authority and The Verge similarly observed similarities in behavior, including that it was possible to install apps from Android APK files on the HarmonyOS-based tablet and to run the Android 10 easter egg app, reinforcing the earlier reports.[27][29]
Reports of an in-house operating system being developed by Huawei date back as far as 2012 in R&D stages, with the HarmonyOS NEXT system stack going back as early as 2015.[31][32] These reports intensified during the Sino-American trade war, after the United States Department of Commerce added Huawei to its Entity List in May 2019 under an indictment that it knowingly exported goods, technology and services of U.S. origin to Iran in violation of sanctions. This prohibited U.S.-based companies from doing business with Huawei without first obtaining a license from the government.[33][34][35][36][37] Huawei executive Yu Chengdong described an in-house platform as a "plan B" in case the company was prevented from using Android on future smartphone products due to the sanctions.[38][39][40]
Prior to its unveiling, the OS was speculated to be a mobile operating system that could replace Android on future Huawei devices. In June 2019, a Huawei executive told Reuters that the OS was under testing in China and could be ready "in months", but by July 2019 some Huawei executives described the OS as an embedded operating system designed for IoT hardware, walking back the earlier statements that it would be a mobile operating system.[41]
Some media outlets reported that this OS, referred to as "Hongmeng", could be released in China in August or September 2019, with a worldwide release in the second quarter of 2020.[42][43] On 24 May 2019, Huawei registered "Hongmeng" as a trademark in China.[44] The name "Hongmeng" (Chinese: 鸿蒙; lit. 'Vast Mist') comes from Chinese mythology, symbolizing the primordial chaos before the creation of the world.[45] The same day, Huawei registered trademarks for "Ark OS" and variants with the European Union Intellectual Property Office.[46] In July 2019, it was reported that Huawei had also registered trademarks for the word "Harmony" for desktop and mobile operating system software, indicating either a different name or a component of the OS.[47]
Early versions of HarmonyOS, starting from version 1.0, employed a "kernel abstraction layer" (KAL) subsystem to support a multi-kernel architecture.[48] This allowed developers to choose different operating system kernels based on the resources available on each device. For low-powered devices such as wearables and Huawei's GT smartwatches, HarmonyOS utilized the LiteOS kernel instead of Linux. It also integrated the LiteOS SDK for TV applications and ensured compatibility with Android apps through the Ark Compiler and a dual-framework approach.[49] HarmonyOS 1.0's original L0-L2 source code branch was contributed to the OpenAtom Foundation to accelerate system development.[50]
HarmonyOS 2.0 introduced a modified version of OpenHarmony's L3-L5 source code, expanding compatibility to smartphones and tablets. Underneath the kernel abstraction layer (KAL) subsystem, HarmonyOS used the Linux kernel and the AOSP codebase. This setup enabled Android APK files and App Bundles (AAB) to run natively, as on older Huawei EMUI-based devices, without needing root access.[51][52]
Additionally, HarmonyOS supported native apps packaged for Huawei Mobile Services through the Ark Compiler, leveraging the OpenHarmony framework within the dual-framework structure at the system service layer. This configuration allowed the operating system to run apps developed with the restricted HarmonyOS APIs.[53]
This arrangement lasted until the release of HarmonyOS 5.0.0, known as HarmonyOS NEXT 5, which uses its own microkernel within a single framework, replacing the dual-framework approach and the AOSP codebase on Huawei's HarmonyOS devices.[14][54]
On 9 August 2019, three months after the Entity List ban, Huawei publicly unveiled HarmonyOS, which it said it had been working on since 2012, at its inaugural developers' conference in Dongguan. Huawei described HarmonyOS as a free, microkernel-based distributed operating system for various types of hardware. The company focused primarily on IoT devices, including smart TVs, wearable devices, and in-car entertainment systems, and did not explicitly position HarmonyOS as a mobile OS.[55][56][57]
HarmonyOS 2.0 launched at the Huawei Developer Conference on 10 September 2020, where Huawei announced that it intended to ship the operating system on its smartphones in 2021.[58] The first developer beta of HarmonyOS 2.0 followed on 16 December 2020. Huawei also released the DevEco Studio IDE, which is based on IntelliJ IDEA, and a cloud emulator for developers in early access.[59][60]
Huawei officially released HarmonyOS 2.0 and launched new devices shipping with the OS in June 2021, and gradually began rolling out system upgrades to users of its older phones.[61][62][29]
On July 27, 2022, Huawei launched HarmonyOS 3, providing an improved experience across multiple devices such as smartphones, tablets, printers, cars and TVs. It also launched Petal Chuxing, a ride-hailing app running on the new version of the operating system.[63][64][65][66]
On 29 June 2023, Huawei launched the first developer beta of HarmonyOS 4.[67] On 4 August 2023, it officially announced and released HarmonyOS 4 as a public beta.[68] On 9 August, it rolled the operating system out to 34 existing Huawei smartphone and tablet models, albeit as a public beta build.[69] Alongside HarmonyOS 4, Huawei also announced HarmonyOS NEXT, a "pure" HarmonyOS version without Android libraries, and therefore incompatible with Android apps after the software convergence.[70]
On 18 January 2024, Huawei announced the commercialisation of HarmonyOS NEXT. A stable "Galaxy" version based on OpenHarmony 5.0 (API 12) would roll out in Q4 2024, following a Q2 developer beta based on OpenHarmony 4.1 (API 11) and the public developer release of HarmonyOS NEXT Developer Preview 1, which had been in the hands of closed cooperative developer partners since its debut in August 2023. The upcoming HarmonyOS 5 would replace the multi-kernel, dual-framework system with a unified system stack and a unified app ecosystem for commercial Huawei consumer devices.[71][72]
On March 11, 2024, Huawei opened early recruitment for a new test experience version of the HarmonyOS 4 firmware update, which included performance improvements and a cleaner user experience. HarmonyOS firmware version 4.0.0.200 (C00E200R2P7) was gradually rolled out from March 12, 2024.[73][74]
On April 11, 2024, it was reported that Huawei had opened registration for, and rolled out, the public beta of HarmonyOS 4.2 for 24 devices. On the same day, the company announced that its upcoming HarmonyOS 5.0 "Galaxy Edition", built on the HarmonyOS NEXT system, would first be released as an open beta program for developers and users at its annual Huawei Developer Conference in June 2024, before a Q4 commercial consumer release alongside the upcoming Mate 70 flagship, among other ecosystem devices.[75][76]
On April 18, 2024, the Huawei Pura 70 flagship series received the HarmonyOS 4.2.0.137 update shortly after release.[77]
On April 17, 2024, Huawei's chairman Eric Xu revealed plans to push the native HarmonyOS NEXT system, the next generation of HarmonyOS, in global markets as a company focus. The announcement, made to Chinese and international press at Huawei's Analyst Summit 2024 (HAS 2024), was reported by various international outlets on April 22, 2024.[78][79]
On May 17, 2024, during the HarmonyOS Developer Day (HDD) event, Huawei announced that the HarmonyOS upgrade built on the new HarmonyOS NEXT base would enter commercial use by September, with over 800 million devices and 4,000 apps in use, toward a target of 5,000 apps at launch.[80][81]
On June 21, 2024, during the Huawei Developer Conference (HDC) keynote, Huawei announced the HarmonyOS NEXT developer beta for registered developers and 3,000 pioneer users on a limited set of models, including the Huawei Mate 60 series, Huawei Mate X5 series and the Huawei MatePad Pro 13.2 tablet. The consumer beta was expected in August 2024, with the stable build to follow in Q4 2024.[82] During the conference, Huawei formally announced its in-house Cangjie programming language for the new native system, alongside the Developer Preview beta recruitment program.[83]
On October 22, 2024, at the Huawei HarmonyOS NEXT event, the "pure-blood" HarmonyOS NEXT 5 was officially revealed, transitioning the brand to HarmonyOS 5 (incorporated as version 5.0.0) for public beta, with expansion planned for 2025 and flagship devices shipping stable builds from the factory in November.[84]
The HarmonyOS interface was overhauled with the native HarmonyOS Design system under a "harmonious aesthetics" philosophy,[85] led by ang Zhiyan, Chief UX Designer at Huawei Consumer BG, for the native launcher. It emphasizes 'vivid' system colours and reflective 'spatial' visuals of light, blur and glow, with a glassmorphism and neumorphism soft UI that sits between skeuomorphism and flat design. In addition to standard folders, which require tapping to display their contents, folders can be enlarged to always show their contents directly on the home screen, without text labels.[86]
Apps can support "snippets", which expose a portion of an app's functionality (such as a media player's controls, or a weather forecast) via an iOS-style pop-up window, invoked by swiping left after holding the app icon in the context menu, and which can be pinned to the home screen as a widget. Apps and services can provide cards; as of HarmonyOS 3.0, cards can also be displayed as widgets in different sizes and shapes to adapt to the home screen layout, and can be stacked.[87][88]
The user interface font of HarmonyOS, including the HarmonyOS NEXT base, is HarmonyOS Sans, designed to be easy to read, distinctive, and universal. The system font has been used throughout the operating system since the Android-based EMUI 12, covering both third-party HarmonyOS apps and former Android apps.[89]
Unlike installation-free Meta Services, traditional apps require installation. They are available to users through Huawei AppGallery, which serves as the application store for HarmonyOS-native apps.[90][91] HarmonyOS-native apps have access to capabilities such as distributed communications and cards.[92][93]
Similar to applets, Quick Apps are single-page apps written using JavaScript and CSS, with a code volume about one fifth that of a traditional app.[94][95] They are developed against industry standards formulated by the Quick App Alliance, which comprises the mainstream mobile phone manufacturers in China.[96][97]
Quick Apps are available to users through the AppGallery, Quick App Center, Huawei Assistant, and other channels on supported devices. They are installation-free, update automatically, and can have shortcuts added to the home screen for ease of access.[96][98]
Managed and distributed via Huawei Ability Gallery, Meta Services (formerly Atomic Services) are lightweight and consist of one or more HarmonyOS Ability Packages (HAPs) that implement specific convenience services, providing users with dynamic content and functionality.[99] They are accessible via the Service Center on devices, and are presented as cards that can be added to a favorites list or pinned to the home screen.
Meta Services are installation-free, since the accompanying code is downloaded in the background.[100][99][101] They can also be synchronized across multiple devices; for example, after a user hails a taxi on a mobile phone, the driver's location can be updated on the user's watch in real time.[102]
Note: Meta Services (a component of HarmonyOS) should not be confused with products and services from Meta Platforms (the parent company of Facebook).
The Service Collaboration Kit (SCK) provides users with cross-device interaction, allowing them to use the camera, scanning, and gallery functions of other devices. For example, tablets or 2-in-1 laptops can utilize these features from a connected smartphone. To do so, both devices must run HarmonyOS NEXT, be logged into the same Huawei account, and have WLAN and Bluetooth enabled.[103]
Harmony Intelligence allows users to deploy AI-based applications on HarmonyOS, using the PanGu 5.0 LLM and its embedded variants, alongside new Celia capabilities, the HiAI Foundation Kit, MindSpore Lite Kit, Neural Network Runtime Kit, and Computer Vision. These features improve performance, reduce power consumption, and enable efficient on-device AI processing on Kirin chips.[104][105][106][107][108]
HarmonyOS supports cross-platform interactions between supported devices via the "Super Device" interface; devices are paired on a "radar" screen by dragging their icons to the centre.[109][110][111][112] Examples of Super Device features include playing back media stored on a smartphone through a paired PC, smart TV or speakers; sharing PC screen recordings back to a smartphone; running multiple phone apps in PC windows; sharing files between a paired smartphone and PC; and sharing application states between the paired devices.[113][114][115]
Incorporated into HarmonyOS 4, NearLink (previously known as SparkLink) is a set of standards that combines the strengths of traditional wireless technologies like Bluetooth and Wi-Fi while emphasizing improved performance in areas like response time, energy efficiency, signal range, and security. It consists of two access modes: SparkLink Low Energy (SLE) and SparkLink Basic (SLB). SLE is designed for low-power, low-latency, high-reliability applications, with a data transmission rate reportedly up to six times that of Bluetooth; SLB is tailored for high-speed, high-capacity, high-precision applications, with a data transmission rate reportedly around twice that of Wi-Fi.[116][117][118][119]
The HarmonyOS platform was not designed for a single device, but as a distributed operating system for devices with memory ranging from 128 KB to over 4 GB. The hardware requirements are therefore flexible: the operating system may need as little as 128 KB of memory on small smart terminal devices.[120][121]
Huawei stated that HarmonyOS would initially be used on devices targeting the Chinese market. The company's former subsidiary brand, Honor, unveiled the Honor Vision line of smart TVs as the first consumer electronics devices to run HarmonyOS in August 2019.[122][57] The HarmonyOS 2.0 beta launched on 16 December 2020 and supported the P30 series, P40 series, Mate 30 series, Mate 40 series, P50 series, and the MatePad Pro.[123]
Stable HarmonyOS 2.0 was released for smartphones and tablets as updates for the P40 and Mate X2 in June 2021. New Huawei Watch, MatePad Pro and PixLab X1 desktop printer models shipping with HarmonyOS were also unveiled at the time.[62][29][124] By October 2021, HarmonyOS 2.0 had over 150 million users.[125][126]
DevEco Studio, the primary IDE for developing HarmonyOS apps, was released by Huawei on September 9, 2020; it is based on IntelliJ IDEA and Huawei's SmartAssist.[127] The IDE includes the DevEco Device Tool,[128] an integrated development tool for customizing HarmonyOS components, coding, compiling and visual debugging, similar to other third-party IDEs such as Visual Studio Code for Windows, Linux and macOS.[129]
Applications for HarmonyOS are mostly built using components of ArkUI, a declarative user interface framework. ArkUI elements adapt to various devices and include new interface rules that update automatically along with HarmonyOS updates.[130]
HarmonyOS uses App Pack files, suffixed with .app and also known as APP files, for the distribution of software via AppGallery. Each App Pack contains one or more HarmonyOS Ability Packages (HAP) holding the code for their abilities, resources, libraries, and a JSON file with configuration information.[131]
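As an illustration of this packaging, the sketch below lists the HAPs and their JSON configuration entries inside an App Pack. It assumes, as is common for modern app packages, that .app and .hap files are ZIP-based archives; the file name is hypothetical and the exact layout is defined by Huawei's tooling:

```python
import io
import zipfile

def list_haps(app_pack_path: str) -> None:
    """List HAP entries and their JSON config files inside an App Pack.

    Assumes (not confirmed by the HarmonyOS packaging spec) that the
    .app file and the .hap packages inside it are ZIP-based archives.
    """
    with zipfile.ZipFile(app_pack_path) as pack:
        for name in pack.namelist():
            if not name.endswith(".hap"):
                continue
            print("HAP:", name)
            # Read the nested HAP archive fully into memory, then open it.
            with zipfile.ZipFile(io.BytesIO(pack.read(name))) as hap:
                for entry in hap.namelist():
                    if entry.endswith((".json", ".json5")):
                        print("  config:", entry)

list_haps("example.app")  # hypothetical App Pack file name
```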
As a universal single IoT platform, HarmonyOS allows developers to write apps once and run them everywhere: on phones, tablets, personal computers, TVs, cars, smartwatches, single-board computers running OpenHarmony, and screen-less IoT devices such as smart speakers.[132]
As of October 2024, over 6.75 million registered developers had reportedly participated in developing HarmonyOS apps.[133]
On May 18, 2021, at a summit in Shanghai, Huawei revealed a plan to upgrade its HarmonyOS Connect brand with a standard badge, to help industrial partners produce, sell and operate products with third-party OEMs as part of the HarmonyOS system, its framework, and the Huawei Smart Life (formerly Huawei AI Life) app.
Allowing fast, low-cost connections, HarmonyOS-powered smart devices of different brands, such as speakers, fridges and cookers, can be connected and merged into a Super Device with a single touch of a smartphone, without the need to install apps. HiLink protocols additionally provide mesh and wireless router connectivity, letting platform-agnostic smart devices connect to HarmonyOS devices.[134]
HarmonyOS Connect sets the platform apart from traditional mobile and computing platforms, and from the company's previous ecosystem attempts with Android-based EMUI and LiteOS connectivity.[135]
On April 27, 2021, Huawei launched a smart cockpit solution powered by HarmonyOS for electric and autonomous cars, built on its Kirin line of system-on-chip (SoC) solutions. Huawei opened up APIs to help automobile OEMs, suppliers and ecosystem partners develop features that meet user requirements.
Huawei designed a modular SoC for cars that is pluggable and easy to upgrade, maintaining peak cockpit performance. With the scalable distributed OS, users would be able to upgrade the chipset much as one upgrades a component in an assembled desktop computer.[136]
On December 21, 2021, Huawei launched a new smart console brand, HarmonySpace, a specialized HarmonyOS vehicle operating system. Based on Huawei's "1+8" ecosystem, apps on smartphones and tablets can be connected to the car seamlessly with HarmonySpace, which also provides smartphone projection capability.[137][138]
On December 23, 2021, Huawei announced a new smart-selection car product, the AITO M5, a medium-size SUV with the HarmonyOS ecosystem, refined through continuous AI learning optimization and over-the-air upgrades.[139] On July 4, 2022, Huawei officially launched the AITO smart-selection car product, to be shipped to customers in August 2022. During the launch, the company received 10,000 pre-orders in two hours for its M7 model.[140]
Huawei MagLink, built on the interconnected cockpit solution, lets drivers bring mobile phone applications fully into the car. Through seamless HarmonyOS integration, it eliminates the need for drivers to use mobile phone navigation or to install phone holders, and it enables more built-in, accessible entertainment and information services. The integration of software and hardware technologies installed in the car aims to achieve "mobile whole-house intelligence".[141]
On 14 September 2021, Huawei announced the launch of MineHarmony OS, an operating system customized by Huawei from its OpenHarmony-based HarmonyOS for industrial use. MineHarmony is compatible with about 400 types of underground coal mining equipment, providing the equipment with a single interface to transmit and collect data for analysis. Wang Chenglu, president of Huawei's consumer business AI and smart full-scenario business department, indicated that the launch of MineHarmony OS signified that the HarmonyOS ecosystem had taken a step from B2C to B2B.[142][143][144]
On December 23, 2021, Yu Chengdong, CEO of Huawei Consumer Business Group, claimed that HarmonyOS had reached 300 million smartphones and other smart devices, including 200 million devices in the ecosystem and 100 million third-party consumer products from industry partners.[145]
Market research conducted in China by Strategy Analytics showed that HarmonyOS was the third-largest smartphone platform after Apple iOS and Google Android, reaching a record 4% market share in China during the first quarter of 2022, up from zero a year earlier. The increase followed the June 2021 launch of the operating system on smartphones.
The research claimed that in the first quarter of 2022 the platform outgrew rivals such as Android and Apple iOS from a low installed base of about 150 million smart devices, due in particular to strong support in China and the HarmonyOS software upgrades that Huawei made available for its older handset models and former sub-brands such as Honor.[146][147]
On August 8, 2022, after the soft launch of HarmonyOS 3, Sina Finance (part of Sina Corporation) and Huawei Central reported that the number of Huawei HarmonyOS Connect devices had exceeded 470 million units. By summer 2022, 14 OpenHarmony distributions had been launched.[148][149]
In the third quarter of 2023, HarmonyOS captured a 3% share of the global smartphone market and 13% within China, despite Huawei's devices being limited to LTE at the time.[150] At the launch of HarmonyOS 4 in August 2023, the operating system had been integrated into over 700 million devices. By January 18, 2024, during Huawei's HarmonyOS Ecology Conference in China, this number had risen to over 800 million devices, as reported by Huawei.[151][152]
In the first quarter of 2024, HarmonyOS reached a 4% market share globally and 17% in China, surpassing iOS to become the second-largest mobile platform domestically, as reported by Counterpoint Research on May 25, 2024.[153][154] During the HDC 2024 keynote on June 21, 2024, it was announced that HarmonyOS had reached 900 million active devices.[155]
On October 22, 2024, Huawei announced at its HarmonyOS NEXT 5 event that the HarmonyOS platform had 1 billion active users.[156]
In terms of architecture, HarmonyOS has a close relationship with OpenEuler, a community edition of EulerOS: the two share kernel technology, as revealed by Deng Taihua, president of Huawei's computing product line.[157] The sharing is reportedly to be strengthened in the areas of the distributed software bus, system security, the app framework, the device driver framework and a new programming language.[158]
OpenHarmony is an open-source version of HarmonyOS donated by Huawei to the OpenAtom Foundation, built around a LiteOS kernel descended from the original LiteOS operating system. It supports devices running a mini system, such as printers, speakers, smartwatches and other smart devices with as little as 128 KB of memory, or a standard system on devices with more than 128 MB.[159] The open-source operating system contains the basic capabilities of HarmonyOS and does not depend on the Android Open Source Project (AOSP) code.[160]
On August 4, 2023, at the Huawei Developer Conference 2023 (HDC), Huawei officially announced HarmonyOS NEXT, the next iteration of HarmonyOS, supporting only native APP apps via the Ark Compiler with Huawei Mobile Services (HMS) and ending support for Android APK apps.[161]
Built on a custom version of OpenHarmony, the proprietary HarmonyOS NEXT system has the HarmonyOS microkernel at its core with a single framework, departing from the common Linux kernel and aiming to replace the multi-kernel HarmonyOS.[14]
Among the first batch of over 200 developers, McDonald's and KFC in China became two of the first multinational food companies to adopt HarmonyOS NEXT.[162][163]
In May 2019, Huawei applied to register the trademark "Hongmeng" with the Chinese patent office CNIPA, but the application was rejected under Article 30 of the PRC Trade Mark Law, on the grounds that the trademark was similar to the graphic design of "CRM Hongmeng" and to the Chinese word "Hongmeng".[164]
Less than a week before Huawei launched HarmonyOS 2.0 and new devices, the Beijing Intellectual Property Court announced its first-instance judgement in May 2021, upholding CNIPA's decision on the grounds that the trademark was not sufficiently distinctive for its designated services.[165][166]
However, it was reported that the trademark had officially been transferred from Huizhou Qibei Technology to Huawei by the end of May 2021.[167]
On October 22, 2024, it was reported that Huawei had applied to register more than 400 HarmonyOS-related trademarks in China.[168]
https://en.wikipedia.org/wiki/HarmonyOS
"Further research is needed" (FRIN), "more research is needed" and other variants of similar phrases are commonly used inresearch papers. Theclichéis so common that it has attracted research, regulation and cultural commentary.
Some research journals have banned the phrase "more research is needed" on the grounds that it is redundant;[1] it is almost always true and fits almost any article, and so can be taken as understood.
A 2004 metareview by the Cochrane Collaboration of its own systematic medical reviews found that 93% of the reviews studied made indiscriminate FRIN-like statements, reducing their ability to guide future research. The presence of FRIN had no correlation with the strength of the evidence against the medical intervention: authors who thought a treatment was useless were just as likely to recommend researching it further.[2]
Indeed, authors may recommend "further research" when, given the existing evidence, further research would be extremely unlikely to be approved by an ethics committee.[3]
Studies finding that a treatment has no noticeable effects are sometimes greeted with statements that "more research is needed" by those convinced that the treatment is effective but its effect has not yet been found.[4] Since even the largest study can never rule out an infinitesimal effect, an effect can only ever be shown to be insignificant, not non-existent.[5] Similarly, Trish Greenhalgh, Professor of Primary Care Health Sciences at the University of Oxford, argues that FRIN is often used as a way in which a "[l]ack of hard evidence to support the original hypothesis gets reframed as evidence that investment efforts need to be redoubled", and as a way to avoid upsetting hopes and vested interests. She has also described FRIN as "an indicator that serious scholarly thinking on the topic has ceased", saying that "it is almost never the only logical conclusion that can be drawn from a set of negative, ambiguous, incomplete or contradictory data."[6]
Greenhalgh suggests that, because vague FRIN statements are an argument that "tomorrow's research investments should be pitched into precisely the same patch of long grass as yesterday's", funding should be refused to those making them. She and others argue that more thought and research is needed into methods for determining where more research is needed.[6][7]
Academic journal editors were banning unqualified FRIN statements as early as 1990, requiring more specific information such as what types of research were needed, and what questions they ought to address.[1] Researchers themselves have strongly recommended that research articles detail what research is needed.[8][2] This is conventional in some fields.[9][10] Other commentators suggest that articles would benefit from assessing the likely value of possible further research.[11]
Both the needfulness and needlessness of further research may be overlooked. The blobbogram leading this article is from a systematic review; it shows clinical trials of the use of corticosteroids to hasten lung development in pregnancies where a baby is likely to be born prematurely. Long after there was enough evidence to show that this treatment saved babies' lives, the evidence was not widely known, the treatment was not widely used, and further research was done into the same question. After the review made the evidence better known, the treatment was used more, preventing thousands of pre-term babies from dying of infant respiratory distress syndrome.[12]
However, when the treatment was rolled out in lower- and middle-income countries, early data suggested that more pre-term babies died. It was thought that this could be because of a higher risk of infection, which is more likely to kill a baby in places with poor medical care and more malnourished mothers.[12] The 2017 version of the review therefore said that there was "little need" for further research into the usefulness of the treatment in higher-income countries, but that further research was needed on optimal dosage and on how best to treat lower-income and higher-risk mothers.[13]
Further research was done and found that the treatment did actually benefit babies in lower-income countries, too. The December 2020 version of the review stated that the "evidence [that the treatment saves babies] is robust, regardless of resource setting (high, middle or low)" and that further research should focus on "specific understudied subgroups such as multiple pregnancies and other high-risk obstetric groups, and the risks and benefits in the very early or very late preterm periods".[14]
The idea that research papers always end with some variation of FRIN was described as an "old joke" in a 1999 epidemiology editorial.[8]
FRIN has been advocated as a position politicians should take on under-evidenced claims.[15] Requests for further research on questions relevant to political policy can lead to better-informed decisions, but FRIN statements have also been used in bad faith: for instance, to delay political decisions, or as a justification for ignoring existing research knowledge (as was done by nicotine companies). Policymakers may also not know of existing research; they seldom systematically search databases of research literature, preferring to use Google and ask colleagues for research papers.[16]
FRIN has been advocated as a motto for life, applicable everywhere except research papers;[4] it has been printed on T-shirts,[17] and satirized by the "Collectively Unconscious" blog, which reported that an article in the journal Science had concluded that "no further research is needed, at all, anywhere, ever".[18]
The webcomic xkcd has also used the phrase as a topic, for self-satire, and as a bathetic punchline.[19]
https://en.wikipedia.org/wiki/Further_research_is_needed
Digital cloning is an emerging technology, involving deep-learning algorithms, that allows one to manipulate currently existing audio, photos, and videos into hyper-realistic fakes.[1] One impact of such technology is that hyper-realistic videos and photos make it difficult for the human eye to distinguish what is real from what is fake.[2] Furthermore, with various companies making such technologies available to the public, they can bring various benefits as well as potential legal and ethical concerns.
Digital cloning can be categorized into audio-visual (AV), memory, personality, and consumer-behaviour cloning.[3] In AV cloning, a cloned digital version of a digital or non-digital original can be used, for example, to create a fake image, an avatar, or a fake video or audio of a person that cannot be easily differentiated from the real person it purports to represent. A memory and personality clone, such as a mindclone, is essentially a digital copy of a person's mind. A consumer-behaviour clone is a profile or cluster of customers based on demographics.
Truby and Brown coined the term "digital thought clone" to refer to the evolution of digital cloning into a more advanced, personalized digital clone that consists of "a replica of all known data and behavior on a specific living person, recording in real-time their choices, preferences, behavioral trends, and decision making processes."[3]
Digital cloning first became popular in the entertainment industry. The idea of digital clones originated with movie companies creating virtual actors of actors who had died. When an actor dies during a movie production, a digital clone can be synthesized from past footage, photos, and voice recordings to mimic the real person and allow the production to continue.[4]
Modern artificial intelligence has allowed for the creation of deepfakes: manipulations of video in which the person depicted says or performs actions he or she may not have consented to.[5] In April 2018, BuzzFeed released a deepfake video of Jordan Peele, manipulated to depict former President Barack Obama making statements he had not previously made in public, in order to warn the public of the potential dangers of deepfakes.[6]
In addition to deepfakes, companies such as Intellitar allow one to easily create a digital clone of themselves by feeding in a series of images and voice recordings. This essentially creates digital immortality, allowing loved ones to interact with representations of those who have died.[7] Digital cloning not only allows one to digitally memorialize loved ones, but can also be used to create representations of historical figures for educational settings.
With the development of these technologies come numerous concerns, including identity theft, data breaches, and other ethical issues. One of the issues with digital cloning is that there is little to no legislation to protect potential victims against these possible problems.[8]
An Intelligent Avatar Platform (IAP) can be defined as an online platform, supported by artificial intelligence, that allows one to create a clone of themselves.[7] Individuals must train their clone to act and speak like them by feeding the algorithm numerous voice recordings and videos of themselves.[9] Essentially, the platforms are marketed as a place where one "lives eternally", as one's avatar is able to interact with other avatars on the same platform. IAPs are becoming platforms for attaining digital immortality, along with maintaining a family tree and legacy for following generations to see.[7]
Examples of IAPs include Intellitar and Eterni.me. Although most of these companies are still in their developing stages, they share the same goal: allowing the user to create an exact duplicate of themselves that stores every memory of their mind in cyberspace.[7] Some offer a free version, which only lets the user choose an avatar from a given set of images and audio. With the premium setting, however, these companies ask the user to upload photos, videos, and audio recordings to form a realistic version of themselves.[10] Additionally, to ensure that the clone is as close to the original person as possible, companies encourage users to interact with their own clone by chatting with it and answering questions for it. This allows the algorithm to learn the cognition of the original person and apply it to the clone. Intellitar closed down in 2012 because of intellectual property battles over the technology it used.[11]
Potential concerns with IAPs include data breaches and the lack of consent of the deceased. An IAP must have strong safeguards against data breaches and hacking in order to protect the personal information of the dead, which can include voice recordings, photos, and messages.[9] In addition to the risk of personal privacy being compromised, there is also the risk of violating the privacy of the deceased. Although one can consent to the creation of a digital clone of oneself before one's physical death, one cannot consent to the actions the digital clone may later take.
As described earlier, deepfakes are a form of video manipulation in which one can change the people present by feeding in various images of the specific person desired. Furthermore, one can change the voice and words the person in the video says by submitting a series of voice recordings of the new person, lasting about one to two minutes. In 2018, a new app called FakeApp was released, giving the public easy access to this technology for creating videos; it was also used to create the BuzzFeed video of former President Barack Obama.[6][12] With deepfakes, industries can cut the cost of hiring actors or models for films and advertisements by creating videos efficiently at low cost, using only a series of photos and audio recordings gathered with the individual's consent.[13]
A potential concern with deepfakes is that access is given to virtually anyone who downloads one of the apps offering the service. With anyone able to access this tool, some may maliciously use it to create revenge porn or manipulative videos of public officials making statements they would never say in real life. This not only invades the privacy of the individual in the video but also raises various ethical concerns.[14]
Voice cloning is an application of audio deepfake methods that uses artificial intelligence to generate a clone of a person's voice. It involves a deep learning algorithm that takes in voice recordings of an individual and can synthesize that voice to the point where it faithfully replicates the human voice, with great accuracy of tone and likeness.[15]
Cloning a voice requires high-performance computers. The computations are usually done on graphics processing units (GPUs), and very often resort to cloud computing, due to the enormous amount of calculation needed.
Audio data for training has to be fed into an artificial intelligence model, usually in the form of original recordings that provide an example of the voice of the person concerned. The model can use this data to create an authentic-sounding voice, which can reproduce whatever is typed (text-to-speech) or spoken (speech-to-speech).
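As a concrete illustration of the text-to-speech path, the sketch below uses the open-source Coqui TTS library, one of several toolkits offering zero-shot voice cloning from a short reference recording. The model name follows Coqui's published XTTS naming, and the file paths are hypothetical:

```python
# pip install TTS -- the open-source Coqui TTS toolkit. An illustrative
# sketch: any XTTS-style model accepting a reference clip follows this
# same pattern.
from TTS.api import TTS

# Load a multilingual model capable of zero-shot voice cloning.
tts = TTS("tts_models/multilingual/multi-dataset/xtts_v2")

# Speak the given text in the voice captured in reference.wav
# (hypothetical file names for illustration).
tts.tts_to_file(
    text="This sentence is spoken in the cloned voice.",
    speaker_wav="reference.wav",
    language="en",
    file_path="cloned_output.wav",
)
```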
This technology worries many because of its impact on various issues, from political discourse to the rule of law. Some early warning signs have already appeared in the form of phone scams[16][17] and fake videos on social media of people doing things they never did.[18]
Protections against these threats can be implemented in two main ways. The first is to create a way to analyze or detect the authenticity of a video; this approach will inevitably be an uphill battle, as ever-evolving generators defeat the detectors. The second is to embed creation and modification information in software or hardware.[19][20] This works only if the data cannot be edited; the idea is to create an inaudible watermark that acts as a source of truth.[21] In other words, one could establish whether a video is authentic by seeing where it was shot, produced, edited, and so on.[15]
15.ai, a non-commercial freeware web application that began as a proof of concept of the democratization of voice acting and dubbing using technology, gives the public access to such technology.[22] Its gratis and non-commercial nature (with the only stipulation being that the project be properly credited when used[23]), ease of use, and substantial improvements over current text-to-speech implementations have been lauded by users;[24][25][26] however, some critics and voice actors have questioned the legality and ethicality of leaving such technology publicly available and readily accessible.[22][27][28][29]
Although this technology is still in the developmental stage, it is developing rapidly, as big technology corporations such as Google and Amazon are investing vast amounts of money in its development.[30]
Positive uses of voice cloning include the ability to synthesize millions of audiobooks without human labor,[31] and the translation of podcast content into different languages using the podcaster's own voice.[32] Another is that people who have lost their voice can regain a sense of individuality by creating a voice clone from recordings of themselves speaking before they lost their voices.[33]
On the other hand, voice cloning is also susceptible to misuse. For example, the voices of celebrities and public officials can be cloned and made to say something provocative, even though the actual person has no association with what their cloned voice said.[34]
In recognition of the threat that voice cloning poses to privacy, civility, and democratic processes, institutions including the Federal Trade Commission, the U.S. Department of Justice, the Defense Advanced Research Projects Agency (DARPA) and the Italian Ministry of Education, University and Research (MIUR) have weighed in on various audio deepfake use cases and on methods that might be used to combat them.[35][36][37]
Digital cloning can be useful in an educational setting to create a more immersive experience for students. Some students learn better through a more interactive experience, and creating deepfakes can enhance their learning. One example is creating a digital clone of a historical figure, such as Abraham Lincoln, to show what problems he faced during his life and how he overcame them. Another example is having speakers create digital clones of themselves: advocacy groups that struggle with schedules while touring schools during the year can have their clones present the topic at places the group cannot physically reach. These educational benefits offer students a new way of learning and give access to those who were previously unable to reach resources due to environmental conditions.[13]
Although digital cloning has been used in the entertainment and arts industries for a while, artificial intelligence can greatly expand its uses. The movie industry can create ever more hyper-realistic versions of actors and actresses who have died, and can create digital clones for movie scenes that require extras, cutting the cost of production immensely. Digital cloning and related technologies can also benefit non-commercial purposes: artists can be more expressive, synthesizing avatars for their video productions, and can create digital avatars to draft their work and help formulate their ideas before working on the final piece.[13] Actor Val Kilmer lost his voice in 2014 after a tracheotomy due to his throat cancer. He later partnered with an AI company that produced a synthetic voice based on his previous recordings. The voice enabled Kilmer to reprise his "Iceman" role from the 1986 film Top Gun in the 2022 sequel Top Gun: Maverick.[38]
Although digital immortality has existed for a while, as social media accounts of the deceased remain in cyberspace, creating an immortal virtual clone takes on a new meaning. A digital clone can capture not only a person's visual presence but also their mannerisms, including personality and cognition. With digital immortality, one can continue to interact with a representation of a loved one after they have died. Furthermore, families can connect with representations of multiple generations, forming a family tree of sorts that passes the family legacy, and its history, down to future generations.[7]
With the lack of regulation of deepfakes, several concerns have arisen. Concerning deepfake videos that could cause harm include depictions of political officials behaving inappropriately, police officers shooting unarmed black men, and soldiers murdering innocent civilians, none of which may ever have occurred in real life.[39] With such hyper-realistic videos released on the Internet, it becomes very easy for the public to be misinformed, which could lead people to take actions that contribute to a vicious cycle of unnecessary harm. Additionally, with the recent rise in fake news, there is the possibility of combining deepfakes and fake news, making it even harder to distinguish what is real from what is fake. Visual information can be very convincing to the human eye, so the combination of deepfakes and fake news can have a detrimental effect on society.[13] Strict regulations should be made by social media companies and other news platforms.[40]
Deepfakes can also be used maliciously to sabotage another person. With the increased accessibility of deepfake technology, blackmailers and thieves can easily extract personal information for financial and other gains by creating videos of a victim's loved ones asking for help.[13] Furthermore, voice cloning can be used by criminals to make fake phone calls to victims; the calls have the exact voice and mannerisms of the impersonated individual, which can trick victims into unknowingly giving private information to the criminal.[41] Alternatively, a bad actor could create a deepfake of a person superimposed onto a video to extract blackmail payments or as an act of revenge porn.
Creating deepfakes and voice clones for personal use can be extremely difficult to pursue under the law because there is no commercial harm. Rather, the harms often take the form of psychological and emotional damage, which makes it difficult for a court to provide a remedy.[5]
Although numerous legal problems arise with the development of such technology, there are also ethical problems that may not be covered by current legislation. One of the biggest problems with the use of deepfakes and voice cloning is the potential for identity theft. However, identity theft by means of deepfakes is difficult to prosecute because there are currently no laws specific to deepfakes. Furthermore, the damage that malicious use of deepfakes can bring is more psychological and emotional than financial, which makes it harder to provide a remedy. Allen argues that one's privacy should be treated in a manner similar to Kant's categorical imperative.[5]
Another ethical implication is the private and personal information one must give up to use the technology. Because digital cloning, deepfakes, and voice cloning all use deep-learning algorithms, the more information the algorithm receives, the better the results.[42] However, every platform carries a risk of data breach, which could lead to very personal information being accessed by groups the user never consented to. Furthermore, post-mortem privacy comes into question when family members of a deceased loved one try to gather as much information as possible to create a digital clone of the deceased, without the deceased's permission over how much information is given up.[43]
In the United States, copyright law requires some type of originality and creativity in order to protect the author's individuality. However, creating a digital clone simply means taking personal data, such as photos, voice recordings, and other information, to create a virtual person as close to the actual person as possible. In the Supreme Court case Feist Publications, Inc. v. Rural Telephone Service Co., Justice O'Connor emphasized the importance of originality and some degree of creativity. However, the extent of originality and creativity required is not clearly defined, creating a gray area in copyright law.[44] Creating a digital clone requires not only the person's data but also the creator's input on how the clone should act or move. In Meshwerks v. Toyota, this question was raised, and the court stated that the copyright principles developed for photography should be applied to digital clones.[44]
Given the current lack of legislation protecting individuals against potential malicious uses of digital cloning, the right of publicity may be the best legal protection available.[4] The right of publicity, also referred to as personality rights, gives individuals autonomy in controlling their own voice, appearance, and the other aspects that essentially make up their persona in a commercial setting.[45] If a deepfake video or digital clone of a person arises without their consent, depicting them taking actions or making statements out of character, they can take legal action by claiming a violation of their right of publicity. Although the right of publicity specifically protects the image of an individual in a commercial setting, which requires some type of profit, some argue that the legislation may be updated to protect virtually anyone's image and personality.[46] Another important note is that the right of publicity is implemented only in specific states, and states may interpret the right differently.
Digital clones and digital thought clones raise legal issues relating to data privacy, informed consent, anti-discrimination, copyright, and the right of publicity. More jurisdictions urgently need to enact legislation similar to Europe's General Data Protection Regulation to protect people against unscrupulous and harmful uses of their data and the unauthorised development and use of digital thought clones.[3]
One way to avoid becoming a victim of the technologies mentioned above is to develop artificial intelligence against them. Several companies have already developed AI that can detect manipulated images by looking at patterns in each pixel.[47] Applying similar logic, they are trying to create software that takes each frame of a given video and analyzes it pixel by pixel to find the pattern of the original video and determine whether it has been manipulated.[48]
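A schematic sketch of that frame-by-frame approach is below; OpenCV handles the video decoding, while the pixel-level classifier itself, the genuinely hard part, is left as a hypothetical stub:

```python
import cv2  # OpenCV for frame-by-frame video decoding

def looks_manipulated(frame) -> float:
    """Hypothetical stand-in for a trained pixel-level detector.

    Returns a manipulation score in [0, 1]; a real system would run a
    model trained to spot generator artifacts in pixel patterns.
    """
    raise NotImplementedError("plug in a trained detection model here")

def scan_video(path: str, threshold: float = 0.5) -> bool:
    """Return True if any frame scores above the manipulation threshold."""
    capture = cv2.VideoCapture(path)
    try:
        while True:
            ok, frame = capture.read()
            if not ok:  # end of stream or read error
                return False
            if looks_manipulated(frame) > threshold:
                return True
    finally:
        capture.release()
```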
In addition to developing new technology that can detect video manipulation, many researchers stress the importance of private corporations creating stricter guidelines to protect individual privacy.[30] With the development of artificial intelligence, it is necessary to ask how it impacts society as it appears in virtually every aspect of life, including medicine, education, politics, and the economy, and it becomes important to have laws that protect human rights as the technology spreads. As the private sector gains more digital power over the public, it is important to set strict regulations and laws to prevent private corporations from using personal data maliciously. Additionally, the history of data breaches and violations of privacy policies should serve as a warning of how personal information can be accessed and used without a person's consent.[8]
Another way to prevent harm from these technologies is to educate people on their pros and cons, empowering each individual to make a rational decision based on their own circumstances.[49] It is also important to educate people on how to protect the information they put on the Internet. By increasing the digital literacy of the public, people have a greater chance of determining whether a given video has been manipulated, as they can be more skeptical of the information they find online.[30]
https://en.wikipedia.org/wiki/Digital_cloning
Advanced Configuration and Power Interface (ACPI) is an open standard that operating systems can use to discover and configure computer hardware components, to perform power management (e.g. putting unused hardware components to sleep), auto-configuration (e.g. Plug and Play and hot swapping), and status monitoring. It was first released in December 1996. ACPI aims to replace Advanced Power Management (APM), the MultiProcessor Specification, and the Plug and Play BIOS (PnP) Specification.[1] ACPI brings power management under the control of the operating system, as opposed to the previous BIOS-centric approach that relied on platform-specific firmware to determine power management and configuration policies.[2] The specification is central to the Operating System-directed configuration and Power Management (OSPM) system. ACPI defines hardware abstraction interfaces between the device's firmware (e.g. BIOS, UEFI), the computer hardware components, and the operating system.[3][4]
Internally, ACPI advertises the available components and their functions to the operating system kernel using instruction lists ("methods") provided through the system firmware (UEFI or BIOS), which the kernel parses. ACPI then executes the desired operations written in ACPI Machine Language (such as the initialization of hardware components) using an embedded minimal virtual machine.
Intel, Microsoft and Toshiba originally developed the standard, while HP, Huawei and Phoenix also participated later. In October 2013, the ACPI Special Interest Group (ACPI SIG), the original developers of the ACPI standard, agreed to transfer all assets to the UEFI Forum, where all future development takes place.[5] The latest version of the standard, 6.5, was released in August 2022.[6]
The firmware-level ACPI has three main components: the ACPI tables, the ACPI BIOS, and the ACPI registers. The ACPI BIOS generates ACPI tables and loads them into main memory. Much of the firmware ACPI functionality is provided in bytecode of ACPI Machine Language (AML), a Turing-complete, domain-specific low-level language, stored in the ACPI tables.[7] To make use of the ACPI tables, the operating system must have an interpreter for the AML bytecode. A reference AML interpreter implementation is provided by the ACPI Component Architecture (ACPICA). At BIOS development time, AML bytecode is compiled from ASL (ACPI Source Language) code.[8][9]
The ACPI Component Architecture (ACPICA), mainly written by Intel's engineers, provides an open-source, platform-independent reference implementation of the operating system-related ACPI code.[10] The ACPICA code is used by Linux, Haiku, ArcaOS[11] and FreeBSD,[8] which supplement it with their operating-system-specific code.
The first revision of the ACPI specification was released in December 1996, supporting 16-, 24- and 32-bit addressing spaces. It was not until August 2000 that ACPI received 64-bit address support, as well as support for multiprocessor workstations and servers, with revision 2.0.
In 1999, then-Microsoft CEO Bill Gates stated in an e-mail that Linux would benefit from ACPI without Microsoft having to do the work, and suggested making it Windows-only.[12][13][14]
In September 2004, revision 3.0 was released, bringing to the ACPI specification support for SATA interfaces, the PCI Express bus, multiprocessor support for more than 256 processors, ambient light sensors and user-presence devices, as well as extending the thermal model beyond the previous processor-centric support.
Released in June 2009, revision 4.0 of the ACPI specification added various new features to the design; the most notable are USB 3.0 support, logical processor idling support, and x2APIC support.
Initially, ACPI was exclusive to the x86 architecture. Revision 5.0 of the ACPI specification, released in December 2011,[15] added support for the ARM architecture. Revision 5.1 was released in July 2014.[16]
The latest specification revision is 6.5, which was released in August 2022.[6]
Microsoft's Windows 98 was the first operating system to implement ACPI,[17][18] but its implementation was somewhat buggy or incomplete,[19][20] although some of the associated problems were caused by first-generation ACPI hardware.[21] Other operating systems, including later versions of Windows, macOS (x86 macOS only), eComStation, ArcaOS,[22] FreeBSD (since FreeBSD 5.0[23]), NetBSD (since NetBSD 1.6[24]), OpenBSD (since OpenBSD 3.8[25]), HP-UX, OpenVMS, Linux, GNU/Hurd and PC versions of Solaris, have at least some support for ACPI.[26] Some newer operating systems, like Windows Vista, require the computer to have an ACPI-compliant BIOS, and since Windows 8, the S0ix/Modern Standby state has been implemented.[27]
Windows operating systems use acpi.sys to access ACPI events.[28]
The 2.4 series of the Linux kernel had only minimal support for ACPI, with better support implemented (and enabled by default) from kernel version 2.6.0 onwards.[29] Old ACPI BIOS implementations tend to be quite buggy, and consequently are not supported by later operating systems. For example, Windows 2000, Windows XP, and Windows Server 2003 only use ACPI if the BIOS date is after January 1, 1999.[30] Similarly, Linux kernel 2.6 may not use ACPI if the BIOS date is before January 1, 2001.[29]
Linux-based operating systems can provide handling of ACPI events via acpid.[31]
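As an illustration, acpid also exposes incoming events over a UNIX-domain socket, commonly /var/run/acpid.socket; the socket path and the exact event strings vary by distribution, so the following Python sketch is an assumption-labeled example rather than a guaranteed interface:

```python
# Minimal sketch: read ACPI events from acpid's UNIX socket on Linux.
# Assumptions: acpid is running and exposes /var/run/acpid.socket
# (the default on many distributions); event lines look like
# "button/power PBTN 00000080 00000000". Both may differ per system.
import socket

SOCKET_PATH = "/var/run/acpid.socket"  # assumed default path

def watch_acpi_events() -> None:
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
        s.connect(SOCKET_PATH)
        buf = b""
        while True:
            chunk = s.recv(4096)
            if not chunk:
                break  # acpid closed the connection
            buf += chunk
            while b"\n" in buf:
                line, buf = buf.split(b"\n", 1)
                # Each line is one event: class, name, two hex arguments.
                print("ACPI event:", line.decode(errors="replace"))

if __name__ == "__main__":
    watch_acpi_events()
```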
Once an OSPM-compatible operating system activates ACPI, it takes exclusive control of all aspects of power management and device configuration. The OSPM implementation must expose an ACPI-compatible environment to device drivers, which exposes certain system, device and processor states.
The ACPI Specification defines four global "Gx" states and six sleep "Sx" states for an ACPI-compliant computer system: G0 (working), G1 (sleeping), G2 (soft off) and G3 (mechanical off); the sleep states S0 (working) through S5 (soft off) refine these, with S1–S4 being the sleeping states within G1.[32][33]
The specification also defines a Legacy state: the state of an operating system which does not support ACPI. In this state, the hardware and power are not managed via ACPI, effectively disabling ACPI.
The device states D0–D3 are device dependent: D0 is the fully-on operating state, D1 and D2 are intermediate low-power states whose meaning varies by device, and D3 is the off state.
The CPU power states C0–C3 are defined as follows: C0 is the operating state; C1 (often called halt) is a state in which the processor is not executing instructions but can return to execution essentially instantaneously; C2 (stop-clock) and C3 (sleep) are progressively deeper states that save more power at the cost of longer wake-up latency.
While a device or processor operates (D0 and C0, respectively), it can be in one of several power-performance states. These states are implementation-dependent. P0 is always the highest-performance state, with P1 to Pn being successively lower-performance states. The total number of states is device- or processor-dependent, but can be no greater than 16.[41]
P-states have become known as SpeedStep in Intel processors, as PowerNow! or Cool'n'Quiet in AMD processors, and as PowerSaver in VIA processors.
ACPI-compliant systems interact with hardware through either a "Function Fixed Hardware (FFH) Interface", or a platform-independent hardware programming model which relies on platform-specific ACPI Machine Language (AML) provided by the original equipment manufacturer (OEM).
Function Fixed Hardware interfaces are platform-specific features, provided by platform manufacturers for the purposes of performance and failure recovery. Standard Intel-based PCs have a fixed-function interface defined by Intel,[43] which provides a set of core functionality that reduces an ACPI-compliant system's need for full driver stacks for providing basic functionality during boot time or in the case of major system failure.
The ACPI Platform Error Interface (APEI) is a specification for reporting hardware errors (for example, from the chipset or RAM) to the operating system.
ACPI defines many tables that provide the interface between an ACPI-compliant operating system and system firmware (BIOS or UEFI). These include, for example, the RSDP, RSDT, XSDT, FADT, FACS, DSDT, SSDT, MADT, and MCFG.[44][45]
The tables allow description of system hardware in a platform-independent manner, and are presented either as fixed-format data structures or in AML. The main AML table is the DSDT (Differentiated System Description Table). The AML can be decompiled by tools like Intel's iASL (open-source, part of ACPICA) for purposes such as patching the tables to expand OS compatibility.[46][47]
The Root System Description Pointer (RSDP) is located in a platform-dependent manner, and describes the rest of the tables.
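On Linux, the kernel exports the raw tables under /sys/firmware/acpi/tables, which makes the fixed-format table headers easy to inspect. A minimal sketch follows (reading usually requires root; the header layout, a 4-byte signature followed by a 4-byte length, revision and checksum, follows the ACPI specification):

```python
# Minimal sketch: list ACPI tables exported by the Linux kernel and
# decode the standard table header (signature, length, revision).
# Reading /sys/firmware/acpi/tables typically requires root.
import os
import struct

TABLES_DIR = "/sys/firmware/acpi/tables"

def dump_table_headers() -> None:
    for name in sorted(os.listdir(TABLES_DIR)):
        path = os.path.join(TABLES_DIR, name)
        if not os.path.isfile(path):
            continue  # skip subdirectories
        with open(path, "rb") as fh:
            header = fh.read(10)
        if len(header) < 10:
            continue
        # An ACPI system description table header begins with a 4-byte
        # ASCII signature, a 4-byte little-endian length, a 1-byte
        # revision and a 1-byte checksum.
        sig, length, revision, checksum = struct.unpack("<4sIBB", header)
        print(f"{name}: signature={sig.decode(errors='replace')} "
              f"length={length} revision={revision}")

if __name__ == "__main__":
    dump_table_headers()
```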
A custom ACPI table called the Windows Platform Binary Table (WPBT) is used by Microsoft to allow vendors to add software into the Windows OS automatically. Some vendors, such as Lenovo, have been caught using this feature to install harmful software such as Superfish.[48] Samsung shipped PCs with Windows Update disabled.[48] Windows versions older than Windows 8 do not support this feature, but alternative techniques can be used. This behavior has been compared to rootkits.[49][50]
In November 2003, Linus Torvalds, author of the Linux kernel, described ACPI as "a complete design disaster in every way".[51][52]
|
https://en.wikipedia.org/wiki/ACPI
|
The black box model of a power converter, also called a behavioral model, is a method of system identification used to represent the characteristics of a power converter that is regarded as a black box. There are two types of black box model of a power converter: when the model includes the load it is called a terminated model, and otherwise an un-terminated model. The type of black box model is chosen based on the goal of the modeling. The black box model of a power converter can be a tool for filter design in a system integrating power converters.
To successfully implement a black box model of a power converter, the equivalent circuit of the converter is assumed a priori, under the assumption that this equivalent circuit remains constant across different operating conditions. The equivalent circuit of the black box model is built by measuring the stimulus/response behavior of the power converter.
Different modeling methods for power converters can be applied in different circumstances. The white box model of a power converter is suitable when all the inner components are known, which can be quite difficult to achieve due to the complex nature of the power converter. The grey box model combines features of the black box and white box models; it applies when some of the components are known, or when the relationship between the physical elements and the equivalent circuit is investigated.
Since a power converter consists of power semiconductor device switches, it is a nonlinear and time-variant system.[1] One assumption of the black box model of a power converter is that the system can be regarded as a linear system when the filter is designed properly to avoid saturation and nonlinear effects. Another strong assumption related to the modeling procedure is that the equivalent circuit model is invariant under different operating conditions, since in the modeling procedure the circuit components are determined under several different operating conditions.
The expression of a black box model of a power converter is the assumed equivalent circuit model (in the frequency domain), which can easily be integrated into the circuit of a larger system in order to facilitate filter design, control system design and pulse-width modulation design. In general, the equivalent circuit contains two main parts: active components such as voltage/current sources, and passive components such as impedances. The process of black box modeling is an approach to determine this equivalent circuit for the converter.
The active components in the equivalent circuit are voltage/current sources. There are usually at least two sources, with several possible configurations depending on the analysis approach, such as two voltage sources, two current sources, or one voltage source and one current source.
The passive components, containing resistors, capacitors and inductors, can be expressed as a combination of several impedances or admittances. Another expression method is to regard the passive components of the power converter as a two-port network and use a Y-matrix or Z-matrix to describe their characteristics.
Different modeling methods can be utilized to define the equivalent circuit, depending on the chosen equivalent circuit and the available measurement techniques. However, many modeling methods need one or more of the assumptions mentioned above in order to regard the system as a linear time-invariant system or a periodically switched linear system.
This method is based on the two assumptions mentioned in the section Assumption, so the system is regarded as a linear time-invariant system. Based on these assumptions, the equivalent circuit can be derived from several equations describing different operating conditions. The equivalent circuit model is defined as containing three impedances and two current sources, so five unknown parameters need to be determined. Three sets of different operating conditions are built up by changing the external impedance, and the corresponding currents and voltages at the terminals of the power converter are measured or simulated as known parameters. In each condition, two equations containing the five unknown variables can be derived according to Kirchhoff's circuit laws and nodal analysis. In total, six equations can be used to solve for these five unknowns, and the equivalent circuit can be determined in this way.
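As a schematic illustration, the resulting overdetermined system can be solved in a least-squares sense at each frequency point. The sketch below is purely illustrative: it assumes the six terminal equations have already been arranged into a linear form A·p = b for the parameter vector p (three impedances and two source terms), and all numbers are placeholders rather than measurements of any specific converter:

```python
# Illustrative sketch only: solve the six terminal equations for the
# five unknown equivalent-circuit parameters in a least-squares sense.
# Assumes the equations have been arranged as a linear system A @ p = b
# at one frequency; A and b here are placeholder complex values, not
# data from a real converter.
import numpy as np

rng = np.random.default_rng(0)

# Six equations (two per operating condition, three conditions) in five
# unknowns: p = [Z1, Z2, Z3, Is1, Is2].
A = rng.standard_normal((6, 5)) + 1j * rng.standard_normal((6, 5))
p_true = np.array([10 + 2j, 5 - 1j, 20 + 0j, 0.1 + 0.05j, 0.2 - 0.03j])
b = A @ p_true  # in practice: measured terminal voltages/currents

p_est, residuals, rank, _ = np.linalg.lstsq(A, b, rcond=None)
print("estimated parameters:", np.round(p_est, 3))
print("rank of A:", rank)  # must be 5 for the parameters to be identifiable
```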
There are many methods used to determine the passive elements. The conventional method is to switch off the power converter and measure the impedance with an impedance analyzer, or to measure the scattering parameters with a vector network analyzer and compute the impedance afterwards. These conventional methods assume that the impedance of the power converter is the same in the operating condition and the switched-off condition.
Many state-of-the-art methods have been investigated to measure the impedance while the power converter is operating. One method is to put two clamp-on current probes into the system, where one is called the receiving probe and the other the injecting probe.[2] The outputs of the two probes are connected to a vector network analyzer, and the impedance of the power converter is measured after some calibration procedures in CM (common-mode) and DM (differential-mode) measurement setups. This method is restricted by its delicate calibration procedure.
Another state-of-the-art method is to utilize a transformer and an impedance analyzer in two different setups in order to measure the CM and DM impedances separately.[3] The measurement range of this method is limited by the characteristics of the transformer.
|
https://en.wikipedia.org/wiki/Black_box_model_of_power_converter
|
In mathematics, combinatorial topology was an older name for algebraic topology, dating from the time when topological invariants of spaces (for example the Betti numbers) were regarded as derived from combinatorial decompositions of spaces, such as decomposition into simplicial complexes. After the proof of the simplicial approximation theorem this approach provided rigour.
The change of name reflected the move to organise topological classes such as cycles-modulo-boundaries explicitly into abelian groups. This point of view is often attributed to Emmy Noether,[1] and so the change of title may reflect her influence. The transition is also attributed to the work of Heinz Hopf,[2] who was influenced by Noether, and to Leopold Vietoris and Walther Mayer, who independently defined homology.[3]
A fairly precise date can be supplied in the internal notes of the Bourbaki group. While this kind of topology was still "combinatorial" in 1942, it had become "algebraic" by 1944.[4] This corresponds also to the period when homological algebra and category theory were introduced for the study of topological spaces, and largely supplanted combinatorial methods.
More recently, the term combinatorial topology has been revived for investigations carried out by treating topological objects as composed of discrete pieces, as in the older combinatorial topology, an approach that has again been found useful.
Azriel Rosenfeld (1973) proposed digital topology for a type of image processing that can be considered as a new development of combinatorial topology. The digital forms of the Euler characteristic theorem and the Gauss–Bonnet theorem were obtained by Li Chen and Yongwu Rong.[5][6] A 2D grid cell topology already appeared in the Alexandrov–Hopf book Topologie I (1935).
Gottfried Wilhelm Leibniz had envisioned a form of combinatorial topology as early as 1679 in his work Characteristica Geometrica.[7]
|
https://en.wikipedia.org/wiki/Combinatorial_topology
|
In graph theory, a branch of mathematics, a skew-symmetric graph is a directed graph that is isomorphic to its own transpose graph, the graph formed by reversing all of its edges, under an isomorphism that is an involution without any fixed points. Skew-symmetric graphs are identical to the double covering graphs of bidirected graphs.
Skew-symmetric graphs were first introduced under the name of antisymmetrical digraphs by Tutte (1967), later as the double covering graphs of polar graphs by Zelinka (1976b), and still later as the double covering graphs of bidirected graphs by Zaslavsky (1991). They arise in modeling the search for alternating paths and alternating cycles in algorithms for finding matchings in graphs, in testing whether a still life pattern in Conway's Game of Life may be partitioned into simpler components, in graph drawing, and in the implication graphs used to efficiently solve the 2-satisfiability problem.
As defined, e.g., by Goldberg & Karzanov (1996), a skew-symmetric graph G is a directed graph, together with a function σ mapping vertices of G to other vertices of G, satisfying the following properties: first, σ(v) ≠ v for every vertex v; second, σ(σ(v)) = v for every vertex v; and third, for every edge (u, v), (σ(v), σ(u)) must also be an edge.
One may use the third property to extend σ to an orientation-reversing function on the edges ofG.
The transpose graph of G is the graph formed by reversing every edge of G, and σ defines a graph isomorphism from G to its transpose. However, in a skew-symmetric graph, it is additionally required that the isomorphism pair each vertex with a different vertex, rather than allowing a vertex to be mapped to itself by the isomorphism or grouping more than two vertices in a cycle of isomorphism.
A path or cycle in a skew-symmetric graph is said to be regular if, for each vertex v of the path or cycle, the corresponding vertex σ(v) is not part of the path or cycle.
Every directed path graph with an even number of vertices is skew-symmetric, via a symmetry that swaps the two ends of the path. However, path graphs with an odd number of vertices are not skew-symmetric, because the orientation-reversing symmetry of these graphs maps the center vertex of the path to itself, something that is not allowed for skew-symmetric graphs.
Similarly, a directed cycle graph is skew-symmetric if and only if it has an even number of vertices. In this case, the number of different mappings σ that realize the skew symmetry of the graph equals half the length of the cycle.
A skew-symmetric graph may equivalently be defined as the double covering graph of a polar graph or switch graph,[1] which is an undirected graph in which the edges incident to each vertex are partitioned into two subsets. Each vertex of the polar graph corresponds to two vertices of the skew-symmetric graph, and each edge of the polar graph corresponds to two edges of the skew-symmetric graph. This equivalence is the one used by Goldberg & Karzanov (1996) to model problems of matching in terms of skew-symmetric graphs; in that application, the two subsets of edges at each vertex are the unmatched edges and the matched edges. Zelinka (following F. Zitek) and Cook visualize the vertices of a polar graph as points where multiple tracks of a train track come together: if a train enters a switch via a track that comes in from one direction, it must exit via a track in the other direction. The problem of finding non-self-intersecting smooth curves between given points in a train track comes up in testing whether certain kinds of graph drawings are valid,[2] and may be modeled as the search for a regular path in a skew-symmetric graph.
A closely related concept is the bidirected graph or polarized graph,[3] a graph in which each of the two ends of each edge may be either a head or a tail, independently of the other end. A bidirected graph may be interpreted as a polar graph by letting the partition of edges at each vertex be determined by the partition of endpoints at that vertex into heads and tails; however, swapping the roles of heads and tails at a single vertex ("switching" the vertex) produces a different bidirected graph but the same polar graph.[4]
To form the double covering graph (i.e., the corresponding skew-symmetric graph) from a polar graph G, create for each vertex v of G two vertices v0 and v1, and let σ(vi) = v1−i. For each edge e = (u, v) of G, create two directed edges in the covering graph, one oriented from u to v and one oriented from v to u. If e is in the first subset of edges at v, these two edges are from u0 into v0 and from v1 into u1, while if e is in the second subset, the edges are from u0 into v1 and from v0 into u1.
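A minimal Python sketch of this construction, assuming the polar graph is given as an edge list in which each edge is tagged with the subset (0 or 1) it belongs to at its second endpoint (this input convention is an assumption of the sketch):

```python
# Minimal sketch of the double covering construction described above.
# A polar graph is given as a list of edges (u, v, s), where s in {0, 1}
# says which of the two edge subsets at v the edge e = (u, v) belongs to.
from collections import defaultdict

def double_cover(edges):
    """Return the skew-symmetric digraph as an adjacency dict over
    vertices (v, i) with i in {0, 1}, where sigma((v, i)) = (v, 1 - i)."""
    g = defaultdict(set)
    for u, v, s in edges:
        if s == 0:     # e in the first subset of edges at v
            g[(u, 0)].add((v, 0))
            g[(v, 1)].add((u, 1))
        else:          # e in the second subset of edges at v
            g[(u, 0)].add((v, 1))
            g[(v, 0)].add((u, 1))
    return g

# Tiny example: two vertices joined by one edge in each subset at b.
example = [("a", "b", 0), ("a", "b", 1)]
for src, dsts in sorted(double_cover(example).items()):
    print(src, "->", sorted(dsts))
```

One can check that the result is skew-symmetric: the reverse of each generated edge, with σ applied to both endpoints, is again a generated edge.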
In the other direction, given a skew-symmetric graph G, one may form a polar graph that has one vertex for every corresponding pair of vertices in G and one undirected edge for every corresponding pair of edges in G. The undirected edges at each vertex of the polar graph may be partitioned into two subsets according to which vertex of the polar graph they go out of and come into.
A regular path or cycle of a skew-symmetric graph corresponds to a path or cycle in the polar graph that uses at most one edge from each subset of edges at each of its vertices.
In constructing matchings in undirected graphs, it is important to find alternating paths, paths of vertices that start and end at unmatched vertices, in which the edges at odd positions in the path are not part of a given partial matching and in which the edges at even positions in the path are part of the matching. By removing the matched edges of such a path from a matching, and adding the unmatched edges, one can increase the size of the matching. Similarly, cycles that alternate between matched and unmatched edges are of importance in weighted matching problems.
An alternating path or cycle in an undirected graph may be modeled as a regular path or cycle in a skew-symmetric directed graph.[5] To create a skew-symmetric graph from an undirected graph G with a specified matching M, view G as a switch graph in which the edges at each vertex are partitioned into matched and unmatched edges; an alternating path in G is then a regular path in this switch graph and an alternating cycle in G is a regular cycle in the switch graph.
Goldberg & Karzanov (1996) generalized alternating path algorithms to show that the existence of a regular path between any two vertices of a skew-symmetric graph may be tested in linear time. Given additionally a non-negative length function on the edges of the graph that assigns the same length to any edge e and to σ(e), the shortest regular path connecting a given pair of nodes in a skew-symmetric graph with m edges and n vertices may be found in time O(m log n). If the length function is allowed to have negative lengths, the existence of a negative regular cycle may be tested in polynomial time.
Along with the path problems arising in matchings, skew-symmetric generalizations of the max-flow min-cut theorem have also been studied.[6]
Cook (2003) shows that a still life pattern in Conway's Game of Life may be partitioned into two smaller still lifes if and only if an associated switch graph contains a regular cycle. As he shows, for switch graphs with at most three edges per vertex, this may be tested in polynomial time by repeatedly removing bridges (edges the removal of which disconnects the graph) and vertices at which all edges belong to a single partition, until no more such simplifications may be performed. If the result is an empty graph, there is no regular cycle; otherwise, a regular cycle may be found in any remaining bridgeless component. The repeated search for bridges in this algorithm may be performed efficiently using a dynamic graph algorithm of Thorup (2000).
Similar bridge-removal techniques in the context of matching were previously considered by Gabow, Kaplan & Tarjan (1999).
An instance of the 2-satisfiability problem, that is, a Boolean expression in conjunctive normal form with two variables or negations of variables per clause, may be transformed into an implication graph by replacing each clause $u \lor v$ by the two implications $(\lnot u) \Rightarrow v$ and $(\lnot v) \Rightarrow u$. This graph has a vertex for each variable or negated variable, and a directed edge for each implication; it is, by construction, skew-symmetric, with a correspondence σ that maps each variable to its negation.
As Aspvall, Plass & Tarjan (1979) showed, a satisfying assignment to the 2-satisfiability instance is equivalent to a partition of this implication graph into two subsets of vertices, S and σ(S), such that no edge starts in S and ends in σ(S). If such a partition exists, a satisfying assignment may be formed by assigning a true value to every variable in S and a false value to every variable in σ(S). This may be done if and only if no strongly connected component of the graph contains both some vertex v and its complementary vertex σ(v). If two vertices belong to the same strongly connected component, the corresponding variables or negated variables are constrained to equal each other in any satisfying assignment of the 2-satisfiability instance. The total time for testing strong connectivity and finding a partition of the implication graph is linear in the size of the given 2-CNF expression.
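A compact sketch of this procedure follows, using Kosaraju's algorithm for strongly connected components; the literal encoding (variable x as index 2x, ¬x as 2x+1, so σ flips the low bit) is an implementation choice of this sketch:

```python
# Minimal 2-SAT sketch: build the implication graph, compute strongly
# connected components with Kosaraju's algorithm, and read off an
# assignment. Variable x is literal 2*x; its negation is 2*x + 1.
def solve_2sat(n_vars, clauses):
    """clauses: list of (a, b) literal pairs meaning (a OR b).
    Returns a satisfying assignment as a list of bools, or None."""
    n = 2 * n_vars
    adj, radj = [[] for _ in range(n)], [[] for _ in range(n)]
    for a, b in clauses:
        # (a OR b) becomes (NOT a -> b) and (NOT b -> a).
        adj[a ^ 1].append(b); radj[b].append(a ^ 1)
        adj[b ^ 1].append(a); radj[a].append(b ^ 1)

    order, seen = [], [False] * n
    for s in range(n):                 # first pass: record finish order
        if seen[s]:
            continue
        stack, seen[s] = [(s, iter(adj[s]))], True
        while stack:
            v, it = stack[-1]
            for w in it:
                if not seen[w]:
                    seen[w] = True
                    stack.append((w, iter(adj[w])))
                    break
            else:
                order.append(v)
                stack.pop()

    comp, label = [-1] * n, 0
    for s in reversed(order):          # second pass on the reverse graph
        if comp[s] != -1:
            continue
        stack, comp[s] = [s], label
        while stack:
            v = stack.pop()
            for w in radj[v]:
                if comp[w] == -1:
                    comp[w] = label
                    stack.append(w)
        label += 1

    assignment = []
    for x in range(n_vars):
        if comp[2 * x] == comp[2 * x + 1]:
            return None                # x and NOT x in one SCC: unsatisfiable
        # Components are labeled in topological order; a literal is true
        # when its component comes after that of its negation.
        assignment.append(comp[2 * x] > comp[2 * x + 1])
    return assignment

# (x0 OR x1) AND (NOT x0 OR x1): any assignment with x1 = True works.
print(solve_2sat(2, [(0, 2), (1, 2)]))
```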
It is NP-complete to determine whether a given directed graph is skew-symmetric, by a result of Lalonde (1981) that it is NP-complete to find a color-reversing involution in a bipartite graph. Such an involution exists if and only if the directed graph given by orienting each edge from one color class to the other is skew-symmetric, so testing skew-symmetry of this directed graph is hard. This complexity does not affect path-finding algorithms for skew-symmetric graphs, because these algorithms assume that the skew-symmetric structure is given as part of the input to the algorithm rather than requiring it to be inferred from the graph alone.
|
https://en.wikipedia.org/wiki/Skew-symmetric_graph
|
In mathematics, cyclical monotonicity is a generalization of the notion of monotonicity to the case of vector-valued functions.[1][2]
Let $\langle\cdot,\cdot\rangle$ denote the inner product on an inner product space $X$ and let $U$ be a nonempty subset of $X$. A correspondence $f:U\rightrightarrows X$ is called cyclically monotone if for every set of points $x_1,\dots,x_{m+1}\in U$ with $x_{m+1}=x_1$ it holds that $\sum_{k=1}^{m}\langle x_{k+1},f(x_{k+1})-f(x_k)\rangle \geq 0$.[3]
For the case of scalar functions of one variable the definition above is equivalent to usual monotonicity. Gradients of convex functions are cyclically monotone. In fact, the converse is true. Suppose $U$ is convex and $f:U\rightrightarrows\mathbb{R}^n$ is a correspondence with nonempty values. Then if $f$ is cyclically monotone, there exists an upper semicontinuous convex function $F:U\to\mathbb{R}$ such that $f(x)\subset\partial F(x)$ for every $x\in U$, where $\partial F(x)$ denotes the subgradient of $F$ at $x$.[4]
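A small numerical illustration, assuming a single-valued map f (a sketch, not a proof tool): the gradient of the convex function F(x) = ‖x‖² is f(x) = 2x, and the cyclic sums in the definition should be nonnegative for every cycle of points:

```python
# Numerical sketch: evaluate the cyclic-monotonicity sum for a
# single-valued map f over randomly chosen cycles of points. This only
# tests finitely many cycles; it is an illustration, not a proof.
import numpy as np

def cyclic_sum(f, points):
    """Sum_{k=1}^{m} <x_{k+1}, f(x_{k+1}) - f(x_k)> with x_{m+1} = x_1."""
    cycle = list(points) + [points[0]]
    return sum(np.dot(cycle[k + 1], f(cycle[k + 1]) - f(cycle[k]))
               for k in range(len(points)))

grad = lambda x: 2.0 * x   # gradient of the convex function F(x) = ||x||^2

rng = np.random.default_rng(1)
for _ in range(5):
    pts = [rng.standard_normal(3) for _ in range(4)]  # a 4-point cycle in R^3
    print(round(cyclic_sum(grad, pts), 6))            # always >= 0
```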
|
https://en.wikipedia.org/wiki/Cyclical_monotonicity
|
Legal informatics is an area within information science.
The American Library Association defines informatics as "the study of the structure and properties of information, as well as the application of technology to the organization, storage, retrieval, and dissemination of information." Legal informatics therefore pertains to the application of informatics within the context of the legal environment and as such involves law-related organizations (e.g., law offices, courts, and law schools) and users of information and information technologies within these organizations.[1]
Policy issues in legal informatics arise from the use of informational technologies in the implementation of law, such as the use of subpoenas for information found in emails, search queries, and social networks. Policy approaches to legal informatics issues vary throughout the world. For example, European countries tend to require the destruction or anonymization of data so that it cannot be used for discovery.[2]
The widespread introduction of cloud computing provides several benefits in delivering legal services. Legal service providers can use the Software as a Service model to earn a profit by charging customers a per-use or subscription fee. This model has several benefits over traditional bespoke services.
Software as a service also complicates the attorney-client relationship in a way that may have implications for attorney–client privilege. The traditional delivery model makes it easy to draw delineations of when attorney-client privilege attaches and when it does not. But in more complex models of legal service delivery, other actors or automated processes may moderate the relationship between a client and their attorney, making it difficult to tell which communications should be legally privileged.[3]
Artificial intelligence is employed in online dispute resolution platforms that use optimization algorithms and blind-bidding.[4] Artificial intelligence is also frequently employed in modeling the legal ontology, "an explicit, formal, and general specification of a conceptualization of properties of and relations between objects in a given domain".[5]
Artificial intelligence and law (AI and law) is a subfield of artificial intelligence (AI) mainly concerned with applications of AI to legal informatics problems and original research on those problems. It is also concerned with contributing in the other direction: exporting tools and techniques developed in the context of legal problems to AI in general. For example, theories of legal decision making, especially models of argumentation, have contributed to knowledge representation and reasoning; models of social organization based on norms have contributed to multi-agent systems; reasoning with legal cases has contributed to case-based reasoning; and the need to store and retrieve large amounts of textual data has resulted in contributions to conceptual information retrieval and intelligent databases.[6][7][8]
Although Loevinger,[9] Allen[10] and Mehl[11] anticipated several of the ideas that would become important in AI and Law, the first serious proposal for applying AI techniques to law is usually taken to be Buchanan and Headrick.[12] Early work from this period includes Thorne McCarty's influential TAXMAN project[13] in the US and Ronald Stamper's LEGOL project[14] in the UK. Landmarks in the early 1980s include Carole Hafner's work on conceptual retrieval,[15] Anne Gardner's work on contract law,[16] Edwina Rissland's work on legal hypotheticals[17] and the work at Imperial College London on the representation of legislation by means of executable logic programs.[18]
Early meetings of scholars included a one-off meeting at Swansea,[19] the series of conferences organized by IDG in Florence[20] and the workshops organised by Charles Walter at the University of Houston in 1984 and 1985.[21] In 1987 a biennial conference, the International Conference on AI and Law (ICAIL), was instituted.[22] This conference began to be seen as the main venue for publishing and developing ideas within AI and Law,[23] and it led to the foundation of the International Association for Artificial Intelligence and Law (IAAIL), to organize and convene subsequent ICAILs. This, in turn, led to the foundation of the Artificial Intelligence and Law Journal, first published in 1992.[24] In Europe, the annual JURIX conferences (organised by the Jurix Foundation for Legal Knowledge Based Systems) began in 1988. Initially intended to bring together Dutch-speaking (i.e. Dutch and Flemish) researchers, JURIX quickly developed into an international, primarily European, conference and since 2002 has regularly been held outside the Dutch-speaking countries.[25] Since 2007 the JURISIN workshops have been held in Japan under the auspices of the Japanese Society for Artificial Intelligence.[26]
The interoperable legal documents standard Akoma Ntoso allows machine-driven processes to operate on the syntactic and semantic components of digital parliamentary, judicial and legislative documents, thus facilitating the development of high-quality information resources and forming a basis for AI tools. Its goal is to substantially enhance the performance, accountability, quality and openness of parliamentary and legislative operations based on best practices and guidance, through machine-assisted drafting and machine-assisted (legal) analysis. Embedded in the environment of the semantic web, it forms the basis for a heterogeneous yet interoperable ecosystem within which these tools can operate and communicate, as well as for future applications and use cases based on digital law or rule representation.[27]
In 2019, the city of Hangzhou, China established a pilot program for an artificial intelligence-based Internet Court to adjudicate disputes related to e-commerce and internet-related intellectual property claims.[28]: 124 Parties appear before the court via videoconference, and AI evaluates the evidence presented and applies relevant legal standards.[28]: 124
Today, AI and law embrace a wide range of topics, including formal models of legal reasoning and argumentation, executable models of legislation, legal ontologies, quantitative prediction of case outcomes, and legal information retrieval.
Formal models of legal texts and legal reasoning have been used in AI and Law to clarify issues, to give a more precise understanding and to provide a basis for implementations. A variety of formalisms have been used, including propositional and predicate calculi; deontic, temporal and non-monotonic logics; and state transition diagrams. Prakken and Sartor[31] give a detailed and authoritative review of the use of logic and argumentation in AI and Law, together with a comprehensive set of references.
An important role of formal models is to remove ambiguity. In fact, legislation abounds with ambiguity: because it is written in natural language, there are no brackets, and so the scope of connectives such as "and" and "or" can be unclear. "Unless" is also capable of several interpretations, and legal draftsmen never write "if and only if", although this is often what they intend by "if". In perhaps the earliest use of logic to model law in AI and Law, Layman Allen advocated the use of propositional logic to resolve such syntactic ambiguities in a series of papers.[10]
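For instance, the two bracketings of a clause like "A and B or C" denote different propositions, which a truth table makes explicit. A minimal sketch (the proposition names are invented for illustration):

```python
# Minimal sketch: the two bracketings of "A and B or C" are different
# propositions, which is the kind of syntactic ambiguity Allen proposed
# resolving with propositional logic. Names are purely illustrative.
from itertools import product

for a, b, c in product([False, True], repeat=3):
    reading1 = (a and b) or c   # "(A and B) or C"
    reading2 = a and (b or c)   # "A and (B or C)"
    if reading1 != reading2:
        print(f"A={a}, B={b}, C={c}: readings disagree "
              f"({reading1} vs {reading2})")
```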
In the late 1970s and throughout the 1980s a significant strand of work on AI and Law involved the production of executable models of legislation, originating with Thorne McCarty's TAXMAN[13] and Ronald Stamper's LEGOL.[14] TAXMAN was used to model the majority and minority arguments in a US tax law case (Eisner v Macomber), and was implemented in the micro-PLANNER programming language. LEGOL was used to provide a formal model of the rules and regulations that govern an organization, and was implemented in a condition-action rule language of the kind used for expert systems.
The TAXMAN and LEGOL languages were executable, rule-based languages, which did not have an explicit logical interpretation. However, the formalisation of a large portion of the British Nationality Act by Sergot et al.[18] showed that the natural language of legal documents bears a close resemblance to the Horn clause subset of first-order predicate calculus. Moreover, it identified the need to extend the use of Horn clauses by including negative conditions, to represent rules and exceptions. The resulting extended Horn clauses are executable as logic programs.
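The flavour of such executable rules can be suggested with a toy forward-chaining sketch; the single rule below paraphrases the often-quoted British Nationality Act example, and the facts, predicate names and the rule itself are illustrative rather than the actual formalisation:

```python
# Toy sketch of legislation as Horn clauses with naive forward chaining.
# The rule paraphrases the well-known British Nationality Act example;
# all facts and predicate names here are invented for illustration.
facts = {
    ("born_in_uk", "peter"),
    ("born_after_commencement", "peter"),
    ("parent_of", "mary", "peter"),
    ("british_citizen", "mary"),
}

def rules(fs):
    """Yield new facts derivable in one step: the whole Horn-clause body
    must already be satisfied by the existing facts."""
    for (_, parent, child) in [f for f in fs if f[0] == "parent_of"]:
        if (("born_in_uk", child) in fs
                and ("born_after_commencement", child) in fs
                and ("british_citizen", parent) in fs):
            yield ("british_citizen", child)

# Forward chaining: apply the rules until no new facts appear (a fixpoint).
changed = True
while changed:
    new = set(rules(facts)) - facts
    changed = bool(new)
    facts |= new

print(("british_citizen", "peter") in facts)  # True
```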
Later work on larger applications, such as that on Supplementary Benefits,[32] showed that logic programs need further extensions, to deal with such complications as multiple cross references, counterfactuals, deeming provisions, amendments, and highly technical concepts (such as contribution conditions). The use of hierarchical representations[33] was suggested to address the problem of cross reference; and so-called isomorphic[34] representations were suggested to address the problems of verification and frequent amendment. As the 1990s developed, this strand of work became partially absorbed into the development of formalisations of domain conceptualisations (so-called ontologies), which became popular in AI following the work of Gruber.[35] Early examples in AI and Law include Valente's functional ontology[36] and the frame-based ontologies of Visser and van Kralingen.[37] Legal ontologies have since become the subject of regular workshops at AI and Law conferences, and there are many examples ranging from generic top-level and core ontologies[38] to very specific models of particular pieces of legislation.
Since law comprises sets of norms, it is unsurprising that deontic logics have been tried as the formal basis for models of legislation. These, however, have not been widely adopted as the basis for expert systems, perhaps because expert systems are supposed to enforce the norms, whereas deontic logic becomes of real interest only when we need to consider violations of the norms.[39] In law, directed obligations,[40] whereby an obligation is owed to another named individual, are of particular interest, since violations of such obligations are often the basis of legal proceedings. There is also some interesting work combining deontic and action logics to explore normative positions.[41]
In the context of multi-agent systems, norms have been modelled using state transition diagrams. Often, especially in the context of electronic institutions,[42] the norms so described are regimented (i.e., cannot be violated), but in other systems violations are also handled, giving a more faithful reflection of real norms. For a good example of this approach see Modgil et al.[43]
Law often concerns issues about time, both relating to the content, such as time periods and deadlines, and relating to the law itself, such as commencement. Some attempts have been made to model these temporal aspects using both computational formalisms such as the Event Calculus[44] and temporal logics such as defeasible temporal logic.[45]
In any consideration of the use of logic to model law it needs to be borne in mind that law is inherently non-monotonic, as is shown by the rights of appeal enshrined in all legal systems, and the way in which interpretations of the law change over time.[46][47][48] Moreover, in the drafting of law exceptions abound, and, in the application of law, precedents are overturned as well as followed. In logic programming approaches, negation as failure is often used to handle non-monotonicity,[49] but specific non-monotonic logics such as defeasible logic[50] have also been used. Following the development of abstract argumentation,[51] however, these concerns are increasingly being addressed through argumentation in monotonic logic rather than through the use of non-monotonic logics.
Two prominent recent accounts of legal reasoning involve reasons: John Horty's, which focuses on common-law reasoning and the notion of precedent,[52] and Federico Faroldi's, which focuses on civil law and uses justification logic.[53]
Both academic and proprietary quantitative legal prediction models exist. One of the earliest examples of a working quantitative legal prediction model occurred in the form of the Supreme Court forecasting project. The Supreme Court forecasting model attempted to predict the results of all the cases on the 2002 term of the Supreme Court. The model predicted 75% of cases correctly, compared to experts who only predicted 59.1% of cases.[54] Another example of an academic quantitative legal prediction model is a 2012 model that predicted the results of federal securities class action lawsuits.[55] Some academics and legal technology startups are attempting to create algorithmic models to predict case outcomes.[56][57] Part of this overall effort involves improved case assessment for litigation funding.[58]
In order to better evaluate the quality of case outcome prediction systems, a proposal has been made to create a standardised dataset that would allow comparisons between systems.[59]
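A schematic sketch of what such a model looks like, using entirely synthetic data (every feature and number here is invented; real systems use far richer case features and real outcome labels):

```python
# Schematic sketch only: a logistic-regression outcome model trained on
# synthetic "case" features. All data is randomly generated; this shows
# the shape of a quantitative legal prediction pipeline, nothing more.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n_cases = 500
# Invented features, e.g. court level, claim size, prior-ruling score.
X = rng.standard_normal((n_cases, 3))
# Synthetic ground truth with noise, standing in for observed outcomes.
y = (X @ np.array([1.5, -0.8, 0.4]) + rng.standard_normal(n_cases)) > 0

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = LogisticRegression().fit(X_tr, y_tr)
print(f"held-out accuracy: {model.score(X_te, y_te):.2f}")
```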
Within the practice issues conceptual area, progress continues to be made on both litigation and transaction focused technologies. In particular, technology including predictive coding has the potential to effect substantial efficiency gains in law practice. Though predictive coding has largely been applied in the litigation space, it is beginning to make inroads in transaction practice, where it is being used to improve document review in mergers and acquisitions.[60]Other advances, including XML coding in transaction contracts, and increasingly advanced document preparation systems demonstrate the importance of legal informatics in the transactional law space.[61][62]
Current applications of AI in the legal field utilize machines to review documents, particularly when a high level of completeness and confidence in the quality of document analysis is depended upon, such as in instances of litigation and where due diligence plays a role.[63] Predictive coding leverages small samples to cross-reference similar items and weed out less relevant documents, so attorneys can focus on the truly important key documents; it produces statistically validated results that equal or surpass the accuracy and, notably, the rate of human review.[63]
Advances in technology and legal informatics have led to new models for the delivery of legal services. Legal services have traditionally been a "bespoke" product created by a professional attorney on an individual basis for each client.[64] However, to work more efficiently, parts of these services will move sequentially from (1) bespoke to (2) standardized, (3) systematized, (4) packaged, and (5) commoditized.[64] Moving from one stage to the next will require embracing different technologies and knowledge systems.[64]
The spread of the Internet and the development of legal technology and informatics are extending legal services to individuals and small and medium-sized companies.
Corporate legal departments may use legal informatics for purposes such as managing patent portfolios,[65] and for the preparation, customization and management of documents.[66]
|
https://en.wikipedia.org/wiki/Applications_of_artificial_intelligence_to_legal_informatics
|
The Srizbi botnet is considered one of the world's largest botnets, and was responsible for sending out more than half of all the spam being sent by all the major botnets combined.[1][2][3] The botnet consists of computers infected by the Srizbi trojan, which sent spam on command. Srizbi suffered a massive setback in November 2008 when hosting provider McColo was taken down; global spam volumes were reduced by up to 93% as a result of this action.
The size of the Srizbi botnet was estimated to be around 450,000[4] compromised machines, with estimation differences being smaller than 5% among various sources.[2][5] The botnet was reported to be capable of sending around 60 billion spam messages a day, which is more than half of the total of the approximately 100 billion spam messages sent every day. As a comparison, the highly publicized Storm botnet only managed to reach around 20% of the total number of spam messages sent during its peak periods.[2][6]
The Srizbi botnet showed a relative decline after an aggressive growth in the number of spam messages sent out in mid-2008. On July 13, 2008, the botnet was believed to be responsible for roughly 40% of all the spam on the net, a sharp decline from the almost 60% share in May.[7]
The earliest reports on Srizbi trojan outbreaks were around June 2007, with small differences in detection dates across antivirus software vendors.[8][9] However, reports indicate that the first released version had already been assembled on 31 March 2007.[10] The Srizbi botnet is considered by some experts to be the second largest botnet of the Internet. However, there is controversy surrounding the Kraken botnet.[11][12][13][14] As of 2008, it may be that Srizbi is the largest botnet.
The Srizbi botnet consists of Microsoft Windows computers which have been infected by the Srizbi trojan horse. This trojan horse is deployed onto its victim computer through the MPack malware kit.[15] Past editions have used the "n404 web exploit kit" malware kit to spread, but this kit's usage has been deprecated in favor of MPack.[10]
The distribution of these malware kits is partially achieved by utilizing the botnet itself. The botnet has been known to send out spam containing links to fake videos about celebrities, which include a link pointing to the malware kit. Similar attempts have been taken with other subjects such as illegal software sales and personal messages.[16][17][18] Apart from this self-propagation, the MPack kit is also known for much more aggressive spreading tactics, most notably the compromise of about 10,000 websites in June 2007.[19] These domains, which included a surprising number of pornographic websites,[20] ended up forwarding the unsuspecting visitor to websites containing the MPack program.
Once a computer becomes infected by the trojan horse, the computer becomes known as a zombie, which will then be at the command of the controller of the botnet, commonly referred to as the botnet herder.[21] The operation of the Srizbi botnet is based upon a number of servers which control the utilization of the individual bots in the botnet. These servers are redundant copies of each other, which protects the botnet from being crippled in case a system failure or legal action takes a server down.
The server side of the Srizbi botnet is handled by a program called "Reactor Mailer", a Python-based web component responsible for coordinating the spam sent out by the individual bots in the botnet. Reactor Mailer has existed since 2004, and is currently in its third release, which is also used to control the Srizbi botnet. The software allows for secure login and allows multiple accounts, which strongly suggests that access to the botnet and its spam capacity is sold to external parties (software as a service). This is further reinforced by evidence showing that the Srizbi botnet runs multiple batches of spam at a time; blocks of IP addresses can be observed sending different types of spam at any one time. Once a user has been granted access, he or she can utilize the software to create the message they want to send, test it for its SpamAssassin score and after that send it to all the users in a list of email addresses.
Suspicion has arisen that the writer of the Reactor Mailer program might be the same person responsible for the Srizbi trojan, as code analysis shows a code fingerprint that matches between the two programs. If this claim is indeed true, then this coder might well be responsible for the trojan behind another botnet, named Rustock. According to Symantec, the code used in the Srizbi trojan is very similar to the code found in the Rustock trojan, and could well be an improved version of the latter.[22]
The Srizbi trojan is the client-side program responsible for sending the spam from infected machines. The trojan has been credited with being extremely efficient at this task, which explains why Srizbi is capable of sending such high volumes of spam without having a huge numerical advantage in the number of infected computers.
Apart from having an efficient spam engine, the trojan is also very capable of hiding itself from both the user and the system itself, including any products designed to remove the trojan from the system. The trojan itself is fully executed in kernel mode and has been noted to employ rootkit technologies to prevent any form of detection.[23] By patching the NTFS file system drivers, the trojan makes its files invisible to both the operating system and any human user utilizing the system. The trojan is also capable of hiding the network traffic it generates by directly attaching NDIS and TCP/IP drivers to its own process, a feature currently unique to this trojan. This procedure has been proven to allow the trojan to bypass both firewall and sniffer protection provided locally on the system.[22]
Once the bot is in place and operational, it will contact one of the hardcoded servers from a list it carries with it. This server will then supply the bot with a zip file containing a number of files required by the bot to start its spamming operation; analyses have identified these as including configuration data, the spam message itself, and recipient address lists.
When these files have been received, the bot will first initialize a software routine which allows it to remove files critical for revealing spam and rootkit applications.[22] After this procedure is done, the trojan will then start sending out the spam message it has received from the control server.
The Srizbi botnet has been the basis for several incidents which have received media coverage. Several of the most notable ones are described below. This is by no means a complete list of incidents, but a list of the major ones.
In October 2007, several anti-spam firms noticed an unusual political spam campaign emerging. Unlike the usual messages about counterfeit watches, stocks, or penis enlargement, the mail contained promotional information about United States presidential candidate Ron Paul. The Ron Paul camp dismissed the spam as not being related to the official presidential campaign. A spokesman told the press: "If it is true, it could be done by a well-intentioned yet misguided supporter or someone with bad intentions trying to embarrass the campaign. Either way, this is independent work, and we have no connection."[24]
The spam was ultimately confirmed as having come from the Srizbi network.[25] Through the capture of one of the control servers involved,[26] investigators learned that the spam message had been sent to up to 160 million email addresses by as few as 3,000 bot computers. The spammer has only been identified by his Internet handle "nenastnyj" (Ненастный, meaning "rainy" or "foul", as in "rainy day, foul weather", in Russian); their real identity has not been determined.
In the week from 20 June 2008, Srizbi managed to triple the number of malicious spam messages sent, from an average 3% to 9.9%, largely due to its own effort.[27] This particular spam wave was an aggressive attempt to increase the size of the Srizbi botnet by sending emails to users which warned them that they had been videotaped naked.[28] Sending this message, which is a kind of spam referred to as "Stupid Theme", was an attempt to get people to click the malicious link included in the mail before realizing that the message was most likely spam. While old, this social engineering technique remains a proven method of infection for spammers.
The size of this operation shows that the power and monetary income of a botnet are closely based upon its spam capacity: more infected computers translate directly into greater revenue for the botnet controller. It also shows the power botnets have to increase their own size, mainly by using a part of their own strength in numbers.[29]
After the removal of the control servers hosted by McColo in late November 2008, the control of the botnet was transferred to servers hosted in Estonia. This was accomplished through a mechanism in the trojan horse that queried an algorithmically generated set of domain names, one of which was registered by the individuals controlling the botnet. The United States computer security firm FireEye, Inc. kept the system out of the controllers' hands for a period of two weeks by preemptively registering the generated domain names, but was not in a position to sustain this effort. However, the spamming activity was greatly reduced after this control server transfer.[30]
|
https://en.wikipedia.org/wiki/Srizbi_botnet
|
The Computer Fraud and Abuse Act of 1986 (CFAA) is a United States cybersecurity bill that was enacted in 1986 as an amendment to existing computer fraud law (18 U.S.C. § 1030), which had been included in the Comprehensive Crime Control Act of 1984.[1] Prior to computer-specific criminal laws, computer crimes were prosecuted as mail and wire fraud, but the applicable law was often insufficient.[2]
The original 1984 bill was enacted in response to concern that computer-related crimes might go unpunished.[3] The House Committee Report to the original computer crime bill included a statement by a representative of GTE-owned Telenet that characterized the 1983 techno-thriller film WarGames, in which a young teenager (played by Matthew Broderick) from Seattle breaks into a U.S. military supercomputer programmed to predict possible outcomes of nuclear war and unwittingly almost starts World War III, as "a realistic representation of the automatic dialing and access capabilities of the personal computer."[4]
The CFAA was written to extend existing tort law to intangible property, while, in theory, limiting federal jurisdiction to cases "with a compelling federal interest—i.e., where computers of the federal government or certain financial institutions are involved or where the crime itself is interstate in nature", but its broad definitions have spilled over into contract law (see "Protected Computer", below). In addition to amending a number of the provisions in the original section 1030, the CFAA also criminalized additional computer-related acts. Provisions addressed the distribution of malicious code and denial-of-service attacks. Congress also included in the CFAA a provision criminalizing trafficking in passwords and similar items.[1]
Since then, the Act has been amended a number of times: in 1989, 1994, 1996, in 2001 by the USA PATRIOT Act, in 2002, and in 2008 by the Identity Theft Enforcement and Restitution Act. With each amendment of the law, the types of conduct that fell within its reach were extended. In 2015, President Barack Obama proposed expanding the CFAA and the RICO Act.[5] DEF CON organizer and Cloudflare researcher Marc Rogers, Senator Ron Wyden, and Representative Zoe Lofgren stated opposition to this on the grounds that it would make many regular internet activities illegal.[6] In 2021, the Supreme Court ruled in Van Buren v. United States to provide a narrow interpretation of the meaning of "exceeds authorized access".[7]
The only computers, in theory, covered by the CFAA are defined as "protected computers". They are defined under 18 U.S.C. § 1030(e)(2) to mean a computer used exclusively by a financial institution or the United States Government, or one used in or affecting interstate or foreign commerce or communication, including a computer located outside the United States.
In practice, any ordinary computer has come under the jurisdiction of the law, including cellphones, due to the interstate nature of most Internet communication.[8]
The Computer Fraud and Abuse Act is both a criminal law and a statute that creates a private right of action, allowing compensation and injunctive or other equitable relief to anyone harmed by a violation of this law. These provisions have allowed private companies to sue disloyal employees for damages for the misappropriation of confidential information (trade secrets).
There have been criminal convictions for CFAA violations in the context of civil law, for breach of contract or terms of service violations. Many common and insignificant online acts, such as password-sharing and copyright infringement, can transform a CFAA misdemeanor into a felony. The punishments are severe, similar to sentences for selling or importing drugs, and may be disproportionate. Prosecutors have used the CFAA to protect private business interests and to intimidate free-culture activists, deterring undesirable, yet legal, conduct.[50][51]
One example of the law's harshness was shown in United States v. Tyler King,[52] where King refused initial offers by the government over his involvement in a conspiracy to "gain unauthorized access" to the computer system of a small company that an ex-girlfriend of King worked for. His role, even while not directly involved, resulted in 6.5 years of imprisonment. No financial motive was established. A non-profit was started to advocate against further harshness toward others targeted under the broad law.[53]
Tim Wu called the CFAA "the worst law in technology".[54]
Professor of law Ric Simmons notes that many provisions of the CFAA merely combine language identical to pre-existing federal laws with the element of "access[ing] a protected computer without authorization, or [by] exceed[ing] authorized access",[55] meaning that "the CFAA merely provides an additional charge for prosecutors to bring if the defendant used a computer while committing the crime."[56] Professor Joseph Olivenbaum has similarly criticized the CFAA's "computer-specific approach," noting both the risk of redundancy and resultant definitional problems.[57]
The CFAA increasingly presents real obstacles to journalists reporting stories important to the public's interest.[58] As data journalism increasingly becomes "a good way of getting to the truth of things . . . in this post-truth era," as one data journalist told Google, the need for further clarity around the CFAA increases.[58]
As per Star Kashman, an expert in cybersecurity law, the CFAA presents some challenges in cases related to Search Engine Hacking (also known as Google Dorking). Although Kashman states that accessing publicly available information is legal under the CFAA, she also notes that in many cases Search Engine Hacking is ultimately prosecuted under the CFAA. Kashman believes prosecuting cases of Google Dorking under the CFAA could render the CFAA void for vagueness by making it illegal to access publicly available information.[59]
Announcing the proposal that became "Aaron's Law", Representative Zoe Lofgren stated: "The government was able to bring such disproportionate charges against Aaron because of the broad scope of the Computer Fraud and Abuse Act (CFAA) and the wire fraud statute. It looks like the government used the vague wording of those laws to claim that violating an online service's user agreement or terms of service is a violation of the CFAA and the wire fraud statute. Using the law in this way could criminalize many everyday activities and allow for outlandishly severe penalties. When our laws need to be modified, Congress has a responsibility to act. A simple way to correct this dangerous legal interpretation is to change the CFAA and the wire fraud statutes to exclude terms of service violations. I will introduce a bill that does exactly that."
In the wake of the prosecution and subsequent suicide of Aaron Swartz (who used a script to download scholarly research articles in excess of what JSTOR terms of service allowed), lawmakers proposed amending the Computer Fraud and Abuse Act. Representative Zoe Lofgren drafted a bill that would help "prevent what happened to Aaron from happening to other Internet users".[60] Aaron's Law (H.R. 2454, S. 1196[61]) would exclude terms of service violations from the 1984 Computer Fraud and Abuse Act and from the wire fraud statute.[62]
In addition to Lofgren's efforts, Representatives Darrell Issa and Jared Polis (also on the House Judiciary Committee) raised questions in the immediate aftermath of Swartz's death regarding the government's handling of the case. Polis called the charges "ridiculous and trumped up," referring to Swartz as a "martyr."[63] Issa, chair of the House Oversight Committee, announced an investigation of the Justice Department's prosecution.[63][64]
By May 2014, Aaron's Law had stalled in committee. Filmmaker Brian Knappenberger alleges this occurred due to Oracle Corporation's financial interest in maintaining the status quo.[65]
Aaron's Law was reintroduced in May 2015 (H.R. 1918, S. 1030[66]) and again stalled. There has been no further introduction of related bills.
|
https://en.wikipedia.org/wiki/Computer_Fraud_and_Abuse_Act
|
In mathematics, exactness may refer to: exact sequences, exact functors, exact differentials, or exact differential equations.
|
https://en.wikipedia.org/wiki/Exactness_(disambiguation)
|
The latest stable release of the BOINC client is version 8.0.2, released on 30 May 2024 for macOS, Linux, and Android.
The Berkeley Open Infrastructure for Network Computing[2] (BOINC, pronounced /bɔɪŋk/, rhymes with "oink"[3]) is an open-source middleware system for volunteer computing (a type of distributed computing).[4] Developed originally to support SETI@home,[5] it became the platform for many other applications in areas as diverse as medicine, molecular biology, mathematics, linguistics, climatology, environmental science, and astrophysics, among others.[6] The purpose of BOINC is to enable researchers to utilize the processing resources of personal computers and other devices around the world.
BOINC development began with a group based at the Space Sciences Laboratory (SSL) at the University of California, Berkeley, led by David P. Anderson, who also led SETI@home. As a high-performance volunteer computing platform, BOINC brings together 34,236 active participants employing 136,341 active computers (hosts) worldwide, processing on average 20.164 petaFLOPS daily as of 16 November 2021[7] (this would rank as the 21st largest processing capability in the world compared with individual supercomputers).[8] The National Science Foundation (NSF) funds BOINC through awards SCI/0221529,[9] SCI/0438443[10] and SCI/0721124.[11] Guinness World Records ranks BOINC as the largest computing grid in the world.[12]
BOINC code runs on various operating systems, including Microsoft Windows, macOS, Android,[13] Linux, and FreeBSD.[14] BOINC is free software released under the terms of the GNU Lesser General Public License (LGPL).
BOINC was originally developed to manage the SETI@home project. David P. Anderson has said that he chose its name because he wanted something that was not "imposing", but rather "light, catchy, and maybe - like 'Unix' - a little risqué", so he "played around with various acronyms and settled on 'BOINC'".[15]
The original SETI client was non-BOINC software used exclusively for SETI@home. It was one of the first volunteer computing projects and was not designed with a high level of security. As a result, some participants in the project attempted to cheat the project to gain "credits", while others submitted entirely falsified work. BOINC was designed, in part, to combat these security breaches.[16]
The BOINC project started in February 2002, and its first version was released on April 10, 2002. The first BOINC-based project was Predictor@home, launched on June 9, 2004. In 2009, AQUA@home deployed multi-threaded CPU applications for the first time,[17] followed by the first OpenCL application in 2010.
As of 15 August 2022, there are 33 projects on the official list.[18] There are also, however, BOINC projects not included on the official list. Each year, an international BOINC Workshop is hosted to increase collaboration among project administrators. In 2021, the workshop was hosted virtually.[19]
While not officially affiliated with BOINC, several independent projects reward BOINC users for their participation, including Charity Engine (sweepstakes based on processing power, with prizes funded by private entities who purchase computational time from CE users), Bitcoin Utopia (now defunct), and Gridcoin (a blockchain which mints coins based on processing power).
BOINC is software that can exploit the unused CPU and GPU cycles on computer hardware to perform scientific computing. In 2008, BOINC's website announced that Nvidia had developed CUDA, a parallel computing platform that enables GPUs to be used for scientific computing. With Nvidia's assistance, several BOINC-based projects (e.g., MilkyWay@home, SETI@home) developed applications that run on Nvidia GPUs using CUDA. BOINC added support for the ATI/AMD family of GPUs in October 2009. The GPU applications run from 2 to 10 times faster than the former CPU-only versions. GPU support (via OpenCL) was added for computers using macOS with AMD Radeon graphics cards, with the current BOINC client supporting OpenCL on Windows, Linux, and macOS. GPU support is also provided for Intel GPUs.[20]
BOINC consists of a server system and client software that communicate to distribute work units, process them, and return results.
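To make this division of labor concrete, here is a toy sketch of a volunteer-computing client loop in Python. It is not BOINC's actual protocol or API; every name in it (fetch_work_unit, report_result, the payload fields) is a hypothetical placeholder.

```python
import time

# A toy sketch of the client loop that BOINC-style middleware implements.
# All names here (fetch_work_unit, report_result, the payload fields) are
# hypothetical placeholders, not BOINC's real protocol or API.

def fetch_work_unit(server_url):
    """Pretend scheduler request; a real client would do an HTTP/XML exchange."""
    return {"id": 42, "data": list(range(10))}

def compute(work_unit):
    """Stand-in for the project's science application."""
    return sum(x * x for x in work_unit["data"])

def report_result(server_url, work_unit_id, result):
    print(f"uploading result {result} for work unit {work_unit_id}")

def client_loop(server_url, iterations=3):
    for _ in range(iterations):
        wu = fetch_work_unit(server_url)
        result = compute(wu)            # done with otherwise-idle CPU/GPU cycles
        report_result(server_url, wu["id"], result)
        time.sleep(0.1)                 # a real client throttles per user preferences

client_loop("https://example.org/scheduler")
```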
A BOINC app also exists for Android, allowing anyone with an Android device – smartphone, tablet, or Kindle – to share their unused computing power. Users can select the research projects they want to support, provided a project appears in the app's list of available projects.
By default, the application allows computing only when the device is connected to a Wi-Fi network, is being charged, and the battery has a charge of at least 90%.[21] Some of these settings can be changed to suit the user's needs. Not all BOINC projects are available,[22] and some projects are not compatible with all versions of the Android operating system, or their supply of work is intermittent. Currently available projects[22] are Asteroids@home, Einstein@Home, LHC@home, Moo! Wrapper, Rosetta@home, World Community Grid and Yoyo@home. As of September 2021, the most recent version of the mobile application can only be downloaded from the BOINC website or the F-Droid repository, as the official Google Play store does not allow downloading and running executables not signed by the app developer, and each BOINC project has its own executable files.
BOINC can be controlled remotely by remote procedure calls (RPC), from the command line, and from a BOINC Manager. BOINC Manager currently has two "views": the Advanced View and the Simplified GUI. The Grid View was removed in the 6.6.x clients as it was redundant. The appearance (skin) of the Simplified GUI is user-customizable, in that users can create their own designs.
A BOINC Account Manager is an application that manages multiple BOINC project accounts across multiple computers (CPUs) and operating systems. Account managers were designed for people who are new to BOINC or have several computers participating in several projects. The account manager concept was conceived and developed jointly by GridRepublic and BOINC. Current and past account managers include GridRepublic and BAM! (BOINC Account Manager).
BOINC is used by many groups and individuals. Some BOINC projects are based at universities and research labs, while others are run independently.[24]
|
https://en.wikipedia.org/wiki/BOINC
|
Igor Aleksandrovič Mel'čuk, sometimes Melchuk (Russian: Игорь Александрович Мельчук; Ukrainian: Ігор Олександрович Мельчук; born 1932) is a Soviet and Canadian linguist, a retired professor at the Department of Linguistics and Translation, Université de Montréal.
He graduated from the Moscow State University's Philological department and worked from 1956 to 1976 at the Institute of Linguistics in Moscow. He is known as one of the developers of Meaning–text theory, with the seminal book published in 1974. He is also the author of the five-volume Cours de morphologie générale.
After making statements in support of the Soviet dissidents Andrey Sinyavsky and Yuli Daniel, he was fired from the Institute and subsequently emigrated from the Soviet Union in 1976. Since 1977 he has lived and worked in Canada.
Melchuk is Jewish.[1]
|
https://en.wikipedia.org/wiki/Igor_Mel%27%C4%8Duk
|
Harrison Colyar White (March 21, 1930 – May 18, 2024) was an American sociologist who was the Giddings Professor of Sociology at Columbia University. White played an influential role in the "Harvard Revolution" in social networks[1] and the New York School of relational sociology.[2] He is credited with the development of a number of mathematical models of social structure, including vacancy chains and blockmodels. He was a leader of a revolution in sociology that is still in process, using models of social structure based on patterns of relations instead of the attributes and attitudes of individuals.[3]
Among social network researchers, White is widely respected. For instance, at the 1997 International Network for Social Network Analysis conference, the organizer held a special "White Tie" event dedicated to White.[4] Social network researcher Emmanuel Lazega refers to him as both "Copernicus and Galileo" because he invented both the vision and the tools.
The most comprehensive documentation of his theories can be found in the book Identity and Control, first published in 1992. A major rewrite of the book appeared in June 2008. In 2011, White received the W.E.B. DuBois Career of Distinguished Scholarship Award from the American Sociological Association, which honors "scholars who have shown outstanding commitment to the profession of sociology and whose cumulative work has contributed in important ways to the advancement of the discipline."[5] Before his retirement to Tucson, Arizona, White was interested in sociolinguistics and business strategy as well as sociology.
White was born on March 21, 1930, in Washington, D.C. He had three siblings, and his father was a doctor in the US Navy. Although he moved among different naval bases throughout his adolescence, he considered himself Southern and regarded Nashville, Tennessee, as his home. At the age of 15, he entered the Massachusetts Institute of Technology (MIT), receiving his undergraduate degree at 20 years of age; five years later, in 1955, he received a doctorate in theoretical physics, also from MIT, with John C. Slater as his advisor.[6] His dissertation was titled A quantum-mechanical calculation of inter-atomic force constants in copper.[7] This was published in the Physical Review as "Atomic Force Constants of Copper from Feynman's Theorem" (1958).[8] While at MIT he also took a course with the political scientist Karl Deutsch, whom White credits with encouraging him to move toward the social sciences.[9]
After receiving his PhD in theoretical physics, he received a fellowship from the Ford Foundation to begin his second doctorate, in sociology, at Princeton University. His dissertation advisor was Marion J. Levy. White also worked with Wilbert Moore, Fred Stephan, and Frank W. Notestein while at Princeton.[10] His cohort was very small, with only four or five other graduate students, including David Matza and Stanley Udy.
At the same time, he took up a position as an operations analyst at the Operations Research Office, Johns Hopkins University, from 1955 to 1956.[11] During this period, he worked with Lee S. Christie on Queuing with Preemptive Priorities or with Breakdown, which was published in 1958.[12] Christie had previously worked alongside the mathematical psychologist R. Duncan Luce in the Small Group Laboratory at MIT while White was completing his first PhD in physics, also at MIT.
While continuing his studies at Princeton, White also spent a year as a fellow at the Center for Advanced Study in the Behavioral Sciences, Stanford University, California, where he met Harold Guetzkow. Guetzkow was a faculty member at the Carnegie Institute of Technology, known for his application of simulations to social behavior and a long-time collaborator with many other pioneers in organization studies, including Herbert A. Simon, James March, and Richard Cyert.[13] Upon meeting Simon through his mutual acquaintance with Guetzkow, White received an invitation to move from California to Pittsburgh to work as an assistant professor of Industrial Administration and Sociology at the Graduate School of Industrial Administration, Carnegie Institute of Technology (later Carnegie Mellon University), where he stayed from 1957 to 1959. In an interview, he claimed to have fought with the dean, Leyland Bock, to have the word "sociology" included in his title.
It was also during his time at the Stanford Center for Advanced Study that White met his first wife, Cynthia A. Johnson, a graduate of Radcliffe College, where she had majored in art history. The couple's joint work on the French Impressionists, Canvases and Careers (1965) and "Institutional Changes in the French Painting World" (1964), originally grew out of a 1957 seminar on art at the Center for Advanced Study led by Robert Wilson. White originally hoped to use sociometry to map the social structure of French art in order to predict shifts, but he had an epiphany that it was not social structure but institutional structure that explained the shift.
It was also during these years that White, still a graduate student in sociology, wrote and published his first social scientific work, "Sleep: A Sociological Interpretation", in Acta Sociologica in 1960, together with Vilhelm Aubert, a Norwegian sociologist. This work was a phenomenological examination of sleep which attempted to "demonstrate that sleep was more than a straightforward biological activity... [but rather also] a social event".[14]
For his dissertation, White carried out empirical research on a research and development department in a manufacturing firm, consisting of interviews and a 110-item questionnaire administered to managers. He specifically used sociometric questions, which he used to model the "social structure" of relationships between various departments and teams in the organization. In May 1960 he submitted his doctoral dissertation, titled Research and Development as a Pattern in Industrial Management: A Case Study in Institutionalisation and Uncertainty,[15] earning a PhD in sociology from Princeton University. His first publication based on his dissertation was "Management conflict and sociometric structure" in the American Journal of Sociology.[16]
In 1959 James Coleman left the University of Chicago to found a new department of social relations at Johns Hopkins University; this left a vacancy open for a mathematical sociologist like White, who moved to Chicago to start working as an associate professor in the Department of Sociology. At that time, highly influential sociologists such as Peter Blau, Mayer Zald, Elihu Katz, Everett Hughes, and Erving Goffman were there. As Princeton only required one year in residence, and White had taken the opportunity to hold positions at Johns Hopkins, Stanford, and Carnegie while still working on his dissertation, it was Chicago that White credits as his "real socialization in a way, into sociology."[17] It was here that White advised his first two graduate students, Joel H. Levine and Morris Friedell, both of whom went on to make contributions to social network analysis in sociology. While at the Center for Advanced Study, White had begun learning anthropology and became fascinated with kinship. During his stay at the University of Chicago White was able to finish An Anatomy of Kinship, published in 1963 in the Prentice-Hall series in Mathematical Analysis of Social Behavior, with James Coleman and James March as chief editors. The book received significant attention from many mathematical sociologists of the time, and contributed greatly to establishing White as a model builder.[18]
In 1963, White left Chicago to become an associate professor of sociology in the Harvard Department of Social Relations – the same department founded by Talcott Parsons and still heavily influenced by Parsons's structural-functionalist paradigm. As White had previously taught only graduate courses at Carnegie and Chicago, his first undergraduate course was An Introduction to Social Relations (see Influence) at Harvard, which became infamous among network analysts. As he "thought existing textbooks were grotesquely unscientific,"[19] the syllabus of the class was noted for including few readings by sociologists and comparatively more readings by anthropologists, social psychologists, and historians.[20] White was also a vocal critic of what he called the "attributes and attitudes" approach of Parsonsian sociology, and came to be the leader of what has been variously known as the "Harvard Revolution," the "Harvard breakthrough," or the "Harvard renaissance" in social networks. He worked closely with the small-group researchers George C. Homans and Robert F. Bales, which was largely compatible with his prior work in organizational research and his efforts to formalize network analysis. Overlapping White's early years, Charles Tilly, a graduate of the Harvard Department of Social Relations, was a visiting professor at Harvard and attended some of White's lectures; network thinking heavily influenced Tilly's work.
White remained at Harvard until 1986. After a divorce from his wife, Cynthia (with whom he had published several works), and wanting a change, he accepted the University of Arizona sociology department's offer of the position of department chair.[21] He remained at Arizona for two years.
In 1988, White joined Columbia University as a professor of sociology and was the director of the Paul F. Lazarsfeld Center for the Social Sciences. This was at the early stages of what is perhaps the second major revolution in network analysis, the so-called "New York School" of relational sociology. This invisible college included Columbia as well as the New School for Social Research and New York University. While the Harvard Revolution involved substantial advances in methods for measuring and modeling social structure, the New York School involved the merging of cultural sociology with network-structural sociology, two traditions which had previously been antagonistic. White stood at the heart of this, and his magnum opus Identity and Control was a testament to this new relational sociology.
In 1992, White received the named position of Giddings Professor of Sociology and chaired the department of sociology for several years until his retirement. He later resided in Tucson, Arizona.
A good summary of White's sociological contributions is provided by his former student and collaborator, Ronald Breiger:
White addresses problems of social structure that cut across the range of the social sciences. Most notably, he has contributed (1) theories of role structures encompassing classificatory kinship systems of native Australian peoples and institutions of the contemporary West; (2) models based on equivalences of actors across networks of multiple types of social relation; (3) theorization of social mobility in systems of organizations; (4) a structural theory of social action that emphasizes control, agency, narrative, and identity; (5) a theory of artistic production; (6) a theory of economic production markets leading to the elaboration of a network ecology for market identities and new ways of accounting for profits, prices, and market shares; and (7) a theory of language use that emphasizes switching between social, cultural, and idiomatic domains within networks of discourse. His most explicit theoretical statement is Identity and Control: A Structural Theory of Social Action (1992), although several of the major components of his theory of the mutual shaping of networks, institutions, and agency are also readily apparent in Careers and Creativity: Social Forces in the Arts (1993), written for a less-specialized audience.[22]
More generally, White and his students sparked interest in looking at society as networks rather than as aggregates of individuals.[23]
This view is still controversial. In sociology and organizational science, it is difficult to measure cause and effect in a systematic way. Because of that, it is common to use sampling techniques to discover some sort of average in a population.
For instance, we are told almost daily how the average European or American feels about a topic. This allows social scientists and pundits to make inferences about cause and say "people are angry at the current administration because the economy is doing poorly." This kind of generalization certainly makes sense, but it does not tell us anything about an individual. This leads to the idea of an idealized individual, something that is the bedrock of modern economics.[24] Most modern economic theories look at social formations, like organizations, as products of individuals all acting in their own best interest.[25]
While this has proved useful in some cases, it does not account well for the knowledge that is required for the structures to sustain themselves. White and his students (and his students' students) have been developing models that incorporate the patterns of relationships into descriptions of social formations. This line of work includes economic sociology, network sociology, and structuralist sociology.
White's most comprehensive work is Identity and Control. The first edition came out in 1992 and the second edition appeared in June 2008.
In this book, White discusses the social world, including “persons,” as emerging from patterns of relationships. He argues that it is a default human heuristic to organize the world in terms of attributes, but that this can often be a mistake. For instance, there are countless books on leadership that look for the attributes that make a good leader. However, no one is a leader without followers; the term describes a relationship one has with others. Without the relationships, there would be no leader. Likewise, an organization can be viewed as patterns of relationships. It would not “exist” if people did not honor and maintain specific relationships. White avoids giving attributes to things that emerge from patterns of relationships, something that goes against our natural instincts and requires some thought to process.[26]
Identity and Control has seven chapters. The first six are about social formations that control us and how our own judgment organizes our experience in ways that limit our actions. The final chapter is about "getting action" and how change is possible. One of the ways is by "proxy," empowering others.
Harrison White also developed a perspective on market structure and competition in his 2002 book, Markets from Networks, based on the idea that markets are embedded in social networks. His approach is related to economic concepts such as uncertainty (as defined by Frank Knight), monopolistic competition (Edward Chamberlin), and signalling (Spence). This sociological perspective on markets has influenced both sociologists (see Joel M. Podolny) and economists (see Olivier Favereau).
White's later work discussed linguistics. In Identity and Control he emphasized "switching" between network domains as a way to account for grammar that, unlike much of standard linguistic theory, does not ignore meaning. He had a long-standing interest in organizations, and before he retired he worked on how strategy fits into the overall models of social construction he had developed.
In addition to his own publications, White is widely credited with training many influential generations of network analysts in sociology, including the early cohorts of the 1960s and 1970s during the Harvard Revolution, as well as those of the 1980s and 1990s at Columbia during the New York School of relational sociology.
White's student and teaching assistant, Michael Schwartz, took notes in the spring of 1965 of White's undergraduate Introduction to Social Relations course (Soc Rel 10); these became known as Notes on the Constituents of Social Structure. The notes were circulated among network analysis students and aficionados until finally published in 2008 in Sociologica. As the popular social science blog Orgtheory.net explains, "in contemporary American sociology, there are no set of student-taken notes that have had as much underground influence as those from Harrison White's introductory Soc Rel 10 seminar at Harvard."[27]
The first generation of Harvard graduate students who trained with White during the 1960s went on to be a formidable cohort of network-analytically inclined sociologists. His first graduate student at Harvard was Edward Laumann, who went on to develop one of the most widely used methods of studying personal networks, the ego-network survey (developed with one of Laumann's students at the University of Chicago, Ronald Burt). Several of them went on to contribute to the "Toronto school" of structural analysis. Barry Wellman, for instance, contributed heavily to the cross-fertilization of network analysis and community studies, later contributing to the earliest studies of online communities. Another of White's earliest students at Harvard was Nancy Lee (now Nancy Howell), who used social network analysis in her groundbreaking study of how women seeking an abortion found willing doctors before Roe v. Wade. She found that women located doctors through links of friends and acquaintances, and were on average four degrees separated from the doctor. White also trained later additions to the Toronto school, Harriet Friedmann ('77) and Bonnie Erickson ('73).
One of White's most well-known graduate students was Mark Granovetter, who attended Harvard as a Ph.D. student from 1965 to 1970. Granovetter studied how people got jobs and discovered they were more likely to get them through acquaintances than through friends. Recounting the development of his widely cited 1973 article, "The Strength of Weak Ties", Granovetter credits White's lectures, and specifically White's description of sociometric work by Anatol Rapoport and William Horvath, with giving him the idea. This, tied with earlier work by Stanley Milgram (who was also in the Harvard Department of Social Relations 1963–1967, though not one of White's students), gave scientists a better sense of how the social world was organized: into many dense groups with "weak ties" between them. Granovetter's work provided the theoretical background for Malcolm Gladwell's The Tipping Point. This line of research is still actively being pursued by Duncan Watts, Albert-László Barabási, Mark Newman, Jon Kleinberg and others.
White's research on "vacancy chains" was assisted by a number of graduate students, including Michael Schwartz and Ivan Chase. The outcome of this was the book Chains of Opportunity. The book described a model of social mobility in which the roles and the people that filled them were independent. The idea of a person being partially created by their position in patterns of relationships has become a recurring theme in his work. This provided a quantitative analysis of social roles, giving scientists new ways to measure society that were not based on statistical aggregates.
During the 1970s, White worked with his students Scott Boorman, Ronald Breiger, and François Lorrain on a series of articles that introduced a procedure called "blockmodeling" and the concept of "structural equivalence." The key idea behind these articles was identifying a "position" or "role" through similarities in individuals' relations within the social structure, rather than through characteristics intrinsic to the individuals or a priori definitions of group membership.
At Columbia, White trained a new cohort of researchers who pushed network analysis beyond methodological rigor to theoretical extension and the incorporation of previously neglected concepts, namely, culture and language.
Many of his students and mentees have had a strong impact in sociology. Other former students include Michael Schwartz and Ivan Chase, both professors at Stony Brook; Joel Levine, who founded Dartmouth College's Math/Social Science program; Edward Laumann, who pioneered survey-based egocentric network research and became a dean and provost at the University of Chicago; Kathleen Carley at Carnegie Mellon University; Ronald Breiger at the University of Arizona; Barry Wellman at the University of Toronto and then the NetLab Network; Peter Bearman at Columbia University; Bonnie Erickson (Toronto); Christopher Winship (Harvard University); Nicholas Mullins (Virginia Tech, deceased); Margaret Theeman (Boulder); Brian Sherman (retired, Atlanta); Nancy Howell (retired, Toronto); David R. Gibson (University of Notre Dame); Matthew Bothner (University of Chicago); Ann Mische (University of Notre Dame); Kyriakos Kontopoulos (Temple University); and Frédéric Godart (INSEAD).[28]
White died at an assisted living facility in Tucson, Arizona, on May 18, 2024, at the age of 94.[29]
|
https://en.wikipedia.org/wiki/Harrison_White
|
A password manager is a software program that prevents password fatigue by automatically generating, autofilling and storing passwords.[1][2] It can do this for local applications or web applications such as online shops or social media.[3] Web browsers tend to have a built-in password manager. Password managers typically require a user to create and remember a single master password that unlocks access to all stored passwords. Password managers can integrate multi-factor authentication.
The first password manager software designed to securely store passwords was Password Safe, created by Bruce Schneier and released as a free utility on September 5, 1997.[4] Designed for Microsoft Windows 95, Password Safe used Schneier's Blowfish algorithm to encrypt passwords and other sensitive data. Although Password Safe was released as a free utility, due to export restrictions on cryptography from the United States, only U.S. and Canadian citizens and permanent residents were initially allowed to download it.[4]
As of October 2024, the built-in Google Password Manager in Google Chrome had become the most used password manager.[5]
Some applications store passwords as an unencrypted file, leaving the passwords easily accessible to malware or people attempting to steal personal information.
Some password managers require a user-selected master password or passphrase to form the key used to encrypt the stored passwords. The security of this approach depends on the strength of the chosen password (which may itself be stolen by malware or guessed), and it also requires that the passphrase is never stored locally where a malicious program or individual could read it. A compromised master password may render all of the protected passwords vulnerable, meaning that a single point of entry can compromise the confidentiality of sensitive information. This is known as a single point of failure.
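As a minimal sketch of this design (not any particular product's implementation; the salt size and iteration count are illustrative assumptions), a key can be derived from the master passphrase with a key-derivation function such as PBKDF2, so that only ciphertext is ever written to disk:

```python
import base64
import hashlib
import os
from cryptography.fernet import Fernet  # pip install cryptography

# Illustrative only: derive an encryption key from a master password with
# PBKDF2, then encrypt a stored credential. The salt size and iteration
# count are placeholder choices, not a specific product's settings.
master_password = b"correct horse battery staple"
salt = os.urandom(16)  # stored alongside the vault; it need not be secret

key = hashlib.pbkdf2_hmac("sha256", master_password, salt, 600_000, dklen=32)
f = Fernet(base64.urlsafe_b64encode(key))

ciphertext = f.encrypt(b"example.com password: hunter2")
print(f.decrypt(ciphertext))  # recoverable only with the master password
```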
While password managers offer robust security for credentials, their effectiveness hinges on the user's device security. If a device is compromised by malware like Raccoon, which excels at stealing data, the password manager's protections can be nullified. Malware like keyloggers can steal the master password used to access the password manager, granting full access to all stored credentials. Clipboard sniffers can capture sensitive information copied from the manager, and some malware might even steal the encrypted password vault file itself. In essence, a compromised device with password-stealing malware can bypass the security measures of the password manager, leaving the stored credentials vulnerable.[6]
As with other password authentication techniques, key logging or acoustic cryptanalysis may be used to guess or copy the master password. Some password managers attempt to use virtual keyboards to reduce this risk, though this is still vulnerable to key loggers that capture keystrokes and send them to whoever is trying to access the confidential information.[7]
Cloud-based password managers offer a centralized location for storing login credentials. However, this approach raises security concerns. One potential vulnerability is a data breach at the password manager itself. If such an event were to occur, attackers could potentially gain access to a large number of user credentials. A 2022 security incident involving LastPass exemplifies this risk.[6]
Some password managers may include a password generator. Generated passwords may be guessable if the password manager uses a weak method of randomly generating a "seed" for all passwords generated by the program. There are documented cases, like the one with Kaspersky Password Manager in 2021, where a flaw in the password generation method resulted in predictable passwords.[8][9]
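A minimal sketch of why seeding matters (illustrative, not Kaspersky's actual code): a general-purpose PRNG seeded with the current time yields passwords reproducible by anyone who can guess the timestamp, whereas a cryptographically secure source such as Python's secrets module avoids this class of flaw.

```python
import random
import secrets
import string
import time

ALPHABET = string.ascii_letters + string.digits

# Weak: seeding with the current time means an attacker who can guess the
# timestamp (to the second) can regenerate the identical "random" password.
def weak_password(length=16):
    rng = random.Random(int(time.time()))
    return "".join(rng.choice(ALPHABET) for _ in range(length))

# Better: a cryptographically secure source of randomness.
def strong_password(length=16):
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

print(weak_password())    # reproducible from the timestamp
print(strong_password())  # not predictable
```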
A 2014 paper by researchers at Carnegie Mellon University found that while browsers refuse to autofill passwords if the login page protocol differs from the protocol in effect when the password was saved (HTTP vs. HTTPS), some password managers insecurely filled passwords for the unencrypted (HTTP) version of passwords saved for encrypted (HTTPS) sites. Additionally, most managers lacked protection against iframe- and redirection-based attacks, potentially exposing additional passwords when password synchronization was used across multiple devices.[10]
Various high-profile websites have attempted to block password managers, often backing down when publicly challenged.[11][12][13] Reasons cited have included protecting against automated attacks, protecting against phishing, blocking malware, or simply denying compatibility. The Trusteer client security software from IBM features explicit options to block password managers.[14][15]
Such blocking has been criticized by information security professionals as making users less secure.[13][15] The typical blocking implementation involves setting autocomplete='off' on the relevant password web form.
This option is consequently now ignored for password fields on encrypted sites by many browsers,[10] including Firefox 38,[16] Chrome 34,[17] and Safari from about version 7.0.2.[18]
In recent years, some websites have made it harder for users to rely on password managers by disabling features like password autofill or by blocking the ability to paste into password fields. Companies like T-Mobile, Barclaycard, and Western Union have implemented these restrictions, often citing security concerns such as malware prevention, phishing protection, or reducing automated attacks. However, cybersecurity experts have criticized these measures, arguing they can backfire by encouraging users to reuse weak passwords or rely on memory alone, ultimately making accounts more vulnerable. Some organizations, such as British Gas, have reversed these restrictions after public feedback, but the practice still persists on many websites.[19]
|
https://en.wikipedia.org/wiki/Password_manager
|
High-performance technical computing (HPTC) is the application of high-performance computing (HPC) to technical, as opposed to business or scientific, problems (although the lines between the various disciplines are necessarily vague). HPTC often refers to the application of HPC to engineering problems and includes computational fluid dynamics, simulation, modeling, and seismic tomography (particularly in the petrochemical industry).
|
https://en.wikipedia.org/wiki/High-performance_technical_computing
|
Quantum mechanics is the fundamental physical theory that describes the behavior of matter and of light; its unusual characteristics typically occur at and below the scale of atoms.[2]: 1.1 It is the foundation of all quantum physics, which includes quantum chemistry, quantum field theory, quantum technology, and quantum information science.
Quantum mechanics can describe many systems that classical physics cannot. Classical physics can describe many aspects of nature at an ordinary (macroscopic and (optical) microscopic) scale, but is not sufficient for describing them at very small submicroscopic (atomic and subatomic) scales. Classical mechanics can be derived from quantum mechanics as an approximation that is valid at ordinary scales.[3]
Quantum systems have bound states that are quantized to discrete values of energy, momentum, angular momentum, and other quantities, in contrast to classical systems where these quantities can be measured continuously. Measurements of quantum systems show characteristics of both particles and waves (wave–particle duality), and there are limits to how accurately the value of a physical quantity can be predicted prior to its measurement, given a complete set of initial conditions (the uncertainty principle).
Quantum mechanics arose gradually from theories to explain observations that could not be reconciled with classical physics, such as Max Planck's solution in 1900 to the black-body radiation problem, and the correspondence between energy and frequency in Albert Einstein's 1905 paper, which explained the photoelectric effect. These early attempts to understand microscopic phenomena, now known as the "old quantum theory", led to the full development of quantum mechanics in the mid-1920s by Niels Bohr, Erwin Schrödinger, Werner Heisenberg, Max Born, Paul Dirac and others. The modern theory is formulated in various specially developed mathematical formalisms. In one of them, a mathematical entity called the wave function provides information, in the form of probability amplitudes, about what measurements of a particle's energy, momentum, and other physical properties may yield.
Quantum mechanics allows the calculation of properties and behaviour of physical systems. It is typically applied to microscopic systems: molecules, atoms and subatomic particles. It has been demonstrated to hold for complex molecules with thousands of atoms,[4] but its application to human beings raises philosophical problems, such as Wigner's friend, and its application to the universe as a whole remains speculative.[5] Predictions of quantum mechanics have been verified experimentally to an extremely high degree of accuracy. For example, the refinement of quantum mechanics for the interaction of light and matter, known as quantum electrodynamics (QED), has been shown to agree with experiment to within 1 part in 10¹² when predicting the magnetic properties of an electron.[6]
A fundamental feature of the theory is that it usually cannot predict with certainty what will happen, but only give probabilities. Mathematically, a probability is found by taking the square of the absolute value of a complex number, known as a probability amplitude. This is known as the Born rule, named after physicist Max Born. For example, a quantum particle like an electron can be described by a wave function, which associates to each point in space a probability amplitude. Applying the Born rule to these amplitudes gives a probability density function for the position that the electron will be found to have when an experiment is performed to measure it. This is the best the theory can do; it cannot say for certain where the electron will be found. The Schrödinger equation relates the collection of probability amplitudes that pertain to one moment of time to the collection of probability amplitudes that pertain to another.[7]: 67–87
One consequence of the mathematical rules of quantum mechanics is a tradeoff in predictability between measurable quantities. The most famous form of this uncertainty principle says that no matter how a quantum particle is prepared or how carefully experiments upon it are arranged, it is impossible to have a precise prediction for a measurement of its position and also at the same time for a measurement of its momentum.[7]: 427–435
Another consequence of the mathematical rules of quantum mechanics is the phenomenon of quantum interference, which is often illustrated with the double-slit experiment. In the basic version of this experiment, a coherent light source, such as a laser beam, illuminates a plate pierced by two parallel slits, and the light passing through the slits is observed on a screen behind the plate.[8]: 102–111[2]: 1.1–1.8 The wave nature of light causes the light waves passing through the two slits to interfere, producing bright and dark bands on the screen – a result that would not be expected if light consisted of classical particles.[8] However, the light is always found to be absorbed at the screen at discrete points, as individual particles rather than waves; the interference pattern appears via the varying density of these particle hits on the screen. Furthermore, versions of the experiment that include detectors at the slits find that each detected photon passes through one slit (as would a classical particle), and not through both slits (as would a wave).[8]: 109[9][10] However, such experiments demonstrate that particles do not form the interference pattern if one detects which slit they pass through. This behavior is known as wave–particle duality. In addition to light, electrons, atoms, and molecules are all found to exhibit the same dual behavior when fired towards a double slit.[2]
Another non-classical phenomenon predicted by quantum mechanics is quantum tunnelling: a particle that goes up against a potential barrier can cross it, even if its kinetic energy is smaller than the maximum of the potential.[11] In classical mechanics this particle would be trapped. Quantum tunnelling has several important consequences, enabling radioactive decay and nuclear fusion in stars, and finding applications such as scanning tunnelling microscopy, the tunnel diode and the tunnel field-effect transistor.[12][13]
When quantum systems interact, the result can be the creation of quantum entanglement: their properties become so intertwined that a description of the whole solely in terms of the individual parts is no longer possible. Erwin Schrödinger called entanglement "...the characteristic trait of quantum mechanics, the one that enforces its entire departure from classical lines of thought".[14] Quantum entanglement enables quantum computing and is part of quantum communication protocols, such as quantum key distribution and superdense coding.[15] Contrary to popular misconception, entanglement does not allow sending signals faster than light, as demonstrated by the no-communication theorem.[15]
Another possibility opened by entanglement is testing for "hidden variables", hypothetical properties more fundamental than the quantities addressed in quantum theory itself, knowledge of which would allow more exact predictions than quantum theory provides. A collection of results, most significantly Bell's theorem, have demonstrated that broad classes of such hidden-variable theories are in fact incompatible with quantum physics. According to Bell's theorem, if nature actually operates in accord with any theory of local hidden variables, then the results of a Bell test will be constrained in a particular, quantifiable way. Many Bell tests have been performed and they have shown results incompatible with the constraints imposed by local hidden variables.[16][17]
It is not possible to present these concepts in more than a superficial way without introducing the mathematics involved; understanding quantum mechanics requires not only manipulating complex numbers, but also linear algebra, differential equations, group theory, and other more advanced subjects.[18][19] Accordingly, this article will present a mathematical formulation of quantum mechanics and survey its application to some useful and oft-studied examples.
In the mathematically rigorous formulation of quantum mechanics, the state of a quantum mechanical system is a vector $\psi$ belonging to a (separable) complex Hilbert space $\mathcal{H}$. This vector is postulated to be normalized under the Hilbert space inner product, that is, it obeys $\langle\psi,\psi\rangle = 1$, and it is well-defined up to a complex number of modulus 1 (the global phase), that is, $\psi$ and $e^{i\alpha}\psi$ represent the same physical system. In other words, the possible states are points in the projective space of a Hilbert space, usually called the complex projective space. The exact nature of this Hilbert space is dependent on the system – for example, for describing position and momentum the Hilbert space is the space of complex square-integrable functions $L^2(\mathbb{C})$, while the Hilbert space for the spin of a single proton is simply the space of two-dimensional complex vectors $\mathbb{C}^2$ with the usual inner product.
Physical quantities of interest – position, momentum, energy, spin – are represented by observables, which are Hermitian (more precisely, self-adjoint) linear operators acting on the Hilbert space. A quantum state can be an eigenvector of an observable, in which case it is called an eigenstate, and the associated eigenvalue corresponds to the value of the observable in that eigenstate. More generally, a quantum state will be a linear combination of the eigenstates, known as a quantum superposition. When an observable is measured, the result will be one of its eigenvalues with probability given by the Born rule: in the simplest case the eigenvalue $\lambda$ is non-degenerate and the probability is given by $|\langle\vec{\lambda},\psi\rangle|^2$, where $\vec{\lambda}$ is its associated unit-length eigenvector. More generally, the eigenvalue is degenerate and the probability is given by $\langle\psi, P_\lambda\psi\rangle$, where $P_\lambda$ is the projector onto its associated eigenspace. In the continuous case, these formulas give instead the probability density.
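The Born rule is easy to check numerically in finite dimensions: diagonalize a Hermitian matrix and take the squared overlaps of the state with its orthonormal eigenvectors. A small sketch (the observable and state are arbitrary choices for illustration):

```python
import numpy as np

# A Hermitian observable on C^2 (here: the Pauli-x matrix) and a normalized state.
A = np.array([[0, 1],
              [1, 0]], dtype=complex)
psi = np.array([1, 0], dtype=complex)  # the state |0>

# eigh returns eigenvalues and orthonormal eigenvectors for Hermitian matrices.
eigvals, eigvecs = np.linalg.eigh(A)

# Born rule: probability of outcome lambda_k is |<lambda_k|psi>|^2.
probs = np.abs(eigvecs.conj().T @ psi) ** 2
for lam, p in zip(eigvals, probs):
    print(f"outcome {lam:+.0f} with probability {p:.2f}")  # 0.5 each
print("total:", probs.sum())  # probabilities sum to 1
```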
After the measurement, if result $\lambda$ was obtained, the quantum state is postulated to collapse to $\vec{\lambda}$ in the non-degenerate case, or to $P_\lambda\psi/\sqrt{\langle\psi,P_\lambda\psi\rangle}$ in the general case. The probabilistic nature of quantum mechanics thus stems from the act of measurement. This is one of the most difficult aspects of quantum systems to understand. It was the central topic in the famous Bohr–Einstein debates, in which the two scientists attempted to clarify these fundamental principles by way of thought experiments. In the decades after the formulation of quantum mechanics, the question of what constitutes a "measurement" has been extensively studied. Newer interpretations of quantum mechanics have been formulated that do away with the concept of "wave function collapse" (see, for example, the many-worlds interpretation). The basic idea is that when a quantum system interacts with a measuring apparatus, their respective wave functions become entangled so that the original quantum system ceases to exist as an independent entity (see Measurement in quantum mechanics[20]).
The time evolution of a quantum state is described by the Schrödinger equation: $$i\hbar\frac{\partial}{\partial t}\psi(t) = H\psi(t).$$ Here $H$ denotes the Hamiltonian, the observable corresponding to the total energy of the system, and $\hbar$ is the reduced Planck constant. The constant $i\hbar$ is introduced so that the Hamiltonian is reduced to the classical Hamiltonian in cases where the quantum system can be approximated by a classical system; the ability to make such an approximation in certain limits is called the correspondence principle.
The solution of this differential equation is given by $$\psi(t) = e^{-iHt/\hbar}\psi(0).$$ The operator $U(t) = e^{-iHt/\hbar}$ is known as the time-evolution operator, and has the crucial property that it is unitary. This time evolution is deterministic in the sense that – given an initial quantum state $\psi(0)$ – it makes a definite prediction of what the quantum state $\psi(t)$ will be at any later time.[21]
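This can be verified numerically for a finite-dimensional Hamiltonian; a short sketch (the matrix is an arbitrary Hermitian example) computes $U(t)$ with a matrix exponential and checks that it is unitary and norm-preserving:

```python
import numpy as np
from scipy.linalg import expm

hbar = 1.0  # natural units for this toy example
H = np.array([[1.0, 0.5],
              [0.5, -1.0]])           # an arbitrary Hermitian 2x2 Hamiltonian
t = 0.7

U = expm(-1j * H * t / hbar)          # time-evolution operator U(t) = exp(-iHt/hbar)

# Unitarity: U†U should be the identity.
print(np.allclose(U.conj().T @ U, np.eye(2)))  # True

# Norm is preserved: |psi(t)| = |psi(0)|.
psi0 = np.array([1, 1j]) / np.sqrt(2)
psit = U @ psi0
print(np.vdot(psit, psit).real)       # 1.0
```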
Some wave functions produce probability distributions that are independent of time, such as eigenstates of the Hamiltonian.[7]: 133–137 Many systems that are treated dynamically in classical mechanics are described by such "static" wave functions. For example, a single electron in an unexcited atom is pictured classically as a particle moving in a circular trajectory around the atomic nucleus, whereas in quantum mechanics, it is described by a static wave function surrounding the nucleus. For example, the electron wave function for an unexcited hydrogen atom is a spherically symmetric function known as an s orbital (Fig. 1).
Analytic solutions of the Schrödinger equation are known for very few relatively simple model Hamiltonians, including the quantum harmonic oscillator, the particle in a box, the dihydrogen cation, and the hydrogen atom. Even the helium atom – which contains just two electrons – has defied all attempts at a fully analytic treatment, admitting no solution in closed form.[22][23][24]
However, there are techniques for finding approximate solutions. One method, called perturbation theory, uses the analytic result for a simple quantum mechanical model to create a result for a related but more complicated model by (for example) the addition of a weak potential energy.[7]: 793 Another approximation method applies to systems for which quantum mechanics produces only small deviations from classical behavior. These deviations can then be computed based on the classical motion.[7]: 849
One consequence of the basic quantum formalism is the uncertainty principle. In its most familiar form, this states that no preparation of a quantum particle can imply simultaneously precise predictions both for a measurement of its position and for a measurement of its momentum.[25][26] Both position and momentum are observables, meaning that they are represented by Hermitian operators. The position operator $\hat{X}$ and momentum operator $\hat{P}$ do not commute, but rather satisfy the canonical commutation relation: $$[\hat{X}, \hat{P}] = i\hbar.$$ Given a quantum state, the Born rule lets us compute expectation values for both $X$ and $P$, and moreover for powers of them. Defining the uncertainty for an observable by a standard deviation, we have $$\sigma_X = \sqrt{\langle X^2\rangle - \langle X\rangle^2},$$ and likewise for the momentum: $$\sigma_P = \sqrt{\langle P^2\rangle - \langle P\rangle^2}.$$ The uncertainty principle states that $$\sigma_X\sigma_P \geq \frac{\hbar}{2}.$$ Either standard deviation can in principle be made arbitrarily small, but not both simultaneously.[27] This inequality generalizes to arbitrary pairs of self-adjoint operators $A$ and $B$. The commutator of these two operators is $$[A,B] = AB - BA,$$ and this provides the lower bound on the product of standard deviations: $$\sigma_A\sigma_B \geq \tfrac{1}{2}\left|\langle[A,B]\rangle\right|.$$
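For finite-dimensional observables, the general bound $\sigma_A\sigma_B \geq \tfrac{1}{2}|\langle[A,B]\rangle|$ can be checked directly. A small sketch using two Pauli matrices (an illustrative choice for which the bound happens to be saturated):

```python
import numpy as np

# Two non-commuting observables on C^2: the Pauli matrices sigma_x and sigma_y.
A = np.array([[0, 1], [1, 0]], dtype=complex)      # sigma_x
B = np.array([[0, -1j], [1j, 0]], dtype=complex)   # sigma_y
psi = np.array([1, 0], dtype=complex)              # state |0>

def expect(op, state):
    """Expectation value <state|op|state> (real for Hermitian op)."""
    return np.vdot(state, op @ state).real

def sigma(op, state):
    """Standard deviation sqrt(<op^2> - <op>^2)."""
    return np.sqrt(expect(op @ op, state) - expect(op, state) ** 2)

comm = A @ B - B @ A                               # [A, B] = 2i sigma_z
lhs = sigma(A, psi) * sigma(B, psi)
rhs = 0.5 * abs(np.vdot(psi, comm @ psi))
print(lhs, ">=", rhs)  # 1.0 >= 1.0: the bound is saturated for |0>
```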
Another consequence of the canonical commutation relation is that the position and momentum operators are Fourier transforms of each other, so that a description of an object according to its momentum is the Fourier transform of its description according to its position. The fact that dependence in momentum is the Fourier transform of the dependence in position means that the momentum operator is equivalent (up to an $i/\hbar$ factor) to taking the derivative according to the position, since in Fourier analysis differentiation corresponds to multiplication in the dual space. This is why in quantum equations in position space, the momentum $p_i$ is replaced by $-i\hbar\frac{\partial}{\partial x}$, and in particular in the non-relativistic Schrödinger equation in position space the momentum-squared term is replaced with a Laplacian times $-\hbar^2$.[25]
When two different quantum systems are considered together, the Hilbert space of the combined system is the tensor product of the Hilbert spaces of the two components. For example, let A and B be two quantum systems, with Hilbert spaces $\mathcal{H}_A$ and $\mathcal{H}_B$, respectively. The Hilbert space of the composite system is then $$\mathcal{H}_{AB} = \mathcal{H}_A \otimes \mathcal{H}_B.$$ If the state for the first system is the vector $\psi_A$ and the state for the second system is $\psi_B$, then the state of the composite system is $\psi_A \otimes \psi_B$. Not all states in the joint Hilbert space $\mathcal{H}_{AB}$ can be written in this form, however, because the superposition principle implies that linear combinations of these "separable" or "product states" are also valid. For example, if $\psi_A$ and $\phi_A$ are both possible states for system $A$, and likewise $\psi_B$ and $\phi_B$ are both possible states for system $B$, then $$\tfrac{1}{\sqrt{2}}\left(\psi_A \otimes \psi_B + \phi_A \otimes \phi_B\right)$$ is a valid joint state that is not separable. States that are not separable are called entangled.[28][29]
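In finite dimensions, the tensor product is the Kronecker product, which makes these statements easy to check numerically. A short sketch (using the standard two-qubit Bell state as the illustrative entangled state):

```python
import numpy as np

# Single-qubit basis states.
zero = np.array([1, 0], dtype=complex)
one = np.array([0, 1], dtype=complex)

# Product state |0> ⊗ |0> lives in the 4-dimensional tensor-product space.
product = np.kron(zero, zero)

# Bell state (|00> + |11>)/sqrt(2): a superposition of product states
# that cannot itself be written as a single tensor product.
bell = (np.kron(zero, zero) + np.kron(one, one)) / np.sqrt(2)

# Reshaping a two-qubit state into a 2x2 matrix reveals its Schmidt rank:
# rank 1 means separable, rank 2 means entangled.
print(np.linalg.matrix_rank(product.reshape(2, 2)))  # 1 (separable)
print(np.linalg.matrix_rank(bell.reshape(2, 2)))     # 2 (entangled)
```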
If the state for a composite system is entangled, it is impossible to describe either component system A or system B by a state vector. One can instead define reduced density matrices that describe the statistics that can be obtained by making measurements on either component system alone. This necessarily causes a loss of information, though: knowing the reduced density matrices of the individual systems is not enough to reconstruct the state of the composite system.[28][29] Just as density matrices specify the state of a subsystem of a larger system, analogously, positive operator-valued measures (POVMs) describe the effect on a subsystem of a measurement performed on a larger system. POVMs are extensively used in quantum information theory.[28][30]
As described above, entanglement is a key feature of models of measurement processes in which an apparatus becomes entangled with the system being measured. Systems interacting with the environment in which they reside generally become entangled with that environment, a phenomenon known as quantum decoherence. This can explain why, in practice, quantum effects are difficult to observe in systems larger than microscopic.[31]
There are many mathematically equivalent formulations of quantum mechanics. One of the oldest and most common is the "transformation theory" proposed by Paul Dirac, which unifies and generalizes the two earliest formulations of quantum mechanics – matrix mechanics (invented by Werner Heisenberg) and wave mechanics (invented by Erwin Schrödinger).[32] An alternative formulation of quantum mechanics is Feynman's path integral formulation, in which a quantum-mechanical amplitude is considered as a sum over all possible classical and non-classical paths between the initial and final states. This is the quantum-mechanical counterpart of the action principle in classical mechanics.[33]
The Hamiltonian $H$ is known as the generator of time evolution, since it defines a unitary time-evolution operator $U(t) = e^{-iHt/\hbar}$ for each value of $t$. From this relation between $U(t)$ and $H$, it follows that any observable $A$ that commutes with $H$ will be conserved: its expectation value will not change over time.[7]: 471 This statement generalizes: mathematically, any Hermitian operator $A$ can generate a family of unitary operators parameterized by a variable $t$. Under the evolution generated by $A$, any observable $B$ that commutes with $A$ will be conserved. Moreover, if $B$ is conserved by evolution under $A$, then $A$ is conserved under the evolution generated by $B$. This implies a quantum version of the result proven by Emmy Noether in classical (Lagrangian) mechanics: for every differentiable symmetry of a Hamiltonian, there exists a corresponding conservation law.
The simplest example of a quantum system with a position degree of freedom is a free particle in a single spatial dimension. A free particle is one which is not subject to external influences, so that its Hamiltonian consists only of its kinetic energy: $$H = \frac{1}{2m}P^2 = -\frac{\hbar^2}{2m}\frac{d^2}{dx^2}.$$ The general solution of the Schrödinger equation is given by $$\psi(x,t) = \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty} \hat{\psi}(k,0)\, e^{i(kx - \frac{\hbar k^2}{2m}t)}\,\mathrm{d}k,$$ which is a superposition of all possible plane waves $e^{i(kx - \frac{\hbar k^2}{2m}t)}$, which are eigenstates of the momentum operator with momentum $p = \hbar k$. The coefficients of the superposition are $\hat{\psi}(k,0)$, which is the Fourier transform of the initial quantum state $\psi(x,0)$.
It is not possible for the solution to be a single momentum eigenstate, or a single position eigenstate, as these are not normalizable quantum states.[note 1] Instead, we can consider a Gaussian wave packet: $$\psi(x,0) = \frac{1}{\sqrt[4]{\pi a}}\, e^{-\frac{x^2}{2a}},$$ which has Fourier transform, and therefore momentum distribution $$\hat{\psi}(k,0) = \sqrt[4]{\frac{a}{\pi}}\, e^{-\frac{ak^2}{2}}.$$ We see that as we make $a$ smaller the spread in position gets smaller, but the spread in momentum gets larger. Conversely, by making $a$ larger we make the spread in momentum smaller, but the spread in position gets larger. This illustrates the uncertainty principle.
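This trade-off can be confirmed numerically: for the Gaussian above, $\sigma_x = \sqrt{a/2}$ and $\sigma_k = \sqrt{1/(2a)}$, so the product is $1/2$ (i.e. $\hbar/2$ in units where $\hbar = 1$) regardless of $a$. A sketch (the grid ranges and sampling are arbitrary choices):

```python
import numpy as np

# Gaussian wave packet psi(x,0) ~ exp(-x^2 / 2a): numerically check that the
# position spread and momentum (k) spread trade off, with sigma_x * sigma_k = 1/2.
for a in (0.5, 1.0, 2.0):
    x = np.linspace(-20, 20, 4001)
    psi = (np.pi * a) ** -0.25 * np.exp(-x**2 / (2 * a))
    prob_x = np.abs(psi) ** 2
    sigma_x = np.sqrt(np.trapz(x**2 * prob_x, x))   # <x> = 0 by symmetry

    k = np.linspace(-20, 20, 4001)
    psi_k = (a / np.pi) ** 0.25 * np.exp(-a * k**2 / 2)
    prob_k = np.abs(psi_k) ** 2
    sigma_k = np.sqrt(np.trapz(k**2 * prob_k, k))   # <k> = 0 by symmetry

    print(f"a={a}: sigma_x={sigma_x:.3f}, sigma_k={sigma_k:.3f}, "
          f"product={sigma_x * sigma_k:.3f}")       # product ≈ 0.5
```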
As we let the Gaussian wave packet evolve in time, we see that its center moves through space at a constant velocity (like a classical particle with no forces acting on it). However, the wave packet will also spread out as time progresses, which means that the position becomes more and more uncertain. The uncertainty in momentum, however, stays constant.[34]
The particle in a one-dimensional potential energy box is the most mathematically simple example where restraints lead to the quantization of energy levels. The box is defined as having zero potential energy everywhere inside a certain region, and therefore infinite potential energy everywhere outside that region.[25]: 77–78 For the one-dimensional case in the $x$ direction, the time-independent Schrödinger equation may be written $$-\frac{\hbar^2}{2m}\frac{d^2\psi}{dx^2} = E\psi.$$
With the differential operator defined by $$\hat{p}_x = -i\hbar\frac{d}{dx},$$ the previous equation is evocative of the classic kinetic energy analogue, $$\frac{1}{2m}\hat{p}_x^2 = E,$$ with state $\psi$ in this case having energy $E$ coincident with the kinetic energy of the particle.
The general solutions of the Schrödinger equation for the particle in a box are $$\psi(x) = Ae^{ikx} + Be^{-ikx}, \qquad E = \frac{\hbar^2 k^2}{2m},$$ or, from Euler's formula, $$\psi(x) = C\sin(kx) + D\cos(kx).$$
The infinite potential walls of the box determine the values of $C$, $D$, and $k$ at $x = 0$ and $x = L$, where $\psi$ must be zero. Thus, at $x = 0$, $$\psi(0) = 0 = C\sin(0) + D\cos(0) = D,$$ and so $D = 0$. At $x = L$, $$\psi(L) = 0 = C\sin(kL),$$ in which $C$ cannot be zero, as this would conflict with the postulate that $\psi$ has norm 1. Therefore, since $\sin(kL) = 0$, $kL$ must be an integer multiple of $\pi$: $$k = \frac{n\pi}{L}, \qquad n = 1, 2, 3, \ldots$$
This constraint on $k$ implies a constraint on the energy levels, yielding

$$E_n = \frac{\hbar^2 \pi^2 n^2}{2mL^2} = \frac{n^2 h^2}{8mL^2}.$$
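As a quick numerical illustration, the sketch below evaluates this formula; the choice of an electron confined to a 1 nm box is an assumption made for the example, not something fixed by the text:

```python
# Sketch: E_n = n^2 h^2 / (8 m L^2) for an electron in a 1 nm box.
h = 6.62607015e-34      # Planck constant, J s
m_e = 9.1093837015e-31  # electron mass, kg
L = 1e-9                # box width, m (illustrative choice)
eV = 1.602176634e-19    # joules per electron volt

for n in range(1, 5):
    E_n = n**2 * h**2 / (8 * m_e * L**2)
    print(f"E_{n} = {E_n / eV:.3f} eV")
# Levels scale as n^2: E_2 = 4 E_1, E_3 = 9 E_1, and so on.
```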
A finite potential well is the generalization of the infinite potential well problem to potential wells having finite depth. The finite potential well problem is mathematically more complicated than the infinite particle-in-a-box problem as the wave function is not pinned to zero at the walls of the well. Instead, the wave function must satisfy more complicated mathematical boundary conditions as it is nonzero in regions outside the well. Another related problem is that of the rectangular potential barrier, which furnishes a model for the quantum tunneling effect that plays an important role in the performance of modern technologies such as flash memory and scanning tunneling microscopy.
As in the classical case, the potential for the quantum harmonic oscillator is given by[7]: 234

$$V(x) = \frac{1}{2} m\omega^2 x^2.$$
This problem can either be treated by directly solving the Schrödinger equation, which is not trivial, or by using the more elegant "ladder method" first proposed by Paul Dirac. The eigenstates are given by

$$\psi_n(x) = \sqrt{\frac{1}{2^n\, n!}} \cdot \left(\frac{m\omega}{\pi\hbar}\right)^{1/4} \cdot e^{-\frac{m\omega x^2}{2\hbar}} \cdot H_n\!\left(\sqrt{\frac{m\omega}{\hbar}}\, x\right), \qquad n = 0, 1, 2, \ldots,$$

where $H_n$ are the Hermite polynomials

$$H_n(x) = (-1)^n e^{x^2} \frac{d^n}{dx^n}\left(e^{-x^2}\right),$$

and the corresponding energy levels are

$$E_n = \hbar\omega\left(n + \frac{1}{2}\right).$$
This is another example illustrating the discretization of energy for bound states.
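A small sketch makes the formula concrete. The code below adopts the convention $\hbar = m = \omega = 1$ (so the normalization prefactor reduces to $\pi^{-1/4}$), builds the Hermite polynomials from their standard recurrence, and checks numerically that each eigenstate is normalized:

```python
# Sketch: evaluating the harmonic-oscillator eigenstates psi_n(x).
import math
import numpy as np

def hermite(n, x):
    """Physicists' Hermite polynomials via H_{k+1} = 2x H_k - 2k H_{k-1}."""
    h_prev, h = np.ones_like(x), 2 * x
    if n == 0:
        return h_prev
    for k in range(1, n):
        h_prev, h = h, 2 * x * h - 2 * k * h_prev
    return h

def psi(n, x):
    norm = math.sqrt(1.0 / (2.0**n * math.factorial(n))) * np.pi**-0.25
    return norm * np.exp(-x**2 / 2) * hermite(n, x)

x = np.linspace(-10, 10, 20001)
for n in range(4):
    norm = np.trapz(psi(n, x) ** 2, x)
    print(f"n = {n}:  norm = {norm:.6f},  E_n = {n + 0.5}")
# Each state integrates to 1, and the energies hbar*omega*(n + 1/2) are
# equally spaced -- the discretization discussed above.
```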
The Mach–Zehnder interferometer (MZI) illustrates the concepts of superposition and interference with linear algebra in dimension 2, rather than differential equations. It can be seen as a simplified version of the double-slit experiment, but it is of interest in its own right, for example in the delayed choice quantum eraser, the Elitzur–Vaidman bomb tester, and in studies of quantum entanglement.[35][36]
We can model a photon going through the interferometer by considering that at each point it can be in a superposition of only two paths: the "lower" path which starts from the left, goes straight through both beam splitters, and ends at the top, and the "upper" path which starts from the bottom, goes straight through both beam splitters, and ends at the right. The quantum state of the photon is therefore a vector $\psi \in \mathbb{C}^2$ that is a superposition of the "lower" path $\psi_l = \begin{pmatrix} 1 \\ 0 \end{pmatrix}$ and the "upper" path $\psi_u = \begin{pmatrix} 0 \\ 1 \end{pmatrix}$, that is, $\psi = \alpha\psi_l + \beta\psi_u$ for complex $\alpha, \beta$. In order to respect the postulate that $\langle \psi, \psi \rangle = 1$ we require that $|\alpha|^2 + |\beta|^2 = 1$.
Both beam splitters are modelled as the unitary matrix $B = \frac{1}{\sqrt{2}} \begin{pmatrix} 1 & i \\ i & 1 \end{pmatrix}$, which means that when a photon meets the beam splitter it will either stay on the same path with a probability amplitude of $1/\sqrt{2}$, or be reflected to the other path with a probability amplitude of $i/\sqrt{2}$. The phase shifter on the upper arm is modelled as the unitary matrix $P = \begin{pmatrix} 1 & 0 \\ 0 & e^{i\Delta\Phi} \end{pmatrix}$, which means that if the photon is on the "upper" path it will gain a relative phase of $\Delta\Phi$, and it will stay unchanged if it is in the lower path.
A photon that enters the interferometer from the left will then be acted upon with a beam splitter $B$, a phase shifter $P$, and another beam splitter $B$, and so end up in the state

$$BPB\psi_l = ie^{i\Delta\Phi/2} \begin{pmatrix} -\sin(\Delta\Phi/2) \\ \cos(\Delta\Phi/2) \end{pmatrix},$$

and the probabilities that it will be detected at the right or at the top are given respectively by

$$p(u) = |\langle \psi_u, BPB\psi_l \rangle|^2 = \cos^2\frac{\Delta\Phi}{2}, \qquad p(l) = |\langle \psi_l, BPB\psi_l \rangle|^2 = \sin^2\frac{\Delta\Phi}{2}.$$

One can therefore use the Mach–Zehnder interferometer to estimate the phase shift by estimating these probabilities.
It is interesting to consider what would happen if the photon were definitely in either the "lower" or "upper" paths between the beam splitters. This can be accomplished by blocking one of the paths, or equivalently by removing the first beam splitter (and feeding the photon from the left or the bottom, as desired). In both cases, there will be no interference between the paths anymore, and the probabilities are given by $p(u) = p(l) = 1/2$, independently of the phase $\Delta\Phi$. From this we can conclude that the photon does not take one path or another after the first beam splitter, but rather that it is in a genuine quantum superposition of the two paths.[37]
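The whole calculation fits in a few lines of linear algebra. The sketch below reproduces the probabilities derived above and also shows the interference disappearing when the first beam splitter is removed (the helper name `probabilities` is an illustrative choice):

```python
# Sketch: Mach-Zehnder probabilities from the 2x2 matrices B and P above.
import numpy as np

def probabilities(dphi, first_splitter=True):
    B = np.array([[1, 1j], [1j, 1]]) / np.sqrt(2)    # beam splitter
    P = np.array([[1, 0], [0, np.exp(1j * dphi)]])   # phase shifter, upper arm
    psi_l = np.array([1, 0], dtype=complex)          # photon enters on "lower" path
    psi = B @ P @ (B @ psi_l if first_splitter else psi_l)
    return np.abs(psi) ** 2                          # Born rule: [p(l), p(u)]

dphi = 0.7
p_l, p_u = probabilities(dphi)
print(p_l, np.sin(dphi / 2) ** 2)    # agree: sin^2(dPhi/2)
print(p_u, np.cos(dphi / 2) ** 2)    # agree: cos^2(dPhi/2)
print(probabilities(dphi, first_splitter=False))   # [0.5, 0.5], phase-independent
```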
Quantum mechanics has had enormous success in explaining many of the features of our universe, with regard to small-scale and discrete quantities and interactions which cannot be explained by classical methods.[note 2] Quantum mechanics is often the only theory that can reveal the individual behaviors of the subatomic particles that make up all forms of matter (electrons, protons, neutrons, photons, and others). Solid-state physics and materials science are dependent upon quantum mechanics.[38]
In many aspects, modern technology operates at a scale where quantum effects are significant. Important applications of quantum theory include quantum chemistry, quantum optics, quantum computing, superconducting magnets, light-emitting diodes, the optical amplifier and the laser, the transistor and semiconductors such as the microprocessor, and medical and research imaging such as magnetic resonance imaging and electron microscopy.[39] Explanations for many biological and physical phenomena are rooted in the nature of the chemical bond, most notably the macro-molecule DNA.
The rules of quantum mechanics assert that the state space of a system is a Hilbert space and that observables of the system are Hermitian operators acting on vectors in that space – although they do not tell us which Hilbert space or which operators. These can be chosen appropriately in order to obtain a quantitative description of a quantum system, a necessary step in making physical predictions. An important guide for making these choices is the correspondence principle, a heuristic which states that the predictions of quantum mechanics reduce to those of classical mechanics in the regime of large quantum numbers.[40] One can also start from an established classical model of a particular system, and then try to guess the underlying quantum model that would give rise to the classical model in the correspondence limit. This approach is known as quantization.[41]: 299 [42]
When quantum mechanics was originally formulated, it was applied to models whose correspondence limit was non-relativistic classical mechanics. For instance, the well-known model of the quantum harmonic oscillator uses an explicitly non-relativistic expression for the kinetic energy of the oscillator, and is thus a quantum version of the classical harmonic oscillator.[7]: 234
Complications arise with chaotic systems, which do not have good quantum numbers, and quantum chaos studies the relationship between classical and quantum descriptions in these systems.[41]: 353
Quantum decoherence is a mechanism through which quantum systems lose coherence, and thus become incapable of displaying many typically quantum effects: quantum superpositions become simply probabilistic mixtures, and quantum entanglement becomes simply classical correlations.[7]: 687–730 Quantum coherence is not typically evident at macroscopic scales, though at temperatures approaching absolute zero quantum behavior may manifest macroscopically.[note 3]
Many macroscopic properties of a classical system are a direct consequence of the quantum behavior of its parts. For example, the stability of bulk matter (consisting of atoms and molecules which would quickly collapse under electric forces alone), the rigidity of solids, and the mechanical, thermal, chemical, optical and magnetic properties of matter are all results of the interaction of electric charges under the rules of quantum mechanics.[43]
Early attempts to merge quantum mechanics with special relativity involved the replacement of the Schrödinger equation with a covariant equation such as the Klein–Gordon equation or the Dirac equation. While these theories were successful in explaining many experimental results, they had certain unsatisfactory qualities stemming from their neglect of the relativistic creation and annihilation of particles. A fully relativistic quantum theory required the development of quantum field theory, which applies quantization to a field (rather than a fixed set of particles). The first complete quantum field theory, quantum electrodynamics, provides a fully quantum description of the electromagnetic interaction. Quantum electrodynamics is, along with general relativity, one of the most accurate physical theories ever devised.[44][45]
The full apparatus of quantum field theory is often unnecessary for describing electrodynamic systems. A simpler approach, one that has been used since the inception of quantum mechanics, is to treat charged particles as quantum mechanical objects being acted on by a classical electromagnetic field. For example, the elementary quantum model of the hydrogen atom describes the electric field of the hydrogen atom using a classical $-e^2/(4\pi\epsilon_0 r)$ Coulomb potential.[7]: 285 Likewise, in a Stern–Gerlach experiment, a charged particle is modeled as a quantum system, while the background magnetic field is described classically.[41]: 26 This "semi-classical" approach fails if quantum fluctuations in the electromagnetic field play an important role, such as in the emission of photons by charged particles.
Quantum field theories for the strong nuclear force and the weak nuclear force have also been developed. The quantum field theory of the strong nuclear force is called quantum chromodynamics, and describes the interactions of subnuclear particles such as quarks and gluons. The weak nuclear force and the electromagnetic force were unified, in their quantized forms, into a single quantum field theory (known as electroweak theory), by the physicists Abdus Salam, Sheldon Glashow and Steven Weinberg.[46]
Even though the predictions of both quantum theory and general relativity have been supported by rigorous and repeated empirical evidence, their abstract formalisms contradict each other and they have proven extremely difficult to incorporate into one consistent, cohesive model. Gravity is negligible in many areas of particle physics, so that unification between general relativity and quantum mechanics is not an urgent issue in those particular applications. However, the lack of a correct theory of quantum gravity is an important issue in physical cosmology and the search by physicists for an elegant "Theory of Everything" (TOE). Consequently, resolving the inconsistencies between both theories has been a major goal of 20th- and 21st-century physics. This TOE would combine not only the models of subatomic physics but also derive the four fundamental forces of nature from a single force or phenomenon.[47]
One proposal for doing so is string theory, which posits that the point-like particles of particle physics are replaced by one-dimensional objects called strings. String theory describes how these strings propagate through space and interact with each other. On distance scales larger than the string scale, a string looks just like an ordinary particle, with its mass, charge, and other properties determined by the vibrational state of the string. In string theory, one of the many vibrational states of the string corresponds to the graviton, a quantum mechanical particle that carries gravitational force.[48][49]
Another popular theory is loop quantum gravity (LQG), which describes quantum properties of gravity and is thus a theory of quantum spacetime. LQG is an attempt to merge and adapt standard quantum mechanics and standard general relativity. This theory describes space as an extremely fine fabric "woven" of finite loops called spin networks. The evolution of a spin network over time is called a spin foam. The characteristic length scale of a spin foam is the Planck length, approximately $1.616 \times 10^{-35}$ m, and so lengths shorter than the Planck length are not physically meaningful in LQG.[50]
Since its inception, the many counter-intuitive aspects and results of quantum mechanics have provoked strong philosophical debates and many interpretations. The arguments centre on the probabilistic nature of quantum mechanics, the difficulties with wavefunction collapse and the related measurement problem, and quantum nonlocality. Perhaps the only consensus that exists about these issues is that there is no consensus. Richard Feynman once said, "I think I can safely say that nobody understands quantum mechanics."[51] According to Steven Weinberg, "There is now in my opinion no entirely satisfactory interpretation of quantum mechanics."[52]
The views of Niels Bohr, Werner Heisenberg and other physicists are often grouped together as the "Copenhagen interpretation".[53][54] According to these views, the probabilistic nature of quantum mechanics is not a temporary feature which will eventually be replaced by a deterministic theory, but is instead a final renunciation of the classical idea of "causality". Bohr in particular emphasized that any well-defined application of the quantum mechanical formalism must always make reference to the experimental arrangement, due to the complementary nature of evidence obtained under different experimental situations. Copenhagen-type interpretations were adopted by Nobel laureates in quantum physics, including Bohr,[55] Heisenberg,[56] Schrödinger,[57] Feynman,[2] and Zeilinger,[58] as well as 21st-century researchers in quantum foundations.[59]
Albert Einstein, himself one of the founders of quantum theory, was troubled by its apparent failure to respect some cherished metaphysical principles, such as determinism and locality. Einstein's long-running exchanges with Bohr about the meaning and status of quantum mechanics are now known as the Bohr–Einstein debates. Einstein believed that underlying quantum mechanics must be a theory that explicitly forbids action at a distance. He argued that quantum mechanics was incomplete, a theory that was valid but not fundamental, analogous to how thermodynamics is valid, but the fundamental theory behind it is statistical mechanics. In 1935, Einstein and his collaborators Boris Podolsky and Nathan Rosen published an argument that the principle of locality implies the incompleteness of quantum mechanics, a thought experiment later termed the Einstein–Podolsky–Rosen paradox.[note 4] In 1964, John Bell showed that EPR's principle of locality, together with determinism, was actually incompatible with quantum mechanics: they implied constraints on the correlations produced by distant systems, now known as Bell inequalities, that can be violated by entangled particles.[64] Since then several experiments have been performed to obtain these correlations, with the result that they do in fact violate Bell inequalities, and thus falsify the conjunction of locality with determinism.[16][17]
Bohmian mechanics shows that it is possible to reformulate quantum mechanics to make it deterministic, at the price of making it explicitly nonlocal. It attributes not only a wave function to a physical system, but in addition a real position, that evolves deterministically under a nonlocal guiding equation. The evolution of a physical system is given at all times by the Schrödinger equation together with the guiding equation; there is never a collapse of the wave function. This solves the measurement problem.[65]
Everett's many-worlds interpretation, formulated in 1956, holds that all the possibilities described by quantum theory simultaneously occur in a multiverse composed of mostly independent parallel universes.[66] This is a consequence of removing the axiom of the collapse of the wave packet. All possible states of the measured system and the measuring apparatus, together with the observer, are present in a real physical quantum superposition. While the multiverse is deterministic, we perceive non-deterministic behavior governed by probabilities, because we do not observe the multiverse as a whole, but only one parallel universe at a time. Exactly how this is supposed to work has been the subject of much debate. Several attempts have been made to make sense of this and derive the Born rule,[67][68] with no consensus on whether they have been successful.[69][70][71]
Relational quantum mechanics appeared in the late 1990s as a modern derivative of Copenhagen-type ideas,[72][73] and QBism was developed some years later.[74][75]
Quantum mechanics was developed in the early decades of the 20th century, driven by the need to explain phenomena that, in some cases, had been observed in earlier times. Scientific inquiry into the wave nature of light began in the 17th and 18th centuries, when scientists such as Robert Hooke, Christiaan Huygens and Leonhard Euler proposed a wave theory of light based on experimental observations.[76] In 1803 English polymath Thomas Young described the famous double-slit experiment.[77] This experiment played a major role in the general acceptance of the wave theory of light.
During the early 19th century, chemical research by John Dalton and Amedeo Avogadro lent weight to the atomic theory of matter, an idea that James Clerk Maxwell, Ludwig Boltzmann and others built upon to establish the kinetic theory of gases. The successes of kinetic theory gave further credence to the idea that matter is composed of atoms, yet the theory also had shortcomings that would only be resolved by the development of quantum mechanics.[78] While the early conception of atoms from Greek philosophy had been that they were indivisible units – the word "atom" deriving from the Greek for 'uncuttable' – the 19th century saw the formulation of hypotheses about subatomic structure. One important discovery in that regard was Michael Faraday's 1838 observation of a glow caused by an electrical discharge inside a glass tube containing gas at low pressure. Julius Plücker, Johann Wilhelm Hittorf and Eugen Goldstein carried on and improved upon Faraday's work, leading to the identification of cathode rays, which J. J. Thomson found to consist of subatomic particles that would be called electrons.[79][80]
The black-body radiation problem was discovered by Gustav Kirchhoff in 1859. In 1900, Max Planck proposed the hypothesis that energy is radiated and absorbed in discrete "quanta" (or energy packets), yielding a calculation that precisely matched the observed patterns of black-body radiation.[81] The word quantum derives from the Latin, meaning "how great" or "how much".[82] According to Planck, quantities of energy could be thought of as divided into "elements" whose size (E) would be proportional to their frequency (ν):

$$E = h\nu,$$
where h is the Planck constant. Planck cautiously insisted that this was only an aspect of the processes of absorption and emission of radiation and was not the physical reality of the radiation.[83] In fact, he considered his quantum hypothesis a mathematical trick to get the right answer rather than a sizable discovery.[84] However, in 1905 Albert Einstein interpreted Planck's quantum hypothesis realistically and used it to explain the photoelectric effect, in which shining light on certain materials can eject electrons from the material. Niels Bohr then developed Planck's ideas about radiation into a model of the hydrogen atom that successfully predicted the spectral lines of hydrogen.[85] Einstein further developed this idea to show that an electromagnetic wave such as light could also be described as a particle (later called the photon), with a discrete amount of energy that depends on its frequency.[86] In his paper "On the Quantum Theory of Radiation", Einstein expanded on the interaction between energy and matter to explain the absorption and emission of energy by atoms. Although overshadowed at the time by his general theory of relativity, this paper articulated the mechanism underlying the stimulated emission of radiation,[87] which became the basis of the laser.[88]
This phase is known as the old quantum theory. Never complete or self-consistent, the old quantum theory was rather a set of heuristic corrections to classical mechanics.[89][90] The theory is now understood as a semi-classical approximation to modern quantum mechanics.[91][92] Notable results from this period include, in addition to the work of Planck, Einstein and Bohr mentioned above, Einstein and Peter Debye's work on the specific heat of solids, Bohr and Hendrika Johanna van Leeuwen's proof that classical physics cannot account for diamagnetism, and Arnold Sommerfeld's extension of the Bohr model to include special-relativistic effects.[89][93]
In the mid-1920s quantum mechanics was developed to become the standard formulation for atomic physics. In 1923, the French physicist Louis de Broglie put forward his theory of matter waves by stating that particles can exhibit wave characteristics and vice versa. Building on de Broglie's approach, modern quantum mechanics was born in 1925, when the German physicists Werner Heisenberg, Max Born, and Pascual Jordan[94][95] developed matrix mechanics and the Austrian physicist Erwin Schrödinger invented wave mechanics. Born introduced the probabilistic interpretation of Schrödinger's wave function in July 1926.[96] Thus, the entire field of quantum physics emerged, leading to its wider acceptance at the Fifth Solvay Conference in 1927.[97]
By 1930, quantum mechanics had been further unified and formalized by David Hilbert, Paul Dirac and John von Neumann[98] with greater emphasis on measurement, the statistical nature of our knowledge of reality, and philosophical speculation about the 'observer'. It has since permeated many disciplines, including quantum chemistry, quantum electronics, quantum optics, and quantum information science. It also provides a useful framework for many features of the modern periodic table of elements, and describes the behaviors of atoms during chemical bonding and the flow of electrons in computer semiconductors, and therefore plays a crucial role in many modern technologies. While quantum mechanics was constructed to describe the world of the very small, it is also needed to explain some macroscopic phenomena such as superconductors[99] and superfluids.[100]
|
https://en.wikipedia.org/wiki/Quantum_physics
|
A cryptographic protocol is an abstract or concrete protocol that performs a security-related function and applies cryptographic methods, often as sequences of cryptographic primitives. A protocol describes how the algorithms should be used and includes details about data structures and representations, at which point it can be used to implement multiple, interoperable versions of a program.[1]
Cryptographic protocols are widely used for secure application-level data transport. A cryptographic protocol usually incorporates at least some of these aspects: key agreement or establishment, entity authentication, symmetric encryption and message authentication, secured application-level data transport, and non-repudiation methods.
For example, Transport Layer Security (TLS) is a cryptographic protocol that is used to secure web (HTTPS) connections.[2] It has an entity authentication mechanism, based on the X.509 system; a key setup phase, where a symmetric encryption key is formed by employing public-key cryptography; and an application-level data transport function. These three aspects have important interconnections. Standard TLS does not have non-repudiation support.
There are other types of cryptographic protocols as well, and even the term itself has various readings. Cryptographic application protocols often use one or more underlying key agreement methods, which are also sometimes themselves referred to as "cryptographic protocols". For instance, TLS employs what is known as the Diffie–Hellman key exchange; although this is only a part of TLS per se, Diffie–Hellman may be seen as a complete cryptographic protocol in itself for other applications.
A wide variety of cryptographic protocols go beyond the traditional goals of data confidentiality, integrity, and authentication to also secure a variety of other desired characteristics of computer-mediated collaboration.[3] Blind signatures can be used for digital cash and digital credentials to prove that a person holds an attribute or right without revealing that person's identity or the identities of parties that person transacted with. Secure digital timestamping can be used to prove that data (even if confidential) existed at a certain time. Secure multiparty computation can be used to compute answers (such as determining the highest bid in an auction) based on confidential data (such as private bids), so that when the protocol is complete the participants know only their own input and the answer. End-to-end auditable voting systems provide sets of desirable privacy and auditability properties for conducting e-voting. Undeniable signatures include interactive protocols that allow the signer to prove a forgery and limit who can verify the signature. Deniable encryption augments standard encryption by making it impossible for an attacker to mathematically prove the existence of a plain text message. Digital mixes create hard-to-trace communications.
Cryptographic protocols can sometimes be verified formally on an abstract level. When this is done, it is necessary to formalize the environment in which the protocol operates in order to identify threats. This is frequently done through the Dolev–Yao model.
Logics, concepts and calculi used for formal reasoning about security protocols include the Burrows–Abadi–Needham (BAN) logic. Research projects and tools used for formal verification of security protocols include automated provers such as ProVerif.
To formally verify a protocol it is often abstracted and modelled using Alice & Bob notation. A simple example is the following:

$$A \rightarrow B : \{X\}_{K_{A,B}}$$
This states that Alice $A$ intends a message for Bob $B$ consisting of a message $X$ encrypted under shared key $K_{A,B}$.
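As a concrete illustration of this step, here is a hedged sketch using the Python `cryptography` package's Fernet construction as a stand-in for the abstract shared-key cipher; the notation prescribes no particular algorithm, so Fernet is purely an assumption made for the example:

```python
# Sketch of A -> B : {X}_{K_AB} with Fernet playing the role of the
# symmetric cipher. Key distribution is assumed to happen out of band.
from cryptography.fernet import Fernet

k_ab = Fernet.generate_key()              # shared key K_{A,B}
alice, bob = Fernet(k_ab), Fernet(k_ab)   # both parties hold the same key

ciphertext = alice.encrypt(b"X")          # what actually travels on the wire
assert bob.decrypt(ciphertext) == b"X"    # Bob recovers X with K_{A,B}
```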
|
https://en.wikipedia.org/wiki/Cryptographic_protocol
|
This is a list of Unix daemons that are found on various Unix-like operating systems. Unix daemons typically have a name ending with a d.
|
https://en.wikipedia.org/wiki/List_of_Unix_daemons
|
A learning cycle is a concept of how people learn from experience. A learning cycle will have a number of stages or phases, the last of which can be followed by the first.
In 1933 (based on work first published in 1910), John Dewey described five phases or aspects of reflective thought:
In between, as states of thinking, are (1) suggestions, in which the mind leaps forward to a possible solution; (2) an intellectualization of the difficulty or perplexity that has been felt (directly experienced) into a problem to be solved, a question for which the answer must be sought; (3) the use of one suggestion after another as a leading idea, or hypothesis, to initiate and guide observation and other operations in the collection of factual material; (4) the mental elaboration of the idea or supposition as an idea or supposition (reasoning, in the sense in which reasoning is a part, not the whole of inference); and (5) testing the hypothesis by overt or imaginative action.
In the 1940s, Kurt Lewin developed action research and described a cycle of planning, action, and fact-finding about the result of the action.
Lewin particularly highlighted the need for fact finding, which he felt was missing from much of management and social work. He contrasted this to the military where
the attack is pressed home and immediately a reconnaissance plane follows with the one objective of determining as accurately and objectively as possible the new situation. This reconnaissance or fact-finding has four functions. First it should evaluate the action. It shows whether what has been achieved is above or below expectation. Secondly, it gives the planners a chance to learn, that is, to gather new general insight, for instance, regarding the strength and weakness of certain weapons or techniques of action. Thirdly, this fact-finding should serve as a basis for correctly planning the next step. Finally, it serves as a basis for modifying the "overall plan."
In the early 1970s, David A. Kolb and Ronald E. Fry developed the experiential learning model (ELM), composed of four elements: concrete experience, observation of and reflection on that experience, formation of abstract concepts based upon the reflection, and testing the new concepts.[3]
Testing the new concepts gives concrete experience which can be observed and reflected upon, allowing the cycle to continue.
Kolb integrated this learning cycle with a theory of learning styles, wherein each style prefers two of the four parts of the cycle. The cycle is quadrisected by a horizontal and vertical axis. The vertical axis represents how knowledge can be grasped, through concrete experience or through abstract conceptualization, or by a combination of both. The horizontal axis represents how knowledge is transformed or constructed through reflective observation or active experimentation. These two axes form the four quadrants that can be seen as four stages: concrete experience (CE), reflective observation (RO), abstract conceptualization (AC) and active experimentation (AE) and as four styles of learning: diverging, assimilating, converging and accommodating.[4] The concept of learning styles has been criticised, see Learning styles § Criticism.
In the 1980s, Peter Honey and Alan Mumford developed Kolb and Fry's ideas into a slightly different learning cycle.[5] The stages are: having an experience, reviewing the experience, concluding from the experience, and planning the next steps.
While the cycle can be entered at any of the four stages, a cycle must be completed to give learning that will change behaviour. The cycle can be performed multiple times to build up layers of learning.
Honey and Mumford gave names (also called learning styles) to the people who prefer to enter the cycle at different stages: Activist, Reflector, Theorist and Pragmatist. Honey and Mumford's learning styles questionnaire has been criticized for poor reliability and validity.[6]
In the late 1980s, the 5E learning cycle was developed by Biological Sciences Curriculum Study, specifically for use in teaching science.[7] The learning cycle has four phases: Engage, Explore, Explain and Elaborate.
The fifth E stands for Evaluate, in which the instructor observes each student's knowledge and understanding, and leads students to assess whether what they have learned is true. Evaluation should take place throughout the cycle, not within its own set phase.
In the 1990s, Alistair Smith developed the accelerated learning cycle, also for use in teaching.[8] The phases are:[9]
Unlike other learning cycles, step 8 is normally followed by step 2, rather than step 1.
In the 2000s, Fred Korthagen and Angelo Vasalos (and others) developed the ALACT model, specifically for use in personal development.[10] The five phases of the ALACT cycle are: Action, Looking back on the action, Awareness of essential aspects, Creating alternative methods of action, and Trial.
As with Kolb and Fry, trial is an action that can be looked back on. Korthagen and Vasalos listed coaching interventions for each phase.[10]
Korthagen and Vasalos also described an onion model of "levels of reflection" (from inner to outer: mission, identity, beliefs, competencies, behavior, environment) inspired by Gregory Bateson's hierarchy of logical types.[10] In 2010, they connected their model of reflective learning to the practice of mindfulness and to Otto Scharmer's Theory U, which, in contrast to a learning cycle, emphasizes reflecting on a desired future rather than on past experience.[11]: 539–545
|
https://en.wikipedia.org/wiki/Learning_cycle
|
In project management, the cone of uncertainty describes the evolution of the amount of best-case uncertainty during a project.[1] At the beginning of a project, comparatively little is known about the product or work results, and so estimates are subject to large uncertainty. As more research and development is done, more information is learned about the project, and the uncertainty then tends to decrease, reaching 0% when all residual risk has been terminated or transferred. This usually happens by the end of the project, i.e. by transferring the responsibilities to a separate maintenance group.
The term cone of uncertainty is used in software development, where the technical and business environments change very rapidly. However, the concept, under different names, is a well-established basic principle of cost engineering. Most[citation needed] environments change so slowly that they can be considered static for the duration of a typical project, and traditional project management methods therefore focus on achieving a full understanding of the environment through careful analysis and planning. Well before any significant investments are made, the uncertainty is reduced to a level where the risk can be carried comfortably. In this kind of environment the uncertainty level decreases rapidly in the beginning and the cone shape is less obvious. The software business however is very volatile and there is an external pressure to decrease the uncertainty level over time. The project must actively and continuously work to reduce the uncertainty level.
The cone of uncertainty is narrowed both by research and by decisions that remove the sources of variability from the project. These decisions are about scope, what is included and not included in the project. If these decisions change later in the project then the cone will widen.
Original research for engineering and construction in the chemical industry demonstrated that actual final costs often exceeded the earliest "base" estimate by as much as 100% (or underran by as much as 50%[2]). Research in the software industry on the cone of uncertainty stated that in the beginning of the project life cycle (i.e. before gathering of requirements) estimates have in general an uncertainty of factor 4 on both the high side and the low side.[3] This means that the actual effort or scope can be 4 times or 1/4 of the first estimates. This uncertainty tends to decrease over the course of a project, although that decrease is not guaranteed.[4]
One way to account for the cone of uncertainty in the project estimate is to first determine a 'most likely' single-point estimate and then calculate the high-low range using predefined multipliers (dependent on the level of uncertainty at that time). This can be done with formulas applied to spreadsheets, or by using a project management tool that allows the task owner to enter a low/high ranged estimate and will then create a schedule that will include this level of uncertainty.
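A minimal sketch of this multiplier approach follows. The factor-of-4 band at inception echoes the research cited above, while the narrower later-phase multipliers are illustrative placeholders rather than standard values:

```python
# Sketch: deriving a low/high range from a single-point estimate using
# phase-dependent multipliers (later-phase values are hypothetical).
phases = {
    "initial concept":       (0.25, 4.0),   # factor-of-4 band from the research above
    "approved definition":   (0.50, 2.0),   # hypothetical narrower band
    "requirements complete": (0.67, 1.5),   # hypothetical narrower band
}

most_likely = 120.0  # person-days, the 'most likely' single-point estimate
for phase, (lo, hi) in phases.items():
    print(f"{phase:>22}: {most_likely * lo:6.1f} - {most_likely * hi:6.1f} person-days")
```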
The cone of uncertainty is also used extensively as a graphic in hurricane forecasting, where its most iconic usage is more formally known as the NHC Track Forecast Cone,[5] and more colloquially known as the Error Cone, Cone of Probability, or the Cone of Death. (Note that the usage in hurricane forecasting is essentially the opposite of the usage in software development. In software development, the uncertainty surrounds the current state of the project, and in the future the uncertainty decreases, whereas in hurricane forecasting the current location of the storm is certain, and the future path of the storm becomes increasingly uncertain.)[6] Over the past decade, storms have traveled within their projected areas two-thirds of the time,[7] and the cones themselves have shrunk due to improvements in methodology. The NHC first began in-house five-day projections in 2001, and began issuing such to the public in 2003. It is currently working in-house on seven-day forecasts, but the resultant cone of uncertainty is so large that the possible benefits for disaster management are problematic.[8]
The original conceptual basis of the cone of uncertainty was developed for engineering and construction in the chemical industry by the founders of the American Association of Cost Engineers (now AACE International). They published a proposed standard estimate type classification system with uncertainty ranges in 1958[9] and presented "cone" illustrations in the industry literature at that time.[2] In the software field, the concept was picked up by Barry Boehm.[10] Boehm referred to the concept as the "Funnel Curve".[11] Boehm's initial quantification of the effects of the Funnel Curve was subjective.[10] Later work by Boehm and his colleagues at USC applied data from a set of software projects from the U.S. Air Force and other sources to validate the model. The basic model was further validated based on work at NASA's Software Engineering Lab.[12][13]
The first time the name "cone of uncertainty" was used to describe this concept was in Software Project Survival Guide.[14]
|
https://en.wikipedia.org/wiki/Cone_of_uncertainty
|
In information theory, the noisy-channel coding theorem (sometimes Shannon's theorem or Shannon's limit) establishes that for any given degree of noise contamination of a communication channel, it is possible (in theory) to communicate discrete data (digital information) nearly error-free up to a computable maximum rate through the channel. This result was presented by Claude Shannon in 1948 and was based in part on earlier work and ideas of Harry Nyquist and Ralph Hartley.
The Shannon limit or Shannon capacity of a communication channel refers to the maximum rate of error-free data that can theoretically be transferred over the channel if the link is subject to random data transmission errors, for a particular noise level. It was first described by Shannon (1948), and shortly after published in a book by Shannon and Warren Weaver entitled The Mathematical Theory of Communication (1949). This founded the modern discipline of information theory.
Stated by Claude Shannon in 1948, the theorem describes the maximum possible efficiency of error-correcting methods versus levels of noise interference and data corruption. Shannon's theorem has wide-ranging applications in both communications and data storage. This theorem is of foundational importance to the modern field of information theory. Shannon only gave an outline of the proof. The first rigorous proof for the discrete case is given in (Feinstein 1954).
The Shannon theorem states that given a noisy channel with channel capacity $C$ and information transmitted at a rate $R$, then if $R < C$ there exist codes that allow the probability of error at the receiver to be made arbitrarily small. This means that, theoretically, it is possible to transmit information nearly without error at any rate below a limiting rate, $C$.
The converse is also important. If $R > C$, an arbitrarily small probability of error is not achievable. All codes will have a probability of error greater than a certain positive minimal level, and this level increases as the rate increases. So, information cannot be guaranteed to be transmitted reliably across a channel at rates beyond the channel capacity. The theorem does not address the rare situation in which rate and capacity are equal.
The channel capacity $C$ can be calculated from the physical properties of a channel; for a band-limited channel with Gaussian noise, using the Shannon–Hartley theorem.
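For the band-limited Gaussian channel this is a one-line computation; the sketch below evaluates $C = B \log_2(1 + S/N)$, with the telephone-line-like bandwidth and SNR figures chosen only for illustration:

```python
# Sketch: Shannon-Hartley capacity C = B * log2(1 + S/N).
import math

B = 3000.0                  # bandwidth, Hz (illustrative)
snr_db = 30.0               # signal-to-noise ratio, dB (illustrative)
snr = 10 ** (snr_db / 10)   # dB -> linear power ratio

C = B * math.log2(1 + snr)
print(f"C = {C:.0f} bit/s")  # roughly 29,900 bit/s
```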
Simple schemes such as "send the message 3 times and use a best 2 out of 3 voting scheme if the copies differ" are inefficient error-correction methods, unable to asymptotically guarantee that a block of data can be communicated free of error. Advanced techniques such as Reed–Solomon codes and, more recently, low-density parity-check (LDPC) codes and turbo codes, come much closer to reaching the theoretical Shannon limit, but at a cost of high computational complexity. Using these highly efficient codes and with the computing power in today's digital signal processors, it is now possible to reach very close to the Shannon limit. In fact, it was shown that LDPC codes can reach within 0.0045 dB of the Shannon limit (for binary additive white Gaussian noise (AWGN) channels, with very long block lengths).[1]
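The inefficiency of the naive repetition scheme is easy to demonstrate by simulation. In the sketch below (the crossover probability and bit count are arbitrary choices), each bit is sent three times over a binary symmetric channel and majority-voted; the residual error rate settles near $3p^2 - 2p^3$, bounded away from zero, while the rate has dropped to 1/3:

```python
# Sketch: 'send 3 times, majority vote' over a binary symmetric channel.
import numpy as np

rng = np.random.default_rng(0)
p, n_bits = 0.1, 200_000

bits = rng.integers(0, 2, n_bits)
sent = np.repeat(bits, 3)                             # each bit transmitted 3x
received = sent ^ (rng.random(sent.size) < p)         # BSC flips each symbol w.p. p
decoded = received.reshape(-1, 3).sum(axis=1) >= 2    # majority vote

print("simulated error rate:", np.mean(decoded != bits))
print("theory, 3p^2 - 2p^3: ", 3 * p**2 - 2 * p**3)   # 0.028 for p = 0.1
```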
The basic mathematical model for a communication system is the following:
A message $W$ is transmitted through a noisy channel by using encoding and decoding functions. An encoder maps $W$ into a pre-defined sequence of channel symbols of length $n$. In its most basic model, the channel distorts each of these symbols independently of the others. The output of the channel – the received sequence – is fed into a decoder which maps the sequence into an estimate $\hat{W}$ of the message. In this setting, the probability of error is defined as:

$$P_e = \Pr\{\hat{W} \neq W\}.$$
Theorem (Shannon, 1948): For every discrete memoryless channel, the channel capacity $C = \sup_{p_X} I(X;Y)$ has the following properties. (1) For any $\varepsilon > 0$ and rate $R < C$, for large enough $n$, there exists a code of length $n$ and rate $\geq R$ and a decoding algorithm such that the maximal probability of block error is $\leq \varepsilon$. (2) If a probability of bit error $p_b$ is acceptable, rates up to $R(p_b) = C / (1 - H_2(p_b))$ are achievable, where $H_2$ is the binary entropy function. (3) For any $p_b$, rates greater than $R(p_b)$ are not achievable.
(MacKay (2003), p. 162; cf Gallager (1968), ch.5; Cover and Thomas (1991), p. 198; Shannon (1948) thm. 11)
As with the several other major results in information theory, the proof of the noisy channel coding theorem includes an achievability result and a matching converse result. These two components serve to bound, in this case, the set of possible rates at which one can communicate over a noisy channel, and the matching converse serves to show that these bounds are tight.
The following outlines are only one set of many different styles available for study in information theory texts.
This particular proof of achievability follows the style of proofs that make use of the asymptotic equipartition property (AEP). Another style can be found in information theory texts using error exponents.

Both types of proofs make use of a random coding argument where the codebook used across a channel is randomly constructed – this serves to make the analysis simpler while still proving the existence of a code satisfying a desired low probability of error at any data rate below the channel capacity.
By an AEP-related argument, given a channel, length-$n$ strings of source symbols $X_1^n$, and length-$n$ strings of channel outputs $Y_1^n$, we can define a jointly typical set by the following:

$$A_\varepsilon^{(n)} = \left\{ (x^n, y^n) \in \mathcal{X}^n \times \mathcal{Y}^n : \left| -\tfrac{1}{n} \log p(x^n) - H(X) \right| < \varepsilon,\ \left| -\tfrac{1}{n} \log p(y^n) - H(Y) \right| < \varepsilon,\ \left| -\tfrac{1}{n} \log p(x^n, y^n) - H(X,Y) \right| < \varepsilon \right\}.$$
We say that two sequences $X_1^n$ and $Y_1^n$ are jointly typical if they lie in the jointly typical set defined above.
The probability of error of this scheme is divided into two parts: first, error can occur if no jointly typical $X$ sequence is found for a received $Y$ sequence; second, error can occur if an incorrect $X$ sequence is jointly typical with the received $Y$ sequence.
Define:

$$E_i = \{(X_1^n(i), Y_1^n) \in A_\varepsilon^{(n)}\}, \qquad i = 1, 2, \ldots, 2^{nR},$$
as the event that message i is jointly typical with the sequence received when message 1 is sent.
We can observe that as $n$ goes to infinity, if $R < I(X;Y)$ for the channel, the probability of error will go to 0.
Finally, given that the average codebook is shown to be "good," we know that there exists a codebook whose performance is better than the average, and so satisfies our need for arbitrarily low error probability communicating across the noisy channel.
Suppose a code of $2^{nR}$ codewords. Let $W$ be drawn uniformly over this set as an index. Let $X^n$ and $Y^n$ be the transmitted codewords and received codewords, respectively.
The result of these steps is that $P_e^{(n)} \geq 1 - \frac{1}{nR} - \frac{C}{R}$. As the block length $n$ goes to infinity, we obtain that $P_e^{(n)}$ is bounded away from 0 if $R$ is greater than $C$ – we can get arbitrarily low rates of error only if $R$ is less than $C$.
A strong converse theorem, proven by Wolfowitz in 1957,[3] states that

$$P_e \geq 1 - \frac{4A}{n(R-C)^2} - e^{-\frac{n(R-C)}{2}}$$

for some finite positive constant $A$. While the weak converse states that the error probability is bounded away from zero as $n$ goes to infinity, the strong converse states that the error goes to 1. Thus, $C$ is a sharp threshold between perfectly reliable and completely unreliable communication.
We assume that the channel is memoryless, but its transition probabilities change with time, in a fashion known at the transmitter as well as the receiver.
Then the channel capacity is given by

$$C = \liminf_{n \to \infty} \max_{p(X_1), p(X_2), \ldots} \frac{1}{n} \sum_{i=1}^{n} I(X_i; Y_i).$$
The maximum is attained at the capacity achieving distributions for each respective channel. That is, $C = \liminf \frac{1}{n} \sum_{i=1}^{n} C_i$, where $C_i$ is the capacity of the $i$-th channel.
The proof runs through in almost the same way as that of channel coding theorem. Achievability follows from random coding with each symbol chosen randomly from the capacity achieving distribution for that particular channel. Typicality arguments use the definition of typical sets for non-stationary sources defined in the asymptotic equipartition property article.
The technicality of lim inf comes into play when $\frac{1}{n} \sum_{i=1}^{n} C_i$ does not converge.
|
https://en.wikipedia.org/wiki/Shannon's_theorem
|
Jetons or jettons are tokens or coin-like medals produced across Europe from the 13th through the 18th centuries. They were produced as counters for use in calculation on a counting board, a lined board similar to an abacus. Jetons for calculation were commonly used in Europe from about 1200 to 1700,[1] and remained in occasional use into the early nineteenth century. They also found use as a money substitute in games, similar to modern casino chips or poker chips.
Thousands of different jetons exist, mostly of religious and educational designs, as well as portraits, the last of which most resemble coinage, somewhat similar to modern, non-circulation commemorative coins. The spelling "jeton" is from the French; it is sometimes spelled "jetton" in English.
The Romans similarly used pebbles (in Latin: calculi, "little stones", whence English calculate).[2] Addition is straightforward, and relatively efficient algorithms for multiplication and division were known.
The custom of stamping counters like coins began in France, with the oldest known coming from the fiscal offices of the royal government of France and dating from around the middle of the 13th century.[3] From the late 13th century to the end of the 14th century, jetons were produced in England, similar in design to contemporary Edwardian pennies. Although they were made of brass they were often pierced or indented at the centre to avoid them being plated with silver and passed off as real silver coins. By the middle of the 14th century, English jetons were being produced in a larger size, similar to the groat.
Throughout the 15th century competition from France and the Low Countries ended jeton manufacture in England, but not for long. Nuremberg jeton masters initially started by copying counters of their European neighbours, but by the mid 16th century they gained a monopoly by mass-producing cheaper jetons for commercial use. Later – "counter casting" being obsolete – production shifted to jetons for use in games and toys, sometimes copying more or less famous jetons with a political background.
Mints in the Low Countries in the late Middle Ages in general produced the counters for official bookkeeping. Most of them show the effigy of the ruler within a flattering text and on the reverse the ruler's escutcheon and the name or city of the accounting office.
During the Dutch Revolt (1568–1609) this pattern changed and, by both parties, the North in front, about 2,000 different, mostly political, jetons (Dutch: Rekenpenning) were minted depicting the victories, ideals and aims. Specifically in the last quarter of the 16th century, where geuzen or "beggars" made important military contributions to the Dutch side and bookkeeping was already done without counters, the production in the North was just for propaganda.
The mints and treasuries of the big estates in Central Europe used their own jetons and then had a number of them struck in gold and silver as New Year gifts for their employees, who in turn commissioned jetons with their own mottoes and coats-of-arms. In the sixteenth century the Czech Royal Treasury bought between two and three thousand pieces at the beginning of each year.
As Arabic numerals and the zero came into use, "pen reckoning" gradually displaced "counter casting" as the common accounting method.
In the 21st century, jetons continue to be used in some countries as telephone tokens or gettone in coin-operated public telephones or in vending machines. They are usually made of metal or hard plastic. In German the word Jeton refers specifically to casino tokens. In Polish the word żeton, pronounced similarly to French jeton, refers both to tokens used in vending machines, phones etc. and to those used in casinos. The word жетон has the same use in Russian, as does the word jeton in Romanian and žetoon in Estonian. However in Hungary the word zseton is (somewhat dated) slang for money, particularly coins. Plastic jetons used to be used for paying the fare for the Star Ferry in Hong Kong.[citation needed]
Apart from their monetary use in casinos, jetons are used in card games, particularly in France but also in Denmark. They are traditionally made of wood of different shapes and sizes to represent different values such as 1, 5, 10, 50 or 100 points. For example, in traditional French games, jetons are round and usually worth 1 unit; fiches are long and rectangular in shape and may be worth 10 to 20 jetons; contrats are the short rectangular counters and may be worth, say, 100 units.
The jetons are also stained or coloured so that each player can have his or her own colour. This facilitates scoring because players do not need to start with exactly the same number of counters. Nowadays plastic jetons are a cheap alternative. Games that typically use jetons include Nain Jaune, Belote, Piquet, Ombre, Mistigri, Danish Tarok and Vira. A dedicated box called virapulla is used to contain Vira jetons.[citation needed]
In France and other countries a jeton is also a token amount of money paid to members of a society or a legislative chamber each time they are present in a meeting.[citation needed]
|
https://en.wikipedia.org/wiki/Jeton
|
In number theory and combinatorics, a partition of a non-negative integer n, also called an integer partition, is a way of writing n as a sum of positive integers. Two sums that differ only in the order of their summands are considered the same partition. (If order matters, the sum becomes a composition.) For example, 4 can be partitioned in five distinct ways:

4
3 + 1
2 + 2
2 + 1 + 1
1 + 1 + 1 + 1
The only partition of zero is the empty sum, having no parts.
The order-dependent composition 1 + 3 is the same partition as 3 + 1, and the two distinct compositions 1 + 2 + 1 and 1 + 1 + 2 represent the same partition as 2 + 1 + 1.
An individual summand in a partition is called a part. The number of partitions of n is given by the partition function p(n). So p(4) = 5. The notation λ ⊢ n means that λ is a partition of n.
Partitions can be graphically visualized with Young diagrams or Ferrers diagrams. They occur in a number of branches of mathematics and physics, including the study of symmetric polynomials and of the symmetric group and in group representation theory in general.
The seven partitions of 5 are

5
4 + 1
3 + 2
3 + 1 + 1
2 + 2 + 1
2 + 1 + 1 + 1
1 + 1 + 1 + 1 + 1
Some authors treat a partition as a non-increasing sequence of summands, rather than an expression with plus signs. For example, the partition 2 + 2 + 1 might instead be written as the tuple (2, 2, 1) or in the even more compact form (2², 1) where the superscript indicates the number of repetitions of a part.
This multiplicity notation for a partition can be written alternatively as $1^{m_1} 2^{m_2} 3^{m_3} \cdots$, where $m_1$ is the number of 1's, $m_2$ is the number of 2's, etc. (Components with $m_i = 0$ may be omitted.) For example, in this notation, the partitions of 5 are written $5^1$, $1^1 4^1$, $2^1 3^1$, $1^2 3^1$, $1^1 2^2$, $1^3 2^1$, and $1^5$.
There are two common diagrammatic methods to represent partitions: as Ferrers diagrams, named after Norman Macleod Ferrers, and as Young diagrams, named after Alfred Young. Both have several possible conventions; here, we use English notation, with diagrams aligned in the upper-left corner.
The partition 6 + 4 + 3 + 1 of the number 14 can be represented by the following diagram:

o o o o o o
o o o o
o o o
o
The 14 circles are lined up in 4 rows, each having the size of a part of the partition.
The diagrams for the 5 partitions of the number 4 are shown below (in the order 4, 3 + 1, 2 + 2, 2 + 1 + 1, 1 + 1 + 1 + 1):

o o o o

o o o
o

o o
o o

o o
o
o

o
o
o
o
An alternative visual representation of an integer partition is its Young diagram (often also called a Ferrers diagram). Rather than representing a partition with dots, as in the Ferrers diagram, the Young diagram uses boxes or squares. Thus, the Young diagram for the partition 5 + 4 + 1 is

□ □ □ □ □
□ □ □ □
□
while the Ferrers diagram for the same partition is

o o o o o
o o o o
o
While this seemingly trivial variation does not appear worthy of separate mention, Young diagrams turn out to be extremely useful in the study of symmetric functions and group representation theory: filling the boxes of Young diagrams with numbers (or sometimes more complicated objects) obeying various rules leads to a family of objects called Young tableaux, and these tableaux have combinatorial and representation-theoretic significance.[1] As a type of shape made by adjacent squares joined together, Young diagrams are a special kind of polyomino.[2]
The partition function $p(n)$ counts the partitions of a non-negative integer $n$. For instance, $p(4) = 5$ because the integer 4 has the five partitions $1+1+1+1$, $1+1+2$, $1+3$, $2+2$, and $4$.
The values of this function for $n = 0, 1, 2, \ldots$ are:

1, 1, 2, 3, 5, 7, 11, 15, 22, 30, 42, 56, 77, 101, 135, 176, …
The generating function of $p$ is

$$\sum_{n=0}^{\infty} p(n) q^n = \prod_{j=1}^{\infty} \frac{1}{1 - q^j}.$$
No closed-form expression for the partition function is known, but it has both asymptotic expansions that accurately approximate it and recurrence relations by which it can be calculated exactly. It grows as an exponential function of the square root of its argument,[3] as follows:

$$p(n) \sim \frac{1}{4n\sqrt{3}} \exp\left(\pi \sqrt{\frac{2n}{3}}\right) \quad \text{as } n \to \infty.$$
In 1937, Hans Rademacher found a way to represent the partition function $p(n)$ by the convergent series
$$p(n) = \frac{1}{\pi\sqrt{2}} \sum_{k=1}^{\infty} A_k(n)\, \sqrt{k} \cdot \frac{d}{dn}\left( \frac{1}{\sqrt{n - \frac{1}{24}}} \sinh\left[ \frac{\pi}{k} \sqrt{\frac{2}{3}\left(n - \frac{1}{24}\right)} \right] \right)$$

where

$$A_k(n) = \sum_{0 \leq m < k,\ (m,k)=1} e^{\pi i \left( s(m,k) - 2nm/k \right)}$$

and $s(m,k)$ is the Dedekind sum.
The multiplicative inverse of its generating function is the Euler function; by Euler's pentagonal number theorem this function is an alternating sum of pentagonal number powers of its argument.
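The pentagonal number theorem translates directly into an exact algorithm for $p(n)$: summing $(-1)^{k+1} p(n - g_k)$ over the generalized pentagonal numbers $g_k = k(3k \mp 1)/2$. The sketch below implements this and compares the result against the asymptotic growth noted earlier:

```python
# Sketch: exact p(n) via Euler's pentagonal-number recurrence.
import math

def partition_numbers(limit):
    p = [1] + [0] * limit                        # p(0) = 1
    for n in range(1, limit + 1):
        total, k = 0, 1
        while k * (3 * k - 1) // 2 <= n:
            for g in (k * (3 * k - 1) // 2, k * (3 * k + 1) // 2):
                if g <= n:
                    total += (-1) ** (k + 1) * p[n - g]
            k += 1
        p[n] = total
    return p

p = partition_numbers(100)
print(p[:10])   # [1, 1, 2, 3, 5, 7, 11, 15, 22, 30]

n = 100
hardy_ramanujan = math.exp(math.pi * math.sqrt(2 * n / 3)) / (4 * n * math.sqrt(3))
print(p[n], round(hardy_ramanujan))   # exact 190569292 vs. the asymptotic estimate
```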
Srinivasa Ramanujan discovered that the partition function has nontrivial patterns in modular arithmetic, now known as Ramanujan's congruences. For instance, whenever the decimal representation of $n$ ends in the digit 4 or 9, the number of partitions of $n$ will be divisible by 5.[4]
In both combinatorics and number theory, families of partitions subject to various restrictions are often studied.[5] This section surveys a few such restrictions.
If we flip the diagram of the partition 6 + 4 + 3 + 1 along its main diagonal, we obtain another partition of 14:

o o o o
o o o
o o o
o o
o
o
By turning the rows into columns, we obtain the partition 4 + 3 + 3 + 2 + 1 + 1 of the number 14. Such partitions are said to be conjugate of one another.[6] In the case of the number 4, partitions 4 and 1 + 1 + 1 + 1 are conjugate pairs, and partitions 3 + 1 and 2 + 1 + 1 are conjugate of each other. Of particular interest are partitions, such as 2 + 2, which have themselves as conjugate. Such partitions are said to be self-conjugate.[7]
Claim: The number of self-conjugate partitions is the same as the number of partitions with distinct odd parts.
Proof (outline): The crucial observation is that every odd part can be "folded" in the middle to form a self-conjugate diagram:
One can then obtain a bijection between the set of partitions with distinct odd parts and the set of self-conjugate partitions, as illustrated by the following example:
Among the 22 partitions of the number 8, there are 6 that contain only odd parts:

7 + 1
5 + 3
5 + 1 + 1 + 1
3 + 3 + 1 + 1
3 + 1 + 1 + 1 + 1 + 1
1 + 1 + 1 + 1 + 1 + 1 + 1 + 1
Alternatively, we could count partitions in which no number occurs more than once. Such a partition is called a partition with distinct parts. If we count the partitions of 8 with distinct parts, we also obtain 6:

8
7 + 1
6 + 2
5 + 3
5 + 2 + 1
4 + 3 + 1
This is a general property. For each positive number, the number of partitions with odd parts equals the number of partitions with distinct parts, denoted by q(n).[8][9] This result was proved by Leonhard Euler in 1748[10] and later was generalized as Glaisher's theorem.
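The equality can be verified by brute force for small n; the following Python sketch (ours; the enumeration is a standard recursion, not taken from the article) checks it:

```python
def partitions(n, max_part=None):
    """Yield all partitions of n as weakly decreasing tuples."""
    if max_part is None:
        max_part = n
    if n == 0:
        yield ()
        return
    for first in range(min(n, max_part), 0, -1):
        for rest in partitions(n - first, first):
            yield (first,) + rest

for n in range(1, 12):
    odd = sum(1 for p in partitions(n) if all(part % 2 == 1 for part in p))
    distinct = sum(1 for p in partitions(n) if len(set(p)) == len(p))
    assert odd == distinct  # e.g. both equal 6 for n = 8
```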
For every type of restricted partition there is a corresponding function for the number of partitions satisfying the given restriction. An important example is q(n), the number of partitions into distinct parts. The first few values of q(n) are (starting with q(0) = 1): 1, 1, 1, 2, 2, 3, 4, 5, 6, 8, 10, …
The generating function for q(n) is given by[11] $$\sum _{n=0}^{\infty }q(n)x^{n}=\prod _{k=1}^{\infty }(1+x^{k})=\prod _{k=1}^{\infty }{\frac {1}{1-x^{2k-1}}}.$$
The pentagonal number theorem gives a recurrence for q:[12] $$q(k)=a_{k}+q(k-1)+q(k-2)-q(k-5)-q(k-7)+q(k-12)+q(k-15)-q(k-22)-\cdots $$
where $a_{k}$ is $(-1)^{m}$ if $k = 3m^{2}-m$ for some integer $m$, and is 0 otherwise.
By taking conjugates, the number $p_{k}(n)$ of partitions of $n$ into exactly $k$ parts is equal to the number of partitions of $n$ in which the largest part has size $k$. The function $p_{k}(n)$ satisfies the recurrence $$p_{k}(n)=p_{k}(n-k)+p_{k-1}(n-1)$$
with initial values $p_{0}(0)=1$ and $p_{k}(n)=0$ if $n\leq 0$ or $k\leq 0$ and $n$ and $k$ are not both zero.[13]
One recovers the function p(n) by $$p(n)=\sum _{k=0}^{n}p_{k}(n).$$
One possible generating function for such partitions, taking $k$ fixed and $n$ variable, is $$\sum _{n\geq 0}p_{k}(n)x^{n}=x^{k}\cdot \prod _{i=1}^{k}{\frac {1}{1-x^{i}}}.$$
More generally, if $T$ is a set of positive integers then the number of partitions of $n$, all of whose parts belong to $T$, has generating function $$\prod _{t\in T}(1-x^{t})^{-1}.$$
This can be used to solve change-making problems (where the set $T$ specifies the available coins). As two particular cases, one has that the number of partitions of $n$ in which all parts are 1 or 2 (or, equivalently, the number of partitions of $n$ into 1 or 2 parts) is $$\left\lfloor {\frac {n}{2}}\right\rfloor +1,$$
and the number of partitions of $n$ in which all parts are 1, 2 or 3 (or, equivalently, the number of partitions of $n$ into at most three parts) is the nearest integer to $(n+3)^{2}/12$.[14]
One may also simultaneously limit the number and size of the parts. Let $p(N,M;n)$ denote the number of partitions of $n$ with at most $M$ parts, each of size at most $N$. Equivalently, these are the partitions whose Young diagram fits inside an $M\times N$ rectangle. There is a recurrence relation $$p(N,M;n)=p(N,M-1;n)+p(N-1,M;n-M)$$ obtained by observing that $p(N,M;n)-p(N,M-1;n)$ counts the partitions of $n$ into exactly $M$ parts of size at most $N$, and subtracting 1 from each part of such a partition yields a partition of $n-M$ into at most $M$ parts.[15]
The Gaussian binomial coefficient is defined as: $${k+\ell \choose \ell }_{q}={k+\ell \choose k}_{q}={\frac {\prod _{j=1}^{k+\ell }(1-q^{j})}{\prod _{j=1}^{k}(1-q^{j})\prod _{j=1}^{\ell }(1-q^{j})}}.$$ The Gaussian binomial coefficient is related to the generating function of $p(N,M;n)$ by the equality $$\sum _{n=0}^{MN}p(N,M;n)q^{n}={M+N \choose M}_{q}.$$
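Both the recurrence and the identity are easy to check computationally; the Python sketch below (ours; function names are illustrative) memoizes the recurrence and compares the resulting sequence against the coefficients of the Gaussian binomial, computed by exact polynomial multiplication and division:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def p_bounded(N, M, n):
    """Partitions of n with at most M parts, each of size at most N."""
    if n == 0:
        return 1
    if n < 0 or M == 0 or N == 0:
        return 0
    # Either at most M-1 parts, or exactly M parts (strip 1 from each part).
    return p_bounded(N, M - 1, n) + p_bounded(N - 1, M, n - M)

def gaussian_binomial(M, N):
    """Coefficient list of (M+N choose M)_q, built as the product of
    (1 - q^(N+j)) / (1 - q^j) for j = 1..M; every partial product is a
    polynomial, so each division is exact."""
    coeffs = [1]
    for j in range(1, M + 1):
        # multiply by (1 - q^(N+j))
        top = coeffs + [0] * (N + j)
        for i, c in enumerate(coeffs):
            top[i + N + j] -= c
        # divide by (1 - q^j): Q[i] = T[i] + Q[i-j]
        out = [0] * (len(top) - j)
        for i in range(len(out)):
            out[i] = top[i] + (out[i - j] if i >= j else 0)
        coeffs = out
    return coeffs

M, N = 3, 4
assert gaussian_binomial(M, N) == [p_bounded(N, M, n) for n in range(M * N + 1)]
```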
The rank of a partition is the largest number $k$ such that the partition contains at least $k$ parts of size at least $k$. For example, the partition 4 + 3 + 3 + 2 + 1 + 1 has rank 3 because it contains 3 parts that are ≥ 3, but does not contain 4 parts that are ≥ 4. In the Ferrers diagram or Young diagram of a partition of rank $r$, the $r\times r$ square of entries in the upper-left is known as the Durfee square.
The Durfee square has applications within combinatorics in the proofs of various partition identities.[16] It also has some practical significance in the form of the h-index.
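Computing the rank from a weakly decreasing list of parts is straightforward; a small Python sketch (ours):

```python
def durfee_rank(partition):
    """Largest k such that at least k parts are >= k (parts weakly decreasing)."""
    return max((k for k in range(1, len(partition) + 1)
                if partition[k - 1] >= k), default=0)

print(durfee_rank([4, 3, 3, 2, 1, 1]))  # 3
```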
A different statistic is also sometimes called the rank of a partition (or Dyson rank), namely, the difference $\lambda _{k}-k$ for a partition of $k$ parts with largest part $\lambda _{k}$. This statistic (which is unrelated to the one described above) appears in the study of Ramanujan congruences.
There is a natural partial order on partitions given by inclusion of Young diagrams. This partially ordered set is known as Young's lattice. The lattice was originally defined in the context of representation theory, where it is used to describe the irreducible representations of the symmetric groups $S_{n}$ for all $n$, together with their branching properties, in characteristic zero. It has also received significant study for its purely combinatorial properties; notably, it is the motivating example of a differential poset.
There is a deep theory of random partitions chosen according to the uniform probability distribution on the symmetric group via the Robinson–Schensted correspondence. In 1977, Logan and Shepp, as well as Vershik and Kerov, showed that the Young diagram of a typical large partition becomes asymptotically close to the graph of a certain analytic function minimizing a certain functional. In 1988, Baik, Deift and Johansson extended these results to determine the distribution of the longest increasing subsequence of a random permutation in terms of the Tracy–Widom distribution.[17] Okounkov related these results to the combinatorics of Riemann surfaces and representation theory.[18][19]
|
https://en.wikipedia.org/wiki/Partition_(number_theory)
|
Information quality (InfoQ) is the potential of a data set to achieve a specific (scientific or practical) goal using a given empirical analysis method.
Formally, the definition is InfoQ = U(X, f | g), where X is the data, f the analysis method, g the goal, and U the utility function. InfoQ is different from data quality and analysis quality, but is dependent on these components and on the relationship between them.
InfoQ has been applied in a wide range of domains like healthcare, customer surveys, data science programs, advanced manufacturing and Bayesian network applications.
Kenett and Shmueli (2014) proposed eight dimensions to help assess InfoQ and various methods for increasing it: data resolution, data structure, data integration, temporal relevance, chronology of data and goal, generalization, operationalization, and communication.[1][2][3]
|
https://en.wikipedia.org/wiki/Information_Quality_(InfoQ)
|
In computer science, augmented Backus–Naur form (ABNF) is a metalanguage based on Backus–Naur form (BNF) but consisting of its own syntax and derivation rules. The motivating principle for ABNF is to describe a formal system of a language to be used as a bidirectional communications protocol. It is defined by Internet Standard 68 ("STD 68", type case sic), which as of December 2010 was RFC 5234, and it often serves as the definition language for IETF communication protocols.[1][2]
RFC 5234 supersedes RFC 4234, RFC 2234, and RFC 733.[3] RFC 7405 updates it, adding a syntax for specifying case-sensitive string literals.
An ABNF specification is a set of derivation rules, written as
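rule = definition ; comment CR LF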
where rule is a case-insensitive nonterminal, the definition consists of sequences of symbols that define the rule, the comment serves documentation, and the rule ends with a carriage return and line feed.
Rule names are case-insensitive: <rulename>, <Rulename>, <RULENAME>, and <rUlENamE> all refer to the same rule. Rule names consist of a letter followed by letters, numbers, and hyphens.
Angle brackets (<, >) are not required around rule names (as they are in BNF). However, they may be used in prose to delimit a rule name.
Terminals are specified by one or more numeric characters.
Numeric characters may be specified as the percent sign %, followed by the base (b = binary, d = decimal, and x = hexadecimal), followed by the value, or a concatenation of values (indicated by .). For example, a carriage return is specified by %d13 in decimal or %x0D in hexadecimal. A carriage return followed by a line feed may be specified with concatenation as %d13.10.
Literal text is specified through the use of a string enclosed in quotation marks ("). These strings are case-insensitive, and the character set used is (US-)ASCII. Therefore, the string "abc" will match "abc", "Abc", "aBc", "abC", "ABc", "AbC", "aBC", and "ABC". RFC 7405 added a syntax for case-sensitive strings: %s"aBc" will only match "aBc". Prior to that, a case-sensitive string could only be specified by listing the individual characters: to match "aBc", the definition would be %d97.66.99. A string can also be explicitly specified as case-insensitive with a %i prefix.
White space is used to separate elements of a definition; for space to be recognized as a delimiter, it must be explicitly included. The explicit reference for a single whitespace character is WSP, and LWSP (linear white space) denotes zero or more whitespace characters with newlines permitted. The LWSP definition in RFC 5234 is controversial[4] because at least one whitespace character is needed to form a delimiter between two fields.
Definitions are left-aligned. When multiple lines are required (for readability), continuation lines are indented by whitespace.
; comment
A semicolon (;) starts a comment that continues to the end of the line.
Rule1 Rule2
A rule may be defined by listing a sequence of rule names.
To match the string “aba”, the following rules could be used:
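fu = %x61 ; a
bar = %x62 ; b
mumble = fu bar fu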
Rule1 / Rule2
A rule may be defined by a list of alternative rules separated by a solidus (/).
To accept the rule fu or the rule bar, the following rule could be constructed:
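fubar = fu / bar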
Rule1 =/ Rule2
Additional alternatives may be added to a rule through the use of =/ between the rule name and the definition.
The rule
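ruleset = alt1 / alt2
ruleset =/ alt3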
is therefore equivalent to
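ruleset = alt1 / alt2 / alt3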
%c##-##
A range of numeric values may be specified through the use of a hyphen (-).
The rule
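OCTAL = %x30-37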
is equivalent to
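OCTAL = "0" / "1" / "2" / "3" / "4" / "5" / "6" / "7"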
(Rule1 Rule2)
Elements may be placed in parentheses to group rules in a definition.
To match "a b d" or "a c d", the following rule could be constructed:
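group = a (b / c) d ; a, b, c, d are rules assumed to be defined elsewhere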
To match “a b” or “c d”, the following rules could be constructed:
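group1 = a b / c d       ; concatenation binds tighter than /
group2 = (a b) / (c d)   ; equivalent, with the grouping made explicit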
n*nRule
To indicate repetition of an element, the form <a>*<b>element is used. The optional <a> gives the minimal number of elements to be included (with a default of 0). The optional <b> gives the maximal number of elements to be included (with a default of infinity).
Use *element for zero or more elements, *1element for zero or one element, 1*element for one or more elements, and 2*3element for two or three elements, cf. the regular expressions e*, e?, e+ and e{2,3}.
nRule
To indicate an explicit number of elements, the form <a>element is used; it is equivalent to <a>*<a>element.
Use 2DIGIT to get two numeric digits, and 3DIGIT to get three numeric digits. (DIGIT is defined below under "Core rules". Also see zip-code in the example below.)
[Rule]
To indicate an optional element, the following constructions are equivalent:
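[Rule]
*1(Rule)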
The following operators have the given precedence from tightest binding to loosest binding:
Use of the alternative operator with concatenation may be confusing, and it is recommended that grouping be used to make explicit concatenation groups.
The core rules are defined in the ABNF standard.
Note that in the core rules diagram the CHAR2 charset is inlined in char-val and CHAR3 is inlined in prose-val in the RFC spec. They are named here for clarity in the main syntax diagram.
The (U.S.) postal address example given in the Backus–Naur form (BNF) article may be specified in ABNF as follows:
RFC 5234 adds a warning in conjunction with the definition of LWSP as follows:
Use of this linear-white-space rule permits lines containing only white space that are no longer legal in mail headers and have caused interoperability problems in other contexts. Do not use when defining mail headers and use with caution in other contexts.
|
https://en.wikipedia.org/wiki/Augmented_Backus%E2%80%93Naur_form
|
Crowdmapping is a subtype of crowdsourcing[1][2] by which aggregation of crowd-generated inputs such as captured communications and social media feeds are combined with geographic data to create a digital map that is as up-to-date as possible[3] on events such as wars, humanitarian crises, crime, elections, or natural disasters.[4][5] Such maps are typically created collaboratively by people coming together over the Internet.[3][6]
The information can typically be sent to the map initiator or initiators by SMS or by filling out a form online, and is then gathered on a map online automatically or by a dedicated group.[7] In 2010, Ushahidi released "Crowdmap", a free and open-source platform by which anyone can start crowdmapping projects.[8][9][10][11][12]
Crowdmapping can be used to track fires, floods, pollution,[6] crime, political violence, and the spread of disease. It can bring a level of transparency to fast-moving events that are difficult for traditional media to cover adequately, and to problem areas[6] and longer-term trends that may be difficult to identify through the reporting of individual events.[5]
During disasters the timeliness of relevant maps is critical as the needs and locations of victims may change rapidly.[3]
The use of crowdmapping by authorities can improve situational awareness during an incident and be used to support incident response.[6]
Crowdmaps are an efficient way to visually demonstrate the geographical spread of a phenomenon.[7]
|
https://en.wikipedia.org/wiki/Crowdmapping
|
Onboarding or organizational socialization is the American term for the mechanism through which new employees acquire the necessary knowledge, skills, and behaviors to become effective organizational members and insiders. Outside American English, such as in British and Australasian dialects, this is referred to as "induction".[1] In the United States, up to 25% of workers are organizational newcomers engaged in an onboarding process.[2]
Tactics used in this process include formal meetings, lectures, videos, printed materials, or computer-based orientations that outline the operations and culture of the organization that the employee is entering. This process is known in other parts of the world as an 'induction'[3] or training.[4]
Studies have documented that the onboarding process is important for enhancing employee retention, improving productivity, and fostering a positive organizational culture.[5] Socialization techniques such as onboarding lead to positive outcomes for new employees. These include higher job satisfaction, better job performance, greater organizational commitment, and reductions in occupational stress and intent to quit.[6][7][8]
The term "onboarding" is management jargon coined in the 1970s.[9]
Researchers separate the process of onboarding into three parts: new employee characteristics, new employee behaviors, and organizational efforts.[10]
New employee characteristics attempt to identify key personality traits in onboarding employees that the business views as beneficial:
Finally, employees are segmented based on employee experience level, as it has a material effect on understanding of and ability to assimilate into a new role.
New employee behaviors refer to the process of encouraging and identifying behaviors that are viewed as beneficial to company culture and the onboarding process.
Two examples of these behaviors are building relationships and seeking information and feedback.[1]
Information seeking occurs when new employees ask questions of their co-workers and superiors in an effort to learn about their new job and the company's norms, expectations, procedures, and policies. This is viewed as beneficial throughout the onboarding process and beyond, into the characteristics of a functional employee more generally.[14][15]
Feedback seeking is similar to information seeking, but refers to new employee efforts to gauge how to behave in their new organization. A new employee may ask co-workers or superiors for feedback on how well he or she is performing certain job tasks or whether certain behaviors are appropriate in the social and political context of the organization. In seeking constructive criticism about their actions, new employees learn what kinds of behaviors are expected, accepted, or frowned upon within the company or work group.[16] Instances of feedback inquiry vary across cultural contexts: individuals high in self-assertiveness, and those in cultures low in power distance, report more feedback seeking than newcomers in cultures where self-assertiveness is low and power distance is high.[17]
Also called networking, relationship building involves an employee's efforts to develop camaraderie with co-workers and even supervisors. This can be achieved informally through simply talking to their new peers during a coffee break or through more formal means such as taking part in pre-arranged company events.
Positive communication and relationships between employees and supervisors are important for worker morale. The way in which a message is delivered affects how supervisors develop relationships and feelings about employees. When developing a relationship, personal reputation, delivery style, and message content all play important roles in the perceptions between supervisors and employees. Yet when supervisors assess work competence, they primarily focus on the content of what is being discussed, that is, the message. Creating interpersonal, professional relationships between employees and supervisors in organizations helps foster productive working relationships.[18]
Organizations invest a great amount of time and resources into the training and orientation of new company hires. Organizations differ in the variety of socialization activities they offer in order to integrate productive new workers. Possible activities include socialization tactics, formal orientation programs, recruitment strategies, and mentorship opportunities. Socialization tactics, or orientation tactics, are designed based on an organization's needs, values, and structural policies. Organizations either favor a systematic approach to socialization, or a "sink or swim" approach – in which new employees are challenged to figure out existing norms and company expectations without guidance.
John Van Maanen and Edgar H. Schein have identified six major tactical dimensions that characterize and represent all of the ways in which organizations may differ in their approaches to socialization.
Collective socialization is the process of taking a group of new hires and giving them the same training. Examples of this include basic training/boot camp for a military organization, pledging for fraternities/sororities, and education in graduate schools. Individual socialization allows newcomers to experience unique training, separate from others. Examples of this process include but are not limited to apprenticeship programs, specific internships, and "on-the-job" training.[19]
Formal socialization refers to when newcomers are trained separately from current employees within the organization. These practices single out newcomers, or completely segregate them from the other employees. Formal socialization is witnessed in programs such as police academies, internships, and apprenticeships. Informal socialization processes involve little to no effort to distinguish the two groups. Informal tactics provide a less intimidating environment for recruits to learn their new roles via trial and error. Examples of informal socialization include on-the-job training assignments, apprenticeship programs with no clearly defined role, and using a situational approach in which a newcomer is placed into a work group with no recruit role.[19]
Sequential socialization refers to the degree to which an organization provides identifiable steps for newcomers to follow during the onboarding process. Random socialization occurs when the sequence of steps leading to the targeted role are unknown, and the progression of socialization is ambiguous; for example, while there are numerous steps or stages leading to specific organizational roles, there is no specific order in which the steps should be taken.[19]
This dimension refers to whether or not the organization provides a timetable to complete socialization. Fixed socialization provides a new hire with the exact knowledge of the time it will take to complete a given passage. For instance, some management trainees can be put on "fast tracks", where they are required to accept assignments on an annual basis, despite their own preferences. Variable techniques allow newcomers to complete the onboarding process when they feel comfortable in their position. This type of socialization is commonly associated with up-and-coming careers in business organizations; this is due to several uncontrollable factors such as the state of the economy or turnover rates which determine whether a given newcomer will be promoted to a higher level or not.[19]
A serial socialization process refers to experienced members of the organization mentoring newcomers. One example of serial socialization would be a first-year police officer being assigned patrol duties with an officer who has been in law enforcement for a lengthy period of time. Disjunctive socialization, in contrast, refers to when newcomers do not follow the guidelines of their predecessors; no mentors are assigned to inform new recruits on how to fulfill their duties.[19]
This tactic refers to the degree to which a socialization process either confirms or denies the personal identities of the new employees. Investiture socialization processes document what positive characteristics newcomers bring to the organization. When using this socialization process, the organization makes use of their preexisting skills, values, and attitudes. Divestiture socialization is a process that organizations use to reject and remove the importance of personal characteristics a new hire has; this is meant to assimilate them with the values of the workplace. Many organizations require newcomers to sever previous ties and forget old habits in order to create a new self-image based upon new assumptions.[19]
Thus, tactics influence the socialization process by defining the type of information newcomers receive, the source of this information, and the ease of obtaining it.[19]
Building on the work of Van Maanen and Schein, Jones (1986) proposed that the previous six dimensions could be reduced to two categories: institutionalized and individualized socialization. Companies that use institutionalized socialization tactics implement step-by-step programs, have group orientations, and implement mentor programs. One example of an organization using institutionalized tactics is incoming freshmen at universities, who may attend orientation weekends before beginning classes. Other organizations use individualized socialization tactics, in which the new employee immediately starts working in his or her new position and figures out company norms, values, and expectations along the way. In this orientation system, individuals must play a more proactive role in seeking out information and initiating work relationships.[20]
Regardless of the socialization tactics used, formal orientation programs can facilitate understanding of company culture and introduce new employees to their work roles and the organizational social environment. Formal orientation programs consist of lectures, videotapes, and written material. More recent approaches, such as computer-based orientations and the Internet, have been used by organizations to standardize training programs across branch locations. A review of the literature indicates that orientation programs are successful in communicating the company's goals, history, and power structure.[21]
Recruitment events play a key role in identifying which potential employees are a good fit for an organization. Recruiting events allow employees to gather initial information about an organization's expectations and company culture. By providing a realistic job preview of what life inside the organization is like, companies can weed out potential employees who are clearly a misfit to an organization; individuals can identify which employment agencies are the most suitable match for their own personal values, goals, and expectations. Research has shown that new employees who receive a great amount of information about the job prior to being socialized tend to adjust better.[22]Organizations can also provide realistic job previews by offering internship opportunities.
Mentorship has demonstrated importance in the socialization of new employees.[23][24] Ostroff and Kozlowski (1993) discovered that newcomers with mentors become more knowledgeable about the organization than newcomers without them. Mentors can help newcomers better manage their expectations and feel comfortable with their new environment through advice-giving and social support.[25] Chatman (1991) found that newcomers are more likely to have internalized the key values of their organization's culture if they had spent time with an assigned mentor and attended company social events. Literature has also suggested the importance of demographic matching between organizational mentors and mentees.[23] Enscher & Murphy (1997) examined the effects of similarity (race and gender) on the amount of contact and the quality of mentor relationships.[26] What often separates rapid onboarding programs from their slower counterparts is not the availability of a mentor, but the presence of a "buddy", someone the newcomer can comfortably ask questions that are either trivial ("How do I order office supplies?") or politically sensitive ("Whose opinion really matters here?").[2] Buddies can help establish relationships with co-workers in ways that can't always be facilitated by a newcomer's manager.[2]
Online onboarding, i.e., digital onboarding, means onboarding training that is carried out partially or fully online.[27][28][29] Onboarding a new employee is a process where a new hire gets to know the company and its culture and receives the means and knowledge needed to become a productive team member.[30] By onboarding online, organizations can use technology to follow the onboarding process, automate basic forms, follow new employees' progress, and see when they may need additional help during the online onboarding training.[21]
Traditional face-to-face onboarding is often a one-way conversation, but online onboarding can make the onboarding process a more worthwhile experience for new hires.[28]The main advantages of online onboarding compared to traditional face-to-face onboarding are considered to be:
Online onboarding requires more thought and structured processes to be adequate and functional compared to the traditional onboarding process.[29]Online onboarding does not offer face-to-face interaction between the onboarding trainer and the new employee in comparison to on-site onboarding.[32]Traditional onboarding also allows better communication, and the development of personal connections and keeps new hires invested in the process compared to online onboarding.[33]
Role clarity describes a new employee's understanding of their job responsibilities and organizational role. One of the goals of an onboarding process is to aid newcomers in reducing uncertainty, making it easier for them to get their jobs done correctly and efficiently. Because there often is a disconnect between the main responsibilities listed in job descriptions and the specific, repeatable tasks that employees must complete to be successful in their roles, it's vital that managers are trained to discuss exactly what they expect from their employees.[34]A poor onboarding program may produce employees who exhibit sub-par productivity because they are unsure of their exact roles and responsibilities. A strong onboarding program produces employees who are especially productive; they have a better understanding of what is expected of them. Organizations benefit from increasing role clarity for a new employee. Not only does role clarity imply greater productivity, but it has also been linked to both job satisfaction and organizational commitment.[35]
Self-efficacy is the degree to which new employees feel capable of successfully completing and fulfilling their responsibilities. Employees who feel they can get the job done fare better than those who feel overwhelmed in their new positions; research has found that job satisfaction, organizational commitment, and turnover are all correlated with feelings of self-efficacy.[7] Research suggests that social environments encouraging teamwork and employee autonomy help increase feelings of competence; this is also a result of support from co-workers, with managerial support having less impact on feelings of self-efficacy.[36]
Social acceptance gives new employees the support needed to be successful. While role clarity and self-efficacy are important to a newcomer's ability to meet the requirements of a job, the feeling of "fitting in" can do a lot for one's view of the work environment and has been shown to increase commitment to an organization and decrease turnover.[7] For onboarding to be effective, employees must help in their own onboarding process by interacting with other coworkers and supervisors socially and involving themselves in functions involving other employees.[21] The length of hire also determines social acceptance, often by influencing how much an employee is willing to change to maintain group closeness. Individuals who are hired with an expected long-term position are more likely to work toward fitting in with the main group, avoiding major conflicts. Employees who are expected to work in the short term often are less invested in maintaining harmony with peers. This impacts the level of acceptance from existing employee groups, depending on the future job prospects of the new hire and their willingness to fit in.[37]
Identity impacts social acceptance as well. If an individual with a marginalized identity feels as if they are not accepted, they will suffer negative consequences. It has been shown that when LGBT employees conceal their identities at work, they are at higher risk for mental health problems, as well as physical illness.[38][39] They are also more likely to experience low satisfaction and commitment at their job.[40][41] Employees with disabilities may struggle to be accepted in the workplace due to coworkers' beliefs about their capability to complete their tasks.[42] Black employees who are not accepted in the workplace and face discrimination experience decreased job satisfaction, which can cause them to perform poorly in the workplace, resulting in monetary and personnel costs to organizations.[43]
Knowledge of organizational culture refers to how well a new employee understands a company's values, goals, roles, norms, and overall organizational environment. For example, some organizations may have very strict, yet unspoken, rules of how interactions with superiors should be conducted or whether overtime hours are the norm and an expectation. Knowledge of one's organizational culture is important for the newcomer looking to adapt to a new company, as it allows for social acceptance and aids in completing work tasks in a way that meets company standards. Overall, knowledge of organizational culture has been linked to increased satisfaction and commitment, as well as decreased turnover.[44]
Historically, organizations have overlooked the influence of business practices in shaping enduring work attitudes and have underestimated their impact on financial success.[45] Employees' job attitudes are particularly important from an organization's perspective because of their link to employee engagement, productivity and performance on the job. Employee engagement attitudes, such as organizational commitment or satisfaction, are important factors in an employee's work performance. This translates into strong monetary gains for organizations. As research has demonstrated, individuals who are satisfied with their jobs and show organizational commitment are likely to perform better and have lower turnover rates.[45][46] Unengaged employees are very costly to organizations in terms of slowed performance and potential rehiring expenses. The onboarding process can have short-term and long-term outcomes. Short-term outcomes include self-efficacy, role clarity, and social integration. Self-efficacy is the confidence a new employee has when going into a new job. Role clarity is the expectation and knowledge they have about the position. Social integration is the new relationships they form, and how comfortable they are in those relationships, once they have secured that position. Long-term outcomes consist of organizational commitment and job satisfaction. How satisfied the employee is after onboarding can either help the company or prevent it from succeeding.[47]
The outcomes of organizational socialization have been positively associated with the process of uncertainty reduction, but are not desirable to all organizations. Jones (1986) and Allen and Meyer (1990) found that socialization tactics were related to commitment, but negatively correlated with role clarity.[20][48] Because formal socialization tactics protect the newcomer from their full responsibilities while "learning the ropes", there is a potential for role confusion once the new hire fully enters the organization. In some cases, organizations desire a certain level of person-organizational misfit in order to achieve outcomes via innovative behaviors.[10] Depending on the culture of the organization, it may be more desirable to increase ambiguity, despite the potentially negative connection with organizational commitment.
Additionally, socialization researchers have had major concerns over the length of time that it takes newcomers to adjust. There has been great difficulty determining the role that time plays, but once the length of the adjustment is determined, organizations can make appropriate recommendations regarding what matters most in various stages of the adjustment process.[10]
Further criticisms include the use of special orientation sessions to educate newcomers about the organization and strengthen their organizational commitment. While these sessions tend to be formal and ritualistic, studies have found them unpleasant or traumatic.[49] Orientation sessions are a frequently used socialization tactic; however, employees have not found them to be helpful, nor has any research provided any evidence for their benefits.[50][51][52]
Executive onboarding is the application of general onboarding principles to helping new executives become productive members of an organization. It involves acquiring, accommodating, assimilating and accelerating new executives.[53]Hiring teams emphasize the importance of making the most of the new hire's "honeymoon" stage in the organization, a period which is described as either the first 90 to 100 days, or the first full year.[54][55][56]
Effective onboarding of new executives is an important contribution hiring managers, direct supervisors or human resource professionals make to long-term organizational success; executive onboarding done right can improve productivity and executive retention, and build corporate culture. 40 percent of executives hired at the senior level are pushed out, fail, or quit within 18 months without effective socialization.[57]
Onboarding is valuable for externally recruited executives, i.e., those recruited from outside the organization. It may be difficult for such individuals to uncover personal, organizational, and role risks in complicated situations when they lack formal onboarding assistance.[58] Onboarding is also an essential tool for executives promoted into new roles and/or transferred from one business unit to another.[59]
The effectiveness of socialization varies depending on the structure of and communication within the organization, and the ease of joining or leaving the organization.[60] These are dimensions along which online organizations differ from conventional ones. Computer-mediated communication makes developing and maintaining social relationships with other group members difficult and weakens organizational commitment.[61][62] Joining and leaving online communities typically involves less cost than a conventional employment organization, which results in a lower level of commitment.[63]
Socialization processes in most online communities are informal and individualistic, as compared with socialization in conventional organizations.[64] For example, lurkers in online communities typically have no opportunities for formal mentorship, because they are less likely to be known to existing members of the community. Another example is WikiProjects, the task-oriented groups in Wikipedia, which rarely use institutional socialization tactics to socialize new members who join them,[65] as they rarely assign the new member a mentor or provide clear guidelines. A third example is the socialization of newcomers to the Python open-source software development community.[66] Even though clear workflows and distinct social roles exist there, the socialization process is still informal.
Scholars at MIT Sloan suggest that practitioners should seek to design an onboarding strategy that takes individual newcomer characteristics into consideration and encourages proactive behaviors, such as information seeking, that help facilitate the development of role clarity, self-efficacy, social acceptance, and knowledge of organizational culture. Research has consistently shown that doing so produces valuable outcomes such as high job satisfaction (the extent to which one enjoys the nature of his or her work), organizational commitment (the connection one feels to an organization), and job performance in employees, as well as lower turnover rates and decreased intent to quit.[67]
In terms of structure, evidence shows that formal institutionalized socialization is the most effective onboarding method.[21] New employees who complete these kinds of programs tend to experience more positive job attitudes and lower levels of turnover in comparison to those who undergo individualized tactics.[10][68] Evidence suggests that in-person onboarding techniques are more effective than virtual ones. Though it initially appears to be less expensive for a company to use standard computer-based orientation programs, previous research has demonstrated that employees learn more about their roles and company culture through face-to-face orientation.[69]
Comprehensive Employment and Training Act
|
https://en.wikipedia.org/wiki/Onboarding
|
In the mathematical theory of stochastic processes, variable-order Markov (VOM) models are an important class of models that extend the well-known Markov chain models. In contrast to the Markov chain models, where each random variable in a sequence with a Markov property depends on a fixed number of random variables, in VOM models this number of conditioning random variables may vary based on the specific observed realization.
This realization sequence is often called the context; therefore the VOM models are also called context trees.[1] VOM models are nicely rendered by colorized probabilistic suffix trees (PST).[2] The flexibility in the number of conditioning random variables turns out to be of real advantage for many applications, such as statistical analysis, classification and prediction.[3][4][5]
Consider for example a sequence of random variables, each of which takes a value from the ternary alphabet {a, b, c}. Specifically, consider the string constructed from infinite concatenations of the sub-string aaabc: aaabcaaabcaaabcaaabc…aaabc.
The VOM model of maximal order 2 can approximate the above string using only the following five conditional probability components: Pr(a | aa) = 0.5, Pr(b | aa) = 0.5, Pr(c | b) = 1.0, Pr(a | c) = 1.0, Pr(a | ca) = 1.0.
In this example, Pr(c | ab) = Pr(c | b) = 1.0; therefore, the shorter context b is sufficient to determine the next character. Similarly, the VOM model of maximal order 3 can generate the string exactly using only five conditional probability components, which are all equal to 1.0.
To construct the Markov chain of order 1 for the next character in that string, one must estimate the following 9 conditional probability components: Pr(a | a), Pr(a | b), Pr(a | c), Pr(b | a), Pr(b | b), Pr(b | c), Pr(c | a), Pr(c | b), Pr(c | c). To construct the Markov chain of order 2 for the next character, one must estimate 27 conditional probability components: Pr(a | aa), Pr(a | ab), …, Pr(c | cc). And to construct the Markov chain of order three for the next character, one must estimate the following 81 conditional probability components: Pr(a | aaa), Pr(a | aab), …, Pr(c | ccc).
In practical settings there is seldom sufficient data to accurately estimate the exponentially increasing number of conditional probability components as the order of the Markov chain increases.
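To make the contrast concrete, the following Python sketch (ours, not from the source; names are illustrative) tabulates next-symbol counts for every context up to a maximal order on the example string:

```python
from collections import Counter, defaultdict

def context_counts(s, max_order):
    """Count next-symbol occurrences for every context of length 0..max_order."""
    counts = defaultdict(Counter)
    for i in range(len(s)):
        for d in range(max_order + 1):
            if d <= i:
                counts[s[i - d:i]][s[i]] += 1
    return counts

s = "aaabc" * 100
counts = context_counts(s, max_order=2)
total = sum(counts["aa"].values())
print({sym: c / total for sym, c in counts["aa"].items()})  # {'a': 0.5, 'b': 0.5}
# After "b" the next symbol is always 'c', so the longer context "ab" adds
# nothing; a VOM model keeps the short context "b" and prunes "ab".
print(counts["b"], counts["ab"])
```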
The variable-order Markov model assumes that in realistic settings, there are certain realizations of states (represented by contexts) in which some past states are independent from the future states; accordingly, "a great reduction in the number of model parameters can be achieved."[1]
Let $A$ be a state space (finite alphabet) of size $|A|$.
Consider a sequence with the Markov property $x_{1}^{n}=x_{1}x_{2}\dots x_{n}$ of $n$ realizations of random variables, where $x_{i}\in A$ is the state (symbol) at position $i$ $(1\leq i\leq n)$, and the concatenation of states $x_{i}$ and $x_{i+1}$ is denoted by $x_{i}x_{i+1}$.
Given a training set of observed states, $x_{1}^{n}$, the construction algorithm of the VOM models[3][4][5] learns a model $P$ that provides a probability assignment for each state in the sequence given its past (previously observed symbols) or future states.
Specifically, the learner generates a conditional probability distribution $P(x_{i}\mid s)$ for a symbol $x_{i}\in A$ given a context $s\in A^{*}$, where the * sign represents a sequence of states of any length, including the empty context.
VOM models attempt to estimate conditional distributions of the form $P(x_{i}\mid s)$ where the context length $|s|\leq D$ varies depending on the available statistics.
In contrast, conventional Markov models attempt to estimate these conditional distributions by assuming a fixed context length $|s|=D$ and, hence, can be considered special cases of the VOM models.
Effectively, for a given training sequence, the VOM models are found to obtain better model parameterization than the fixed-order Markov models, which leads to a better bias-variance tradeoff of the learned models.[3][4][5]
Various efficient algorithms have been devised for estimating the parameters of the VOM model.[4]
VOM models have been successfully applied to areas such as machine learning, information theory and bioinformatics, including specific applications such as coding and data compression,[1] document compression,[4] classification and identification of DNA and protein sequences,[6][1][3] statistical process control,[5] spam filtering,[7] haplotyping,[8] speech recognition,[9] sequence analysis in social sciences,[2] and others.
|
https://en.wikipedia.org/wiki/Variable-order_Markov_model
|
Concurrency refers to the ability of a system to execute multiple tasks through simultaneous execution or time-sharing (context switching), sharing resources and managing interactions. Concurrency improves responsiveness, throughput, and scalability in modern computing, including:[1][2][3][4][5]
Concurrency is a broader concept that encompasses several related ideas, including:[1][2][3][4][5]
Because computations in a concurrent system can interact with each other while being executed, the number of possible execution paths in the system can be extremely large, and the resulting outcome can be indeterminate. Concurrent use of shared resources can be a source of indeterminacy leading to issues such as deadlocks and resource starvation.[7]
Design of concurrent systems often entails finding reliable techniques for coordinating their execution, data exchange, memory allocation, and execution scheduling to minimize response time and maximize throughput.[8]
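As a minimal illustration of such coordination (a sketch of ours, not from the source), a shared counter protected by a mutual-exclusion lock in Python:

```python
import threading

counter = 0
lock = threading.Lock()

def worker(iterations):
    global counter
    for _ in range(iterations):
        with lock:        # serialize access to the shared resource
            counter += 1  # the read-modify-write is now atomic w.r.t. other threads

threads = [threading.Thread(target=worker, args=(100_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # always 400000; without the lock the result is indeterminate
```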
Concurrency theory has been an active field of research in theoretical computer science. One of the first proposals was Carl Adam Petri's seminal work on Petri nets in the early 1960s. In the years since, a wide variety of formalisms have been developed for modeling and reasoning about concurrency.
A number of formalisms for modeling and understanding concurrent systems have been developed, including:[9]
Some of these models of concurrency are primarily intended to support reasoning and specification, while others can be used through the entire development cycle, including design, implementation, proof, testing and simulation of concurrent systems. Some of these are based on message passing, while others have different mechanisms for concurrency.
The proliferation of different models of concurrency has motivated some researchers to develop ways to unify these different theoretical models. For example, Lee and Sangiovanni-Vincentelli have demonstrated that a so-called "tagged-signal" model can be used to provide a common framework for defining the denotational semantics of a variety of different models of concurrency,[11] while Nielsen, Sassone, and Winskel have demonstrated that category theory can be used to provide a similar unified understanding of different models.[12]
The Concurrency Representation Theorem in the actor model provides a fairly general way to represent concurrent systems that are closed in the sense that they do not receive communications from outside. (Other concurrency systems, e.g., process calculi, can be modeled in the actor model using a two-phase commit protocol.[13]) The mathematical denotation of a closed system $S$ is constructed from an initial behavior called $\bot _{S}$ by using a behavior-approximating function $\mathrm{progression}_{S}$ to build increasingly better approximations, yielding a denotation (meaning) for $S$ as follows:[14]
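$$\mathrm{Denote}_{S}\equiv \bigsqcup _{i\in \omega }\mathrm{progression}_{S}^{\,i}(\bot _{S})$$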
In this way, $S$ can be mathematically characterized in terms of all its possible behaviors.
Various types of temporal logic[15] can be used to help reason about concurrent systems. Some of these logics, such as linear temporal logic and computation tree logic, allow assertions to be made about the sequences of states that a concurrent system can pass through. Others, such as action computational tree logic, Hennessy–Milner logic, and Lamport's temporal logic of actions, build their assertions from sequences of actions (changes in state). The principal application of these logics is in writing specifications for concurrent systems.[7]
Concurrent programming encompasses programming languages and algorithms used to implement concurrent systems. Concurrent programming is usually considered to be more general than parallel programming because it can involve arbitrary and dynamic patterns of communication and interaction, whereas parallel systems generally have a predefined and well-structured communications pattern. The base goals of concurrent programming include correctness, performance and robustness. Concurrent systems such as operating systems and database management systems are generally designed to operate indefinitely, including automatic recovery from failure, and not terminate unexpectedly (see concurrency control). Some concurrent systems implement a form of transparent concurrency, in which concurrent computational entities may compete for and share a single resource, but the complexities of this competition and sharing are shielded from the programmer.
Because they use shared resources, concurrent systems in general require the inclusion of some kind of arbiter somewhere in their implementation (often in the underlying hardware) to control access to those resources. The use of arbiters introduces the possibility of indeterminacy in concurrent computation, which has major implications for practice, including correctness and performance. For example, arbitration introduces unbounded nondeterminism, which raises issues with model checking because it causes explosion in the state space and can even cause models to have an infinite number of states.
Some concurrent programming models include coprocesses and deterministic concurrency. In these models, threads of control explicitly yield their timeslices, either to the system or to another process.
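A cooperative round-robin scheduler over Python generators illustrates the explicit yield (a sketch of ours; names are illustrative):

```python
from collections import deque

def task(name, steps):
    for i in range(steps):
        print(f"{name} step {i}")
        yield                    # explicitly hand the timeslice back

def run(tasks):
    ready = deque(tasks)
    while ready:                 # round-robin over runnable tasks
        t = ready.popleft()
        try:
            next(t)              # resume the task until it yields again
            ready.append(t)
        except StopIteration:
            pass                 # the task finished; drop it

run([task("A", 2), task("B", 3)])  # interleaves A and B deterministically
```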
|
https://en.wikipedia.org/wiki/Concurrency_(computer_science)
|
The UNCITRAL Model Law on Electronic Transferable Records ("MLETR") is a uniform model law that was adopted by the United Nations Commission on International Trade Law (UNCITRAL) in 2017.[1] Its scope is to allow the use of transferable documents and instruments in electronic form. Transferable documents and instruments typically include bills of lading, warehouse receipts, bills of exchange, promissory notes and cheques. National law qualifies a document or instrument as transferable.
Transferable documents and instruments entitle their holder to request delivery of goods or payment of a sum of money based on possession of the document or instrument. However, it has been difficult to reproduce the notion of possession, which has to do with control over tangible goods, in an electronic environment. The MLETR addresses that legal gap.
Under the MLETR, each dematerialised document need not be managed in a separate information system; the same system may manage multiple documents, or even all documents related to a business transaction. This may allow logistics and supply chain documents, or even commercial and regulatory documents, to be merged in a single electronic transferable record.[2]
A study on the impact of the adoption of a law aligned to the MLETR in the United Kingdom has quantified the benefits of such adoption. Besides economic benefits, which include up to £224 billion in efficiency savings, adoption of such legislation may reduce the number of days needed for processing trade documents by up to 75%.[3]
The impact assessment of the Electronic Trade Documents Bill (see below) prepared by the UK Government estimates economic benefits over the next 10 years ranging from a low estimate of 249.8 million pounds to a high estimate of 2,049.7 million pounds, with a best estimate of 1,137.0 million pounds.[4]
At the micro-economic level, a study describing 16 case studies of application of the UK Electronic Trade Documents Act (which is aligned with MLETR) and associated economic benefits is available.[5]
The MLETR is divided into four chapters: general provisions; provisions on functional equivalence; use of electronic transferable records; and cross-border recognition of electronic transferable records.
The MLETR is built on the same fundamental principles of other UNCITRAL texts on electronic commerce, namely functional equivalence (articles 8-11 MLETR), technology neutrality and non-discrimination against the use of electronic means (article 7 MLETR).
The MLETR is also model-neutral and may be implemented by using registries, tokens or distributed ledgers.[6] The Explanatory Note to the MLETR provides some guidance on the use of distributed ledgers in implementing the MLETR and is therefore considered an early example of a legislative text facilitating the use of blockchain.[7][8]
Article 2 MLETR defines the notion of electronic transferable record as an electronic record that complies with the requirements of article 10 MLETR. It also defines "transferable document or instrument" as a document that entitles its holder to the payment of a sum of money or the delivery of goods.
Article 6 MLETR legally recognizes the possibility of including metadata in electronic transferable records. It is therefore considered a smart contract enabler.[9]
Articles 8 and 9 MLETR provide functional equivalence rules, respectively, for the paper-based notions of "writing" and "signature". Those articles do not need to be enacted if national law, for instance an electronic transactions act, already contains those notions and they are made applicable by reference to electronic transferable records.
Article 10 MLETR establishes the conditions for functional equivalence between paper-based transferable documents and instruments, on the one hand, and electronic transferable records, on the other hand. Those conditions are:
1) the electronic transferable record shall contain all information required for the corresponding paper-based transferable document or instrument;
2) a reliable method shall be used:
a) to identify the electronic transferable record as such;
b) to render the electronic transferable record subject to control throughout its life-cycle;
c) to retain the integrity of the electronic transferable record throughout its life-cycle.
Article 11 MLETR establishes the functional equivalence rule for possession of a transferable document or instrument. The conditions to satisfy that requirement are the use of a reliable method to establish exclusive control of the electronic transferable record and the identification of the person in control.
Articles 10 and 11 MLETR are based on the notions of "control" and "singularity" of the electronic transferable record.[10]
In general, all events that may occur in relation to a transferable document or instrument may also occur in relation to an electronic transferable record.[11] Articles 15 and 16 MLETR reaffirm that general rule with respect to, respectively, endorsement and amendment of an electronic transferable record. An amendment should be identified as such, since in electronic form an amendment may not otherwise be easily recognisable.
Article 12 MLETR contains a non-exclusive list of elements relevant to assessing the reliability of the method used. It also contains a safety clause indicating that a method is reliable in fact if it has fulfilled the function it pursued, alone or together with other evidence.
Article 19 MLETR contains a provision on geographic non-discrimination of the electronic transferable record. The provision does not affect private international law rules.
The MLETR has been enacted in Bahrain,[12]in Belize,[13]in France,[14]in Kiribati,[15]in Paraguay,[16]in Papua New Guinea,[17]in Singapore,[18]in Timor-Leste,[19]in the United Kingdom,[20]and in the Abu Dhabi Global Market (ADGM), an International Financial Centre located in Abu Dhabi, United Arab Emirates.[21]
The adoption of the MLETR in Bahrain has taken place in conjunction with a review of the Electronic Transactions Act, which was originally passed in 2002 and is based on the UNCITRAL Model Law on Electronic Commerce.[22]
Singapore had conducted two public consultations prior to enactment, the first in March 2017[23]and the second in summer 2019, in the broader framework of the review of the Electronic Transactions Act.[24]
In Thailand, the Cabinet has approved the inclusion of the MLETR in the Electronic Transactions Act.[25]Czechia has conducted a public consultation on MLETR adoption.[26]
The International Chamber of Commerce (ICC) has been promoting actively adoption of the MLETR. Initially, this was done to facilitate the use of electronic bills of lading as recommended in a report by the law firm Clyde & Co and the ICC Banking Commission.[27]MLETR adoption is now being actively promoted by the ICC Digital Standards Initiative (DSI), including as a manner to overcome the effects of the COVID-19 pandemic and to increase supply chain resilience. ICC DSI offers also guidance on MLETR implementation, including technical standards and business practices.[28]
On 28 April 2021 the UK, Canada, France, Germany, Italy, Japan, the US and the European Union adopted a G7 Digital and Technology Ministerial Declaration[29]to develop a framework for the use of electronic transferable records that promotes the adoption of legal frameworks compatible with the principles of the MLETR.
On 11 May 2022, the G7 Digital Ministers adopted a Ministerial Declaration[30]endorsing the “Principles for domestic legal frameworks to promote the use of electronic transferable records” contained in Annex 2 to the Declaration.[31]
The G7 declarations have prompted the consideration of MLETR adoption in G7 member States, with significant impact:
With respect to use in business practice, one provider has started offering issuance of electronic bills of lading based on Singapore law incorporating the MLETR and approved by the International Group of P&I Clubs as of 1 July 2021.[36] These electronic bills of lading issued under the law of Singapore and the MLETR were used for the first time to cover shipments from Australia to China.[37]
In Bahrain, an electronic check system has been launched based on MLETR provisions incorporated in Bahraini law. It allows issuing, endorsing and presenting electronic checks on mobile phones and other devices.[38]
|
https://en.wikipedia.org/wiki/MLES
|