A fiduciary is a person who holds a legal or ethical relationship of trust with one or more other parties (legal person or group of persons). Typically, a fiduciary prudently takes care of money or other assets for another person. One party, for example, a corporate trust company or the trust department of a bank, acts in a fiduciary capacity to another party, who, for example, has entrusted funds to the fiduciary for safekeeping or investment. Likewise, financial advisers, financial planners, and asset managers, including managers of pension plans, endowments, and other tax-exempt assets, are considered fiduciaries under applicable statutes and laws.[1] In a fiduciary relationship, one person, in a position of vulnerability, justifiably vests confidence, good faith, reliance, and trust in another whose aid, advice, or protection is sought in some matter.[2]: 68 [3] In such a relation, good conscience requires the fiduciary to act at all times for the sole benefit and interest of the one who trusts. A fiduciary is someone who has undertaken to act for and on behalf of another in a particular matter in circumstances which give rise to a relationship of trust and confidence.

Fiduciary duties in a financial sense exist to ensure that those who manage other people's money act in their beneficiaries' interests, rather than serving their own interests. A fiduciary duty[5] is the highest standard of care in equity or law. A fiduciary is expected to be extremely loyal to the person to whom he owes the duty (the "principal") such that there must be no conflict of duty between fiduciary and principal, and the fiduciary must not profit from their position as a fiduciary,[6] unless the principal consents.[7] The nature of fiduciary obligations differs among jurisdictions. In Australia, only proscriptive or negative fiduciary obligations are recognised,[3]: 113 [8]: 198 [9] whereas in Canada, fiduciaries can come under both proscriptive (negative) and prescriptive (positive) fiduciary obligations.[10][11]

In English common law, the fiduciary relation is an important concept within a part of the legal system known as equity. In the United Kingdom, the Judicature Acts merged the courts of equity (historically based in England's Court of Chancery) with the courts of common law, and as a result the concept of fiduciary duty also became applicable in common law courts. When a fiduciary duty is imposed, equity requires a different, stricter standard of behavior than the comparable tortious duty of care in common law. The fiduciary has a duty not to be in a situation where personal interests and fiduciary duty conflict, not to be in a situation where their fiduciary duty conflicts with another fiduciary duty, and a duty not to profit from their fiduciary position without knowledge and consent. A fiduciary ideally would not have a conflict of interest.
It has been said that fiduciaries must conduct themselves "at a level higher than that trodden by the crowd"[12] and that "[t]he distinguishing or overriding duty of a fiduciary is the obligation of undivided loyalty".[13]: 289

Different jurisdictions regard fiduciary duties in different lights. Canadian law, for example, has developed a more expansive view of fiduciary obligation than American law,[14] while Australian law and British law have developed more conservative approaches than either the United States or Canada.[3] In Australia, it has been found that there is no comprehensive list of criteria by which to establish a fiduciary relationship.[13] Courts have so far refused to define the concept of a fiduciary, instead preferring to develop the law on a case-by-case basis and by way of analogy.[2][8] Fiduciary relationships are of different types and carry different obligations, so that a test appropriate to determine whether a fiduciary relationship exists for one purpose might be inappropriate for another.[2]

In 2014 the Law Commission (England and Wales) reviewed the fiduciary duties of investment intermediaries, looking particularly at the duties on pension trustees. It commented that the term "fiduciary" is used in many different ways: fiduciary duties cannot be understood in isolation, but are better viewed as 'legal polyfilla', moulding themselves flexibly around other legal structures and sometimes filling the gaps. The question of who is a fiduciary is "notoriously intractable", and it was only the first of many questions. In SEC v. Chenery Corporation,[16] Frankfurter J said:

To say that a man is a fiduciary only begins the analysis; it gives direction to further inquiry. To whom is he a fiduciary? What obligations does he owe as a fiduciary? In what respect has he failed to discharge these obligations? And what are the consequences of his deviation from his duty?

The law expressed here follows the general body of elementary fiduciary law found in most common law jurisdictions; for in-depth analysis of particular jurisdictional idiosyncrasies, please consult primary authorities within the relevant jurisdiction. This is especially true in the area of labor and employment law: in Canada a fiduciary has obligations to the employer even after the employment relationship is terminated, whereas in the United States the employment and fiduciary relationships terminate together.

The corporate law of Delaware is the most influential in the United States, as more than 50% of publicly traded companies in the United States, including 64% of the Fortune 500, have chosen to incorporate in that state.[17] Under Delaware law, officers, directors and other control persons of corporations and other entities owe three primary fiduciary duties: (1) the duty of care, (2) the duty of loyalty and (3) the duty of good faith.[18]

The duty of care requires control persons to act on an informed basis after due consideration of all information. The duty includes a requirement that such persons reasonably inform themselves of alternatives. In doing so, they may rely on employees and other advisers so long as they do so with a critical eye and do not unquestioningly accept the information and conclusions provided to them.
Under normal circumstances, their actions are accorded the protection of the business judgment rule, which presumes that control persons acted properly, provided that they act on an informed basis, in good faith and in the honest belief that the action taken was in the best interests of the company.[18]

The duty of loyalty requires control persons to look to the interests of the company and its other owners and not to their personal interests. In general, they cannot use their positions of trust, confidence and inside knowledge to further their own private interests or approve an action that will provide them with a personal benefit (such as continued employment) that does not primarily benefit the company or its other owners.[18]

The duty of good faith requires control persons to exercise care and prudence in making business decisions: the care that a reasonably prudent person in a similar position would use under similar circumstances. Control persons fail to act in good faith, even if their actions are not illegal, when they take actions for improper purposes or, in certain circumstances, when their actions have grossly inequitable results. The duty to act in good faith is an obligation not only to make decisions free from self-interest, but also free of any interest that diverts the control persons from acting in the best interest of the company. The duty to act in good faith may be measured by an individual's particular knowledge and expertise: the higher the level of expertise, the more accountable that person will be (e.g., a finance expert may be held to a more exacting standard than others in accepting a third-party valuation).[18] At one time, courts seemed to view the duty of good faith as an independent obligation; more recently, however, courts have treated the duty of good faith as a component of the duty of loyalty.[18][19]

In Canada, directors of corporations owe a fiduciary duty. A debate exists as to the nature and extent of this duty following a controversial landmark judgment from the Supreme Court of Canada in BCE Inc. v. 1976 Debentureholders. Scholarly literature has defined this as a "tripartite fiduciary duty", composed of (1) an overarching duty to the corporation, which contains two component duties: (2) a duty to protect shareholder interests from harm, and (3) a procedural duty of "fair treatment" for relevant stakeholder interests. This tripartite structure encapsulates the duty of directors to act in the "best interests of the corporation, viewed as a good corporate citizen".[14]

The most common circumstance where a fiduciary duty will arise is between a trustee, whether real or juristic, and a beneficiary. The trustee to whom property is legally committed is the legal (i.e., common law) owner of all such property. The beneficiary, at law, has no legal title to the trust; however, the trustee is bound by equity to suppress their own interests and administer the property only for the benefit of the beneficiary. In this way, the beneficiary obtains the use of property without being its technical owner.

Others, such as corporate directors, may be held to a fiduciary duty similar in some respects to that of a trustee. This happens when, for example, the directors of a bank are trustees for the depositors, the directors of a corporation are trustees for the stockholders, or a guardian is trustee of their ward's property.
A person in a sensitive position sometimes protects themselves from possible conflict-of-interest charges by setting up a blind trust, placing their financial affairs in the hands of a fiduciary and giving up all right to know about or intervene in their handling. The fiduciary functions of trusts and agencies are commonly performed by a trust company, such as a commercial bank, organized for that purpose. In the United States, the Office of the Comptroller of the Currency (OCC), an agency of the United States Department of the Treasury, is the primary regulator of the fiduciary activities of federal savings associations.

When a court desires to hold the offending party to a transaction responsible so as to prevent unjust enrichment, the judge can declare that a fiduciary relation exists between the parties, as though the offender were in fact a trustee for the partner.

Relationships which routinely attract a fiduciary duty by law include those between trustees and beneficiaries, agents and principals, lawyers and clients, and directors and their companies. In Australia, the categories of fiduciary relationships are not closed.[2][8]

Roman and civil law recognized a type of contract called fiducia (also contractus fiduciae or fiduciary contract), involving essentially a sale to a person coupled with an agreement that the purchaser should sell the property back upon the fulfillment of certain conditions.[52] Such contracts were used in the emancipation of children, in connection with testamentary gifts and in pledges. Under Roman law a woman could arrange a fictitious sale called a fiduciary coemption in order to change her guardian or gain legal capacity to make a will.[53]

In Roman Dutch law, a fiduciary heir may receive property subject to passing it to another on fulfilment of certain conditions; the gift is called a fideicommissum. The fiduciary of a fideicommissum is a fideicommissioner and one that receives property from a fiduciary heir is a fideicommissary heir.[54]

Fiduciary principles may be applied in a variety of legal contexts.[55] Joint ventures, as opposed to business partnerships,[38] are not presumed to carry a fiduciary duty; however, this is a matter of degree.[56][57] If a joint venture is conducted at commercial arm's length and both parties are on an equal footing then the courts will be reluctant to find a fiduciary duty, but if the joint venture is carried out more in the manner of a partnership then fiduciary relationships can and often will arise.[58][59][56]

Husbands and wives are not presumed to be in a fiduciary relationship in many jurisdictions; however, this may be easily established. Similarly, ordinary commercial transactions are not in themselves presumed to give rise to fiduciary duties, but they can do so should the appropriate circumstances arise, usually circumstances where the contract specifies a degree of trust and loyalty or where one can be inferred by the court.[2][60]

Australian courts also do not recognise parents and their children to be in fiduciary relationships.[48][61][62] In contrast, the Supreme Court of Canada allowed a child to sue her father for damages for breach of his fiduciary duties, opening the door in Canada for allowing fiduciary obligations between parent and child to be recognised.[63]

Australian courts have also not accepted doctor-patient relationships as fiduciary in nature. In Breen v Williams,[3] the High Court viewed the doctor's responsibilities over their patients as lacking the representative capacity of the trustee in fiduciary relationships.
Moreover, the existence of remedies in contract and tort made the Court reluctant to recognise a fiduciary relationship.

In 2011, in an insider trading case, the U.S. Securities and Exchange Commission brought charges against the boyfriend of a Disney intern, alleging he had a fiduciary duty to his girlfriend and breached it. The boyfriend, Toby Scammell, allegedly received and used insider information on Disney's takeover of Marvel Comics.[64][65]

Generally, the employment relationship is not regarded as fiduciary, but may be so if, within a particular contractual relationship, there are specific contractual obligations which the employee has undertaken which have placed him in a situation where equity imposes these rigorous duties in addition to the contractual obligations. Although terminologies like duty of good faith, or loyalty, or the mutual duty of trust and confidence are frequently used to describe employment relationships, such concepts usually denote situations where "a party merely has to take into consideration the interests of another, but does not have to act in the interests of that other".[66]

If fiduciary relationships are to arise between employers and employees, it is necessary to ascertain that the employee has placed himself in a position where he must act solely in the interests of his employer.[66] In the case of Canadian Aero Service Ltd v O'Malley,[67] it was held that a senior employee is much more likely to be found to owe fiduciary duties towards his employer.

In 2015, the United States Department of Labor issued a proposed rule that, if finalized, would extend the fiduciary duty relationship to investment advisors and some brokers, including insurance brokers.[68] In 2017, the first Trump administration planned to order a 180-day delay of implementation of the rule,[69] sometimes known as the 'fiduciary rule'.[70] The rule would have required "brokers offering retirement investment advice to put their clients' interest first".[69] The Trump administration later rescinded the fiduciary rule on July 20, 2018.[71][72] Prior to its repeal, the rule was also dealt blows by the US Fifth Circuit Court of Appeals in March and June 2018.[73]

For example, two members, X and Y, of a band currently under contract with one another (or with some other tangible, existing relationship that creates a legal duty) record songs together. Let us imagine it is a serious, successful band and that a court would declare that the two members are equal partners in a business. One day, X takes some demos made cooperatively by the duo to a recording label, where an executive expresses interest. X pretends it is all his work and receives an exclusive contract and $50,000. Y is unaware of the encounter until reading about it in the paper the next week. This situation represents a conflict of interest and duty: both X and Y hold fiduciary duties to each other, which means they must subdue their own interests in favor of the duo's collective interest. By signing an individual contract and taking all the money, X has put personal interest above the fiduciary duty. Therefore, a court will find that X has breached his fiduciary duty. The judicial remedy here will be that X holds both the contract and the money in a constructive trust for the duo. Note that X will not be punished or totally denied the benefit; both X and Y will receive a half share in the contract and the money.

When T. Boone Pickens's Mesa Petroleum attempted to take over Cities Service in 1982, Cities Service attempted to take over the smaller Mesa instead.
Pickens was friends with Alan Habacht of Weiss, Peck & Greer, who supported Mesa's attempt. Fiduciary duty, however, required Habacht to seek the maximum possible return on the investment he managed by offering Weiss's Mesa shares to Cities's tender offer.[74]

A fiduciary, such as the administrator, executor or guardian of an estate, may be legally required to file with a probate court or judge a surety bond, called a fiduciary bond or probate bond, to guarantee faithful performance of his duties.[75] One of those duties may be to prepare, generally under oath, an inventory of the tangible or intangible property of the estate, describing the items or classes of property and usually placing a valuation on them.[76]

A bank or other fiduciary having legal title to a mortgage may sell fractional shares to investors, thereby creating a participating mortgage.

A fiduciary will be liable to account if proven to have acquired a profit, benefit or gain from the relationship by one of three means: a conflict of interest and duty, a conflict of duty and duty, or taking advantage of the fiduciary position.[1] Therefore, it is said the fiduciary has a duty not to be in a situation where personal interests and fiduciary duty conflict, a duty not to be in a situation where his fiduciary duty conflicts with another fiduciary duty, and a duty not to profit from his fiduciary position without express knowledge and consent. A fiduciary cannot have a conflict of interest. The state of Texas in the United States sets out the duties of a fiduciary in its Estates Code, chapter 751, which superseded the Texas Probate Code effective January 1, 2014.

A fiduciary's duty must not conflict with another fiduciary duty.[20][38][77] Conflicts between one fiduciary duty and another arise most often when a lawyer or an agent, such as a real estate agent, represents more than one client, and the interests of those clients conflict.[23] This would occur when a lawyer attempts to represent both the plaintiff and the defendant in the same matter, for example. The rule comes from the logical conclusion that a fiduciary cannot make the principal's interests a top priority if he has two principals and their interests are diametrically opposed; he must balance the interests, which is not acceptable to equity. Therefore, the conflict of duty and duty rule is really an extension of the conflict of interest and duty rules.

A fiduciary must not profit from the fiduciary position.[6][24][38][2] This includes any benefits or profits which, although unrelated to the fiduciary position, came about because of an opportunity that the fiduciary position afforded.[38][78] It is unnecessary that the principal would have been unable to make the profit; if the fiduciary makes a profit by virtue of his role as fiduciary for the principal, then the fiduciary must report the profit to the principal. If the principal provides fully informed consent, then the fiduciary may keep the benefit and be absolved of any liability for what would otherwise be a breach of fiduciary duty.[13][22][34] If this requirement is not met then the property is deemed by the court to be held by the fiduciary on constructive trust for the principal.[20]

Secret commissions, or bribes, also come under the no-profit rule.[79] The bribe is held in constructive trust for the principal. The person who made the bribe cannot recover it, since he has committed a crime. Similarly, the fiduciary, who received the bribe, has committed a crime.
Fiduciary duties are an aspect of equity and, in accordance with the equitable principles, or maxims, equity serves those with clean hands. Therefore, the bribe is held on constructive trust for the principal, the only innocent party. Bribes were initially considered not to be held on constructive trust, but to be held as a debt owed by the fiduciary to the principal.[80] This approach has been overruled; the bribe is now classified as held on constructive trust.[81] The change is due to pragmatic reasons, especially in regard to a bankrupt fiduciary: if a fiduciary takes a bribe and that bribe is considered a debt, then if the fiduciary goes bankrupt the debt will be left in his pool of assets to be paid to creditors, and the principal may miss out on recovery because other creditors were more secured. If the bribe is treated as held on a constructive trust then it will remain in the possession of the fiduciary, despite bankruptcy, until such time as the principal recovers it.

The landmark Australian decision ASIC v Citigroup noted that "informed consent" on behalf of the beneficiary to breaches of either the no-profit or the no-conflict rule will allow the fiduciary to get around these rules.[13][58] Furthermore, it highlighted that a contract may include a clause that allows individuals to avoid all fiduciary obligations within the course of dealings, and thereby continue to make a personal profit or deal with other parties, tasks that may otherwise have been in conflict with what would have been a fiduciary duty had it not been for this clause.[13] In the Australian case of Farah Constructions Pty Ltd v Say-Dee Pty Ltd, however, Gleeson CJ, Gummow, Callinan, Heydon and Crennan JJ observed that the sufficiency of disclosure may depend on the sophistication and intelligence of the persons to whom the disclosure must be made.[58]

However, in the English case of Armitage v Nurse an exception was noted to the fiduciary's obligation of good faith:[82] liability for breach of fiduciary duty by way of fraud or dishonesty cannot be avoided through an exclusion clause in a contract. The decision in Armitage v Nurse has been applied in Australia.[83]

Conduct by a fiduciary may be deemed constructive fraud when it is based on acts, omissions or concealments considered fraudulent and that give one an advantage over the other, because such conduct, though not actually fraudulent, dishonest or deceitful, demands redress for reasons of public policy.[84] Breach of fiduciary duty may occur in insider trading, when an insider or a related party makes trades in a corporation's securities based on material non-public information obtained during the performance of the insider's duties at the corporation. Breach of fiduciary duty by a lawyer with regard to a client, if negligent, may be a form of legal malpractice; if intentional, it may be remedied in equity.[85][86]

Where a principal can establish both a fiduciary duty and a breach of that duty through violation of the above rules, the court will find that the benefit gained by the fiduciary should be returned to the principal, because it would be unconscionable to allow the fiduciary to retain the benefit by employing his strict common law legal rights. This will be the case unless the fiduciary can show there was full disclosure of the conflict of interest or profit and that the principal fully accepted and freely consented to the fiduciary's course of action.[58]

Remedies will differ according to the type of damage or benefit.
They are usually distinguished as proprietary remedies, dealing with property, and personal remedies, dealing with pecuniary (monetary) compensation. Where concurrent contractual and fiduciary relationships exist, the remedies available to the plaintiff beneficiary depend upon the duty of care owed by the defendant and the specific breach of duty allowing for remedy or damages. The courts will clearly distinguish the relationship and determine the nature in which the breach occurred.[87]

Where the unconscionable gain by the fiduciary is in an easily identifiable form, such as the recording contract discussed above, the usual remedy will be the already discussed constructive trust.[88] Constructive trusts arise in many areas of equity, not just in a remedial sense;[89] in this sense, what is meant by a constructive trust is that the court has created and imposed a duty on the fiduciary to hold the money in safekeeping until it can be rightfully transferred to the principal.[38][90]

An account of profits is another potential remedy.[91] It is usually used where the breach of duty was ongoing or where the gain is hard to identify. The idea of an account of profits is that the fiduciary profited unconscionably by virtue of the fiduciary position, so any profit made should be transferred to the principal. It may resemble a constructive trust at first, but it is not one. An account of profits is the appropriate remedy when, for example, a senior employee has taken advantage of his fiduciary position by conducting his own company on the side and has accumulated substantial profits over a period of time, profits which he would not otherwise have been able to make. The fiduciary in breach may, however, receive an allowance for effort and ingenuity expended in making the profit.

Compensatory damages are also available.[92] Accounts of profits can be hard remedies to establish; therefore, a plaintiff will often seek compensation (damages) instead. Courts of equity initially had no power to award compensatory damages, which traditionally were a remedy at common law, but legislation and case law have changed the situation, so compensatory damages may now be awarded for a purely equitable action.

Some experts have argued that, in the context of pension governance, trustees have started to reassert their fiduciary prerogatives more strongly after 2008, notably following the heavy losses or reduced returns incurred by many retirement schemes in the wake of the Great Recession and the progression of ESG and Responsible Investment ideas: "Clearly, there is a mounting demand for CEOs (equity issuers) and governments (sovereign bond issuers) to be more 'accountable' ...
No longer 'absentee landlords', trustees have started to exercise more forcefully their governance prerogatives across the boardrooms of Britain, Benelux and America: coming together through the establishment of engaged pressure groups."[93] However, in the United States, there are questions whether a pension's decision to consider factors such as how investments impact contributors' continued employment violates a fiduciary duty to maximize the retirement fund's returns.[94]

Pension funds and other large institutional investors are increasingly making their voices heard to call out irresponsible practices in the businesses in which they invest.[95]

The Fiduciary Duty in the 21st Century Programme, led by the United Nations Environment Programme Finance Initiative, the Principles for Responsible Investment, and the Generation Foundation, aims to end the debate on whether fiduciary duty is a legitimate barrier to the integration of environmental, social and governance (ESG) issues in investment practice and decision-making.[96] This followed the 2015 publication of "Fiduciary Duty in the 21st Century", which concluded that "failing to consider all long-term investment value drivers, including ESG issues, is a failure of fiduciary duty".[97] Founded on the realization that there is a general lack of legal clarity globally about the relationship between sustainability and investors' fiduciary duty, the programme engaged with and interviewed over 400 policymakers and investors to raise awareness of the importance of ESG issues to the fiduciary duties of investors. The programme also published roadmaps which set out recommendations to fully embed the consideration of ESG factors in the fiduciary duties of investors across more than eight capital markets.[96] Drawing upon findings from Fiduciary Duty in the 21st Century, the European Commission High-Level Expert Group (HLEG) recommended in its 2018 final report that the EU Commission clarify investor duties to better embrace long-term horizons and sustainability preferences.[98]
https://en.wikipedia.org/wiki/Fiduciary
In mathematics, especially in order theory, the greatest element of a subset S of a partially ordered set (poset) is an element of S that is greater than every other element of S. The term least element is defined dually, that is, it is an element of S that is smaller than every other element of S.

Let (P, ≤) be a preordered set and let S ⊆ P. An element g ∈ P is said to be a greatest element of S if g ∈ S and if it also satisfies:

s ≤ g for all s ∈ S.

By switching the side of the relation that s is on in the above definition, the definition of a least element of S is obtained. Explicitly, an element l ∈ P is said to be a least element of S if l ∈ S and if it also satisfies:

l ≤ s for all s ∈ S.

If (P, ≤) is also a partially ordered set then S can have at most one greatest element and it can have at most one least element. Whenever a greatest element of S exists and is unique then this element is called the greatest element of S. The terminology the least element of S is defined similarly. If (P, ≤) has a greatest element (resp. a least element) then this element is also called a top (resp. a bottom) of (P, ≤).

Greatest elements are closely related to upper bounds. Let (P, ≤) be a preordered set and let S ⊆ P. An upper bound of S in (P, ≤) is an element u such that u ∈ P and s ≤ u for all s ∈ S. Importantly, an upper bound of S in P is not required to be an element of S.

If g ∈ P then g is a greatest element of S if and only if g is an upper bound of S in (P, ≤) and g ∈ S. In particular, any greatest element of S is also an upper bound of S (in P), but an upper bound of S in P is a greatest element of S if and only if it belongs to S. In the particular case where P = S, the definition of "u is an upper bound of S in S" becomes: u is an element such that u ∈ S and s ≤ u for all s ∈ S, which is completely identical to the definition of a greatest element given before. Thus g is a greatest element of S if and only if g is an upper bound of S in S.

If u is an upper bound of S in P that is not an upper bound of S in S (which can happen if and only if u ∉ S) then u cannot be a greatest element of S (however, it may be possible that some other element is a greatest element of S). In particular, it is possible for S to simultaneously not have a greatest element and for there to exist some upper bound of S in P.
Even if a set has some upper bounds, it need not have a greatest element, as shown by the example of the negative real numbers. This example also demonstrates that the existence of a least upper bound (the number 0 in this case) does not imply the existence of a greatest element either.

A greatest element of a subset of a preordered set should not be confused with a maximal element of the set, which is an element that is not strictly smaller than any other element in the set. Let (P, ≤) be a preordered set and let S ⊆ P. An element m ∈ S is said to be a maximal element of S if the following condition is satisfied:

whenever some s ∈ S satisfies m ≤ s, then necessarily s ≤ m.

If (P, ≤) is a partially ordered set then m ∈ S is a maximal element of S if and only if there does not exist any s ∈ S such that m ≤ s and s ≠ m. A maximal element of (P, ≤) is defined to mean a maximal element of the subset S := P.

A set can have several maximal elements without having a greatest element. Like upper bounds and maximal elements, greatest elements may fail to exist. In a totally ordered set the maximal element and the greatest element coincide; it is then also called the maximum; in the case of function values it is also called the absolute maximum, to avoid confusion with a local maximum.[1] The dual terms are minimum and absolute minimum. Together they are called the absolute extrema. Similar conclusions hold for least elements.

One of the most important differences between a greatest element g and a maximal element m of a preordered set (P, ≤) has to do with what elements they are comparable to. Two elements x, y ∈ P are said to be comparable if x ≤ y or y ≤ x; they are called incomparable if they are not comparable. Because preorders are reflexive (which means that x ≤ x is true for all elements x), every element x is always comparable to itself. Consequently, the only pairs of elements that could possibly be incomparable are distinct pairs. In general, however, preordered sets (and even directed partially ordered sets) may have elements that are incomparable.

By definition, an element g ∈ P is a greatest element of (P, ≤) if s ≤ g for every s ∈ P; so by its very definition, a greatest element of (P, ≤) must, in particular, be comparable to every element in P. This is not required of maximal elements. Maximal elements of (P, ≤) are not required to be comparable to every element in P. This is because, unlike the definition of "greatest element", the definition of "maximal element" includes an important if statement. The defining condition for m ∈ P to be a maximal element of (P, ≤) can be reworded as: if some s ∈ P is comparable to m and satisfies m ≤ s, then s ≤ m; nothing at all is required of elements that are incomparable to m.

Suppose that S is a set containing at least two (distinct) elements and define a partial order ≤ on S by declaring that i ≤ j if and only if i = j. If i ≠ j belong to S then neither i ≤ j nor j ≤ i holds, which shows that all pairs of distinct (i.e. non-equal) elements in S are incomparable.
Consequently, (S, ≤) cannot possibly have a greatest element (because a greatest element of S would, in particular, have to be comparable to every element of S, but S has no such element). However, every element m ∈ S is a maximal element of (S, ≤), because there is exactly one element in S that is both comparable to m and ≥ m, that element being m itself (which, of course, is ≤ m).[note 1]

In contrast, if a preordered set (P, ≤) does happen to have a greatest element g, then g will necessarily be a maximal element of (P, ≤) and moreover, as a consequence of the greatest element g being comparable to every element of P, if (P, ≤) is also partially ordered then it is possible to conclude that g is the only maximal element of (P, ≤). However, the uniqueness conclusion is no longer guaranteed if the preordered set (P, ≤) is not also partially ordered. For example, suppose that R is a non-empty set and define a preorder ≤ on R by declaring that i ≤ j always holds for all i, j ∈ R. The directed preordered set (R, ≤) is partially ordered if and only if R has exactly one element. All pairs of elements from R are comparable, and every element of R is a greatest element (and thus also a maximal element) of (R, ≤). So in particular, if R has at least two elements then (R, ≤) has multiple distinct greatest elements.

Throughout, let (P, ≤) be a partially ordered set and let S ⊆ P. The least and greatest element of the whole partially ordered set play a special role and are also called bottom (⊥) and top (⊤), or zero (0) and unit (1), respectively. If both exist, the poset is called a bounded poset. The notation of 0 and 1 is used preferably when the poset is a complemented lattice, and when no confusion is likely, i.e. when one is not talking about partial orders of numbers that already contain elements 0 and 1 different from bottom and top. The existence of least and greatest elements is a special completeness property of a partial order. Further introductory information is found in the article on order theory.
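To make the contrast between greatest and maximal elements concrete, here is a minimal sketch (my own illustration, not from the article; the helper names are invented) that computes both kinds of element under divisibility, a partial order in which distinct elements are often incomparable:

```python
# Greatest vs. maximal elements, using divisibility as the partial order
# on a finite set of positive integers.

def leq(a, b):
    """The partial order: a <= b iff a divides b."""
    return b % a == 0

def greatest_elements(S):
    # g is a greatest element iff g is in S and s <= g for every s in S.
    # In a poset there is at most one such element, so this list has length 0 or 1.
    return [g for g in sorted(S) if all(leq(s, g) for s in S)]

def maximal_elements(S):
    # m is maximal iff no s in S satisfies m <= s with s != m.
    return [m for m in sorted(S) if not any(leq(m, s) and s != m for s in S)]

S = {2, 3, 4, 6}
print(greatest_elements(S))   # [] -- 4 and 6 are incomparable, so no greatest element
print(maximal_elements(S))    # [4, 6] -- several maximal elements, yet no greatest

T = {1, 2, 4, 8}              # a chain: any two elements are comparable
print(greatest_elements(T))   # [8] -- in a chain the unique maximal element is greatest
print(maximal_elements(T))    # [8]
```

The set {2, 3, 4, 6} mirrors the article's point: maximal elements need not be comparable to every element, so several can coexist, while a greatest element must sit above every element of the set.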
https://en.wikipedia.org/wiki/Greatest_element_and_least_element
In computer science, the reentrant mutex (recursive mutex, recursive lock) is a particular type of mutual exclusion (mutex) device that may be locked multiple times by the same process/thread, without causing a deadlock. While any attempt to perform the "lock" operation on an ordinary mutex (lock) would either fail or block when the mutex is already locked, on a recursive mutex this operation will succeed if and only if the locking thread is the one that already holds the lock. Typically, a recursive mutex tracks the number of times it has been locked, and requires equally many unlock operations to be performed before other threads may lock it.

Recursive mutexes solve the problem of non-reentrancy with regular mutexes: if a function that takes a lock and executes a callback is itself called by the callback, deadlock ensues.[1] With an ordinary mutex, a call such as lock_and_call(1) locks the mutex, invokes the callback, re-enters lock_and_call, and then blocks forever on the second lock operation, deadlocking the thread. Replacing the mutex with a recursive one solves the problem, because the final m.lock() will succeed without blocking.

W. Richard Stevens notes that recursive locks are "tricky" to use correctly, and recommends their use for adapting single-threaded code without changing APIs, but "only when no other solution is possible".[2]

The Java language's native synchronization mechanism, the monitor, uses recursive locks. Syntactically, a lock is a block of code with the 'synchronized' keyword preceding it and any Object reference in parentheses that will be used as the mutex. Inside the synchronized block, the given object can be used as a condition variable by doing a wait(), notify(), or notifyAll() on it. Thus all Objects are both recursive mutexes and condition variables.[3]

Software emulation can be accomplished by pairing an ordinary mutex with a record of the owning thread and a lock count.
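The following is a minimal sketch in Python of that emulation idea; the class and the lock_and_call demo are illustrative only (Python's standard library already provides a ready-made reentrant mutex as threading.RLock):

```python
import threading

class RecursiveMutex:
    """A reentrant mutex emulated from an ordinary (non-reentrant) lock."""

    def __init__(self):
        self._lock = threading.Lock()   # ordinary, non-reentrant mutex
        self._owner = None              # identity of the thread holding the lock
        self._count = 0                 # how many times the owner has locked it

    def lock(self):
        me = threading.get_ident()
        if self._owner == me:           # already held by this thread:
            self._count += 1            # just bump the count instead of blocking
            return
        self._lock.acquire()            # otherwise wait like a normal mutex
        self._owner = me
        self._count = 1

    def unlock(self):
        assert self._owner == threading.get_ident()
        self._count -= 1
        if self._count == 0:            # released as many times as it was locked
            self._owner = None
            self._lock.release()

# The article's lock_and_call situation: the callback re-enters the function,
# so the same thread locks m twice. With an ordinary mutex the inner lock()
# would block forever; with the recursive mutex it succeeds.
m = RecursiveMutex()

def lock_and_call(i):
    m.lock()
    if i > 0:
        lock_and_call(i - 1)  # stands in for the callback re-entering the code
    m.unlock()

lock_and_call(1)
print("no deadlock")
```

The owner check is safe because only the thread that set _owner to its own identity can observe that value as equal to itself; every other thread falls through to the blocking acquire.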
https://en.wikipedia.org/wiki/Reentrant_mutex
In Canada, trade secrets are generally considered to include information set out, contained or embodied in, but not limited to, a formula, pattern, plan, compilation, computer program, method, technique, process, product, device or mechanism; it may be information of any sort, or an idea of a scientific or literary nature, as long as it grants an economic advantage to the business and improves its value.[1] Additionally, there must be some element of secrecy. Matters of public knowledge or of general knowledge in an industry cannot be the subject-matter of a trade secret.[2]

Trade secrets are a type of intellectual property that consists of certain information, expertise or know-how that has been developed or acquired by firms. This knowledge frequently gives firms their competitive edge in the market, and it has to be kept secret. In Canada, any information that a firm or its employees produces or acquires for the purpose of the firm's business can constitute confidential information that courts are willing to protect. All that is required is that the creator of the information "has used his brain and thus produced a result which can be produced by somebody who goes through the same process".[3]

According to Seager v. Copydex Ltd,[4] courts will even act to protect a comparatively underdeveloped idea from misappropriation. However, information may stop being confidential, and the confidant may be released from its obligations of confidence, if the information subject to confidence is later publicly disclosed by the confider or a third party.

With one exception in the field of employer-employee relations, there is no recognized distinction in Canada between the rights and remedies afforded to trade secrets as opposed to mere confidential information. In the field of employer-employee relationships, the British case Faccenda Chicken Ltd. v. Fowler, which has been cited with approval by several Canadian courts, has drawn a distinction between the two.[5]

Under the Constitution Act, 1867, the exclusive legislative authority of the Parliament of Canada extends to most areas of intellectual property such as patents, trademarks and copyrights,[6] whereas the provincial governments have exclusive authority to legislate on matters related to property and civil rights. The federal Parliament also has exclusive jurisdiction to create offences under its criminal law power.[7]

At one time, the federal Trade-marks Act prohibited anyone from "do[ing] any other act or adopt[ing] any other business practice contrary to honest industrial or commercial usage in Canada,"[8] which was considered to include the taking of trade secrets.[9] However, the Supreme Court of Canada ruled in MacDonald v. Vapor Canada Ltd. that the provisions encroached on the provinces' authority over property and civil rights and could not be upheld under the federal trade and commerce power.[10] Therefore, the regulation of trade secrets as a civil matter falls under provincial jurisdiction.

The Uniform Trade Secrets Act, adopted by the Uniform Law Conference of Canada in 1989, would provide civil remedies for the breach of trade secrets. That Uniform Act defines "trade secrets" as follows:

1(1) In this Act ...
"trade secret" means any information that (2) For the purposes of the definition trade secret "information" includes information set out, contained or embodied in, but not limited to, a formula, pattern, plan, compilation, computer program, method, technique, process, product, device or mechanism.[11] To date, the Uniform Act has not been enacted into law by any of the Legislatures,[12]but the definition has been incorporated in the federalSecurity of Information Act.[13] In all the provinces but Quebec, trade secrets are governed by the common law, ultimately derived from the English common law as interpreted and applied in Canada. The Canadian definition of trade secret is based on Canadian case law and doctrine, and also draws on American and English case law.[14]InLac Minerals Ltd. v. International Corona Resources Ltd., theSupreme Court of Canadaheld that a breach of confidence action issui generisand the courts may rely on all three traditional jurisdictional bases for action (contract, equity and property) to enforce the policy of the law that confidences are to be respected.[15] In common law, there are essentially five types ofcivil actionthat atrade secretholder can rely on to seek protection of its trade secrets before a court of justice: TheSupreme Court of Canadastated inCadbury Schweppes Inc. v. FBI Foods Ltd.that all these types of actions coexist in the Canadian judicial system and remain available to the trade secret holder.[17] In Quebec, trade secrets are governed by provisions under theCivil Code of Quebec. An action for breach oftrade secretsor confidential business information generally arises either from a contractual liability action[18]or, in the absence of a contract, from a civil liability action.[19] TheCodedeals specifically withtrade secretsin one article that provides for a defense where disclosing the secret is in the public interest,[20]and in one that describes how a loss resulting from disclosure is to be calculated.[21]However, none of its provisions define the concept of trade secret. TheQuebec Court of Appealhas ruled inContinental Casualty Company v. Combined Insurance Companythat those who owntrade secrets(secrets de commerce) are entitled to seek protection and thatQuebeccourts are competent to grant remedies in the case the plaintiff can evidence its ownership of them.[22] Two important forms of contract used by employers in Canada to protect theirtrade secretsand confidential information arenon-disclosure agreementsandnon-competition agreements, which are also known asconfidentiality agreementsandrestrictive covenants.[5] According toFaccenda Chicken Ltd. v. Fowler, ex-employees, post-termination, may use their general skills and knowledge anywhere but they may not use or divulge their former employer's trade secrets. Exceptionally, ex-employers may also be able to enjoin a former employee's use of non-trade secret information where that information has been obtained from records which qualify as trade secrets.[23] According toInternational Tools Ltd. v. Kollar, in Canada the length of apermanent injunctionto force a defendant to cease using the plaintiff's information should not normally extend beyond the time that the plaintiff's trade secrets remains a secret which is exclusively known to the plaintiff and its confidants.[24] InCadbury Schweppes Inc. v. 
In Cadbury Schweppes Inc. v. FBI Foods Ltd., Justice Binnie concluded that the form of relief for breach of confidence was "dictated by the facts of the case rather than strict jurisdictional or doctrinal considerations".[25] He also stated that "whether a breach of confidence in a particular case has a contractual, tortious, proprietary or trust flavour goes to the appropriateness of a particular equitable remedy but does not limit the court's jurisdiction to grant it".[25]

In R. v. Stewart,[26] the Supreme Court of Canada held that the taking of confidential information cannot form the basis of a charge of theft[27] under the Criminal Code, but that it could in certain circumstances form the basis of a charge of fraud.[28] Parliament has since amended the Security of Information Act to provide that it is an offence to communicate, or to obtain or retain, a trade secret for the benefit of a foreign economic entity and to the detriment of Canada's economic interests, international relations or national defence or national security.[13]
https://en.wikipedia.org/wiki/Trade_secrets_in_Canada
Probability is a branch of mathematics and statistics concerning events and numerical descriptions of how likely they are to occur. The probability of an event is a number between 0 and 1; the larger the probability, the more likely an event is to occur.[note 1][1][2] This number is often expressed as a percentage (%), ranging from 0% to 100%. A simple example is the tossing of a fair (unbiased) coin. Since the coin is fair, the two outcomes ("heads" and "tails") are both equally probable; the probability of "heads" equals the probability of "tails"; and since no other outcomes are possible, the probability of either "heads" or "tails" is 1/2 (which could also be written as 0.5 or 50%).

These concepts have been given an axiomatic mathematical formalization in probability theory, which is used widely in areas of study such as statistics, mathematics, science, finance, gambling, artificial intelligence, machine learning, computer science, game theory, and philosophy to, for example, draw inferences about the expected frequency of events. Probability theory is also used to describe the underlying mechanics and regularities of complex systems.[3]

The word probability derives from the Latin probabilitas, which can also mean "probity", a measure of the authority of a witness in a legal case in Europe, and often correlated with the witness's nobility. In a sense, this differs much from the modern meaning of probability, which in contrast is a measure of the weight of empirical evidence, and is arrived at from inductive reasoning and statistical inference.[4]

When dealing with random experiments – i.e., experiments that are random and well-defined – in a purely theoretical setting (like tossing a coin), probabilities can be numerically described by the number of desired outcomes, divided by the total number of all outcomes. This is referred to as theoretical probability (in contrast to empirical probability, dealing with probabilities in the context of real experiments). The probability is a number between 0 and 1; the larger the probability, the more likely the desired outcome is to occur. For example, tossing a coin twice will yield "head-head", "head-tail", "tail-head", and "tail-tail" outcomes. The probability of getting an outcome of "head-head" is 1 out of 4 outcomes, or, in numerical terms, 1/4, 0.25 or 25%. The probability of getting an outcome of at least one head is 3 out of 4, or 0.75, and this event is more likely to occur. However, when it comes to practical application, there are two major competing categories of probability interpretations, whose adherents hold different views about the fundamental nature of probability.

The scientific study of probability is a modern development of mathematics. Gambling shows that there has been an interest in quantifying the ideas of probability throughout history, but exact mathematical descriptions arose much later. There are reasons for the slow development of the mathematics of probability. Whereas games of chance provided the impetus for the mathematical study of probability, fundamental issues[note 2] are still obscured by superstitions.[11]

According to Richard Jeffrey, "Before the middle of the seventeenth century, the term 'probable' (Latin probabilis) meant approvable, and was applied in that sense, univocally, to opinion and to action.
A probable action or opinion was one such as sensible people would undertake or hold, in the circumstances."[12] However, in legal contexts especially, 'probable' could also apply to propositions for which there was good evidence.[13]

The sixteenth-century Italian polymath Gerolamo Cardano demonstrated the efficacy of defining odds as the ratio of favourable to unfavourable outcomes (which implies that the probability of an event is given by the ratio of favourable outcomes to the total number of possible outcomes[14]). Aside from the elementary work by Cardano, the doctrine of probabilities dates to the correspondence of Pierre de Fermat and Blaise Pascal (1654). Christiaan Huygens (1657) gave the earliest known scientific treatment of the subject.[15] Jakob Bernoulli's Ars Conjectandi (posthumous, 1713) and Abraham de Moivre's Doctrine of Chances (1718) treated the subject as a branch of mathematics.[16] See Ian Hacking's The Emergence of Probability[4] and James Franklin's The Science of Conjecture[17] for histories of the early development of the very concept of mathematical probability.

The theory of errors may be traced back to Roger Cotes's Opera Miscellanea (posthumous, 1722), but a memoir prepared by Thomas Simpson in 1755 (printed 1756) first applied the theory to the discussion of errors of observation.[18] The reprint (1757) of this memoir lays down the axioms that positive and negative errors are equally probable, and that certain assignable limits define the range of all errors. Simpson also discusses continuous errors and describes a probability curve.

The first two laws of error that were proposed both originated with Pierre-Simon Laplace. The first law was published in 1774, and stated that the frequency of an error could be expressed as an exponential function of the numerical magnitude of the error, disregarding sign. The second law of error was proposed in 1778 by Laplace, and stated that the frequency of the error is an exponential function of the square of the error.[19] The second law of error is called the normal distribution or the Gauss law. "It is difficult historically to attribute that law to Gauss, who in spite of his well-known precocity had probably not made this discovery before he was two years old."[19]

Daniel Bernoulli (1778) introduced the principle of the maximum product of the probabilities of a system of concurrent errors.

Adrien-Marie Legendre (1805) developed the method of least squares, and introduced it in his Nouvelles méthodes pour la détermination des orbites des comètes (New Methods for Determining the Orbits of Comets).[20] In ignorance of Legendre's contribution, an Irish-American writer, Robert Adrain, editor of "The Analyst" (1808), first deduced the law of facility of error,

φ(x) = c e^(−h²x²),

where h is a constant depending on precision of observation, and c is a scale factor ensuring that the area under the curve equals 1. He gave two proofs, the second being essentially the same as John Herschel's (1850). Gauss gave the first proof that seems to have been known in Europe (the third after Adrain's) in 1809. Further proofs were given by Laplace (1810, 1812), Gauss (1823), James Ivory (1825, 1826), Hagen (1837), Friedrich Bessel (1838), W. F. Donkin (1844, 1856), and Morgan Crofton (1870). Other contributors were Ellis (1844), De Morgan (1864), Glaisher (1872), and Giovanni Schiaparelli (1875). Peters's (1856) formula for r, the probable error of a single observation, is well known.
In the nineteenth century, authors on the general theory included Laplace, Sylvestre Lacroix (1816), Littrow (1833), Adolphe Quetelet (1853), Richard Dedekind (1860), Helmert (1872), Hermann Laurent (1873), Liagre, Didion and Karl Pearson. Augustus De Morgan and George Boole improved the exposition of the theory. In 1906, Andrey Markov introduced[21] the notion of Markov chains, which played an important role in stochastic processes theory and its applications. The modern theory of probability based on measure theory was developed by Andrey Kolmogorov in 1931.[22] On the geometric side, contributors to The Educational Times included Miller, Crofton, McColl, Wolstenholme, Watson, and Artemas Martin.[23] See integral geometry for more information.

Like other theories, the theory of probability is a representation of its concepts in formal terms – that is, in terms that can be considered separately from their meaning. These formal terms are manipulated by the rules of mathematics and logic, and any results are interpreted or translated back into the problem domain. There have been at least two successful attempts to formalize probability, namely the Kolmogorov formulation and the Cox formulation. In Kolmogorov's formulation (see also probability space), sets are interpreted as events and probability as a measure on a class of sets. In Cox's theorem, probability is taken as a primitive (i.e., not further analyzed), and the emphasis is on constructing a consistent assignment of probability values to propositions. In both cases, the laws of probability are the same, except for technical details. There are other methods for quantifying uncertainty, such as the Dempster–Shafer theory or possibility theory, but those are essentially different and not compatible with the usually-understood laws of probability.

Probability theory is applied in everyday life in risk assessment and modeling. The insurance industry and markets use actuarial science to determine pricing and make trading decisions. Governments apply probabilistic methods in environmental regulation, entitlement analysis, and financial regulation.

An example of the use of probability theory in equity trading is the effect of the perceived probability of any widespread Middle East conflict on oil prices, which have ripple effects in the economy as a whole. An assessment by a commodity trader that a war is more likely can send that commodity's prices up or down, and signals other traders of that opinion. Accordingly, the probabilities are neither assessed independently nor necessarily rationally. The theory of behavioral finance emerged to describe the effect of such groupthink on pricing, on policy, and on peace and conflict.[24]

In addition to financial assessment, probability can be used to analyze trends in biology (e.g., disease spread) as well as ecology (e.g., biological Punnett squares).[25] As with finance, risk assessment can be used as a statistical tool to calculate the likelihood of undesirable events occurring, and can assist with implementing protocols to avoid encountering such circumstances. Probability is used to design games of chance so that casinos can make a guaranteed profit, yet provide payouts to players that are frequent enough to encourage continued play.[26]

Another significant application of probability theory in everyday life is reliability. Many consumer products, such as automobiles and consumer electronics, use reliability theory in product design to reduce the probability of failure.
Failure probability may influence a manufacturer's decisions on a product's warranty.[27]

The cache language model and other statistical language models that are used in natural language processing are also examples of applications of probability theory.

Consider an experiment that can produce a number of results. The collection of all possible results is called the sample space of the experiment, sometimes denoted as $\Omega$. The power set of the sample space is formed by considering all different collections of possible results. For example, rolling a die can produce six possible results. One collection of possible results gives an odd number on the die. Thus, the subset {1, 3, 5} is an element of the power set of the sample space of die rolls. These collections are called "events". In this case, {1, 3, 5} is the event that the die falls on some odd number. If the results that actually occur fall in a given event, the event is said to have occurred.

A probability is a way of assigning every event a value between zero and one, with the requirement that the event made up of all possible results (in our example, the event {1, 2, 3, 4, 5, 6}) is assigned a value of one. To qualify as a probability, the assignment of values must satisfy the requirement that for any collection of mutually exclusive events (events with no common results, such as the events {1, 6}, {3}, and {2, 4}), the probability that at least one of the events will occur is given by the sum of the probabilities of all the individual events.[28]

The probability of an event A is written as $P(A)$,[29] $p(A)$, or $\Pr(A)$.[30] This mathematical definition of probability can extend to infinite sample spaces, and even uncountable sample spaces, using the concept of a measure.

The opposite or complement of an event A is the event [not A] (that is, the event of A not occurring), often denoted as $A'$, $A^c$, $\overline{A}$, $A^\complement$, $\neg A$, or $\sim A$; its probability is given by $P(\text{not } A) = 1 - P(A)$.[31] As an example, the chance of not rolling a six on a six-sided die is $1 - (\text{chance of rolling a six}) = 1 - \tfrac{1}{6} = \tfrac{5}{6}$. For a more comprehensive treatment, see Complementary event.

If two events A and B occur on a single performance of an experiment, this is called the intersection or joint probability of A and B, denoted as $P(A \cap B)$. If two events, A and B, are independent then the joint probability is[29]
$$P(A \text{ and } B) = P(A \cap B) = P(A)\,P(B).$$
For example, if two coins are flipped, then the chance of both being heads is $\tfrac{1}{2} \times \tfrac{1}{2} = \tfrac{1}{4}$.[32]

If either event A or event B can occur but never both simultaneously, then they are called mutually exclusive events.
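These definitions can be checked directly by enumerating a finite sample space. The following minimal Python sketch (an illustration, not part of the article; the helper `prob` is a hypothetical name) verifies the complement rule for a die and the independence product rule for two coin flips:

```python
from fractions import Fraction
from itertools import product

# Sample space of one die roll; all outcomes equally likely.
omega = {1, 2, 3, 4, 5, 6}

def prob(event, space):
    """Probability of an event, a subset of a finite equiprobable space."""
    return Fraction(len(event & space), len(space))

odd = {1, 3, 5}
print(prob(odd, omega))        # 1/2
print(1 - prob({6}, omega))    # 5/6, the complement rule

# Two coin flips: independence means P(A and B) = P(A) * P(B).
coins = set(product("HT", repeat=2))
first_heads = {o for o in coins if o[0] == "H"}
second_heads = {o for o in coins if o[1] == "H"}
assert (prob(first_heads & second_heads, coins)
        == prob(first_heads, coins) * prob(second_heads, coins))
print(prob(first_heads & second_heads, coins))  # 1/4
```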
If two events are mutually exclusive, then the probability of both occurring is denoted as $P(A \cap B)$ and
$$P(A \text{ and } B) = P(A \cap B) = 0.$$
If two events are mutually exclusive, then the probability of either occurring is denoted as $P(A \cup B)$ and
$$P(A \text{ or } B) = P(A \cup B) = P(A) + P(B) - P(A \cap B) = P(A) + P(B) - 0 = P(A) + P(B).$$
For example, the chance of rolling a 1 or 2 on a six-sided die is $P(1 \text{ or } 2) = P(1) + P(2) = \tfrac{1}{6} + \tfrac{1}{6} = \tfrac{1}{3}$.

If the events are not (necessarily) mutually exclusive then
$$P(A \text{ or } B) = P(A \cup B) = P(A) + P(B) - P(A \text{ and } B).$$
Rewritten,
$$P(A \cup B) = P(A) + P(B) - P(A \cap B).$$
For example, when drawing a card from a deck of cards, the chance of getting a heart or a face card (J, Q, K) (or both) is $\tfrac{13}{52} + \tfrac{12}{52} - \tfrac{3}{52} = \tfrac{11}{26}$, since among the 52 cards of a deck, 13 are hearts, 12 are face cards, and 3 are both; here the possibilities included in the "3 that are both" are included in each of the "13 hearts" and the "12 face cards", but should only be counted once.

This can be expanded further for multiple not (necessarily) mutually exclusive events. For three events, this proceeds as follows:
$$\begin{aligned}
P(A \cup B \cup C) &= P((A \cup B) \cup C)\\
&= P(A \cup B) + P(C) - P((A \cup B) \cap C)\\
&= P(A) + P(B) - P(A \cap B) + P(C) - P((A \cap C) \cup (B \cap C))\\
&= P(A) + P(B) + P(C) - P(A \cap B) - \bigl(P(A \cap C) + P(B \cap C) - P((A \cap C) \cap (B \cap C))\bigr)\\
P(A \cup B \cup C) &= P(A) + P(B) + P(C) - P(A \cap B) - P(A \cap C) - P(B \cap C) + P(A \cap B \cap C)
\end{aligned}$$
It can be seen, then, that this pattern can be repeated for any number of events.

Conditional probability is the probability of some event A, given the occurrence of some other event B. Conditional probability is written $P(A \mid B)$, and is read "the probability of A, given B". It is defined by[33]
$$P(A \mid B) = \frac{P(A \cap B)}{P(B)}.$$
If $P(B) = 0$ then $P(A \mid B)$ is formally undefined by this expression. In this case $A$ and $B$ are independent, since $P(A \cap B) = P(A)\,P(B) = 0$. However, it is possible to define a conditional probability for some zero-probability events, for example by using a σ-algebra of such events (such as those arising from a continuous random variable).[34]

For example, in a bag of 2 red balls and 2 blue balls (4 balls in total), the probability of taking a red ball is $1/2$; however, when taking a second ball, the probability of it being either a red ball or a blue ball depends on the ball previously taken.
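The inclusion-exclusion rule above can likewise be confirmed by brute-force enumeration. A small Python sketch (illustrative; the deck construction is an assumption of the example, not from the article) checks the hearts-or-face-card calculation:

```python
from fractions import Fraction
from itertools import product

ranks = list(range(2, 11)) + ["J", "Q", "K", "A"]
suits = ["hearts", "diamonds", "clubs", "spades"]
deck = set(product(ranks, suits))           # 52 cards

hearts = {c for c in deck if c[1] == "hearts"}
faces = {c for c in deck if c[0] in ("J", "Q", "K")}

def prob(event):
    return Fraction(len(event), len(deck))

# P(hearts or faces) = P(hearts) + P(faces) - P(hearts and faces)
lhs = prob(hearts | faces)
rhs = prob(hearts) + prob(faces) - prob(hearts & faces)
assert lhs == rhs == Fraction(11, 26)
print(lhs)  # 11/26
```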
For example, if a red ball was taken, then the probability of picking a red ball again would be $1/3$, since only 1 red and 2 blue balls would have been remaining. And if a blue ball was taken previously, the probability of taking a red ball will be $2/3$.

In probability theory and applications, Bayes' rule relates the odds of event $A_1$ to event $A_2$, before (prior to) and after (posterior to) conditioning on another event $B$. The odds on $A_1$ to event $A_2$ is simply the ratio of the probabilities of the two events. When arbitrarily many events $A$ are of interest, not just two, the rule can be rephrased as "posterior is proportional to prior times likelihood",
$$P(A \mid B) \propto P(A)\,P(B \mid A),$$
where the proportionality symbol means that the left hand side is proportional to (i.e., equals a constant times) the right hand side as $A$ varies, for fixed or given $B$ (Lee, 2012; Bertsch McGrayne, 2012). In this form it goes back to Laplace (1774) and to Cournot (1843); see Fienberg (2005).

In a deterministic universe, based on Newtonian concepts, there would be no probability if all conditions were known (Laplace's demon), but there are situations in which sensitivity to initial conditions exceeds our ability to measure them, i.e. to know them. In the case of a roulette wheel, if the force of the hand and the period of that force are known, the number on which the ball will stop would be a certainty (though as a practical matter, this would likely be true only of a roulette wheel that had not been exactly levelled, as Thomas A. Bass' Newtonian Casino revealed). This also assumes knowledge of inertia and friction of the wheel, weight, smoothness, and roundness of the ball, variations in hand speed during the turning, and so forth. A probabilistic description can thus be more useful than Newtonian mechanics for analyzing the pattern of outcomes of repeated rolls of a roulette wheel. Physicists face the same situation in the kinetic theory of gases, where the system, while deterministic in principle, is so complex (with the number of molecules typically of the order of magnitude of the Avogadro constant, $6.02 \times 10^{23}$) that only a statistical description of its properties is feasible.[35]

Probability theory is required to describe quantum phenomena.[36] A revolutionary discovery of early 20th century physics was the random character of all physical processes that occur at sub-atomic scales and are governed by the laws of quantum mechanics. The objective wave function evolves deterministically but, according to the Copenhagen interpretation, it deals with probabilities of observing, the outcome being explained by a wave function collapse when an observation is made. However, the loss of determinism for the sake of instrumentalism did not meet with universal approval. Albert Einstein famously remarked in a letter to Max Born: "I am convinced that God does not play dice".[37] Like Einstein, Erwin Schrödinger, who discovered the wave function, believed quantum mechanics is a statistical approximation of an underlying deterministic reality.[38] In some modern interpretations of the statistical mechanics of measurement, quantum decoherence is invoked to account for the appearance of subjectively probabilistic experimental outcomes.
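The conditional-probability and Bayes' rule formulas above lend themselves to short numerical checks. Below is a minimal Python sketch (the scenarios and names are illustrative, not from the article): the first part enumerates the two-ball draws to recover $P(\text{second red} \mid \text{first red}) = 1/3$, and the second normalizes prior times likelihood for a toy two-hypothesis problem.

```python
from fractions import Fraction

# Conditional probability: 2 red and 2 blue balls, drawn without replacement.
balls = ["R1", "R2", "B1", "B2"]
draws = [(a, b) for a in balls for b in balls if a != b]  # ordered pairs
first_red = [d for d in draws if d[0].startswith("R")]
both_red = [d for d in first_red if d[1].startswith("R")]
print(Fraction(len(both_red), len(first_red)))  # 1/3

# Bayes' rule: posterior proportional to prior times likelihood.
# Toy example: a coin is fair or two-headed with equal prior; we see heads.
prior = {"fair": Fraction(1, 2), "two-headed": Fraction(1, 2)}
likelihood = {"fair": Fraction(1, 2), "two-headed": Fraction(1, 1)}
unnormalized = {h: prior[h] * likelihood[h] for h in prior}
total = sum(unnormalized.values())
print({h: p / total for h, p in unnormalized.items()})
# {'fair': Fraction(1, 3), 'two-headed': Fraction(2, 3)}
```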
https://en.wikipedia.org/wiki/Probability
Local regression or local polynomial regression,[1] also known as moving regression,[2] is a generalization of the moving average and polynomial regression.[3] Its most common methods, initially developed for scatterplot smoothing, are LOESS (locally estimated scatterplot smoothing) and LOWESS (locally weighted scatterplot smoothing), both pronounced /ˈloʊɛs/ LOH-ess. They are two strongly related non-parametric regression methods that combine multiple regression models in a k-nearest-neighbor-based meta-model. In some fields, LOESS is known and commonly referred to as the Savitzky–Golay filter[4][5] (proposed 15 years before LOESS).

LOESS and LOWESS thus build on "classical" methods, such as linear and nonlinear least squares regression. They address situations in which the classical procedures do not perform well or cannot be effectively applied without undue labor. LOESS combines much of the simplicity of linear least squares regression with the flexibility of nonlinear regression. It does this by fitting simple models to localized subsets of the data to build up a function that describes the deterministic part of the variation in the data, point by point. In fact, one of the chief attractions of this method is that the data analyst is not required to specify a global function of any form to fit a model to the data, only to fit segments of the data.

The trade-off for these features is increased computation. Because it is so computationally intensive, LOESS would have been practically impossible to use in the era when least squares regression was being developed. Most other modern methods for process modeling are similar to LOESS in this respect. These methods have been consciously designed to use our current computational ability to the fullest possible advantage to achieve goals not easily achieved by traditional approaches.

A smooth curve through a set of data points obtained with this statistical technique is called a loess curve, particularly when each smoothed value is given by a weighted quadratic least squares regression over the span of values of the y-axis scattergram criterion variable. When each smoothed value is given by a weighted linear least squares regression over the span, this is known as a lowess curve; however, some authorities treat lowess and loess as synonyms.[6][7]

Local regression and closely related procedures have a long and rich history, having been discovered and rediscovered in different fields on multiple occasions. An early work by Robert Henderson[8] studying the problem of graduation (a term for smoothing used in the actuarial literature) introduced local regression using cubic polynomials. Specifically, let $Y_j$ denote an ungraduated sequence of observations. Following Henderson, suppose that only the terms from $Y_{-h}$ to $Y_h$ are to be taken into account when computing the graduated value of $Y_0$, and $W_j$ is the weight to be assigned to $Y_j$. Henderson then uses a local polynomial approximation $a + bj + cj^2 + dj^3$, and sets up the following four equations (the weighted least squares normal equations) for the coefficients:
$$\sum_{j=-h}^{h} j^k\, W_j \left( Y_j - a - bj - cj^2 - dj^3 \right) = 0, \qquad k = 0, 1, 2, 3.$$
Solving these equations for the polynomial coefficients yields the graduated value, $\hat Y_0 = a$.

Henderson went further. In preceding years, many 'summation formula' methods of graduation had been developed, which derived graduation rules based on summation formulae (convolution of the series of observations with a chosen set of weights).
Two such rules are the 15-point and 21-point rules of Spencer (1904).[9] These graduation rules were carefully designed to have a quadratic-reproducing property: if the ungraduated values exactly follow a quadratic formula, then the graduated values equal the ungraduated values. This is an important property: a simple moving average, by contrast, cannot adequately model peaks and troughs in the data. Henderson's insight was to show that any such graduation rule can be represented as a local cubic (or quadratic) fit for an appropriate choice of weights. Further discussions of the historical work on graduation and local polynomial fitting can be found in Macaulay (1931),[10] Cleveland and Loader (1995),[11] and Murray and Bellhouse (2019).[12]

The Savitzky–Golay filter, introduced by Abraham Savitzky and Marcel J. E. Golay (1964),[13] significantly expanded the method. Like the earlier graduation work, their focus was data with an equally-spaced predictor variable, where (excluding boundary effects) local regression can be represented as a convolution. Savitzky and Golay published extensive sets of convolution coefficients for different orders of polynomial and smoothing window widths.

Local regression methods started to appear extensively in the statistics literature in the 1970s; for example, Charles J. Stone (1977),[14] Vladimir Katkovnik (1979)[15] and William S. Cleveland (1979).[16] Katkovnik (1985)[17] is the earliest book devoted primarily to local regression methods.

Theoretical work continued to appear throughout the 1990s. Important contributions include Jianqing Fan and Irène Gijbels (1992),[18] studying efficiency properties, and David Ruppert and Matthew P. Wand (1994),[19] developing an asymptotic distribution theory for multivariate local regression.

An important extension of local regression is Local Likelihood Estimation, formulated by Robert Tibshirani and Trevor Hastie (1987).[20] This replaces the local least-squares criterion with a likelihood-based criterion, thereby extending the local regression method to the generalized linear model setting; for example binary data, count data or censored data.

Practical implementations of local regression began appearing in statistical software in the 1980s. Cleveland (1981)[21] introduces the LOWESS routines, intended for smoothing scatterplots. This implements local linear fitting with a single predictor variable, and also introduces robustness downweighting to make the procedure resistant to outliers. An entirely new implementation, LOESS, is described in Cleveland and Susan J. Devlin (1988).[22] LOESS is a multivariate smoother, able to handle spatial data with two (or more) predictor variables, and uses (by default) local quadratic fitting. Both LOWESS and LOESS are implemented in the S and R programming languages. See also Cleveland's Local Fitting Software.[23]

While Local Regression, LOWESS and LOESS are sometimes used interchangeably, this usage should be considered incorrect. Local Regression is a general term for the fitting procedure; LOWESS and LOESS are two distinct implementations.

Local regression uses a data set consisting of observations of one or more 'independent' or 'predictor' variables, and a 'dependent' or 'response' variable. The data set consists of $n$ observations. The observations of the predictor variable can be denoted $x_1, \ldots, x_n$, and the corresponding observations of the response variable by $Y_1, \ldots, Y_n$.
For ease of presentation, the development below assumes a single predictor variable; the extension to multiple predictors (when the $x_i$ are vectors) is conceptually straightforward. A functional relationship between the predictor and response variables is assumed:
$$Y_i = \mu(x_i) + \epsilon_i,$$
where $\mu(x)$ is the unknown 'smooth' regression function to be estimated, and represents the conditional expectation of the response, given a value of the predictor variables. In theoretical work, the 'smoothness' of this function can be formally characterized by placing bounds on higher order derivatives. The $\epsilon_i$ represent random errors; for estimation purposes these are assumed to have mean zero. Stronger assumptions (e.g., independence and equal variance) may be made when assessing properties of the estimates.

Local regression then estimates the function $\mu(x)$, for one value of $x$ at a time. Since the function is assumed to be smooth, the most informative data points are those whose $x_i$ values are close to $x$. This is formalized with a bandwidth $h$ and a kernel or weight function $W(\cdot)$, with observations assigned weights
$$w_i(x) = W\!\left(\frac{x_i - x}{h}\right).$$
A typical choice of $W$, used by Cleveland in LOWESS, is $W(u) = (1 - |u|^3)^3$ for $|u| < 1$, although any similar function (peaked at $u = 0$ and small or 0 for large values of $u$) can be used. Questions of bandwidth selection and specification (how large should $h$ be, and should it vary depending upon the fitting point $x$?) are deferred for now.

A local model (usually a low-order polynomial with degree $p \le 3$), expressed as
$$\mu(x_i) \approx \beta_0 + \beta_1 (x_i - x) + \ldots + \beta_p (x_i - x)^p,$$
is then fitted by weighted least squares: choose regression coefficients $(\hat\beta_0, \ldots, \hat\beta_p)$ to minimize
$$\sum_{i=1}^n w_i(x) \left( Y_i - \beta_0 - \beta_1 (x_i - x) - \ldots - \beta_p (x_i - x)^p \right)^2.$$
The local regression estimate of $\mu(x)$ is then simply the intercept estimate,
$$\hat\mu(x) = \hat\beta_0,$$
while the remaining coefficients can be interpreted (up to a factor of $p!$) as derivative estimates.

It is to be emphasized that the above procedure produces the estimate $\hat\mu(x)$ for one value of $x$. When considering a new value of $x$, a new set of weights $w_i(x)$ must be computed, and the regression coefficients estimated afresh.
As with all least squares estimates, the estimated regression coefficients can be expressed in closed form (see weighted least squares for details):
$$\hat{\boldsymbol\beta} = (\mathbf{X}^{\mathsf T}\mathbf{W}\mathbf{X})^{-1}\mathbf{X}^{\mathsf T}\mathbf{W}\mathbf{y},$$
where $\hat{\boldsymbol\beta}$ is a vector of the local regression coefficients; $\mathbf{X}$ is the $n \times (p+1)$ design matrix with entries $(x_i - x)^j$; $\mathbf{W}$ is a diagonal matrix of the smoothing weights $w_i(x)$; and $\mathbf{y}$ is a vector of the responses $Y_i$.

This matrix representation is crucial for studying the theoretical properties of local regression estimates. With appropriate definitions of the design and weight matrices, it immediately generalizes to the multiple-predictor setting.

Implementation of local regression requires specification and selection of several components: the bandwidth, the degree of the local polynomial, the weight function, and the fitting criterion. Each of these components has been the subject of extensive study; a summary is provided below.

The bandwidth $h$ controls the resolution of the local regression estimate. If $h$ is too small, the estimate may show high-resolution features that represent noise in the data, rather than any real structure in the mean function. Conversely, if $h$ is too large, the estimate will only show low-resolution features, and important structure may be lost. This is the bias-variance tradeoff; if $h$ is too small, the estimate exhibits large variation, while at large $h$, the estimate exhibits large bias. Careful choice of bandwidth is therefore crucial when applying local regression.

Mathematical methods for bandwidth selection require, firstly, formal criteria to assess the performance of an estimate. One such criterion is prediction error: if a new observation is made at $\tilde x$, how well does the estimate $\hat\mu(\tilde x)$ predict the new response $\tilde Y$?

Performance is often assessed using a squared-error loss function. The mean squared prediction error is
$$E\left(\tilde Y - \hat\mu(\tilde x)\right)^2 = E\left(\tilde Y - \mu(\tilde x) + \mu(\tilde x) - \hat\mu(\tilde x)\right)^2 = E\left(\tilde Y - \mu(\tilde x)\right)^2 + E\left(\mu(\tilde x) - \hat\mu(\tilde x)\right)^2,$$
the cross term vanishing because the new error $\tilde Y - \mu(\tilde x)$ has mean zero and is independent of the data used to form the estimate. The first term, $E\left(\tilde Y - \mu(\tilde x)\right)^2$, is the random variation of the observation; this is entirely independent of the local regression estimate. The second term, $E\left(\mu(\tilde x) - \hat\mu(\tilde x)\right)^2$, is the mean squared estimation error. This relation shows that, for squared error loss, minimizing prediction error and estimation error are equivalent problems.

In global bandwidth selection, these measures can be integrated over the $x$ space ("mean integrated squared error", often used in theoretical work), or averaged over the actual $x_i$ (more useful for practical implementations). Some standard techniques from model selection, such as cross-validation and related criteria, can be readily adapted to local regression; any of these criteria can be minimized to produce an automatic bandwidth selector. Cleveland and Devlin[22] prefer a graphical method (the M-plot) to visually display the bias-variance trade-off and guide bandwidth choice.
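Putting the pieces above together, here is a minimal NumPy sketch of local polynomial regression at a set of fitting points, using the tricube weight function and the closed-form weighted least squares solution; the function names and test data are illustrative assumptions, not drawn from any LOESS implementation.

```python
import numpy as np

def tricube(u):
    """Tricube kernel W(u) = (1 - |u|^3)^3 for |u| < 1, else 0."""
    u = np.abs(u)
    return np.where(u < 1, (1 - u**3) ** 3, 0.0)

def local_fit(x0, x, y, h, p=2):
    """Local polynomial estimate of mu(x0): weighted least squares of
    degree p with weights w_i(x0) = W((x_i - x0) / h)."""
    w = tricube((x - x0) / h)
    X = np.vander(x - x0, N=p + 1, increasing=True)   # columns (x_i - x0)^j
    W = np.diag(w)
    beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)  # (X'WX)^{-1} X'W y
    return beta[0]                                    # intercept = mu-hat(x0)

rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0, 10, 200))
y = np.sin(x) + rng.normal(0, 0.2, 200)

grid = np.linspace(0.5, 9.5, 50)
fit = np.array([local_fit(x0, x, y, h=1.5) for x0 in grid])
print(np.max(np.abs(fit - np.sin(grid))))  # small: the sketch tracks sin(x)
```

A fixed bandwidth $h$ is used here; a nearest-neighbor variant (as in LOWESS, discussed next) would instead set $h$ at each $x_0$ to the distance of the $\lceil n\alpha \rceil$-th closest observation.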
One question not addressed above is: how should the bandwidth depend upon the fitting point $x$? Often a constant bandwidth is used, while LOWESS and LOESS prefer a nearest-neighbor bandwidth, meaning $h$ is smaller in regions with many data points. Formally, the smoothing parameter, $\alpha$, is the fraction of the total number $n$ of data points that are used in each local fit. The subset of data used in each weighted least squares fit thus comprises the $n\alpha$ points (rounded to the next largest integer) whose explanatory variables' values are closest to the point at which the response is being estimated.[7]

More sophisticated methods attempt to choose the bandwidth adaptively; that is, to choose a bandwidth at each fitting point $x$ by applying criteria such as cross-validation locally within the smoothing window. An early example of this is Jerome H. Friedman's[24] "supersmoother", which uses cross-validation to choose among local linear fits at different bandwidths.

Most sources, in both theoretical and computational work, use low-order polynomials as the local model, with polynomial degree ranging from 0 to 3. The degree 0 (local constant) model is equivalent to a kernel smoother, usually credited to Èlizbar Nadaraya (1964)[25] and G. S. Watson (1964).[26] This is the simplest model to use, but can suffer from bias when fitting near boundaries of the dataset.

Local linear (degree 1) fitting can substantially reduce the boundary bias. Local quadratic (degree 2) and local cubic (degree 3) fits can result in improved fits, particularly when the underlying mean function $\mu(x)$ has substantial curvature, or equivalently a large second derivative. In theory, higher orders of polynomial can lead to faster convergence of the estimate $\hat\mu(x)$ to the true mean $\mu(x)$, provided that $\mu(x)$ has a sufficient number of derivatives; see C. J. Stone (1980).[27] Generally, it takes a large sample size for this faster convergence to be realized. There are also computational and stability issues that arise, particularly for multivariate smoothing. It is generally not recommended to use local polynomials with degree greater than 3. As with bandwidth selection, methods such as cross-validation can be used to compare the fits obtained with different degrees of polynomial.

As mentioned above, the weight function gives the most weight to the data points nearest the point of estimation and the least weight to the data points that are furthest away. The use of the weights is based on the idea that points near each other in the explanatory variable space are more likely to be related to each other in a simple way than points that are further apart. Following this logic, points that are likely to follow the local model best influence the local model parameter estimates the most. Points that are less likely to actually conform to the local model have less influence on the local model parameter estimates.

Cleveland (1979)[16] sets out four requirements for the weight function: that $W(x) > 0$ for $|x| < 1$; that $W(-x) = W(x)$; that $W(x)$ be nonincreasing for $x \ge 0$; and that $W(x) = 0$ for $|x| \ge 1$.

Asymptotic efficiency of weight functions has been considered by V. A. Epanechnikov (1969)[28] in the context of kernel density estimation; J. Fan (1993)[29] has derived similar results for local regression. They conclude that the quadratic kernel, $W(x) = 1 - x^2$ for $|x| \le 1$, has greatest efficiency under a mean-squared-error loss function.
See "kernel functions in common use" for more discussion of different kernels and their efficiencies. Considerations other than MSE are also relevant to the choice of weight function. Smoothness properties of $W(x)$ directly affect smoothness of the estimate $\hat\mu(x)$. In particular, the quadratic kernel is not differentiable at $x = \pm 1$, and $\hat\mu(x)$ is not differentiable as a result. The tri-cube weight function,
$$W(x) = (1 - |x|^3)^3, \qquad |x| < 1,$$
has been used in LOWESS and other local regression software; this combines higher-order differentiability with a high MSE efficiency.

One criticism of weight functions with bounded support is that they can lead to numerical problems (i.e., an unstable or singular design matrix) when fitting in regions with sparse data. For this reason, some authors[who?] choose to use the Gaussian kernel, or others with unbounded support.

As described above, local regression uses a locally weighted least squares criterion to estimate the regression parameters. This inherits many of the advantages (ease of implementation and interpretation; good properties when errors are normally distributed) and disadvantages (sensitivity to extreme values and outliers; inefficiency when errors have unequal variance or are not normally distributed) usually associated with least squares regression. These disadvantages can be addressed by replacing the local least-squares estimation by something else. Two such ideas are presented here: local likelihood estimation, which applies local estimation to the generalized linear model, and robust local regression, which localizes methods from robust regression.

In local likelihood estimation, developed in Tibshirani and Hastie (1987),[20] the observations $Y_i$ are assumed to come from a parametric family of distributions, with a known probability density function (or mass function, for discrete data),
$$Y_i \sim f(y, \theta(x_i)),$$
where the parameter function $\theta(x)$ is the unknown quantity to be estimated. To estimate $\theta(x)$ at a particular point $x$, the local likelihood criterion is
$$\sum_{i=1}^n w_i(x) \log f\!\left(Y_i,\, \beta_0 + \beta_1 (x_i - x) + \ldots + \beta_p (x_i - x)^p\right).$$
Estimates of the regression coefficients (in particular, $\hat\beta_0$) are obtained by maximizing the local likelihood criterion, and the local likelihood estimate is
$$\hat\theta(x) = \hat\beta_0.$$

When $f(y, \theta(x))$ is the normal distribution and $\theta(x)$ is the mean function, the local likelihood method reduces to the standard local least-squares regression. For other likelihood families, there is (usually) no closed-form solution for the local likelihood estimate, and iterative procedures such as iteratively reweighted least squares must be used to compute the estimate.

Example (local logistic regression). All response observations are 0 or 1, and the mean function is the "success" probability, $\mu(x_i) = \Pr(Y_i = 1 \mid x_i)$. Since $\mu(x_i)$ must be between 0 and 1, a local polynomial model should not be used for $\mu(x)$ directly.
Instead, the logistic transformation
$$\theta(x) = \log\left(\frac{\mu(x)}{1 - \mu(x)}\right)$$
can be used; equivalently,
$$1 - \mu(x) = \frac{1}{1 + e^{\theta(x)}}; \qquad \mu(x) = \frac{e^{\theta(x)}}{1 + e^{\theta(x)}},$$
and the mass function is
$$f(Y_i, \theta(x_i)) = \frac{e^{Y_i \theta(x_i)}}{1 + e^{\theta(x_i)}}.$$

An asymptotic theory for local likelihood estimation is developed in J. Fan, Nancy E. Heckman and M. P. Wand (1995);[30] the book Loader (1999)[31] discusses many more applications of local likelihood.

To address the sensitivity to outliers, techniques from robust regression can be employed. In local M-estimation, the local least-squares criterion is replaced by a criterion of the form
$$\sum_{i=1}^n w_i(x)\, \rho\!\left(\frac{Y_i - \beta_0 - \ldots - \beta_p (x_i - x)^p}{s}\right),$$
where $\rho(\cdot)$ is a robustness function and $s$ is a scale parameter. Discussion of the merits of different choices of robustness function is best left to the robust regression literature. The scale parameter $s$ must also be estimated. References for local M-estimation include Katkovnik (1985)[17] and Alexandre Tsybakov (1986).[32]

The robustness iterations in LOWESS and LOESS correspond to the robustness function defined by $\rho'(u) = u(1 - u^2/6)^2$ for $|u| < 1$, and a robust global estimate of the scale parameter.

If $\rho(u) = |u|$, the local $L_1$ criterion
$$\sum_{i=1}^n w_i(x) \left| Y_i - \beta_0 - \ldots - \beta_p (x_i - x)^p \right|$$
results; this does not require a scale parameter. When $p = 0$, this criterion is minimized by a locally weighted median; local $L_1$ regression can be interpreted as estimating the median, rather than mean, response. If the loss function is skewed, this becomes local quantile regression. See Keming Yu and M. C. Jones (1998).[33]

As discussed above, the biggest advantage LOESS has over many other methods is that the process of fitting a model to the sample data does not begin with the specification of a function. Instead the analyst only has to provide a smoothing parameter value and the degree of the local polynomial. In addition, LOESS is very flexible, making it ideal for modeling complex processes for which no theoretical models exist. These two advantages, combined with the simplicity of the method, make LOESS one of the most attractive of the modern regression methods for applications that fit the general framework of least squares regression but which have a complex deterministic structure.

Although it is less obvious than for some of the other methods related to linear least squares regression, LOESS also accrues most of the benefits typically shared by those procedures. The most important of those is the theory for computing uncertainties for prediction and calibration. Many other tests and procedures used for validation of least squares models can also be extended to LOESS models[citation needed].

LOESS makes less efficient use of data than other least squares methods. It requires fairly large, densely sampled data sets in order to produce good models. This is because LOESS relies on the local data structure when performing the local fitting.
Thus, LOESS provides less complex data analysis in exchange for greater experimental costs.[7]

Another disadvantage of LOESS is the fact that it does not produce a regression function that is easily represented by a mathematical formula. This can make it difficult to transfer the results of an analysis to other people. In order to transfer the regression function to another person, they would need the data set and software for LOESS calculations. In nonlinear regression, on the other hand, it is only necessary to write down a functional form in order to provide estimates of the unknown parameters and the estimated uncertainty. Depending on the application, this could be either a major or a minor drawback to using LOESS. In particular, the simple form of LOESS cannot be used for mechanistic modelling where fitted parameters specify particular physical properties of a system.

Finally, as discussed above, LOESS is a computationally intensive method (with the exception of evenly spaced data, where the regression can then be phrased as a non-causal finite impulse response filter). LOESS is also prone to the effects of outliers in the data set, like other least squares methods. There is an iterative, robust version of LOESS [Cleveland (1979)] that can be used to reduce LOESS' sensitivity to outliers, but too many extreme outliers can still overcome even the robust method.

This article incorporates public domain material from the National Institute of Standards and Technology.
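For practical use, ready-made implementations exist; for example, the statsmodels Python package provides a `lowess` function exposing the nearest-neighbor smoothing fraction and the robustness iterations discussed above. The data below are illustrative:

```python
import numpy as np
from statsmodels.nonparametric.smoothers_lowess import lowess

rng = np.random.default_rng(1)
x = np.linspace(0, 10, 200)
y = np.sin(x) + rng.normal(0, 0.2, 200)
y[::25] += 5                    # inject a few gross outliers

# frac: nearest-neighbor smoothing fraction (the alpha above);
# it: number of robustness iterations that downweight outliers.
smoothed = lowess(y, x, frac=0.3, it=3)  # returns columns [x, fitted value]
print(smoothed[:5])
```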
https://en.wikipedia.org/wiki/Local_polynomial_regression
Data corruption refers to errors in computer data that occur during writing, reading, storage, transmission, or processing, which introduce unintended changes to the original data. Computer, transmission, and storage systems use a number of measures to provide end-to-end data integrity, or lack of errors.

In general, when data corruption occurs, a file containing that data will produce unexpected results when accessed by the system or the related application. Results could range from a minor loss of data to a system crash. For example, if a document file is corrupted, when a person tries to open that file with a document editor they may get an error message; the file might not open at all, or it might open with some of the data corrupted (or, in some cases, completely corrupted, leaving the document unintelligible).

Some types of malware may intentionally corrupt files as part of their payloads, usually by overwriting them with inoperative or garbage code, while a non-malicious virus may also unintentionally corrupt files when it accesses them. If a virus or trojan with this payload method manages to alter files critical to the running of the computer's operating system software or physical hardware, the entire system may be rendered unusable.

Some programs can offer to repair the file automatically after the error occurs, while others cannot. Whether repair is possible depends on the level of corruption and on the application's built-in functionality for handling the error. Data corruption has various causes.

There are two types of data corruption associated with computer systems: undetected and detected. Undetected data corruption, also known as silent data corruption, results in the most dangerous errors, as there is no indication that the data is incorrect. Detected data corruption may be permanent with the loss of data, or may be temporary when some part of the system is able to detect and correct the error; there is no data corruption in the latter case.

Data corruption can occur at any level in a system, from the host to the storage medium. Modern systems attempt to detect corruption at many layers and then recover or correct the corruption; this is almost always successful, but on very rare occasions corrupted information arrives in the system's memory and can cause unpredictable results.

Data corruption during transmission has a variety of causes. Interruption of data transmission causes information loss. Environmental conditions can interfere with data transmission, especially when dealing with wireless transmission methods. Heavy clouds can block satellite transmissions. Wireless networks are susceptible to interference from devices such as microwave ovens.

Hardware and software failure are the two main causes for data loss. Background radiation, head crashes, and aging or wear of the storage device fall into the former category, while software failure typically occurs due to bugs in the code. Cosmic rays cause most soft errors in DRAM.[1]

Some errors go unnoticed, without being detected by the disk firmware or the host operating system; these errors are known as silent data corruption.[2]

There are many error sources beyond the disk storage subsystem itself. For instance, cables might be slightly loose, the power supply might be unreliable,[3] external vibrations such as a loud sound can interfere,[4] the network might introduce undetected corruption,[5] and cosmic radiation and many other causes of soft memory errors also play a role.
In a study of 39,000 storage systems, firmware bugs accounted for 5–10% of storage failures.[6] All in all, the error rates as observed by a CERN study on silent corruption are far higher than one in every $10^{16}$ bits.[7] The webshop Amazon.com has acknowledged similar high data corruption rates in their systems.[8] In 2021, faulty processor cores were identified as an additional cause in publications by Google and Facebook; cores were found to be faulty at a rate of several in thousands of cores.[9][10]

One problem is that hard disk drive capacities have increased substantially, but their error rates remain unchanged. The data corruption rate has always been roughly constant in time, meaning that modern disks are not much safer than old disks. In old disks, the probability of data corruption was very small because they stored tiny amounts of data; in modern disks, the probability is much larger because they store much more data whilst not being safer. Thus, silent data corruption was not a serious concern while storage devices remained relatively small and slow. In modern times, with the advent of larger drives and very fast RAID setups, users are capable of transferring $10^{16}$ bits in a reasonably short time, thus easily reaching the data corruption thresholds.[11]

As an example, ZFS creator Jeff Bonwick stated that the fast database at Greenplum, a database software company specializing in large-scale data warehousing and analytics, faces silent corruption every 15 minutes.[12] As another example, a real-life study performed by NetApp on more than 1.5 million HDDs over 41 months found more than 400,000 silent data corruptions, out of which more than 30,000 were not detected by the hardware RAID controller (they were only detected during scrubbing).[13] Another study, performed by CERN over six months and involving about 97 petabytes of data, found that about 128 megabytes of data became permanently and silently corrupted somewhere in the pathway from network to disk.[14]

Silent data corruption may result in cascading failures, in which the system may run for a period of time with an undetected initial error causing increasingly more problems until the corruption is ultimately detected.[15] For example, a failure affecting file system metadata can result in multiple files being partially damaged or made completely inaccessible as the file system is used in its corrupted state.

When data corruption behaves as a Poisson process, where each bit of data has an independently low probability of being changed, data corruption can generally be detected by the use of checksums, and can often be corrected by the use of error correcting codes (ECC). If an uncorrectable data corruption is detected, procedures such as automatic retransmission or restoration from backups can be applied. Certain levels of RAID disk arrays have the ability to store and evaluate parity bits for data across a set of hard disks and can reconstruct corrupted data upon the failure of a single or multiple disks, depending on the level of RAID implemented. Some CPU architectures employ various transparent checks to detect and mitigate data corruption in CPU caches, CPU buffers and instruction pipelines; an example is Intel Instruction Replay technology, which is available on Intel Itanium processors.[16]

Many errors are detected and corrected by hard disk drives using the ECC codes[17] which are stored on disk for each sector.
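As a toy illustration of checksum-based detection (using Python's standard `zlib.crc32`, not the mechanism of any particular storage stack): a checksum stored at write time lets a reader notice a flipped bit, although a plain checksum can only detect, not correct, the damage.

```python
import zlib

data = bytearray(b"important payload")
stored_crc = zlib.crc32(data)      # checksum computed when data is written

data[3] ^= 0x01                    # simulate a single-bit flip in storage

if zlib.crc32(data) != stored_crc:
    # Detection only: recovery would require retransmission, a backup
    # copy, or redundancy such as an error correcting code.
    print("corruption detected")
```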
If the disk drive detects multiple read errors on a sector, it may make a copy of the failing sector on another part of the disk by remapping the failed sector to a spare sector, without the involvement of the operating system (though this may be delayed until the next write to the sector). This "silent correction" can be monitored using S.M.A.R.T., and tools available for most operating systems can automatically check the disk drive for impending failures by watching for deteriorating SMART parameters.

Some file systems, such as Btrfs, HAMMER, ReFS, and ZFS, use internal data and metadata checksumming to detect silent data corruption. In addition, if a corruption is detected and the file system uses integrated RAID mechanisms that provide data redundancy, such file systems can also reconstruct corrupted data in a transparent way.[18] This approach allows improved data integrity protection covering the entire data path, which is usually known as end-to-end data protection, compared with other data integrity approaches that do not span different layers in the storage stack and allow data corruption to occur while the data passes boundaries between the different layers.[19]

Data scrubbing is another method to reduce the likelihood of data corruption, as disk errors are caught and recovered from before multiple errors accumulate and overwhelm the number of parity bits. Instead of parity being checked on each read, the parity is checked during a regular scan of the disk, often done as a low-priority background process. The "data scrubbing" operation activates a parity check. If a user simply runs a normal program that reads data from the disk, then the parity would not be checked unless parity-check-on-read was both supported and enabled on the disk subsystem.

If appropriate mechanisms are employed to detect and remedy data corruption, data integrity can be maintained. This is particularly important in commercial applications (e.g., banking), where an undetected error could either corrupt a database index or change data to drastically affect an account balance, and in the use of encrypted or compressed data, where a small error can make an extensive dataset unusable.[7]
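To illustrate the redundancy-based reconstruction mentioned above, here is a toy sketch of XOR parity across a stripe of equally sized blocks, in the spirit of RAID parity (a simplification, not a real RAID implementation): any single lost block can be rebuilt from the surviving blocks plus the parity.

```python
from functools import reduce

def xor_blocks(blocks):
    """Bytewise XOR of equally sized byte blocks."""
    return bytes(reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), blocks))

stripe = [b"block-A!", b"block-B!", b"block-C!"]
parity = xor_blocks(stripe)

lost = stripe[1]                                   # pretend this block is gone
rebuilt = xor_blocks([stripe[0], stripe[2], parity])
assert rebuilt == lost
print("rebuilt:", rebuilt)
```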
https://en.wikipedia.org/wiki/End-to-end_data_integrity
A repeating decimal or recurring decimal is a decimal representation of a number whose digits are eventually periodic (that is, after some place, the same sequence of digits is repeated forever); if this sequence consists only of zeros (that is, if there is only a finite number of nonzero digits), the decimal is said to be terminating, and is not considered as repeating. It can be shown that a number is rational if and only if its decimal representation is repeating or terminating. For example, the decimal representation of 1/3 becomes periodic just after the decimal point, repeating the single digit "3" forever, i.e. 0.333.... A more complicated example is 3227/555, whose decimal becomes periodic at the second digit following the decimal point and then repeats the sequence "144" forever, i.e. 5.8144144144.... Another example of this is 593/53, which becomes periodic after the decimal point, repeating the 13-digit pattern "1886792452830" forever, i.e. 11.18867924528301886792452830....

The infinitely repeated digit sequence is called the repetend or reptend. If the repetend is a zero, this decimal representation is called a terminating decimal rather than a repeating decimal, since the zeros can be omitted and the decimal terminates before these zeros.[1] Every terminating decimal representation can be written as a decimal fraction, a fraction whose denominator is a power of 10 (e.g. $1.585 = \tfrac{1585}{1000}$); it may also be written as a ratio of the form $\tfrac{k}{2^n 5^m}$ (e.g. $1.585 = \tfrac{317}{2^3 \cdot 5^2}$). However, every number with a terminating decimal representation also trivially has a second, alternative representation as a repeating decimal whose repetend is the digit "9". This is obtained by decreasing the final (rightmost) non-zero digit by one and appending a repetend of 9. Two examples of this are 1.000... = 0.999... and 1.585000... = 1.584999.... (This type of repeating decimal can be obtained by long division if one uses a modified form of the usual division algorithm.[2])

Any number that cannot be expressed as a ratio of two integers is said to be irrational. Their decimal representation neither terminates nor infinitely repeats, but extends forever without repetition (see § Every rational number is either a terminating or repeating decimal). Examples of such irrational numbers are √2 and π.[3]

There are several notational conventions for representing repeating decimals. None of them are accepted universally.

In English, there are various ways to read repeating decimals aloud. For example, $1.2\overline{34}$ may be read "one point two repeating three four", "one point two repeated three four", "one point two recurring three four", "one point two repetend three four" or "one point two into infinity three four". Likewise, $11.\overline{1886792452830}$ may be read "eleven point repeating one double eight six seven nine two four five two eight three zero", "eleven point repeated one double eight six seven nine two four five two eight three zero", "eleven point recurring one double eight six seven nine two four five two eight three zero", "eleven point repetend one double eight six seven nine two four five two eight three zero" or "eleven point into infinity one double eight six seven nine two four five two eight three zero".

In order to convert a rational number represented as a fraction into decimal form, one may use long division. For example, consider the rational number 5/74. Dividing 500 by 74 gives 6 (since 6 × 74 = 444) with remainder 56; dividing 560 by 74 gives 7 (518) with remainder 42; dividing 420 by 74 gives 5 (370) with remainder 50; and so on, so that 5/74 = 0.0675.... Observe that at each step we have a remainder; the successive remainders displayed above are 56, 42, 50.
When we arrive at 50 as the remainder, and bring down the "0", we find ourselves dividing 500 by 74, which is the same problem we began with. Therefore, the decimal repeats: 0.0675675675....

For any integer fraction A/B, the remainder at step k, for any positive integer k, is A × 10^k (modulo B). For any given divisor, only finitely many different remainders can occur. In the example above, the 74 possible remainders are 0, 1, 2, ..., 73. If at any point in the division the remainder is 0, the expansion terminates at that point. Then the length of the repetend, also called "period", is defined to be 0.

If 0 never occurs as a remainder, then the division process continues forever, and eventually, a remainder must occur that has occurred before. The next step in the division will yield the same new digit in the quotient, and the same new remainder, as the previous time the remainder was the same. Therefore, the following division will repeat the same results. The repeating sequence of digits is called the "repetend", which has a certain length greater than 0, also called the "period".[5]

In base 10, a fraction has a repeating decimal if and only if, in lowest terms, its denominator has any prime factors besides 2 or 5; in other words, if it cannot be expressed as $2^m 5^n$, where m and n are non-negative integers.

Each repeating decimal number satisfies a linear equation with integer coefficients, and its unique solution is a rational number. In the example above, α = 5.8144144144... satisfies the equation
$$10000\alpha - 10\alpha = 58144.144144\ldots - 58.144144\ldots = 58086,$$
whose unique solution is the rational number $\alpha = \tfrac{58086}{9990} = \tfrac{3227}{555}$. The process of how to find these integer coefficients is described below.

Given a repeating decimal $x = a.b\overline{c}$ where $a$, $b$, and $c$ are groups of digits, let $n$ be the number of digits of $b$. Multiplying by $10^n$ separates the repeating and terminating groups:
$$10^n x = ab.\overline{c}.$$
If the decimals terminate ($c = 0$), the proof is complete.[6] For $c \neq 0$ with $k \in \mathbb{N}$ digits, let $x = y.\overline{c}$ where $y \in \mathbb{Z}$ is a terminating group of digits. Then
$$c = d_1 d_2 \ldots d_k,$$
where $d_i$ denotes the $i$-th digit, and
$$x = y + \sum_{j=1}^{\infty} \frac{c}{(10^k)^j} = y + \left( c \sum_{j=0}^{\infty} \frac{1}{(10^k)^j} \right) - c.$$
Since $\sum_{j=0}^{\infty} \frac{1}{(10^k)^j} = \frac{1}{1 - 10^{-k}}$,[7]
$$x = y - c + \frac{10^k c}{10^k - 1}.$$
Since $x$ is the sum of an integer ($y - c$) and a rational number ($\tfrac{10^k c}{10^k - 1}$), $x$ is also rational.[8]

In the following, 1/n denotes the unit fraction with denominator n, and ℓ10(n) denotes the length of its (decimal) repetend. The lengths ℓ10(n) for n = 1, 2, 3, ..., are:
0, 0, 1, 0, 0, 1, 6, 0, 1, 0, 2, 1, 6, 6, 1, 0, 16, 1, 18, 0, ...

For comparison, the lengths ℓ2(n) of the binary repetends of the fractions 1/n, n = 1, 2, 3, ..., are:
0, 0, 2, 0, 4, 2, 3, 0, 6, 4, 10, 2, 12, 3, 4, 0, 8, 6, 18, 4, ...

The decimal repetends of 1/n, n = 1, 2, 3, ..., are:
0, 0, 3, 0, 0, 6, 142857, 0, 1, 0, 09, 3, 076923, 714285, 6, 0, ...
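The remainder-driven long division just described is easy to mechanize: track each remainder and stop when one recurs (a repetend is found) or reaches 0 (the expansion terminates). A minimal Python sketch (the function name and interface are illustrative):

```python
def decimal_expansion(a, b):
    """Digits of a/b for 0 < a < b: returns (pre-repetend digits, repetend),
    detecting repetition when a remainder recurs during long division."""
    seen = {}                    # remainder -> index of the digit it produced
    digits = []
    r = a % b
    while r != 0 and r not in seen:
        seen[r] = len(digits)
        r *= 10
        digits.append(str(r // b))
        r %= b
    if r == 0:
        return "".join(digits), ""            # terminating decimal
    start = seen[r]
    return "".join(digits[:start]), "".join(digits[start:])

print(decimal_expansion(5, 74))   # ('0', '675')  ->  5/74 = 0.0675675...
print(decimal_expansion(1, 6))    # ('1', '6')    ->  1/6  = 0.1666...
```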
The decimal repetend lengths of 1/p, p = 2, 3, 5, 7, 11, ... (the nth prime), are:
0, 1, 0, 6, 2, 6, 16, 18, 22, 28, ...

The least primes p for which 1/p has decimal repetend length n, n = 1, 2, 3, ..., are:
3, 11, 37, 101, 41, 7, 239, 73, ...

The least primes p for which k/p has n different cycles (1 ≤ k ≤ p − 1), n = 1, 2, 3, ..., are:
7, 3, 103, 53, 11, 79, ...

A fraction in lowest terms with a prime denominator other than 2 or 5 (i.e. coprime to 10) always produces a repeating decimal. The length of the repetend (period of the repeating decimal segment) of 1/p is equal to the order of 10 modulo p. If 10 is a primitive root modulo p, then the repetend length is equal to p − 1; if not, then the repetend length is a factor of p − 1. This result can be deduced from Fermat's little theorem, which states that $10^{p-1} \equiv 1 \pmod{p}$.

The base-10 digital root of the repetend of the reciprocal of any prime number greater than 5 is 9.[9]

If the repetend length of 1/p for prime p is equal to p − 1 then the repetend, expressed as an integer, is called a cyclic number. Examples of fractions belonging to this group are 1/7, 1/17, 1/19, 1/23, 1/29, 1/47, 1/59, 1/61, and 1/97. The list can go on to include the fractions 1/109, 1/113, 1/131, 1/149, 1/167, 1/179, 1/181, 1/193, 1/223, 1/229, etc. (sequence A001913 in the OEIS).

Every proper multiple of a cyclic number (that is, a multiple having the same number of digits) is a rotation; for example, with the cyclic number 142857 one has 142857 × 2 = 285714, 142857 × 3 = 428571, 142857 × 4 = 571428, 142857 × 5 = 714285, and 142857 × 6 = 857142. The reason for the cyclic behavior is apparent from an arithmetic exercise of long division of 1/7: the sequential remainders are the cyclic sequence {1, 3, 2, 6, 4, 5}. See also the article 142,857 for more properties of this cyclic number.

A fraction which is cyclic thus has a recurring decimal of even length that divides into two sequences in nines' complement form. For example, 1/7 starts '142' and is followed by '857', while 6/7 (by rotation) starts '857' followed by its nines' complement '142'.

The rotation of the repetend of a cyclic number always happens in such a way that each successive repetend is a bigger number than the previous one. In the succession above, for instance, we see that 0.142857... < 0.285714... < 0.428571... < 0.571428... < 0.714285... < 0.857142.... This, for cyclic fractions with long repetends, allows us to easily predict what the result of multiplying the fraction by any natural number n will be, as long as the repetend is known.

A proper prime is a prime p which ends in the digit 1 in base 10 and whose reciprocal in base 10 has a repetend with length p − 1. In such primes, each digit 0, 1, ..., 9 appears in the repeating sequence the same number of times as does each other digit (namely, (p − 1)/10 times). They are:[10]: 166
61, 131, 181, 461, 491, ...

A prime is a proper prime if and only if it is a full reptend prime and congruent to 1 mod 10. If a prime p is both a full reptend prime and a safe prime, then 1/p will produce a stream of p − 1 pseudo-random digits. Those primes are
7, 23, 47, 59, 167, 179, ...

Some reciprocals of primes that do not generate cyclic numbers are 1/3 = 0.333... (period 1), 1/11 = 0.090909... (period 2), 1/13 = 0.076923076923... (period 6), 1/31, 1/37, 1/41, 1/43, ... (sequence A006559 in the OEIS). The reason is that 3 is a divisor of 9, 11 is a divisor of 99, 41 is a divisor of 99999, etc. To find the period of 1/p, we can check whether the prime p divides some number 999...999 in which the number of digits divides p − 1. Since the period is never greater than p − 1, we can obtain this by calculating $\frac{10^{p-1} - 1}{p}$. For example, for 11 we get
$$\frac{10^{10} - 1}{11} = 909090909,$$
and then by inspection find the repetend 09 and period of 2.

Those reciprocals of primes can be associated with several sequences of repeating decimals. For example, the multiples of 1/13 can be divided into two sets, with different repetends. The first set is
1/13 = 0.076923..., 10/13 = 0.769230..., 9/13 = 0.692307..., 12/13 = 0.923076..., 3/13 = 0.230769..., 4/13 = 0.307692...,
where the repetend of each fraction is a cyclic re-arrangement of 076923. The second set is
2/13 = 0.153846..., 7/13 = 0.538461..., 5/13 = 0.384615..., 11/13 = 0.846153..., 6/13 = 0.461538..., 8/13 = 0.615384...,
where the repetend of each fraction is a cyclic re-arrangement of 153846.
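The repetend length of 1/n, i.e. the multiplicative order of 10 described above, is straightforward to compute. A small Python sketch (the helper name `repetend_length` is illustrative; its last two outputs anticipate the statements about $1/p^2$ and $1/pq$ below):

```python
def repetend_length(n):
    """Decimal repetend length of 1/n: strip factors of 2 and 5, then
    return the multiplicative order of 10 modulo the remaining part."""
    for f in (2, 5):
        while n % f == 0:
            n //= f
    if n == 1:
        return 0                     # terminating decimal
    k, r = 1, 10 % n
    while r != 1:
        r = (r * 10) % n
        k += 1
    return k

print([repetend_length(n) for n in range(1, 15)])
# [0, 0, 1, 0, 0, 1, 6, 0, 1, 0, 2, 1, 6, 6]

# Full reptend (cyclic) primes satisfy repetend_length(p) == p - 1:
print([p for p in (3, 7, 11, 13, 17, 19, 23) if repetend_length(p) == p - 1])
# [7, 17, 19, 23]

print(repetend_length(49), repetend_length(119))  # 42 48
```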
In general, the set of proper multiples of reciprocals of a prime p consists of n subsets, each with repetend length k, where nk = p − 1.

For an arbitrary integer n, the length L(n) of the decimal repetend of 1/n divides φ(n), where φ is the totient function. The length is equal to φ(n) if and only if 10 is a primitive root modulo n.[11]

In particular, it follows that L(p) = p − 1 if and only if p is a prime and 10 is a primitive root modulo p. Then, the decimal expansions of n/p for n = 1, 2, ..., p − 1, all have period p − 1 and differ only by a cyclic permutation. Such numbers p are called full repetend primes.

If p is a prime other than 2 or 5, the decimal representation of the fraction 1/p² repeats; for example,
$$\frac{1}{49} = 0.\overline{020408163265306122448979591836734693877551},$$
with period 42. The period (repetend length) L(49) must be a factor of λ(49) = 42, where λ(n) is known as the Carmichael function. This follows from Carmichael's theorem, which states that if n is a positive integer then λ(n) is the smallest integer m such that
$$a^m \equiv 1 \pmod{n}$$
for every integer a that is coprime to n.

The period of 1/p² is usually $pT_p$, where $T_p$ is the period of 1/p. There are three known primes for which this is not true, and for those the period of 1/p² is the same as the period of 1/p because p² divides $10^{p-1} - 1$. These three primes are 3, 487, and 56598313 (sequence A045616 in the OEIS).[12] Similarly, the period of $1/p^k$ is usually $p^{k-1} T_p$.

If p and q are primes other than 2 or 5, the decimal representation of the fraction 1/pq repeats. An example is 1/119:
$$119 = 7 \times 17; \qquad \lambda(7 \times 17) = \operatorname{LCM}(\lambda(7), \lambda(17)) = \operatorname{LCM}(6, 16) = 48,$$
where LCM denotes the least common multiple. The period T of 1/pq is a factor of λ(pq), and it happens to be 48 in this case. The period T of 1/pq is $\operatorname{LCM}(T_p, T_q)$, where $T_p$ is the period of 1/p and $T_q$ is the period of 1/q.

If p, q, r, etc. are primes other than 2 or 5, and k, ℓ, m, etc. are positive integers, then
$$\frac{1}{p^k q^\ell r^m \cdots}$$
is a repeating decimal with a period of
$$\operatorname{LCM}(T_{p^k}, T_{q^\ell}, T_{r^m}, \ldots),$$
where $T_{p^k}, T_{q^\ell}, T_{r^m}, \ldots$ are respectively the period of the repeating decimals $1/p^k, 1/q^\ell, 1/r^m, \ldots$ as defined above.

An integer that is not coprime to 10 but has a prime factor other than 2 or 5 has a reciprocal that is eventually periodic, but with a non-repeating sequence of digits that precedes the repeating part. The reciprocal can be expressed as
$$\frac{1}{2^a 5^b n},$$
where n > 1 is coprime to 10 and a and b are not both zero. This fraction can also be expressed as
$$\frac{5^{a-b}}{10^a n}$$
if a > b, or as
$$\frac{2^{b-a}}{10^b n}$$
if b > a, or as
$$\frac{1}{10^a n}$$
if a = b. The decimal has a transient of max(a, b) digits before the repetend begins, followed by a repetend whose length is the period of 1/n. For example, $\tfrac{1}{28} = 0.03\overline{571428}$: the transient is 03 (since 28 = 2² × 7, giving max(a, b) = 2), and the repetend 571428 has length 6, the period of 1/7.

Given a repeating decimal, it is possible to calculate the fraction that produces it. For example:
$$x = 0.333\ldots; \qquad 10x = 3.333\ldots; \qquad 10x - x = 3; \qquad x = \frac{3}{9} = \frac{1}{3}.$$
Another example:
$$x = 0.836363636\ldots; \qquad 10x = 8.3636\ldots; \qquad 1000x = 836.36\ldots; \qquad 1000x - 10x = 828; \qquad x = \frac{828}{990} = \frac{46}{55}.$$

The procedure below can be applied in particular if the repetend has n digits, all of which are 0 except the final one, which is 1. For instance, for n = 7:
$$x = 0.\overline{0000001}; \qquad 10^7 x = 1.\overline{0000001}; \qquad (10^7 - 1)\,x = 1; \qquad x = \frac{1}{9999999}.$$
So this particular repeating decimal corresponds to the fraction 1/(10^n − 1), where the denominator is the number written as n 9s. Knowing just that, a general repeating decimal can be expressed as a fraction without having to solve an equation. For example, one could reason
$$7.48181818\ldots = 7.3 + 0.18181818\ldots = \frac{73}{10} + \frac{18}{99} = \frac{823}{110},$$
or
$$0.789789789\ldots = 789 \times 0.001001001\ldots = \frac{789}{999} = \frac{263}{333}.$$

It is possible to get a general formula expressing a repeating decimal with an n-digit period (repetend length), beginning right after the decimal point, as a fraction: if $x = 0.\overline{a_1 a_2 \ldots a_n}$, then $10^n x - x$ is the integer with digits $a_1 a_2 \ldots a_n$, so
$$x = \frac{a_1 a_2 \ldots a_n}{10^n - 1}.$$
More explicitly, one gets the following cases: if the repeating decimal is between 0 and 1, and the repeating block is n digits long, first occurring right after the decimal point, then the fraction (not necessarily reduced) will be the integer number represented by the n-digit block divided by the one represented by n 9s.
For example, 0.444... = 4/9 and 0.565656... = 56/99. If the repeating decimal is as above, except that there are k (extra) digits 0 between the decimal point and the repeating n-digit block, then one can simply add k digits 0 after the n digits 9 of the denominator (and, as before, the fraction may subsequently be simplified). For example, 0.0363636... = 36/990 = 2/55.

Any repeating decimal not of the form described above can be written as a sum of a terminating decimal and a repeating decimal of one of the two above types (actually the first type suffices, but that could require the terminating decimal to be negative). For example, 1.23444... = 1.23 + 0.00444... = 123/100 + 4/900 = 1107/900 + 4/900 = 1111/900. An even faster method is to ignore the decimal point completely and go like this: 1.23444... = (1234 − 123)/900 = 1111/900, where the numerator is "all digits through one period" minus "the digits before the period".

It follows that any repeating decimal with period n, and k digits after the decimal point that do not belong to the repeating part, can be written as a (not necessarily reduced) fraction whose denominator is (10^n − 1)10^k.

Conversely, the period of the repeating decimal of a fraction c/d will be (at most) the smallest number n such that 10^n − 1 is divisible by d. For example, the fraction 2/7 has d = 7, and the smallest n that makes 10^n − 1 divisible by 7 is n = 6, because 999999 = 7 × 142857. The period of the fraction 2/7 is therefore 6.

The following scheme gives a compact form of the above shortcut. Here I represents the digits of the integer part of the decimal number (to the left of the decimal point), A is the string of preperiod digits with length #A, and P is the string of repeated digits (the period) with nonzero length #P. The generated fraction is (IAP − IA) / (9...9 0...0), where IAP and IA denote the integers written with those digit strings: in the denominator, the digit 9 is repeated #P times and the digit 0 is repeated #A times. Note that in the absence of an integer part in the decimal, I will be represented by zero, which, being to the left of the other digits, will not affect the final result and may be omitted in the calculation of the generating fraction.
Examples:

{\displaystyle {\begin{array}{lllll}3.254444\ldots &=3.25{\overline {4}}&={\begin{Bmatrix}\mathbf {I} =3&\mathbf {A} =25&\mathbf {P} =4\\&\#\mathbf {A} =2&\#\mathbf {P} =1\end{Bmatrix}}&={\dfrac {3254-325}{900}}&={\dfrac {2929}{900}}\\\\0.512512\ldots &=0.{\overline {512}}&={\begin{Bmatrix}\mathbf {I} =0&\mathbf {A} =\emptyset &\mathbf {P} =512\\&\#\mathbf {A} =0&\#\mathbf {P} =3\end{Bmatrix}}&={\dfrac {512-0}{999}}&={\dfrac {512}{999}}\\\\1.09191\ldots &=1.0{\overline {91}}&={\begin{Bmatrix}\mathbf {I} =1&\mathbf {A} =0&\mathbf {P} =91\\&\#\mathbf {A} =1&\#\mathbf {P} =2\end{Bmatrix}}&={\dfrac {1091-10}{990}}&={\dfrac {1081}{990}}\\\\1.333\ldots &=1.{\overline {3}}&={\begin{Bmatrix}\mathbf {I} =1&\mathbf {A} =\emptyset &\mathbf {P} =3\\&\#\mathbf {A} =0&\#\mathbf {P} =1\end{Bmatrix}}&={\dfrac {13-1}{9}}&={\dfrac {12}{9}}&={\dfrac {4}{3}}\\\\0.3789789\ldots &=0.3{\overline {789}}&={\begin{Bmatrix}\mathbf {I} =0&\mathbf {A} =3&\mathbf {P} =789\\&\#\mathbf {A} =1&\#\mathbf {P} =3\end{Bmatrix}}&={\dfrac {3789-3}{9990}}&={\dfrac {3786}{9990}}&={\dfrac {631}{1665}}\end{array}}}

The symbol ∅ in the examples above denotes the absence of digits of part A in the decimal, and therefore #A = 0 and a corresponding absence in the generated fraction.

A repeating decimal can also be expressed as an infinite series. That is, a repeating decimal can be regarded as the sum of an infinite number of rational numbers. To take the simplest example, 0.111... = 1/10 + 1/100 + 1/1000 + ⋯. This is a geometric series with first term 1/10 and common factor 1/10. Because the absolute value of the common factor is less than 1, we can say that the geometric series converges and find the exact value in the form of a fraction by using the formula S = a/(1 − r), where a is the first term of the series and r is the common factor; here this gives (1/10)/(1 − 1/10) = 1/9. Similarly, any other repeating decimal can be summed as a geometric series.

The cyclic behavior of repeating decimals in multiplication also leads to the construction of integers which are cyclically permuted when multiplied by certain numbers. For example, 102564 × 4 = 410256. 102564 is the repetend of 4/39 and 410256 the repetend of 16/39.

Various properties of repetend lengths (periods) are given by Mitchell[13] and Dickson.[14] For some other properties of repetends, see also.[15]

Various features of repeating decimals extend to the representation of numbers in all other integer bases, not just base 10. For example, in duodecimal, 1/2 = 0.6, 1/3 = 0.4, 1/4 = 0.3 and 1/6 = 0.2 all terminate; 1/5 = 0.2497 repeats with period length 4, in contrast with the equivalent decimal expansion of 0.2; 1/7 = 0.186A35 has period 6 in duodecimal, just as it does in decimal.

If b is an integer base and k is an integer with 0 < k ≤ b, then 1/k = Σ_{n=1}^{∞} (b − k)^{n−1} / b^{n}. For example, 1/7 in duodecimal: {\displaystyle {\frac {1}{7}}=\left({\frac {1}{10^{\phantom {1}}}}+{\frac {5}{10^{2}}}+{\frac {21}{10^{3}}}+{\frac {A5}{10^{4}}}+{\frac {441}{10^{5}}}+{\frac {1985}{10^{6}}}+\cdots \right)_{\text{base 12}}} which is 0.186A35 in base 12. 10 (base 12) is 12 (base 10), 10^2 (base 12) is 144 (base 10), 21 (base 12) is 25 (base 10), A5 (base 12) is 125 (base 10).
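The generating-fraction scheme illustrated above is mechanical enough to implement directly. A minimal Python sketch (from_repeating is an illustrative name; the inputs are the digit strings I, A and P described earlier):

from fractions import Fraction

def from_repeating(I, A, P):
    # Fraction for the decimal I.A(P): numerator IAP - IA, denominator
    # made of #P nines followed by #A zeros.
    numerator = int(I + A + P) - int(I + A)
    denominator = int("9" * len(P) + "0" * len(A))
    return Fraction(numerator, denominator)   # Fraction reduces automatically

# The worked examples above:
assert from_repeating("3", "25", "4")  == Fraction(2929, 900)   # 3.25444...
assert from_repeating("0", "",  "512") == Fraction(512, 999)    # 0.512512...
assert from_repeating("1", "0", "91")  == Fraction(1081, 990)   # 1.09191...
assert from_repeating("1", "",  "3")   == Fraction(4, 3)        # 1.333...
assert from_repeating("0", "3", "789") == Fraction(631, 1665)   # 0.3789789...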
For a rational 0 < p/q < 1 (and base b ∈ ℕ>1) there is the following algorithm producing the repetend together with its length (a sketch in code follows below). At each step, the algorithm computes the digit z = ⌊p·b/q⌋ and then the new remainder p′ = p·b mod q of the division modulo the denominator q. As a consequence of the floor function we have p·b = z·q + p′ with 0 ≤ p′ < q. Because all these remainders p are non-negative integers less than q, there can be only a finite number of them, with the consequence that they must recur in the while loop. Such a recurrence is detected by the associative array occurs. The new digit z is formed in the digit-extraction step, where p is the only non-constant. The length L of the repetend equals the number of the remainders (see also the section Every rational number is either a terminating or repeating decimal).

Repeating decimals (also called decimal sequences) have found cryptographic and error-correction coding applications.[16] In these applications repeating decimals to base 2 are generally used, which gives rise to binary sequences. The maximum-length binary sequence for 1/p (when 2 is a primitive root of p) is given by a(i) = (2^i mod p) mod 2.[17]
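A minimal Python rendering of the algorithm just described, for base b = 10 by default (the dict occurs plays the role of the associative array named in the text):

def repetend(p, q, b=10):
    # Repetend digits and length L of p/q (0 < p < q) in base b.
    occurs = {}              # remainder -> step index at which it appeared
    digits = []
    i = 0
    while p not in occurs:
        occurs[p] = i
        z = (p * b) // q     # digit-extraction step: z = floor(p*b/q)
        p = (p * b) % q      # new remainder modulo the denominator q
        digits.append(z)
        i += 1
        if p == 0:           # expansion terminates; there is no repetend
            return [], 0
    start = occurs[p]        # first occurrence of the recurring remainder
    L = i - start
    return digits[start:], L

print(repetend(1, 7))      # ([1, 4, 2, 8, 5, 7], 6)
print(repetend(1, 28))     # ([5, 7, 1, 4, 2, 8], 6), after the prefix 0, 3
print(repetend(1, 7, 12))  # ([1, 8, 6, 10, 3, 5], 6), i.e. 0.186A35 duodecimal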
https://en.wikipedia.org/wiki/Recurring_decimal#Fractions_with_prime_denominators
A watermark is an identifying image or pattern in paper that appears as various shades of lightness/darkness when viewed by transmitted light (or when viewed by reflected light, atop a dark background), caused by thickness or density variations in the paper.[1] Watermarks have been used on postage stamps, currency, and other government documents to discourage counterfeiting. There are two main ways of producing watermarks in paper: the dandy roll process, and the more complex cylinder mould process.

Watermarks vary greatly in their visibility; while some are obvious on casual inspection, others require some study to pick out. Various aids have been developed, such as watermark fluid that wets the paper without damaging it. A watermark is very useful in the examination of paper because it can be used for dating documents and artworks, identifying sizes, mill trademarks and locations, and determining the quality of a sheet of paper.

The word is also used for digital practices that share similarities with physical watermarks. In one case, overprint on computer-printed output may be used to identify output from an unlicensed trial version of a program. In another instance, identifying codes can be encoded as a digital watermark for a music, video, picture, or other file. Or an artist may add an identifying digital signature, graphic, or logo to their digital artworks as an identifier or anti-counterfeiting measure.

Watermarks were first introduced in Fabriano, Italy, in 1282.[2] At the time, watermarks were created by changing the thickness of paper during a stage in the manufacturing process when it was still wet. Traditionally, a watermark was made by impressing a water-coated metal stamp onto the paper during manufacturing. The invention of the dandy roll in 1826 by John Marshall revolutionised the watermark process and made it easier for producers to watermark their paper.

The dandy roll is a light roller covered by material similar to window screen that is embossed with a pattern. Faint lines are made by laid wires that run parallel to the axis of the dandy roll, and the bold lines are made by chain wires that run around the circumference to secure the laid wires to the roll from the outside. Because the chain wires are located on the outside of the laid wires, they have a greater influence on the impression in the pulp, hence their bolder appearance than the laid wire lines. This embossing is transferred to the pulp fibres, compressing and reducing their thickness in that area. Because the patterned portion of the page is thinner, it transmits more light through and therefore has a lighter appearance than the surrounding paper. If these lines are distinct and parallel, and/or there is a watermark, then the paper is termed laid paper. If the lines appear as a mesh or are indiscernible, and/or there is no watermark, then it is called wove paper. This method is called line drawing watermarks.

Another type of watermark is called the cylinder mould watermark. It is a shaded watermark, first used in 1848, that incorporates tonal depth and creates a greyscale image. Instead of using a wire covering for the dandy roll, the shaded watermark is created by areas of relief on the roll's own surface. Once dry, the paper may then be rolled again to produce a watermark of even thickness but with varying density.
The resulting watermark is generally much clearer and more detailed than those made by the dandy roll process, and as such cylinder mould watermark paper is the preferred type of watermarked paper for banknotes, passports, motor vehicle titles, and other documents where it is an important anti-counterfeiting measure.

In philately, the watermark is a key feature of a stamp, and often constitutes the difference between a common and a rare stamp. Collectors who encounter two otherwise identical stamps with different watermarks consider each stamp to be a separate identifiable issue.[3] The "classic" stamp watermark is a small crown or other national symbol, appearing either once on each stamp or as a continuous pattern. Watermarks were nearly universal on stamps in the 19th and early 20th centuries, but they generally fell out of use; some countries, however, continue to use them.[4]

Some types of embossing, such as that used to make the "cross on oval" design on early stamps of Switzerland, resemble a watermark in that the paper is thinner, but can be distinguished by having sharper edges than is usual for a normal watermark. Stamp paper watermarks also show various designs, letters, numbers and pictorial elements.

The process of bringing out the stamp watermark is fairly simple. Sometimes a watermark in stamp paper can be seen just by looking at the unprinted back side of a stamp. More often, the collector must use a few basic items to get a good look at the watermark. For example, watermark fluid may be applied to the back of a stamp to temporarily reveal the watermark.[4]

Even using the simple watermarking method described, it can be difficult to distinguish some watermarks. Watermarks on stamps printed in yellow and orange can be particularly difficult to see. A few mechanical devices are also used by collectors to detect watermarks on stamps, such as the Morley-Bright watermark detector and the more expensive Safe Signoscope.[5] Such devices can be very useful, as they can be used without the application of watermark fluid and also allow the collector to look at the watermark for a longer period of time, making it easier to detect.
https://en.wikipedia.org/wiki/Watermark
Observational learning is learning that occurs through observing the behavior of others. It is a form of social learning which takes various forms, based on various processes. In humans, this form of learning seems not to need reinforcement to occur, but instead requires a social model such as a parent, sibling, friend, or teacher with surroundings. Particularly in childhood, a model is someone of authority or higher status in an environment. In animals, observational learning is often based on classical conditioning, in which an instinctive behavior is elicited by observing the behavior of another (e.g. mobbing in birds), but other processes may be involved as well.[1]

Many behaviors that a learner observes, remembers, and imitates are actions that models display through modeling, even though the model may not intentionally try to instill a particular behavior. A child may learn to swear, smack, smoke, and deem other inappropriate behavior acceptable through poor modeling. Albert Bandura claims that children continually learn desirable and undesirable behavior through observational learning. Observational learning suggests that an individual's environment, cognition, and behavior all incorporate and ultimately determine how the individual functions and models.[2]

Through observational learning, individual behaviors can spread across a culture through a process called a diffusion chain. This occurs when an individual first learns a behavior by observing another individual, and that individual serves as a model through whom other individuals learn the behavior, and so on.[3]

Culture plays a role in whether observational learning is the dominant learning style in a person or community. Some cultures expect children to actively participate in their communities, where they are exposed to different trades and roles on a daily basis.[4] This exposure allows children to observe and learn the different skills and practices that are valued in their communities.[5]

Albert Bandura, who is known for the classic Bobo doll experiment, identified this basic form of learning in 1961. The importance of observational learning lies in helping individuals, especially children, acquire new responses by observing others' behavior. Albert Bandura states that people's behavior could be determined by their environment. Observational learning occurs through observing negative and positive behaviors. Bandura believes in reciprocal determinism, in which the environment can influence people's behavior and vice versa. For instance, the Bobo doll experiment shows that the model, in a determined environment, affects children's behavior. In this experiment, Bandura demonstrated that one group of children placed in an aggressive environment would act the same way, while the control group and the other group of children placed in a passive role model environment hardly showed any type of aggression.[6]

In communities where children's primary mode of learning is through observation, the children are rarely separated from adult activities. This incorporation into the adult world at an early age allows children to use observational learning skills in multiple spheres of life. This learning through observation requires keen attentive abilities. Culturally, they learn that their participation and contributions are valued in their communities.
This teaches children that it is their duty, as members of the community, to observe others' contributions so they gradually become involved and participate further in the community.[7]

The stages of observational learning include exposure to the model, acquiring the model's behaviour, and accepting it as one's own. Bandura's social cognitive learning theory states that there are four factors that influence observational learning: attention, retention, production, and motivation.[8]

Bandura clearly distinguishes between learning and performance. Unless motivated, a person does not produce learned behavior. This motivation can come from external reinforcement, such as the experimenter's promise of reward in some of Bandura's studies, or the bribe of a parent. Or it can come from vicarious reinforcement, based on the observation that models are rewarded. High-status models can affect performance through motivation. For example, girls aged 11 to 14 performed better on a motor performance task when they thought it was demonstrated by a high-status cheerleader than by a low-status model.[9] Some have even added a step between attention and retention involving encoding a behavior.

Observational learning leads to a change in an individual's behavior along three dimensions: According to Bandura's social cognitive learning theory, observational learning can affect behavior in many ways, with both positive and negative consequences. It can teach completely new behaviors, for one. It can also increase or decrease the frequency of behaviors that have previously been learned. Observational learning can even encourage behaviors that were previously forbidden (for example, the violent behavior towards the Bobo doll that children imitated in Albert Bandura's study). Observational learning can also influence behaviors that are similar to, but not identical to, the ones being modeled. For example, seeing a model excel at playing the piano may motivate an observer to play the saxophone.

Albert Bandura stressed that developing children learn from different social models, meaning that no two children are exposed to exactly the same modeling influence. From infancy to adolescence, they are exposed to various social models. A 2013 study found that a toddler's previous social familiarity with a model was not always necessary for learning, and that toddlers were also able to learn from observing a stranger demonstrating or modeling a new action to another stranger.[11]

It was once believed that babies could not imitate actions until the latter half of the first year. However, a number of studies now report that infants as young as seven days can imitate simple facial expressions. By the latter half of their first year, 9-month-old babies can imitate actions hours after they first see them. As they continue to develop, toddlers around age two can acquire important personal and social skills by imitating a social model. Deferred imitation is an important developmental milestone in a two-year-old, in which children not only construct symbolic representations but can also remember information.[12] Unlike toddlers, children of elementary school age are less likely to rely on imagination to represent an experience. Instead, they can verbally describe the model's behavior.[13] Since this form of learning does not need reinforcement, it is more likely to occur regularly.

As age increases, age-related observational learning motor skills may decrease in athletes and golfers.[14] Younger and more skilled golfers show higher observational learning compared to older and less skilled golfers.
Humans use observational causal learning to watch other people's actions and use the information gained to find out how something works and how we can do it ourselves. A study of 25-month-old infants found that they can learn causal relations from observing human interventions. They also learn by observing normal actions not created by intentional human action.[15]

Observational learning is presumed to have occurred when an organism copies an improbable action or action outcome that it has observed and the matching behavior cannot be explained by an alternative mechanism. Psychologists have been particularly interested in the form of observational learning known as imitation and in how to distinguish imitation from other processes. To successfully make this distinction, one must separate the degree to which behavioral similarity results from (a) predisposed behavior, (b) increased motivation resulting from the presence of another animal, (c) attention drawn to a place or object, (d) learning about the way the environment works, as distinguished from what we think of as (e) imitation (the copying of the demonstrated behavior).[16]

Observational learning differs from imitative learning in that it does not require a duplication of the behavior exhibited by the model. For example, the learner may observe an unwanted behavior and the subsequent consequences, and thus learn to refrain from that behavior. For example, Riopelle (1960) found that monkeys did better with observational learning if they saw the "tutor" monkey make a mistake before making the right choice.[17] Heyes (1993) distinguished imitation and non-imitative social learning in the following way: imitation occurs when animals learn about behavior from observing conspecifics, whereas non-imitative social learning occurs when animals learn about the environment from observing others.[18]

Not all imitation and learning through observing is the same, and they often differ in the degree to which they take on an active or passive form. John Dewey describes an important distinction between two different forms of imitation: imitation as an end in itself and imitation with a purpose.[19] Imitation as an end is more akin to mimicry, in which a person copies another's act to repeat that action again. This kind of imitation is often observed in animals. Imitation with a purpose utilizes the imitative act as a means to accomplish something more significant. Whereas the more passive form of imitation as an end has been documented in some European American communities, the other kind of more active, purposeful imitation has been documented in other communities around the world.

Observation may take on a more active form in children's learning in multiple Indigenous American communities. Ethnographic anthropological studies in Yucatec Mayan and Quechua Peruvian communities provide evidence that the home or community-centered economic systems of these cultures allow children to witness first-hand activities that are meaningful to their own livelihoods and the overall well-being of the community.[20] These children have the opportunity to observe activities that are relevant within the context of that community, which gives them a reason to sharpen their attention to the practical knowledge they are exposed to. This does not mean that they have to observe the activities even though they are present.
The children often make an active decision to stay in attendance while a community activity is taking place, to observe and learn.[20] This decision underscores the significance of this learning style in many Indigenous American communities. It goes far beyond learning mundane tasks through rote imitation; it is central to children's gradual transformation into informed members of their communities' unique practices. A study done with children also concluded that imitated behavior can be recalled and used in the same or another situation.[21]

Apprenticeship can involve both observational learning and modelling. Apprentices gain their skills in part through working with masters in their profession and through observing and evaluating the work of their fellow apprentices. Examples include the Renaissance inventor/painter Leonardo da Vinci and Michelangelo, who were apprentices before succeeding in their professions.[22]

Michael Tomasello described various ways of observational learning without the process of imitation in animals[23] (ethology):

Observational learning is very beneficial when there are positive, reinforcing peer models involved. Although individuals go through four different stages for observational learning (attention, retention, production, and motivation), this does not simply mean that when an individual's attention is captured it automatically sets the process in that exact order. One of the most important ongoing stages for observational learning, especially among children, is motivation and positive reinforcement.[26]

Performance is enhanced when children are positively instructed on how they can improve a situation and when children actively participate alongside a more skilled person. Examples of this are scaffolding and guided participation. Scaffolding refers to an expert responding contingently to a novice so the novice gradually increases their understanding of a problem. Guided participation refers to an expert actively engaging in a situation with a novice so the novice participates with or observes the adult to understand how to resolve a problem.[27]

Cultural variation can be seen in the extent of information learned or absorbed by children in non-Western cultures through learning by observation. Cultural variation is not restricted only to ethnicity and nationality, but rather extends to the specific practices within communities. In learning by observation, children use observation to learn without verbal requests for further information, or without direct instruction. For example, children from Mexican-heritage families tend to learn and make better use of information observed during classroom demonstration than children of European heritage.[28][29] Children of European heritage experience the type of learning that separates them from their family and community activities. They instead participate in lessons and other exercises in special settings such as school.[30] Cultural backgrounds differ in the characteristics children display when learning an activity.

Another example is seen in the immersion of children in some Indigenous communities of the Americas into the adult world and the effects it has on observational learning and the ability to complete multiple tasks simultaneously.[7] This might be due to children in these communities having the opportunity to see a task being completed by their elders or peers and then trying to emulate the task.
In doing so, they learn to value observation and the skill-building it affords them, because of the value it holds within their community.[5] This type of observation is not passive, but reflects the child's intent to participate or learn within a community.[4]

Observational learning can be seen taking place in many domains of Indigenous communities. The classroom setting is one significant example, and it functions differently for Indigenous communities compared to what is commonly present in Western schooling. The emphasis on keen observation in support of participation in ongoing activities strives to aid children in learning the important tools and ways of their community.[28] Engaging in shared endeavors – with both the experienced and inexperienced – allows the experienced to understand what the inexperienced need in order to grow, in regards to the assessment of observational learning.[28] The involvement of the inexperienced, or the children in this matter, can be furthered either by the children's own learning or by their advancing into the activity performed, as assessed through observational learning.[29] Indigenous communities rely on observational learning as a way for their children to be a part of ongoing activities in the community (Tharp, 2006).

Although learning in Indigenous American communities is not always the central focus when participating in an activity,[29] studies have shown that attention in intentional observation differs from accidental observation. Intentional participation is "keen observation and listening in anticipation of, or in the process of engaging in endeavors". This means that when children have the intention of participating in an event, their attention is more focused on the details, compared to when they are accidentally observing.

Observational learning can be an active process in many Indigenous American communities. The learner must take initiative to attend to activities going on around them. Children in these communities also take initiative to contribute their knowledge in ways that will benefit their community. For example, in many Indigenous American cultures, children perform household chores without being instructed to do so by adults. Instead, they observe a need for their contributions, understand their role in their community, and take initiative to accomplish the tasks they have observed others doing.[31] The learner's intrinsic motivations play an important role in the child's understanding and construction of meaning in these educational experiences. The independence and responsibility associated with observational learning in many Indigenous American communities are significant reasons why this method of learning involves more than just watching and imitating. A learner must be actively engaged with their demonstrations and experiences in order to fully comprehend and apply the knowledge they obtain.[32]

Children from indigenous heritage communities of the Americas often learn through observation, a strategy that can carry over into adulthood. The heightened value placed on observation allows children to multi-task and actively engage in simultaneous activities. The exposure to an uncensored adult lifestyle allows children to observe and learn the skills and practices that are valued in their communities.[5] Children observe elders, parents, and siblings complete tasks and learn to participate in them.
They are seen as contributors and learn to observe multiple tasks being completed at once, and can learn to complete a task while still engaging with other community members without being distracted. Indigenous communities provide more opportunities to incorporate children in everyday life.[33] This can be seen in some Mayan communities where children are given full access to community events, which allows observational learning to occur more often.[33] Other children, in Mazahua, Mexico, are known to observe ongoing activities intensely.[33] In native northern Canadian and indigenous Mayan communities, children often learn as third-party observers from stories and conversations by others.[34] Most young Mayan children are carried on their mother's back, allowing them to observe their mother's work and see the world as their mother sees it.[35] Often, children in Indigenous American communities assume the majority of the responsibility for their learning. Additionally, children find their own approaches to learning.[36] Children are often allowed to learn without restrictions and with minimal guidance. They are encouraged to participate in the community even if they do not know how to do the work. They are self-motivated to learn and finish their chores.[37] These children act as a second set of eyes and ears for their parents, updating them about the community.[38]

Children aged 6 to 8 in an indigenous heritage community in Guadalajara, Mexico participated in hard work, such as cooking or running errands, thus benefiting the whole family, while those in the city of Guadalajara rarely did so. The latter children participated more in adult-regulated activities and had little time to play, while those from the indigenous-heritage community had more time to play and initiate their own after-school activities, and had a higher sense of belonging to their community.[39] Children from formerly indigenous communities are more likely to show these aspects than children from cosmopolitan communities, even after leaving their childhood community.[40]

Within certain indigenous communities, people do not typically seek out explanations beyond basic observation. This is because they are competent in learning through astute observation and are often nonverbally encouraged to do so. In a Guatemalan footloom factory, amateur adult weavers observed skilled weavers over the course of weeks without questioning or being given explanations; the amateur weavers moved at their own pace and began when they felt confident.[33] The framework of learning how to weave through observation can serve as a model that groups within a society use as a reference to guide their actions in particular domains of life.[41] Communities that participate in observational learning promote tolerance and mutual understanding of those coming from different cultural backgrounds.[42]

When an animal is given a task to complete, it is almost always more successful after observing another animal doing the same task before it. Experiments have been conducted on several different species with the same effect: animals can learn behaviors from peers. However, there is a need to distinguish the propagation of behavior from the stability of behavior.
Research has shown that social learning can spread a behavior, but there are more factors regarding how a behavior carries across generations of an animal culture.[43]

Experiments with ninespine sticklebacks showed that individuals will use social learning to locate food.[43]

A study in 1996 at the University of Kentucky used a foraging device to test social learning in pigeons. A pigeon could access the food reward by either pecking at a treadle or stepping on it. Significant correspondence was found between the methods by which the observers accessed their food and the methods the initial model used in accessing the food.[44]

Studies have been conducted at the University of Oslo and University of Saskatchewan regarding the possibility of social learning in birds, delineating the difference between cultural and genetic acquisition.[45] Strong evidence already exists for mate choice, bird song, predator recognition, and foraging. Researchers cross-fostered eggs between nests of blue tits and great tits and observed the resulting behavior through audio-visual recording. Tits raised in the foster family learned their foster family's foraging sites early. This shift, from the sites the tits would have used among their own kind to the sites they learned from the foster parents, lasted for life. What young birds learned from their foster parents, they eventually transmitted to their own offspring. This suggests cultural transmission of foraging behavior over generations in the wild.[46]

The University of Washington studied this phenomenon with crows, acknowledging the evolutionary tradeoff between acquiring costly information firsthand and learning that information socially with less cost to the individual but at the risk of inaccuracy. The experimenters exposed wild crows to a unique "dangerous face" mask as they trapped, banded, and released 7–15 birds at five different study sites around Seattle, WA. An immediate scolding response to the mask after trapping by previously captured crows illustrates that the individual crow learned the danger of that mask. There was also scolding from crows that had not been captured initially; that response indicates conditioning from the mob of birds that assembled during the capture. Horizontal social learning (learning from peers) is consistent with the lone crows that recognized the dangerous face without ever being captured. Offspring of captured crow parents were conditioned to scold the dangerous mask, which demonstrates vertical social learning (learning from parents). The crows that were captured directly showed more precise discrimination between dangerous and neutral masks than the crows that learned from the experience of their peers. The ability of crows to learn doubled the frequency of scolding, which spread at least 1.2 km from where the experiment started over a 5-year period at one site.[47]

Researchers at the Département d'Etudes Cognitives, Institut Jean Nicod, Ecole Normale Supérieure acknowledged a difficulty with research in social learning. To count acquired behavior as cultural, two conditions must be met: the behavior must spread in a social group, and that behavior must be stable across generations. Research has provided evidence that imitation may play a role in the propagation of a behavior, but these researchers believe the fidelity of this evidence is not sufficient to prove the stability of animal culture.
Other factors like ecological availability, reward-based factors, content-based factors, and source-based factors might explain the stability of animal culture in the wild rather than just imitation. As an example of ecological availability, chimps may learn how to fish for ants with a stick from their peers, but that behavior is also influenced by the particular type of ant as well as the conditions. A behavior may be learned socially, but the fact that it was learned socially does not necessarily mean it will last. Whether the behavior is rewarding also plays a role in cultural stability. The ability of socially-learned behaviors to stabilize across generations is also mitigated by the complexity of the behavior. Different individuals of a species, like crows, vary in their ability to use a complex tool. Finally, a behavior's stability in animal culture depends on the context in which it is learned. If a behavior has already been adopted by a majority, then the behavior is more likely to carry across generations out of a need to conform. Animals are able to acquire behaviors from social learning, but whether or not that behavior carries across generations requires more investigation.[48]

Experiments with hummingbirds provided one example of apparent observational learning in a non-human organism. Hummingbirds were divided into two groups. Birds in one group were exposed to the feeding of a knowledgeable "tutor" bird; hummingbirds in the other group did not have this exposure. In subsequent tests the birds that had seen a tutor were more efficient feeders than the others.[49]

Herman (2002) suggested that bottlenose dolphins produce goal-emulated behaviors rather than imitative ones. A dolphin that watches a model place a ball in a basket might place the ball in the basket when asked to mimic the behavior, but it may do so in a different manner than the one seen.[50]

Kinnaman (1902) reported that one rhesus monkey learned to pull a plug from a box with its teeth to obtain food after watching another monkey succeed at this task.[51]

Fredman (2012) also performed an experiment on observational behavior. In experiment 1, human-raised monkeys observed a familiar human model open a foraging box using a tool in one of two alternate ways: levering or poking. In experiment 2, mother-raised monkeys viewed similar techniques demonstrated by monkey models. A control group in each population saw no model. In both experiments, independent coders detected which technique experimental subjects had seen, thus confirming social learning. Further analyses examined copying at three levels of resolution. The human-raised monkeys exhibited the greatest learning with the specific tool-use technique they saw. Only monkeys who saw the levering model used the lever technique, by contrast with controls and those who witnessed poking. Mother-reared monkeys instead typically ignored the tool and exhibited fidelity at a lower level, tending only to re-create whichever result the model had achieved, by either levering or poking. Nevertheless, this level of social learning was associated with significantly greater levels of success in monkeys witnessing a model than in controls, an effect absent in the human-reared population.
Results in both populations are consistent with a process of canalization of the repertoire in the direction of the approach witnessed, producing a narrower, socially shaped behavioral profile than among controls who saw no model.[52]

Pinkham and Jaswal (2011) did an experiment to see if a child would learn how to turn on a light box by watching a parent. They found that children who saw a parent use their head to turn on the light box tended to do the task in that manner, while children who had not seen the parent used their hands instead.[53]

When adequate practice and appropriate feedback follow demonstrations, increased skill performance and learning occur. Lewis (1974) did a study[54] of children who had a fear of swimming and observed how modelling and going over swimming practices affected their overall performance. The experiment spanned nine days and included many steps. The children were first assessed on their anxiety and swimming skills. Then they were placed into one of three conditional groups and exposed to these conditions over a few days. At the end of each day, all children participated in a group lesson. The first group was a control group where the children watched a short cartoon video unrelated to swimming. The second group was a peer mastery group, which watched a short video of similar-aged children who had very good task performances and high confidence. Lastly, the third group was a peer coping group, whose subjects watched a video of similar-aged children who progressed from low task performances and low confidence statements to high task performances and high confidence statements. The day following the exposures to each condition, the children were reassessed. Finally, the children were also assessed a few days later for a follow-up assessment. Upon reassessment, it was shown that the two model groups who watched videos of children similar in age had higher success rates on the skills assessed, because they perceived the models as informational and motivational.

Flexible methods must be used to assess whether an animal can imitate an action. This led to an approach that teaches animals to imitate by using a command such as "do-as-I-do" or "do this" followed by the action that they are supposed to imitate.[55] Researchers trained chimpanzees to imitate an action that was paired with the command. For example, this might include a researcher saying "do this" paired with clapping hands. This type of instruction has been utilized with a variety of other animals in order to teach imitation actions by utilizing a command or request.[55]

Observational learning allows for new skills to be learned in a wide variety of areas. Demonstrations help the modification of skills and behaviors.[56]

Skills for physical activities can be anything learned that requires physical movement; this can include learning a sport, learning to eat with a fork, or learning to walk.[56] There are multiple important variables that aid in modifying physical skills and psychological responses from an observational learning standpoint. Modeling is a variable in observational learning where the skill level of the model is considered.
When someone is supposed to demonstrate a physical skill, such as throwing a baseball, the model should be able to execute the behavior of throwing the ball flawlessly if the model of learning is a mastery model.[56] Another model to utilize in observational learning is a coping model, which is a model demonstrating a physical skill that they have not yet mastered or achieved high performance in.[57] Both models are found to be effective and can be utilized depending on what skill is being demonstrated.[56] These models can be used as interventions to increase observational learning in practice, competition, and rehabilitation situations.[56] Observational learning is also dependent on the learner's intentions and goals; performance can be enhanced by increasing instruction and beneficial feedback, depending on the individual's age, personality, and abilities.[58]

Recent research in neuroscience has implicated mirror neurons as a neurophysiological basis for observational learning.[59] Mirror neurons were first discovered in 1991 by researchers led by Giacomo Rizzolatti. The scientists had a device connected to a monkey to monitor brain activity. When the scientists came into the lab eating ice cream, the device buzzed. This accidental finding led them to mirror neurons, which are an essential part of imitation and observational learning.[60] These specialized visuomotor neurons fire action potentials when an individual performs a motor task and also fire when an individual passively observes another individual performing the same motor task.[61] In observational motor learning, the process begins with a visual presentation of another individual performing a motor task; this acts as a model. The learner then needs to transform the observed visual information into internal motor commands that will allow them to perform the motor task; this is known as visuomotor transformation.[62] Mirror neuron networks provide a mechanism for visuo-motor and motor-visual transformation and interaction. Similar networks of mirror neurons have also been implicated in social learning, motor cognition and social cognition.[63]

Discrete trial training (DTT) is a structured and systematic approach utilized in helping individuals with autism spectrum disorder learn.[64] Individuals with autism tend to struggle with learning through observation, therefore something that is reinforcing is necessary in order to motivate them to imitate or follow through with the task.[64] When utilizing DTT to teach individuals with autism, modeling is used to aid in their learning. Modeling would include showing how to reach the correct answer; this could mean showing the steps of a math equation. Utilizing DTT in a group setting also promotes observational learning from peers.[64]
https://en.wikipedia.org/wiki/Observational_Learning
In propositional logic, import-export is a name given to the propositional form of Exportation: ((P ∧ Q) → R) ↔ (P → (Q → R)). This already holds in minimal logic, and thus also in classical logic, where the conditional operator "→" is taken as material implication. In the Curry-Howard correspondence for intuitionistic logics, it can be realized through currying and uncurrying.

Import-export expresses a deductive argument form. In natural language terms, the formula states that English sentences of the forms "If P and Q, then R" and "If P, then if Q, then R" are logically equivalent.[1][2][3]

There are logics where it does not hold, and its status as a true principle of logic is a matter of debate. Controversy over the principle arises from the fact that any conditional operator that satisfies it will collapse to material implication when combined with certain other principles. This conclusion would be problematic given the paradoxes of material implication, which are commonly taken to show that natural language conditionals are not material implication.[2][3][4]

This problematic conclusion can be avoided within the framework of dynamic semantics, whose expressive power allows one to define a non-material conditional operator which nonetheless satisfies import-export along with the other principles.[3][5] However, other approaches reject import-export as a general principle, motivated by pairs of conditionals of these two forms uttered in a context where it is most likely that the match will be lit by throwing it into a campfire, but where it is possible that it could be lit by striking it. In this context, the first sentence is intuitively true but the second is intuitively false.[5][6][7]
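Under the Curry-Howard reading mentioned above, import-export corresponds to the interconvertibility of a two-argument function and its curried form. A small Python sketch (illustrative only; a pair of arguments stands in for the conjunction P ∧ Q):

from typing import Callable, TypeVar

P = TypeVar("P")
Q = TypeVar("Q")
R = TypeVar("R")

def curry(f: Callable[[P, Q], R]) -> Callable[[P], Callable[[Q], R]]:
    # Witnesses ((P and Q) -> R) -> (P -> (Q -> R)).
    return lambda p: lambda q: f(p, q)

def uncurry(g: Callable[[P], Callable[[Q], R]]) -> Callable[[P, Q], R]:
    # Witnesses (P -> (Q -> R)) -> ((P and Q) -> R).
    return lambda p, q: g(p)(q)

# The two directions compose to the identity (up to extensionality):
add = lambda x, y: x + y
assert uncurry(curry(add))(2, 3) == add(2, 3) == 5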
https://en.wikipedia.org/wiki/Import-Export_(logic)
In geometry, close-packing of equal spheres is a dense arrangement of congruent spheres in an infinite, regular arrangement (or lattice). Carl Friedrich Gauss proved that the highest average density – that is, the greatest fraction of space occupied by spheres – that can be achieved by a lattice packing is π/(3√2) ≈ 0.74048.

The same packing density can also be achieved by alternate stackings of the same close-packed planes of spheres, including structures that are aperiodic in the stacking direction. The Kepler conjecture states that this is the highest density that can be achieved by any arrangement of spheres, either regular or irregular. This conjecture was proven by Thomas Hales.[1][2] The highest density is known only for 1, 2, 3, 8, and 24 dimensions.[3]

Many crystal structures are based on a close-packing of a single kind of atom, or a close-packing of large ions with smaller ions filling the spaces between them. The cubic and hexagonal arrangements are very close to one another in energy, and it may be difficult to predict which form will be preferred from first principles.

There are two simple regular lattices that achieve this highest average density. They are called face-centered cubic (FCC) (also called cubic close packed) and hexagonal close-packed (HCP), based on their symmetry. Both are based upon sheets of spheres arranged at the vertices of a triangular tiling; they differ in how the sheets are stacked upon one another. The FCC lattice is also known to mathematicians as that generated by the A3 root system.[4]

The problem of close-packing of spheres was first mathematically analyzed by Thomas Harriot around 1587, after a question on piling cannonballs on ships was posed to him by Sir Walter Raleigh on their expedition to America.[5] Cannonballs were usually piled in a rectangular or triangular wooden frame, forming a three-sided or four-sided pyramid. Both arrangements produce a face-centered cubic lattice – with different orientation to the ground. Hexagonal close-packing would result in a six-sided pyramid with a hexagonal base. The cannonball problem asks which flat square arrangements of cannonballs can be stacked into a square pyramid. Édouard Lucas formulated the problem as the Diophantine equation {\displaystyle \sum _{n=1}^{N}n^{2}=M^{2}} or {\displaystyle {\frac {1}{6}}N(N+1)(2N+1)=M^{2}} and conjectured that the only solutions are N = 1, M = 1 and N = 24, M = 70. Here N is the number of layers in the pyramidal stacking arrangement and M is the number of cannonballs along an edge in the flat square arrangement.

In both the FCC and HCP arrangements each sphere has twelve neighbors. For every sphere there is one gap surrounded by six spheres (octahedral) and two smaller gaps surrounded by four spheres (tetrahedral). The distances to the centers of these gaps from the centers of the surrounding spheres are √(3/2) for the tetrahedral and √2 for the octahedral, when the sphere radius is 1.

Relative to a reference layer with positioning A, two more positionings B and C are possible. Every sequence of A, B, and C without immediate repetition of the same one is possible and gives an equally dense packing for spheres of a given radius. The most regular ones are FCC = ABCABCABC... and HCP = ABABAB.... There is an uncountably infinite number of disordered arrangements of planes (e.g. ABCACBABABAC...) that are sometimes collectively referred to as "Barlow packings", after crystallographer William Barlow.[6]
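Returning to the cannonball problem above: Lucas's Diophantine equation is easy to probe numerically. A small Python sketch (the search bound is arbitrary; it simply illustrates that no further solutions appear in this range):

from math import isqrt

# Look for square pyramidal numbers N(N+1)(2N+1)/6 that are perfect squares.
solutions = []
total = 0
for N in range(1, 100_000):
    total += N * N                 # running sum 1^2 + 2^2 + ... + N^2
    M = isqrt(total)
    if M * M == total:
        solutions.append((N, M))

print(solutions)                   # [(1, 1), (24, 70)], as Lucas conjectured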
In close-packing, the center-to-center spacing of spheres in the xy plane is a simple honeycomb-like tessellation with a pitch (distance between sphere centers) of one sphere diameter. The distance between sphere centers, projected on the z (vertical) axis, is √(2/3)·d = (√6/3)·d ≈ 0.8165·d, where d is the diameter of a sphere; this follows from the tetrahedral arrangement of close-packed spheres. The coordination number of HCP and FCC is 12 and their atomic packing factors (APFs) are equal to the number mentioned above, 0.74.

When forming any sphere-packing lattice, the first fact to notice is that whenever two spheres touch, a straight line may be drawn from the center of one sphere to the center of the other, passing through the point of contact. The distance between the centers along the shortest path, namely that straight line, will therefore be r1 + r2, where r1 is the radius of the first sphere and r2 is the radius of the second. In close packing all of the spheres share a common radius, r. Therefore, two centers would simply have a distance of 2r.

To form an A-B-A-B-... hexagonal close packing of spheres, the coordinate points of the lattice will be the spheres' centers. Suppose the goal is to fill a box with spheres according to HCP. The box would be placed on the x-y-z coordinate space.

First form a row of spheres. The centers will all lie on a straight line. Their x-coordinates will vary by 2r, since the distance between the centers of touching spheres is 2r. The y-coordinate and z-coordinate will be the same. For simplicity, say that the balls are the first row and that their y- and z-coordinates are simply r, so that their surfaces rest on the zero-planes. Coordinates of the centers of the first row will look like (2r, r, r), (4r, r, r), (6r, r, r), (8r, r, r), ... .

Now, form the next row of spheres. Again, the centers will all lie on a straight line with x-coordinate differences of 2r, but there will be a shift of distance r in the x-direction so that the center of every sphere in this row aligns with the x-coordinate of where two spheres touch in the first row. This allows the spheres of the new row to slide in closer to the first row until all spheres in the new row are touching two spheres of the first row. Since the new spheres touch two spheres, their centers form an equilateral triangle with those two neighbors' centers. The side lengths are all 2r, so the height, or y-coordinate difference, between the rows is √3·r. Thus, this row will have coordinates like this: (r, r + √3·r, r), (3r, r + √3·r, r), (5r, r + √3·r, r), ... . The first sphere of this row only touches one sphere in the original row, but its location follows suit with the rest of the row.

The next row follows this pattern of shifting the x-coordinate by r and the y-coordinate by √3·r. Add rows until reaching the x and y maximum borders of the box.

In an A-B-A-B-... stacking pattern, the odd numbered planes of spheres will have exactly the same coordinates save for a pitch difference in the z-coordinates, and the even numbered planes of spheres will share the same x- and y-coordinates. Both types of planes are formed using the pattern mentioned above, but the starting place for the first row's first sphere will be different.

Using the plane described precisely above as plane #1, the A plane, place a sphere on top of this plane so that it lies touching three spheres in the A-plane.
The three spheres are all already touching each other, forming an equilateral triangle, and since they all touch the new sphere, the four centers form a regular tetrahedron.[7] All of the sides are equal to 2r because all of the sides are formed by two spheres touching. The height, or the z-coordinate difference between the two "planes", is (2√6/3)·r ≈ 1.633·r. This, combined with the offsets in the x- and y-coordinates, gives the centers of the first row in the B plane. The second row's coordinates follow the pattern first described above. The difference to the next plane, the A plane, is again (2√6/3)·r in the z-direction and a shift in the x and y to match the x- and y-coordinates of the first A plane.[8]

In general, the coordinates of sphere centers can be written as

[ 2i + ((j + k) mod 2), √3·(j + (1/3)(k mod 2)), (2√6/3)·k ] · r,

where i, j and k are indices starting at 0 for the x-, y- and z-coordinates.

Crystallographic features of HCP systems, such as vectors and atomic plane families, can be described using a four-value Miller index notation (hkil) in which the third index i denotes a degenerate but convenient component which is equal to −h − k. The h, i and k index directions are separated by 120°, and are thus not orthogonal; the l component is mutually perpendicular to the h, i and k index directions.

The FCC and HCP packings are the densest known packings of equal spheres with the highest symmetry (smallest repeat units). Denser sphere packings are known, but they involve unequal sphere packing. A packing density of 1, filling space completely, requires non-spherical shapes, such as honeycombs.

Replacing each contact point between two spheres with an edge connecting the centers of the touching spheres produces tetrahedrons and octahedrons of equal edge lengths. The FCC arrangement produces the tetrahedral-octahedral honeycomb. The HCP arrangement produces the gyrated tetrahedral-octahedral honeycomb. If, instead, every sphere is augmented with the points in space that are closer to it than to any other sphere, the duals of these honeycombs are produced: the rhombic dodecahedral honeycomb for FCC, and the trapezo-rhombic dodecahedral honeycomb for HCP.

Spherical bubbles appear in soapy water in a FCC or HCP arrangement when the water in the gaps between the bubbles drains out. This pattern also approaches the rhombic dodecahedral honeycomb or trapezo-rhombic dodecahedral honeycomb. However, such FCC or HCP foams of very small liquid content are unstable, as they do not satisfy Plateau's laws. The Kelvin foam and the Weaire–Phelan foam are more stable, having smaller interfacial energy in the limit of a very small liquid content.[9]

There are two types of interstitial holes left by hcp and fcc conformations: tetrahedral and octahedral voids. Four spheres surround the tetrahedral hole, with three spheres being in one layer and one sphere from the next layer. Six spheres surround an octahedral void, with three spheres coming from one layer and three spheres coming from the next layer. Structures of many simple chemical compounds, for instance, are often described in terms of small atoms occupying tetrahedral or octahedral holes in close-packed systems that are formed from larger atoms.

Layered structures are formed by alternating empty and filled octahedral planes. Two octahedral layers usually allow for four structural arrangements that can be filled by either an hcp or fcc packing system. In filling tetrahedral holes, a complete filling leads to an fcc field array. In unit cells, hole filling can sometimes lead to polyhedral arrays with a mix of hcp and fcc layering.[10]
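The general coordinate formula above is straightforward to turn into a generator for HCP sphere centers. A short Python sketch (illustrative, with r = 1) that also checks that the closest pair of generated centers is exactly one diameter apart:

from itertools import product
from math import sqrt, dist

def hcp_center(i, j, k, r=1.0):
    # Sphere center for integer indices (i, j, k), following the formula above.
    x = 2 * i + ((j + k) % 2)
    y = sqrt(3) * (j + (k % 2) / 3)
    z = (2 * sqrt(6) / 3) * k
    return (x * r, y * r, z * r)

centers = [hcp_center(i, j, k) for i, j, k in product(range(4), repeat=3)]
min_gap = min(dist(a, b) for a in centers for b in centers if a != b)
print(round(min_gap, 9))   # 2.0 -- neighboring spheres of radius 1 touch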
https://en.wikipedia.org/wiki/Close-packing_of_equal_spheres
James Douglas Montgomery (born April 13, 1963) is professor of sociology and economics at the University of Wisconsin–Madison. He received his Ph.D. in economics from the Massachusetts Institute of Technology. He has applied game-theoretic models and non-monotonic logic to present formal analysis and description of social theories and sociological phenomena. He was the recipient of the James Coleman Award (1999) for his paper “Toward a Role-Theoretic Conception of Embeddedness”. The paper is a major contribution toward the formalization of social theories and the sociological interpretation of game theory, since it presents a repeated-game model in which the players are not individuals (as traditionally conceived in economic models) but social roles, such as a profit-maximizing "businessperson" and a nonstrategic "friend" (Montgomery, 1999). In the early 1990s, Montgomery contributed to economic theories of network structures in the labor market. In 1991, Montgomery incorporated network structures in an adverse selection model to analyze the effects of social networks on labor market outcomes.[1] In 1992, Montgomery explored the role of “weak ties”, which he defined as non-frequent and transitory social relations, in the labor market.[2][3] He demonstrated that weak ties are positively related to higher wages and higher aggregate employment rates. He is currently[when?] working on integrating non-monotonic logic with social network analysis in the context of sociological theories.
https://en.wikipedia.org/wiki/James_D._Montgomery_(economist)
The UNCITRAL Model Law on Electronic Transferable Records (“MLETR”) is a uniform model law that was adopted by the United Nations Commission on International Trade Law (UNCITRAL) in 2017.[1] Its scope is to allow the use of transferable documents and instruments in electronic form. Transferable documents and instruments typically include bills of lading, warehouse receipts, bills of exchange, promissory notes and cheques. Whether a document or instrument qualifies as transferable is determined by national law. Transferable documents and instruments allow the holder to request delivery of goods or payment of a sum of money based on possession of the document or instrument. However, it has been difficult to reproduce the notion of possession, which has to do with control over tangible goods, in an electronic environment. The MLETR addresses that legal gap.

Under the MLETR, each dematerialised document does not need to be managed in a separate information system; the same system could manage multiple documents, or even all documents related to a business transaction. This may make it possible to merge logistics and supply chain documents, or even commercial and regulatory documents, into a single electronic transferable record.[2]

A study on the impact of adopting a law aligned with the MLETR in the United Kingdom has quantified the benefits of such adoption. Besides economic benefits, which include up to £224 billion in efficiency savings, adoption of such legislation may reduce the number of days needed for processing trade documents by up to 75%.[3] The impact assessment of the Electronic Trade Documents Bill (see below) prepared by the UK Government estimates economic benefits over the next 10 years ranging from a low estimate of 249.8 million pounds to a high estimate of 2,049.7 million pounds, with a best estimate of 1,137.0 million pounds.[4] At the micro-economic level, a study describing 16 case studies of the application of the UK Electronic Trade Documents Act (which is aligned with the MLETR) and the associated economic benefits is available.[5]

The MLETR is divided into four chapters: general provisions; provisions on functional equivalence; use of electronic transferable records; and cross-border recognition of electronic transferable records. The MLETR is built on the same fundamental principles as other UNCITRAL texts on electronic commerce, namely functional equivalence (articles 8-11 MLETR), technology neutrality and non-discrimination against the use of electronic means (article 7 MLETR). The MLETR is also model-neutral and may be implemented by using registries, tokens or distributed ledgers.[6] The Explanatory Note to the MLETR provides some guidance on the use of distributed ledgers in implementing the MLETR and is therefore considered an early example of a legislative text facilitating the use of blockchain.[7][8]

Article 2 MLETR defines the notion of an electronic transferable record as an electronic record that complies with the requirements of article 10 MLETR. It also defines a "transferable document or instrument" as a document that entitles its holder to the payment of a sum of money or the delivery of goods. Article 6 MLETR legally recognizes the possibility of including metadata in electronic transferable records. It is therefore considered a smart contract enabler.[9] Articles 8 and 9 MLETR provide functional equivalence rules, respectively, for the paper-based notions of "writing" and "signature".
Those articles do not need to be enacted if national law, for instance an electronic transactions act, already contains those notions and they are made applicable by reference to electronic transferable records. Article 10 MLETR establishes the conditions for functional equivalence between paper-based transferable documents and instruments, on the one hand, and electronic transferable records, on the other hand. Those conditions are: 1) the electronic transferable record shall contain all information required for the corresponding paper-based transferable document or instrument; 2) a reliable method shall be used: a) to identify the electronic transferable record as such; b) to render the electronic transferable record subject to control throughout its life-cycle; and c) to retain the integrity of the electronic transferable record throughout its life-cycle. Article 11 MLETR establishes the functional equivalence rule for possession of a transferable document or instrument. The conditions to satisfy that requirement are the use of a reliable method to establish exclusive control of the electronic transferable record and the identification of the person in control. Articles 10 and 11 MLETR are based on the notions of "control" and "singularity" of the electronic transferable record[10] (a loose illustrative sketch of these conditions appears at the end of this section).

In general, all events that may occur in relation to a transferable document or instrument may also occur in relation to an electronic transferable record.[11] Articles 15 and 16 MLETR reaffirm that general rule with respect to, respectively, endorsement and amendment of an electronic transferable record. An amendment should be identified as such, since otherwise the electronic nature of the record may not make the amendment easily recognisable. Article 12 MLETR contains a non-exclusive list of elements relevant to assessing the reliability of the method used. It also contains a safety clause indicating that a method is reliable in fact if it has fulfilled the function it pursued, alone or together with further evidence. Article 19 MLETR contains a provision on geographic non-discrimination of the electronic transferable record. The provision does not affect private international law rules.

The MLETR has been enacted in Bahrain,[12] in Belize,[13] in France,[14] in Kiribati,[15] in Paraguay,[16] in Papua New Guinea,[17] in Singapore,[18] in Timor-Leste,[19] in the United Kingdom,[20] and in the Abu Dhabi Global Market (ADGM), an international financial centre located in Abu Dhabi, United Arab Emirates.[21] The adoption of the MLETR in Bahrain took place in conjunction with a review of the Electronic Transactions Act, which was originally passed in 2002 and is based on the UNCITRAL Model Law on Electronic Commerce.[22] Singapore conducted two public consultations prior to enactment, the first in March 2017[23] and the second in summer 2019, in the broader framework of the review of the Electronic Transactions Act.[24] In Thailand, the Cabinet has approved the inclusion of the MLETR in the Electronic Transactions Act.[25] Czechia has conducted a public consultation on MLETR adoption.[26]

The International Chamber of Commerce (ICC) has been actively promoting adoption of the MLETR. Initially, this was done to facilitate the use of electronic bills of lading, as recommended in a report by the law firm Clyde & Co and the ICC Banking Commission.[27] MLETR adoption is now being actively promoted by the ICC Digital Standards Initiative (DSI), including as a means of overcoming the effects of the COVID-19 pandemic and of increasing supply chain resilience.
ICC DSI also offers guidance on MLETR implementation, including technical standards and business practices.[28] On 28 April 2021 the UK, Canada, France, Germany, Italy, Japan, the US and the European Union adopted a G7 Digital and Technology Ministerial Declaration[29] to develop a framework for the use of electronic transferable records that promotes the adoption of legal frameworks compatible with the principles of the MLETR. On 11 May 2022, the G7 Digital Ministers adopted a Ministerial Declaration[30] endorsing the “Principles for domestic legal frameworks to promote the use of electronic transferable records” contained in Annex 2 to the Declaration.[31] The G7 declarations have prompted the consideration of MLETR adoption in G7 member states, with significant impact.

With respect to use in business practice, one provider has been offering issuance of electronic bills of lading based on Singapore law incorporating the MLETR, approved by the International Group of P&I Clubs, since 1 July 2021.[36] These electronic bills of lading issued under the law of Singapore and the MLETR have been used for the first time to cover shipments from Australia to China.[37] In Bahrain, an electronic check system has been launched based on MLETR provisions incorporated in Bahraini law. It allows issuing, endorsing and presenting electronic checks on mobile phones and other devices.[38]
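As a loose illustration only (not a normative or real implementation, and with all names invented), the article 10 and 11 conditions described above can be pictured as a record that carries the document's information, protects its integrity, and keeps a single identified person in exclusive control:

    import hashlib
    from dataclasses import dataclass, field

    @dataclass
    class ElectronicTransferableRecord:
        contents: str    # all information of the paper document, cf. article 10
        controller: str  # the identified person in control, cf. article 11
        _digest: str = field(init=False)

        def __post_init__(self):
            # retain the integrity of the record throughout its life-cycle
            self._digest = hashlib.sha256(self.contents.encode()).hexdigest()

        def verify_integrity(self) -> bool:
            return hashlib.sha256(self.contents.encode()).hexdigest() == self._digest

        def transfer_control(self, new_controller: str) -> None:
            # control is exclusive: transferring it replaces the controller,
            # loosely mirroring endorsement (cf. article 15)
            assert self.verify_integrity(), "record has been altered"
            self.controller = new_controller

    bill = ElectronicTransferableRecord("Bill of lading: 100 crates, port A to port B", "Seller Ltd")
    bill.transfer_control("Bank plc")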
https://en.wikipedia.org/wiki/UNCITRAL_Model_Law_on_Electronic_Transferable_Records
The worst-case execution time (WCET) of a computational task is the maximum length of time the task could take to execute on a specific hardware platform. Worst-case execution time is typically used in reliable real-time systems, where understanding the worst-case timing behaviour of software is important for reliability or correct functional behaviour.

As an example, a computer system that controls the behaviour of an engine in a vehicle might need to respond to inputs within a specific amount of time. One component that makes up the response time is the time spent executing the software; hence, if the software's worst-case execution time can be determined, the designer of the system can use this with other techniques such as schedulability analysis to ensure that the system responds fast enough.

While WCET is potentially applicable to many real-time systems, in practice an assurance of WCET is mainly used by real-time systems that are related to high reliability or safety. For example, in airborne software some attention to software timing is required by DO-178C section 6.3.4. The increasing use of software in automotive systems is also driving the need for WCET analysis of software. In the design of some systems, WCET is often used as an input to schedulability analysis, although a much more common use of WCET in critical systems is to ensure that the pre-allocated timing budgets in a partition-scheduled system such as ARINC 653 are not violated.

Since the early days of embedded computing, embedded software developers have used either end-to-end measurements of the software under test or manual counting of instructions along the longest path. Both of these techniques have limitations: end-to-end measurements place a high burden on software testing to achieve the longest path, while counting instructions is only applicable to simple software and hardware. In both cases, a margin for error is often used to account for untested code, hardware performance approximations or mistakes. A margin of 20% is often used, although there is very little justification for this figure, save for historical confidence ("it worked last time"). As software and hardware have increased in complexity, they have driven the need for tool support. Complexity is increasingly becoming an issue in both static analysis and measurements. It is difficult to judge how wide the error margin should be and how well tested the software system is. System safety arguments based on a high-water mark achieved during testing are widely used, but become harder to justify as the software and hardware become less predictable. In the future, it is likely that a requirement for safety-critical systems will be that they are analyzed using both static and measurement-based approaches.[citation needed]

The problem of finding WCET by analysis is equivalent to the halting problem and is therefore not solvable in the general case. Fortunately, for the kinds of systems for which engineers typically want to find a WCET, the software is typically well structured, will always terminate, and is analyzable. Most methods for finding a WCET involve approximations (usually a rounding upwards when there are uncertainties), and hence in practice the exact WCET itself is often regarded as unobtainable. Instead, different techniques for finding the WCET produce estimates of the WCET.[1] Those estimates are typically pessimistic, meaning that the estimated WCET is known to be higher than the real WCET (which is usually what is desired). Much work on WCET analysis is on reducing the pessimism in analysis so that the estimated value is low enough to be valuable to the system designer.
WCET analysis usually refers to the execution time of a single thread, task or process. However, on modern hardware, especially multi-core, other tasks in the system will impact the WCET of a given task if they share cache, memory lines and other hardware features. Further, task scheduling events such as blocking or interruption should be considered in WCET analysis if they can occur in a particular system. Therefore, it is important to consider the context in which WCET analysis is applied.

There are many automated approaches to calculating WCET beyond the manual techniques above. These include static analysis of the software, measurement-based approaches, and hybrid combinations of the two.

A static WCET tool attempts to estimate WCET by examining the computer software without executing it directly on the hardware. Static analysis techniques have dominated research in the area since the late 1980s, although in an industrial setting, end-to-end measurement approaches were the standard practice. Static analysis tools work at a high level to determine the structure of a program's task, working either on a piece of source code or a disassembled binary executable. They also work at a low level, using timing information about the real hardware that the task will execute on, with all its specific features. By combining those two kinds of analysis, the tool attempts to give an upper bound on the time required to execute a given task on a given hardware platform. At the low level, static WCET analysis is complicated by the presence of architectural features that improve the average-case performance of the processor: instruction/data caches, branch prediction and instruction pipelines, for example. It is possible, but increasingly difficult, to determine tight WCET bounds if these modern architectural features are taken into account in the timing model used by the analysis. Certification authorities such as the European Aviation Safety Agency therefore rely on model validation suites.[citation needed]

Static analysis has produced good results for simpler hardware; however, a possible limitation of static analysis is that the hardware (the CPU in particular) has reached a complexity which is extremely hard to model. In particular, the modelling process can introduce errors from several sources: errors in chip design, lack of documentation, errors in documentation, errors in model creation; all leading to cases where the model predicts a different behavior from that observed on real hardware. Typically, where it is not possible to accurately predict a behavior, a pessimistic result is used, which can lead to the WCET estimate being much larger than anything achieved at run-time. Obtaining tight static WCET estimates is particularly difficult on multi-core processors. There are a number of commercial and academic tools that implement various forms of static analysis.

Measurement-based and hybrid approaches usually try to measure the execution times of short code segments on the real hardware, which are then combined in a higher-level analysis. Tools take into account the structure of the software (e.g. loops, branches) to produce an estimate of the WCET of the larger program. The rationale is that while it is hard to test the longest path in complex software, it is easier to test the longest path in many smaller components of it. A worst-case effect needs to be seen only once during testing for the analysis to be able to combine it with other worst-case events.
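The high-water-mark style of measurement can be illustrated with a toy sketch. Real WCET tooling instruments code on the target hardware and reads cycle counters or trace units; the Python below (all names our own) only demonstrates the principle of timing a segment repeatedly, keeping the maximum, and adding the customary margin:

    import time

    def high_water_mark(segment, inputs, runs_per_input=100):
        # Time the code segment over many runs and keep the maximum observed.
        worst_ns = 0
        for x in inputs:
            for _ in range(runs_per_input):
                start = time.perf_counter_ns()
                segment(x)
                elapsed = time.perf_counter_ns() - start
                worst_ns = max(worst_ns, elapsed)
        # The 20% margin mentioned above, applied to the observed maximum.
        return worst_ns, int(worst_ns * 1.2)

    def sample_segment(n):  # stand-in for the code under analysis
        return sum(i * i for i in range(n))

    observed, budget = high_water_mark(sample_segment, [10, 100, 1000])
    print(f"high-water mark: {observed} ns; with 20% margin: {budget} ns")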
Typically, the small sections of software can be measured automatically using techniques such as instrumentation (adding markers to the software) or with hardware support such as debuggers and CPU hardware tracing modules. These markers result in a trace of execution, which includes both the path taken through the program and the time at which different points were executed. The trace is then analyzed to determine the maximum time that each part of the program has ever taken to execute, what the maximum observed iteration time of each loop is, and whether there are any parts of the software that are untested (code coverage). Measurement-based WCET analysis has produced good results for both simple and complex hardware, although like static analysis it can suffer excessive pessimism in multi-core situations, where the impact of one core on another is hard to define. A limitation of measurement is that it relies on observing the worst-case effects during testing (although not necessarily at the same time). It can be hard to determine whether the worst-case effects have necessarily been tested. There are a number of commercial and academic tools that implement various forms of measurement-based analysis.

The most active research groups are in the USA (University of Michigan), Sweden (Mälardalen, Linköping), Germany (Saarbrücken, Dortmund, Braunschweig), France (Toulouse, Saclay, Rennes), Austria (Vienna), the UK (University of York and Rapita Systems Ltd), Italy (Bologna), Spain (Cantabria, Valencia), and Switzerland (Zurich). Recently, the topic of code-level timing analysis has found more attention outside of Europe from research groups in the US (North Carolina, Florida), Canada, Australia, Bangladesh (MBI LAB and RDS), the Kingdom of Saudi Arabia (HISE LAB, UQU), Singapore and India (IIT Madras, IISc Bangalore).

The first international WCET Tool Challenge took place during the autumn of 2006. It was organized by the University of Mälardalen and sponsored by the ARTIST2 Network of Excellence on Embedded Systems Design. The aim of the Challenge was to inspect and compare different approaches to analyzing the worst-case execution time. All available tools and prototypes able to determine safe upper bounds for the WCET of tasks participated. The final results[2] were presented in November 2006 at the ISoLA 2006 International Symposium in Paphos, Cyprus. A second Challenge took place in 2008.[3]
https://en.wikipedia.org/wiki/Worst-case_execution_time
The Liberty Alliance Project was an organization formed in September 2001 to establish standards, guidelines and best practices for identity management in computer systems. It grew to more than 150 organizations, including technology vendors, consumer-facing companies, educational organizations and governments. It released frameworks for federation, identity assurance, an Identity Governance Framework, and Identity Web Services. By 2009, the Kantara Initiative took over the work of the Liberty Alliance.

The group was originally conceived and named by Jeff Veis at Sun Microsystems, based in Menlo Park, California.[1] The initiative's goal, which was personally promoted by Scott McNealy of Sun, was to unify technology, commercial and government organizations to create a standard for federated, identity-based Internet applications as an alternative to technology appearing in the marketplace controlled by a single entity, such as Microsoft's Passport.[2] Another Microsoft initiative, HailStorm, was renamed My Services but quietly shelved by April 2002.[3] Sun positioned the group as independent, and Eric C. Dean of United Airlines became its president.[4]

In July 2002, the alliance announced Liberty Identity Federation (ID-FF) 1.0.[5] At that time, several member companies announced upcoming availability of Liberty-enabled products. Liberty Federation allowed consumers and users of Internet-based services and e-commerce applications to authenticate and sign on to a network or domain once from any device and then visit or take part in services from multiple websites. This federated approach did not require the user to re-authenticate and could support privacy controls established by the user. The Liberty Alliance subsequently released two more versions of the Identity Federation Framework, and then in November 2003, Liberty contributed its final version of the specification, ID-FF 1.2, to OASIS.[6] This contribution formed the basis for SAML 2.0. By 2007, industry analyst firm Gartner claimed that SAML had gained wide acceptance in the community.[7]

The Liberty Alliance released the Liberty Identity Web Services Framework (ID-WSF) in April 2004 for deploying and managing identity-based web services. Applications included geolocation, contact book, calendar, mobile messaging and People Service, for managing social applications such as bookmarks, blogs, calendars, photo sharing and instant messaging in a secure and privacy-respecting federated social network. A 2008 marketing report recommended considering it for federation.[8]

The alliance introduced a certification program in 2003, designed to test commercial and open-source products against published standards to assure base levels of interoperability between products. In 2007, the US General Services Administration began requiring this certification for participation in the US E-Authentication Identity Federation.[9] In January 2007, the alliance announced a project for open-source software developers building identity-based applications.
OpenLiberty.org was a portal where developers could collaborate and access tools and information to develop applications based on alliance standards.[10] In November 2008, OpenLiberty released an open-source application programming interface called ArisID.[11]

In February 2007 Oracle Corporation contributed the Identity Governance Framework to the alliance,[12] which released the first version publicly in July 2007.[13] The Identity Governance Framework defined how identity-related information is used, stored, and propagated using protocols such as LDAP, Security Assertion Markup Language, WS-Trust, and ID-WSF.

The Liberty Alliance began work on its identity assurance framework in 2008. The Identity Assurance Framework (IAF) detailed four identity assurance levels designed to link trusted identity-enabled enterprise, social networking and Web applications together based on business rules and the security risks associated with each level. The four levels of assurance were outlined in a 2006 document from the US National Institute of Standards and Technology.[14] The level of assurance provided is measured by the strength and rigor of the identity proofing process, the credential's strength, and the management processes the service provider applies to it. These four assurance levels were adopted by UK, Canadian, and US government services.

In 2007 the Liberty Alliance helped to found Project Concordia, an independent initiative for harmonizing identity specifications. It was active through 2008.[15] The alliance wrote papers on business and policy aspects of identity management.[16] It hosted meetings in 2007 and 2008 to promote itself.[17] Management board members included AOL, British Telecom, Computer Associates (CA), Fidelity Investments, Intel, the Internet Society (ISOC), Novell, Nippon Telegraph and Telephone (NTT), Vodafone, Oracle Corporation and Sun Microsystems.

As described above, Liberty contributed Identity Federation Framework (ID-FF) 1.2 to OASIS in November 2003. Only the archived PDF files of the contributed ID-FF 1.2 documents are individually addressable on the Liberty Alliance web site. (The original contributed documents are lost.) To obtain copies of the remaining archived files, download both the Liberty ID-FF 1.2 archive and the Liberty 1.1 support archive.
https://en.wikipedia.org/wiki/Liberty_Alliance
An information model in software engineering is a representation of concepts and the relationships, constraints, rules, and operations that specify data semantics for a chosen domain of discourse. Typically it specifies relations between kinds of things, but may also include relations with individual things. It can provide a sharable, stable, and organized structure of information requirements or knowledge for the domain context.[1]

The term information model in general is used for models of individual things, such as facilities, buildings, process plants, etc. In those cases, the concept is specialised to facility information model, building information model, plant information model, etc. Such an information model is an integration of a model of the facility with the data and documents about the facility.

Within the field of software engineering and data modeling, an information model is usually an abstract, formal representation of entity types that may include their properties, relationships and the operations that can be performed on them. The entity types in the model may be kinds of real-world objects, such as devices in a network, or occurrences, or they may themselves be abstract, such as the entities used in a billing system. Typically, they are used to model a constrained domain that can be described by a closed set of entity types, properties, relationships and operations. An information model provides formalism to the description of a problem domain without constraining how that description is mapped to an actual implementation in software. There may be many mappings of the information model. Such mappings are called data models, irrespective of whether they are object models (e.g. using UML), entity relationship models or XML schemas.

In 1976, an entity-relationship (ER) graphic notation was introduced by Peter Chen. He stressed that it was a "semantic" modelling technique, independent of any database modelling technique such as hierarchical, CODASYL, or relational.[2] Since then, languages for information models have continued to evolve. Some examples are the Integrated Definition Language 1 Extended (IDEF1X), the EXPRESS language and the Unified Modeling Language (UML).[1] Research by contemporaries of Peter Chen, such as J. R. Abrial (1974) and G. M. Nijssen (1976), led to today's Fact Oriented Modeling (FOM) languages, which are based on linguistic propositions rather than on "entities". FOM tools can be used to generate an ER model, which means that the modeler can avoid the time-consuming and error-prone practice of manual normalization. Object-Role Modeling language (ORM) and Fully Communication Oriented Information Modeling (FCO-IM) are both research results developed in the early 1990s, based upon earlier research. In the 1980s there were several approaches to extending Chen's Entity Relationship Model. Also important in this decade was REMORA by Colette Rolland.[3]

The ICAM Definition (IDEF) Language was developed under the U.S. Air Force ICAM Program during the 1976 to 1982 timeframe.[4] The objective of the ICAM Program, according to Lee (1999), was to increase manufacturing productivity through the systematic application of computer technology. IDEF includes three different modeling methods: IDEF0, IDEF1, and IDEF2, for producing a functional model, an information model, and a dynamic model respectively. IDEF1X is an extended version of IDEF1. The language is in the public domain. It is a graphical representation and is designed using the ER approach and relational theory.
It is used to represent the “real world” in terms of entities, attributes, and relationships between entities. Normalization is enforced by KEY structures and KEY migration. The language identifies property groupings (aggregation) to form complete entity definitions.[1]

EXPRESS was created as ISO 10303-11 for formally specifying the information requirements of a product data model. It is part of a suite of standards informally known as the STandard for the Exchange of Product model data (STEP). It was first introduced in the early 1990s.[5][6] The language, according to Lee (1999), is a textual representation. In addition, a graphical subset of EXPRESS called EXPRESS-G is available. EXPRESS is based on programming languages and the O-O paradigm. A number of languages have contributed to EXPRESS, in particular Ada, Algol, C, C++, Euler, Modula-2, Pascal, PL/1, and SQL. EXPRESS consists of language elements that allow an unambiguous object definition and the specification of constraints on the objects defined. It uses the SCHEMA declaration to provide partitioning, and it supports the specification of data properties, constraints, and operations.[1]

UML is a modeling language for specifying, visualizing, constructing, and documenting the artifacts, rather than the processes, of software systems. It was conceived originally by Grady Booch, James Rumbaugh, and Ivar Jacobson. UML was approved by the Object Management Group (OMG) as a standard in 1997. The language, according to Lee (1999), is non-proprietary and available to the public. It is a graphical representation. The language is based on the object-oriented paradigm. UML contains notations and rules and is designed to represent data requirements in terms of O-O diagrams. UML organizes a model into a number of views that present different aspects of a system. The contents of a view are described in diagrams, which are graphs with model elements. A diagram contains model elements that represent common O-O concepts such as classes, objects, messages, and relationships among these concepts.[1]

IDEF1X, EXPRESS, and UML can all be used to create a conceptual model and, according to Lee (1999), each has its own characteristics. Although some may lead to a natural usage (e.g., implementation), one is not necessarily better than another. In practice, it may require more than one language to develop all information models when an application is complex. In fact, the modeling practice is often more important than the language chosen.[1]

Information models can also be expressed in formalized natural languages, such as Gellish. Gellish, which has natural language variants Gellish Formal English, Gellish Formal Dutch (Gellish Formeel Nederlands), etc., is an information representation language or modeling language that is defined in the Gellish smart Dictionary-Taxonomy, which has the form of a taxonomy/ontology. A Gellish database is suitable for storing not only information models, but also knowledge models, requirements models and dictionaries, taxonomies and ontologies. Information models in Gellish English use Gellish Formal English expressions.
For example, a geographic information model might consist of a number of Gellish Formal English expressions stating facts about individual things, such as that a particular landmark ⟨is located in⟩ a particular city, whereas information requirements and knowledge can be expressed by analogous expressions about kinds of things. Such Gellish expressions use names of concepts (such as 'city') and relation types (such as ⟨is located in⟩ and ⟨is classified as a⟩) that should be selected from the Gellish Formal English Dictionary-Taxonomy (or from your own domain dictionary). The Gellish English Dictionary-Taxonomy enables the creation of semantically rich information models, because the dictionary contains definitions of more than 40,000 concepts, including more than 600 standard relation types. Thus, an information model in Gellish consists of a collection of Gellish expressions that use those phrases and dictionary concepts to express facts or make statements, queries and answers.

The Distributed Management Task Force (DMTF) provides a standard set of information models for various enterprise domains under the general title of the Common Information Model (CIM). Specific information models are derived from CIM for particular management domains. The TeleManagement Forum (TMF) has defined an advanced model for the telecommunication domain (the Shared Information/Data Model, or SID) as another example. This includes views from the business, service and resource domains within the telecommunication industry. The TMF has established a set of principles that an OSS integration should adopt, along with a set of models that provide standardized approaches. The models interact with the information model (the Shared Information/Data Model, or SID) via a process model (the Business Process Framework, or eTOM) and a life cycle model.
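A minimal Python sketch (with invented entity types and facts, echoing the geographic example above) of how an information model separates entity types, relationships, and constraints from any particular storage mapping:

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class City:              # an entity type; "Paris is classified as a city"
        name: str

    @dataclass(frozen=True)
    class Landmark:          # another entity type, carrying a relationship
        name: str
        located_in: City     # the "is located in" relation type

    paris = City("Paris")
    tower = Landmark("Eiffel Tower", located_in=paris)

    # A model-level constraint: every landmark must be located in a city.
    assert isinstance(tower.located_in, City)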
https://en.wikipedia.org/wiki/Information_model
Sideways Arithmetic From Wayside School is a children's novel by Louis Sachar in the Wayside School series. The book contains mathematical and logic puzzles for the reader to solve, presented as what The New Yorker called "absurdist math problems."[1] The problems are interspersed with characteristically quirky stories about the students at Wayside School.

Sideways Arithmetic from Wayside School begins with a foreword from Sachar in character as Louis the yard teacher, explaining the "sideways" nature of the problems within. He says that when he showed the students at Wayside School a regular math textbook, they laughed, thinking it was a book of jokes. The first chapter introduces Sue, a new student in Mrs. Jewls's class. She is bewildered to discover that the arithmetic lessons involve adding words instead of numbers using verbal arithmetic, e.g., "elf + elf = fool." The book presents an explanation for children of how these problems are solved, and then gives them several to do on their own. In chapter 2, Sue protests that math isn't supposed to be done that way, and gives the class a few traditional math problems like "seven + four = eleven." These are also presented as verbal arithmetic puzzles that are, as Mrs. Jewls states, impossible; the reader is tasked with figuring out why. In the next chapter, Mrs. Jewls tells Sue that if she doesn't understand how to do math in her class, she should switch schools. But when Sue inadvertently gets a question correct, Mrs. Jewls lets her stay. Chapter 4 contains more verbal arithmetic problems, this time with multiplication. Beginning with chapter 5, the book switches to logic and optimization problems. In this chapter, students have to determine what happened at recess through logical elimination. In chapter 6, Mrs. Jewls is having trouble filling out report cards because she lost the correct answers to a series of quizzes; the reader must logically deduce those answers based on the scores each student got. Chapter 7 presents an algebraic optimization problem: lunch lady Miss Mush's meals become more and more disgusting the more of them she prepares, and the reader must determine, among other things, how many meals she should cook so that the most students are willing to eat. Chapter 8 involves "false logic" puzzles, with statements presented as questions on true-or-false quizzes. In the final chapter, Sue finally makes a new friend, Joy, who has stayed after school trying to solve her impossible true-or-false test involving the liar's paradox. They go home together.

Charles Ashbacher, writing in the Journal of Recreational Mathematics, called Sideways Arithmetic an "excellent supplementary book for elementary school mathematics", and suggested that the verbal arithmetic problems would be particularly useful in teaching.[2] The Guardian praised the book and its sequel, writing: "Sachar never wastes a moment, a word or a clue."[3]
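Verbal arithmetic puzzles of the "elf + elf = fool" kind can be solved mechanically by trying digit assignments. A small Python sketch (the function is our own illustration, not from the book):

    from itertools import permutations

    def solve(word1, word2, total):
        # Assign a distinct digit to each letter, with no leading zeros,
        # so that word1 + word2 = total holds as numbers.
        letters = sorted(set(word1 + word2 + total))
        leading = {word1[0], word2[0], total[0]}
        for digits in permutations(range(10), len(letters)):
            env = dict(zip(letters, digits))
            if any(env[ch] == 0 for ch in leading):
                continue
            value = lambda w: int("".join(str(env[ch]) for ch in w))
            if value(word1) + value(word2) == value(total):
                yield env

    for solution in solve("elf", "elf", "fool"):
        print(solution)   # e=7, l=2, f=1, o=4: 721 + 721 = 1442

Running the same search on Sue's "seven + four = eleven" yields no assignments at all, which is one way to see why Mrs. Jewls calls such problems impossible.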
https://en.wikipedia.org/wiki/Sideways_Arithmetic_From_Wayside_School
In cryptography, an oblivious transfer (OT) protocol is a type of protocol in which a sender transfers one of potentially many pieces of information to a receiver, but remains oblivious as to what piece (if any) has been transferred.

The first form of oblivious transfer was introduced in 1981 by Michael O. Rabin.[1] In this form, the sender sends a message to the receiver with probability 1/2, while the sender remains oblivious as to whether or not the receiver received the message. Rabin's oblivious transfer scheme is based on the RSA cryptosystem. A more useful form of oblivious transfer, called 1–2 oblivious transfer or "1 out of 2 oblivious transfer", was developed later by Shimon Even, Oded Goldreich, and Abraham Lempel,[2] in order to build protocols for secure multiparty computation. It is generalized to "1 out of n oblivious transfer", where the user gets exactly one database element without the server getting to know which element was queried, and without the user knowing anything about the other elements that were not retrieved. The latter notion of oblivious transfer is a strengthening of private information retrieval, in which the database is not kept private. Claude Crépeau showed that Rabin's oblivious transfer is equivalent to 1–2 oblivious transfer.[3]

Further work has revealed oblivious transfer to be a fundamental and important problem in cryptography. It is considered one of the critical problems in the field, because of the importance of the applications that can be built based on it. In particular, it is complete for secure multiparty computation: that is, given an implementation of oblivious transfer it is possible to securely evaluate any polynomial-time computable function without any additional primitive.[4]

In Rabin's oblivious transfer protocol, the sender generates an RSA public modulus N = pq, where p and q are large prime numbers, and an exponent e relatively prime to φ(N) = (p − 1)(q − 1). The sender encrypts the message m as m^e mod N. The receiver then picks a random x modulo N and sends x^2 mod N to the sender, who replies with a square root y of x^2 mod N. If the receiver finds y is neither x nor −x modulo N, the receiver will be able to factor N and therefore decrypt m^e to recover m (see Rabin encryption for more details). However, if y is x or −x mod N, the receiver will have no information about m beyond the encryption of it. Since every quadratic residue modulo N has four square roots, the probability that the receiver learns m is 1/2.

In a 1–2 oblivious transfer protocol, Alice the sender has two messages m0 and m1, and wants to ensure that the receiver only learns one. Bob, the receiver, has a bit b and wishes to receive mb without Alice learning b. The protocol of Even, Goldreich, and Lempel (which the authors attribute partially to Silvio Micali) is general, but can be instantiated using RSA encryption (a toy sketch of the RSA instantiation is given at the end of this section).

A 1-out-of-n oblivious transfer protocol can be defined as a natural generalization of a 1-out-of-2 oblivious transfer protocol. Specifically, a sender has n messages, and the receiver has an index i; the receiver wishes to receive the i-th among the sender's messages, without the sender learning i, while the sender wants to ensure that the receiver receives only one of the n messages. 1-out-of-n oblivious transfer is incomparable to private information retrieval (PIR). On the one hand, 1-out-of-n oblivious transfer imposes an additional privacy requirement for the database: namely, that the receiver learn at most one of the database entries. On the other hand, PIR requires communication sublinear in n, whereas 1-out-of-n oblivious transfer has no such requirement.
However, single-server PIR is a sufficient assumption for constructing 1-out-of-2 oblivious transfer.[5] A 1-out-of-n oblivious transfer protocol with sublinear communication was first constructed (as a generalization of single-server PIR) by Eyal Kushilevitz and Rafail Ostrovsky.[6] More efficient constructions were proposed by Moni Naor and Benny Pinkas,[7] William Aiello, Yuval Ishai and Omer Reingold,[8] and Sven Laur and Helger Lipmaa.[9] In 2017, Kolesnikov et al.[10] proposed an efficient 1-n oblivious transfer protocol which requires roughly 4x the cost of 1-2 oblivious transfer in the amortized setting.

Brassard, Crépeau and Robert further generalized this notion to k-n oblivious transfer,[11] wherein the receiver obtains a set of k messages from the n-message collection. The set of k messages may be received simultaneously ("non-adaptively"), or they may be requested consecutively, with each request based on previous messages received.[12] k-n oblivious transfer is a special case of generalized oblivious transfer, which was presented by Ishai and Kushilevitz.[13] In that setting, the sender has a set U of n messages, and the transfer constraints are specified by a collection A of permissible subsets of U. The receiver may obtain any subset of the messages in U that appears in the collection A. The sender should remain oblivious of the selection made by the receiver, while the receiver cannot learn the value of the messages outside the subset of messages that he chose to obtain. The collection A is monotone decreasing, in the sense that it is closed under containment (i.e., if a given subset B is in the collection A, so are all of the subsets of B). The solution proposed by Ishai and Kushilevitz uses parallel invocations of 1-2 oblivious transfer while making use of a special model of private protocols. Later on, other solutions that are based on secret sharing were published – one by Bhavani Shankar, Kannan Srinathan, and C. Pandu Rangan,[14] and another by Tamir Tassa.[15]

In the early seventies, Stephen Wiesner introduced a primitive called multiplexing in his seminal paper "Conjugate Coding", which was the starting point of quantum cryptography.[16] Unfortunately it took more than ten years to be published. Even though this primitive was equivalent to what was later called 1–2 oblivious transfer, Wiesner did not see its application to cryptography. Protocols for oblivious transfer can be implemented with quantum systems. In contrast to other tasks in quantum cryptography, like quantum key distribution, it has been shown that quantum oblivious transfer cannot be implemented with unconditional security, i.e. the security of quantum oblivious transfer protocols cannot be guaranteed only from the laws of quantum physics.[17]
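The RSA instantiation of the Even–Goldreich–Lempel 1–2 protocol mentioned above can be sketched in a few lines of Python. This is a toy with tiny, insecure textbook RSA parameters, and the variable names are our own:

    import random

    N, e, d = 3233, 17, 2753      # toy RSA: N = 61*53, e*d = 1 (mod phi(N))
    m0, m1 = 1234, 2500           # Alice's two messages, both < N

    # Alice publishes N, e and two random values, one per message slot.
    x0, x1 = random.randrange(N), random.randrange(N)

    # Bob wants message b; he blinds his chosen slot with a random k.
    b = 1
    k = random.randrange(N)
    v = ([x0, x1][b] + pow(k, e, N)) % N    # Alice cannot tell b from v.

    # Alice unblinds v against both slots. One result equals k, but she
    # cannot tell which; the other is a value Bob cannot predict.
    k0 = pow((v - x0) % N, d, N)
    k1 = pow((v - x1) % N, d, N)
    c0, c1 = (m0 + k0) % N, (m1 + k1) % N   # mask each message

    # Bob can strip the mask only from the message he chose.
    mb = ([c0, c1][b] - k) % N
    assert mb == [m0, m1][b]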
https://en.wikipedia.org/wiki/Oblivious_transfer
An acronym is a type of abbreviation consisting of a phrase whose only pronounced elements are the initial letters or initial sounds of words inside that phrase. Acronyms are often spelled with the initial letter of each word in all caps with no punctuation. For some, an initialism[1] or alphabetism connotes this general meaning, and an acronym is a subset with a narrower definition; an acronym is pronounced as a word rather than as a sequence of letters. In this sense, NASA (/ˈnæsə/) is an acronym, but USA (/ˌjuː.ɛsˈeɪ/) is not.[2][3] The broader sense of acronym, ignoring pronunciation, is its original meaning[4] and is in common use.[5] Dictionary and style-guide editors dispute whether the term acronym can be legitimately applied to abbreviations which are not pronounced as words, and they do not agree on acronym spacing, casing, and punctuation. The phrase that the acronym stands for is called its expansion. The meaning of an acronym includes both its expansion and the meaning of its expansion.

The word acronym is formed from the Greek roots akro-, meaning 'height, summit, or tip', and -nym, 'name'.[6][unreliable source] This neoclassical compound appears to have originated in German, with attestations for the German form Akronym appearing as early as 1921.[7][8] Citations in English date to a 1940 translation of a novel by the German writer Lion Feuchtwanger.[9]

It is an unsettled question in English lexicography and style guides whether it is legitimate to use the word acronym to describe forms that use initials but are not pronounced as a word. While there is plenty of evidence that acronym is used widely in this way, some sources do not acknowledge this usage, reserving the term acronym only for forms pronounced as a word, and using initialism or abbreviation for those that are not. Some sources acknowledge the usage, but vary in whether they criticize or forbid it, allow it without comment, or explicitly advocate it. Some mainstream English dictionaries from across the English-speaking world affirm a sense of acronym which does not require being pronounced as a word. American English dictionaries such as Merriam-Webster,[10] Dictionary.com's Random House Webster's Unabridged Dictionary[11] and the American Heritage Dictionary,[12] as well as the British Oxford English Dictionary[4] and the Australian Macquarie Dictionary,[13] all include a sense in their entries for acronym equating it with initialism, although The American Heritage Dictionary criticizes it with the label "usage problem".[12] However, many English language dictionaries, such as the Collins COBUILD Advanced Dictionary,[14] Cambridge Advanced Learner's Dictionary,[15] Macmillan Dictionary,[16] Longman Dictionary of Contemporary English,[17] New Oxford American Dictionary,[18] Webster's New World Dictionary,[19] and Lexico from Oxford University Press[20] do not acknowledge such a sense. Most of the dictionary entries and style-guide recommendations regarding the term acronym in the twentieth century did not explicitly acknowledge or support the expansive sense. The Merriam–Webster's Dictionary of English Usage from 1994 is one of the earliest publications to advocate for the expansive sense,[21] and all the major dictionary editions that include a sense of acronym equating it with initialism were first published in the twenty-first century.
The trend among dictionary editors appears to be towards including a sense defining acronym as initialism: the Merriam-Webster's Collegiate Dictionary added such a sense in its 11th edition in 2003,[22][23] and both the Oxford English Dictionary[24][4] and The American Heritage Dictionary[25][12] added such senses in their 2011 editions. The 1989 edition of the Oxford English Dictionary only included the exclusive sense for acronym, and its earliest citation was from 1943.[24] In early December 2010, Duke University researcher Stephen Goranson published a citation for acronym to the American Dialect Society e-mail discussion list which refers to PGN being pronounced "pee-gee-enn", antedating English-language usage of the word to 1940.[26] Linguist Ben Zimmer then mentioned this citation in his December 16, 2010 "On Language" column about acronyms in The New York Times Magazine.[27] By 2011, the publication of the 3rd edition of the Oxford English Dictionary added the expansive sense to its entry for acronym and included the 1940 citation.[4] As the Oxford English Dictionary structures the senses in order of chronological development,[28] it now gives the "initialism" sense first.

English-language usage and style guides which have entries for acronym generally criticize the usage that refers to forms that are not pronounceable words. Fowler's Dictionary of Modern English Usage says that acronym "denotes abbreviations formed from initial letters of other words and pronounced as a single word, such as NATO (as distinct from B-B-C)" but adds later "In everyday use, acronym is often applied to abbreviations that are technically initialisms, since they are pronounced as separate letters."[29] The Chicago Manual of Style acknowledges the complexity ("Furthermore, an acronym and initialism are occasionally combined (JPEG), and the line between initialism and acronym is not always clear") but still defines the terms as mutually exclusive.[30] Other guides outright deny any legitimacy to the usage: Bryson's Dictionary of Troublesome Words says "Abbreviations that are not pronounced as words (IBM, ABC, NFL) are not acronyms; they are just abbreviations."[31] Garner's Modern American Usage says "An acronym is made from the first letters or parts of a compound term. It's read or spoken as a single word, not letter by letter."[32] The New York Times Manual of Style and Usage says "Unless pronounced as a word, an abbreviation is not an acronym."[33] In contrast, some style guides do support it, whether explicitly or implicitly. The 1994 edition of Merriam-Webster's Dictionary of English Usage defends the usage on the basis of a claim that dictionaries do not make a distinction.[21] The BuzzFeed style guide describes CBS and PBS as "acronyms ending in S".[34]

Acronymy, like retronymy, is a linguistic process that has existed throughout history but for which there was little to no naming, conscious attention, or systematic analysis until relatively recent times. Like retronymy, it became much more common in the twentieth century than it had formerly been. Ancient examples of acronymy (before the term "acronym" was invented) include SPQR (Senatus Populusque Romanus) and the early Christian use of the Greek word ΙΧΘΥΣ (ichthys, 'fish') as an acronym for 'Jesus Christ, Son of God, Savior'. During the mid- to late nineteenth century, acronyms became a trend among American and European businessmen: abbreviating corporation names, such as on the sides of railroad cars (e.g., "Richmond, Fredericksburg and Potomac Railroad" → "RF&P"); on the sides of barrels and crates; and on ticker tape and newspaper stock listings (e.g. American Telephone and Telegraph Company → AT&T).
Some well-known commercial examples dating from the 1890s through 1920s include "Nabisco" ("National Biscuit Company"),[37] "Esso" (from "S.O.", from "Standard Oil"), and "Sunoco" ("Sun Oil Company"). Another field for the adoption of acronyms was modern warfare, with its many highly technical terms. While there is no recorded use of military acronyms dating from the American Civil War (acronyms such as "ANV" for "Army of Northern Virginia" post-date the war itself), they became somewhat common in World War I, and by World War II they were widespread even in the slang of soldiers,[38] who referred to themselves as G.I.s.

The widespread, frequent use of acronyms across the whole range of linguistic registers is relatively new in most languages, becoming increasingly evident since the mid-twentieth century. As literacy spread and technology produced a constant stream of new and complex terms, abbreviations became increasingly convenient. The Oxford English Dictionary (OED) records the first printed use of the word initialism as occurring in 1899, but it did not come into general use until 1965, well after acronym had become common. In English, acronyms pronounced as words may be a twentieth-century phenomenon. Linguist David Wilton in Word Myths: Debunking Linguistic Urban Legends claims that "forming words from acronyms is a distinctly twentieth- (and now twenty-first-) century phenomenon. There is only one known pre-twentieth-century [English] word with an acronymic origin and it was in vogue for only a short time in 1886. The word is colinderies or colinda, an acronym for the Colonial and Indian Exposition held in London in that year."[39][40] However, although acronymic words seem not to have been employed in general vocabulary before the twentieth century (as Wilton points out), the concept of their formation is treated as effortlessly understood (and evidently not novel) in an Edgar Allan Poe story of the 1830s, "How to Write a Blackwood Article", which includes the contrived acronym "P.R.E.T.T.Y.B.L.U.E.B.A.T.C.H."

The use of Latin and Neo-Latin terms in vernaculars has been pan-European and pre-dates modern English. Some examples of acronyms in this class are A.M. (from Latin ante meridiem, 'before noon'), P.M. (post meridiem, 'after noon'), and A.D. (anno Domini, 'in the year of our Lord'). The earliest example of a word derived from an acronym listed by the OED is "abjud" (now "abjad"), formed from the original first four letters of the Arabic alphabet in the late eighteenth century.[41] Some acrostics pre-date this, however, such as the Restoration witticism arranging the names of some members of Charles II's Committee for Foreign Affairs to produce the "CABAL" ministry.[42] OK, a term of disputed origin, dates back at least to the early nineteenth century and is now used around the world.

Acronyms are used most often to abbreviate names of organizations and long or frequently referenced terms. The armed forces and government agencies frequently employ acronyms; some well-known examples from the United States are among the "alphabet agencies" (jokingly referred to as "alphabet soup") created under the New Deal by Franklin D. Roosevelt (himself known as "FDR"). Business and industry also coin acronyms prolifically. The rapid advance of science and technology also drives the usage, as new inventions and concepts with multiword names create a demand for shorter, more pronounceable names.[citation needed] One representative example, from the U.S. Navy, is "COMCRUDESPAC", which stands for "commander, cruisers destroyers Pacific"; it is also seen as "ComCruDesPac".
Inventors are encouraged to anticipate the formation of acronyms by making new terms "YABA-compatible" ("yet another bloody acronym"), meaning the term's acronym can be pronounced and is not an offensive word: "When choosing a new name, be sure it is 'YABA-compatible'."[43]

Acronym use has been further popularized by text messaging on mobile phones with the short message service (SMS) and instant messenger (IM). To fit messages into the 160-character SMS limit, and to save time, acronyms such as "GF" ("girlfriend"), "LOL" ("laughing out loud"), and "DL" ("download" or "down low") have become popular.[44] Some prescriptivists disdain texting acronyms and abbreviations as decreasing clarity, or as failure to use "pure" or "proper" English. Others point out that languages have always continually changed, and argue that acronyms should be embraced as inevitable, or as innovation that adapts the language to changing circumstances. In this view, the modern practice is just the "proper" English of the current generation of speakers, much like the earlier abbreviation of corporation names on ticker tape or in newspapers.

The exact pronunciation of "word acronyms" (those pronounced as words rather than sounded out as individual letters) often varies by speaker population. These may be regional, occupational, or generational differences, or simply personal preference. For instance, there have been decades of online debate about how to pronounce GIF (/ɡɪf/ or /dʒɪf/) and BIOS (/ˈbaɪoʊs/, /ˈbaɪoʊz/, or /ˈbaɪɒs/). Similarly, some letter-by-letter initialisms may become word acronyms over time, especially in combining forms: IP for Internet Protocol is generally said as two letters, but IPsec for Internet Protocol Security is usually pronounced as /ˌaɪˈpiːsɛk/ or /ˈɪpsɛk/, along with variant capitalization like "IPSEC" and "Ipsec". Pronunciation may even vary within a single speaker's vocabulary, depending on narrow contexts. As an example, the database programming language SQL is usually said as three letters, but in reference to Microsoft's implementation it is traditionally pronounced like the word sequel.

In writing for a broad audience, the words of an acronym are typically written out in full at its first occurrence within a given text. Expansion At First Use (EAFU) benefits readers unfamiliar with the acronym.[45] Another text aid is an abbreviation key, which lists and expands all acronyms used, a reference for readers who skipped past the first use. (This is especially important for paper media, where no search utility is available to find the first use.) It also gives students a convenient review list to memorize the important acronyms introduced in a textbook chapter. Expansion at first use and abbreviation keys originated in the print era, but they are equally useful for electronic text.

While acronyms provide convenience and succinctness for specialists, they often degenerate into confusing jargon. This may be intentional, to exclude readers without domain-specific knowledge. New acronyms may also confuse when they coincide with an already existing acronym having a different meaning. Medical literature has been struggling to control the proliferation of acronyms, including efforts by the American Academy of Dermatology.[46]

Acronyms are often taught as mnemonic devices: for example, the colors of the rainbow are ROY G. BIV (red, orange, yellow, green, blue, indigo, violet). They are also used as mental checklists: in aviation, GUMPS stands for gas-undercarriage-mixture-propeller-seat belts.
Other mnemonic acronyms include CAN SLIM in finance, PAVPANIC in English grammar, and PEMDAS in mathematics.

It is not uncommon for acronyms to be cited in a kind of false etymology, called a folk etymology, for a word. Such etymologies persist in popular culture but have no factual basis in historical linguistics, and are examples of language-related urban legends. For example, "cop" is commonly cited as being derived, it is presumed, from "constable on patrol",[47] and "posh" from "port outward, starboard home".[48] With some of these specious expansions, the "belief" that the etymology is acronymic has clearly been tongue-in-cheek among many citers, as with "gentlemen only, ladies forbidden" for "golf", although many other (more credulous) people have uncritically taken it for fact.[48][49] Taboo words in particular commonly have such false etymologies: "shit" from "ship/store high in transit"[39][38] or "special high-intensity training", and "fuck" from "for unlawful carnal knowledge", or "fornication under consent/command of the king".[38]

In English, abbreviations have previously been marked by a wide variety of punctuation. Obsolete forms include using an overbar or colon to show the ellipsis of letters following the initial part. The forward slash is still common in many dialects for some fixed expressions—such as in w/ for "with" or A/C for "air conditioning"—while only infrequently being used to abbreviate new terms. The apostrophe is common for grammatical contractions (e.g. don't, y'all, and ain't) and for contractions marking unusual pronunciations (e.g. a'ight, cap'n, and fo'c'sle for "all right", "captain", and "forecastle"). By the early twentieth century, it was standard to use a full stop/period/point, especially in the cases of initialisms and acronyms. Previously, especially for Latin abbreviations, this was done with a full space between every full word (e.g. A. D., i. e., and e. g. for "Anno Domini", "id est", and "exempli gratia"). This even included punctuation after both Roman and Arabic numerals to indicate their use in place of the full names of each number (e.g. LII. or 52. in place of "fifty-two" and "1/4." or "1./4." to indicate "one-fourth"). Both conventions have fallen out of common use in all dialects of English, except in places where an Arabic decimal includes a medial decimal point.

Particularly in British and Commonwealth English, all such punctuation marking acronyms and other capitalized abbreviations is now uncommon and considered either unnecessary or incorrect. The presence of all-capital letters is now thought sufficient to indicate the nature of the UK, the EU, and the UN. Forms such as the U.S.A. for "the United States of America" are now considered to indicate American or North American English. Even within those dialects, such punctuation is becoming increasingly uncommon.[50] Some style guides, such as that of the BBC, no longer require punctuation to show ellipsis; some even proscribe it. Larry Trask, American author of The Penguin Guide to Punctuation, states categorically that, in British English, "this tiresome and unnecessary practice is now obsolete."[51] Nevertheless, some influential style guides, many of them American, still require periods in certain instances. For example, The New York Times Manual of Style and Usage recommends following each segment with a period when the letters are pronounced individually, as in "K.G.B.", but not when pronounced as a word, as in "NATO".[52] The logic of this style is that the pronunciation is reflected graphically by the punctuation scheme.
When a multiple-letter abbreviation is formed from a single word, periods are in general not used, although they may be common in informal usage. "TV", for example, may stand for a single word ("television" or "transvestite", for instance), and is in general spelled without punctuation (except in the plural). Although "PS" stands for the single English word "postscript" or the Latin postscriptum, it is often spelled with periods ("P.S.") as if parsed as Latin post scriptum instead. The slash ('/', or solidus) is sometimes used to separate the letters in an acronym, as in "N/A" ("not applicable, not available") and "c/o" ("care of"). Inconveniently long words used frequently in related contexts can be represented according to their letter count as a numeronym. For example, "i18n" abbreviates "internationalization", a computer-science term for adapting software for worldwide use; the "18" represents the 18 letters that come between the first and the last in "internationalization". Similarly, "localization" can be abbreviated "l10n"; "multilingualization" "m17n"; and "accessibility" "a11y". In addition to the use of a specific number replacing that many letters, the more general "x" can be used to replace an unspecified number of letters. Examples include "Crxn" for "crystallization" and the series familiar to physicians for history, diagnosis, and treatment ("hx", "dx", "tx"). Terms relating to a command structure may also sometimes use this formatting, for example gold, silver, and bronze levels of command in UK policing being referred to as Gx, Sx, and Bx. There is a question about how to pluralize acronyms. Often a writer will add an 's' following an apostrophe, as in "PC's". However, Kate L. Turabian's A Manual for Writers of Research Papers, Theses, and Dissertations, writing about style in academic writings,[53] allows for an apostrophe to form plural acronyms "only when an abbreviation contains internal periods or both capital and lowercase letters". Turabian would therefore prefer "DVDs" and "URLs" but "Ph.D.'s". The style guides of the Modern Language Association[54] and American Psychological Association[55][56] prohibit apostrophes from being used to pluralize acronyms regardless of periods (so "compact discs" would be "CDs" or "C.D.s"), whereas The New York Times Manual of Style and Usage requires an apostrophe when pluralizing all abbreviations regardless of periods (preferring "PC's, TV's and VCR's").[57] Possessive plurals that also include apostrophes for mere pluralization and periods appear especially complex: for example, "the C.D.'s' labels" (the labels of the compact discs). In some instances, however, an apostrophe may increase clarity: for example, if the final letter of an abbreviation is "S", as in "SOS's" (although abbreviations ending with S can also take "-es", e.g. "SOSes"), or when pluralizing an abbreviation that has periods.[58][59] A particularly rich source of options arises when the plural of an acronym would normally be indicated in a word other than the final word if spelled out in full. A classic example is "Member of Parliament", which in plural is "Members of Parliament". It is possible then to abbreviate this as "M's P", which was fairly common in mid-twentieth-century Australian news writing[60][61] (or similar),[62] and used by former Australian Prime Minister Ben Chifley.[63][64][65] This usage is less common than forms with "s" at the end, such as "MPs", and may appear dated or pedantic.
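The numeronym scheme described above (first letter, count of interior letters, last letter) is mechanical enough to express in a few lines of code. The following is a minimal illustrative sketch in Python; the function name numeronym is our own, not an established library API.

```python
def numeronym(word: str) -> str:
    """Abbreviate a word as first letter + interior letter count + last letter."""
    if len(word) < 4:
        return word  # too short to usefully abbreviate
    return f"{word[0]}{len(word) - 2}{word[-1]}"

# Examples from the text above:
print(numeronym("internationalization"))  # i18n
print(numeronym("localization"))          # l10n
print(numeronym("multilingualization"))   # m17n
print(numeronym("accessibility"))         # a11y
```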
In common usage, therefore, "weapons of mass destruction" becomes "WMDs", "prisoners of war" becomes "POWs", and "runs batted in" becomes "RBIs".[66] Abbreviations that come from single, rather than multiple, words—such as "TV" ("television")—are usually pluralized without apostrophes ("two TVs"); most writers feel that the apostrophe should be reserved for the possessive ("the TV's antenna").[citation needed] In some languages, the convention of doubling the letters in the acronym is used to indicate plural words: for example, the Spanish EE.UU., for Estados Unidos ('United States'). This old convention is still sometimes followed for a limited number of English abbreviations, such as SS. for Saints, pp. for the plural of 'pages', or mss. for manuscripts.[citation needed] The most common capitalization scheme seen with acronyms is all-uppercase (all caps). Small caps are sometimes used to make the run of capital letters seem less jarring to the reader. For example, the style of some American publications, including the Atlantic Monthly and USA Today, is to use small caps for acronyms longer than three letters;[citation needed] thus "U.S." and "FDR" in normal caps, but "nato" in small caps. The acronyms "AD" and "BC" are often smallcapped as well, as in: "From 4004 bc to ad 525". Where an acronym has linguistically taken on an identity as a regular word, the acronym may use normal case rules, e.g. it would appear generally in lower case, but with an initial capital when starting a sentence or when in a title. Once knowledge of the words underlying such an acronym has faded from common recall, the acronym may be termed an anacronym.[67] Examples of anacronyms are the words "scuba", "radar", and "laser". The word "anacronym" should not be confused with the word "anachronym", which is a type of misnomer. Words derived from an acronym by affixing are typically expressed in mixed case, so the root acronym is clear. For example, "pre-WWII politics", "post-NATO world", "DNase". In some cases a derived acronym may also be expressed in mixed case. For example, "messenger RNA" and "transfer RNA" become "mRNA" and "tRNA". Some publications choose to capitalize only the first letter of acronyms, reserving all-caps styling for initialisms, writing the pronounced acronyms "Nato" and "Aids" in mixed case, but the initialisms "USA" and "FBI" in all caps. For example, this is the style used in The Guardian,[68] and BBC News typically edits to this style (though its official style guide, dating from 2003, still recommends all-caps[69]). The logic of this style is that the pronunciation is reflected graphically by the capitalization scheme. However, it conflicts with conventional English usage of first-letter upper-casing as a marker of proper names in many cases; e.g. AIDS stands for acquired immuno-deficiency syndrome, which is not a proper name, while Aids is in the style of one. Some style manuals also base the letters' case on their number. The New York Times, for example, keeps "NATO" in all capitals (while several guides in the British press may render it "Nato"), but uses lower case in "Unicef" (from "United Nations International Children's Emergency Fund") because it is more than four letters, and to style it in caps might look ungainly (flirting with the appearance of "shouting capitals"). While abbreviations typically exclude the initials of short function words (such as "and", "or", "of", or "to"), this is not always the case. Sometimes function words are included to make a pronounceable acronym, such as CORE (Congress of Racial Equality).
Sometimes the letters representing these words are written in lower case, such as in the cases of "TfL" ("Transport for London") and LotR (The Lord of the Rings); this usually occurs when the acronym represents a multi-word proper noun. Numbers (both cardinal and ordinal) in names are often represented by digits rather than initial letters, as in "4GL" ("fourth generation language") or "G77" ("Group of 77"). Large numbers may use metric prefixes, as with "Y2K" for "Year 2000". Exceptions using initials for numbers include "TLA" ("three-letter acronym/abbreviation") and "GoF" ("Gang of Four"). Abbreviations using numbers for other purposes include repetitions, such as "A2DP" ("Advanced Audio Distribution Profile"), "W3C" ("World Wide Web Consortium"), and T3 (Trends, Tips & Tools for Everyday Living); pronunciation, such as "B2B" ("business to business"); and numeronyms, such as "i18n" ("internationalization"; "18" represents the 18 letters between the initial "i" and the final "n"). Authors of expository writing will sometimes capitalize or otherwise distinctively format the initials of the expansion for pedagogical emphasis (for example, writing: "the onset of Congestive Heart Failure (CHF)" or "the onset of congestive heart failure (CHF)"). Capitalization like this, however, conflicts with the convention of English orthography, which generally reserves capitals in the middle of sentences for proper nouns; when following the AMA Manual of Style, this would instead be rendered as "the onset of congestive heart failure (CHF)".[70] Some apparent acronyms or other abbreviations do not stand for anything and cannot be expanded to some meaning. Such pseudo-acronyms may be pronunciation-based, such as "BBQ" (bee-bee-cue), for "barbecue", and "K9" (kay-nine) for "canine". Pseudo-acronyms also frequently develop as "orphan initialisms": an existing acronym is redefined as a non-acronymous name, severing its link to its previous meaning.[71][72] For example, the letters of the "SAT", a US college entrance test originally dubbed "Scholastic Aptitude Test", no longer officially stand for anything.[73][74] The US-based abortion-rights organization "NARAL" is another example of this; in that case, the organization changed its name three times, with the long form of the name always corresponding to the letters "NARAL", before eventually opting to simply be known by the short form, without being connected to a long form. This is common with companies that want to retain brand recognition while moving away from an outdated image: American Telephone and Telegraph became AT&T,[71] and British Petroleum became BP.[72][75] Russia Today has rebranded itself as RT. American Movie Classics has simply rebranded itself as AMC. Genzyme Transgenics Corporation became GTC Biotherapeutics, Inc.; The Learning Channel became TLC; MTV dropped the name Music Television from its brand; and American District Telegraph became simply known as ADT. "Kentucky Fried Chicken" went partway, re-branding itself with its initialism "KFC" to de-emphasize the role of frying in the preparation of its signature dishes, though it has since returned to using both interchangeably.[76][a] The East Coast Hockey League became the ECHL when it expanded to include cities in the western United States prior to the 2003–2004 season.
Pseudo-acronyms may have advantages in international markets: for example, some national affiliates of International Business Machines are legally incorporated with "IBM" in their names (for example, IBM Canada) to avoid translating the full name into local languages.[citation needed] Likewise, UBS is the name of the merged Union Bank of Switzerland and Swiss Bank Corporation,[77] and HSBC has replaced the long name Hongkong and Shanghai Banking Corporation. Some companies whose name gives a clear indication of their place of origin will choose to use acronyms when expanding to foreign markets: for example, Toronto-Dominion Bank sometimes continues to operate under its full name in Canada, but its U.S. subsidiary is known only as TD Bank, just as Royal Bank of Canada sometimes still uses its full name in Canada (a constitutional monarchy) while its U.S. subsidiary is always only called RBC Bank. The India-based JSW Group of companies is another example of the original name (Jindal South West Group) being re-branded into a pseudo-acronym while expanding into other geographical areas in and outside of India. Rebranding can lead to redundant acronym syndrome, as when Trustee Savings Bank became TSB Bank, or when Railway Express Agency became REA Express. A few high-tech companies have taken the redundant acronym to the extreme: for example, ISM Information Systems Management Corp. and SHL Systemhouse Ltd. Examples in entertainment include the television shows CSI: Crime Scene Investigation and Navy: NCIS ("Navy" was dropped in the second season), where the redundancy was likely designed to educate new viewers as to what the initials stood for. The same reasoning was in evidence when the Royal Bank of Canada's Canadian operations rebranded to RBC Royal Bank, or when Bank of Montreal rebranded their retail banking subsidiary BMO Bank of Montreal. Another common example is "RAM memory", which is redundant because "RAM" ("random-access memory") includes the initial of the word "memory". "PIN" stands for "personal identification number", obviating the second word in "PIN number"; in this case its retention may be motivated to avoid ambiguity with the homophonous word "pin". Other examples include "ATM machine", "EAB bank", "HIV virus", Microsoft's NT Technology, and the formerly redundant "SAT test", now simply "SAT Reasoning Test". TNN (The Nashville/National Network) also renamed itself "The New TNN" for a brief interlude. In some cases, while the initials in an acronym may stay the same, what those letters stand for may change. Examples include the following: A backronym (or bacronym) is a phrase that is constructed "after the fact" from a previously existing word. For example, the novelist and critic Anthony Burgess once proposed that the word "book" ought to stand for "box of organized knowledge".[83] A classic real-world example of this is the name of the predecessor to the Apple Macintosh, the Apple Lisa, which was said to refer to "Local Integrated Software Architecture", but was actually named after Steve Jobs' daughter, born in 1978. Acronyms are sometimes contrived, that is, deliberately designed to be especially apt for the thing being named (by having a dual meaning or by borrowing the positive connotations of an existing word). Some examples of contrived acronyms are USA PATRIOT, CAN SPAM, CAPTCHA and ACT UP.[citation needed] The clothing company French Connection began referring to itself as fcuk, standing for "French Connection United Kingdom".
The company then created T-shirts and several advertising campaigns that exploit the acronym's similarity to the taboo word "fuck". Contrived acronyms find frequent use as names of fictional agencies, a famous example being the recurring James Bond antagonist organization SPECTRE (SPecial Executive for Counterintelligence, Terrorism, Revenge and Extortion). The U.S. Department of Defense's Defense Advanced Research Projects Agency (DARPA) is known for developing contrived acronyms to name projects, including RESURRECT, NIRVANA, and DUDE. In July 2010, Wired magazine reported that DARPA announced programs to "transform biology from a descriptive to a predictive field of science" named BATMAN and ROBIN, for "Biochronicity and Temporal Mechanisms Arising in Nature" and "Robustness of Biologically-Inspired Networks",[84] a reference to the comic-book superheroes Batman and Robin. The short-form names of clinical trials and other scientific studies constitute a large class of acronyms that includes many contrived examples, as well as many with a partial rather than complete correspondence of letters to expansion components. These trials tend to have full names that are accurately descriptive of what the trial is about but are thus also too long to serve practically as names within the syntax of a sentence, so a short name is also developed, which can serve as a syntactically useful handle and also provide at least a degree of mnemonic reminder as to the full name. Examples widely known in medicine include the ALLHAT trial (Antihypertensive and Lipid-Lowering Treatment to Prevent Heart Attack Trial) and the CHARM trial (Candesartan in Heart Failure: Assessment of Reduction in Mortality and Morbidity). The fact that RAS syndrome is often involved, as well as that the letters often do not entirely match, has sometimes been pointed out by annoyed researchers preoccupied by the idea that because the archetypal form of acronyms originated with one-to-one letter matching, there must be some impropriety in their ever deviating from that form. However, the raison d'être of clinical trial acronyms, as with gene and protein symbols, is simply to have a syntactically usable and easily recalled short name to complement the long name that is often syntactically unusable and not memorized. It is useful for the short name to give a reminder of the long name, which supports the reasonable censure of "cutesy" examples that provide little to no hint of it. But beyond that reasonably close correspondence, the short name's chief utility is in functioning cognitively as a name, rather than being a cryptic and forgettable string, albeit faithful to the matching of letters.
However, other reasonable critiques have been (1) that it is irresponsible to mention trial acronyms without explaining them at least once by providing the long names somewhere in the document,[85] and (2) that the proliferation of trial acronyms has resulted in ambiguity, such as three different trials all called ASPECT, which is another reason why failing to explain them somewhere in the document is irresponsible in scientific communication.[85] At least one study has evaluated the citation impact and other traits of acronym-named trials compared with others,[86] finding both good aspects (mnemonic help, name recall) and potential flaws (connotatively driven bias).[86] Some acronyms are chosen deliberately to avoid a name considered undesirable: for example, Verliebt in Berlin (ViB), a German telenovela, was first intended to be Alles nur aus Liebe ('All for Love'), but was changed to avoid the resultant acronym ANAL. Likewise, the Computer Literacy and Internet Technology qualification is known as CLaIT,[87] rather than CLIT. In Canada, the Canadian Conservative Reform Alliance (Party) was quickly renamed to the "Canadian Reform Conservative Alliance" when its opponents pointed out that its initials spelled CCRAP (pronounced "seecrap"). Two Irish institutes of technology (Galway and Tralee) chose different acronyms from other institutes when they were upgraded from regional technical colleges: Tralee RTC became the Institute of Technology Tralee (ITT), as opposed to Tralee Institute of Technology (TIT), and Galway RTC became Galway-Mayo Institute of Technology (GMIT), as opposed to Galway Institute of Technology (GIT). The charity sports organization Team in Training is known as "TNT" and not "TIT". Technological Institute of Textile & Sciences, however, is still known as "TITS". George Mason University was planning to name its law school the "Antonin Scalia School of Law" (ASSOL) in honor of the late Antonin Scalia, only to change it to the "Antonin Scalia Law School" later.[88] A macronym, or nested acronym, is an acronym in which one or more letters stand for acronyms (or abbreviations) themselves. The word "macronym" is a portmanteau of "macro-" and "acronym". Some examples of macronyms are: Some macronyms can be multiply nested: the second-order acronym points to another one further down a hierarchy. VITAL, for example, which expands to "VHDL Initiative Towards ASIC Libraries", is a total of 15 words when fully expanded. In an informal competition run by the magazine New Scientist, a fully documented specimen was discovered that may be the most deeply nested of all: RARS is the "Regional ATOVS Retransmission Service"; ATOVS is "Advanced TOVS"; TOVS is "TIROS operational vertical sounder"; and TIROS is "Television infrared observational satellite".[89] Fully expanded, "RARS" might thus become "Regional Advanced Television Infrared Observational Satellite Operational Vertical Sounder Retransmission Service", which would produce the much more unwieldy acronym "RATIOSOVSRS". However, to say that "RARS" stands directly for that string of words, or can be interchanged with it in syntax (in the same way that "CHF" can be usefully interchanged with "congestive heart failure"), is a prescriptive misapprehension rather than a linguistically accurate description; the true nature of such a term is closer to anacronymic than to being interchangeable like simpler acronyms are.
The latter are fully reducible in an attempt to "spell everything out and avoid all abbreviations", but the former are irreducible in that respect; they can be annotated with parenthetical explanations, but they cannot be eliminated from speech or writing in any useful or practical way. Just as the words laser and radar function as words in syntax and cognition without a need to focus on their acronymic origins, terms such as "RARS" and "CHA2DS2–VASc score" are irreducible in natural language; if they are purged, the form of language that is left may conform to some imposed rule, but it cannot be described as remaining natural. Similarly, protein and gene nomenclature, which uses symbols extensively, includes such terms as the name of the NACHT protein domain, which reflects the symbols of some proteins that contain the domain – NAIP (NLR family apoptosis inhibitor protein), C2TA (major histocompatibility complex class II transcription activator), HET-E (incompatibility locus protein from Podospora anserina), and TP1 (telomerase-associated protein) – but is not syntactically reducible to them. The name is thus itself more symbol than acronym, and its expansion cannot replace it while preserving its function in natural syntax as a name within a clause clearly parsable by human readers or listeners. A special type of macronym, the recursive acronym, has letters whose expansion refers back to the macronym itself. One of the earliest examples appears in The Hacker's Dictionary as MUNG, which stands for "MUNG Until No Good". Some examples of recursive acronyms are: In English-language discussions of languages with syllabic or logographic writing systems (such as Chinese, Japanese, and Korean), "acronyms" describe the short forms that take selected characters from a multi-character word. For example, in Chinese, 'university' (大學/大学, lit. 'great learning') is usually abbreviated simply as 大 ('great') when used with the name of the institute. So 'Peking University' (北京大学) is commonly shortened to 北大 (lit. 'north-great') by also only taking the first character of Peking, the "northern capital" (北京; Beijing). In some cases, however, characters other than the first can be selected. For example, the local short form of 'Hong Kong University' (香港大學) uses Kong (港大) rather than Hong. There are also cases where some longer phrases are abbreviated drastically, especially in Chinese politics, where proper nouns were initially translated from Soviet Leninist terms. For instance, the full name of China's highest ruling council, the Politburo Standing Committee (PSC), is 'Standing Committee of the Central Political Bureau of the Communist Party of China' (中国共产党中央政治局常务委员会). The term then reduced the 'Communist Party of China' part of its name through acronyms, then the 'Standing Committee' part, again through acronyms, to create 中共中央政治局常委. Alternatively, it omitted the 'Communist Party' part altogether, creating 'Politburo Standing Committee' (政治局常委会), and eventually just 'Standing Committee' (常委会). The full designation of a PSC member is 'Member of the Standing Committee of the Central Political Bureau of the Communist Party of China' (中国共产党中央政治局常务委员会委员); this was eventually drastically reduced to simply Changwei (常委), with the term Ruchang (入常) used increasingly for officials destined for a future seat on the PSC. In another example, the word 全国人民代表大会 ('National People's Congress') can be broken into four parts: 全国 = 'the whole nation', 人民 = 'people', 代表 = 'representatives', 大会 = 'conference'.
Yet, in its short form 人大 (literally 'man/people big'), only the first characters from the second and the fourth parts are selected; the first part (全国) and the third part (代表) are completely dropped. Many proper nouns become shorter and shorter over time. For example, the CCTV New Year's Gala, whose full name is literally read as 'China Central Television Spring Festival Joint Celebration Evening Gala' (中国中央电视台春节联欢晚会), was first shortened to 'Spring Festival Joint Celebration Evening Gala' (春节联欢晚会), but is eventually referred to as simply Chunwan (春晚). In the same vein, CCTV or Zhongguo Zhongyang Dianshi Tai (中国中央电视台) was reduced to Yangshi (央视) in the mid-2000s. Many aspects of academics in Korea follow similar acronym patterns as Chinese, owing to the two languages' commonalities, like using the word for 'big' or 'great', i.e. dae (대), to refer to universities (대학; daehak, literally 'great learning', although 'big school' is an acceptable alternate). They can be interpreted similarly to American university appellations, such as "UPenn" or "Texas Tech". Some acronyms are shortened forms of the school's name, like how Hongik University (홍익대학교, Hongik Daehakgyo) is shortened to Hongdae (홍대, 'Hong, the big [school]' or 'Hong-U'). Other acronyms can refer to the university's main subject, e.g. Korea National University of Education (한국교원대학교, Hanguk Gyowon Daehakgyo) is shortened to Gyowondae (교원대, 'Big Ed.' or 'Ed.-U'). Other schools use a Koreanized version of their English acronym. The Korea Advanced Institute of Science and Technology (한국과학기술원, Hanguk Gwahak Gisulwon) is referred to as KAIST (카이스트, Kaiseuteu) in both English and Korean. The three most prestigious schools in Korea are known as SKY (스카이, seukai), combining the first letter of their English names (Seoul National, Korea, and Yonsei Universities). In addition, the College Scholastic Ability Test (대학수학능력시험, Daehak Suhang Neungryeok Siheom) is shortened to Suneung (수능, 'S.A.'). The Japanese language makes extensive use of abbreviations, but only some of these are acronyms. Chinese-based words (Sino-Japanese vocabulary) use similar acronym formation to Chinese, like Tōdai (東大) for Tōkyō Daigaku (東京大学, Tokyo University). In some cases alternative pronunciations are used, as in Saikyō for 埼京, from Saitama + Tōkyō (埼玉+東京), rather than Saitō. Non-Chinese foreign borrowings (gairaigo) are instead frequently abbreviated as clipped compounds, rather than acronyms, using several initial sounds. This is visible in katakana transcriptions of foreign words, but is also found with native words (written in hiragana). For example, the Pokémon media franchise's name originally stood for "pocket monsters" (ポケット·モンスター [po-ke-tto-mon-su-tā] → ポケモン), which is still the long form of the name in Japanese, and "wāpuro" stands for "word processor" (ワード·プロセッサー [wā-do-pu-ro-se-ssā] → ワープロ). To a greater degree than English does, German tends toward acronyms that use initial syllables rather than initial single letters, although it uses many of the latter type as well. Some examples of the syllabic type are Gestapo rather than GSP (for Geheime Staatspolizei, 'Secret State Police'); Flak rather than FAK (for Fliegerabwehrkanone, 'anti-aircraft gun'); and Kripo rather than KP (for Kriminalpolizei, 'detective division police'). The extension of such contraction to a pervasive or whimsical degree has been mockingly labeled Aküfi (for Abkürzungsfimmel, 'strange habit of abbreviating').
Examples of Aküfi include Vokuhila (for vorne kurz, hinten lang, 'short in the front, long in the back', i.e., a mullet haircut) and the mocking of Adolf Hitler's title as Gröfaz (Größter Feldherr aller Zeiten, 'Greatest General of all Time'). In Hebrew, it is common to take more than just one initial letter from each of the words composing the acronym; regardless of this, the abbreviation sign gershayim ⟨״⟩ is always written between the second-last and last letters of the non-inflected form of the acronym, even if by this it separates letters of the same original word. Examples (keeping in mind that Hebrew reads right-to-left): ארה״ב (for ארצות הברית, the United States); ברה״מ (for ברית המועצות, the Soviet Union); ראשל״צ (for ראשון לציון, Rishon LeZion); ביה״ס (for בית הספר, the school). An example that takes only the initial letters from its component words is צה״ל (Tzahal, for צבא הגנה לישראל, Israel Defense Forces). In inflected forms, the abbreviation sign gershayim remains between the second-last and last letters of the non-inflected form of the acronym (e.g. 'report', singular: דו״ח, plural: דו״חות; 'squad commander', masculine: מ״כ, feminine: מ״כית). There is also widespread use of acronyms in Indonesia in every aspect of social life. For example, the Golkar political party stands for Partai Golongan Karya, Monas stands for Monumen Nasional ('National Monument'), the Angkot public transport stands for Angkutan Kota ('city public transportation'), warnet stands for warung internet ('internet cafe'), and many others. Some acronyms are considered formal (or officially adopted), while many more are considered informal, slang, or colloquial. The capital's metropolitan area (Jakarta and its surrounding satellite regions), Jabodetabek, is another acronym. This stands for Jakarta-Bogor-Depok-Tangerang-Bekasi. Many highways are also named by the acronym method; e.g. Jalan Tol ('Toll Road') Jagorawi (Jakarta-Bogor-Ciawi), Purbaleunyi (Purwakarta-Bandung-Cileunyi), and Joglo Semar (Jogja-Solo-Semarang). In some languages, especially those that use certain alphabets, many acronyms come from governmental use, particularly in the military and law enforcement services. The Indonesian military (TNI – Tentara Nasional Indonesia) and Indonesian police (POLRI – Kepolisian Republik Indonesia) are known for heavy acronym use. Examples include the Kopassus (Komando Pasukan Khusus; 'Special Forces Command'), Kopaska (Komando Pasukan Katak; 'Frogmen Command'), Kodim (Komando Distrik Militer; 'Military District Command' – one of the Indonesian army's administrative divisions), Serka (Sersan Kepala; 'Head Sergeant'), Akmil (Akademi Militer; 'Military Academy' – in Magelang), and many other terms regarding ranks, units, divisions, procedures, etc. Although not as common as in Indonesian, a number of Malay words are formed by merging two words, such as tadika from taman didikan kanak-kanak ('kindergarten') and pawagam from panggung wayang gambar. This, however, has been less prevalent in the modern era, in contrast to Indonesian. It is still often used for names such as organisation names, among the most famous being MARA from Majlis Amanah Rakyat ('People's Trust Council'), a government agency in Malaysia.
Some acronyms are developed from the Jawi (Malay in Arabic script) spelling of the name and may not reflect their Latin counterpart, such as PAS from Parti Islam Se-Malaysia ('Malaysian Islamic Party'), which originated from the Jawi acronym ڤاس from ڤرتي إسلام سمليسيا, with the same pronunciation, since the first letter of the word 'Islam' in Jawi uses the letter Aleph, which is pronounced like the letter A when in such a position as in the acronym. Rules for writing initialisms in Malay differ based on the script. In its Latin form, the initialism would be spelt much as in English, using capitals written without any spacing, such as TNB for Tenaga Nasional Berhad. In Jawi, however, initialisms differ depending on the source language. For Malay initialisms, the initial Jawi letters would be written separated by a period, such as د.ب.ڤ for ديوان بهاس دان ڤوستاک.[90] If the initialism is from a different language, however, it would be written by transliterating each letter from the original language, such as عيم.سي.عيم.سي. for MCMC, or الفا.ڤي.ثيتا for Α.Π.Θ.[91] Acronyms that use parts of words (not necessarily syllables) are commonplace in Russian as well, e.g. Газпром (Gazprom), for Газовая промышленность (Gazovaya promyshlennost, 'gas industry'). There are also initialisms, such as СМИ ('SMI', for средства массовой информации sredstva massovoy informatsii, 'means of mass informing'); ГУЛаг (GULag) combines two initials and three letters of the final word: it stands for Главное управление лагерей (Glavnoe upravlenie lagerey, 'Chief Administration of Camps'). Historically, OTMA was an acronym sometimes used by the daughters of Emperor Nicholas II of Russia and his consort, Alexandra Feodorovna, as a group nickname for themselves, built from the first letter of each girl's name in the order of their births: Olga, Tatiana, Maria, and Anastasia. In Swahili, acronyms are common for naming organizations such as TUKI, which stands for Taasisi ya Uchunguzi wa Kiswahili ('Institute for Swahili Research'). Multiple initial letters (often the initial syllable of words) are often drawn together, as seen more in some languages than others. In Vietnamese, which has an abundance of compound words, initialisms are very commonly used for both proper and common nouns. Examples include TP.HCM (Thành phố Hồ Chí Minh, 'Ho Chi Minh City'), THPT (trung học phổ thông, 'high school'), CLB (câu lạc bộ, 'club'), CSDL (cơ sở dữ liệu, 'database'), NXB (nhà xuất bản, 'publisher'), ÔBACE (ông bà anh chị em, a general form of address), and CTTĐVN (các Thánh tử đạo Việt Nam, 'Vietnamese Martyrs'). Longer examples include CHXHCNVN (Cộng hòa Xã hội chủ nghĩa Việt Nam, 'Socialist Republic of Vietnam') and MTDTGPMNVN (Mặt trận Dân tộc Giải phóng miền Nam Việt Nam, 'Liberation Army of South Vietnam or the National Liberation Front of South Vietnam'). Long initialisms have become widespread in legal contexts in Vietnam, for example TTLT-VKSNDTC-TANDTC.[92] It is also common for a writer to coin an ad hoc initialism for repeated use in an article. Each letter in an initialism corresponds to one morpheme, that is, one syllable. When the first letter of a syllable has a tone mark or other diacritic, the diacritic may be omitted from the initialism, for example ĐNA or ĐNÁ for Đông Nam Á ('Southeast Asia') and LMCA or LMCÂ for Liên minh châu Âu ('European Union'). The letter Ư is often replaced by W in initialisms to avoid confusion with U, for example UBTWMTTQVN or UBTƯMTTQVN for Ủy ban Trung ương Mặt trận Tổ quốc Việt Nam ('Central Committee of the Vietnamese Fatherland Front').
Initialisms are purely a written convenience, being pronounced the same way as their expansions. As the names of many Vietnamese letters are disyllabic, it would be less convenient to pronounce an initialism by its individual letters. Acronyms pronounced as words are rare in Vietnamese, occurring when an acronym itself is borrowed from another language. Examples include SIĐA (pronounced [s̪i˧ˀɗaː˧]), a respelling of the French acronym SIDA ('AIDS'); VOA (pronounced [vwaː˧]), a literal reading of the English initialism for 'Voice of America'; and NASA (pronounced [naː˧zaː˧]), borrowed directly from the English acronym. As in Chinese, many compound words can be shortened to the first syllable when forming a longer word. For example, the term Việt Cộng is derived from the first syllables of Việt Nam ('Vietnam') and Cộng sản ('communist'). This mechanism is limited to Sino-Vietnamese vocabulary. Unlike with Chinese, such clipped compounds are considered to be portmanteau words or blend words rather than acronyms or initialisms, because the Vietnamese alphabet still requires each component word to be written as more than one character. In languages where nouns are declined, various methods are used. An example is Finnish, where a colon is used to separate inflection from the letters; for example, EU:n for the genitive of 'EU'. The process above is similar to the way that hyphens are used for clarity in English when prefixes are added to acronyms: thus pre-NATO policy (rather than preNATO). In languages such as Scottish Gaelic and Irish, where lenition (initial consonant mutation) is commonplace, acronyms must also be modified in situations where case and context dictate it. In the case of Scottish Gaelic, a lower-case h is often added after the initial consonant; for example, 'BBC Scotland' in the genitive case would be written as BhBC Alba, with the acronym pronounced VBC. Likewise, the Gaelic acronym for telebhisean 'television' is TBh, pronounced TV, as in English. acronym, n. Pronunciation: Brit. /ˈakrənɪm/, U.S. /ˈækrəˌnɪm/. Origin: Formed within English, by compounding; modelled on a German lexical item. Etymons: acro- comb. form, -onym comb. form. Etymology: < acro- comb. form + -onym comb. form, after German Akronym (1921 or earlier). Originally U.S. 1. A group of initial letters used as an abbreviation for a name or expression, each letter or part being pronounced separately; an initialism (such as ATM, TLS). In the O.E.D. the term initialism is used for this phenomenon. (See sense 2 for O.E.D. use of the word.) 2. A word formed from the initial letters of other words or (occasionally) from the initial parts of syllables taken from other words, the whole being pronounced as a single word (such as NATO, RADA). acronym noun. ac·ro·nym | \ ˈa-krə-ˌnim \ Definition of acronym: a word (such as NATO, radar, or laser) formed from the initial letter or letters of each of the successive parts or major parts of a compound term; also: an abbreviation (such as FBI) formed from initial letters: initialism. ac·ro·nym (ăk′rə-nĭm′) n. 1. A word formed by combining the initial letters of a multipart name, such as NATO from North Atlantic Treaty Organization, or by combining the initial letters or parts of a series of words, such as radar from radio detecting and ranging. 2. Usage Problem. An initialism. [acr(o)- + -onym.] ac′ro·nym′ic, a·cron′y·mous (ə-krŏn′ə-məs) adj. Usage Note: In strict usage, the term acronym refers to a word made from the initial letters or parts of other words, such as sonar from so(und) na(vigation and) r(anging). The distinguishing feature of an acronym is that it is pronounced as if it were a single word, in the manner of NATO and NASA.
Acronyms are often distinguished from initialisms like FBI and NIH, whose individual letters are pronounced as separate syllables. While observing this distinction has some virtue in precision, it may be lost on many people, for whom the term acronym refers to both kinds of abbreviations. acronym /ˈækrənɪm/ (say 'akruhnim) noun 1. a word formed from the initial letters of a sequence of words, as radar (from radio detection and ranging) or ANZAC (from Australian and New Zealand Army Corps). Compare initialism. 2. an initialism. [acro- + -(o)nym; modelled on synonym] ac·ro·nym /ˈakrəˌnim/ ▸ n. an abbreviation formed from the initial letters of other words and pronounced as a word (e.g. ASCII, NASA). —origin 1940s: from Greek akron 'end, tip' + onoma 'name', on the pattern of homonym. acronyms: A number of commentators (as Copperud 1970, Janis 1984, Howard 1984) believe that acronyms can be differentiated from other abbreviations in being pronounceable as words. Dictionaries, however, do not make this distinction because writers in general do not: "The powder metallurgy industry has officially adopted the acronym 'P/M Parts'" —Precision Metal Molding, January 1966. "Users of the term acronym make no distinction between those pronounced as words ... and those pronounced as a series of characters" —Jean Praninskas, Trade Name Creation, 1968. "It is not J.C.B.'s fault that its name, let alone its acronym, is not a household word among European scholars" —Times Literary Supp., 5 February 1970. "... the confusion in the Pentagon about abbreviations and acronyms—words formed from the first letters of other words" —Bernard Weinraub, N.Y. Times, 11 December 1978. Pyles & Algeo 1970 divide acronyms into "initialisms", which consist of initial letters pronounced with the letter names, and "word acronyms", which are pronounced as words. Initialism, an older word than acronym, seems to be too little known to the general public to serve as the customary term standing in contrast with acronym in a narrow sense.
https://en.wikipedia.org/wiki/Acronym
In signal processing, independent component analysis (ICA) is a computational method for separating a multivariate signal into additive subcomponents. This is done by assuming that at most one subcomponent is Gaussian and that the subcomponents are statistically independent from each other.[1] ICA was invented by Jeanny Hérault and Christian Jutten in 1985.[2] ICA is a special case of blind source separation. A common example application of ICA is the "cocktail party problem" of listening in on one person's speech in a noisy room.[3]

Independent component analysis attempts to decompose a multivariate signal into independent non-Gaussian signals. As an example, sound is usually a signal that is composed of the numerical addition, at each time t, of signals from several sources. The question then is whether it is possible to separate these contributing sources from the observed total signal. When the statistical independence assumption is correct, blind ICA separation of a mixed signal gives very good results.[5] It is also used for signals that are not supposed to be generated by mixing for analysis purposes. A simple application of ICA is the "cocktail party problem", where the underlying speech signals are separated from a sample of data consisting of people talking simultaneously in a room. Usually the problem is simplified by assuming no time delays or echoes. Note that a filtered and delayed signal is a copy of a dependent component, and thus the statistical independence assumption is not violated. Mixing weights for constructing the $M$ observed signals from the $N$ components can be placed in an $M \times N$ matrix. An important thing to consider is that if $N$ sources are present, at least $N$ observations (e.g. microphones, if the observed signal is audio) are needed to recover the original signals. When there are an equal number of observations and source signals, the mixing matrix is square ($M = N$). Other cases of underdetermined ($M < N$) and overdetermined ($M > N$) mixing have been investigated.

The success of ICA separation of mixed signals relies on two assumptions and three effects of mixing source signals. Two assumptions: the source signals are independent of each other, and the values in each source signal have non-Gaussian distributions. Three effects of mixing source signals: independence (the signal mixtures, unlike the sources, are not independent, because they share the same source signals); normality (by the central limit theorem, a sum of independent random variables with finite variance tends toward a Gaussian distribution, so mixtures are closer to Gaussian than the sources); and complexity (the temporal complexity of any signal mixture is greater than that of its simplest constituent source signal). Those principles contribute to the basic establishment of ICA. If the signals extracted from a set of mixtures are independent and have non-Gaussian distributions or have low complexity, then they must be source signals.[6][7]

Another common example is image steganography, where ICA is used to embed one image within another. For instance, two grayscale images can be linearly combined to create mixed images in which the hidden content is visually imperceptible. ICA can then be used to recover the original source images from the mixtures. This technique underlies digital watermarking, which allows the embedding of ownership information into images, as well as more covert applications such as undetected information transmission. In such applications, ICA serves to unmix the data based on statistical independence, making it possible to extract hidden components that are not apparent in the observed data. Steganographic techniques, including those potentially involving ICA-based analysis, have even been linked to real-world cyberespionage cases.
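As a concrete illustration of the cocktail-party setup just described, the following is a minimal sketch in Python using scikit-learn's FastICA; the source waveforms, mixing matrix, and parameter values are arbitrary choices for the demo, not part of the original text.

```python
import numpy as np
from sklearn.decomposition import FastICA

t = np.linspace(0, 8, 2000)

# Two non-Gaussian source signals (a sine and a square wave).
s1 = np.sin(2 * t)
s2 = np.sign(np.sin(3 * t))
S = np.c_[s1, s2]

# Mix them with an arbitrary 2x2 mixing matrix A: each "microphone"
# hears a different weighted sum of the sources.
A = np.array([[1.0, 0.5],
              [0.6, 1.0]])
X = S @ A.T  # observed mixtures, shape (n_samples, 2)

# Recover the sources blindly, from the mixtures alone.
ica = FastICA(n_components=2, random_state=0)
S_est = ica.fit_transform(X)  # estimated sources

# The recovered components match the originals up to order, sign, and scale.
for est in S_est.T:
    corrs = [abs(np.corrcoef(est, s)[0, 1]) for s in S.T]
    print("best match correlation: %.3f" % max(corrs))
```

Note that the order/sign/scale ambiguity seen here is exactly the identifiability limitation discussed later in the text.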
In 2010, the FBI uncovered a Russian spy network known as the "Illegals Program" (Operation Ghost Stories), whose agents used custom-built steganography tools to conceal encrypted text messages within image files shared online.[8] In another case, a former General Electric engineer, Xiaoqing Zheng, was convicted in 2022 of economic espionage. Zheng used steganography to exfiltrate sensitive turbine technology by embedding proprietary data within image files for transfer to entities in China.[9]

ICA finds the independent components (also called factors, latent variables, or sources) by maximizing the statistical independence of the estimated components. We may choose one of many ways to define a proxy for independence, and this choice governs the form of the ICA algorithm. The two broadest definitions of independence for ICA are minimization of mutual information and maximization of non-Gaussianity. The minimization-of-mutual-information (MMI) family of ICA algorithms uses measures like Kullback-Leibler divergence and maximum entropy. The non-Gaussianity family of ICA algorithms, motivated by the central limit theorem, uses kurtosis and negentropy.[10] Typical algorithms for ICA use centering (subtracting the mean to create a zero-mean signal), whitening (usually with the eigenvalue decomposition),[11] and dimensionality reduction as preprocessing steps in order to simplify and reduce the complexity of the problem for the actual iterative algorithm. Linear independent component analysis can be divided into noiseless and noisy cases, where noiseless ICA is a special case of noisy ICA. Nonlinear ICA should be considered as a separate case.

In the classical ICA model, it is assumed that the observed data $\mathbf{x}_i \in \mathbb{R}^m$ at time $t_i$ is generated from source signals $\mathbf{s}_i \in \mathbb{R}^m$ via a linear transformation $\mathbf{x}_i = A\mathbf{s}_i$, where $A$ is an unknown, invertible mixing matrix. To recover the source signals, the data is first centered (zero mean), and then whitened so that the transformed data has unit covariance. This whitening reduces the problem from estimating a general matrix $A$ to estimating an orthogonal matrix $V$, significantly simplifying the search for independent components. If the covariance matrix of the centered data is $\Sigma_x = AA^\top$, then using the eigendecomposition $\Sigma_x = QDQ^\top$, the whitening transformation can be taken as $D^{-1/2}Q^\top$. This step ensures that the recovered sources are uncorrelated and of unit variance, leaving only the task of rotating the whitened data to maximize statistical independence. This general derivation underlies many ICA algorithms and is foundational in understanding the ICA model.[12]

Independent component analysis (ICA) addresses the problem of recovering a set of unobserved source signals $s_i = (s_{i1}, s_{i2}, \dots, s_{im})^T$ from observed mixed signals $x_i = (x_{i1}, x_{i2}, \dots, x_{im})^T$, based on the linear mixing model

$$x_i = A\,s_i,$$

where $A$ is an $m \times m$ invertible matrix called the mixing matrix, $s_i$ represents the m-dimensional vector containing the values of the sources at time $t_i$, and $x_i$ is the corresponding vector of observed values at time $t_i$.
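The centering-and-whitening preprocessing just described is easy to verify numerically. Below is a minimal sketch (synthetic data and variable names are our own) that whitens data with the eigendecomposition of the covariance and checks that the whitened covariance is the identity:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic observations: 2 mixtures of 2 non-Gaussian (Laplace) sources.
S = rng.laplace(size=(5000, 2))
A = np.array([[1.0, 0.4],
              [0.3, 1.0]])
X = S @ A.T

# Centering: subtract the mean of each observed signal.
Xc = X - X.mean(axis=0)

# Whitening via the eigendecomposition Sigma_x = Q D Q^T.
Sigma = np.cov(Xc, rowvar=False)
eigvals, Q = np.linalg.eigh(Sigma)
W_white = np.diag(eigvals ** -0.5) @ Q.T   # the transform D^{-1/2} Q^T
Z = Xc @ W_white.T

# The whitened data has (approximately) identity covariance, so only an
# orthogonal rotation remains to be estimated.
print(np.round(np.cov(Z, rowvar=False), 2))
```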
The goal is to estimate both $A$ and the source signals $\{s_i\}$ solely from the observed data $\{x_i\}$. After centering, the Gram matrix is computed as $(X^*)^T X^* = Q\,D\,Q^T$, where $D$ is a diagonal matrix with positive entries (assuming $X^*$ has maximum rank) and $Q$ is an orthogonal matrix.[13] Writing the SVD of the mixing matrix as $A = U\Sigma V^T$ and comparing with $AA^T = U\Sigma^2 U^T$, the mixing matrix has the form $A = Q\,D^{1/2}\,V^T$. So the normalized source values satisfy $s_i^* = V\,y_i^*$, where $y_i^* = D^{-1/2} Q^T x_i^*$. Thus, ICA reduces to finding the orthogonal matrix $V$. This matrix can be computed using optimization techniques via projection pursuit methods (see Projection Pursuit).[14]

Well-known algorithms for ICA include infomax, FastICA, JADE, and kernel-independent component analysis, among others. In general, ICA cannot identify the actual number of source signals, a uniquely correct ordering of the source signals, nor the proper scaling (including sign) of the source signals. ICA is important to blind signal separation and has many practical applications. It is closely related to (or even a special case of) the search for a factorial code of the data, i.e., a new vector-valued representation of each data vector such that it gets uniquely encoded by the resulting code vector (loss-free coding), but the code components are statistically independent.

The components $x_i$ of the observed random vector $\boldsymbol{x} = (x_1, \ldots, x_m)^T$ are generated as a sum of the independent components $s_k$, $k = 1, \ldots, n$:

$$x_i = a_{i,1}s_1 + \cdots + a_{i,k}s_k + \cdots + a_{i,n}s_n,$$

weighted by the mixing weights $a_{i,k}$. The same generative model can be written in vector form as $\boldsymbol{x} = \sum_{k=1}^{n} s_k \boldsymbol{a}_k$, where the observed random vector $\boldsymbol{x}$ is represented by the basis vectors $\boldsymbol{a}_k = (a_{1,k}, \ldots, a_{m,k})^T$. The basis vectors $\boldsymbol{a}_k$ form the columns of the mixing matrix $\boldsymbol{A} = (\boldsymbol{a}_1, \ldots, \boldsymbol{a}_n)$, and the generative formula can be written as $\boldsymbol{x} = \boldsymbol{A}\boldsymbol{s}$, where $\boldsymbol{s} = (s_1, \ldots, s_n)^T$. Given the model and realizations (samples) $\boldsymbol{x}_1, \ldots, \boldsymbol{x}_N$ of the random vector $\boldsymbol{x}$, the task is to estimate both the mixing matrix $\boldsymbol{A}$ and the sources $\boldsymbol{s}$. This is done by adaptively calculating the $\boldsymbol{w}$ vectors and setting up a cost function which either maximizes the non-Gaussianity of the calculated $s_k = \boldsymbol{w}^T \boldsymbol{x}$ or minimizes the mutual information. In some cases, a priori knowledge of the probability distributions of the sources can be used in the cost function.
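The claim that whitening leaves only an orthogonal factor to estimate can be checked directly: the effective mixing matrix of the whitened data is (approximately) orthogonal. A minimal self-contained sketch, assuming unit-variance independent sources and an invented mixing matrix:

```python
import numpy as np

rng = np.random.default_rng(2)

# Unit-variance, independent, non-Gaussian sources and a fixed mixing A.
S = rng.laplace(size=(20000, 2))
S = (S - S.mean(axis=0)) / S.std(axis=0)
A = np.array([[1.0, 0.4],
              [0.3, 1.0]])
X = S @ A.T

# Whitening transform D^{-1/2} Q^T from the covariance eigendecomposition.
Xc = X - X.mean(axis=0)
eigvals, Q = np.linalg.eigh(np.cov(Xc, rowvar=False))
W_white = np.diag(eigvals ** -0.5) @ Q.T

# Effective mixing of the whitened data: M = W_white @ A.
# Since Cov(whitened) = M M^T = I, M must be (approximately) orthogonal,
# so only the rotation V remains to be found.
M = W_white @ A
print(np.round(M @ M.T, 2))
```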
The original sources $\boldsymbol{s}$ can be recovered by multiplying the observed signals $\boldsymbol{x}$ with the inverse of the mixing matrix $\boldsymbol{W} = \boldsymbol{A}^{-1}$, also known as the unmixing matrix. Here it is assumed that the mixing matrix is square ($n = m$). If the number of basis vectors is greater than the dimensionality of the observed vectors, $n > m$, the task is overcomplete but is still solvable with the pseudoinverse.

With the added assumption of zero-mean and uncorrelated Gaussian noise $n \sim N(0, \operatorname{diag}(\Sigma))$, the ICA model takes the form $\boldsymbol{x} = \boldsymbol{A}\boldsymbol{s} + n$.

The mixing of the sources does not need to be linear. Using a nonlinear mixing function $f(\cdot\,|\,\theta)$ with parameters $\theta$, the nonlinear ICA model is $x = f(s|\theta) + n$.

The independent components are identifiable up to a permutation and scaling of the sources.[15] This identifiability requires that at most one of the sources is Gaussian and that the number of observed mixtures $m$ is at least as large as the number of estimated components $n$ (equivalently, that the mixing matrix $\boldsymbol{A}$ is of full rank, so that its inverse exists).

A special variant of ICA is binary ICA, in which both signal sources and monitors are in binary form and observations from monitors are disjunctive mixtures of binary independent sources. The problem was shown to have applications in many domains including medical diagnosis, multi-cluster assignment, network tomography, and internet resource management. Let $x_1, x_2, \ldots, x_m$ be the set of binary variables from $m$ monitors and $y_1, y_2, \ldots, y_n$ be the set of binary variables from $n$ sources. Source-monitor connections are represented by the (unknown) mixing matrix $\boldsymbol{G}$, where $g_{ij} = 1$ indicates that the signal from the $i$-th source can be observed by the $j$-th monitor. The system works as follows: at any time, if a source $i$ is active ($y_i = 1$) and it is connected to the monitor $j$ ($g_{ij} = 1$), then the monitor $j$ will observe some activity ($x_j = 1$). Formally we have

$$x_j = \bigvee_{i=1}^{n} (g_{ij} \wedge y_i), \qquad j = 1, 2, \ldots, m,$$

where $\wedge$ is Boolean AND and $\vee$ is Boolean OR. Noise is not explicitly modelled; rather, it can be treated as independent sources.

The above problem can be heuristically solved[16] by assuming the variables are continuous and running FastICA on the binary observation data to get the mixing matrix $\boldsymbol{G}$ (real values), then applying rounding techniques on $\boldsymbol{G}$ to obtain the binary values. This approach has been shown to produce a highly inaccurate result.[citation needed] Another method is to use dynamic programming: recursively breaking the observation matrix $\boldsymbol{X}$ into its sub-matrices and running the inference algorithm on these sub-matrices. The key observation which leads to this algorithm is the sub-matrix $\boldsymbol{X}^0$ of $\boldsymbol{X}$ where $x_{ij} = 0, \forall j$, which corresponds to the unbiased observation matrix of hidden components that do not have a connection to the $i$-th monitor. Experimental results from[17] show that this approach is accurate under moderate noise levels.
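The Boolean generative rule above is straightforward to simulate. A minimal sketch, with a mixing pattern and activation probability invented purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(3)

n_sources, n_monitors, T = 3, 4, 8

# Unknown binary mixing matrix G: G[i, j] = 1 if source i reaches monitor j.
G = rng.integers(0, 2, size=(n_sources, n_monitors)).astype(bool)

# Binary source activity over T time steps (each source active w.p. 0.3).
Y = rng.random((T, n_sources)) < 0.3

# Disjunctive mixing: x_j = OR over i of (g_ij AND y_i).
X = (Y[:, :, None] & G[None, :, :]).any(axis=1)

print(G.astype(int))
print(X.astype(int))
```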
The Generalized Binary ICA framework[18] introduces a broader problem formulation which does not necessitate any knowledge of the generative model. In other words, this method attempts to decompose a source into its independent components (as much as possible, and without losing any information) with no prior assumption on the way it was generated. Although this problem appears quite complex, it can be accurately solved with a branch and bound search tree algorithm or tightly upper bounded with a single multiplication of a matrix with a vector.

Signal mixtures tend to have Gaussian probability density functions, and source signals tend to have non-Gaussian probability density functions. Each source signal can be extracted from a set of signal mixtures by taking the inner product of a weight vector and those signal mixtures, where this inner product provides an orthogonal projection of the signal mixtures. The remaining challenge is finding such a weight vector. One type of method for doing so is projection pursuit.[19][20]

Projection pursuit seeks one projection at a time such that the extracted signal is as non-Gaussian as possible. This contrasts with ICA, which typically extracts M signals simultaneously from M signal mixtures, which requires estimating an M × M unmixing matrix. One practical advantage of projection pursuit over ICA is that fewer than M signals can be extracted if required, where each source signal is extracted from M signal mixtures using an M-element weight vector. We can use kurtosis to recover the multiple source signals by finding the correct weight vectors with the use of projection pursuit.

The kurtosis of the probability density function of a signal, for a finite sample, is computed as

$$K = \frac{\operatorname{E}[(\mathbf{y} - \overline{\mathbf{y}})^4]}{\bigl(\operatorname{E}[(\mathbf{y} - \overline{\mathbf{y}})^2]\bigr)^2} - 3,$$

where $\overline{\mathbf{y}}$ is the sample mean of $\mathbf{y}$, the extracted signals. The constant 3 ensures that Gaussian signals have zero kurtosis, super-Gaussian signals have positive kurtosis, and sub-Gaussian signals have negative kurtosis. The denominator is the variance of $\mathbf{y}$, and ensures that the measured kurtosis takes account of signal variance. The goal of projection pursuit is to maximize the kurtosis, and make the extracted signal as non-normal as possible.

Using kurtosis as a measure of non-normality, we can now examine how the kurtosis of a signal $\mathbf{y} = \mathbf{w}^T \mathbf{x}$ extracted from a set of M mixtures $\mathbf{x} = (x_1, x_2, \ldots, x_M)^T$ varies as the weight vector $\mathbf{w}$ is rotated around the origin. Given our assumption that each source signal $\mathbf{s}$ is super-Gaussian, we would expect the kurtosis of the extracted signal to be maximal precisely when it equals one of the source signals.

For multiple source mixture signals, we can use kurtosis and Gram-Schmidt orthogonalization (GSO) to recover the signals. Given M signal mixtures in an M-dimensional space, GSO projects these data points onto an (M−1)-dimensional space by using the weight vector. We can guarantee the independence of the extracted signals with the use of GSO.

In order to find the correct value of $\mathbf{w}$, we can use the gradient descent method. We first of all whiten the data, and transform $\mathbf{x}$ into a new mixture $\mathbf{z}$, which has unit variance, and $\mathbf{z} = (z_1, z_2, \ldots, z_M)^T$.
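The sample-kurtosis formula above is simple to implement; the following sketch checks the sign convention on a Gaussian, a super-Gaussian (Laplace), and a sub-Gaussian (uniform) sample (the distribution choices are ours, for illustration only):

```python
import numpy as np

def kurtosis(y: np.ndarray) -> float:
    """Sample kurtosis: E[(y - mean)^4] / E[(y - mean)^2]^2 - 3."""
    d = y - y.mean()
    return (d ** 4).mean() / (d ** 2).mean() ** 2 - 3.0

rng = np.random.default_rng(4)
n = 200_000
print("gaussian: %+.2f" % kurtosis(rng.normal(size=n)))   # ~ 0
print("laplace:  %+.2f" % kurtosis(rng.laplace(size=n)))  # > 0 (super-Gaussian)
print("uniform:  %+.2f" % kurtosis(rng.uniform(size=n)))  # < 0 (sub-Gaussian)
```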
This process can be achieved by applying singular value decomposition to $\mathbf{x}$, rescaling each vector $U_i = U_i/\operatorname{E}(U_i^2)$, and letting $\mathbf{z} = \mathbf{U}$. The signal extracted by a weight vector $\mathbf{w}$ is $\mathbf{y} = \mathbf{w}^T \mathbf{z}$. If the weight vector $\mathbf{w}$ has unit length, then the variance of $\mathbf{y}$ is also 1, that is $\operatorname{E}[(\mathbf{w}^T\mathbf{z})^2] = 1$. The kurtosis can thus be written as

$$K = \operatorname{E}[(\mathbf{w}^T\mathbf{z})^4] - 3.$$

The updating process for $\mathbf{w}$ is gradient ascent on this kurtosis,

$$\mathbf{w}_{\text{new}} = \mathbf{w}_{\text{old}} + \eta\,\operatorname{E}\!\left[\mathbf{z}\,(\mathbf{w}_{\text{old}}^T\mathbf{z})^3\right],$$

where $\eta$ is a small constant that guarantees $\mathbf{w}$ converges to the optimal solution. After each update, we normalize $\mathbf{w}_{\text{new}} = \frac{\mathbf{w}_{\text{new}}}{|\mathbf{w}_{\text{new}}|}$, set $\mathbf{w}_{\text{old}} = \mathbf{w}_{\text{new}}$, and repeat the updating process until convergence. We can also use another algorithm to update the weight vector $\mathbf{w}$.

Another approach is using negentropy[10][21] instead of kurtosis. Using negentropy is a more robust method than kurtosis, as kurtosis is very sensitive to outliers. The negentropy methods are based on an important property of the Gaussian distribution: a Gaussian variable has the largest entropy among all continuous random variables of equal variance. This is also the reason why we want to find the most non-Gaussian variables. A simple proof can be found in Differential entropy. The negentropy of a variable $x$ is defined as

$$J(x) = H(y) - H(x),$$

where $y$ is a Gaussian random variable with the same covariance matrix as $x$. An approximation for negentropy is

$$J(x) \approx \frac{1}{12}\operatorname{E}[x^3]^2 + \frac{1}{48}\operatorname{kurt}(x)^2.$$

A proof can be found in the original papers of Comon;[22][10] it has been reproduced in the book Independent Component Analysis by Aapo Hyvärinen, Juha Karhunen, and Erkki Oja.[23] This approximation also suffers from the same problem as kurtosis (sensitivity to outliers). Other approaches have been developed.[24] A common choice of the contrast functions $G_1$ and $G_2$ used in such approximations is

$$G_1(u) = \frac{1}{a_1}\log\cosh(a_1 u), \qquad G_2(u) = -\exp\!\left(-\frac{u^2}{2}\right).$$

Infomax ICA[25] is essentially a multivariate, parallel version of projection pursuit. Whereas projection pursuit extracts a series of signals one at a time from a set of M signal mixtures, ICA extracts M signals in parallel. This tends to make ICA more robust than projection pursuit.[26]

The projection pursuit method uses Gram-Schmidt orthogonalization to ensure the independence of the extracted signal, while ICA uses infomax and maximum likelihood estimation to ensure the independence of the extracted signal. The non-normality of the extracted signal is achieved by assigning an appropriate model, or prior, for the signal. The process of ICA based on infomax, in short, is: given a set of signal mixtures $\mathbf{x}$ and a set of identical independent model cumulative distribution functions (cdfs) $g$, we seek the unmixing matrix $\mathbf{W}$ which maximizes the joint entropy of the signals $\mathbf{Y} = g(\mathbf{y})$, where $\mathbf{y} = \mathbf{W}\mathbf{x}$ are the signals extracted by $\mathbf{W}$. Given the optimal $\mathbf{W}$, the signals $\mathbf{Y}$ have maximum entropy and are therefore independent, which ensures that the extracted signals $\mathbf{y} = g^{-1}(\mathbf{Y})$ are also independent. $g$ is an invertible function, and is the signal model.
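Putting the kurtosis-ascent update above into code gives a toy projection-pursuit extractor. This is a minimal sketch under the stated assumptions (whitened data, unit-length weight vector, one super-Gaussian source); the learning rate and iteration count are arbitrary choices of ours:

```python
import numpy as np

rng = np.random.default_rng(5)

# Whitened mixtures Z of two unit-variance sources (one Laplace, one Gaussian).
S = np.c_[rng.laplace(size=20000), rng.normal(size=20000)]
A = np.array([[1.0, 0.4], [0.3, 1.0]])
X = S @ A.T
Xc = X - X.mean(axis=0)
eigvals, Q = np.linalg.eigh(np.cov(Xc, rowvar=False))
Z = Xc @ (np.diag(eigvals ** -0.5) @ Q.T).T

# Gradient ascent on kurtosis: w <- w + eta * E[z (w^T z)^3], then normalize.
w = rng.normal(size=2)
w /= np.linalg.norm(w)
eta = 0.1
for _ in range(200):
    y = Z @ w
    w = w + eta * (Z * (y ** 3)[:, None]).mean(axis=0)
    w /= np.linalg.norm(w)

# A large positive kurtosis indicates the super-Gaussian (Laplace) source
# was extracted; the Gaussian direction would give a value near zero.
y = Z @ w
d = y - y.mean()
print("kurtosis of extracted signal: %.2f" % ((d ** 4).mean() / (d ** 2).mean() ** 2 - 3))
```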
Note that if the source signal model probability density function p_s matches the probability density function of the extracted signal p_y, then maximizing the joint entropy of Y also maximizes the amount of mutual information between x and Y. For this reason, using entropy to extract independent signals is known as infomax.

Consider the entropy of the vector variable Y = g(y), where y = Wx is the set of signals extracted by the unmixing matrix W. For a finite set of values sampled from a distribution with pdf p_y, the entropy of Y can be estimated as:

H(\mathbf{Y}) = -\frac{1}{N} \sum_{t=1}^{N} \ln p_{\mathbf{Y}}(\mathbf{Y}^{t})

The joint pdf p_Y can be shown to be related to the joint pdf p_y of the extracted signals by the multivariate form:

p_{\mathbf{Y}}(\mathbf{Y}) = \frac{p_{\mathbf{y}}(\mathbf{y})}{|\mathbf{J}|}

where J = ∂Y/∂y is the Jacobian matrix. We have |J| = g′(y), and g′ is the pdf assumed for the source signals, g′ = p_s; therefore,

p_{\mathbf{Y}}(\mathbf{Y}) = \frac{p_{\mathbf{y}}(\mathbf{y})}{p_{s}(\mathbf{y})}

and therefore,

H(\mathbf{Y}) = -\frac{1}{N} \sum_{t=1}^{N} \ln \frac{p_{\mathbf{y}}(\mathbf{y}^{t})}{p_{s}(\mathbf{y}^{t})}

We know that when p_y = p_s, p_Y is the uniform distribution and H(Y) is maximized. Since

p_{\mathbf{y}}(\mathbf{y}) = \frac{p_{\mathbf{x}}(\mathbf{x})}{|\mathbf{W}|}

where |W| is the absolute value of the determinant of the unmixing matrix W, it follows that

H(\mathbf{Y}) = -\frac{1}{N} \sum_{t=1}^{N} \ln \frac{p_{\mathbf{x}}(\mathbf{x}^{t})}{|\mathbf{W}|\, p_{s}(\mathbf{y}^{t})}

so,

H(\mathbf{Y}) = \frac{1}{N} \sum_{t=1}^{N} \ln p_{s}(\mathbf{y}^{t}) + \ln |\mathbf{W}| + H(\mathbf{x})

Since H(x) = -\frac{1}{N}\sum_{t=1}^{N} \ln p_{\mathbf{x}}(\mathbf{x}^{t}) does not depend on W, maximizing H(Y) over W is equivalent to maximizing the function

h(\mathbf{Y}) = \frac{1}{N} \sum_{t=1}^{N} \ln p_{s}(\mathbf{y}^{t}) + \ln |\mathbf{W}|

If the M marginal pdfs of the model joint pdf p_s are independent and we use the commonly super-Gaussian model pdf for the source signals p_s = (1 − tanh(s)²), then we have

h(\mathbf{Y}) = \frac{1}{N} \sum_{i=1}^{M} \sum_{t=1}^{N} \ln\left(1 - \tanh(\mathbf{w}_{i}^{T}\mathbf{x}^{t})^{2}\right) + \ln |\mathbf{W}|

In sum, given an observed signal mixture x, the corresponding set of extracted signals y, and a source signal model p_s = g′, we can find the optimal unmixing matrix W and make the extracted signals independent and non-Gaussian. As in the projection pursuit situation, we can use the gradient descent method to find the optimal solution of the unmixing matrix.

Maximum likelihood estimation (MLE) is a standard statistical tool for finding the parameter values (e.g. the unmixing matrix W) that provide the best fit of some data (e.g. the extracted signals y) to a given model (e.g. the assumed joint probability density function (pdf) p_s of the source signals).[26]

The ML "model" includes a specification of a pdf, which in this case is the pdf p_s of the unknown source signals s. Using ML ICA, the objective is to find an unmixing matrix that yields extracted signals y = Wx with a joint pdf as similar as possible to the joint pdf p_s of the unknown source signals s.
MLE is thus based on the assumption that if the model pdf p_s and the model parameters A are correct, then a high probability should be obtained for the data x that were actually observed. Conversely, if A is far from the correct parameter values, then a low probability of the observed data would be expected.

Using MLE, we call the probability of the observed data for a given set of model parameter values (e.g., a pdf p_s and a matrix A) the likelihood of the model parameter values given the observed data. We define a likelihood function L(W) of W:

\mathbf{L(W)} = p_{s}(\mathbf{W}x)\,|\det \mathbf{W}|.

This equals the probability density at x, since s = Wx.

Thus, if we wish to find a W that is most likely to have generated the observed mixtures x from the unknown source signals s with pdf p_s, then we need only find that W which maximizes the likelihood L(W). The unmixing matrix that maximizes this equation is known as the MLE of the optimal unmixing matrix.

It is common practice to use the log likelihood, because this is easier to evaluate. As the logarithm is a monotonic function, the W that maximizes the function L(W) also maximizes its logarithm ln L(W). This allows us to take the logarithm of the equation above, which yields the log likelihood function

\ln \mathbf{L(W)} = \sum_{i}\sum_{t} \ln p_{s}(w_{i}^{T}x_{t}) + N \ln |\det \mathbf{W}|

If we substitute a commonly used high-kurtosis model pdf for the source signals, p_s = (1 − tanh(s)²), then we have (normalizing by the number of samples N)

\ln \mathbf{L(W)} = \frac{1}{N}\sum_{i}^{M}\sum_{t}^{N} \ln\left(1 - \tanh(w_{i}^{T}x_{t})^{2}\right) + \ln |\det \mathbf{W}|

The matrix W that maximizes this function is the maximum likelihood estimate.

The early general framework for independent component analysis was introduced by Jeanny Hérault and Bernard Ans in 1984,[27] further developed by Christian Jutten in 1985 and 1986,[2][28][29] refined by Pierre Comon in 1991,[22] and popularized in his paper of 1994.[10] In 1995, Tony Bell and Terry Sejnowski introduced a fast and efficient ICA algorithm based on infomax, a principle introduced by Ralph Linsker in 1987. A link exists between maximum-likelihood estimation and infomax approaches.[30] A quite comprehensive tutorial on the maximum-likelihood approach to ICA was published by J-F. Cardoso in 1998.[31]

There are many algorithms available in the literature which perform ICA. One that is widely used, including in industrial applications, is the FastICA algorithm, developed by Hyvärinen and Oja,[32] which uses negentropy as a cost function, already proposed seven years earlier by Pierre Comon in this context.[10] Other examples are rather related to blind source separation, where a more general approach is used. For example, one can drop the independence assumption and separate mutually correlated signals, thus, statistically "dependent" signals.
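As a concrete counterpart to the log-likelihood above, here is a minimal Python/NumPy sketch of gradient ascent on it, using the super-Gaussian model p_s = 1 − tanh(s)² (whose score function is −2 tanh) and the widely used natural-gradient form of the update. The step size and iteration count are illustrative assumptions, and this is a sketch rather than a production ICA implementation such as FastICA.

    import numpy as np

    def infomax_ica(x, eta=0.01, n_iter=2000):
        # Ascend (1/N) sum_t ln p_s(y_t) + ln|det W| with p_s(s) = 1 - tanh(s)^2.
        # Natural-gradient step: dW = eta * (I - 2 tanh(y) y^T / N) W.
        M, N = x.shape
        W = np.eye(M)
        for _ in range(n_iter):
            y = W @ x
            W += eta * (np.eye(M) - 2.0 * np.tanh(y) @ y.T / N) @ W
        return W

    rng = np.random.default_rng(0)
    s = rng.laplace(size=(2, 10000))             # super-Gaussian sources
    x = np.array([[1.0, 0.5], [0.3, 1.0]]) @ s   # observed mixtures
    W = infomax_ica(x)
    y = W @ x   # estimated sources, up to permutation, sign and scale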
Sepp Hochreiter and Jürgen Schmidhuber showed how to obtain non-linear ICA or source separation as a by-product of regularization (1999).[33] Their method does not require a priori knowledge about the number of independent sources.

ICA can be extended to analyze non-physical signals. For instance, ICA has been applied to discover discussion topics in a bag of news mailing-list archives.

Some ICA applications are listed below:[6]

ICA can be applied through the following software:
https://en.wikipedia.org/wiki/Independent_component_analysis
In clinical trials and other scientific studies, an interim analysis is an analysis of data that is conducted before data collection has been completed. Clinical trials are unusual in that enrollment of subjects is a continual process staggered in time. If a treatment can be proven to be clearly beneficial or harmful compared to the concurrent control, or to be obviously futile, based on a pre-defined analysis of an incomplete data set while the study is ongoing, the investigators may stop the study early.

The design of many clinical trials includes some strategy for early stopping if an interim analysis reveals large differences between treatment groups, or shows obvious futility such that there is no chance that continuing to the end would show a clinically meaningful effect. In addition to saving time and resources, such a design feature can reduce study participants' exposure to an inferior or useless treatment. However, when repeated significance testing on accumulating data is done, some adjustment of the usual hypothesis testing procedure must be made to maintain an overall significance level.[1][2] The methods described by Pocock[3][4] and O'Brien & Fleming,[5] among others,[6][7][8] are popular implementations of group sequential testing for clinical trials.[9][10][11] Sometimes interim analyses are equally spaced in terms of calendar time or the information available from the data, but this assumption can be relaxed to allow for unplanned or unequally spaced analyses.[citation needed]

The second Multicenter Automatic Defibrillator Implantation Trial (MADIT II) was conducted to help better identify patients with coronary heart disease who would benefit from an ICD. MADIT II is the latest in a series of trials involving the use of ICDs to improve management and clinical treatment of arrhythmia patients. The Antiarrhythmics versus Implantable Defibrillators (AVID) Trial compared ICDs with antiarrhythmic-drug therapy (amiodarone or sotalol, predominantly the former) in patients who had survived life-threatening ventricular arrhythmias. After inclusion of 1,232 patients, the MADIT II study was terminated when interim analysis showed a significant (31%) reduction in all-cause death in patients assigned to ICD therapy.[12]
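The need for such adjustment is easy to demonstrate by simulation. The following Python sketch (using NumPy and SciPy; the five-look design and per-look sample sizes are illustrative assumptions) repeatedly tests accumulating null data at an unadjusted two-sided 5% level, and shows that the overall false-positive rate is inflated to roughly 14%, which is exactly what boundaries such as Pocock's or O'Brien–Fleming's are designed to prevent.

    import numpy as np
    from scipy.stats import norm

    rng = np.random.default_rng(0)
    n_trials, n_looks, n_per_look = 20000, 5, 40
    z_crit = norm.ppf(0.975)                 # naive fixed-sample 5% boundary
    crossed = 0
    for _ in range(n_trials):
        data = rng.standard_normal(n_looks * n_per_look)  # null: no treatment effect
        for k in range(1, n_looks + 1):
            interim = data[: k * n_per_look]
            z = interim.mean() * np.sqrt(interim.size)    # one-sample z statistic
            if abs(z) > z_crit:
                crossed += 1
                break
    print(f"overall type I error with 5 unadjusted looks: {crossed / n_trials:.3f}")
    # prints roughly 0.14 rather than the nominal 0.05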
https://en.wikipedia.org/wiki/Interim_analysis
The McKendrick–von Foerster equation is a linear first-order partial differential equation encountered in several areas of mathematical biology – for example, demography[1] and cell proliferation modeling; it is applied when age structure is an important feature in the mathematical model.[2] It was first presented by Anderson Gray McKendrick in 1926 as a deterministic limit of lattice models applied to epidemiology,[3] and subsequently independently in 1959 by biophysics professor Heinz von Foerster for describing cell cycles.

The mathematical formula can be derived from first principles. It reads:

\frac{\partial n}{\partial t} + \frac{\partial n}{\partial a} = -m(a)\,n

where the population density n(t, a) is a function of age a and time t, and m(a) is the death function. When m(a) = 0, we have:[2]

\frac{\partial n}{\partial t} + \frac{\partial n}{\partial a} = 0

This states that the population simply ages, and that aging is the only influence on the change in population density; there is no birth, the population will eventually die out, and time flows in only one direction.

Suppose that for a change in time dt and change in age da, the population density satisfies

n(t + dt, a + da) = [1 - m(a)\,dt]\,n(t, a)

That is, during a time period dt the population density decreases by a fraction m(a)dt. Taking a Taylor series expansion to order dt gives

n(t + dt, a + da) \approx n(t, a) + \frac{\partial n}{\partial t}dt + \frac{\partial n}{\partial a}da

We know that da/dt = 1, since age advances at the same rate as time. Therefore, after collecting terms, we must have

\frac{\partial n}{\partial t} + \frac{\partial n}{\partial a} = -m(a)\,n

The von Foerster equation is a continuity equation; it can be solved using the method of characteristics.[2] Another way is by similarity solution; a third is a numerical approach such as finite differences. To obtain a solution, the following boundary conditions should be added:

n(t, 0) = \int_{0}^{\infty} b(a)\,n(t, a)\,da

which states that the births at age zero are supplied by the existing population according to the birth rate b(a) (see the Sharpe–Lotka–McKendrick equation for the more general case), and

n(0, a) = f(a)

which states that the initial population must be given; it then evolves according to the partial differential equation.

In Sebastian Aniţa, Viorel Arnăutu, and Vincenzo Capasso, An Introduction to Optimal Control Problems in Life Sciences and Economics (Birkhäuser, 2011), this equation appears as a special case of the Sharpe–Lotka–McKendrick equation; in the latter there is inflow, and the mathematics is based on the directional derivative. The McKendrick equation appears extensively in the context of cell biology as a good approach to model the eukaryotic cell cycle.[4]
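As an illustration of the finite-difference approach mentioned above, the following Python/NumPy sketch advances the age profile along the characteristics (taking the time step equal to the age step, since da/dt = 1) while applying mortality, with the renewal boundary condition at age zero. The mortality function, birth rate, initial profile, and grid sizes are illustrative assumptions.

    import numpy as np

    def solve_mckendrick(n0, m, b, da, n_steps):
        # Upwind scheme for dn/dt + dn/da = -m(a) n with dt = da,
        # and boundary condition n(t, 0) = integral b(a) n(t, a) da.
        n = n0.copy()
        a = np.arange(n0.size) * da
        for _ in range(n_steps):
            survived = n[:-1] * (1.0 - m(a[:-1]) * da)  # shift in age, apply death
            births = np.trapz(b(a) * n, dx=da)          # inflow at age zero
            n = np.concatenate(([births], survived))
        return n

    da = 0.1
    ages = np.arange(0, 10, da)
    n0 = np.exp(-ages)                    # illustrative initial age profile
    m = lambda a: 0.2 + 0.05 * a          # illustrative death function
    b = lambda a: 0.3 * np.ones_like(a)   # illustrative birth rate
    n_final = solve_mckendrick(n0, m, b, da, n_steps=200)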
https://en.wikipedia.org/wiki/Von_Foerster_equation
Low-power electronics are electronics designed to consume less electrical power than usual, often at some expense. For example, notebook processors usually consume less power than their desktop counterparts, at the expense of computer performance.[1]

The earliest attempts to reduce the amount of power required by an electronic device were related to the development of the wristwatch. Electronic watches require electricity as a power source, and some mechanical movements and hybrid electromechanical movements also require electricity. Usually, the electricity is provided by a replaceable battery. The first use of electrical power in watches was as a substitute for the mainspring, to remove the need for winding. The first electrically powered watch, the Hamilton Electric 500, was released in 1957 by the Hamilton Watch Company of Lancaster, Pennsylvania. The first quartz wristwatches were manufactured in 1967, using analog hands to display the time.[2]

Watch batteries (strictly speaking cells, as a battery is composed of multiple cells) are specially designed for their purpose. They are very small and provide tiny amounts of power continuously for very long periods (several years or more). In some cases, replacing the battery requires a trip to a watch repair shop or watch dealer. Rechargeable batteries are used in some solar-powered watches.

The first digital electronic watch was a Pulsar LED prototype produced in 1970.[3] Digital LED watches were very expensive and out of reach of the common consumer until 1975, when Texas Instruments started to mass-produce LED watches inside a plastic case. Most watches with LED displays required the user to press a button to see the time displayed for a few seconds, because LEDs used so much power that they could not be kept operating continuously. Watches with LED displays were popular for a few years, but soon LED displays were superseded by liquid crystal displays (LCDs), which used less battery power and were much more convenient in use, with the display always visible and no need to push a button before seeing the time. Only in darkness did the user have to press a button to light the display, at first with a tiny light bulb and later with illuminating LEDs.[4]

Most electronic watches today use 32.768 kHz quartz oscillators.[2] As of 2013, processors specifically designed for wristwatches are the lowest-power processors manufactured today: often 4-bit, 32.768 kHz processors.

When personal computers were first developed, power consumption was not an issue. With the development of portable computers, however, the requirement to run a computer off a battery pack necessitated the search for a compromise between computing power and power consumption. Originally most processors ran both the core and I/O circuits at 5 volts, as in the Intel 8088 used by the first Compaq Portable. The supply voltage was later reduced to 3.5, 3.3, and 2.5 volts to lower power consumption. For example, the Pentium P5 core voltage decreased from 5 V in 1993 to 2.5 V in 1997.

With lower voltage comes lower overall power consumption, making a system less expensive to run on any existing battery technology and able to function for longer. This is crucially important for portable or mobile systems. The emphasis on battery operation has driven many of the advances in lowering processor voltage, because this has a significant effect on battery life. The second major benefit is that with less voltage, and therefore less power consumption, less heat is produced. Processors that run cooler can be packed into systems more tightly and will last longer.
The third major benefit is that a processor running cooler on less power can be made to run faster. Lowering the voltage has been one of the key factors in allowing the clock rate of processors to go higher and higher.[5]

The density and speed of integrated-circuit computing elements have increased exponentially for several decades, following a trend described by Moore's law. While it is generally accepted that this exponential improvement trend will end, it is unclear exactly how dense and fast integrated circuits will get by the time this point is reached. Working devices have been demonstrated which were fabricated with a MOSFET transistor channel length of 6.3 nanometres using conventional semiconductor materials, and devices have been built that use carbon nanotubes as MOSFET gates, giving a channel length of approximately one nanometre. The density and computing power of integrated circuits are limited primarily by power-dissipation concerns.

The overall power consumption of a new personal computer has been increasing at about 22% per year.[6] This increase in consumption comes even though the energy consumed by a single CMOS logic gate in order to change its state has fallen exponentially in accordance with Moore's law, by virtue of shrinkage.[6]

An integrated-circuit chip contains many capacitive loads, formed both intentionally (as with gate-to-channel capacitance) and unintentionally (between conductors which are near each other but not electrically connected). Changing the state of the circuit causes a change in the voltage across these parasitic capacitances, which involves a change in the amount of stored energy. As the capacitive loads are charged and discharged through resistive devices, an amount of energy comparable to that stored in the capacitor is dissipated as heat:

E_{dissipated} \approx \frac{1}{2} C V^{2}

The effect of heat dissipation on state change is to limit the amount of computation that may be performed within a given power budget. While device shrinkage can reduce some parasitic capacitances, the number of devices on an integrated-circuit chip has increased more than enough to compensate for the reduced capacitance of each individual device.

Some circuits – dynamic logic, for example – require a minimum clock rate in order to function properly, wasting "dynamic power" even when they do not perform useful computations. Other circuits – most prominently the RCA 1802, but also several later chips such as the WDC 65C02, the Intel 80C85, the Freescale 68HC11 and some other CMOS chips – use "fully static logic" that has no minimum clock rate, but can "stop the clock" and hold their state indefinitely. When the clock is stopped, such circuits use no dynamic power, but they still have a small static power consumption caused by leakage current.

As circuit dimensions shrink, subthreshold leakage current becomes more prominent. This leakage current results in power consumption even when no switching is taking place (static power consumption). In modern chips, this current generally accounts for half the power consumed by the IC.

Loss from subthreshold leakage can be reduced by raising the threshold voltage and lowering the supply voltage. Both of these changes slow down the circuit significantly. To address this issue, some modern low-power circuits use dual supply voltages to improve speed on critical paths of the circuit and lower power consumption on non-critical paths.
Some circuits even use different transistors (with different threshold voltages) in different parts of the circuit, in an attempt to further reduce power consumption without significant performance loss.

Another method used to reduce power consumption is power gating:[7] the use of sleep transistors to disable entire blocks when not in use. Systems that are dormant for long periods of time and "wake up" to perform a periodic activity are often in an isolated location monitoring an activity. These systems are generally battery- or solar-powered, and hence reducing power consumption is a key design issue for them. By shutting down a functional but leaky block until it is used, leakage current can be reduced significantly. For some embedded systems that only function for short periods at a time, this can dramatically reduce power consumption.

Two other approaches also exist to lower the power overhead of state changes. One is to reduce the operating voltage of the circuit, as in a dual-voltage CPU, or to reduce the voltage change involved in a state change (making a state change only change the node voltage by a fraction of the supply voltage; low-voltage differential signaling, for example). This approach is limited by thermal noise within the circuit. There is a characteristic voltage (proportional to the device temperature and to the Boltzmann constant) which the state-switching voltage must exceed in order for the circuit to be resistant to noise. This is typically on the order of 50–100 mV for devices rated to 100 degrees Celsius external temperature (about 4kT, where T is the device's internal temperature in kelvins and k is the Boltzmann constant).

The second approach is to attempt to provide charge to the capacitive loads through paths that are not primarily resistive. This is the principle behind adiabatic circuits. The charge is supplied either from a variable-voltage inductive power supply or by other elements in a reversible-logic circuit. In both cases, the charge transfer must be primarily regulated by the non-resistive load. As a practical rule of thumb, this means the change rate of a signal must be slower than that dictated by the RC time constant of the circuit being driven. In other words, the price of reduced power consumption per unit computation is a reduced absolute speed of computation. In practice, although adiabatic circuits have been built, it has been difficult for them to reduce computation power substantially in practical circuits.

Finally, there are several techniques for reducing the number of state changes associated with a given computation. For clocked-logic circuits, the clock gating technique is used to avoid changing the state of functional blocks that are not required for a given operation. As a more extreme alternative, the asynchronous logic approach implements circuits in such a way that a specific externally supplied clock is not required. While both of these techniques are used to different extents in integrated-circuit design, the limit of practical applicability for each appears to have been reached.[citation needed]

There are a variety of techniques for reducing the amount of battery power required for a desired wireless communication goodput.[8] Some wireless mesh networks use "smart" low-power broadcasting techniques that reduce the battery power required to transmit. This can be achieved by using power-aware protocols and joint power control systems.
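The quadratic dependence of switching energy on supply voltage discussed above can be made concrete with a back-of-the-envelope calculation (a Python sketch; the activity factor, per-node capacitance, node count, and clock frequency are illustrative assumptions, not measurements of any particular chip):

    # Dynamic switching power: P = alpha * C * V^2 * f  (energy ~ C V^2 per cycle)
    def dynamic_power(alpha, c_node, n_nodes, v_dd, f_clk):
        return alpha * c_node * n_nodes * v_dd ** 2 * f_clk

    alpha  = 0.1    # fraction of nodes switching each cycle (illustrative)
    c_node = 1e-15  # ~1 fF effective capacitance per node (illustrative)
    nodes  = 1e7    # ten million switched nodes (illustrative)
    f_clk  = 2e9    # 2 GHz clock

    for v in (5.0, 3.3, 2.5, 1.0):
        print(f"Vdd = {v:.1f} V -> {dynamic_power(alpha, c_node, nodes, v, f_clk):6.2f} W")
    # 5.0 V -> 50 W, 3.3 V -> ~21.8 W, 2.5 V -> 12.5 W, 1.0 V -> 2 W

Halving the supply voltage cuts dynamic power by a factor of four, which is why the historical voltage reductions described earlier were so effective.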
In 2007, about 10% of the average IT budget was spent on energy, and energy costs for IT were expected to rise to 50% by 2010.[9]

The weight and cost of power supply and cooling systems generally depend on the maximum possible power that could be used at any one time. There are two ways to prevent a system from being permanently damaged by excessive heat. Most desktop computers design power and cooling systems around the worst-case CPU power dissipation at the maximum frequency, maximum workload, and worst-case environment. To reduce weight and cost, many laptop computers choose to use a much lighter, lower-cost cooling system designed around a much lower thermal design power, somewhat above the expected maximum frequency, typical workload, and typical environment. Typically such systems reduce (throttle) the clock rate when the CPU die temperature gets too hot, reducing the power dissipated to a level that the cooling system can handle.
https://en.wikipedia.org/wiki/Low-power_electronics
In mathematics and computing, the method of complements is a technique to encode a symmetric range of positive and negative integers in a way that they can use the same algorithm (or mechanism) for addition throughout the whole range. For a given number of places, half of the possible representations of numbers encode the positive numbers, and the other half represent their respective additive inverses. The pairs of mutually additive inverse numbers are called complements. Thus subtraction of any number is implemented by adding its complement. Changing the sign of any number is encoded by generating its complement, which can be done by a very simple and efficient algorithm. This method was commonly used in mechanical calculators and is still used in modern computers. The generalized concept of the radix complement (as described below) is also valuable in number theory, such as in Midy's theorem.

The nines' complement of a number given in decimal representation is formed by replacing each digit with nine minus that digit. To subtract a decimal number y (the subtrahend) from another number x (the minuend), two methods may be used. In the first method, the nines' complement of x is added to y; then the nines' complement of the result is formed to produce the desired result. In the second method, the nines' complement of y is added to x and one is added to the sum; the leftmost digit "1" of the result is then discarded. Discarding the leftmost "1" is especially convenient on calculators or computers that use a fixed number of digits: there is nowhere for it to go, so it is simply lost during the calculation. The nines' complement plus one is known as the tens' complement.

The method of complements can be extended to other number bases (radices); in particular, it is used on most digital computers to perform subtraction, to represent negative numbers in base 2 or binary arithmetic, and to test for overflow in calculation.[1]

The radix complement of an n-digit number y in radix b is defined as b^n − y. In practice, the radix complement is more easily obtained by adding 1 to the diminished radix complement, which is (b^n − 1) − y. While this seems equally difficult to calculate as the radix complement, it is actually simpler, since b^n − 1 is simply the digit b − 1 repeated n times. This is because

b^{n}-1 = (b-1)\left(b^{n-1}+b^{n-2}+\cdots+b+1\right) = (b-1)b^{n-1}+\cdots+(b-1)

(see also the geometric series formula). Knowing this, the diminished radix complement of a number can be found by complementing each digit with respect to b − 1, i.e. subtracting each digit in y from b − 1.

The subtraction of y from x using diminished radix complements may be performed as follows. Add the diminished radix complement of x to y to obtain b^n − 1 − x + y, or equivalently b^n − 1 − (x − y), which is the diminished radix complement of x − y. Further taking the diminished radix complement of b^n − 1 − (x − y) results in the desired answer of x − y.
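The recipe above translates directly into code. A small Python sketch for fixed-width decimal operands (the function names are illustrative), using the fact that the radix complement is the diminished radix complement plus one:

    def diminished_radix_complement(y, base, n_digits):
        # (b^n - 1) - y: complement each digit with respect to base - 1.
        return (base ** n_digits - 1) - y

    def subtract_by_complement(x, y, base=10, n_digits=5):
        # Compute x - y (assuming y <= x) by adding the radix complement of y.
        radix_complement = diminished_radix_complement(y, base, n_digits) + 1
        total = x + radix_complement        # equals x - y + base**n_digits
        return total - base ** n_digits     # "drop the leading 1"

    assert subtract_by_complement(873, 218, n_digits=3) == 655
    assert subtract_by_complement(48032, 391) == 47641

The same routine works in any radix; with base=2 it performs the two's-complement subtraction discussed below.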
Alternatively, using the radix complement, x − y may be obtained by adding the radix complement of y to x, giving x + b^n − y, or x − y + b^n. Assuming y ≤ x, the result will be greater than or equal to b^n, and dropping the leading 1 from the result is the same as subtracting b^n, making the result x − y + b^n − b^n, or just x − y, the desired result.

In the decimal numbering system, the radix complement is called the ten's complement and the diminished radix complement the nines' complement. In binary, the radix complement is called the two's complement and the diminished radix complement the ones' complement. The naming of complements in other bases is similar. Some people, notably Donald Knuth, recommend using the placement of the apostrophe to distinguish between the radix complement and the diminished radix complement. In this usage, the four's complement refers to the radix complement of a number in base four, while fours' complement is the diminished radix complement of a number in base 5. However, the distinction is not important when the radix is apparent (nearly always), and the subtle difference in apostrophe placement is not common practice. Most writers use one's and nine's complement, and many style manuals leave out the apostrophe, recommending ones and nines complement.

The nines' complement of a decimal digit is the number that must be added to it to produce 9: the nines' complement of 3 is 6, the nines' complement of 7 is 2, and so on. To form the nines' complement of a larger number, each digit is replaced by its nines' complement. Consider the following subtraction problem:

  873 − 218

Using the first method:

1. Compute the nines' complement of the minuend, 873, giving 126.
2. Add that to the subtrahend: 126 + 218 = 344.
3. Now calculate the nines' complement of the result: 999 − 344 = 655, the desired answer.

Using the second method:

1. Compute the nines' complement of 218, which is 781. Because 218 is three digits long, this is the same as subtracting 218 from 999.
2. Next, the sum of x and the nines' complement of y is taken: 873 + 781 = 1654.
3. The leading "1" digit is then dropped, giving 654.
4. This is not yet correct. In the first step, in effect 999 was added to the equation; then 1000 was subtracted when the leading 1 was dropped. So the answer obtained (654) is one less than the correct answer x − y. To fix this, 1 is added to the answer: 654 + 1 = 655, the correct answer to our original subtraction problem.

The last step of adding 1 could be skipped if instead the ten's complement of y were used in the first step.

In the following example the result of the subtraction has fewer digits than x; take, for instance, 123410 − 123401. Using the first method, the sum of the nines' complement of x (876589) and y (123401) is 999990. The nines' complement of 999990 is 000009. Removing the leading zeros gives 9, the desired result.

If the subtrahend, y, has fewer digits than the minuend, x, leading zeros must be added in the second method. These zeros become leading nines when the complement is taken. For example:

  48032 − 391

can be rewritten

  48032 − 00391

Replacing 00391 with its nines' complement (99608) and adding 1 produces the sum:

  48032 + 99608 + 1 = 147641

Dropping the leading 1 gives the correct answer: 47641.

The method of complements is especially useful in binary (radix 2), since the ones' complement is very easily obtained by inverting each bit (changing "0" to "1" and vice versa).
Adding 1 to get the two's complement can be done by simulating a carry into the least significant bit. For example, taking x = 0110 0100 (decimal 100) and y = 0001 0110 (decimal 22), the subtraction x − y becomes the sum

  0110 0100 + 1110 1001 + 1 = 1 0100 1110

where 1110 1001 is the ones' complement of y and the trailing 1 is the simulated carry. Dropping the initial "1" gives the answer: 0100 1110 (equals decimal 78).

The method of complements normally assumes that the operands are positive and that y ≤ x, logical constraints given that adding and subtracting arbitrary integers is normally done by comparing signs, adding the two or subtracting the smaller from the larger, and giving the result the correct sign. Let's see what happens if x < y. In that case, there will not be a "1" digit to cross out after the addition, since x − y + b^n will be less than b^n. For example (in decimal), taking 185 − 329, complementing y and adding gives:

  185 + 670 + 1 = 856

At this point, there is no simple way to complete the calculation by subtracting b^n (1000 in this case); one cannot simply ignore a leading 1. The expected answer is −144, which isn't as far off as it seems: 856 happens to be the ten's complement of 144. This issue can be addressed in a number of ways, for example by detecting the missing carry and taking the ten's complement of the result, marking it as negative.

The method of complements was used in many mechanical calculators as an alternative to running the gears backwards, and its use is ubiquitous in digital computers, regardless of the representation used for signed numbers; however, the circuitry required depends on that representation.

The method of complements was also used to correct errors when accounting books were written by hand. To remove an entry from a column of numbers, the accountant could add a new entry with the ten's complement of the number to subtract. A bar was added over the digits of this entry to denote its special status. It was then possible to add the whole column of figures to obtain the corrected result.

Complementing the sum is handy for cashiers making change for a purchase from currency in a single denomination of 1 raised to an integer power of the currency's base. For decimal currencies that would be 10, 100, 1,000, etc., e.g. a $10.00 bill.

In grade schools, students are sometimes taught the method of complements as a shortcut useful in mental arithmetic.[3] Subtraction is done by adding the ten's complement of the subtrahend, which is the nines' complement plus 1. The result of this addition is used when it is clear that the difference will be positive; otherwise the ten's complement of the addition's result is used, marked as negative. The same technique works for subtracting on an adding machine.
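At the bit level the same idea needs only an inversion and an increment. The following Python sketch (with an illustrative helper name) performs eight-bit two's-complement subtraction, including the x < y case just described, where the missing carry signals that the result holds the complement of the magnitude:

    def twos_complement_subtract(x, y, bits=8):
        # Compute x - y as a bits-wide adder would: ones' complement of y, plus 1.
        mask = (1 << bits) - 1
        total = x + ((~y & mask) + 1)      # invert each bit of y, then add 1
        carry_out = total >> bits          # the leading "1", if any
        result = total & mask              # keep only the low `bits` bits
        if carry_out:                      # x >= y: result is correct as-is
            return result
        # x < y: no carry; result is the two's complement of the magnitude
        return -(((~result & mask) + 1) & mask)

    assert twos_complement_subtract(0b01100100, 0b00010110) == 78   # 100 - 22
    assert twos_complement_subtract(100, 244) == -144               # x < y case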
https://en.wikipedia.org/wiki/Method_of_complements
In mathematics, the Taylor series or Taylor expansion of a function is an infinite sum of terms that are expressed in terms of the function's derivatives at a single point. For most common functions, the function and the sum of its Taylor series are equal near this point. Taylor series are named after Brook Taylor, who introduced them in 1715. A Taylor series is also called a Maclaurin series when 0 is the point where the derivatives are considered, after Colin Maclaurin, who made extensive use of this special case of Taylor series in the 18th century.

The partial sum formed by the first n + 1 terms of a Taylor series is a polynomial of degree n that is called the nth Taylor polynomial of the function. Taylor polynomials are approximations of a function, which become generally more accurate as n increases. Taylor's theorem gives quantitative estimates on the error introduced by the use of such approximations. If the Taylor series of a function is convergent, its sum is the limit of the infinite sequence of the Taylor polynomials. A function may differ from the sum of its Taylor series, even if its Taylor series is convergent. A function is analytic at a point x if it is equal to the sum of its Taylor series in some open interval (or open disk in the complex plane) containing x. This implies that the function is analytic at every point of the interval (or disk).

The Taylor series of a real or complex-valued function f(x) that is infinitely differentiable at a real or complex number a is the power series

f(a)+\frac{f'(a)}{1!}(x-a)+\frac{f''(a)}{2!}(x-a)^{2}+\cdots =\sum _{n=0}^{\infty }\frac{f^{(n)}(a)}{n!}(x-a)^{n}.

Here, n! denotes the factorial of n, and f^{(n)}(a) denotes the nth derivative of f evaluated at the point a. The derivative of order zero of f is defined to be f itself, and (x − a)^0 and 0! are both defined to be 1. The series can be written compactly using sigma notation, as on the right-hand side above.[1] With a = 0, the Maclaurin series takes the form:[2]

f(0)+\frac{f'(0)}{1!}x+\frac{f''(0)}{2!}x^{2}+\cdots =\sum _{n=0}^{\infty }\frac{f^{(n)}(0)}{n!}x^{n}.

The Taylor series of any polynomial is the polynomial itself.
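The definition lends itself directly to computation with a computer algebra system. A short sketch using Python's SymPy (the choice of function, expansion point, and truncation order are illustrative) builds the nth Taylor polynomial from the derivatives f^(k)(a)/k! and checks it against SymPy's built-in series expansion:

    import sympy as sp

    x = sp.symbols('x')
    f = sp.exp(x)       # illustrative choice of function
    a, n = 0, 5         # expansion point and truncation order

    # nth Taylor polynomial straight from the definition: sum f^(k)(a)/k! (x-a)^k
    taylor_poly = sum(f.diff(x, k).subs(x, a) / sp.factorial(k) * (x - a) ** k
                      for k in range(n + 1))
    print(sp.expand(taylor_poly))     # x**5/120 + x**4/24 + x**3/6 + x**2/2 + x + 1

    # Agrees with SymPy's built-in expansion, up to the O() remainder term
    print(sp.series(f, x, a, n + 1))  # 1 + x + x**2/2 + ... + x**5/120 + O(x**6)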
The Maclaurin series of 1/(1 − x) is the geometric series

1+x+x^{2}+x^{3}+\cdots .

So, by substituting 1 − x for x, the Taylor series of 1/x at a = 1 is

1-(x-1)+(x-1)^{2}-(x-1)^{3}+\cdots .

By integrating the above Maclaurin series, we find the Maclaurin series of ln(1 − x), where ln denotes the natural logarithm:

-x-\tfrac{1}{2}x^{2}-\tfrac{1}{3}x^{3}-\tfrac{1}{4}x^{4}-\cdots .

The corresponding Taylor series of ln x at a = 1 is

(x-1)-\tfrac{1}{2}(x-1)^{2}+\tfrac{1}{3}(x-1)^{3}-\tfrac{1}{4}(x-1)^{4}+\cdots ,

and more generally, the corresponding Taylor series of ln x at an arbitrary nonzero point a is:

\ln a+\frac{1}{a}(x-a)-\frac{1}{a^{2}}\frac{(x-a)^{2}}{2}+\cdots .

The Maclaurin series of the exponential function e^x is

\sum _{n=0}^{\infty }\frac{x^{n}}{n!}=\frac{x^{0}}{0!}+\frac{x^{1}}{1!}+\frac{x^{2}}{2!}+\frac{x^{3}}{3!}+\cdots =1+x+\frac{x^{2}}{2}+\frac{x^{3}}{6}+\frac{x^{4}}{24}+\frac{x^{5}}{120}+\cdots .

The above expansion holds because the derivative of e^x with respect to x is also e^x, and e^0 equals 1. This leaves the terms (x − 0)^n in the numerator and n! in the denominator of each term in the infinite sum.

The ancient Greek philosopher Zeno of Elea considered the problem of summing an infinite series to achieve a finite result, but rejected it as an impossibility;[3] the result was Zeno's paradox. Later, Aristotle proposed a philosophical resolution of the paradox, but the mathematical content was apparently unresolved until taken up by Archimedes, as it had been prior to Aristotle by the Presocratic atomist Democritus. It was through Archimedes's method of exhaustion that an infinite number of progressive subdivisions could be performed to achieve a finite result.[4] Liu Hui independently employed a similar method a few centuries later.[5]

In the 14th century, the earliest examples of specific Taylor series (but not the general method) were given by the Indian mathematician Madhava of Sangamagrama.[6] Though no record of his work survives, writings of his followers in the Kerala school of astronomy and mathematics suggest that he found the Taylor series for the trigonometric functions of sine, cosine, and arctangent (see Madhava series). During the following two centuries his followers developed further series expansions and rational approximations.

In late 1670, James Gregory was shown in a letter from John Collins several Maclaurin series (sin x, cos x, arcsin x, and x cot x) derived by Isaac Newton, and was told that Newton had developed a general method for expanding functions in series. Newton had in fact used a cumbersome method involving long division of series and term-by-term integration, but Gregory did not know it and set out to discover a general method for himself.
In early 1671 Gregory discovered something like the general Maclaurin series and sent a letter to Collins including series for arctan x, tan x, sec x, ln sec x (the integral of tan), ln tan ½(½π + x) (the integral of sec, the inverse Gudermannian function), arcsec(√2 e^x), and 2 arctan e^x − ½π (the Gudermannian function). However, thinking that he had merely redeveloped a method by Newton, Gregory never described how he obtained these series, and it can only be inferred that he understood the general method by examining scratch work he had scribbled on the back of another letter from 1671.[7]

In 1691–1692, Isaac Newton wrote down an explicit statement of the Taylor and Maclaurin series in an unpublished version of his work De Quadratura Curvarum. However, this work was never completed, and the relevant sections were omitted from the portions published in 1704 under the title Tractatus de Quadratura Curvarum. It was not until 1715 that a general method for constructing these series for all functions for which they exist was finally published by Brook Taylor,[8] after whom the series are now named. The Maclaurin series was named after Colin Maclaurin, a Scottish mathematician, who published a special case of the Taylor result in the mid-18th century.

If f(x) is given by a convergent power series in an open disk centred at b in the complex plane (or an interval in the real line), it is said to be analytic in this region. Thus for x in this region, f is given by a convergent power series

f(x)=\sum _{n=0}^{\infty }a_{n}(x-b)^{n}.

Differentiating the above formula n times with respect to x, then setting x = b, gives:

\frac{f^{(n)}(b)}{n!}=a_{n}

and so the power series expansion agrees with the Taylor series. Thus a function is analytic in an open disk centered at b if and only if its Taylor series converges to the value of the function at each point of the disk.

If f(x) is equal to the sum of its Taylor series for all x in the complex plane, it is called entire. The polynomials, the exponential function e^x, and the trigonometric functions sine and cosine are examples of entire functions. Examples of functions that are not entire include the square root, the logarithm, the trigonometric function tangent, and its inverse, arctan. For these functions the Taylor series do not converge if x is far from b. That is, the Taylor series diverges at x if the distance between x and b is larger than the radius of convergence. The Taylor series can be used to calculate the value of an entire function at every point, if the value of the function, and of all of its derivatives, are known at a single point. Uses of the Taylor series for analytic functions include, for example, approximating the function by the partial sums (the Taylor polynomials), differentiating and integrating power series term by term, and evaluating limits.

As an illustration of accuracy, consider the approximation of sin x around the point x = 0 by its polynomial of degree seven:

\sin {x}\approx x-\frac{x^{3}}{3!}+\frac{x^{5}}{5!}-\frac{x^{7}}{7!}.

The error in this approximation is no more than |x|⁹ / 9!. For a full cycle centered at the origin (−π < x < π) the error is less than 0.08215. In particular, for −1 < x < 1, the error is less than 0.000003. In contrast, the natural logarithm function ln(1 + x) and its Taylor polynomials around a = 0 behave quite differently.
These approximations converge to the function only in the region −1 < x ≤ 1; outside of this region the higher-degree Taylor polynomials are worse approximations for the function. The error incurred in approximating a function by its nth-degree Taylor polynomial is called the remainder or residual and is denoted by the function R_n(x). Taylor's theorem can be used to obtain a bound on the size of the remainder.

In general, Taylor series need not be convergent at all. In fact, the set of functions with a convergent Taylor series is a meager set in the Fréchet space of smooth functions. Even if the Taylor series of a function f does converge, its limit need not be equal to the value of the function f(x). For example, the function

f(x)={\begin{cases}e^{-1/x^{2}}&{\text{if }}x\neq 0\\0&{\text{if }}x=0\end{cases}}

is infinitely differentiable at x = 0, and has all derivatives zero there. Consequently, the Taylor series of f(x) about x = 0 is identically zero. However, f(x) is not the zero function, so it does not equal its Taylor series around the origin. Thus, f(x) is an example of a non-analytic smooth function.

In real analysis, this example shows that there are infinitely differentiable functions f(x) whose Taylor series are not equal to f(x) even if they converge. By contrast, the holomorphic functions studied in complex analysis always possess a convergent Taylor series, and even the Taylor series of meromorphic functions, which might have singularities, never converge to a value different from the function itself. The complex function e^{−1/z²}, however, does not approach 0 when z approaches 0 along the imaginary axis, so it is not continuous in the complex plane and its Taylor series is undefined at 0.

More generally, every sequence of real or complex numbers can appear as coefficients in the Taylor series of an infinitely differentiable function defined on the real line, a consequence of Borel's lemma. As a result, the radius of convergence of a Taylor series can be zero. There are even infinitely differentiable functions defined on the real line whose Taylor series have a radius of convergence 0 everywhere.[9]

A function cannot be written as a Taylor series centred at a singularity; in these cases, one can often still achieve a series expansion if one also allows negative powers of the variable x; see Laurent series. For example, f(x) = e^{−1/x²} can be written as a Laurent series.

There is, however, a generalization of the Taylor series that does converge to the value of the function itself for any bounded continuous function on (0, ∞), obtained using the calculus of finite differences. Specifically, the following theorem, due to Einar Hille, states that for any t > 0,[10]

\lim _{h\to 0^{+}}\sum _{n=0}^{\infty }\frac{t^{n}}{n!}\frac{\Delta _{h}^{n}f(a)}{h^{n}}=f(a+t).

Here Δ_h^n is the nth finite difference operator with step size h. The series is precisely the Taylor series, except that divided differences appear in place of differentiation: the series is formally similar to the Newton series. When the function f is analytic at a, the terms in the series converge to the terms of the Taylor series, and in this sense this formula generalizes the usual Taylor series.
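Before stating the general identity behind this, Hille's formula can be checked numerically. The sketch below (Python; the test function, the values of t and h, and the truncation of the infinite sum are illustrative assumptions) evaluates the finite-difference series for f = exp and shows it approaching f(a + t) = e as h decreases:

    import numpy as np
    from math import comb, factorial

    def forward_difference(f, a, h, n):
        # n-th forward difference: sum_k (-1)^(n-k) C(n,k) f(a + k h)
        return sum((-1) ** (n - k) * comb(n, k) * f(a + k * h) for k in range(n + 1))

    def hille_sum(f, a, t, h, n_terms=30):
        # Truncation of sum_n t^n/n! * Delta_h^n f(a) / h^n
        return sum(t ** n / factorial(n) * forward_difference(f, a, h, n) / h ** n
                   for n in range(n_terms))

    f, a, t = np.exp, 0.0, 1.0
    for h in (0.5, 0.2, 0.1):
        print(h, hille_sum(f, a, t, h))
    # prints ~3.66, ~3.03, ~2.86, tending toward e = 2.71828... as h -> 0+
    # (much smaller h would need higher-precision arithmetic, because the
    # alternating finite-difference sums cancel catastrophically in float64)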
In general, for any infinite sequence a_i, the following power series identity holds:

\sum _{n=0}^{\infty }\frac{u^{n}}{n!}\Delta ^{n}a_{i}=e^{-u}\sum _{j=0}^{\infty }\frac{u^{j}}{j!}a_{i+j}.

So in particular,

f(a+t)=\lim _{h\to 0^{+}}e^{-t/h}\sum _{j=0}^{\infty }f(a+jh)\frac{(t/h)^{j}}{j!}.

The series on the right is the expected value of f(a + X), where X is a Poisson-distributed random variable that takes the value jh with probability e^{-t/h}(t/h)^{j}/j!. Hence,

f(a+t)=\lim _{h\to 0^{+}}\int _{-\infty }^{\infty }f(a+x)\,dP_{t/h,h}(x).

The law of large numbers implies that the identity holds.[11]

Several important Maclaurin series expansions follow. All these expansions are valid for complex arguments x.

The exponential function e^x (with base e) has Maclaurin series[12]

e^{x}=\sum _{n=0}^{\infty }\frac{x^{n}}{n!}=1+x+\frac{x^{2}}{2!}+\frac{x^{3}}{3!}+\cdots .

It converges for all x. The exponential generating function of the Bell numbers is the exponential function of the predecessor of the exponential function:

\exp(\exp {x}-1)=\sum _{n=0}^{\infty }\frac{B_{n}}{n!}x^{n}

The natural logarithm (with base e) has Maclaurin series[13]

\ln(1-x)=-\sum _{n=1}^{\infty }\frac{x^{n}}{n}=-x-\frac{x^{2}}{2}-\frac{x^{3}}{3}-\cdots ,

\ln(1+x)=\sum _{n=1}^{\infty }(-1)^{n+1}\frac{x^{n}}{n}=x-\frac{x^{2}}{2}+\frac{x^{3}}{3}-\cdots .

The last series is known as the Mercator series, named after Nicholas Mercator (since it was published in his 1668 treatise Logarithmotechnia).[14] Both of these series converge for |x| < 1. (In addition, the series for ln(1 − x) converges for x = −1, and the series for ln(1 + x) converges for x = 1.)[13]

The geometric series and its derivatives have Maclaurin series

\frac{1}{1-x}=\sum _{n=0}^{\infty }x^{n}

\frac{1}{(1-x)^{2}}=\sum _{n=1}^{\infty }nx^{n-1}

\frac{1}{(1-x)^{3}}=\sum _{n=2}^{\infty }\frac{(n-1)n}{2}x^{n-2}.

All are convergent for |x| < 1. These are special cases of the binomial series given in the next section.

The binomial series is the power series

(1+x)^{\alpha }=\sum _{n=0}^{\infty }{\binom {\alpha }{n}}x^{n}

whose coefficients are the generalized binomial coefficients[15]

{\binom {\alpha }{n}}=\prod _{k=1}^{n}\frac{\alpha -k+1}{k}=\frac{\alpha (\alpha -1)\cdots (\alpha -n+1)}{n!}.

(If n = 0, this product is an empty product and has value 1.) It converges for |x| < 1 for any real or complex number α. When α = −1, this is essentially the infinite geometric series mentioned in the previous section.
The special casesα=⁠1/2⁠andα= −⁠1/2⁠give thesquare rootfunction and itsinverse:[16] (1+x)12=1+12x−18x2+116x3−5128x4+7256x5−⋯=∑n=0∞(−1)n−1(2n)!4n(n!)2(2n−1)xn,(1+x)−12=1−12x+38x2−516x3+35128x4−63256x5+⋯=∑n=0∞(−1)n(2n)!4n(n!)2xn.{\displaystyle {\begin{aligned}(1+x)^{\frac {1}{2}}&=1+{\frac {1}{2}}x-{\frac {1}{8}}x^{2}+{\frac {1}{16}}x^{3}-{\frac {5}{128}}x^{4}+{\frac {7}{256}}x^{5}-\cdots &=\sum _{n=0}^{\infty }{\frac {(-1)^{n-1}(2n)!}{4^{n}(n!)^{2}(2n-1)}}x^{n},\\(1+x)^{-{\frac {1}{2}}}&=1-{\frac {1}{2}}x+{\frac {3}{8}}x^{2}-{\frac {5}{16}}x^{3}+{\frac {35}{128}}x^{4}-{\frac {63}{256}}x^{5}+\cdots &=\sum _{n=0}^{\infty }{\frac {(-1)^{n}(2n)!}{4^{n}(n!)^{2}}}x^{n}.\end{aligned}}} When only thelinear termis retained, this simplifies to thebinomial approximation. The usualtrigonometric functionsand their inverses have the following Maclaurin series:[17] sin⁡x=∑n=0∞(−1)n(2n+1)!x2n+1=x−x33!+x55!−⋯for allxcos⁡x=∑n=0∞(−1)n(2n)!x2n=1−x22!+x44!−⋯for allxtan⁡x=∑n=1∞B2n(−4)n(1−4n)(2n)!x2n−1=x+x33+2x515+⋯for|x|<π2sec⁡x=∑n=0∞(−1)nE2n(2n)!x2n=1+x22+5x424+⋯for|x|<π2arcsin⁡x=∑n=0∞(2n)!4n(n!)2(2n+1)x2n+1=x+x36+3x540+⋯for|x|≤1arccos⁡x=π2−arcsin⁡x=π2−∑n=0∞(2n)!4n(n!)2(2n+1)x2n+1=π2−x−x36−3x540−⋯for|x|≤1arctan⁡x=∑n=0∞(−1)n2n+1x2n+1=x−x33+x55−⋯for|x|≤1,x≠±i{\displaystyle {\begin{aligned}\sin x&=\sum _{n=0}^{\infty }{\frac {(-1)^{n}}{(2n+1)!}}x^{2n+1}&&=x-{\frac {x^{3}}{3!}}+{\frac {x^{5}}{5!}}-\cdots &&{\text{for all }}x\\[6pt]\cos x&=\sum _{n=0}^{\infty }{\frac {(-1)^{n}}{(2n)!}}x^{2n}&&=1-{\frac {x^{2}}{2!}}+{\frac {x^{4}}{4!}}-\cdots &&{\text{for all }}x\\[6pt]\tan x&=\sum _{n=1}^{\infty }{\frac {B_{2n}(-4)^{n}\left(1-4^{n}\right)}{(2n)!}}x^{2n-1}&&=x+{\frac {x^{3}}{3}}+{\frac {2x^{5}}{15}}+\cdots &&{\text{for }}|x|<{\frac {\pi }{2}}\\[6pt]\sec x&=\sum _{n=0}^{\infty }{\frac {(-1)^{n}E_{2n}}{(2n)!}}x^{2n}&&=1+{\frac {x^{2}}{2}}+{\frac {5x^{4}}{24}}+\cdots &&{\text{for }}|x|<{\frac {\pi }{2}}\\[6pt]\arcsin x&=\sum _{n=0}^{\infty }{\frac {(2n)!}{4^{n}(n!)^{2}(2n+1)}}x^{2n+1}&&=x+{\frac {x^{3}}{6}}+{\frac {3x^{5}}{40}}+\cdots &&{\text{for }}|x|\leq 1\\[6pt]\arccos x&={\frac {\pi }{2}}-\arcsin x\\&={\frac {\pi }{2}}-\sum _{n=0}^{\infty }{\frac {(2n)!}{4^{n}(n!)^{2}(2n+1)}}x^{2n+1}&&={\frac {\pi }{2}}-x-{\frac {x^{3}}{6}}-{\frac {3x^{5}}{40}}-\cdots &&{\text{for }}|x|\leq 1\\[6pt]\arctan x&=\sum _{n=0}^{\infty }{\frac {(-1)^{n}}{2n+1}}x^{2n+1}&&=x-{\frac {x^{3}}{3}}+{\frac {x^{5}}{5}}-\cdots &&{\text{for }}|x|\leq 1,\ x\neq \pm i\end{aligned}}} All angles are expressed inradians. The numbersBkappearing in the expansions oftanxare theBernoulli numbers. 
TheEkin the expansion ofsecxareEuler numbers.[18] Thehyperbolic functionshave Maclaurin series closely related to the series for the corresponding trigonometric functions:[19] sinh⁡x=∑n=0∞x2n+1(2n+1)!=x+x33!+x55!+⋯for allxcosh⁡x=∑n=0∞x2n(2n)!=1+x22!+x44!+⋯for allxtanh⁡x=∑n=1∞B2n4n(4n−1)(2n)!x2n−1=x−x33+2x515−17x7315+⋯for|x|<π2arsinh⁡x=∑n=0∞(−1)n(2n)!4n(n!)2(2n+1)x2n+1=x−x36+3x540−⋯for|x|≤1artanh⁡x=∑n=0∞x2n+12n+1=x+x33+x55+⋯for|x|≤1,x≠±1{\displaystyle {\begin{aligned}\sinh x&=\sum _{n=0}^{\infty }{\frac {x^{2n+1}}{(2n+1)!}}&&=x+{\frac {x^{3}}{3!}}+{\frac {x^{5}}{5!}}+\cdots &&{\text{for all }}x\\[6pt]\cosh x&=\sum _{n=0}^{\infty }{\frac {x^{2n}}{(2n)!}}&&=1+{\frac {x^{2}}{2!}}+{\frac {x^{4}}{4!}}+\cdots &&{\text{for all }}x\\[6pt]\tanh x&=\sum _{n=1}^{\infty }{\frac {B_{2n}4^{n}\left(4^{n}-1\right)}{(2n)!}}x^{2n-1}&&=x-{\frac {x^{3}}{3}}+{\frac {2x^{5}}{15}}-{\frac {17x^{7}}{315}}+\cdots &&{\text{for }}|x|<{\frac {\pi }{2}}\\[6pt]\operatorname {arsinh} x&=\sum _{n=0}^{\infty }{\frac {(-1)^{n}(2n)!}{4^{n}(n!)^{2}(2n+1)}}x^{2n+1}&&=x-{\frac {x^{3}}{6}}+{\frac {3x^{5}}{40}}-\cdots &&{\text{for }}|x|\leq 1\\[6pt]\operatorname {artanh} x&=\sum _{n=0}^{\infty }{\frac {x^{2n+1}}{2n+1}}&&=x+{\frac {x^{3}}{3}}+{\frac {x^{5}}{5}}+\cdots &&{\text{for }}|x|\leq 1,\ x\neq \pm 1\end{aligned}}} The numbersBkappearing in the series fortanhxare theBernoulli numbers.[19] Thepolylogarithmshave these defining identities: Li2(x)=∑n=1∞1n2xnLi3(x)=∑n=1∞1n3xn{\displaystyle {\begin{aligned}{\text{Li}}_{2}(x)&=\sum _{n=1}^{\infty }{\frac {1}{n^{2}}}x^{n}\\{\text{Li}}_{3}(x)&=\sum _{n=1}^{\infty }{\frac {1}{n^{3}}}x^{n}\end{aligned}}} TheLegendre chi functionsare defined as follows: χ2(x)=∑n=0∞1(2n+1)2x2n+1χ3(x)=∑n=0∞1(2n+1)3x2n+1{\displaystyle {\begin{aligned}\chi _{2}(x)&=\sum _{n=0}^{\infty }{\frac {1}{(2n+1)^{2}}}x^{2n+1}\\\chi _{3}(x)&=\sum _{n=0}^{\infty }{\frac {1}{(2n+1)^{3}}}x^{2n+1}\end{aligned}}} And the formulas presented below are calledinverse tangent integrals: Ti2(x)=∑n=0∞(−1)n(2n+1)2x2n+1Ti3(x)=∑n=0∞(−1)n(2n+1)3x2n+1{\displaystyle {\begin{aligned}{\text{Ti}}_{2}(x)&=\sum _{n=0}^{\infty }{\frac {(-1)^{n}}{(2n+1)^{2}}}x^{2n+1}\\{\text{Ti}}_{3}(x)&=\sum _{n=0}^{\infty }{\frac {(-1)^{n}}{(2n+1)^{3}}}x^{2n+1}\end{aligned}}} Instatistical thermodynamicsthese formulas are of great importance. 
The completeelliptic integralsof first kind K and of second kind E can be defined as follows: 2πK(x)=∑n=0∞[(2n)!]216n(n!)4x2n2πE(x)=∑n=0∞[(2n)!]2(1−2n)16n(n!)4x2n{\displaystyle {\begin{aligned}{\frac {2}{\pi }}K(x)&=\sum _{n=0}^{\infty }{\frac {[(2n)!]^{2}}{16^{n}(n!)^{4}}}x^{2n}\\{\frac {2}{\pi }}E(x)&=\sum _{n=0}^{\infty }{\frac {[(2n)!]^{2}}{(1-2n)16^{n}(n!)^{4}}}x^{2n}\end{aligned}}} TheJacobi theta functionsdescribe the world of the elliptic modular functions and they have these Taylor series: ϑ00(x)=1+2∑n=1∞xn2ϑ01(x)=1+2∑n=1∞(−1)nxn2{\displaystyle {\begin{aligned}\vartheta _{00}(x)&=1+2\sum _{n=1}^{\infty }x^{n^{2}}\\\vartheta _{01}(x)&=1+2\sum _{n=1}^{\infty }(-1)^{n}x^{n^{2}}\end{aligned}}} The regularpartition number sequenceP(n) has this generating function: ϑ00(x)−1/6ϑ01(x)−2/3[ϑ00(x)4−ϑ01(x)416x]−1/24=∑n=0∞P(n)xn=∏k=1∞11−xk{\displaystyle \vartheta _{00}(x)^{-1/6}\vartheta _{01}(x)^{-2/3}{\biggl [}{\frac {\vartheta _{00}(x)^{4}-\vartheta _{01}(x)^{4}}{16\,x}}{\biggr ]}^{-1/24}=\sum _{n=0}^{\infty }P(n)x^{n}=\prod _{k=1}^{\infty }{\frac {1}{1-x^{k}}}} The strict partition number sequence Q(n) has that generating function: ϑ00(x)1/6ϑ01(x)−1/3[ϑ00(x)4−ϑ01(x)416x]1/24=∑n=0∞Q(n)xn=∏k=1∞11−x2k−1{\displaystyle \vartheta _{00}(x)^{1/6}\vartheta _{01}(x)^{-1/3}{\biggl [}{\frac {\vartheta _{00}(x)^{4}-\vartheta _{01}(x)^{4}}{16\,x}}{\biggr ]}^{1/24}=\sum _{n=0}^{\infty }Q(n)x^{n}=\prod _{k=1}^{\infty }{\frac {1}{1-x^{2k-1}}}} Several methods exist for the calculation of Taylor series of a large number of functions. One can attempt to use the definition of the Taylor series, though this often requires generalizing the form of the coefficients according to a readily apparent pattern. Alternatively, one can use manipulations such as substitution, multiplication or division, addition or subtraction of standard Taylor series to construct the Taylor series of a function, by virtue of Taylor series being power series. In some cases, one can also derive the Taylor series by repeatedly applyingintegration by parts. Particularly convenient is the use ofcomputer algebra systemsto calculate Taylor series. In order to compute the 7th degree Maclaurin polynomial for the function f(x)=ln⁡(cos⁡x),x∈(−π2,π2),{\displaystyle f(x)=\ln(\cos x),\quad x\in {\bigl (}{-{\tfrac {\pi }{2}}},{\tfrac {\pi }{2}}{\bigr )},} one may first rewrite the function as f(x)=ln(1+(cos⁡x−1)),{\displaystyle f(x)={\ln }{\bigl (}1+(\cos x-1){\bigr )},} the composition of two functionsx↦ln⁡(1+x){\displaystyle x\mapsto \ln(1+x)}andx↦cos⁡x−1.{\displaystyle x\mapsto \cos x-1.}The Taylor series for the natural logarithm is (usingbig O notation) ln⁡(1+x)=x−x22+x33+O(x4){\displaystyle \ln(1+x)=x-{\frac {x^{2}}{2}}+{\frac {x^{3}}{3}}+O{\left(x^{4}\right)}} and for the cosine function cos⁡x−1=−x22+x424−x6720+O(x8).{\displaystyle \cos x-1=-{\frac {x^{2}}{2}}+{\frac {x^{4}}{24}}-{\frac {x^{6}}{720}}+O{\left(x^{8}\right)}.} The first several terms from the second series can be substituted into each term of the first series. 
Because the first term in the second series has degree 2, three terms of the first series suffice to give a 7th-degree polynomial: f(x)=ln⁡(1+(cos⁡x−1))=(cos⁡x−1)−12(cos⁡x−1)2+13(cos⁡x−1)3+O((cos⁡x−1)4)=−x22−x412−x645+O(x8).{\displaystyle {\begin{aligned}f(x)&=\ln {\bigl (}1+(\cos x-1){\bigr )}\\&=(\cos x-1)-{\tfrac {1}{2}}(\cos x-1)^{2}+{\tfrac {1}{3}}(\cos x-1)^{3}+O{\left((\cos x-1)^{4}\right)}\\&=-{\frac {x^{2}}{2}}-{\frac {x^{4}}{12}}-{\frac {x^{6}}{45}}+O{\left(x^{8}\right)}.\end{aligned}}\!} Since the cosine is aneven function, the coefficients for all the odd powers are zero. Suppose we want the Taylor series at 0 of the function g(x)=excos⁡x.{\displaystyle g(x)={\frac {e^{x}}{\cos x}}.\!} The Taylor series for the exponential function is ex=1+x+x22!+x33!+x44!+⋯,{\displaystyle e^{x}=1+x+{\frac {x^{2}}{2!}}+{\frac {x^{3}}{3!}}+{\frac {x^{4}}{4!}}+\cdots ,} and the series for cosine is cos⁡x=1−x22!+x44!−⋯.{\displaystyle \cos x=1-{\frac {x^{2}}{2!}}+{\frac {x^{4}}{4!}}-\cdots .} Assume the series for their quotient is excos⁡x=c0+c1x+c2x2+c3x3+c4x4+⋯{\displaystyle {\frac {e^{x}}{\cos x}}=c_{0}+c_{1}x+c_{2}x^{2}+c_{3}x^{3}+c_{4}x^{4}+\cdots } Multiplying both sides by the denominatorcos⁡x{\displaystyle \cos x}and then expanding it as a series yields ex=(c0+c1x+c2x2+c3x3+c4x4+⋯)(1−x22!+x44!−⋯)=c0+c1x+(c2−c02)x2+(c3−c12)x3+(c4−c22+c04!)x4+⋯{\displaystyle {\begin{aligned}e^{x}&=\left(c_{0}+c_{1}x+c_{2}x^{2}+c_{3}x^{3}+c_{4}x^{4}+\cdots \right)\left(1-{\frac {x^{2}}{2!}}+{\frac {x^{4}}{4!}}-\cdots \right)\\[5mu]&=c_{0}+c_{1}x+\left(c_{2}-{\frac {c_{0}}{2}}\right)x^{2}+\left(c_{3}-{\frac {c_{1}}{2}}\right)x^{3}+\left(c_{4}-{\frac {c_{2}}{2}}+{\frac {c_{0}}{4!}}\right)x^{4}+\cdots \end{aligned}}} Comparing the coefficients ofg(x)cos⁡x{\displaystyle g(x)\cos x}with the coefficients ofex,{\displaystyle e^{x},} c0=1,c1=1,c2−12c0=12,c3−12c1=16,c4−12c2+124c0=124,….{\displaystyle c_{0}=1,\ \ c_{1}=1,\ \ c_{2}-{\tfrac {1}{2}}c_{0}={\tfrac {1}{2}},\ \ c_{3}-{\tfrac {1}{2}}c_{1}={\tfrac {1}{6}},\ \ c_{4}-{\tfrac {1}{2}}c_{2}+{\tfrac {1}{24}}c_{0}={\tfrac {1}{24}},\ \ldots .} The coefficientsci{\displaystyle c_{i}}of the series forg(x){\displaystyle g(x)}can thus be computed one at a time, amounting to long division of the series forex{\displaystyle e^{x}}andcos⁡x{\displaystyle \cos x}: excos⁡x=1+x+x2+23x3+12x4+⋯.{\displaystyle {\frac {e^{x}}{\cos x}}=1+x+x^{2}+{\tfrac {2}{3}}x^{3}+{\tfrac {1}{2}}x^{4}+\cdots .} Here we employ a method called "indirect expansion" to expand the given function. This method uses the known Taylor expansion of the exponential function. 
Here we employ a method called "indirect expansion" to expand the given function. This method uses the known Taylor expansion of the exponential function. In order to expand (1 + x)e^x as a Taylor series in x, we use the known Taylor series of the function e^x:

{\displaystyle e^{x}=\sum _{n=0}^{\infty }{\frac {x^{n}}{n!}}=1+x+{\frac {x^{2}}{2!}}+{\frac {x^{3}}{3!}}+{\frac {x^{4}}{4!}}+\cdots .}

Thus,

{\displaystyle {\begin{aligned}(1+x)e^{x}&=e^{x}+xe^{x}=\sum _{n=0}^{\infty }{\frac {x^{n}}{n!}}+\sum _{n=0}^{\infty }{\frac {x^{n+1}}{n!}}=1+\sum _{n=1}^{\infty }{\frac {x^{n}}{n!}}+\sum _{n=0}^{\infty }{\frac {x^{n+1}}{n!}}\\&=1+\sum _{n=1}^{\infty }{\frac {x^{n}}{n!}}+\sum _{n=1}^{\infty }{\frac {x^{n}}{(n-1)!}}=1+\sum _{n=1}^{\infty }\left({\frac {1}{n!}}+{\frac {1}{(n-1)!}}\right)x^{n}\\&=1+\sum _{n=1}^{\infty }{\frac {n+1}{n!}}x^{n}\\&=\sum _{n=0}^{\infty }{\frac {n+1}{n!}}x^{n}.\end{aligned}}}
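The closed form just derived is easy to check coefficient by coefficient. A short sketch, again assuming SymPy, verifying that the coefficient of x**n in (1 + x)e^x equals (n + 1)/n!:

import sympy as sp

x = sp.symbols('x')

# Expand (1 + x)*exp(x) about 0 and compare each coefficient with (n + 1)/n!.
expansion = sp.series((1 + x) * sp.exp(x), x, 0, 6).removeO()
for n in range(6):
    assert expansion.coeff(x, n) == sp.Integer(n + 1) / sp.factorial(n)
print("all coefficients match")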
Classically, algebraic functions are defined by an algebraic equation, and transcendental functions (including those discussed above) are defined by some property that holds for them, such as a differential equation. For example, the exponential function is the function which is equal to its own derivative everywhere and assumes the value 1 at the origin. However, one may equally well define an analytic function by its Taylor series.

Taylor series are used to define functions and "operators" in diverse areas of mathematics. In particular, this is true in areas where the classical definitions of functions break down. For example, using Taylor series, one may extend analytic functions to sets of matrices and operators, such as the matrix exponential or matrix logarithm. In other areas, such as formal analysis, it is more convenient to work directly with the power series themselves. Thus one may define a solution of a differential equation as a power series which, one hopes to prove, is the Taylor series of the desired solution.

The Taylor series may also be generalized to functions of more than one variable with[20]

{\displaystyle {\begin{aligned}T(x_{1},\ldots ,x_{d})&=\sum _{n_{1}=0}^{\infty }\cdots \sum _{n_{d}=0}^{\infty }{\frac {(x_{1}-a_{1})^{n_{1}}\cdots (x_{d}-a_{d})^{n_{d}}}{n_{1}!\cdots n_{d}!}}\,\left({\frac {\partial ^{n_{1}+\cdots +n_{d}}f}{\partial x_{1}^{n_{1}}\cdots \partial x_{d}^{n_{d}}}}\right)(a_{1},\ldots ,a_{d})\\&=f(a_{1},\ldots ,a_{d})+\sum _{j=1}^{d}{\frac {\partial f(a_{1},\ldots ,a_{d})}{\partial x_{j}}}(x_{j}-a_{j})+{\frac {1}{2!}}\sum _{j=1}^{d}\sum _{k=1}^{d}{\frac {\partial ^{2}f(a_{1},\ldots ,a_{d})}{\partial x_{j}\partial x_{k}}}(x_{j}-a_{j})(x_{k}-a_{k})\\&\qquad \qquad +{\frac {1}{3!}}\sum _{j=1}^{d}\sum _{k=1}^{d}\sum _{l=1}^{d}{\frac {\partial ^{3}f(a_{1},\ldots ,a_{d})}{\partial x_{j}\partial x_{k}\partial x_{l}}}(x_{j}-a_{j})(x_{k}-a_{k})(x_{l}-a_{l})+\cdots \end{aligned}}}

For example, for a function f(x, y) that depends on two variables, x and y, the Taylor series to second order about the point (a, b) is

{\displaystyle f(a,b)+(x-a)f_{x}(a,b)+(y-b)f_{y}(a,b)+{\frac {1}{2!}}{\Big (}(x-a)^{2}f_{xx}(a,b)+2(x-a)(y-b)f_{xy}(a,b)+(y-b)^{2}f_{yy}(a,b){\Big )}}

where the subscripts denote the respective partial derivatives.

A second-order Taylor series expansion of a scalar-valued function of more than one variable can be written compactly as

{\displaystyle T(\mathbf {x} )=f(\mathbf {a} )+(\mathbf {x} -\mathbf {a} )^{\mathsf {T}}Df(\mathbf {a} )+{\frac {1}{2!}}(\mathbf {x} -\mathbf {a} )^{\mathsf {T}}\left\{D^{2}f(\mathbf {a} )\right\}(\mathbf {x} -\mathbf {a} )+\cdots ,}

where Df(a) is the gradient of f evaluated at x = a, and D^2 f(a) is the Hessian matrix. Applying the multi-index notation, the Taylor series for several variables becomes

{\displaystyle T(\mathbf {x} )=\sum _{|\alpha |\geq 0}{\frac {(\mathbf {x} -\mathbf {a} )^{\alpha }}{\alpha !}}\left({\mathrm {\partial } ^{\alpha }}f\right)(\mathbf {a} ),}

which is to be understood as a still more abbreviated multi-index version of the first equation of this paragraph, with a full analogy to the single-variable case.

In order to compute a second-order Taylor series expansion around the point (a, b) = (0, 0) of the function

{\displaystyle f(x,y)=e^{x}\ln(1+y),}

one first computes all the necessary partial derivatives:

{\displaystyle {\begin{aligned}f_{x}&=e^{x}\ln(1+y)\\[6pt]f_{y}&={\frac {e^{x}}{1+y}}\\[6pt]f_{xx}&=e^{x}\ln(1+y)\\[6pt]f_{yy}&=-{\frac {e^{x}}{(1+y)^{2}}}\\[6pt]f_{xy}&=f_{yx}={\frac {e^{x}}{1+y}}.\end{aligned}}}

Evaluating these derivatives at the origin gives the Taylor coefficients

{\displaystyle {\begin{aligned}f_{x}(0,0)&=0\\f_{y}(0,0)&=1\\f_{xx}(0,0)&=0\\f_{yy}(0,0)&=-1\\f_{xy}(0,0)&=f_{yx}(0,0)=1.\end{aligned}}}

Substituting these values into the general formula

{\displaystyle {\begin{aligned}T(x,y)=&f(a,b)+(x-a)f_{x}(a,b)+(y-b)f_{y}(a,b)\\&{}+{\frac {1}{2!}}\left((x-a)^{2}f_{xx}(a,b)+2(x-a)(y-b)f_{xy}(a,b)+(y-b)^{2}f_{yy}(a,b)\right)+\cdots \end{aligned}}}

produces

{\displaystyle {\begin{aligned}T(x,y)&=0+0(x-0)+1(y-0)+{\frac {1}{2}}{\big (}0(x-0)^{2}+2(x-0)(y-0)+(-1)(y-0)^{2}{\big )}+\cdots \\&=y+xy-{\tfrac {1}{2}}y^{2}+\cdots \end{aligned}}}

Since ln(1 + y) is analytic in |y| < 1, we have

{\displaystyle e^{x}\ln(1+y)=y+xy-{\tfrac {1}{2}}y^{2}+\cdots ,\qquad |y|<1.}
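This second-order expansion can be rebuilt mechanically from the multi-index formula. A sketch, again assuming SymPy, that sums the terms with |α| = i + j ≤ 2 for f(x, y) = e^x ln(1 + y) at (0, 0):

import sympy as sp

x, y = sp.symbols('x y')
f = sp.exp(x) * sp.ln(1 + y)

# Sum the multi-index terms with |alpha| = i + j <= 2, evaluated at (0, 0).
T = sum(
    sp.diff(f, x, i, y, j).subs({x: 0, y: 0})
    * x**i * y**j / (sp.factorial(i) * sp.factorial(j))
    for i in range(3)
    for j in range(3)
    if i + j <= 2
)
print(sp.expand(T))  # x*y + y - y**2/2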
The trigonometric Fourier series enables one to express a periodic function (or a function defined on a closed interval [a, b]) as an infinite sum of trigonometric functions (sines and cosines). In this sense, the Fourier series is analogous to the Taylor series, since the latter allows one to express a function as an infinite sum of powers. Nevertheless, the two series differ from each other in several relevant respects.

https://en.wikipedia.org/wiki/Taylor_series
Freedom of information laws allow access by the general public to data held by national governments and, where applicable, by state and local governments. The emergence of freedom of information legislation was a response to increasing dissatisfaction with the secrecy surrounding government policy development and decision making.[1] In recent years the name Access to Information Act has also been used. Such laws establish a "right-to-know" legal process by which requests may be made for government-held information, to be received freely or at minimal cost, barring standard exceptions. Under such laws, also variously referred to as open records or sunshine laws (in the United States), governments are typically bound by a duty to publish and promote openness. In many countries there are constitutional guarantees for the right of access to information, but these usually go unused if specific supporting legislation does not exist. Additionally, the United Nations Sustainable Development Goal 16 has a target to ensure public access to information and the protection of fundamental freedoms, as a means to ensure accountable, inclusive and just institutions.[2]

Over 100 countries around the world have implemented some form of freedom of information legislation.[3][4][5] Sweden's Freedom of the Press Act of 1766 is the oldest in the world.[6][7]

Most freedom of information laws exclude the private sector from their jurisdiction, so information held by the private sector cannot be accessed as a legal right. This limitation has serious implications, because the private sector now performs many functions which were previously the domain of the public sector. As a result, information that was previously public is now held within the private sector, and private contractors cannot be forced to disclose it.[8]

Other countries are working towards introducing such laws, and many regions of countries with national legislation have local laws. For example, all U.S. states have laws governing access to public documents belonging to the state and local taxing entities. Additionally, the U.S. Freedom of Information Act governs record management of documents in the possession of the federal government.

A related concept is open meetings legislation, which allows access to government meetings, not just to the records of them. In many countries, privacy or data protection laws may be part of the freedom of information legislation; the concepts are often closely tied together in political discourse.

A basic principle behind most freedom of information legislation is that the burden of proof falls on the body asked for information, not the person asking for it. The person making the request does not usually have to give an explanation for their actions, but if the information is not disclosed a valid reason has to be given.

In 2015 the UNESCO General Conference voted to designate 28 September as "International Day for the Universal Access to Information" or, as it is more commonly known, Access to Information Day. The date had previously been celebrated as "Right to Know Day" since 2002. The UNESCO resolution recommends approval by the UN General Assembly.[9]

In Albania, the constitution of 1998 guarantees the right of access to information; the legislation supporting this is Law no. 119/2014 "On the right to information" (Ligji nr. 119/2014 "Për të drejtën e informimit"). The law regulates the right of access to information produced or held by the public sector.
The rules contained in this law are designed to ensure public access to information, in the framework of exercising the rights and freedoms of the individual in practice, as well as forming views on the situation of the state and society. The law also aims to encourage integrity, transparency and accountability of public sector bodies. Any person who considers that the rights provided for in this law have been violated is entitled to file an administrative complaint with the Information and Data Protection Commissioner's Office.[10]

In Argentina, the Access to Public Information Act (Ley 27.275) was adopted in 2016.

In Armenia, the Law on Freedom of Information[11] was unanimously approved by the Parliament on 23 September 2003 and went into force in November 2003.

In Australia, the Freedom of Information Act 1982 was passed at the federal level in 1982, applying to all "ministers, departments and public authorities" of the Commonwealth. The act was amended in 2010 under the Rudd Government, establishing the office of the Information Commissioner to further promote freedom of information. There is similar legislation in all states and territories.[12]

"Austria's government has frequently been criticized for inadequate transparency. Official secrecy remains enshrined in the constitution, and Austria's overall legal framework on access to information is weak," writes the NGO Freedom House in its 2022 and 2023 reports.[21][22] According to Reporters Without Borders (RSF): "In Austria, press freedom has been undermined by various political pressures or restrictions on access to information."[23]

In the context of a newly proposed public access law that has yet to be passed by parliament, Transparency International writes: "More than 110 countries have already created freedom of information – Nonsense that this should not be possible in Austria."[24]

In Azerbaijan, a Law on Access to Information was approved in 2005 and has since gone into effect. A Law on Freedom of Information had previously been adopted in 1998, but the 2005 law provided more detailed and secure regulation of access to official information.

On 21 October 2008, the Caretaker Government of Bangladesh issued in the Bangladesh Gazette the Right to Information Ordinance (No. 50 of 2008), based loosely on the Indian Right to Information Act, 2005.[25] The Ordinance was passed by the current government of Bangladesh in the first session of this parliament on 29 March 2009. The A2i programme is a part of Vision 2021, a political manifesto of the Bangladesh Awami League party before winning the national elections of 2008.

In Belgium, Article 32 of the Constitution was amended in 1993 to include a right of access to documents held by the government.

In Belize, the Freedom of Information Act was passed in 1998, was amended in 2000, and is currently in force, though a governmental commission noted that "not much use has been made of the Act".[26]

The National Assembly of Bhutan passed an RTI Bill in February 2014. Its purpose is to curb corruption by providing the public with the right to access information.

Bosnia and Herzegovina (BiH) was the first country in the Balkan region to adopt a Freedom of Information Act. The Freedom of Access to Information Act – or FOIA – was adopted by the Parliamentary Assembly of Bosnia and Herzegovina on 17 November 2000.
Both federal entities – the Republika Srpska and the Federation of Bosnia and Herzegovina – passed freedom of information laws in 2001, the Freedom of Access to Information Act for the Republika Srpska and the Freedom of Access to Information Act for the Federation of Bosnia and Herzegovina respectively. The FOIA was amended at the BiH state level twice. The first alteration, passed in 2006, enabled stronger legal protection within the framework of the administrative law of BiH. The second, passed in December 2009, introduced legal penalties for prescribed violations.

In Brazil, Article 5, XXXIII, of the Constitution provides that "everyone shall have the right to receive information of his own interest or of public interest from public entities, which shall be given within the time prescribed by law". Also, Article 22 of Federal Law No. 8.159/1991 grants the right to "full access to public documents". A statute passed in 2011 that entered into force in 2012 (Federal Law 12.527/2011, promulgated on 28 November 2011) regulates the manner and the timetable for the information to be given by the State.

In Bulgaria, the Access to Public Information Act (APIA) was passed in 2000, following a 1996 recommendation from the Constitutional Court to implement such a law. The act defined public information as any information related to social life in the Republic of Bulgaria. It allows citizens of Bulgaria access to public information created by state bodies and provides principles under which the information may be accessed, as well as when access can be denied.[27]

The Access to Public Information Act was amended in 2005, 2007, 2008, 2011, 2015, and 2018. Several amendments, particularly those made in 2007 and 2018, faced backlash from government organizations, media, journalists, and information advocates. The 2007 amendments to the act were criticized for limiting access by extending reply times from 14 to 20 business days, removing the obligation for public authorities to provide partial access, and allowing fees to be charged for information requests. Despite widespread criticism, all of the 2007 amendments were passed.[28]

In 2008, the range of authorities obligated to provide information was expanded, and an obligation to proactively publish information online was introduced. Additional focus was placed on access to information related to trade secrets.[29]

The year 2015 brought extensive changes to the APIA, with a focus on the digital aspects of information access, such as the publication of information online and the acceptance of e-requests.
The amendments included: allowing citizens to submit e-requests for information with no need for electronic signatures; clarifying the definition of a "public law organization", expanding the set of organizations that must comply with the APIA; requiring public sector bodies to publish their material in machine-readable formats with all appropriate metadata; regulating the calculation of fees and information reuse; and expanding the categories of information required to be proactively made available online.[30] The amendments also address third-party consent and dissent, allowing public agencies to provide partial access to information to a requester if a third party does not respond within 14 days.[30]

In 2018, amendments were introduced to Article 40 of the APIA, which states that a grant or denial of access to information may be appealed before the Administrative Court or the Supreme Court.[27] The 2018 amendment to Article 40 made information requests unable to be subjected to a cassation appeal.[31]

In Canada, the Access to Information Act allows citizens to demand records from federal bodies. The act came into force in 1983, under the Pierre Trudeau government, permitting Canadians to retrieve information from government files, establishing what information could be accessed, and mandating timelines for response.[32] It is enforced by the Information Commissioner of Canada.

There is also a complementary Privacy Act, introduced in 1983. The purpose of the Privacy Act is to extend the laws of Canada that protect the privacy of individuals with respect to personal information about themselves held by a federal government institution, and to provide individuals with a right of access to that information. It is a Crown copyright. Complaints for possible violations of the Act may be reported to the Privacy Commissioner of Canada.

Canadian access to information laws distinguish between access to records generally and access to records that contain personal information about the person making the request. Subject to exceptions, individuals have a right of access to records that contain their own personal information under the Privacy Act, but the general public does not have a right of access to records that contain personal information about others under the Access to Information Act.

Each province and territory in Canada has its own access to information legislation; in all cases, this also serves as the provincial public sector privacy legislation.

From 1989 to 2008, requests made to the federal government were catalogued in the Coordination of Access to Information Requests System. A 393-page report released in September 2008, sponsored by several Canadian newspaper groups, compares Canada's Access to Information Act to the FOI laws of the provinces and of 68 other nations.[33] In 2009, The Walrus magazine published a detailed history of FOI in Canada.[34]

The Freedom of Information Law in the Cayman Islands was passed in 2007 and was brought into force in January 2009. The act applies to public authorities and grants citizens the right to access information created by those public authorities.[35] The act was last revised in January 2021 and includes six parts: Preliminary, Right of Access, Repealed, Internal Review, Information Managers, and Miscellaneous.[36]

In the Cayman Islands, all information requests are processed by information managers working in public authorities. A PDF listing all public authorities, information managers, and contact information as of 2025 is available on the Cayman Islands Government website.
Part 1, Preliminary, contains the citation for the Act and definitions of chief officer, consent, information manager, personal information, and public access. Part 2, Right of Access, covers general matters such as applications by third parties, provisions on access, reasonable search, receipt and acknowledgment of requests, access to records during working hours, personal information, and third-party rights and fees. Part 3, Repealed, contains three provisions that have since been repealed and removed from the act. Part 4, Internal Review, states that an internal review can be conducted by a person of higher or equal rank to whoever made the initial decision about an information request. Part 5, Information Managers, outlines the role of information managers, their part in internal reviews, and what information they need to register and monitor information requests. Finally, Part 6, Miscellaneous, discusses what to do if a minor places an information request: a child does not need parental consent to place a request, but the information manager may decide to withhold access depending on the content of the request.[36]

In Chile, Article 8 of the Constitution provides for the freedom of information. A law titled Law on Access to Public Information (Ley de Acceso a la Información Pública) took effect on 20 April 2009.[37]

In April 2007, the State Council of the People's Republic of China promulgated the "Regulations of the People's Republic of China on Open Government Information" (中华人民共和国政府信息公开条例), which came into effect on 1 May 2008.[38]

The Colombian constitution grants the right of access to public information through Law 57 of 1985, which mandates the publishing of acts and official documents. This applies to documents that belong to official facilities (offices or the like). Additionally, there is the anticorruption statute, Law 190 of 1995, also known as the anti-corruption act, which in its 51st article mandates public offices to list in a visible area all the contracts and purchases made each month; the latter is being implemented slowly. A more modern law, the "Ley de transparencia y del derecho de acceso a la información pública nacional", is at its final stages.[39]

Article 23 of the constitution states that "Every person has the right to present petitions to the authorities for the general or private interest and to secure their prompt resolution. The legislative body may regulate the presentation of petitions to private organisations in order to guarantee fundamental rights."[40] This article justifies the existence of a jurisdictional mechanism known as the petition action. This action is regulated by Law 1755 of 2015 and is considered by Colombian judicial doctrine to be a fundamental human right. According to the law, all petitions must be fully addressed within 15 business days; if they are not, the official in charge of resolving the petition may be charged with misconduct.[41]

In the Cook Islands, access to official information is governed by the Official Information Act 2008. The law is based heavily on the New Zealand legislation.

In Croatia, the Zakon o pravu na pristup informacijama (Act on the Right of Access to Information), first introduced in 2003, extends to all public authorities.[42]

The right of access to information in Cyprus is guaranteed in constitutional provisions on freedom of expression. Law No. 184(I)/2017 on access to information, covering the southern part of the Republic of Cyprus, was published on 22 December 2017.
In the occupied northern part of the island, a separate law falls below Council of Europe standards.[43] The right of access to public information is thus provided in different ways in the two parts of the island, into which Cyprus is de facto divided. As of 2011, research by the Open Cyprus Project showed a level of 75% administrative silence island-wide in response to information requests.[44] Over half of the respondents to this survey stated that, in practice, access to key documents is not possible.[44]

Since late 2013, a draft law on the right to access public information had been discussed in the Parliament of the Republic of Cyprus. On 22 December 2017 the law was finally approved (Law No. 184(I)/2017 on the Right of Access to Information of the Public Sector).

In the Czech Republic, the Zákon č. 106/1999 Sb., o svobodném přístupu k informacím (Act No. 106/1999 Coll., on Free Access to Information) covers "state agencies, territorial self-administration authorities and public institutions managing public funds", as well as anybody authorized by law to reach legal decisions relating to the public sector, to the extent of such authorisation.[45]

The Access to Public Administration Files Act of 1985 is a Danish act passed by the Folketing concerning public access to governmental records. The Act came into force in 1987 and repealed the Public Records Act of 1970.[46] A new version of the Act came into force on 1 January 2014.[47] Denmark is considered a historic pioneer in the field of FOI, along with Sweden, Finland and Norway.[48] There is no basis in the Constitution of Denmark for the right of the public to information.[49] Denmark scores 64 points in the Global Right to Information Rating.[50]

Section 4 Part 1 of the Act of 1985 states that "any person may ask to see documents received or issued by an administrative authority."[51] Information can be acquired concerning administrative matters of the public administration, electricity and heating utilities, and private bodies receiving public funding or performing a public function.
Yet information concerning the activities of the judicial branch and of legislators is not accessible.[52][53]

Reasons do not have to be given when making a request; however, the authorities can ask for additional information regarding the document.[53] Requests are supposed to be handled as soon as possible; if a response to an application is not provided within 10 days, the authority has to give reasons for the delay as well as an expected date for a decision.[54] More detailed procedures are not laid down in the Act.[53]

Access to information is limited by "the obligation to maintain secrecy."[55]: Ch.4, S.14 Considerations of State security, defense, foreign policy, and external economic interests, as well as public financial interests, can limit the granting of access to information.[55]: Ch.3, S.13 Registers and records processed electronically are excluded from the administrative documents to which access can be given.[55]: Ch.2, S.5.2 Section 10 outlines other areas excluded from access, such as records of meetings of the Council of State, minutes and documents prepared for such meetings, correspondence between ministries concerning legislation, and material used for scientific research or public statistics.[55]: Ch.3, S.10

A decision to grant or refuse access can be appealed.[56][55]: Ch.4, S.15.2 Decisions can also be appealed externally to the Folketingets Ombudsman.[56][57] The Ombudsman can also deliver opinions and review decisions; however, these are not binding, even though they are generally followed.[57] The Ombudsman receives 200–300 complaints annually; approximately 15 percent of complaints are ruled in favor of appellants.[57]

The exemption regarding EU documents was taken out of the Act in 1991.[58] Amendments were also made in 2000, concerning data on the employees of the Government.[58] In January 2014 a new Public Records Act came into force.[59] The new act was highly debated, since it was considered to limit transparency in government and legislative proceedings; Denmark received one point less in the category of Political Environment compared with the Freedom of the Press report of 2015.[60] The new legislation caused demonstrations and protests.[60] It can be regarded as a response to the 9/11 terrorist attacks.[60] After the Public Records Act of 2013 came into effect, public access to information regarding the Intelligence Services is managed by the Act on the Security and Intelligence Service and the Act on the Defense Intelligence Service, instead of falling under the Public Records Act.[60] In addition, access to the legislative process was further restricted: according to the new Act, documents in the drafting stage cannot be accessed, nor can "other corresponding political activities," so the restriction does not concern Bills only.[60] Ministers' calendars will also no longer be published.[60] Nevertheless, the Act was created with the strengthening of the Open Government project in mind; the list of institutions covered by the Act was extended, as was the list of covered public-private institutions and companies.[60]

In the Dominican Republic, President Hipólito Mejía approved Ley No. 200-04 – Ley General de Libre Acceso a la Información Pública[61] (Law No. 200-04 – General Law on Free Access to Public Information) on 28 July 2004, which allows public access to information from the government and from private organisations that receive public money to conduct state business. Rough drafts and projects that are not part of an administrative procedure are not included.
In Ecuador, the Transparency and Access to Information Law of 2004 declares that the right of access to information is guaranteed by the state.

In El Salvador, the Law on Access to Public Information was given assent by the Legislative Assembly of El Salvador on 3 March 2011.[62] The act ensures that the right to access information is guaranteed by the state and that all organizations and institutions receiving funding from the government are required to set up a website listing "bylaws, regulations, plans, directories, staff salaries, services provided, collective bargain contracts, budgets, auditing results, contracts, acquisitions, credits and loans, among other reports".[63]

In Estonia, the Public Information Act[64] of 2000 seeks to "ensure that the public and every person has the opportunity to access information intended for public use, based on the principles of a democratic and social rule of law and an open society, and to create opportunities for the public to monitor the performance of public duties". It extends to all "holders of information", covering all state and local government bodies, legal persons in public law, and legal persons in private law if they are performing public duties (providing health, education, etc.).

In matters concerning the local, national and transboundary environment, the Aarhus Convention grants the public rights regarding access to information, public participation and access to justice in governmental decision-making processes. It focuses on interactions between the public and public authorities.

The recognition of the right of access to public information under Article 10 (including "freedom (..) to receive (..) information") of the European Convention on Human Rights was one of the subjects in the Guerra v. Italy case before the European Court of Human Rights in 1998. The majority considered that Article 10 was not applicable to the complaint. However, the court found that in the specific case, which involved living near a high-risk factory, not providing information was in violation of Article 8 (respect for private and family life). In addition, two judges dissented on the applicability of Article 10, and a further six judges reserved the possibility that, in other circumstances, the right of access to information could be protected by Article 10.[65]

The Parliamentary Assembly of the Council of Europe considered in 1996 that "public access to clear and full information on this subject [Chernobyl disaster]—and many others for that matter—must be viewed as a basic human right".[66] In 2009, the CoE Convention on Access to Official Documents was opened for signature.[67]

Article 42 CFR and Article 15 TFEU give "[a]ny citizen of the Union, and any natural or legal person residing or having its registered office in a Member State, [...] a right of access to documents of the institutions, bodies, offices and agencies of the Union, whatever their medium." It follows from Article 15 TFEU that this right is "subject to the principles and the conditions to be defined" in legislation. Regulation (EC) No 1049/2001 of the European Parliament and the Council of 30 May 2001 regarding public access to European Parliament, Council and Commission documents[68] further defines this right of access to documents of the three institutions; for most other EU bodies and agencies, there is a provision in the legal act establishing them which makes Regulation No 1049/2001 applicable to them as well.[69] In some other cases, specific rules apply (e.g.
to the EESC,[70] the CoR,[71] the Court of Justice,[72] the Court of Auditors[73] and the ECB).[74] "Document" is defined broadly, and it is assumed that all documents, even if classified, may be subject to the right of access unless they fall under one of the exceptions. If access is refused, the applicant is allowed a confirmatory request. A complaint against a refusal can be made with the European Ombudsman and/or an appeal can be brought before the European General Court.

In addition, Directive 2003/98/EC of the European Parliament and the Council of 17 November 2003 on the re-use of public sector information[75] sets out the rules and practices for accessing public sector information resources for further exploitation. This directive was revised in 2013 by Directive 2013/37/EU of the European Parliament and the Council of 26 June 2013 amending Directive 2003/98/EC on the re-use of public sector information.[76]

Since 2008, the European Commission has operated the Register of Interest Representatives, a voluntary register of lobbyists at the European Union.[77]

Directive 2003/4/EC of the European Parliament and Council provides for citizens of each country to have freedom of access to information on the environment, in line with the requirements of the Aarhus Convention. Governments are required to transcribe the directive into national legislation (for example, in the United Kingdom, the Environmental Information Regulations 2004).

Directive 95/46/EC, the Data Protection Directive, provides a variety of rights in relation to personal data, including a right of access. This has been transcribed into national legislation through, for example, the Data Protection Act 1998 (United Kingdom) and the Data Protection Act 2003 (Ireland).

In Finland, the Laki yleisten asiakirjain julkisuudesta 9.2.1951/83 (Act on the Openness of Public Documents of 1951) established the openness of all records and documents in the possession of officials of the state, municipalities, and registered religious communities. Exceptions to the basic principle could only be made by law, or by an executive order for specific enumerated reasons such as national security. The openness of unsigned draft documents was not mandated but was up to the consideration of the public official. This weakness of the law was removed when the law was revised in the 1990s.

The revised law, the Laki viranomaisten toiminnan julkisuudesta 21.5.1999/621 (Act on the Openness of Government Activities of 1999), called in short the "Publicity Act" (Finnish: Julkisuuslaki), also extended the principle of openness to corporations that perform legally mandated public duties, such as pension funds and public utilities, and to computer documents.[78]

The Publicity Act establishes a process by which any person may access any record in the possession of an authority. The person may ask the authority for the document in person or in writing. When making the request, the requester needs to specify the document so that it can be identified; the authority is, however, obliged to assist the person in this task with its document registers and indices. After receiving the request, the authority has two weeks to give the document. If the decision is negative and the document is withheld, the requester may appeal to the administrative court. The document may be given orally, for reading and copying on the authority's premises, or as an electronic or paper copy, as requested.
However, copying may be declined if it would be unfeasible because of the large number of documents, or if it would be otherwise technically difficult. There are also a number of limitations on the release of electronic documents, designed to protect individual privacy.[79]: §§13, 14, 15

The reasons for withholding a document are listed in Article 24 of the Act. They may be grouped into three categories: automatic non-openness, conditional non-openness, and conditional openness. Documents for which automatic non-openness is prescribed remain withheld in all cases. In the case of conditional non-openness, the reasonability of the non-openness is reviewed case by case by the authority and, if appeals are made, by the court. In the third category, openness is the rule, and the reason for non-openness needs to be established by the authority.[79]: §24

The absolute reasons for non-openness are (subpoints of Article 24 in captions):[79]: §24

Conditional non-openness is mandated for the following categories of documents, unless it is "obviously clear" that the protected interest is not endangered:[79]: §24.1

Conditional openness is prescribed for the following categories of information:[79]: §24.1

Non-open information remains non-open for 25 years after it was created or obtained by an authority. Documents that are non-open to protect the privacy of an individual remain non-open for 50 years after the protected individual has died.[79]: §31.2, §31.5 If information is still valid after 25 years and describes a security measure of a building, facility, system or method, or is still part of a plan used for national defence or civil defence, it remains non-open as long as the information is pertinent for that purpose. The same indefinite non-openness applies to all documents under international security obligations, if their release might still affect Finnish foreign relations negatively. The non-openness of other documents may be prolonged by up to 55 years by the Council of State, if necessary to safeguard a protected interest.[79]: §31.3–4

In France, the accountability of public servants is a constitutional right, according to the Declaration of the Rights of Man and of the Citizen. The implementing legislation is the Loi n°78-753 du 17 juillet 1978 portant diverses mesures d'amélioration des relations entre l'administration et le public et diverses dispositions d'ordre administratif, social et fiscal (Act No. 78-753 of 17 July 1978 on various measures for improved relations between the Civil Service and the public and on various arrangements of an administrative, social and fiscal nature). It sets as a general rule that citizens can request a copy of any administrative document (in paper, digitised or other form), and establishes the Commission d'Accès aux Documents Administratifs, an independent administrative authority, to oversee the process, although no administration is required to accept those requests.[80]

In Georgia, the General Administrative Code contains a Law on Freedom of Information.

In Germany, the federal government passed a freedom of information law on 5 September 2005; it was last updated on 7 August 2013.[81] The law grants each person an unconditional right to access official federal information. No legal, commercial, or other kind of justification is necessary.
Thirteen of the sixteen Bundesländer – Baden-Württemberg, Berlin, Brandenburg, Bremen, Hamburg, Hesse, Mecklenburg-Vorpommern, Nordrhein-Westfalen, Rheinland-Pfalz, Saarland, Sachsen-Anhalt, Schleswig-Holstein and Thüringen – have approved individual "Informationsfreiheitsgesetze" (freedom of information laws).

In Greece, the 1975 Greek Constitution guaranteed the right of access to administrative documents and the right of citizens to obtain information. However, it was not until 1986 that the first law was passed to provide for access to information.[82]

Article 16 (Right to Access Administrative Documents – Δικαίωμα γνώσης διοικητικών εγγράφων) of Law 1599/1986 (State–citizenry Relationship – Σχέσεις Κράτους-πολίτη) introduced the right of all citizens to read most administrative documents. This right is now codified as Article 5 (Access to documents – Πρόσβαση σε έγγραφα) of the Administrative Procedural Code (Κώδικας Διοικητικής Διαδικασίας), Law 2690/1999. Under this article, citizens have a right to know the content of administrative documents. Administrative documents are defined as those produced by public sector entities, such as reports, studies, minutes, statistical data, circulars, instructions, responses, consultatory responses, and decisions. In addition, citizens with a legitimate interest may also access private documents stored by public services.[83] The right cannot be exercised if the document concerns the private or family lives of others, or if the document's confidentiality is safeguarded by specific legal provisions. Furthermore, the public body can refuse access if the document refers to discussions in the Cabinet, or if accessing the document could seriously hamper investigations of criminal or administrative violations carried out by judicial, police, or military authorities.[84]

Citizens may study the documents at the place where they are archived, or they may obtain a copy at their own cost. Access to documents should take into account whether they are covered by copyright, patent, or trade secret regulations. In addition, Law 3448/2006, on the reuse of public sector information, harmonizes the national laws with the requirements of European Union Directive 2003/98/EC.[85]

Guyana has a freedom of information act, which came into force in 2013, but it has relatively weak provisions. A commission tasked with ensuring asset declarations by government officials has been functioning since 2018. Guyana also joined the EITI, which promotes transparency in the proceeds from countries' oil reserves.[86]

In Hong Kong there are no laws specifically enacted to guarantee the freedom of information. Since March 1995, the Government of Hong Kong has promulgated a "Code on Access to Information" to serve a similar purpose. This code, like other internal regulations of the Government, was not legislated by the Legislative Council and has a minimal legal status. It requires government agencies listed in its appendix to appoint Access to Information Officers to answer citizens' requests for governmental records. A fee may be charged prior to the release of information. The code does not require the government to archive information.[87]

In Hungary, the Act on the Protection of Personal Data and Public Access to Data of Public Interest of 1992 extends a right of access to all data of public interest, defined as any information processed by a body performing a governmental function.
Complaints and contested applications may be appealed to the Data Protection Commissioner (until 2011) or to the court.[88]

In 2005 the Parliament adopted the Act on the Freedom of Information by Electronic Means (Act XC of 2005). The Act has three basic parts: 1. electronic disclosure of certain data by public sector bodies, 2. publicity of legislation, and 3. openness of court decisions.

From 2010 on, the Second Orbán Government changed considerable parts of the legislation, amending the constitution and releasing a completely rewritten law (Act CXII of 2011 on the right to informational self-determination and freedom of information). The move discontinued the Data Protection Commissioner's office (in January 2012 the European Commission launched infringement proceedings against Hungary for the abolition of the position and for ending it mid-term), and moved data protection into the National Authority for Data Protection and Freedom of Information (NAIH), a government body run by a leader loyal to the government; as a result, controversial data is withheld without merit and needs to be forced out by lengthy and costly court processes. The law covers openness of public data (Sections III and IV) and protection of personal data (Section II).

In Iceland the Information Act (Upplýsingalög), Act No. 50/1996,[89] gives access to public information.

In India, the Right to Information Act (RTI Act) was passed by Parliament on 11 May 2005 and was published in the Gazette of India on 15 June 2005. It came into effect on 12 October 2005,[90][91] replacing the erstwhile Freedom of Information Act, 2002. The Supreme Court of India had, in several judgments prior to the enactment of both Acts, interpreted the Indian Constitution as including a right to information within the fundamental right to freedom of speech and expression and the right to life. The RTI Act laid down a procedure to guarantee this right.

Under this law all government bodies or government-funded agencies have to designate a Public Information Officer (PIO). The PIO's responsibility is to ensure that requested information is disclosed to the petitioner within 30 days, or within 48 hours in the case of information concerning the life or liberty of a person. The law was inspired by previous legislation from select states (among them Tamil Nadu (1997), Goa (1997), Rajasthan (2000), Karnataka (2000), Delhi (2001), Maharashtra (2002), etc.) that allowed the right to information (to different degrees) to citizens about activities of any state government body.

A number of high-profile disclosures revealed corruption in various government schemes, such as scams in Public Distribution Systems (ration stores), disaster relief, and construction of highways. The law itself has been hailed as a landmark in India's drive towards more openness and accountability.

However, the RTI Act has certain weaknesses that hamper implementation. There have been questions about the lack of speedy appeal against non-compliance with requests. The lack of a central PIO makes it difficult to pinpoint the correct PIO to approach with requests. There is also criticism of the manner in which the Information Commissioners are appointed to head the information commission.
RTI activists allege that bureaucrats working in close proximity to the government are appointed to the RTI Commissions in a non-transparent manner.[92] The PIO, being an officer of the relevant government institution, may have a vested interest in not disclosing damaging information about the activities of his or her institution; this creates a conflict of interest. In the state of Maharashtra it was estimated that only 30% of requests are actually fulfilled under the Maharashtra Right to Information Act. The law does not allow disclosure of information that affects national security, defence, and other matters that are deemed of national interest.[93][94][95][96][97][98][99][100][101]

In Iran, the Law on Dissemination of and Free Access to Information was approved by the Iranian Parliament in 2008. Its English and Arabic renditions were officially released as part of the government's efforts to promote freedom of information (FOI) in October 2018.[102] In 2023 the Iranian government brought charges against the newspaper Etemad after it published information, obtained through FOI requests, on a hijab watch guards law whose existence the government had denied; the government claimed the material was top secret.[103]

In Ireland, the Freedom of Information Act 1997 came into effect in April 1998, one year after its enactment.[104] It provided for members of the public to access information specifically about themselves, amend incorrect information, and request an explanation behind administrative decisions concerning themselves, as well as allowing any person to access records generated by a list of specified public bodies. The Act is seen as having led to a sea change in the relationship between citizens, journalists, government departments and public bodies. Disclosure is the default assumption of the Act; bodies can withhold information only by citing exemptions specified in the legislation. Decisions of public bodies in relation to requests for information may be reviewed by the Office of the Information Commissioner.

The 1997 Act was amended by the Freedom of Information (Amendment) Act 2003.[105] The amendments introduced fees for non-personal requests and restricted the kinds of material which could be accessed. The Freedom of Information Act 2014 repealed the 1997 and 2003 Acts, removing most of the restrictions introduced in 2003 and widening the range of bodies covered to all public bodies, unless specifically exempt.[106] It also allowed for the government to prescribe (or designate) other bodies receiving significant public funds, so that the FOI legislation would also apply to them.

In Israel, the Freedom of Information Law, 5758–1998, supported by the Freedom of Information Regulations, 5759–1999, controls freedom of information. It defines the bodies subject to the legislation by a set of listed categories – essentially, most public bodies – and provides for the government to publish a list of all affected bodies. However, this list does not seem to have been made publicly available, if indeed it was ever compiled. Many public bodies are not obliged to follow the law, which limits the potential for use by the public.

The Israeli Freedom of Information Law has, in some cases, achieved the opposite of the intended result: some government agencies now take the position that a citizen may only request information via a FOIL request, that is, an official letter designated as such and including the 95 shekel fee.
Thus an Israeli citizen in many cases cannot simply write a letter asking a question, but can be required to file a FOIL application with a fee and wait the minimum statutory 30 days for a reply, which the agency can extend to 60 days. In many cases FOIL letters are simply ignored, or a laconic response is sent stating that the request is unclear, unspecific, too vague, or some other legalese, in order to keep the information away from the public. When the 60 days are up and the request has yielded nothing significant, the applicant must petition the District Court to compel disclosure, a procedure that requires attorneys to draft pleadings and payment of a court fee of approximately $420. A judgment in such FOIL appeals in Israel can take many months, and again the agency can avoid disclosure by simply not complying, although it risks being charged with contempt of court. While there are some successes in courts compelling Israeli government agencies to disclose information, they are usually in non-controversial areas. The law provides for the expected "security" exemption, and an applicant applying for such information can expect not to benefit from FOIL (and also to have his or her court appeal rejected). Applicants can sometimes be helped by The Movement for Freedom of Information.[107]

While Italy does not have a single freedom of information act, it has enacted several laws on access to information over the past 35 years. Chapter V of Law No. 241 of 7 August 1990, which provides for access to administrative documents, was the first Italian law to allow information requests. However, the right of access it grants is limited: the law states that those requesting information must have a legal interest, and the 1992 regulations require "a personal concrete interest to safeguard in legally relevant situations."[108] The act was amended in 2005, inserting the principle of transparency into the law and rewriting Article 22 to state that access to administrative documents is meant to promote transparency and participation.[109]

In 2013, Article 5 of d.l. 33/2013, also known as the transparency decree, was written into law, expanding the limited access granted by Chapter V of Law No. 241 without replacing it. The article defined transparency as "total accessibility (of data and documents held by public administrations, in order to protect citizens' rights, promote the participation of data subjects in administrative activity and) encourage widespread forms of control over the pursuit of institutional functions and the use of public resources."[110] In 2016 Legislative Decree No. 97 amended Article 5 of d.l. 33/2013. Under the 2016 legislation, any person has a right to obtain access to documents, information and data that public entities hold.[111] No particular interest is required in this case, but the law states specific limits on this right, mainly to balance it against other public and private rights. In some cases these limits are absolute, and in other cases they are subject to discretion.[112] The legislation also outlines general civic access and its limitations, as well as how to submit information requests.[113]

The last update to the transparency decree was made in 2022, with Legislative Decree No. 104/2022.
The act expands coverage to information requests between employee and employer and applies to all employment relationships as of 1 August 2021.[114] It requires that specific and complete information related to employment contracts be provided between employer and employee. It protects employees' right to request access to information from employers, as well as their rights regarding stability of employment, work planning, multiple employments, and mandatory probationary and training periods.[114]

In Jamaica, the relevant legislation is the Access to Information Act, 2002.[115]

In Japan, the "Law Concerning Access to Information Held by Administrative Organs" (行政機関の保有する情報の公開に関する法律) was promulgated in 1999 and came into force in 2001. Small town governments, rather than the national government, were the first to take measures to enact freedom of information, as the national government was "not...as eager as local governments to deal with freedom of information legislation".[116] Local efforts in some ways predate national efforts; in many local governments, regulations about information disclosure (情報公開条例) were established starting from the latter half of the 1980s.[117]

The Constitution of Latvia states: "Article 100. Everyone has the right to freedom of expression, which includes the right to freely receive, keep and distribute information and to express his or her views. Censorship is prohibited." The right of access to state-held information has been repeatedly recognized by the Constitutional Court of Latvia, most notably in its judgment "On Conformity of the Cabinet of Ministers 21 January 1997 Regulations No. 46 'On Government Agreements' with the 20 November 1998 'Information Accessibility Law'".[118][119]

The Latvian Law on Freedom of Information was signed into law by the State President in November 1998 and has since been amended a number of times. Any person can ask for information in "any technically feasible form" without having to show a reason. The request can be oral or written. Bodies must respond within 15 days.

In Liberia, President Ellen Johnson Sirleaf signed the Freedom of Information Act of 2010 into law in October 2010. Liberia became only the fourth country in Africa, and the first in West Africa, to pass such legislation.[120] The law allows both the media and individual citizens to demand information from any public authority, or from any private authority that carries out government functions.[121]

In Malta, Legal Notice 156 of 2012 brought the Freedom of Information Act (Chapter 496 of the Laws of Malta) fully into force on 1 September 2012, allowing the public (resident citizens of Malta, the EU and the EEA) to submit requests for documents and information held by the Government. FOI requests are submitted free of charge, but the processing of documents by public authorities may require the public to pay fees, which never exceed EUR 40. When access to documents is refused, the FOIA in Malta provides for a complaint and appeal mechanism that can ultimately be resolved through the Courts of Appeal.

Article 16 of the Constitution of North Macedonia guarantees "access to information and the freedom of reception and transmission of information". The Law on Free Access to Information of Public Character was adopted on 25 January 2006 and was scheduled to go into force in September 2006. The law allows any natural or legal person to obtain information from state and municipal bodies, and from natural and legal persons performing public functions. Requests can be oral, written or electronic.
Requests must be responded to within 10 days.

In Malaysia, the state of Selangor passed the Freedom of Information Enactment (Selangor) 2010 on 1 April 2011, allowing the Malaysian public access to state documents, including those of local councils, city halls and state government-linked companies.[122] Subsequently, the state of Penang passed a Freedom of Information bill on 4 November 2011, likewise allowing the public to access state documents.[123] Both states were governed by the federal opposition Pakatan Rakyat.

The Maldives passed the Right to Information Act (RTI) on 12 January 2014.[124]

In Mexico, the Constitution was amended in 1977 to include a right of freedom of information; Article 6 says in part that "the right of information shall be guaranteed by the state." The Supreme Court made a number of decisions further enhancing that right. The Federal Law of Transparency and Access to Public Government Information was unanimously approved by Congress in April 2002 and signed by President Fox in June 2002. It went into effect in June 2003.

In Moldova, Article 34 of the Constitution provides for a right of access to information. The Law of the Republic of Moldova on Access to Information[125] was approved by Parliament in May 2000 and went into force in August 2000. Under the law, citizens and residents of Moldova can demand information from state institutions, organisations financed by the public budget, and individuals and legal entities that provide public services and hold official information.

A freedom of information law was passed in Montenegro late in 2005, after a process of several years.

In Nepal, the government passed a draft information act in September 2007. Based on that draft, a specific law regulating the right to information was enacted on 18 July 2007. In February 2009, the National Information Commission was formed under the Right to Information Act, 2007, for the protection, promotion and enforcement of the right to information in Nepal.[126]

In the Netherlands, Article 110 of the Constitution states: "In the exercise of their duties government bodies shall observe the principle of transparency in accordance with the rules to be prescribed by Act of Parliament." The Dutch act on public access to government information entered into force in 1980 and has been updated several times since. Under the act, known as the Wet Openbaarheid van Bestuur, or Wob for short, any person can demand information (called "wobbing") related to an administrative matter if it is contained in documents held by public authorities or companies carrying out work for a public authority. The request can be either written or oral. The authority has two weeks (on environmental issues) or four weeks to respond. The act also obliges the government to provide information unsolicited, as this is in the interest of good and democratic governance.

In New Zealand, the relevant legislation is the Official Information Act 1982. This implemented a general policy of openness regarding official documents and replaced the Official Secrets Act.

In Nigeria, former President Goodluck Jonathan signed into law the Freedom of Information (FoI) Bill, awaited for 12 years by media proprietors and practitioners alike, during which time the presidential Villa was criticized for filibustering and lawmakers complained of bombardment by campaigners. The House of Representatives passed the Bill on 24 February 2011, and the Senate delivered on its promise to pass it on 16 March.
The harmonized version was passed by both Chambers on 26 May 2011. It was conveyed to Jonathan on 27 May, and he signed it on 28 May 2011, according to a statement issued by Aso Rock.[127] Two states in Nigeria (Ekiti and Lagos) have adopted the Freedom of Information Act at state level, but have extended the response deadline from 7 days to 14 days. More states are expected to adopt the bill and come up with their own versions.

In Norway, the current freedom of information legislation was enacted on 19 May 2006[128] and superseded the previous law of 1970[129] as of 1 January 2009. Article 100 of the Constitution gives access to public documents.[130] The basic principle of the law is that everyone has the right of access to state and municipal documents and to be present at sittings of courts and elected assemblies.

President Pervez Musharraf promulgated the Freedom of Information Ordinance 2002 in October 2002.[131] The law allows any citizen access to public records held by a public body of the federal government, including ministries, departments, boards, councils, courts and tribunals. It does not apply to government-owned corporations or provincial governments. The bodies must respond within 21 days. More recently, by virtue of the 18th Amendment of 2010, Article 19A has been inserted into the Constitution of Pakistan.[132] It gives the right of access to information the status of a fundamental constitutional right. Article 19A, "Right to Information", reads: "Every citizen shall have the right to have access to information in all matters of public importance subject to regulation and reasonable restrictions imposed by law".

The National Constitution of Paraguay,[133] enacted in 1992, guarantees the right to be informed and to receive true, responsible, and equitable information (Art. 28). The same article states that public sources of information are free, and that a law will regulate the modalities, time periods, and sanctions "in order to make this right effective". In practice, this last provision delayed the recognition of the right due to the absence of a law making it "effective". Congress, government agencies and courts were reluctant to enforce the right to access public sources of information until 2013, when a Supreme Court judgment (No. 1306 of 15 October 2013)[134] marked the beginning of what has been called a "Transparency Spring".[135]

The ruling from the Supreme Court was made in the context of an Amparo filed by a citizen named Jose Daniel Vargas Tellez after the San Lorenzo Municipality denied him access to information about the names, job descriptions and wages of all the employees working in that public office. The Court of First Instance and the Court of Appeals rejected the Amparo on the grounds that information of that type was considered sensitive by the Data Protection and Privacy Act (Laws 1682/02 and 1969/02). The latter rulings were challenged on constitutional grounds, and the Supreme Court ruled in favor of Vargas Tellez, holding that while this information relating to the identity and wages of public employees and officers constitutes personal data, it is nonetheless registered in a "public source of information", which makes it available to any citizen who requests it. The right of access to this information is recognized under the Constitution and international instruments such as the American Convention on Human Rights (Art. 13); the International Covenant on Civil and Political Rights (Art.
19); and the United Nations Convention against Corruption (Art. 13). Following the Supreme Court's decision, and with the support of civil society and President Horacio Cartes, the first transparency law was enacted (Law No. 5189/14), requiring all public offices to disclose information regarding the use of public funds to pay salaries. In addition, the Freedom of Information and Government Transparency Law (Law 5282/2014) was enacted in 2014, and a final regulation of 2015 (Executive Decree 4064/15) set the final step on the road to transparency. These rules expressly recognize that the right to access public information is a human right, which improves the State, promotes citizen participation and public accountability, and serves as a tool to combat corruption. Currently, all requests to access public information can be made online through a single portal, and government offices are obliged to respond within 15 days. Paraguay became internationally committed to promoting transparency, empowering citizens, fighting corruption, and harnessing new technologies to strengthen governance after becoming a member of the Open Government Partnership. Presently, most government offices have transparency offices and can provide information to citizens and receive reports of corruption. The main executive agency in charge of promoting electronic government is SENATICS.

Art. 28 of the Constitution also states that any person affected by the diffusion of false, distorted, or ambiguous information has the right to demand its rectification or clarification by the same means and under the same conditions in which it was divulged, without prejudice to other compensatory rights. There is also a specific law that regulates habeas data, and any citizen can request a copy of publicly or privately held information relating to them and can demand that any inaccurate data found be destroyed.

On 23 July 2016, Philippine president Rodrigo Duterte signed the executive order on freedom of information, to be implemented effectively in all offices under the executive branch of government.[136]

Section 13(4) of the Constitution of the Pitcairn Islands provides that "Freedom of information in Pitcairn shall be provided by Ordinance, which shall reflect the freedom of information legislation of the United Kingdom adapted to the circumstances of Pitcairn".[137] The Freedom of Information Ordinance 2012 implements this requirement.[138]

In Poland, Article 61 of the Constitution provides for the right to information and mandates that Parliament enact a law setting out this right. The Law on Access to Public Information was approved in September 2001 and went into effect in January 2002. The Act allows anyone to demand access to public information, public data and public assets held by public bodies, private bodies that exercise public tasks, trade unions and political parties. Requests can be oral or written, and the bodies must respond within 14 days.

The Portuguese Constitution guarantees the right of access to administrative documents in its Article 268, titled "Citizens' rights and guarantees [before the Administration]". Its paragraphs (1), (2) and (6) read as follows: "1. Citizens have the right to be informed by the Administration, whenever they so request, as to the progress of the procedures and cases in which they are directly interested, together with the right to be made aware of the definitive decisions that are taken in relation to them. 2.
Without prejudice to the law governing matters concerning internal and external security, criminal investigation and personal privacy, citizens also have the right of access to administrative files and records. [...] 6. For the purposes of paragraphs (1) and (2) the law shall lay down a maximum time limit for responses by the Administration."[139]

The rule enshrined in Art. 268, par. (2) of the Constitution is known as the "principle of open Administration"[140] and is regulated by Law no. 26/2016 (Lei n.º 26/2016, de 22 de Agosto[141]), which also enacts into national law the European Directives no. 2003/4/EC and 2003/98/EC. Art. 15 of this law requires public entities to respond to each request within 10 days, and the law's Chapter 3 created an independent watchdog, the Commission for Access to Administrative Documents (Comissão de Acesso aos Documentos Administrativos), to keep track of compliance with its rules.

In Romania, since 2001 there has been one law on freedom of information and one on transparent decision-making processes in public administration (a sunshine law).[142][143]

In Rwanda, the Law Relating to Access to Information was passed on 8 February 2013. It sets out the purpose of the law, recognises the right of access to information, the procedures for accessing information, and compliance-related issues. The text is available at http://www.humanrightsinitiative.org/postoftheday/2013/18/Rwanda_ATI_Law_March2013_NewDelhi_SatbirS.pdf

In Serbia, the Access to Public Information Act gives access to documents of public authorities.

In Seychelles, the President of the Republic, Mr. Danny Faure, assented to the Access to Information Act in July 2018; the Access to Information Bill 2018 had been published in the Official Gazette on 24 March 2017. The right of access to information is guaranteed under Article 28 of the Constitution of the Republic of Seychelles. The Act provides the public with a constitutional right of access to information held by public authorities performing a governmental function. The Act is administered and applied by an independent Information Commission, whose establishment was provided for by the enactment of the law. The commission is appointed by the President in consultation with the Speaker of the National Assembly on the recommendation of the Constitutional Appointments Authority (CAA). The Information Commission strives to promote awareness, to educate and popularise the right of access to information, and to foster good governance by enhancing transparency, accountability and integrity in the public service and administration (https://www.infocom.sc/).

Slovakia passed its Freedom of Information Act in May 2000 (law no. 211/2000 Z. z.). Under the law, everybody can demand information from state institutions, municipalities, and individuals and legal entities financed by the public budget.[144]

Slovenia passed the Access to Public Information Act in March 2003.[145] The Act governs the procedure ensuring everyone free access to public information held by state bodies, local government bodies, public agencies, public funds and other entities of public law, holders of public powers, and public service contractors.[146]

Section 32 of the Constitution of South Africa guarantees "the right of access to any information held by the state; and any information that is held by another person and that is required for the exercise or protection of any rights." This right is implemented through the Promotion of Access to Information Act, which was enacted on 2 February 2000.
The right of access to privately held information is an interesting feature, as most freedom of information laws cover only governmental bodies.

In South Korea, the Constitutional Court ruled in 1989 that there is a constitutional right to information "as an aspect of the right of freedom of expression", and that specific implementing legislation to define the contours of the right was not a prerequisite to its enforcement. The Act on Disclosure of Information by Public Agencies was enacted in 1996 and went into effect in January 1998. It allows citizens to demand information held by public agencies.

Sri Lanka's Right to Information Act No. 12 of 2016 was certified on 4 August 2016. After much debate and many amendments to the draft Bill, the final Act, comprising 44 sections, was certified in early August 2016. The implementation of the Act is expected to take time due to the necessity of establishing cadre positions in government institutions to provide information to the general public. The Act is considered to hold many strengths and positive features that would effectively empower citizens to be actively involved in the process of governance. Moreover, Article 14A(1), introduced by the Nineteenth Amendment to the 1978 Constitution of Sri Lanka, has paved the way for the recognition of the right to information as a fundamental right.

In Sweden, the Swedish Freedom of the Press Act grants public access to official documents and is included in the Constitution of Sweden. Dating back to 1766, it is the first freedom of information legislation in the modern sense. In modern times the right has become known as the Principle of Public Access (Swedish: offentlighetsprincipen).[147] The Principle of Public Access means that the general public is guaranteed insight into activities pursued by government agencies. All official documents handled by government agencies are public unless they contain information specified as secret under the Public Access to Information and Secrecy Act. Each request to access official documents is handled individually, and classifying documents or information as secret is subject to appeal. The constitution also grants government employees the right to pass on information without risk of criminal charges or repercussions, and the right to attend court proceedings and meetings of legislative assemblies such as the Riksdag. There are a number of exemptions to this principle for certain categories of information.

Switzerland is a federal state. Access to federal documents is governed by the Swiss Federal Act on the Principle of Freedom of Information in the Administration, and supervised by the Federal Data Protection and Information Commissioner.[149] Access to documents at the cantonal level is governed by cantonal laws, which are mostly similar to the federal law. As of 2018, the cantons of Appenzell Innerrhoden, Glarus, Lucerne, Nidwalden, Obwalden and Thurgau do not have freedom of information legislation.[150]

The "Freedom of Government Information Law" (政府資訊公開法), enacted by the Legislative Yuan of the ROC government in Taiwan, has been in force since 28 December 2005.[151]

Tanzania's Access to Information Act was passed in 2016.

In Thailand, the relevant legislation is the Official Information Act of 1997.

In Trinidad and Tobago, the relevant legislation is the Freedom of Information Act, 1999.

Tunisia adopted a freedom of information law after the revolution, in 2016. However, the law was criticized for its security-related exemptions.
A 2018 law requiring public officials to declare their assets was a step forward for transparency.[152]

In Turkey, the Turkish Law on the Right to Information (Bilgi Edinme Hakkı Kanunu) was signed on 24 October 2003 and came into effect six months later, on 24 April 2004.

In Uganda, the Access to Information Act (ATI) was approved in 2005, but its regulations were not passed until 2011. The law states that citizens, and especially journalists, can demand accountability from government officials. The Hub for Investigative Media (HIM) in Uganda offers training programs that teach East African journalists fact-checking and digital security. HIM has also made government officials aware of the ATI law and its provisions, and has conducted a nationwide campaign to train journalists, as rights holders, in the knowledge and application of the ATI laws.[153]

In Ukraine, the 1996 Constitution does not include a specific general right of access to information, but it contains a general right to freely collect and disseminate information, as well as rights of access to personal and environmental information. Article 5 of the Law on Information of 1992 (revised in 2011) defines the term "right to information", which includes the possibility of free collection, usage, distribution, storage and protection of information necessary for the exercise of a person's rights, freedoms and legitimate interests.[154] The Law on Access to Public Information was adopted on 13 January 2011 and went into force on 9 May 2011. It widens the range of subjects obliged to provide information, gives a legislative definition of public information, and makes public information accessible subject to statutory restrictions.[155][156]

The Freedom of Information Act 2000 (2000 c. 36) is the implementation of freedom of information legislation in the United Kingdom at the national level, with the exception of Scottish bodies, which are covered by the Freedom of Information (Scotland) Act 2002 (2002 asp. 13). Environmental information is covered by further legislation, the Environmental Information Regulations 2004. Tony Blair, the UK Prime Minister who introduced the Freedom of Information Act, later expressed regret over the Act, claiming that it impeded the ability of officials to deliberate "with a reasonable level of confidentiality".[157]

In the United States, the Freedom of Information Act was signed into law by President Lyndon B. Johnson on 4 July 1966 and went into effect the following year. Ralph Nader has been credited with the impetus for creating this act, among others.[158] The Electronic Freedom of Information Act Amendments were signed by President Bill Clinton on 2 October 1996.[159] The Act applies only to federal agencies. However, all of the states, as well as the District of Columbia and some territories, have enacted similar statutes to require disclosures by agencies of the state and of local governments, though some are significantly broader than others. Some state and local government agencies attempt to get around state open records laws by claiming copyright for their works and then demanding high fees to license the public information.[160]: 441–42 Some states expand government transparency through open meeting laws, which require government meetings to be announced in advance and held publicly.

In Uruguay, the access to public information act was enacted in 2008 under President Vázquez's administration and is mainly implemented by the Judiciary.

In Zimbabwe, the Access to Information and Privacy Act (AIPPA) was signed by President Robert Mugabe in February 2002.
https://en.wikipedia.org/wiki/Freedom_of_information_laws_by_country
Manual testing is the process of manually testing software for defects. It requires a tester to play the role of an end user, whereby they use most of the application's features to ensure correct behaviour. To guarantee completeness of testing, the tester often follows a written test plan that leads them through a set of important test cases. A key step in the process is testing the software for correct behavior prior to release to end users.

For small-scale engineering efforts (including prototypes), ad hoc testing may be sufficient. With this informal approach, the tester does not follow any rigorous testing procedure and simply performs testing without planning or documentation. Conversely, exploratory testing, which involves simultaneous learning, test design and test execution, explores the user interface of the application using as many of its features as possible, using information gained in prior tests to intuitively derive additional tests. The success of exploratory manual testing relies heavily on the domain expertise of the tester, because a lack of knowledge will lead to incompleteness in testing. One of the key advantages of an informal approach is the intuitive insight it gives into how it feels to use the application.

Large-scale engineering projects that rely on manual software testing follow a more rigorous methodology in order to maximize the number of defects that can be found. A systematic approach focuses on predetermined test cases and generally involves a sequence of predefined steps.[1] A rigorous test-case-based approach is traditional for large software engineering projects that follow a Waterfall model.[2] However, at least one recent study did not show a dramatic difference in defect detection efficiency between exploratory testing and test-case-based testing.[3]

Testing can be done through black-, white- or grey-box testing. In white-box testing, the tester is concerned with the execution of the statements through the source code. In black-box testing, the software is run to check for defects, with less concern for how the processing of the input is done; black-box testers do not have access to the source code. Grey-box testing is concerned with running the software while having an understanding of the source code and algorithms.[4]

Static and dynamic testing approaches may also be used. Dynamic testing involves running the software. Static testing includes verifying requirements and the syntax of code, and any other activities that do not involve actually running the code of the program.

Testing can be further divided into functional and non-functional testing. In functional testing, the tester checks the calculations, any link on the page, or any other field for which, on a given input, an output may be expected. Non-functional testing includes testing the performance, compatibility and fitness of the system under test, as well as its security and usability, among other things. Manual testing also proceeds through several stages.

Test automation may be able to reduce or eliminate the cost of actual testing.[5] A computer can follow a rote sequence of steps more quickly than a person, and it can run the tests overnight to present the results in the morning. However, the labor that is saved in actual testing must be spent instead authoring the test program. Depending on the type of application to be tested and the automation tools that are chosen, this may require more labor than a manual approach. In addition, some testing tools present a very large amount of data, potentially creating a time-consuming task of interpreting the results.
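As an illustration of the trade-off just described, the following is a minimal sketch (not from the original article) of a scripted manual test case turned into an automated one with pytest; calculate_discount is a hypothetical stand-in for the application feature a written test plan would walk a human tester through.

```python
# A minimal sketch of a scripted manual test case automated with pytest.
# `calculate_discount` is a hypothetical stand-in for the feature under test.
import pytest

def calculate_discount(price: float, customer_is_member: bool) -> float:
    """Toy implementation standing in for the application under test."""
    return price * 0.9 if customer_is_member else price

# Each row mirrors one step a manual tester would execute from the test plan.
@pytest.mark.parametrize("price,is_member,expected", [
    (100.0, True, 90.0),    # members receive 10% off
    (100.0, False, 100.0),  # non-members pay full price
    (0.0, True, 0.0),       # boundary case: zero price
])
def test_discount(price, is_member, expected):
    assert calculate_discount(price, is_member) == pytest.approx(expected)
```

Once written, such a suite runs unattended (for example, overnight), which is exactly where the authoring cost of automation is repaid.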
Things such as device drivers and software libraries must be tested using test programs. In addition, testing of large numbers of users (performance testing and load testing) is typically simulated in software rather than performed in practice. Conversely, graphical user interfaces whose layout changes frequently are very difficult to test automatically. There are test frameworks that can be used for regression testing of user interfaces. They rely on recording sequences of keystrokes and mouse gestures, then playing them back and observing that the user interface responds in the same way every time. Unfortunately, these recordings may not work properly when a button is moved or relabeled in a subsequent release. An automatic regression test may also be fooled if the program output varies significantly.
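To make that last point concrete, here is a minimal, hypothetical golden-file regression check in Python: expected output is recorded once and compared on later runs, so any legitimate variation in output (timestamps, reordered items) "fools" the comparison into failing even when behaviour is acceptable. The run_report function is an illustrative stand-in, not a real tool.

```python
# A minimal, hypothetical golden-file regression check: record expected
# output once, compare on later runs. Legitimately varying output
# (timestamps, reordered items) would "fool" the comparison into failing.
from pathlib import Path

def run_report() -> str:
    """Stand-in for the program whose output is under regression test."""
    return "total=42\nstatus=ok\n"

golden = Path("report.golden")
if not golden.exists():
    golden.write_text(run_report())   # first run records the expected output
print("PASS" if run_report() == golden.read_text() else "FAIL: output drifted")
```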
https://en.wikipedia.org/wiki/Manual_testing
Attention-seeking behavior is acting in a way that is likely to elicit attention. Attention-seeking behavior is defined in the DSM-5 as "engaging in behavior designed to attract notice and to make oneself the focus of others' attention and admiration".[1]: 780 This definition does not ascribe a motivation to the behavior and assumes a human actor, although the term "attention seeking" sometimes also assumes a motive of seeking validation. People are thought to engage in both positive and negative attention-seeking behavior independent of the actual benefit or harm to health. In line with much research and a dynamic self-regulatory processing model of narcissism, motivations for attention seeking are considered to be driven by self-consciousness and are thus an externalization of personality rather than internal, self-motivated behavior.[2] Attention seeking is often caused by threats to one's self-concept and the need for social acceptance.[3] This type of influence on behavior can result in a potential loss of a person's sense of agency, in personality disorders, and in the behaviors associated with those conditions.

Enjoying the attention of others is socially acceptable in some situations,[4] and attention seeking may be adaptive in some contexts, such as acting (upstaging) or marketing.[5] However, an excessive need for attention is often a symptom of an underlying personality disorder and can lead to difficulties in interpersonal relationships. One strategy often used by teachers and behavior analysts to counter attention-seeking behavior is planned or tactical ignoring.[6]

The causes of attention-seeking behavior are varied. Risk factors include loneliness, jealousy, low self-esteem, narcissism, rejection, and self-pity.[7] A desire for validation is theorised as a motivation for attention-seeking behavior. As of 2022, no studies have evaluated the prevalence of attention-seeking behavior in the general population.

One area of concern with attention seeking is misbehavior in classroom settings. Research has shown that parental rejection leads young students to adopt a diminished sense of self, resulting in the child feeling insecure, undervalued, and powerless.[8] Experiencing rejection pushes the child to strive for acceptance through attention-seeking behaviors, and these children may grow in assertiveness as a means of being heard and seen. Thus, rejected children embrace attention-seeking behaviors to feel some sense of security and acceptance.[8]

Repeated attention-seeking behavior is a symptom of several personality disorders, including narcissistic personality disorder, histrionic personality disorder, borderline personality disorder, and sometimes (though more rarely) antisocial personality disorder. Attention-seeking behavior should be distinguished from the impulsive or disruptive behaviors associated with ADHD; while ADHD can sometimes make it difficult to suppress normal attention-seeking impulses, most ADHD-related misbehavior is not motivated by attention seeking.[9]

A 2019 study of adolescents with narcissistic tendencies and their use of social media explored this relation between narcissism and attention-seeking behavior.[3] The study found that adolescents' social media behavior was used as a means of gaining acceptance, validation, and attention, and the research suggests that motives of social acceptance mediated the link between social media use and narcissism.
The research also found that attention-seeking behavior increases when these adolescents experience social rejection or threats to their ego or self-image.[3]

The term "attention seeking" has been criticized as a pejorative that can amount to victim blaming, especially when used in non-clinical and non-academic contexts.[10][11] Student exposure to psychiatric environments has been shown to reduce bias and stigma towards individuals with mental disorders or attention-seeking behavior.[12] According to a 2005 survey of 133 books containing the term, it is often used with either no definition or a poor one, no empirical studies specifically about attention-seeking behavior were found, and there is widespread academic disagreement on the causes and implications of attention seeking.[13]

Self-harm is sometimes viewed as an attention-seeking behaviour.[14] However, young people who self-harm rarely disclose it to friends or family, and they seldom seek medical attention or other support. The idea that self-harm is primarily attention seeking is therefore a myth.[14]

There is research on the relationship between social media usage and attention-seeking behavior. A 2013 study of Facebook users found that agreeableness and conscientiousness are negatively correlated with attention-seeking tendencies.[15] Internet trolls on social media also tend to exhibit attention-seeking behavior.[16] A 2016 study found evidence that social media can benefit some users by compensating for a lack of attention in other domains, although this has been disputed.[17] A 2019 study found evidence correlating narcissism with attention-seeking behavior on Facebook.[18] A 2021 study found that experiencing phubbing (being ignored in favor of a phone) was positively correlated with attention-seeking behavior, and the effect was larger in men.[19]

Tactical ignoring is a behavioral management strategy, used to counter attention-seeking behaviors, in which a person gives no outward sign of recognizing a behavior, such as no eye contact, no verbal response and no physical response to the person seeking attention.[20] However, the observer remains very aware of the behavior and monitors the individual to ensure their safety and the safety of others potentially involved. The desired consequence of attention-seeking behavior is receiving attention in some form (positive or negative) from another person.[21] Tactical ignoring is often used in the hope that when an attention-seeking behavior no longer attracts attention, it will eventually cease.[22] It is most frequently used in the behavioral training of children,[23] but is suitable for changing or discouraging adult behavior as well.
https://en.wikipedia.org/wiki/Attention_seeking
Internet culture refers to the culture developed and maintained among frequent and active users of the Internet (also known as netizens) who primarily communicate with one another as members of online communities; that is, a culture whose influence is "mediated by computer screens" and information communication technology,[1]: 63 specifically the Internet. Internet culture arises from the frequent interactions between members of various online communities and the use of these communities for communication, entertainment, business, and recreation. Studied aspects of Internet culture include anonymity/pseudonymity, social media, gaming, and specific communities such as fandoms; the field has also raised questions about online identity and Internet privacy.[2] Increasingly widespread Internet adoption has influenced Internet culture, frequently enforcing norms via shaming, censuring and censorship while pressuring other cultural expressions underground.[3]

The cultural history of the Internet is a story of rapid change. The Internet developed in parallel with rapid and sustained technological advances in computing and data communication, and widespread access emerged as the cost of infrastructure dropped by several orders of magnitude with successive technological improvements. Though Internet culture originated during the creation and development of early online communities, such as those found on bulletin board systems before the Internet reached mainstream adoption in developed countries, many cultural elements have roots in previously existing offline cultures and subcultures that predate the Internet. Specifically, Internet culture includes many elements of telegraphy culture (especially amateur radio culture), gaming culture and hacker culture.

Initially, digital culture tilted toward the Anglosphere. As a consequence of computer technology's early reliance on textual coding systems mainly adapted to the English language, Anglophone societies, followed by other societies with languages based on Latin script, enjoyed privileged access to digital culture. Other languages have gradually increased in prominence, however: the proportion of Internet content in English dropped from roughly 80% in the 1990s to around 52.9% in 2018.[4][5]

As technology advances, Internet culture continues to change. The introduction of smartphones and tablet computers and the growing computer network infrastructure around the world have increased the number of Internet users and have likewise resulted in the proliferation and expansion of online communities. While Internet culture continues to evolve among active and frequent Internet users, it remains distinct from other previously offline cultures and subcultures that now have a presence online, even those from which Internet culture borrows many elements.

One cultural antecedent of Internet culture was amateur radio (commonly known as ham radio). By connecting over great distances, ham operators were able to form a distinct cultural community with a strong technocratic foundation, as the radio gear involved was finicky and prone to failure.
The area that later became Silicon Valley, where much of modern Internet technology originates, had been an early locus of radio engineering.[6] Alongside the original mandate for robustness and resiliency, the renegade spirit of the early ham radio community later infused the cultural value of decentralization and the near-total rejection of regulation and political control that characterized the Internet's original growth era, with strong undercurrents of the Wild West spirit of the American frontier.

At its inception in the early 1970s as part of ARPANET, digital networks were small, institutional, arcane, and slow, which confined most use to the exchange of textual information, such as interpersonal messages and source code. Access to these networks was largely limited to a technological elite based at a small number of prestigious universities; the original American network connected one computer in Utah with three in California.[7] Text on these networks was usually encoded in the ASCII character set, which was minimalistic even for established English typography, barely suited to other European languages sharing a Latin script (but requiring accented characters), and entirely unsuitable for any language not based on a Latin script, such as Mandarin, Arabic, or Hindi. Interactive use was discouraged except for high-value activities, so a store-and-forward architecture was employed for many message systems, functioning more like a post office than modern instant messaging; by the standards of postal mail, however, the system (when it worked) was stunningly fast and cheap. Among the heaviest users were those actively involved in advancing the technology, most of whom implicitly shared much the same base of arcane knowledge, effectively forming a technological priesthood.

The origins of social media predate the Internet proper. The first bulletin board system was created in 1978,[8] GEnie was created by General Electric in 1985,[9] the mailing list software Listserv appeared in 1986,[9] and Internet Relay Chat was created in 1988.[9] The first official social media site, SixDegrees, launched in 1997.[9]

In the 1980s, the network grew to encompass most universities and many corporations, especially those involved with technology, including heavy but segregated participation within the American military–industrial complex. Interactive use grew, and the user base became less dominated by programmers, computer scientists and hawkish industrialists, but it remained largely an academic culture centered around institutions of higher learning. It was observed that each September, with an intake of new students, standards of productive discourse would plummet until the established user base brought the influx up to speed on cultural etiquette.

Commercial Internet service providers (ISPs) emerged in 1989 in the United States and Australia, opening the door to public participation. Soon the network was no longer dominated by academic culture, and the term eternal September, initially referring to September 1993, was coined as Internet slang for the endless intake of cultural newbies. Commercial use became established alongside academic and professional use, beginning with a sharp rise in unsolicited commercial e-mail, commonly called spam.
Around this same time, the network transitioned to support the burgeoning World Wide Web. Multimedia formats such as audio, graphics, and video became commonplace and began to displace plain text, though multimedia remained painfully slow for dial-up users. Around this time the Internet also began to internationalize, supporting most of the world's major languages, but support for many languages remained patchy and incomplete into the 2010s.

With the arrival of broadband access, file-sharing services grew rapidly, especially for digital audio (with a prevalence of bootlegged commercial music), beginning with Napster in 1999 and similar projects that effectively catered to music enthusiasts, especially teenagers and young adults, and soon became established as a prototype for rapid evolution into modern social media. Alongside ongoing challenges to traditional norms of intellectual property, the business models of many of the largest Internet corporations evolved into what Shoshana Zuboff terms surveillance capitalism. Social media is not only a novel form of social culture but also a novel form of economic culture, in which sharing is frictionless but personal privacy has become a scarce good.

In 1998, there was Hampster Dance, the first successful Internet meme.[10]

One early study, conducted from 1998 to 1999, found that participants viewed information obtained online as slightly more credible than information from magazines, radio, and television; information obtained from newspapers was the most credible.[11] Credibility online is established in much the same way as in the offline world. Lawrence Lessig claimed that the architecture of a given online community may be the most important factor in establishing credibility. Factors include anonymity, connection to physical identity, comment rating systems, feedback type (positive-only versus positive/negative), and moderation.[12]

Many sites allow anonymous commentary, where the user-id attached to the comment is something like "guest". In an architecture that allows anonymous commentary, credibility attaches only to the object of the comment. Sites that require some link to an identity may require only a nickname, which is sufficient to allow comment readers to rate the commenter, either explicitly or by informal reputation. Architectures can require that physical identity be associated with commentary, as in Lessig's example of Counsel Connect.[12]: 94–97 However, to require linkage to a physical identity, sensitive information about a user must be collected and safeguards for that information must be established; users must place sufficient trust in the site. Irrespective of safeguards, as with Counsel Connect,[12]: 94–97 the use of physical identities links credibility across the frames of the Internet and real space, influencing the behavior of those who contribute in those spaces. However, even purely online identities can establish credibility: even though nothing inherently links a person or group to their Internet-based persona, credibility can be earned because building a persona's reputation takes time.[12]: 113

In some architectures, commenters can in turn be rated by other users, potentially encouraging more responsible commentary, although the profusion of popular shitposters belies this. Architectures can be oriented around positive feedback or allow both positive and negative feedback. This feedback can take the form of likes or upvotes, dislikes or downvotes, emoji reactions, rating systems, and written responses such as comments or reviews.
While a particular user may equate certain responses with a "negative" evaluation, the actual meaning may be contextual.[13] Architectures can also give editorial control to a group or individual not employed by the site (e.g., Reddit), termed moderators. Moderation may be either proactive (previewing content) or reactive (punishing violators). The moderator's credibility can be damaged by overly aggressive behavior.[1]
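As a toy sketch (purely illustrative, not from the source) of how such a feedback architecture might be modelled in code, the following contrasts positive-only with positive/negative scoring and includes a flag for reactive moderation:

```python
# A toy model of the feedback architectures discussed above: positive-only
# versus positive/negative scoring, an anonymous "guest" author, and a flag
# for reactive moderation. All names are illustrative.
from dataclasses import dataclass

@dataclass
class Comment:
    author: str                          # "guest" under an anonymous architecture
    text: str
    upvotes: int = 0
    downvotes: int = 0
    removed_by_moderator: bool = False   # reactive moderation

    def score(self, positive_only: bool = False) -> int:
        # Positive-only architectures simply ignore negative feedback.
        return self.upvotes if positive_only else self.upvotes - self.downvotes

c = Comment(author="guest", text="First!", upvotes=3, downvotes=5)
print(c.score())                     # mixed feedback: -2
print(c.score(positive_only=True))   # positive-only: 3
```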
https://en.wikipedia.org/wiki/Cyberculture
Private biometrics is a form of encrypted biometrics, also called privacy-preserving biometric authentication, in which the biometric payload is a one-way, homomorphically encrypted feature vector that is 0.05% the size of the original biometric template and can be searched with full accuracy, speed and privacy. The feature vector's homomorphic encryption allows search and match to be conducted in polynomial time on an encrypted dataset, and the search result is returned as an encrypted match. One or more computing devices may use an encrypted feature vector to verify an individual person (1:1 verify) or identify an individual in a datastore (1:many identify) without storing, sending or receiving plaintext biometric data within or between computing devices or any other entity. The purpose of private biometrics is to allow a person to be identified or authenticated while guaranteeing individual privacy and fundamental human rights by operating on biometric data only in the encrypted space. Private biometrics include fingerprint authentication methods, face authentication methods, and identity-matching algorithms based on bodily features. Private biometrics are constantly evolving based on the changing nature of privacy needs, identity theft, and biotechnology.

Biometric security strengthens user authentication but, until recently, also implied important risks to personal privacy. Indeed, while compromised passwords can be easily replaced and are not personally identifiable information (PII), biometric data is considered highly sensitive due to its personal nature, its unique association with users, and the fact that compromised biometrics (biometric templates) cannot be revoked or replaced. Private biometrics were developed to address this challenge: they provide the necessary biometric authentication while simultaneously minimizing the user's privacy exposure through the use of one-way, fully homomorphic encryption.

The Biometric Open Protocol Standard, IEEE 2410-2018, was updated in 2018 to include private biometrics, stating that one-way fully homomorphic encrypted feature vectors "...bring a new level of consumer privacy assurance by keeping biometric data encrypted both at rest and in transit." The Biometric Open Protocol Standard (BOPS III) also noted that a key benefit of private biometrics was that the new standard allowed for simplification of the API, since the biometric payload was always one-way encrypted and therefore had no need for key management.[1]

Historically, biometric matching techniques have been unable to operate in the encrypted space and have required the biometric to be visible (unencrypted) at specific points during search and match operations. This decryption requirement made large-scale search across encrypted biometrics ("1:many identify") infeasible, due both to significant overhead (e.g. complex key management and significant data storage and processing requirements) and to the substantial risk that the biometrics were vulnerable to loss when processed in plaintext within the application or operating system (see FIDO Alliance, for example). Biometric security vendors complying with data privacy laws and regulations (including Apple FaceID, Samsung, and Google) therefore focused their efforts on the simpler 1:1 verify problem and were unable to overcome the large computational demands required for a linear scan to solve the 1:many identify problem.[2]

Today, private biometric cryptosystems overcome these limitations and risks through the use of one-way, fully homomorphic encryption, as the search sketch below illustrates.
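As a hedged, schematic illustration of the 1:many search described above, the sketch below performs a vectorized nearest-neighbour search over stored feature vectors with NumPy. The embedding network itself is not reproduced here: random 128-float vectors stand in for one-way encrypted feature vectors, and the gallery is scaled down from the 100 million faces cited above to keep the example lightweight.

```python
# A schematic sketch (not vendor code) of 1:many identification over one-way
# encrypted feature vectors. Random vectors stand in for the output of the
# (unspecified) embedding network.
import numpy as np

rng = np.random.default_rng(0)
gallery = rng.normal(size=(10_000, 128))                # enrolled feature vectors
probe = gallery[42] + rng.normal(scale=0.05, size=128)  # noisy re-capture of #42

# One vectorized pass computes the Euclidean distance to every enrolled
# vector; the work grows polynomially (here linearly) with the gallery size.
distances = np.linalg.norm(gallery - probe, axis=1)
best = int(np.argmin(distances))
print(best, float(distances[best]))                     # -> 42 with a small distance
```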
Such fully homomorphic encryption allows computations to be carried out on ciphertext, allows matching to be conducted on an encrypted dataset without decrypting the reference biometric, and returns an encrypted match result. Matching in the encrypted space offers the highest levels of accuracy, speed and privacy and eliminates the risks associated with decrypting biometrics.[3]

The private biometric feature vector is much smaller (0.05% the size of the original biometric template) yet maintains the same accuracy as the original plaintext reference biometric. In testing using Google's unified embedding for face recognition and clustering CNN ("FaceNet"),[4] Labeled Faces in the Wild (LFW) (source), and other open-source faces, private biometric feature vectors returned the same accuracy as plaintext facial recognition. Using an 8MB facial biometric, one vendor reported an accuracy rate of 98.7%. The same vendor reported that accuracy increased to 99.99% when using three 8MB facial biometrics and a voting algorithm (best two out of three) to predict.[5] As the quality of the facial biometric image declined, accuracy degraded very slowly: for 256kB facial images (3% the quality of an 8MB picture), the same vendor reported 96.3% accuracy, and the neural network was able to maintain similar accuracy through boundary conditions, including extreme lighting or background conditions.[6]

The private biometric feature vector is 4kB and contains 128 floating-point numbers. In contrast, plaintext biometric security instances (including Apple Face ID[7]) currently use 7MB to 8MB reference facial biometrics (templates). By using the much smaller feature vector, the resulting search performance is less than one second per prediction using a datastore of 100 million open-source faces ("polynomial search").[8] The private biometric test model used for these results was Google's unified embedding for face recognition and clustering CNN ("FaceNet"),[4] Labeled Faces in the Wild (LFW) (source), and other open-source faces.

As with all ideal one-way cryptographic hash functions, decryption keys do not exist for private biometrics, so it is infeasible to generate the original biometric message from the private biometric feature vector (its hash value) except by trying all possible messages. Unlike passwords, however, no two instances of a biometric are exactly the same or, stated another way, there is no constant biometric value, so a brute-force attack using all possible faces would only produce an approximate (fuzzy) match. Privacy and fundamental human rights are therefore guaranteed.

Specifically, the private biometric feature vector is produced by a one-way cryptographic hash algorithm that maps plaintext biometric data of arbitrary size to a small feature vector of a fixed size (4kB) that is mathematically impossible to invert. The one-way encryption algorithm is typically achieved using a pre-trained convolutional neural network (CNN), which takes a vector of arbitrary real-valued scores and squashes it to a 4kB vector of values between zero and one that sum to one.[9] It is mathematically impossible to reconstruct the original plaintext image from a private biometric feature vector of 128 floating-point numbers.[10]

One-way encryptions offer unlimited privacy by containing no mechanism to reverse the encryption and disclose the original data. Once a value is processed through a one-way hash, it is not possible to discover the original value (hence the name "one-way").[11]

The first one-way encryptions were likely developed by James H.
Ellis, Clifford Cocks, and Malcolm Williamson at the UK intelligence agency GCHQ during the 1960s and 1970s, and were published independently by Diffie and Hellman in 1976 (see history of cryptography). Common modern one-way encryption algorithms, including MD5 (message digest) and SHA-512 (secure hash algorithm), are similar to the first such algorithms in that they also contain no mechanism to disclose the original data. The outputs of these modern one-way encryptions offer high privacy but are not homomorphic, meaning that they do not allow higher-order mathematical operations (such as matching). For example, two SHA-512 sums cannot be used to compare the closeness of two encrypted documents. This limitation makes it impossible for these one-way encryptions to support classification models in machine learning, or nearly anything else.

The first one-way, homomorphically encrypted, Euclidean-measurable feature vector for biometric processing was proposed in a paper by Streit, Streit and Suffian in 2017.[12] In this paper, the authors theorized and also demonstrated, using a small sample size (n=256 faces), that (1) it was possible to use neural networks to build a cryptosystem for biometrics that produced one-way, fully homomorphic feature vectors composed of normalized floating-point values; (2) the same neural network would also be useful for 1:1 verification (matching); and (3) the same neural network would not be useful for 1:many identification tasks, since search would require a full linear scan. The paper's first point was (in theory) later shown to be true, and its first, second and third points were later shown to hold only for small samples, not for larger ones.

A later tutorial (blog posting) by Mandel in 2018 demonstrated a similar approach, using a Frobenius 2 distance function to determine the closeness of two feature vectors, and demonstrated successful 1:1 verification. Mandel did not offer a scheme for 1:many identification, as this method would have required a full linear scan of the entire database. The Streit, Streit and Suffian paper had attempted a novel "banding" approach for 1:many identification in order to mitigate the full-linear-scan requirement, but it is now understood that this approach produced too much overlap to help in identification.[13]

The first claimed commercial implementation of private biometrics, Private.id, was published by Private Identity, LLC in May 2018, using the same method to provide 1:many identification in polynomial time across a large biometric database (100 million faces). On the client device, Private.id transforms each reference biometric (template) into a one-way, fully homomorphic, Euclidean-measurable feature vector, using matrix multiplication from the neural network, which may then be stored locally or transmitted. The original biometric is deleted immediately after the feature vector is computed or, if the solution is embedded in firmware, the biometric is transient and never stored. Once the biometric is deleted, it is no longer possible to lose or compromise it.[5]

The Private.id feature vector can be used in one of two ways. If the feature vector is stored locally, it may be used to compute 1:1 verification with high accuracy (99% or greater) using linear mathematics.
If the feature vector is also stored in the cloud, it may additionally be used as input to a neural network to perform 1:many identification with the same accuracy, speed and privacy as the original plaintext reference biometric (template).[5]

Private biometrics rely on two properties in deriving compliance with biometric data privacy laws and regulations worldwide. First, the private biometric encryption is a one-way encryption, so loss of privacy by decryption is mathematically impossible and privacy is therefore guaranteed. Second, since no two instances of a biometric are exactly the same (there is no constant biometric value), the private biometric one-way encrypted feature vector is Euclidean-measurable, providing a mechanism to determine a fuzzy match in which two instances of the same identity are "closer" than two instances of different identities.

The IEEE 2410-2018 Biometric Open Protocol Standard was updated in 2018 to include private biometrics. The specification stated that one-way fully homomorphic encrypted feature vectors "bring a new level of consumer privacy assurance by keeping biometric data encrypted both at rest and in transit." IEEE 2410-2018 also noted that a key benefit of private biometrics is that the new standard allows for simplification of the API, since the biometric payload is always one-way encrypted and there is no need for key management.[1]

Private biometrics enable passive encryption (encryption at rest), the most difficult requirement of the US Department of Defense Trusted Computer System Evaluation Criteria (TCSEC). No other cryptosystem or method provides operations on encrypted data at rest, so passive encryption, an unfulfilled requirement of the TCSEC since 1983, is no longer an issue. Private biometrics technology is an enabling technology for applications and operating systems, but does not itself directly address the auditing and constant-protection concepts introduced in the TCSEC.

Private biometrics, as implemented in a system that conforms to IEEE 2410-2018 BOPS III,[1] satisfy the privacy requirements of the TCSEC, which sets the basic requirements for assessing the effectiveness of computer security controls built into a computer system ("Orange Book", section B1). Today, applications and operating systems contain features that comply with TCSEC levels C2 and B1, except that they lack homomorphic encryption and so do not process data encrypted at rest; historically, waivers were typically obtained because there was no known workaround. Adding private biometrics to these operating systems and applications resolves this issue. For example, consider the case of a typical MySQL database. To query MySQL in a reasonable period of time, data must map to indexes, which map to queries, which map to end-user data, and this means working with plaintext. The only way to encrypt such a store would be to encrypt the entire data store and decrypt it entirely prior to use; since data use is constant, the data would in practice never be encrypted, which is why waivers were sought. Using private biometrics, matching and other operations can now be performed on data that is always encrypted.
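A minimal sketch of the 1:1 verification just described, assuming the feature vectors have already been produced by the (unspecified) one-way transform: two captures of the same identity should lie close in Euclidean distance, while different identities lie far apart. The threshold value is illustrative, not taken from any named implementation.

```python
# A minimal 1:1 verification sketch over pre-computed feature vectors.
# The 0.8 threshold is illustrative only.
import numpy as np

def verify(enrolled: np.ndarray, probe: np.ndarray, threshold: float = 0.8) -> bool:
    """Accept when the Euclidean distance falls below the tuned threshold."""
    return float(np.linalg.norm(enrolled - probe)) < threshold

rng = np.random.default_rng(1)
enrolled = rng.normal(size=128)
same_person = enrolled + rng.normal(scale=0.05, size=128)  # small perturbation
other_person = rng.normal(size=128)                        # unrelated identity
print(verify(enrolled, same_person))    # True: captures of one identity are close
print(verify(enrolled, other_person))   # False: different identities are far apart
```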
Private biometrics, as implemented in a system that conforms to IEEE 2410-2018 BOPS III, comply with the standards of the Multiple Independent Levels of Security/Safety (MILS) architecture. MILS builds on the Bell and La Padula theories of secure systems, which represent the foundational theories of the US DoD Standard Trusted Computer System Evaluation Criteria (TCSEC), or the DoD "Orange Book" (see the paragraphs above). The high-assurance security architecture of private biometrics is based on the concepts of separation and controlled information flow, and is implemented using only mechanisms that support trustworthy components; the security solution is thus non-bypassable, evaluable, always invoked and tamper-proof. This is achieved using the one-way encrypted feature vector, which allows only encrypted data (and never stores or processes plaintext) between security domains and through trustworthy security monitors.

Unsecured biometric data are sensitive due to their nature and how they can be used. Implicit authentication is a common practice when using passwords, as a user may prove knowledge of a password without actually revealing it. However, two biometric measurements of the same person may differ, and this fuzziness of biometric measurements renders implicit authentication protocols useless in the biometrics domain. Similarly, private equality testing, where two devices or entities want to check whether the values they hold are the same without presenting them to each other or to any other device or entity, is well practiced, and detailed solutions have been published. However, since two biometrics of the same person may not be equal, these protocols are also ineffective in the biometrics domain. For instance, if the two values differ in τ bits, then one of the parties may need to present 2τ candidate values for checking.[14]

Prior to the introduction of private biometrics, biometric techniques required the use of plaintext search for matching, so each biometric was required to be visible (unencrypted) at some point in the search process. It was recognized that it would be beneficial to instead conduct matching on an encrypted dataset. Encrypted matching is typically accomplished using one-way encryption algorithms, meaning that, given the encrypted data, there is no mechanism to recover the original data. Common one-way encryption algorithms are MD5 and SHA-512. However, these algorithms are not homomorphic, meaning that there is no way to compare the closeness of two samples of encrypted data, and thus no means to match. The inability to compare renders any form of classification model in machine learning untenable.

Homomorphic encryption is a form of encryption that allows computations to be carried out on ciphertext, thus generating an encrypted match result. Matching in the encrypted space using a one-way encryption offers the highest level of privacy. With a payload of one-way encrypted feature vectors, there is no need to decrypt and no need for key management.

A promising method of homomorphic encryption on biometric data is the use of machine learning models to generate feature vectors. For black-box models, such as neural networks, these vectors cannot by themselves be used to recreate the initial input data and are therefore a form of one-way encryption. However, the vectors are Euclidean-measurable, so similarity between vectors can be calculated. This process allows biometric data to be homomorphically encrypted.
For instance, consider facial recognition performed with the Euclidean distance: when we match two face images using a neural network, each face is first converted to a float vector, which in the case of Google's FaceNet is of size 128. The representation of this float vector is arbitrary and cannot be reverse-engineered back to the original face. The matrix multiplication from the neural network becomes the vector of the face, which is Euclidean-measurable but unrecognizable and cannot be mapped back to any image.

Prior to the availability of private biometrics, research focused on ensuring the prover's biometric would be protected against misuse by a dishonest verifier through the use of partially homomorphic data or decrypted (plaintext) data coupled with a private verification function intended to shield private data from the verifier. This method introduced computational and communication overhead that was inexpensive for 1:1 verification but proved infeasible for large 1:many identification requirements. From 1998 to 2018, cryptographic researchers pursued four independent approaches to solve the problem: cancelable biometrics, BioHashing, biometric cryptosystems, and two-way partially homomorphic encryption.[15]

The feature transformation approach "transformed" biometric feature data to random data through the use of a client-specific key or password. Examples of this approach included biohashing and cancelable biometrics. The approach offered reasonable performance but was found to be insecure if the client-specific key was compromised.

Cancelable biometrics. The first use of indirect biometric templates (later called cancelable biometrics) was proposed in 1998 by Davida, Frankel and Matt.[16] Three years later, Ruud Bolle, Nilini Ratha and Jonathan Connell, working in IBM's Exploratory Computer Vision Group, proposed the first concrete idea of cancelable biometrics.[17][18]

Cancelable biometrics were defined in these communications as biometric templates that are unique to every application and that, if lost, can easily be cancelled and replaced. The solution was (at the time) thought to provide higher privacy levels by allowing multiple templates to be associated with the same biometric data, since only the transformed (hashed) version of the biometric template was stored. The solution was also promoted for its ability to prevent linkage of the user's biometric data across various databases, since only a transformed version of the biometric template (and not the unencrypted, plaintext template) was stored for later use.[19][20][21]

Cancelable biometrics were deemed useful because of their diversity, reusability and one-way encryption (which, at the time, was referred to as a one-way transformation). Specifically, no cancelable template could be used in two different applications (diversity); it was straightforward to revoke and reissue a cancelable template in the event of compromise (reusability); and the one-way hash of the template prevented recovery of sensitive biometric data. Finally, it was postulated that the transformation would not deteriorate accuracy.[22]

Research into cancelable biometrics moved into BioHashing by 2004. The BioHashing feature transformation technique was first published by Jin, Ling and Goh and combined biometric features with a tokenized (pseudo-)random number (TRN).
Specifically, BioHash combined the biometric template with a user-specific TRN to produce a set of non-invertible binary bit strings that were thought to be irreproducible if both the biometric and the TRN were not presented simultaneously.[23]

Indeed, it was first claimed that the BioHashing technique had achieved perfect accuracy (zero equal error rates) for faces, fingerprints and palm prints, and the method gained further traction when its extremely low error rates were combined with the claim that its biometric data was secure against loss because factoring the inner products of the biometric features and the TRN was an intractable problem.[23][19]

By 2005, however, researchers Cheung and Kong (Hong Kong Polytechnic and University of Waterloo) asserted in two journal articles that BioHashing performance was actually based on the sole use of the TRN, and conjectured that the introduction of any form of biometric would become meaningless, since the system could be used only with the tokens.[24][25] These researchers also reported that the non-invertibility of the random hash would deteriorate the biometric recognition accuracy when the genuine token was stolen and used by an impostor ("the stolen-token scenario").[24][26]

Biometric cryptosystems were originally developed either to secure cryptographic keys using biometric features ("key-biometrics binding") or to directly generate cryptographic keys from biometric features.[27] Biometric cryptosystems used cryptography to provide the system with cryptographic key protection and biometrics to provide the system with dynamically generated keys to secure the template and biometric system.[28]

The acceptance and deployment of biometric cryptosystem solutions was constrained, however, by the fuzziness of biometric data. Hence, error correction codes (ECCs), including fuzzy vault and fuzzy commitment, were adopted to alleviate the fuzziness of the biometric data. This overall approach proved impractical, however, due to the need for accurate authentication, and it suffered from security issues because of the strong restrictions required to support authentication accuracy.[29]

Future research on biometric cryptosystems is likely to focus on a number of remaining implementation challenges and security issues involving both the fuzzy representations of biometric identifiers and the imperfect nature of biometric feature extraction and matching algorithms. Unfortunately, since biometric cryptosystems can, at the current time, be defeated using relatively simple strategies that leverage both of these weaknesses, it is unlikely that such systems will be able to deliver acceptable end-to-end performance until suitable advances are achieved.[30]

The two-way partially homomorphic encryption method for private biometrics was similar to today's private biometrics in that it offered protection of biometric feature data through the use of homomorphic encryption and measured the similarity of encrypted feature data using metrics such as the Hamming and Euclidean distances. However, the method was vulnerable to data loss due to the existence of secret keys that had to be managed by trusted parties. Widespread adoption of the approach also suffered from the encryption schemes' complex key management and large computational and data storage requirements.[15]
https://en.wikipedia.org/wiki/Private_biometrics
Bidirectional encoder representations from transformers (BERT) is a language model introduced in October 2018 by researchers at Google.[1][2] It learns to represent text as a sequence of vectors using self-supervised learning, and uses the encoder-only transformer architecture. BERT dramatically improved the state of the art for large language models. As of 2020, BERT is a ubiquitous baseline in natural language processing (NLP) experiments.[3]

BERT is trained by masked token prediction and next sentence prediction. As a result of this training process, BERT learns contextual, latent representations of tokens in their context, similar to ELMo and GPT-2.[4] It found applications for many natural language processing tasks, such as coreference resolution and polysemy resolution.[5] It is an evolutionary step over ELMo, and spawned the study of "BERTology", which attempts to interpret what is learned by BERT.[3]

BERT was originally implemented in the English language at two model sizes, BERT-BASE (110 million parameters) and BERT-LARGE (340 million parameters). Both were trained on the Toronto BookCorpus[6] (800M words) and English Wikipedia (2,500M words).[1]: 5 The weights were released on GitHub.[7] On March 11, 2020, 24 smaller models were released, the smallest being BERT-TINY with just 4 million parameters.[7]

BERT is an "encoder-only" transformer architecture. At a high level, BERT consists of four modules: a tokenizer, an embedding layer, a stack of Transformer encoder blocks, and a task head. The task head is necessary for pre-training, but it is often unnecessary for so-called "downstream tasks," such as question answering or sentiment classification. Instead, one removes the task head, replaces it with a newly initialized module suited for the task, and fine-tunes the new module. The latent vector representation of the model is directly fed into this new module, allowing for sample-efficient transfer learning.[1][8]

This section describes the embedding used by BERT-BASE. The other model, BERT-LARGE, is similar, just larger.

The tokenizer of BERT is WordPiece, which is a sub-word strategy like byte pair encoding. Its vocabulary size is 30,000, and any token not appearing in its vocabulary is replaced by [UNK] ("unknown").

The first layer is the embedding layer, which contains three components: token embeddings, position embeddings, and segment type embeddings. The three embedding vectors are added together, representing the initial token representation as a function of these three pieces of information. After embedding, the vector representation is normalized using a LayerNorm operation, outputting a 768-dimensional vector for each input token. After this, the representation vectors are passed forward through 12 Transformer encoder blocks, and are decoded back to 30,000-dimensional vocabulary space using a basic affine transformation layer.

The encoder stack of BERT has two free parameters: $L$, the number of layers, and $H$, the hidden size. There are always $H/64$ self-attention heads, and the feed-forward/filter size is always $4H$. By varying these two numbers, one obtains an entire family of BERT models.[9] For BERT, the notation for the encoder stack is written as L/H. For example, BERT-BASE is written as 12L/768H, BERT-LARGE as 24L/1024H, and BERT-TINY as 2L/128H; this parametrization is made concrete in the sketch following this passage.

BERT was pre-trained simultaneously on two tasks.[10]

In masked language modeling, 15% of tokens would be randomly selected for the masked-prediction task, and the training objective was to predict the masked token given its context.
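As referenced above, the L/H parametrization can be made concrete with a small helper (illustrative only, not part of the released BERT code):

```python
def bert_config(L, H):
    # The whole family is generated by two numbers: L layers and hidden
    # size H; heads and feed-forward size are derived as H/64 and 4H.
    return {"layers": L, "hidden": H,
            "attention_heads": H // 64, "feed_forward": 4 * H}

print(bert_config(12, 768))   # BERT-BASE  (12L/768H):  12 heads, FFN 3072
print(bert_config(24, 1024))  # BERT-LARGE (24L/1024H): 16 heads, FFN 4096
print(bert_config(2, 128))    # BERT-TINY  (2L/128H):    2 heads, FFN  512
```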
In more detail, the selected token is replaced with a [MASK] token with probability 80%, replaced with a uniformly sampled random token with probability 10%, and left unchanged with probability 10%. The reason not all selected tokens are masked is to avoid the dataset shift problem. The dataset shift problem arises when the distribution of inputs seen during training differs significantly from the distribution encountered during inference. A trained BERT model might be applied to word representation (like Word2Vec), where it would be run over sentences not containing any [MASK] tokens. It was later found that more diverse training objectives are generally better.[11]

As an illustrative example, consider the sentence "my dog is cute". It would first be divided into tokens like "my1 dog2 is3 cute4". Then a random token in the sentence would be picked. Let it be the 4th one, "cute4". Next, there would be three possibilities: with probability 80%, the token is replaced with [MASK] ("my1 dog2 is3 [MASK]4"); with probability 10%, it is replaced with a random token (e.g., "my1 dog2 is3 happy4"); and with probability 10%, it is left unchanged ("my1 dog2 is3 cute4"). After processing the input text, the model's 4th output vector is passed to its decoder layer, which outputs a probability distribution over its 30,000-dimensional vocabulary space. (A code sketch of this corruption step appears at the end of this passage.)

Given two spans of text, the model predicts if these two spans appeared sequentially in the training corpus, outputting either [IsNext] or [NotNext]. The first span starts with a special token [CLS] (for "classify"). The two spans are separated by a special token [SEP] (for "separate"). After processing the two spans, the first output vector (the vector coding for [CLS]) is passed to a separate neural network for the binary classification into [IsNext] and [NotNext].

BERT is meant as a general pretrained model for various applications in natural language processing. That is, after pre-training, BERT can be fine-tuned with fewer resources on smaller datasets to optimize its performance on specific tasks such as natural language inference and text classification, and sequence-to-sequence-based language generation tasks such as question answering and conversational response generation.[12]

The original BERT paper published results demonstrating that a small amount of finetuning (for BERT-LARGE, 1 hour on 1 Cloud TPU) allowed it to achieve state-of-the-art performance on a number of natural language understanding tasks, such as GLUE, SQuAD and SWAG.[1]

In the original paper, all parameters of BERT are finetuned, and it is recommended that, for downstream applications that are text classifications, the output token at the [CLS] input token be fed into a linear-softmax layer to produce the label outputs.[1]

The original code base defined the final linear layer as a "pooler layer", in analogy with global pooling in computer vision, even though it simply discards all output tokens except the one corresponding to [CLS].[15]

BERT was trained on the BookCorpus (800M words) and a filtered version of English Wikipedia (2,500M words) without lists, tables, and headers. Training BERT-BASE on 4 cloud TPUs (16 TPU chips total) took 4 days, at an estimated cost of 500 USD.[7] Training BERT-LARGE on 16 cloud TPUs (64 TPU chips total) took 4 days.[1]

Language models like ELMo, GPT-2, and BERT spawned the study of "BERTology", which attempts to interpret what is learned by these models.
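As referenced above, a minimal sketch of the masked-LM corruption step (the names are illustrative; real implementations operate on token IDs in batched tensors):

```python
import random

def corrupt_for_mlm(tokens, vocab, select_prob=0.15):
    # Select ~15% of positions; of those, 80% become [MASK], 10% become
    # a random token, and 10% are left unchanged (guards against the
    # dataset shift between training and inference).
    corrupted, targets = list(tokens), {}
    for i, token in enumerate(tokens):
        if random.random() < select_prob:
            targets[i] = token  # the model must predict the original token
            roll = random.random()
            if roll < 0.8:
                corrupted[i] = "[MASK]"
            elif roll < 0.9:
                corrupted[i] = random.choice(vocab)
    return corrupted, targets

print(corrupt_for_mlm("my dog is cute".split(), vocab=["happy", "red", "ran"]))
```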
The performance of these language models on natural language understanding tasks is not yet well understood.[3][16][17] Several research publications in 2018 and 2019 focused on investigating the relationship behind BERT's output as a result of carefully chosen input sequences,[18][19] analysis of internal vector representations through probing classifiers,[20][21] and the relationships represented by attention weights.[16][17]

The high performance of the BERT model could also be attributed to the fact that it is bidirectionally trained.[22] This means that BERT, based on the Transformer model architecture, applies its self-attention mechanism to learn information from a text from the left and right side during training, and consequently gains a deep understanding of the context. For example, the word "fine" can have two different meanings depending on the context ("I feel fine today", "She has fine blond hair"). BERT considers the words surrounding the target word "fine" from the left and right side.

However, this comes at a cost: because the encoder-only architecture lacks a decoder, BERT cannot be prompted and cannot generate text; bidirectional models in general do not work effectively without the right-side context, and are thus difficult to prompt. As an illustrative example, if one wishes to use BERT to continue a sentence fragment "Today, I went to", then naively one would mask out all the tokens as "Today, I went to [MASK] [MASK] [MASK] ... [MASK]", where the number of [MASK] tokens is the length of the sentence one wishes to extend to. However, this constitutes a dataset shift, as during training BERT has never seen sentences with that many tokens masked out. Consequently, its performance degrades. More sophisticated techniques allow text generation, but at a high computational cost.[23]

BERT was originally published by Google researchers Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. The design has its origins in pre-training contextual representations, including semi-supervised sequence learning,[24] generative pre-training, ELMo,[25] and ULMFit.[26] Unlike previous models, BERT is a deeply bidirectional, unsupervised language representation, pre-trained using only a plaintext corpus. Context-free models such as word2vec or GloVe generate a single word embedding representation for each word in the vocabulary, whereas BERT takes into account the context for each occurrence of a given word. For instance, whereas the vector for "running" will have the same word2vec vector representation for both of its occurrences in the sentences "He is running a company" and "He is running a marathon", BERT will provide a contextualized embedding that will be different according to the sentence.[4]

On October 25, 2019, Google announced that they had started applying BERT models for English-language search queries within the US.[27] On December 9, 2019, it was reported that BERT had been adopted by Google Search for over 70 languages.[28][29] In October 2020, almost every single English-based query was processed by a BERT model.[30]

The BERT models were influential and inspired many variants.

RoBERTa (2019)[31] was an engineering improvement. It preserves BERT's architecture (slightly larger, at 355M parameters), but improves its training, changing key hyperparameters, removing the next-sentence prediction task, and using much larger mini-batch sizes.

DistilBERT (2019) distills BERT-BASE to a model with just 60% of its parameters (66M), while preserving 95% of its benchmark scores.[32][33] Similarly, TinyBERT (2019)[34] is a distilled model with just 28% of its parameters.
ALBERT (2019)[35] shared parameters across layers, and experimented with independently varying the hidden size and the word-embedding layer's output size as two hyperparameters. The authors also replaced the next sentence prediction task with the sentence-order prediction (SOP) task, where the model must distinguish the correct order of two consecutive text segments from their reversed order.

ELECTRA (2020)[36] applied the idea of generative adversarial networks to the MLM task. Instead of masking out tokens, a small language model generates random plausible substitutions, and a larger network identifies these replaced tokens. The small model aims to fool the large model.

DeBERTa (2020)[37] is a significant architectural variant, with disentangled attention. Its key idea is to treat the positional and token encodings separately throughout the attention mechanism. Instead of combining the positional encoding ($x_{\text{position}}$) and token encoding ($x_{\text{token}}$) into a single input vector ($x_{\text{input}} = x_{\text{position}} + x_{\text{token}}$), DeBERTa keeps them separate as a tuple ($(x_{\text{position}}, x_{\text{token}})$). Then, at each self-attention layer, DeBERTa computes three distinct attention matrices, rather than the single attention matrix used in BERT: content-to-content, content-to-position, and position-to-content.[note 1] The three attention matrices are added together element-wise, then passed through a softmax layer and multiplied by a projection matrix. Absolute position encoding is included in the final self-attention layer as additional input.
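A simplified numpy sketch of the disentangled score computation (it uses absolute position vectors throughout and omits the relative-position indexing, multi-head split, and other details of the actual DeBERTa model):

```python
import numpy as np

def disentangled_scores(x_tok, x_pos, Wq_c, Wk_c, Wq_p, Wk_p):
    # Content and position are projected separately; three score matrices
    # (content-to-content, content-to-position, position-to-content)
    # are summed instead of BERT's single attention matrix.
    q_c, k_c = x_tok @ Wq_c, x_tok @ Wk_c
    q_p, k_p = x_pos @ Wq_p, x_pos @ Wk_p
    scores = q_c @ k_c.T + q_c @ k_p.T + q_p @ k_c.T
    return scores / np.sqrt(3 * x_tok.shape[-1])  # scaled for the 3 terms

n, d = 4, 16
rng = np.random.default_rng(0)
s = disentangled_scores(rng.normal(size=(n, d)), rng.normal(size=(n, d)),
                        *(rng.normal(size=(d, d)) for _ in range(4)))
attention = np.exp(s - s.max(-1, keepdims=True))
attention /= attention.sum(-1, keepdims=True)  # row-wise softmax
```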
https://en.wikipedia.org/wiki/BERT_(language_model)
BLEU (bilingual evaluation understudy) is an algorithm for evaluating the quality of text which has been machine-translated from one natural language to another. Quality is considered to be the correspondence between a machine's output and that of a human: "the closer a machine translation is to a professional human translation, the better it is" – this is the central idea behind BLEU.[1] Invented at IBM in 2001, BLEU was one of the first metrics to claim a high correlation with human judgements of quality,[2][3] and remains one of the most popular automated and inexpensive metrics.

Scores are calculated for individual translated segments—generally sentences—by comparing them with a set of good quality reference translations. Those scores are then averaged over the whole corpus to reach an estimate of the translation's overall quality. Intelligibility or grammatical correctness are not taken into account.[4]

BLEU's output is always a number between 0 and 1. This value indicates how similar the candidate text is to the reference texts, with values closer to 1 representing more similar texts. Few human translations will attain a score of 1, since this would indicate that the candidate is identical to one of the reference translations. For this reason, it is not necessary to attain a score of 1. Because there are more opportunities to match, adding additional reference translations will increase the BLEU score.[5]

A basic, first attempt at defining the BLEU score would take two arguments: a candidate string $\hat{y}$ and a list of reference strings $(y^{(1)}, \ldots, y^{(N)})$. The idea is that $\mathrm{BLEU}(\hat{y}; y^{(1)}, \ldots, y^{(N)})$ should be close to 1 when $\hat{y}$ is similar to $y^{(1)}, \ldots, y^{(N)}$, and close to 0 if not. As an analogy, the BLEU score is like a language teacher trying to score the quality of a student translation $\hat{y}$ by checking how closely it follows the reference answers $y^{(1)}, \ldots, y^{(N)}$.

Since in natural language processing one should evaluate a large set of candidate strings, one must generalize the BLEU score to the case where one has a list of $M$ candidate strings (called a "corpus") $(\hat{y}^{(1)}, \cdots, \hat{y}^{(M)})$, and, for each candidate string $\hat{y}^{(i)}$, a list of reference strings $S_i := (y^{(i,1)}, \ldots, y^{(i,N_i)})$.

Given any string $y = y_1 y_2 \cdots y_K$ and any integer $n \geq 1$, we define the set of its n-grams to be $$G_n(y) = \{y_1 \cdots y_n,\; y_2 \cdots y_{n+1},\; \cdots,\; y_{K-n+1} \cdots y_K\}.$$ Note that it is a set of unique elements, not a multiset allowing redundant elements, so that, for example, $G_2(abab) = \{ab, ba\}$.

Given any two strings $s, y$, define the substring count $C(s, y)$ to be the number of appearances of $s$ as a substring of $y$. For example, $C(ab, abcbab) = 2$.

Now, fix a candidate corpus $\hat{S} := (\hat{y}^{(1)}, \cdots, \hat{y}^{(M)})$ and a reference corpus $S = (S_1, \cdots, S_M)$, where each $S_i := (y^{(i,1)}, \ldots, y^{(i,N_i)})$.
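The two definitions above translate directly into code (a sketch; a "string" here is any sequence of tokens, so plain Python strings work for the worked examples):

```python
def G(y, n):
    # Set of unique n-grams of y (a set, not a multiset).
    return {tuple(y[i:i + n]) for i in range(len(y) - n + 1)}

def C(s, y):
    # Number of appearances of s as a contiguous substring of y.
    return sum(tuple(y[i:i + len(s)]) == tuple(s)
               for i in range(len(y) - len(s) + 1))

assert G("abab", 2) == {("a", "b"), ("b", "a")}  # G_2(abab) = {ab, ba}
assert C("ab", "abcbab") == 2
```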
Define the modified n-gram precision function to be $$p_n(\hat{S}; S) := \frac{\sum_{i=1}^{M} \sum_{s \in G_n(\hat{y}^{(i)})} \min\left(C(s, \hat{y}^{(i)}),\; \max_{y \in S_i} C(s, y)\right)}{\sum_{i=1}^{M} \sum_{s \in G_n(\hat{y}^{(i)})} C(s, \hat{y}^{(i)})}.$$ The modified n-gram precision, which looks complicated, is merely a straightforward generalization of the prototypical case: one candidate sentence and one reference sentence. In this case, it is $$p_n(\{\hat{y}\}; \{y\}) = \frac{\sum_{s \in G_n(\hat{y})} \min(C(s, \hat{y}), C(s, y))}{\sum_{s \in G_n(\hat{y})} C(s, \hat{y})}.$$

To work up to this expression, we start with the most obvious n-gram count summation: $$\sum_{s \in G_n(\hat{y})} C(s, y) = \text{number of n-substrings in } \hat{y} \text{ that appear in } y.$$ This quantity measures how many n-grams in the reference sentence are reproduced by the candidate sentence. Note that we count the n-substrings, not the n-grams. For example, when $\hat{y} = aba$, $y = abababa$, $n = 2$, all the 2-substrings in $\hat{y}$ (ab and ba) appear in $y$ 3 times each, so the count is 6, not 2.

In the above situation, however, the candidate string is too short. Instead of 3 appearances of $ab$ it contains only one, so we add a minimum function to correct for that: $$\sum_{s \in G_n(\hat{y})} \min(C(s, \hat{y}), C(s, y)).$$

This count summation cannot be used to compare between sentences, since it is not normalized. If both the reference and the candidate sentences are long, the count could be big even if the candidate is of very poor quality. So we normalize it: $$\frac{\sum_{s \in G_n(\hat{y})} \min(C(s, \hat{y}), C(s, y))}{\sum_{s \in G_n(\hat{y})} C(s, \hat{y})}.$$ The normalization is such that the result is always a number in $[0, 1]$, allowing meaningful comparisons between corpuses. It is zero if none of the n-substrings in the candidate is in the reference. It is one if every n-gram in the candidate appears in the reference at least as many times as in the candidate. In particular, if the candidate is a substring of the reference, then it is one.

The modified n-gram precision unduly gives a high score for candidate strings that are "telegraphic", that is, containing all the n-grams of the reference strings, but for as few times as possible. In order to punish candidate strings that are too short, define the brevity penalty to be $$BP(\hat{S}; S) := e^{-(r/c - 1)^{+}},$$ where $(r/c - 1)^{+} = \max(0, r/c - 1)$ is the positive part of $r/c - 1$.

Here $c$ is the length of the candidate corpus, that is, $c := \sum_{i=1}^{M} |\hat{y}^{(i)}|$, where $|y|$ is the length of $y$, and $r$ is the effective reference corpus length, that is, $r := \sum_{i=1}^{M} |y^{(i,j)}|$, where $y^{(i,j)} = \arg\min_{y \in S_i} \big| |y| - |\hat{y}^{(i)}| \big|$, that is, the sentence from $S_i$ whose length is as close to $|\hat{y}^{(i)}|$ as possible.
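Following the formulas literally (and reusing G and C from the sketch above), the modified n-gram precision and the brevity penalty can be written as:

```python
import math

def modified_precision(candidates, references, n):
    # p_n: clipped n-gram matches over total candidate n-gram count,
    # summed across the whole corpus.
    numerator = denominator = 0
    for y_hat, refs in zip(candidates, references):
        for s in G(y_hat, n):
            c_hat = C(s, y_hat)
            numerator += min(c_hat, max(C(s, y) for y in refs))
            denominator += c_hat
    return numerator / denominator

def brevity_penalty(candidates, references):
    # BP = exp(-(r/c - 1)^+), with r the effective reference length.
    c = sum(len(y_hat) for y_hat in candidates)
    r = sum(len(min(refs, key=lambda y: abs(len(y) - len(y_hat))))
            for y_hat, refs in zip(candidates, references))
    return math.exp(-max(0.0, r / c - 1))
```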
There is not a single definition of BLEU, but a whole family of them, parametrized by the weighting vector $w := (w_1, w_2, \cdots)$. It is a probability distribution on $\{1, 2, 3, \cdots\}$, that is, $\sum_{i=1}^{\infty} w_i = 1$ and $w_i \in [0, 1]$ for all $i$.

With a choice of $w$, the BLEU score is $$\mathrm{BLEU}_w(\hat{S}; S) := BP(\hat{S}; S) \cdot \exp\left(\sum_{n=1}^{\infty} w_n \ln p_n(\hat{S}; S)\right).$$ In words, it is a weighted geometric mean of all the modified n-gram precisions, multiplied by the brevity penalty. We use the weighted geometric mean, rather than the weighted arithmetic mean, to strongly favor candidate corpuses that are simultaneously good according to multiple n-gram precisions. The most typical choice, the one recommended in the original paper, is $w_1 = \cdots = w_4 = \frac{1}{4}$.[1]

This is illustrated in the following example from Papineni et al. (2002), with the candidate translation "the the the the the the the" scored against the references "the cat is on the mat" and "there is a cat on the mat". Of the seven words in the candidate translation, all of them appear in the reference translations. Thus the candidate text is given a unigram precision of $m / w_t = 7/7 = 1$, where $m$ is the number of words from the candidate that are found in the reference, and $w_t$ is the total number of words in the candidate. This is a perfect score, despite the fact that the candidate translation above retains little of the content of either of the references.

The modification that BLEU makes is fairly straightforward. For each word in the candidate translation, the algorithm takes its maximum total count, $m_{\max}$, in any of the reference translations. In the example above, the word "the" appears twice in reference 1, and once in reference 2. Thus $m_{\max} = 2$.

For the candidate translation, the count $m_w$ of each word is clipped to a maximum of $m_{\max}$ for that word. In this case, "the" has $m_w = 7$ and $m_{\max} = 2$, thus $m_w$ is clipped to 2. These clipped counts $m_w$ are then summed over all distinct words in the candidate. This sum is then divided by the total number of unigrams in the candidate translation. In the above example, the modified unigram precision score would be $2/7$.

In practice, however, using individual words as the unit of comparison is not optimal. Instead, BLEU computes the same modified precision metric using n-grams. The length which has the "highest correlation with monolingual human judgements"[6] was found to be four. The unigram scores are found to account for the adequacy of the translation, how much information is retained. The longer n-gram scores account for the fluency of the translation, or to what extent it reads like "good English".

An example of a candidate translation for the same references as above might be "the cat". In this example, the modified unigram precision would be $2/2$, as the word "the" and the word "cat" appear once each in the candidate, and the total number of words is two. The modified bigram precision would be $1/1$, as the bigram "the cat" appears once in the candidate. It has been pointed out that precision is usually twinned with recall to overcome this problem,[7] as the unigram recall of this example would be $3/6$ or $2/7$.
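Combining the pieces from the previous sketches gives the full score, and the clipped counts in the "telegraphic" example above can be checked directly:

```python
def bleu(candidates, references, weights=(0.25, 0.25, 0.25, 0.25)):
    # Weighted geometric mean of p_1..p_4, times the brevity penalty.
    log_sum = 0.0
    for n, w in enumerate(weights, start=1):
        p = modified_precision(candidates, references, n)
        if p == 0:
            return 0.0  # the geometric mean collapses if any p_n is zero
        log_sum += w * math.log(p)
    return brevity_penalty(candidates, references) * math.exp(log_sum)

candidate = "the the the the the the the".split()
refs = ["the cat is on the mat".split(), "there is a cat on the mat".split()]
print(modified_precision([candidate], [refs], 1))  # 0.2857... = 2/7
```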
The problem is that, as there are multiple reference translations, a bad translation could easily have an inflated recall: for example, a translation which consisted of all the words in each of the references.[8]

To produce a score for the whole corpus, the modified precision scores for the segments are combined using the geometric mean, multiplied by a brevity penalty to prevent very short candidates from receiving too high a score. Let $r$ be the total length of the reference corpus, and $c$ the total length of the translation corpus. If $c \leq r$, the brevity penalty applies, defined to be $e^{(1 - r/c)}$. (In the case of multiple reference sentences, $r$ is taken to be the sum of the lengths of the sentences whose lengths are closest to the lengths of the candidate sentences. However, in the version of the metric used by NIST evaluations prior to 2009, the shortest reference sentence had been used instead.)

iBLEU is an interactive version of BLEU that allows a user to visually examine the BLEU scores obtained by the candidate translations. It also allows comparing two different systems in a visual and interactive manner, which is useful for system development.[9]

BLEU has frequently been reported as correlating well with human judgement,[10][11][12] and remains a benchmark for the assessment of any new evaluation metric. There are, however, a number of criticisms that have been voiced. It has been noted that, although in principle capable of evaluating translations of any language, BLEU cannot, in its present form, deal with languages lacking word boundaries.[13] Although designed to be used with several reference translations, in practice it is often used with only a single one.[2] BLEU is infamously dependent on the tokenization technique, and scores achieved with different ones are incomparable (which is often overlooked); in order to improve reproducibility and comparability, the SacreBLEU variant was designed.[2]

It has been argued that although BLEU has significant advantages, there is no guarantee that an increase in BLEU score is an indicator of improved translation quality.[14]
https://en.wikipedia.org/wiki/BLEU
A cryptocurrency (colloquially crypto) is a digital currency designed to work through a computer network that is not reliant on any central authority, such as a government or bank, to uphold or maintain it.[2]

Individual coin ownership records are stored in a digital ledger or blockchain, which is a computerized database that uses a consensus mechanism to secure transaction records, control the creation of additional coins, and verify the transfer of coin ownership.[3][4][5] The two most common consensus mechanisms are proof of work and proof of stake.[6] Despite the name, which has come to describe many of the fungible blockchain tokens that have been created, cryptocurrencies are not considered to be currencies in the traditional sense, and varying legal treatments have been applied to them in various jurisdictions, including classification as commodities, securities, and currencies. Cryptocurrencies are generally viewed as a distinct asset class in practice.[7][8][9]

The first cryptocurrency was bitcoin, which was first released as open-source software in 2009. As of June 2023, there were more than 25,000 other cryptocurrencies in the marketplace, of which more than 40 had a market capitalization exceeding $1 billion.[10] As of April 2025, the cryptocurrency market capitalization was estimated at $2.76 trillion.[11]

In 1983, American cryptographer David Chaum conceived of a type of cryptographic electronic money called ecash.[12][13] Later, in 1995, he implemented it through Digicash,[14] an early form of cryptographic electronic payments. Digicash required user software in order to withdraw notes from a bank and designate specific encrypted keys before they could be sent to a recipient. This allowed the digital currency to be untraceable by a third party.

In 1996, the National Security Agency published a paper entitled How to Make a Mint: The Cryptography of Anonymous Electronic Cash, describing a cryptocurrency system. The paper was first published in an MIT mailing list (October 1996) and later (April 1997) in The American Law Review.[15]

In 1998, Wei Dai described "b-money," an anonymous, distributed electronic cash system.[16] Shortly thereafter, Nick Szabo described bit gold.[17] Like bitcoin and other cryptocurrencies that would follow it, bit gold (not to be confused with the later gold-based exchange BitGold) was described as an electronic currency system that required users to complete a proof of work function, with solutions being cryptographically put together and published.

In January 2009, bitcoin was created by pseudonymous developer Satoshi Nakamoto. It used SHA-256, a cryptographic hash function, in its proof-of-work scheme.[18][19] In April 2011, Namecoin was created as an attempt at forming a decentralized DNS. In October 2011, Litecoin was released, which used scrypt as its hash function instead of SHA-256. Peercoin, created in August 2012, used a hybrid of proof-of-work and proof-of-stake.[20]

Cryptocurrency has undergone several periods of growth and retraction, including several bubbles and market crashes, such as in 2011, 2013–2014/15, 2017–2018, and 2021–2023.[21][22]

On 6 August 2014, the UK announced its Treasury had commissioned a study of cryptocurrencies and what role, if any, they could play in the UK economy.
The study was also to report on whether regulation should be considered.[23] Its final report was published in 2018,[24] and it issued a consultation on cryptoassets and stablecoins in January 2021.[25]

In June 2021, El Salvador became the first country to accept bitcoin as legal tender, after the Legislative Assembly had voted 62–22 to pass a bill submitted by President Nayib Bukele classifying the cryptocurrency as such.[26]

In August 2021, Cuba followed with Resolution 215 to recognize and regulate cryptocurrencies such as bitcoin.[27]

In September 2021, the government of China, the single largest market for cryptocurrency, declared all cryptocurrency transactions illegal. This completed a crackdown on cryptocurrency that had previously banned the operation of intermediaries and miners within China.[28]

On 15 September 2022, the world's second largest cryptocurrency at that time, Ethereum, transitioned its consensus mechanism from proof-of-work (PoW) to proof-of-stake (PoS) in an upgrade process known as "the Merge". According to Ethereum's founder, the upgrade would cut both Ethereum's energy use and carbon-dioxide emissions by 99.9%.[29]

On 11 November 2022, FTX Trading Ltd., a cryptocurrency exchange that also operated a crypto hedge fund and had been valued at $18 billion,[30] filed for bankruptcy.[31] The financial impact of the collapse extended beyond the immediate FTX customer base, as reported,[32] while, at a Reuters conference, financial industry executives said that "regulators must step in to protect crypto investors."[33] Technology analyst Avivah Litan commented on the cryptocurrency ecosystem that "everything...needs to improve dramatically in terms of user experience, controls, safety, customer service."[34]

According to Jan Lansky, a cryptocurrency is a system that meets six conditions:[35]

In March 2018, the word cryptocurrency was added to the Merriam-Webster Dictionary.[36]

After the early innovation of bitcoin in 2008 and the early network effect it gained, tokens, cryptocurrencies, and other digital assets that were not bitcoin became collectively known during the 2010s as alternative cryptocurrencies,[37][38][39] or "altcoins".[40] Sometimes the term "alt coins" was used,[41][42] or, disparagingly, "shitcoins".[43] Paul Vigna of The Wall Street Journal described altcoins in 2020 as "alternative versions of Bitcoin"[44] given its role as the model protocol for cryptocurrency designers. A Polytechnic University of Catalonia thesis in 2021 used a broader description, including not only alternative versions of bitcoin but every cryptocurrency other than bitcoin. As of early 2020, there were more than 5,000 cryptocurrencies.

Altcoins often have underlying differences when compared to bitcoin. For example, Litecoin aims to process a block every 2.5 minutes, rather than bitcoin's 10 minutes, which allows Litecoin to confirm transactions faster than bitcoin.[20] Another example is Ethereum, which has smart contract functionality that allows decentralized applications to be run on its blockchain.[45] Ethereum was the most used blockchain in 2020, according to Bloomberg News.[46] In 2016, it had the largest "following" of any altcoin, according to The New York Times.[47]

Significant market price rallies across multiple altcoin markets are often referred to as an "altseason".[48][49]

Stablecoins are cryptocurrencies designed to maintain a stable level of purchasing power.[50] Notably, these designs are not foolproof, as a number of stablecoins have crashed or lost their peg.
For example, on 11 May 2022, Terra's stablecoin UST fell from $1 to 26 cents.[51][52] The subsequent failure of Terraform Labs resulted in the loss of nearly $40B invested in the Terra and Luna coins.[53] In September 2022, South Korean prosecutors requested the issuance of an Interpol Red Notice against the company's founder, Do Kwon.[54] In Hong Kong, the expected regulatory framework for stablecoins in 2023/24 is being shaped and includes a few considerations.[55]

Memecoins are a category of cryptocurrencies that originated from Internet memes or jokes. The most notable example is Dogecoin, a memecoin featuring the Shiba Inu dog from the Doge meme.[56] Memecoins are known for extreme volatility; for example, the record-high value for a Dogecoin was 73 cents, but that had plunged to 13 cents by mid-2024.[56] Scams are prolific among memecoins.[56]

Physical cryptocurrency coins have been made as promotional items and some have become collectibles.[57] Some of these have a private key embedded in them to access crypto worth a few dollars. There have also been attempts to issue bitcoin "bank notes".[58]

The term "physical bitcoin" is used in the finance industry when investment funds that hold crypto purchased from crypto exchanges put their crypto holdings in a specialised bank called a "custodian".[59]

These physical representations of cryptocurrency do not hold any value by themselves; they are utilized only for collectable purposes. Examples include the first incarnation of the physical bitcoin, the Casascius coin, made of silver, brass or aluminum, sometimes with gold plating, and the Titan Bitcoin, whose silver and gold versions are sought after by numismatists.[60]

Cryptocurrency is produced by an entire cryptocurrency system collectively, at a rate that is defined when the system is created and that is publicly stated. In centralized banking and economic systems such as the US Federal Reserve System, corporate boards or governments control the supply of currency. In the case of cryptocurrency, companies or governments cannot produce new units and have not so far provided backing for other firms, banks, or corporate entities that hold asset value measured in it. The underlying technical system upon which cryptocurrencies are based was created by Satoshi Nakamoto.[61]

Within a proof-of-work system such as bitcoin, the safety, integrity, and balance of ledgers are maintained by a community of mutually distrustful parties referred to as miners. Miners use their computers to help validate and timestamp transactions, adding them to the ledger in accordance with a particular timestamping scheme.[18] In a proof-of-stake blockchain, transactions are validated by holders of the associated cryptocurrency, sometimes grouped together in stake pools.

Most cryptocurrencies are designed to gradually decrease the production of that currency, placing a cap on the total amount of that currency that will ever be in circulation.[62] Compared with ordinary currencies held by financial institutions or kept as cash on hand, cryptocurrencies can be more difficult for seizure by law enforcement.[3]

The validity of each cryptocurrency's coins is provided by a blockchain. A blockchain is a continuously growing list of records, called blocks, which are linked and secured using cryptography.[61][63] Each block typically contains a hash pointer as a link to a previous block,[63] a timestamp, and transaction data.[64] By design, blockchains are inherently resistant to modification of the data.
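The hash-pointer structure can be illustrated in a few lines (a toy sketch, not a real blockchain implementation):

```python
import hashlib, json, time

def make_block(prev_hash, transactions):
    # Each block commits to its predecessor's hash, a timestamp, and its
    # transaction data; altering an earlier block changes every later hash.
    block = {"prev_hash": prev_hash, "timestamp": time.time(),
             "tx": transactions}
    payload = json.dumps(block, sort_keys=True).encode()
    block["hash"] = hashlib.sha256(payload).hexdigest()
    return block

genesis = make_block("0" * 64, ["coinbase -> alice: 50"])
block_1 = make_block(genesis["hash"], ["alice -> bob: 10"])
```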
A blockchain is "an open,distributed ledgerthat can record transactions between two parties efficiently and in a verifiable and permanent way".[65]For use as a distributed ledger, a blockchain is typically managed by apeer-to-peernetwork collectively adhering to a protocol for validating new blocks. Once recorded, the data in any given block cannot be altered retroactively without the alteration of all subsequent blocks, which requires collusion of the network majority. Blockchains aresecure by designand are an example of a distributed computing system with highByzantine fault tolerance.Decentralizedconsensus has therefore been achieved with a blockchain.[66] Anodeis a computer that connects to a cryptocurrency network. The node supports the cryptocurrency's network through either relaying transactions, validation, or hosting a copy of the blockchain. In terms of relaying transactions, each network computer (node) has a copy of the blockchain of the cryptocurrency it supports. When a transaction is made, the node creating the transaction broadcasts details of the transaction using encryption to other nodes throughout the node network so that the transaction (and every other transaction) is known. Node owners are either volunteers, those hosted by the organization or body responsible for developing the cryptocurrency blockchain network technology, or those who are enticed to host a node to receive rewards from hosting the node network.[67] Cryptocurrencies use various timestamping schemes to "prove" the validity of transactions added to the blockchain ledger without the need for a trusted third party. The first timestamping scheme invented was theproof-of-workscheme. The most widely used proof-of-work schemes are based on SHA-256 andscrypt.[20] Some other hashing algorithms that are used for proof-of-work includeCryptoNote,Blake,SHA-3, andX11. Another method is called theproof-of-stakescheme. Proof-of-stake is a method of securing a cryptocurrency network and achieving distributed consensus through requesting users to show ownership of a certain amount of currency. It is different from proof-of-work systems that run difficult hashing algorithms to validate electronic transactions. The scheme is largely dependent on the coin, and there is currently no standard form of it. Some cryptocurrencies use a combined proof-of-work and proof-of-stake scheme.[20] On a blockchain,miningis the validation of transactions. For this effort, successful miners obtain new cryptocurrency as a reward. The reward decreasestransaction feesby creating a complementary incentive to contribute to the processing power of the network. The rate of generating hashes, which validate any transaction, has been increased by the use of specialized hardware such asFPGAsandASICsrunning complex hashing algorithms like SHA-256 andscrypt.[68]This arms race for cheaper-yet-efficient machines has existed since bitcoin was introduced in 2009.[68]Mining is measured byhash rate, typically in TH/s.[69]A 2023IMFworking paper found that crypto mining could generate 450 million tons of CO2emissions by 2027, accounting for 0.7 percent of global emissions, or 1.2 percent of the world total[70] With more people entering the world of virtual currency, generating hashes for validation has become more complex over time, forcing miners to invest increasingly large sums of money to improve computing performance. 
Consequently, the reward for finding a hash has diminished and often does not justify the investment in equipment and cooling facilities (to mitigate the heat the equipment produces) and the electricity required to run them.[71] Popular regions for mining include those with inexpensive electricity, a cold climate, and jurisdictions with clear and conducive regulations. By July 2019, bitcoin's electricity consumption was estimated to be approximately 7 gigawatts, around 0.2% of the global total, or equivalent to the energy consumed nationally by Switzerland.[72]

Some miners pool resources, sharing their processing power over a network to split the reward equally, according to the amount of work they contributed to the probability of finding a block. A "share" is awarded to members of the mining pool who present a valid partial proof-of-work.

As of February 2018, the Chinese government has halted trading of virtual currency, banned initial coin offerings, and shut down mining. Many Chinese miners have since relocated to Canada[73] and Texas.[74] One company is operating data centers for mining operations at Canadian oil and gas field sites due to low gas prices.[75] In June 2018, Hydro Quebec proposed to the provincial government to allocate 500 megawatts of power to crypto companies for mining.[76] According to a February 2018 report from Fortune, Iceland has become a haven for cryptocurrency miners in part because of its cheap electricity.[77]

In March 2018, the city of Plattsburgh, New York put an 18-month moratorium on all cryptocurrency mining in an effort to preserve natural resources and the "character and direction" of the city.[78] In 2021, Kazakhstan became the second-biggest crypto-currency mining country, producing 18.1% of the global exahash rate. The country built a compound containing 50,000 computers near Ekibastuz.[79]

An increase in cryptocurrency mining increased the demand for graphics cards (GPUs) in 2017.[80] The computing power of GPUs makes them well-suited to generating hashes. Popular favorites of cryptocurrency miners, such as Nvidia's GTX 1060 and GTX 1070 graphics cards, as well as AMD's RX 570 and RX 580 GPUs, doubled or tripled in price – or were out of stock.[81] A GTX 1070 Ti, which was released at a price of $450, sold for as much as $1,100. Another popular card, the GTX 1060 (6 GB model), was released at an MSRP of $250 and sold for almost $500. RX 570 and RX 580 cards from AMD were out of stock for almost a year. Miners regularly buy up the entire stock of new GPUs as soon as they are available.[82]

Nvidia has asked retailers to do what they can when it comes to selling GPUs to gamers instead of miners. Boris Böhles, PR manager for Nvidia in the German region, said: "Gamers come first for Nvidia."[83]

Numerous companies developed dedicated crypto-mining accelerator chips, capable of price-performance far higher than that of CPU or GPU mining. At one point, Intel marketed its own brand of crypto accelerator chip, named Blockscale.[84]

A cryptocurrency wallet is a means of storing the public and private "keys" (address) or seed, which can be used to receive or spend the cryptocurrency.[85] With the private key, it is possible to write in the public ledger, effectively spending the associated cryptocurrency. With the public key, it is possible for others to send currency to the wallet. There exist multiple methods of storing keys or seed in a wallet.
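Before turning to those storage methods, the role of the key pair can be sketched with the cryptography library (illustrative only; real wallets derive addresses from the public key and sign structured transactions):

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

private_key = ec.generate_private_key(ec.SECP256K1())  # curve used by bitcoin
public_key = private_key.public_key()

transaction = b"alice -> bob: 10"
signature = private_key.sign(transaction, ec.ECDSA(hashes.SHA256()))

# Anyone with the public key can verify the spend; only the private-key
# holder could have produced the signature (verify() raises if invalid).
public_key.verify(signature, transaction, ec.ECDSA(hashes.SHA256()))
```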
These methods range from using paper wallets (which are public, private, or seed keys written on paper), to using hardware wallets (which are hardware devices that store wallet information), to a digital wallet (which is a computer with software hosting the wallet information), to hosting the wallet using an exchange where cryptocurrency is traded, to storing the wallet information on a digital medium such as plaintext.[86]

Bitcoin is pseudonymous, rather than anonymous; the cryptocurrency in a wallet is not tied to a person but rather to one or more specific keys (or "addresses").[87] Thereby, bitcoin owners are not immediately identifiable, but all transactions are publicly available in the blockchain.[88] Still, cryptocurrency exchanges are often required by law to collect the personal information of their users.[89]

Some cryptocurrencies, such as Monero, Zerocoin, Zerocash, and CryptoNote, implement additional measures to increase privacy, such as by using zero-knowledge proofs.[90][91]

A 2020 study presented different attacks on privacy in cryptocurrencies. The attacks demonstrated how the anonymity techniques are not sufficient safeguards. In order to improve privacy, researchers suggested several different ideas, including new cryptographic schemes and mechanisms for hiding the IP address of the source.[92]

Cryptocurrencies are used primarily outside banking and governmental institutions and are exchanged over the Internet.

Proof-of-work cryptocurrencies, such as bitcoin, offer block reward incentives for miners. There has been an implicit belief that whether miners are paid by block rewards or transaction fees does not affect the security of the blockchain, but a study suggests that this may not be the case under certain circumstances.[93]

The rewards paid to miners increase the supply of the cryptocurrency. By making sure that verifying transactions is a costly business, the integrity of the network can be preserved as long as benevolent nodes control a majority of computing power. The verification algorithm requires a lot of processing power, and thus electricity, in order to make verification costly enough to accurately validate the public blockchain. Not only do miners have to factor in the costs associated with expensive equipment necessary to stand a chance of solving a hash problem, they must further consider the significant amount of electrical power required in search of the solution. Generally, the block rewards outweigh electricity and equipment costs, but this may not always be the case.[94]

The current value, not the long-term value, of the cryptocurrency supports the reward scheme to incentivize miners to engage in costly mining activities.[95] In 2018, bitcoin's design caused a 1.4% welfare loss compared to an efficient cash system, while a cash system with 2% money growth has a minor 0.003% welfare cost. The main source for this inefficiency is the large mining cost, which is estimated to be US$360 million per year. This translates into users being willing to accept a cash system with an inflation rate of 230% before being better off using bitcoin as a means of payment. However, the efficiency of the bitcoin system can be significantly improved by optimizing the rate of coin creation and minimizing transaction fees.
Another potential improvement is to eliminate inefficient mining activities by changing the consensus protocol altogether.[96]

Transaction fees (sometimes also referred to as miner fees or gas fees) for cryptocurrency depend mainly on the supply of network capacity at the time, versus the demand from the currency holder for a faster transaction.[97] The ability of the holder to set the fee manually often depends on the wallet software used, and centralized exchanges for cryptocurrency (CEXs) usually do not allow the customer to set a custom transaction fee for the transaction. Their wallet software, such as Coinbase Wallet, however, might support adjusting the fee.[98]

Select cryptocurrency exchanges have offered to let the user choose between different presets of transaction fee values during the currency conversion. One of those exchanges, namely LiteBit, previously headquartered in the Netherlands, was forced to cease all operations on 13 August 2023, "due to market changes and regulatory pressure".[99]

The "recommended fee" suggested by the network will often depend on the time of day (as it depends on network load). For Ethereum, transaction fees differ by computational complexity, bandwidth use, and storage needs, while bitcoin transaction fees differ by transaction size and whether the transaction uses SegWit. In February 2023, the median transaction fee for Ether corresponded to $2.2845,[100] while for bitcoin it corresponded to $0.659.[101]

Some cryptocurrencies have no transaction fees, the most well-known example being Nano (XNO), and instead rely on client-side proof-of-work as the transaction prioritization and anti-spam mechanism.[102][103][104]

Cryptocurrency exchanges allow customers to trade cryptocurrencies[105] for other assets, such as conventional fiat money, or to trade between different digital currencies. Crypto marketplaces do not guarantee that an investor is completing a purchase or trade at the optimal price. As a result, as of 2020, it was possible to arbitrage to find the difference in price across several markets.[106]

Atomic swaps are a mechanism where one cryptocurrency can be exchanged directly for another cryptocurrency without the need for a trusted third party, such as an exchange.[107]

Jordan Kelley, founder of Robocoin, launched the first bitcoin ATM in the United States on 20 February 2014. The kiosk installed in Austin, Texas, is similar to bank ATMs but has scanners to read government-issued identification such as a driver's license or a passport to confirm users' identities.[108]

An initial coin offering (ICO) is a controversial means of raising funds for a new cryptocurrency venture. An ICO may be used by startups with the intention of avoiding regulation. However, securities regulators in many jurisdictions, including in the U.S. and Canada, have indicated that if a coin or token is an "investment contract" (e.g., under the Howey test, i.e., an investment of money with a reasonable expectation of profit based significantly on the entrepreneurial or managerial efforts of others), it is a security and is subject to securities regulation. In an ICO campaign, a percentage of the cryptocurrency (usually in the form of "tokens") is sold to early backers of the project in exchange for legal tender or other cryptocurrencies, often bitcoin or Ether.[109][110][111]

According to PricewaterhouseCoopers, four of the 10 biggest proposed initial coin offerings have used Switzerland as a base, where they are frequently registered as non-profit foundations.
The Swiss regulatory agency FINMA stated that it would take a "balanced approach" to ICO projects and would allow "legitimate innovators to navigate the regulatory landscape and so launch their projects in a way consistent with national laws protecting investors and the integrity of the financial system." In response to numerous requests by industry representatives, a legislative ICO working group began to issue legal guidelines in 2018, which are intended to remove uncertainty from cryptocurrency offerings and to establish sustainable business practices.[112]

The market capitalization of a cryptocurrency is calculated by multiplying the price by the number of coins in circulation. The total cryptocurrency market cap has historically been dominated by bitcoin, which has accounted for at least 50% of the total value, while altcoins' market caps have increased and decreased relative to bitcoin. Bitcoin's value is largely determined by speculation, alongside technological limiting factors such as the blockchain rewards coded into its architecture. The cryptocurrency market cap follows a trend known as the "halving", in which the block rewards received from mining bitcoin are cut in half by a rule built into bitcoin's protocol, which in turn limits the supply of new bitcoin. As a halving date approaches (twice thus far historically), the cryptocurrency market cap increases, followed by a downtrend.[113]

By June 2021, cryptocurrency had begun to be offered by some wealth managers in the US for 401(k)s.[114][115][116]

Cryptocurrency prices are much more volatile than those of established financial assets such as stocks. For example, over one week in May 2022, bitcoin lost 20% of its value and Ethereum lost 26%, while Solana and Cardano lost 41% and 35% respectively. The falls were attributed to warnings about inflation. By comparison, in the same week, the Nasdaq tech stock index fell 7.6 per cent and the FTSE 100 was 3.6 per cent down.[117]

In the longer term, of the 10 leading cryptocurrencies identified by the total value of coins in circulation in January 2018, only four (bitcoin, Ethereum, Cardano and Ripple (XRP)) were still in that position in early 2022.[118] The total value of all cryptocurrencies was $2 trillion at the end of 2021, but had halved nine months later.[119][120] The Wall Street Journal has commented that the crypto sector has become "intertwined" with the rest of the capital markets and "sensitive to the same forces that drive tech stocks and other risk assets," such as inflation forecasts.[121]

There are also centralized databases, outside of blockchains, that store crypto market data. Compared to the blockchain, such databases are fast, as there is no verification process. Four of the most popular cryptocurrency market databases are CoinMarketCap, CoinGecko, BraveNewCoin, and Cryptocompare.[122]

According to Alan Feuer of The New York Times, libertarians and anarcho-capitalists were attracted to the philosophical idea behind bitcoin. Early bitcoin supporter Roger Ver said: "At first, almost everyone who got involved did so for philosophical reasons.
We saw bitcoin as a great idea, as a way to separate money from the state."[123] Economist Paul Krugman argues that cryptocurrencies like bitcoin are "something of a cult" based in "paranoid fantasies" of government power.[124]

David Golumbia says that the ideas influencing bitcoin advocates emerge from right-wing extremist movements such as the Liberty Lobby and the John Birch Society and their anti-central-bank rhetoric, or, more recently, Ron Paul and Tea Party-style libertarianism.[125] Steve Bannon, who owns a "good stake" in bitcoin, sees cryptocurrency as a form of disruptive populism, taking control back from central authorities.[126]

Bitcoin's founder, Satoshi Nakamoto, supported the idea that cryptocurrencies go well with libertarianism. "It's very attractive to the libertarian viewpoint if we can explain it properly," Nakamoto said in 2008.[127]

According to the European Central Bank, the decentralization of money offered by bitcoin has its theoretical roots in the Austrian school of economics, especially with Friedrich von Hayek in his book Denationalisation of Money: The Argument Refined,[128] in which Hayek advocates a complete free market in the production, distribution and management of money to end the monopoly of central banks.[129][130]

The rise in the popularity of cryptocurrencies and their adoption by financial institutions has led some governments to assess whether regulation is needed to protect users. The Financial Action Task Force (FATF) has defined cryptocurrency-related services as "virtual asset service providers" (VASPs) and recommended that they be regulated with the same anti-money-laundering (AML) and know your customer (KYC) requirements as financial institutions.[131]

In May 2020, the Joint Working Group on interVASP Messaging Standards published "IVMS 101", a universal common language for communication of required originator and beneficiary information between VASPs. The FATF and financial regulators were informed as the data model was developed.[132]

In June 2020, FATF updated its guidance to include the "Travel Rule" for cryptocurrencies, a measure which mandates that VASPs obtain, hold, and exchange information about the originators and beneficiaries of virtual asset transfers.[133] Subsequent standardized protocol specifications recommended using JSON for relaying data between VASPs and identity services. As of December 2020, the IVMS 101 data model had yet to be finalized and ratified by the three global standard-setting bodies that created it.[134]

The European Commission published a digital finance strategy in September 2020. This included a draft regulation on Markets in Crypto-Assets (MiCA), which aimed to provide a comprehensive regulatory framework for digital assets in the EU.[135][136]

On 10 June 2021, the Basel Committee on Banking Supervision proposed that banks that held cryptocurrency assets must set aside capital to cover all potential losses. For instance, if a bank were to hold bitcoin worth $2 billion, it would be required to set aside enough capital to cover the entire $2 billion. This is a more extreme standard than banks are usually held to when it comes to other assets. However, this is a proposal and not a regulation.

The IMF is seeking a coordinated, consistent and comprehensive approach to supervising cryptocurrencies. Tobias Adrian, the IMF's financial counsellor and head of its monetary and capital markets department, said in a January 2022 interview that "Agreeing global regulations is never quick.
The European Commission published a digital finance strategy in September 2020. This included a draft regulation on Markets in Crypto-Assets (MiCA), which aimed to provide a comprehensive regulatory framework for digital assets in the EU.[135][136]

On 10 June 2021, the Basel Committee on Banking Supervision proposed that banks holding cryptocurrency assets must set aside capital to cover all potential losses. For instance, if a bank were to hold bitcoin worth $2 billion, it would be required to set aside enough capital to cover the entire $2 billion. This is a more extreme standard than banks are usually held to for other assets. However, this is a proposal and not a regulation.

The IMF is seeking a coordinated, consistent and comprehensive approach to supervising cryptocurrencies. Tobias Adrian, the IMF's financial counsellor and head of its monetary and capital markets department, said in a January 2022 interview that "Agreeing global regulations is never quick. But if we start now, we can achieve the goal of maintaining financial stability while also enjoying the benefits which the underlying technological innovations bring."[137]

In May 2024, 15 years after the advent of the first blockchain, bitcoin, the US Congress advanced a bill to the full House of Representatives to provide regulatory clarity for digital assets. The Financial Innovation and Technology for the 21st Century Act divides responsibilities between various US agencies, notably giving the Commodity Futures Trading Commission (CFTC) jurisdiction over decentralized blockchains and the Securities and Exchange Commission (SEC) jurisdiction over blockchains that are functional but not decentralized. Stablecoins are excluded from both CFTC and SEC regulation in this bill, "except for fraud and certain activities by registered firms."[138]

In September 2017, China banned ICOs, causing abnormal negative returns for cryptocurrencies during the announcement window. The ban's effect on liquidity in China was temporarily negative, but the liquidity effect became positive after the news.[139]

On 18 May 2021, China banned financial institutions and payment companies from providing cryptocurrency-transaction-related services.[140] This led to a sharp fall in the price of the biggest proof-of-work cryptocurrencies. For instance, bitcoin fell 31%, Ethereum fell 44%, Binance Coin fell 32% and Dogecoin fell 30%.[141] Proof-of-work mining was the next focus, with regulators in popular mining regions citing the use of electricity generated from highly polluting sources such as coal to create bitcoin and Ethereum.[142]

In September 2021, the Chinese government declared all cryptocurrency transactions of any kind illegal, completing its crackdown on cryptocurrency.[28]

In April 2024, TVNZ's 1News reported that the Cook Islands government was proposing legislation that would allow "recovery agents" to use various means, including hacking, to investigate or find cryptocurrency that may have been used for illegal means or is the "proceeds of crime." The Tainted Cryptocurrency Recovery Bill was drafted by two lawyers hired by US-based debt collection company Drumcliffe. The proposed legislation was criticised by Cook Islands Crown Law's deputy solicitor general David Greig, who described it as "flawed" and said that some provisions were "clearly unconstitutional". The Cook Islands Financial Services Development Authority described Drumcliffe's involvement as a conflict of interest.[143] Similar criticism came from Auckland University of Technology cryptocurrency specialist and senior lecturer Jeff Nijsse and University of Otago political scientist Professor Robert Patman, who called the bill government overreach and inconsistent with international law. Since the Cook Islands is an associated state that is part of the Realm of New Zealand, Patman said that the law would have "implications for New Zealand's governance arrangements."
A spokesperson for New Zealand Foreign Minister Winston Peters confirmed that New Zealand officials were discussing the legislation with their Cook Islands counterparts. Cook Islands Prime Minister Mark Brown defended the legislation as part of the territory's fight against international cybercrime.[143]

On 9 June 2021, El Salvador announced that it would adopt bitcoin as legal tender, becoming the first country to do so.[144]

The EU defines crypto assets as "a digital representation of a value or of a right that is able to be transferred and stored electronically using distributed ledger technology or similar technology."[145] The provisions of the EU regulation Markets in Crypto-Assets (MiCA) covering asset-referenced tokens (ARTs) and electronic money tokens (EMTs, also known as stablecoins) came into force on 30 June 2024. On 17 January 2025, the European Securities and Markets Authority (ESMA) issued guidance to crypto-asset service providers (CASPs) allowing them to maintain crypto-asset services for non-compliant ARTs and EMTs until the end of March 2025.[146][147] The rest of MiCA came into force on 30 December 2024, covering crypto-assets other than ARTs and EMTs, as well as CASPs. MiCA excludes crypto-assets that qualify as financial instruments according to ESMA guidelines published on 17 December 2024, as well as crypto-assets that are unique and not fungible with other crypto-assets.[148][149]

At present, India neither prohibits nor allows investment in the cryptocurrency market. In 2020, the Supreme Court of India lifted the ban on cryptocurrency that had been imposed by the Reserve Bank of India.[150][151][152][153] Since then, investment in cryptocurrency has been considered legitimate, though ambiguity remains about the extent and payment of tax on the income accrued, and about its regulatory regime. It is anticipated that the Indian Parliament will soon pass a specific law to either ban or regulate the cryptocurrency market in India.[154] Expressing his public policy opinion on the Indian cryptocurrency market to a well-known online publication, Hemant Batra, a leading public policy lawyer and Vice President of SAARCLAW (South Asian Association for Regional Co-operation in Law), said that the "cryptocurrency market has now become very big with involvement of billions of dollars in the market hence, it is now unattainable and irreconcilable for the government to completely ban all sorts of cryptocurrency and its trading and investment".[155] He mooted regulating the cryptocurrency market rather than completely banning it, and favoured following IMF and FATF guidelines in this regard.

South Africa, which has seen a large number of cryptocurrency-related scams, is said to be putting in place a timeline that will produce a regulatory framework.[156] The largest scam occurred in April 2021, when the two founders of an Africa-based cryptocurrency exchange called Africrypt, Raees Cajee and Ameer Cajee, disappeared with $3.8 billion worth of bitcoin.[157] Additionally, Mirror Trading International disappeared with $170 million worth of cryptocurrency in January 2021.[157]

In March 2021, South Korea implemented new legislation to strengthen its oversight of digital assets.
This legislation requires all digital asset managers, providers and exchanges to be registered with the Korea Financial Intelligence Unit in order to operate in South Korea.[158] Registering with this unit requires that all exchanges be certified by the Information Security Management System and that they ensure all customers have real-name bank accounts. It also requires that the CEO and board members of the exchanges have not been convicted of any crimes, and that the exchange holds sufficient levels of deposit insurance to cover losses arising from hacks.[158]

Switzerland was one of the first countries to implement the FATF's Travel Rule. FINMA, the Swiss regulator, issued its own guidance to VASPs in 2019. The guidance followed the FATF's Recommendation 16, though with stricter requirements. According to FINMA's requirements,[159] VASPs need to verify the identity of the beneficiary of the transfer.

On 30 April 2021, the Central Bank of the Republic of Turkey banned the use of cryptocurrencies and cryptoassets for making purchases, on the grounds that the use of cryptocurrencies for such payments poses significant transaction risks.[160]

In the United Kingdom, as of 10 January 2021, all cryptocurrency firms, such as exchanges, advisors and professionals, that either have a presence in the UK, market products to the UK, or provide services within the UK market must register with the Financial Conduct Authority. Additionally, on 27 June 2021, the financial watchdog demanded that Binance cease all regulated activities in the UK.[161] The incoming Labour government confirmed in November 2024 that it would proceed with the regulation of cryptoassets, and new UK requirements are expected to come into force in 2026.[162]

In 2021, 17 states in the US passed laws and resolutions concerning cryptocurrency regulation.[163] This led the Securities and Exchange Commission to start considering what steps to take. On 8 July 2021, Senator Elizabeth Warren, part of the Senate Banking Committee, wrote to the chairman of the SEC and demanded answers on cryptocurrency regulation due to the increase in cryptocurrency exchange use and the danger this posed to consumers. On 5 August 2021, the chairman, Gary Gensler, responded to Warren's letter and called for legislation focused on "crypto trading, lending and DeFi platforms," because of how vulnerable investors could be when they traded on crypto trading platforms without a broker. He also argued that many tokens in the crypto market may be unregistered securities without required disclosures or market oversight. Gensler was also critical of stablecoins: these tokens, which are pegged to the value of fiat currencies, may allow individuals to sidestep important public policy safeguards in the traditional banking and financial system, such as anti-money laundering, tax compliance, and sanctions.[164]

On 19 October 2021, the first bitcoin-linked exchange-traded fund (ETF), from ProShares, started trading on the NYSE under the ticker "BITO." ProShares CEO Michael L. Sapir said the ETF would expose bitcoin to a wider range of investors without the hassle of setting up accounts with cryptocurrency providers. Ian Balina, the CEO of Token Metrics, stated that SEC approval of the ETF was a significant endorsement for the crypto industry, because many regulators globally were not in favor of crypto and retail investors were hesitant to accept it.
This event would eventually open more opportunities for new capital and new people in this space.[165]

On 20 May 2021, the Department of the Treasury announced that it would require any cryptocurrency transfer worth $10,000 or more to be reported to the Internal Revenue Service, since cryptocurrency had already broadly facilitated illegal activity such as tax evasion. This release was part of efforts to promote better compliance and to consider more severe penalties for tax evaders.[166]

On 17 February 2022, the Department of Justice named Eun Young Choi as the first director of a National Cryptocurrency Enforcement Team, to help identify and deal with misuse of cryptocurrencies and other digital assets.[167]

The Biden administration faced a dilemma as it tried to develop regulations for the cryptocurrency industry. On one hand, officials were hesitant to restrict a growing industry. On the other hand, they were committed to preventing illegal cryptocurrency transactions. To reconcile these conflicting goals, on 9 March 2022, Biden issued an executive order.[168] Following this, on 16 September 2022, the Comprehensive Framework for Responsible Development of Digital Assets document was released[169] to support the development of cryptocurrencies and restrict their illegal use. The executive order covered all digital assets, but cryptocurrencies posed both the greatest security risks and the greatest potential economic benefits. Though this might not address all of the challenges in the crypto industry, it was a significant milestone in the history of US cryptocurrency regulation.[170]

In February 2023, the SEC ruled that cryptocurrency exchange Kraken, whose staking service held an estimated $42 billion in staked assets globally, had operated as an illegal securities seller. The company agreed to a $30 million settlement with the SEC and to cease selling its staking service in the US. The case would impact other major crypto exchanges operating staking programs.[171]

On 23 March 2023, the SEC issued an alert to investors stating that firms offering crypto asset securities might not be complying with US laws. The SEC argued that unregistered offerings of crypto asset securities might not include important information.[172]

On 23 January 2025, President Donald Trump signed Executive Order 14178, Strengthening American Leadership in Digital Financial Technology,[173] revoking Executive Order 14067 of 9 March 2022, Ensuring Responsible Development of Digital Assets, and the Department of the Treasury's Framework for International Engagement on Digital Assets of 7 July 2022. In addition, the order prohibits the establishment, issuance or promotion of central bank digital currency and establishes a group tasked with proposing a federal regulatory framework for digital assets within 180 days.[174]

The legal status of cryptocurrencies varies substantially from country to country and is still undefined or changing in many of them. At least one study has shown that broad generalizations about the use of bitcoin in illicit finance are significantly overstated and that blockchain analysis is an effective crime-fighting and intelligence-gathering tool.[175] While some countries have explicitly allowed their use and trade,[176] others have banned or restricted them. According to the Library of Congress in 2021, an "absolute ban" on trading or using cryptocurrencies applies in 9 countries: Algeria, Bangladesh, Bolivia, China, Egypt, Iraq, Morocco, Nepal, and the United Arab Emirates.
An "implicit ban" applies in another 39 countries or regions, which include: Bahrain, Benin, Burkina Faso, Burundi, Cameroon, Chad, Cote d’Ivoire, the Dominican Republic, Ecuador, Gabon, Georgia, Guyana, Indonesia, Iran, Jordan, Kazakhstan, Kuwait, Lebanon, Lesotho, Macau, Maldives, Mali, Moldova, Namibia, Niger, Nigeria, Oman, Pakistan, Palau, Republic of Congo, Saudi Arabia, Senegal, Tajikistan, Tanzania, Togo, Turkey, Turkmenistan, Qatar and Vietnam.[177]In the United States and Canada, state and provincial securities regulators, coordinated through theNorth American Securities Administrators Association, are investigating "Bitcoin scams" andICOsin 40 jurisdictions.[178] Various government agencies, departments, and courts have classified bitcoin differently.China Central Bankbanned the handling of bitcoins by financial institutions inChinain early 2014. In Russia, though owning cryptocurrency is legal, its residents are only allowed to purchase goods from other residents using theRussian rublewhile nonresidents are allowed to use foreign currency.[179]Regulations and bans that apply to bitcoin probably extend to similar cryptocurrency systems.[180] In August 2018, theBank of Thailandannounced its plans to create its own cryptocurrency, the Central Bank Digital Currency (CBDC).[181] Cryptocurrency advertisements have been banned on the following platforms: On 25 March 2014, the United StatesInternal Revenue Service(IRS) ruled that bitcoin will be treated as property for tax purposes. Therefore, virtual currencies are considered commodities subject to capital gains tax.[189] As the popularity and demand for online currencies has increased since the inception of bitcoin in 2009,[190]so have concerns that such an unregulated person to person global economy that cryptocurrencies offer may become a threat to society. Concerns abound that altcoins may become tools for anonymous web criminals.[191] Cryptocurrency networks display a lack of regulation that has been criticized as enabling criminals who seek to evade taxes andlaunder money. Money laundering issuesare also present in regular bank transfers, however with bank-to-bank wire transfers for instance, the account holder must at leastprovide a proven identity. Transactions that occur through the use and exchange of these altcoins are independent from formal banking systems, and therefore can make tax evasion simpler for individuals. Since charting taxable income is based upon what a recipient reports to the revenue service, it becomes extremely difficult to account for transactions made using existing cryptocurrencies, a mode of exchange that is complex and difficult to track.[191] Systems of anonymity that most cryptocurrencies offer can also serve as a simpler means to launder money. Rather than laundering money through an intricate net of financial actors and offshore bank accounts, laundering money through altcoins can be achieved through anonymous transactions.[191] Cryptocurrency makes legal enforcement against extremist groups more complicated, which consequently strengthens them.[192]White supremacistRichard Spencerwent as far as to declare bitcoin the "currency of the alt-right".[193] In February 2014, the world's largest bitcoin exchange,Mt. Gox, declaredbankruptcy. Likely due to theft, the company claimed that it had lost nearly 750,000 bitcoins belonging to their clients. This added up to approximately 7% of all bitcoins in existence, worth a total of $473 million. Mt. 
Mt. Gox blamed hackers, who had exploited the transaction malleability problems in the network. The price of a bitcoin fell from a high of about $1,160 in December to under $400 in February.[194]

On 21 November 2017, Tether announced that it had been hacked, losing $31 million in USDT from its core treasury wallet.[195]

On 7 December 2017, Slovenian cryptocurrency exchange Nicehash reported that hackers had stolen over $70 million using a hijacked company computer.[196]

On 19 December 2017, Yapian, the owner of South Korean exchange Youbit, filed for bankruptcy after suffering two hacks that year.[197][198] Customers were still granted access to 75% of their assets.

In May 2018, Bitcoin Gold had its transactions hijacked and abused by unknown hackers.[199] Exchanges lost an estimated $18m, and Bitcoin Gold was delisted from Bittrex after it refused to pay its share of the damages.

On 13 September 2018, Homero Josh Garza was sentenced to 21 months of imprisonment, followed by three years of supervised release.[200] Garza had founded the cryptocurrency startups GAW Miners and ZenMiner in 2014; he acknowledged in a plea agreement that the companies were part of a pyramid scheme, and pleaded guilty to wire fraud in 2015. The SEC separately brought a civil enforcement action against Garza in the US, and he was eventually ordered to pay a judgment of $9.1 million plus $700,000 in interest. The SEC's complaint stated that Garza, through his companies, had fraudulently sold "investment contracts representing shares in the profits they claimed would be generated" from mining.[201]

In January 2018, Japanese exchange Coincheck reported that hackers had stolen cryptocurrency worth $530 million.[202]

In June 2018, South Korean exchange Coinrail was hacked, losing over $37 million in crypto.[203] The hack worsened an ongoing cryptocurrency selloff by an additional $42 billion.[204]

On 9 July 2018, the exchange Bancor, whose code and fundraising had been subjects of controversy, had $23.5 million in crypto stolen.[205]

A 2020 EU report found that users had lost crypto-assets worth hundreds of millions of US dollars in security breaches at exchanges and storage providers. Between 2011 and 2019, reported breaches ranged from four to twelve a year. In 2019, more than a billion dollars worth of cryptoassets was reported stolen. Stolen assets "typically find their way to illegal markets and are used to fund further criminal activity".[206]

According to a 2020 report produced by the United States Attorney General's Cyber-Digital Task Force, three categories make up the majority of illicit cryptocurrency uses: "(1) financial transactions associated with the commission of crimes; (2) money laundering and the shielding of legitimate activity from tax, reporting, or other legal requirements; or (3) crimes, such as theft, directly implicating the cryptocurrency marketplace itself."
The report concluded that "for cryptocurrency to realize its truly transformative potential, it is imperative that these risks be addressed" and that "the government has legal and regulatory tools available at its disposal to confront the threats posed by cryptocurrency's illicit uses".[207][208]

According to the UK 2020 national risk assessment, a comprehensive assessment of money laundering and terrorist financing risk in the UK, the risk of using cryptoassets such as bitcoin for money laundering and terrorism financing is assessed as "medium" (up from "low" in the previous 2017 report).[209] Legal scholars have suggested that the money laundering opportunities may be more perceived than real.[210] Blockchain analysis company Chainalysis concluded that illicit activities like cybercrime, money laundering and terrorism financing made up only 0.15% of all crypto transactions conducted in 2021, representing a total of $14 billion.[211][212][213]

In December 2021, Monkey Kingdom, an NFT project based in Hong Kong, lost US$1.3 million worth of cryptocurrencies via a phishing link used by the hacker.[214]

On November 2, 2023, Sam Bankman-Fried was pronounced guilty on seven counts of fraud related to FTX.[215] Federal criminal court sentencing experts speculated on the potential amount of prison time likely to be meted out.[216][217][218] On March 28, 2024, the court sentenced Bankman-Fried to 25 years in prison.[219]

According to blockchain data company Chainalysis, criminals laundered US$8,600,000,000 worth of cryptocurrency in 2021, up by 30% from the previous year.[220] The data suggest that rather than managing numerous illicit havens, cybercriminals make use of a small group of purpose-built centralized exchanges for sending and receiving illicit cryptocurrency. In 2021, those exchanges received 47% of funds sent by crime-linked addresses.[221] Almost $2.2bn worth of cryptocurrencies was embezzled from DeFi protocols in 2021, which represents 72% of all cryptocurrency theft in 2021.

According to Bloomberg and The New York Times, Federation Tower, a two-skyscraper complex in the heart of Moscow City, is home to many cryptocurrency businesses under suspicion of facilitating extensive money laundering, including accepting illicit cryptocurrency funds obtained through scams, darknet markets, and ransomware.[222] Notable businesses include Garantex,[223] Eggchange, Cashbank, Buy-Bitcoin, Tetchange, Bitzlato, and Suex, which was sanctioned by the U.S. in 2021. Bitzlato founder and owner Anatoly Legkodymov was arrested following money-laundering charges by the United States Department of Justice.[224]

Dark money has also been flowing into Russia through a dark web marketplace called Hydra, which is powered by cryptocurrency and enjoyed more than $1 billion in sales in 2020, according to Chainalysis.[225] The platform demands that sellers liquidate cryptocurrency only through certain regional exchanges, which has made it difficult for investigators to trace the money.
Almost 74% of ransomware revenue in 2021, over $400 million worth of cryptocurrency, went to software strains likely affiliated with Russia, where oversight is notoriously limited.[222] However, Russians are also leaders in the benign adoption of cryptocurrencies, as the ruble is unreliable, and President Putin favours the idea of "overcoming the excessive domination of the limited number of reserve currencies."[226]

In 2022, RenBridge, an unregulated alternative to exchanges for transferring value between blockchains, was found to be responsible for the laundering of at least $540 million since 2020. It is especially popular with people attempting to launder money from theft. This includes a cyberattack on the Japanese crypto exchange Liquid that has been linked to North Korea.[227]

The properties of cryptocurrencies made them popular in applications such as a safe haven in banking crises and as a means of payment, which also led to their use in controversial settings in the form of online black markets, such as Silk Road.[191] The original Silk Road was shut down in October 2013, and two more versions have been in use since then. In the year following the initial shutdown of Silk Road, the number of prominent dark markets increased from four to twelve, while the number of drug listings increased from 18,000 to 32,000.[191]

Darknet markets present challenges in regard to legality. Cryptocurrency used in dark markets is not clearly or legally classified in almost all parts of the world. In the US, bitcoins are regarded as "virtual assets".[citation needed] This ambiguous classification puts pressure on law enforcement agencies around the world to adapt to the shifting drug trade of dark markets.[228][unreliable source?]

Various studies have found that crypto-trading is rife with wash trading. Wash trading is a process, illegal in some jurisdictions, in which the buyer and seller are the same person or group; it may be used to manipulate the price of a cryptocurrency or to inflate volume artificially. Exchanges with higher volumes can demand higher premiums from token issuers.[229] A study from 2019 concluded that up to 80% of trades on unregulated cryptocurrency exchanges could be wash trades.[229] A 2019 report by Bitwise Asset Management claimed that 95% of all bitcoin trading volume reported on major website CoinMarketCap had been artificially generated, and that of 81 exchanges studied, only 10 provided legitimate volume figures.[230]

In 2022, cryptocurrencies attracted attention when Western nations imposed severe economic sanctions on Russia in the aftermath of its invasion of Ukraine in February. However, American sources warned in March that some crypto-transactions could potentially be used to evade economic sanctions against Russia and Belarus.[231]

In April 2022, the computer programmer Virgil Griffith received a five-year prison sentence in the US for attending a Pyongyang cryptocurrency conference, where he gave a presentation on how blockchains might be used for sanctions evasion.[232]

The Bank for International Settlements summarized several criticisms of cryptocurrencies in Chapter V of its 2018 annual report.
The criticisms include the lack of stability in their price, the high energy consumption, high and variable transaction costs, the poor security and fraud at cryptocurrency exchanges, vulnerability to debasement (from forking), and the influence of miners.[233][234][235]

Cryptocurrencies have been compared to Ponzi schemes, pyramid schemes[236] and economic bubbles,[237] such as housing market bubbles.[238] Howard Marks of Oaktree Capital Management stated in 2017 that digital currencies were "nothing but an unfounded fad (or perhaps even a pyramid scheme), based on a willingness to ascribe value to something that has little or none beyond what people will pay for it", and compared them to the tulip mania (1637), South Sea Bubble (1720), and dot-com bubble (1999), which all experienced profound price booms and busts.[239]

Regulators in several countries have warned against cryptocurrency, and some have taken measures to dissuade users.[240] However, research in 2021 by the UK's financial regulator suggested that such warnings either went unheard or were ignored. Fewer than one in 10 potential cryptocurrency buyers were aware of consumer warnings on the FCA website, and 12% of crypto users were not aware that their holdings were not protected by statutory compensation.[241][242] Of 1,000 respondents between the ages of eighteen and forty, almost 70% wrongly assumed cryptocurrencies were regulated, 75% of younger crypto investors claimed to be driven by competition with friends and family, and 58% said that social media enticed them to make high-risk investments.[243] The FCA recommends making use of its warning list, which flags unauthorized financial firms.[244]

Many banks do not offer virtual currency services themselves and can refuse to do business with virtual currency companies.[245] In 2014, Gareth Murphy, a senior banking officer, suggested that the widespread adoption of cryptocurrencies might lead to too much money being obfuscated, blinding economists who would use such information to better steer the economy.[246] While traditional financial products have strong consumer protections in place, there is no intermediary with the power to limit consumer losses if bitcoins are lost or stolen. One of the features cryptocurrency lacks in comparison to credit cards, for example, is consumer protection against fraud, such as chargebacks.

The French regulator Autorité des marchés financiers (AMF) lists 16 websites of companies that solicit investment in cryptocurrency without being authorized to do so in France.[247]

An October 2021 paper by the National Bureau of Economic Research found that bitcoin suffers from systemic risk, as the top 10,000 addresses control about one-third of all bitcoin in circulation.[248] Concentration is even worse among miners, with 0.01% of them controlling 50% of mining capacity. According to researcher Flipside Crypto, less than 2% of anonymous accounts control 95% of all available bitcoin supply.[249] This is considered risky, as a great deal of the market is in the hands of a few entities.

A paper by John Griffin, a finance professor at the University of Texas, and Amin Shams, a graduate student, found that in 2017 the price of bitcoin had been substantially inflated using another cryptocurrency, Tether.[250]

Roger Lowenstein, author of America's Bank: The Epic Struggle to Create the Federal Reserve, said in a New York Times story that FTX would face over $8 billion in claims.[251]

Non-fungible tokens (NFTs) are digital assets that represent art, collectibles, gaming assets, and similar items. Like crypto, their data is stored on the blockchain.
NFTs are bought and traded using cryptocurrency. The Ethereum blockchain was the first place where NFTs were implemented, but many other blockchains have since created their own versions of NFTs.

According to Vanessa Grellet, a renowned panelist at blockchain conferences,[252] there was increasing interest from traditional stock exchanges in crypto-assets at the end of the 2010s, while crypto-exchanges such as Coinbase were gradually entering the traditional financial markets. This convergence marked a significant trend in which conventional financial actors were adopting blockchain technology to enhance operational efficiency, while the crypto world introduced innovations like the Security Token Offering (STO), enabling new ways of fundraising. Tokenization, turning assets such as real estate, investment funds, and private equity into blockchain-based tokens, had the potential to make traditionally illiquid assets more accessible to investors. Despite the regulatory risks associated with such developments, major financial institutions, including JPMorgan Chase, were actively working on blockchain initiatives, exemplified by the creation of Quorum, a private blockchain platform.[253]

As the first big Wall Street bank to embrace cryptocurrencies, Morgan Stanley announced on 17 March 2021 that it would offer access to bitcoin funds for its wealthy clients through three funds that enable bitcoin ownership for investors with an aggressive risk tolerance.[254] BNY Mellon announced on 11 February 2021 that it would begin offering cryptocurrency services to its clients.[255]

On 20 April 2021,[256] Venmo added support to its platform to enable customers to buy, hold and sell cryptocurrencies.[257]

In October 2021, financial services company Mastercard announced that it was working with digital asset manager Bakkt on a platform that would allow any bank or merchant on the Mastercard network to offer cryptocurrency services.[258]

Mining for proof-of-work cryptocurrencies requires enormous amounts of electricity and consequently comes with a large carbon footprint from the resulting greenhouse gas emissions.[259] Proof-of-work blockchains such as bitcoin, Ethereum, Litecoin, and Monero were estimated to have added between 3 million and 15 million tons of carbon dioxide (CO2) to the atmosphere in the period from 1 January 2016 to 30 June 2017.[260] By November 2018, bitcoin was estimated to have an annual energy consumption of 45.8 TWh, generating 22.0 to 22.9 million tons of CO2, rivalling nations like Jordan and Sri Lanka.[261] By the end of 2021, bitcoin was estimated to produce 65.4 million tons of CO2, as much as Greece,[262] and to consume between 91 and 177 terawatt-hours annually.[263][264]

Critics have also identified a large electronic waste problem in the disposal of mining rigs.[265] Mining hardware improves at a fast rate, quickly making older generations of hardware obsolete.[266] Bitcoin is the least energy-efficient cryptocurrency, using 707.6 kilowatt-hours of electricity per transaction.[267]

Before June 2021, China was the primary location for bitcoin mining. However, due to concerns over power usage and other factors, China forced out bitcoin operations, at least temporarily. As a result, the United States promptly emerged as the top global leader in the industry. An example of the scale of these operations in the US is a facility in Dalton, Georgia, which consumes nearly as much electricity as the combined power usage of 97,000 households in its vicinity.
Riot Platforms operates another such facility in Rockdale, Texas, which consumes approximately as much electricity as 300,000 nearby households, making it the most energy-intensive bitcoin mining operation in the United States.[268]

The world's second-largest cryptocurrency, Ethereum, uses 62.56 kilowatt-hours of electricity per transaction.[269] XRP is the world's most energy-efficient cryptocurrency, using 0.0079 kilowatt-hours of electricity per transaction.[270]

Although the biggest proof-of-work blockchains consume energy on the scale of medium-sized countries, the annual power demand from proof-of-stake (PoS) blockchains is on a scale equivalent to a housing estate. The Times identified six "environmentally friendly" cryptocurrencies: Chia, IOTA, Cardano, Nano, Solarcoin and Bitgreen.[271] Academics and researchers have used various methods for estimating the energy use and energy efficiency of blockchains. A May 2021 study of the six largest proof-of-stake networks found annual consumption (in kWh/yr) of 70,237 for Polkadot, 113,249 for Tezos, 489,311 for Avalanche, 512,671 for Algorand, 598,755 for Cardano and 1,967,930 for Solana. This equates to Polkadot consuming seven times the electricity of an average U.S. home, Cardano 57 times, and Solana 200 times as much. The study concluded that PoS networks consumed 0.001% of the electricity of the bitcoin network.[272] University College London researchers reached a similar conclusion.[273]
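The "homes" comparisons in the study above follow from simple division by an average U.S. household's annual electricity consumption; the sketch below reproduces that arithmetic in Python, assuming roughly 10,000 kWh per home per year (an illustrative baseline, since the study's exact figure is not given here).

# Annual consumption figures (kWh/yr) from the May 2021 study cited above.
pos_networks_kwh = {
    "Polkadot": 70_237,
    "Tezos": 113_249,
    "Avalanche": 489_311,
    "Algorand": 512_671,
    "Cardano": 598_755,
    "Solana": 1_967_930,
}

AVG_US_HOME_KWH_PER_YEAR = 10_000  # assumed baseline, for illustration

for name, kwh in pos_networks_kwh.items():
    print(f"{name}: ~{kwh / AVG_US_HOME_KWH_PER_YEAR:.0f} average US homes")
# Prints roughly 7 for Polkadot, 60 for Cardano and 197 for Solana,
# in line with the cited comparisons of 7, 57 and 200 homes.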
Variable renewable energy power stations could invest in bitcoin mining to reduce curtailment, hedge electricity price risk, stabilize the grid, and increase the profitability of renewable energy power stations, thereby accelerating the transition to sustainable energy.[274][275][276][277][278]

There are also purely technical elements to consider. For example, technological advancement in cryptocurrencies such as bitcoin results in high up-front costs to miners in the form of specialized hardware and software.[279] Cryptocurrency transactions are normally irreversible after a number of blocks confirm the transaction. Additionally, cryptocurrency private keys can be permanently lost from local storage due to malware, data loss or the destruction of the physical media. This precludes the cryptocurrency from being spent, resulting in its effective removal from the markets.[280]

In September 2015, the establishment of the peer-reviewed academic journal Ledger (ISSN 2379-5980) was announced. It covers studies of cryptocurrencies and related technologies, and is published by the University of Pittsburgh.[281] The journal encourages authors to digitally sign a file hash of submitted papers, which will then be timestamped into the bitcoin blockchain. Authors are also asked to include a personal bitcoin address on the first page of their papers.[282][283]

A number of aid agencies have started accepting donations in cryptocurrencies, including UNICEF.[284] Christopher Fabian, principal adviser at UNICEF Innovation, said the children's fund would uphold donor protocols, meaning that people making donations online would have to pass checks before they were allowed to deposit funds.[285][286] However, in 2021 there was a backlash against donations in bitcoin because of the environmental emissions involved. Some agencies stopped accepting bitcoin and others turned to "greener" cryptocurrencies.[287] The U.S. arm of Greenpeace stopped accepting bitcoin donations after seven years. It said: "As the amount of energy needed to run bitcoin became clearer, this policy became no longer tenable."[288]

In 2022, the Ukrainian government raised over US$10,000,000 worth of aid through cryptocurrency following the 2022 Russian invasion of Ukraine.[289]

Bitcoin has been characterized as a speculative bubble by eight winners of the Nobel Memorial Prize in Economic Sciences: Paul Krugman,[290] Robert J. Shiller,[291] Joseph Stiglitz,[292] Richard Thaler,[293] James Heckman,[294] Thomas Sargent,[294] Angus Deaton,[294] and Oliver Hart;[294] and by central bank officials including Alan Greenspan,[295] Agustín Carstens,[296] Vítor Constâncio,[297] and Nout Wellink.[298]

Investors Warren Buffett and George Soros have respectively characterized it as a "mirage"[299] and a "bubble",[300] while business executives Jack Ma and JPMorgan Chase CEO Jamie Dimon have called it a "bubble"[301] and a "fraud",[302] respectively, although Jamie Dimon later said he regretted dubbing bitcoin a fraud.[303] BlackRock CEO Laurence D. Fink called bitcoin an "index of money laundering".[304] In June 2022, business magnate Bill Gates said that cryptocurrencies are "100% based on greater fool theory".[305]

Legal scholars criticize the lack of regulation, which hinders conflict resolution when crypto assets are at the center of a legal dispute, for example a divorce or an inheritance. In Switzerland, jurists generally deny that cryptocurrencies are objects that fall under property law, as cryptocurrencies do not belong to any class of legally defined objects (Typenzwang, the legal numerus clausus). It is therefore debated whether anybody could even be sued for embezzlement of cryptocurrency if he or she had access to someone's wallet. In the law of obligations and contract law, however, any kind of object can be the subject of a legally valid agreement, though the object would have to be tied to an identified counterparty. Moreover, as the more popular cryptocurrencies can be freely and quickly exchanged into legal tender, they are financial assets and have to be taxed and accounted for as such.[306][307]

In 2018, an increase in crypto-related suicides was noticed after the cryptocurrency market crashed in August. The situation was particularly critical in Korea, where crypto traders were on "suicide watch". A cryptocurrency forum on Reddit even started providing suicide prevention support to affected investors.[308][309] The May 2022 collapse of the Luna currency operated by Terra also led to reports of suicidal investors in crypto-related subreddits.[310]
https://en.wikipedia.org/wiki/Cryptocurrency
Hexadecimal time is the representation of the time of day as a hexadecimal number in the interval [0, 1). The day is divided into 10₁₆ (16₁₀) hexadecimal hours, each hour into 100₁₆ (256₁₀) hexadecimal minutes, and each minute into 10₁₆ (16₁₀) hexadecimal seconds. This time format was proposed by the Swedish-American engineer John W. Nystrom in 1863 as part of his tonal system.[1] In 1997, the American Mark Vincent Rogers of Intuitor proposed a similar system of hexadecimal time and implemented it in JavaScript as the Hexclock.[2]

A day is unity, or 1, and any fraction thereof can be shown with digits to the right of the hexadecimal separator. So the day begins at midnight with .0000, and one hexadecimal second after midnight is .0001. Noon is .8000 (one half), one hexadecimal second before noon was .7FFF, and one hexadecimal second before the next midnight will be .FFFF.

Intuitor hextime may also be formatted with an underscore separating hexadecimal hours, minutes and seconds. Since the four digits after the separator correspond to one hour digit, two minute digits and one second digit, noon (.8000), for example, is written 8_00_0.
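The conversion from conventional time follows directly from the definition: the fraction of the day elapsed, scaled by 16^4 = 65536, gives the four hexadecimal digits. A minimal Python sketch (the function names are illustrative):

from datetime import time

def to_hex_time(t: time) -> str:
    """Convert a conventional time of day to hexadecimal time. The day is
    a fraction in [0, 1); scaling by 16^4 = 65536 yields the four hex
    digits .HMMS (one hex hour, two hex minutes, one hex second)."""
    seconds_into_day = t.hour * 3600 + t.minute * 60 + t.second
    ticks = int(seconds_into_day / 86400 * 0x10000)  # hex seconds per day
    return f".{ticks:04X}"

def to_hexclock(t: time) -> str:
    """Format the same value in Intuitor Hexclock style, H_MM_S."""
    digits = to_hex_time(t)[1:]  # e.g. "8000" at noon
    return f"{digits[0]}_{digits[1:3]}_{digits[3]}"

print(to_hex_time(time(0, 0, 0)), to_hexclock(time(0, 0, 0)))    # .0000 0_00_0
print(to_hex_time(time(12, 0, 0)), to_hexclock(time(12, 0, 0)))  # .8000 8_00_0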
https://en.wikipedia.org/wiki/Hexadecimal_time
Sustainable management takes the concepts of sustainability and synthesizes them with the concepts of management. Sustainability has three branches: the environment, the needs of present and future generations, and the economy. Drawing on these branches, sustainable management creates the ability of a system to thrive by maintaining economic viability while meeting the needs of present and future generations by limiting resource depletion. Sustainable management is needed because it is essential to maintaining the quality of life on our planet, and it can be applied to all aspects of our lives. For example, a business's practices must be sustainable if it wishes to stay in business, because an unsustainable business will, by the definition of sustainability, eventually cease to be able to compete. Communities need sustainable management, because a community cannot prosper unless its management is sustainable. Forests and natural resources need sustainable management if they are to remain continually usable by our generation and by future generations. Our personal lives also need to be managed sustainably, whether by making decisions that help sustain our immediate surroundings and environment, or by managing our emotional and physical well-being. Sustainable management can be applied to many things, both as a literal and as an abstract concept: its meaning changes depending on what it is applied to.

Managers' strategies reflect the mindset of the times, and this has hindered the evolution of sustainable management practices for two reasons. The first is that sustainable norms are continually changing; for example, things considered unthinkable a few years ago are now standard practice. The second is that in order to practice sustainable management, one has to be forward-thinking, not only in the short term but also in the long term. Management behavior reflects how accepted conceptions of behavior are defined, meaning that forces and beliefs outside a given program push its management along. The manager can take some credit for the cultural changes in his or her program, but overall an organization's culture reflects the dominant conceptions of the public at that time. This is exemplified by the managerial actions taken during the periods leading up to the present day, described below.

In the earliest of these periods, even though there were outside concerns about the environment, industries were able to resist pressures and make their own definitions and regulations.[1] Environmentalists were not viewed as credible sources of information during this time and were usually discredited. The norms of this period shifted radically with the creation of the U.S. Environmental Protection Agency (EPA) in 1970.
The EPA became the mediator between the environmentalists and the industry, although the two sides never met.[1] During this period, the environment mattered to the majority of industry and business management teams only in terms of compliance with the law.[1] In 1974, a Conference Board survey found that the majority of companies still treated environmental management as a threat.[1] The survey noted a widespread tendency in most of industry to treat pollution control expenditures as non-recoverable investments.[1] According to the consensus, environmental protection was considered at best a necessary evil, and at worst a temporary nuisance.[1]

By 1982, the EPA had lost its credibility, but at the same time activism became more influential, and there was an increase in the funding and memberships of major non-governmental organizations (NGOs).[1] Industry gradually became more cooperative with government, and new managerial structures were implemented to achieve compliance with regulations.[1]

During this period, industry progressed to a proactive stance on environmental protection.[1] With this attitude, the issue became one they felt qualified to manage on their own. Although there was advancement in organizational power, concern for the environment kept being pushed down the hierarchy of priorities.[1]

In 1995, Harvard professor Michael Porter wrote in the Harvard Business Review that environmental protection was not a threat to the corporate enterprise but rather an opportunity, one that could increase competitive advantage in the marketplace.[1] Before 2000, companies generally regarded green buildings as interesting experiments but unfeasible projects in the real business world.[2] Since then, several factors, including the ones discussed below, have caused major shifts in thinking.[2] The creation of reliable building rating and performance measurement systems for new construction and renovation has helped change corporate perceptions about green building. In 2000, the Washington D.C.–based United States Green Building Council launched its rigorous Leadership in Energy and Environmental Design (LEED) program.[2] Hundreds of US and international studies have demonstrated the financial advantages of going green: lower utility costs and higher employee productivity.[2] Green building materials, mechanical systems, and furnishings have become more widely available, and their prices have dropped considerably.[2] As the norms of what is acceptable from a management perspective change, it becomes increasingly apparent that sustainable management is the norm of the future. Currently, there are many programs, organizations, communities, and businesses that follow sustainable management plans. These new entities are pressing forward with the help of changing social norms and management initiatives.

A manager is a person who is responsible for planning things that will benefit the situation they are controlling. A manager of sustainability must be able to manage issues and plan solutions that are sustainable, so that what they put in place can continue for future generations. The job of a sustainable manager is like other management positions, but in addition they have to manage systems so that those systems are able to support and sustain themselves.
Whether a person manages groups, businesses, families, communities, organizations, agriculture, or the environment, they can use sustainable management to improve their productivity, environment, and atmosphere, among other things. The job requires a range of practical skills. Recently, colleges and universities have even added new programs offering Bachelor of Science and Master of Science degrees in sustainable management.

In business, time and time again, environmentalists are seen facing off against industry, and there is usually very little meeting in the middle or compromise. When the two sides do reach agreement, the result is a more powerful message, one that more people can understand and embrace. Organizations need to face the fact that the boundaries of accountability are moving fast. The trend toward sustainable management means that organizations are beginning to implement a system-wide approach that links the various parts of the business with the greater environment at large.

As sustainable management institutions adapt, it becomes imperative that they project an image of sustainable responsibility for the public to see, because firms are socially based organizations. But this can be a double-edged sword, because sometimes firms end up focusing so much on their image that they neglect to actually implement what they are projecting to the public; this is called greenwashing. It is important that the execution of sustainable management practices is not put aside while the firm tries to appeal to the public with its sustainable management "practices." Additionally, companies must make the connection between sustainability as a vision and sustainability as a practice. Managers need to think systematically and realistically about the application of traditional business principles to environmental problems. By melding the two concepts together, new business principles emerge that can enable some companies (those with the right industry structure, competitive position, and managerial skills) to deliver increased value to shareholders while making improvements in their environmental performance.[4]
Any corporation can become green on a standard budget.[2] By focusing on the big picture, a company can generate more savings and better performance. Using planning, design, and construction based on sustainable values, sustainable management strives to earn LEED points by reducing the footprint of the facility through sustainable site planning.[2] To complete a successful green building or business, management also applies cost-benefit analysis in order to allocate funds appropriately.

The economic system, like all systems, is subject to the laws of thermodynamics, which define the limits at which the Earth can successfully process energy and wastes.[5] Managers need to understand that their values are critical factors in their decisions. Many current business values are based on unrealistic economic assumptions; adopting new economic models that take the Earth into account in the decision-making process is at the core of sustainable management.[5] This new management addresses the interrelatedness of the ecosystem and the economic system.[5]

A strategic vision based on the core values of the firm guides the firm's decision-making processes at all levels. Thus, sustainable management requires finding out which business activities fit within the Earth's carrying capacity, and also defining the optimal levels of those activities.[5] Sustainability values form the basis of strategic management, inform the weighing of the costs and benefits of the firm's operations, and are measured against the survival needs of the planet's stakeholders.[5] Sustainability is the core value because it supports a strategic vision for firms in the long term by integrating economic profits with the responsibility to protect the whole environment.[5]

Changing industrial processes so that they actually replenish and magnify the stock of natural capital is another component of sustainable management. One way managers have found to do this is by using a service model of business, which focuses on building relationships with customers instead of on making and selling products.[6] This type of model represents a fundamental change in the way businesses behave. Because the company remains responsible for the product throughout its life cycle, managers stay aware of that life cycle[6] and can see ways to reduce the use of resources through recycling and product construction.[6]

For communities to improve, sustainable management needs to be in practice. If a community relies on the resources of the surrounding area, those resources need to be used in a sustainable manner to ensure their indefinite supply. A community needs to work together to be productive, and when there is a need to get things done, management needs to take the lead. If sustainable management is practiced in a community, people will want to stay in it, and others will notice its success and want to live in a similar environment as their own unsustainable towns fail. Part of a sustainable management system in a community is the education, the cooperation, and the responsiveness of the people who live in the community.[7]

There are new ideals for how a community can be sustainable, including urban planning that allows people to move about a city in ways that are more sustainable for the environment. If management plans a community that allows people to move without cars, it helps make the community sustainable by increasing mass transit and other modes of transportation. People would spend less time in traffic, improve the environment, and on occasion get exercise.[8]

Sustainable management provides plans that can improve multiple parts of people's lives, the environment, and future generations. If a community sets goals, then people are more likely to reduce energy, water, and waste, but a community cannot set goals unless it has management in place to set them.[9]

Part of sustainable management for a community is communicating the ideals and plans for an area to the people who will carry them out. Sustainable management is not sustainable if the person managing a situation does not communicate what needs to be improved, how it should be improved, why it matters to them, and how people are involved in the process. Holding a person responsible for their actions is part of managing, and part of being managed sustainably.
To manage oneself sustainably, there are many factors to consider, because a person first needs to be able to see what they are doing that is unsustainable and how to become sustainable. Using plastic bags at the checkout line is unsustainable because it creates pollutants, but using reusable biodegradable bags can resolve the problem. This is not only environmentally sustainable, but it also improves the physical and mental sustainability of the person who uses the reusable bags. It is a physical improvement because people do not have to live with countless plastic bags on the Earth and the pollution that comes with them. It is also an improvement in mental sustainability, because the person who uses the reusable bags has the feeling of accomplishment that comes from doing the right thing. Deciding to buy local food to make the community stronger through community sustainable management can also be emotionally, environmentally, and physically rewarding.

In Figure 1, Mckenzie shows how a person can look at a behavior they are engaging in, determine whether it is sustainable, and decide what they could replace the bad behavior with.[9] Education would be an individual's first step toward deciding to manage their life sustainably. For managing a person's life, the benefits need to be high and the barriers low; good managing would come up with a competing behavior that has no barriers to it, and arriving at such a behavior involves good problem solving. Figure 2 is an example of what a person might try to change in their life to make it more sustainable.[9] Walking instead of taking a taxi helps the environment, but it also costs time spent with family. The bus falls in the middle between walking and taking a taxi, but another option not on the list is riding a bike. Good sustainable management would include all the options that are possible, as well as new options that were not available before. These figures are tools that can help people manage their lives sustainably, but there are other ways for people to think about their lives in order to become more sustainable.

There are very practical needs for the sustainable management of forests. Since forests provide many resources to people and to the world, managing them is critical to keeping those resources available. To manage a forest, knowledge of how its natural systems work is needed. If managers know how the natural system works, then when they plan how resources are to be removed from the forest, they will know how the resources can be removed without damaging it. Since many forests are under the management of the regional government, they do not truly function in the way their ecosystems naturally developed and are meant to work. An example is the pine flatwoods in Florida: to maintain that ecosystem, frequent burnings of the forest need to happen. Fires are a natural part of the ecosystem, but since wildfires can spread to communities near the forest, those communities request that the fires be controlled. To maintain flatwoods forests, controlled (prescribed) burning is part of the management needed to sustain the forest.[10]
https://en.wikipedia.org/wiki/Sustainable_management
Partial (pooled) likelihood estimation for panel data is a quasi-maximum likelihood method for panel analysis that assumes that the density of $y_{it}$ given $x_{it}$ is correctly specified for each time period, but allows for misspecification in the conditional density of $y_i = (y_{i1}, \dots, y_{iT})$ given $x_i = (x_{i1}, \dots, x_{iT})$. Concretely, partial likelihood estimation uses the product of conditional densities as the density of the joint conditional distribution. This generality facilitates maximum likelihood methods in the panel data setting, because fully specifying the conditional distribution of $y_i$ can be computationally demanding.[1] On the other hand, allowing for misspecification generally results in a violation of the information equality and thus requires a robust standard error estimator for inference.

In the following exposition, we follow the treatment in Wooldridge.[1] In particular, the asymptotic derivation is done under a fixed-$T$, growing-$N$ setting. Writing the conditional density of $y_{it}$ given $x_{it}$ as $f_t(y_{it} \mid x_{it}; \theta)$, the partial maximum likelihood estimator solves

$$\max_{\theta \in \Theta} \sum_{i=1}^{N} \sum_{t=1}^{T} \log f_t(y_{it} \mid x_{it}; \theta).$$

In this formulation, the joint conditional density of $y_i$ given $x_i$ is modeled as $\prod_{t} f_t(y_{it} \mid x_{it}; \theta)$. We assume that $f_t(y_{it} \mid x_{it}; \theta)$ is correctly specified for each $t = 1, \dots, T$ and that there exists $\theta_0 \in \Theta$ that uniquely maximizes $E[\log f_t(y_{it} \mid x_{it}; \theta)]$. But it is not assumed that the joint conditional density is correctly specified. Under some regularity conditions, the partial MLE is consistent and asymptotically normal. By the usual argument for M-estimators (details in Wooldridge[1]), the asymptotic variance of $\sqrt{N}(\hat{\theta}_{\mathrm{PMLE}} - \theta_0)$ is $A^{-1} B A^{-1}$, where

$$A = -E\left[\sum_{t=1}^{T} \nabla_\theta^2 \log f_t(y_{it} \mid x_{it}; \theta_0)\right] \quad \text{and} \quad B = E\left[\left(\sum_{t=1}^{T} \nabla_\theta \log f_t(y_{it} \mid x_{it}; \theta_0)\right)\left(\sum_{t=1}^{T} \nabla_\theta \log f_t(y_{it} \mid x_{it}; \theta_0)\right)^{\mathsf{T}}\right].$$

If the joint conditional density of $y_i$ given $x_i$ is correctly specified, the above formula for the asymptotic variance simplifies, because the information equality implies $B = A$. Yet, except in special circumstances, the joint density modeled by partial MLE is not correct. Therefore, for valid inference, the sandwich formula above should be used. For the information equality to hold, one sufficient condition is that the scores of the densities for each time period are uncorrelated. In dynamically complete models, the condition holds, and thus the simplified asymptotic variance is valid.[1]

Pooled QMLE is a technique that allows estimating parameters when panel data is available with Poisson outcomes. For instance, one might have information on the number of patents filed by a number of different firms over time. Pooled QMLE does not necessarily contain unobserved effects (which can be either random effects or fixed effects), and the estimation method is mainly proposed for these purposes. The computational requirements are less stringent, especially compared to fixed-effect Poisson models, but the trade-off is the possibly strong assumption of no unobserved heterogeneity. "Pooled" refers to pooling the data over the different time periods $T$, while QMLE refers to the quasi-maximum likelihood technique. The Poisson distribution of $y_i$ given $x_i$ is specified by the density[2]

$$f(y \mid x_i) = \frac{e^{-\mu(x_i)}\,\mu(x_i)^{y}}{y!}, \qquad y = 0, 1, 2, \dots,$$

and the starting point for pooled Poisson QMLE is the conditional mean assumption.
Specifically, we assume that for some $b_0$ in a compact parameter space $B$, the conditional mean is given by[2]

$$E[y_t \mid x_t] = m(x_t, b_0), \qquad t = 1, \dots, T.$$

The compact parameter space condition is imposed to enable the use of M-estimation techniques, while the conditional mean reflects the fact that the population mean of a Poisson process is the parameter of interest. In this particular case, the parameter governing the Poisson process is allowed to vary with respect to the vector $x_t$.[2] The function $m$ can, in principle, change over time, even though it is often specified as static over time.[3] Note that only the conditional mean function is specified, and we will get consistent estimates of $b_0$ as long as this mean condition is correctly specified. Maximizing the quasi-log-likelihood for the pooled Poisson estimation, which (dropping terms not involving $b$) is

$$\ell_i(b) = \sum_{t=1}^{T} \left[ y_{it} \log m(x_{it}, b) - m(x_{it}, b) \right],$$

leads to the first-order condition of the estimator.[2] A popular choice is $m(x_t, b_0) = \exp(x_t b_0)$, as Poisson processes are defined over the positive real line.[3] This reduces the conditional moment to an exponential index function, where $x_t b_0$ is the linear index and $\exp$ is the link function.[4]
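To make the estimator concrete, the following is a minimal sketch of pooled Poisson QMLE with an exponential mean function and sandwich (robust) standard errors. The long panel layout, the function name, and all variable names are illustrative assumptions, not part of the original exposition; the scores are summed within each unit, matching the inner sum over $t$ in the formula for $B$ above.

```python
import numpy as np
from scipy.optimize import minimize

def pooled_poisson_qmle(y, X, ids):
    """Pooled Poisson QMLE with exponential mean m(x, b) = exp(x @ b).

    y   : (n,) array of counts, panel stacked over units and time
    X   : (n, k) array of regressors
    ids : (n,) array of unit identifiers, used to cluster the scores
    Returns the estimate b_hat and robust (sandwich) standard errors.
    """
    n, k = X.shape

    # Negative quasi-log-likelihood: -sum [ y * x'b - exp(x'b) ]
    def negloglik(b):
        xb = X @ b
        return -np.sum(y * xb - np.exp(xb))

    res = minimize(negloglik, np.zeros(k), method="BFGS")
    b_hat = res.x

    mu = np.exp(X @ b_hat)
    scores = (y - mu)[:, None] * X          # observation-level scores
    # Sum scores within each unit i (the inner sum over t in B)
    unit_scores = np.array([scores[ids == i].sum(axis=0)
                            for i in np.unique(ids)])
    A = (X * mu[:, None]).T @ X             # minus the Hessian of the quasi-log-likelihood
    B = unit_scores.T @ unit_scores
    Ainv = np.linalg.inv(A)
    se = np.sqrt(np.diag(Ainv @ B @ Ainv))  # robust (sandwich) standard errors
    return b_hat, se
```

Under correct specification of the full joint density, $B \approx A$ and the sandwich collapses to the usual inverse-information variance, mirroring the information-equality discussion above.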
https://en.wikipedia.org/wiki/Partial_likelihood_methods_for_panel_data#Pooled_QMLE_for_Poisson_models
In cryptanalysis, frequency analysis (also known as counting letters) is the study of the frequency of letters or groups of letters in a ciphertext. The method is used as an aid to breaking classical ciphers. Frequency analysis is based on the fact that, in any given stretch of written language, certain letters and combinations of letters occur with varying frequencies. Moreover, there is a characteristic distribution of letters that is roughly the same for almost all samples of that language. For instance, given a section of English language, E, T, A and O are the most common, while Z, Q, X and J are rare. Likewise, TH, ER, ON, and AN are the most common pairs of letters (termed bigrams or digraphs), and SS, EE, TT, and FF are the most common repeats.[1] The nonsense phrase "ETAOIN SHRDLU" represents the 12 most frequent letters in typical English language text. In some ciphers, such properties of the natural language plaintext are preserved in the ciphertext, and these patterns have the potential to be exploited in a ciphertext-only attack.

In a simple substitution cipher, each letter of the plaintext is replaced with another, and any particular letter in the plaintext will always be transformed into the same letter in the ciphertext. For instance, if all occurrences of the letter e turn into the letter X, a ciphertext message containing numerous instances of the letter X would suggest to a cryptanalyst that X represents e. The basic use of frequency analysis is to first count the frequency of ciphertext letters and then associate guessed plaintext letters with them. More Xs in the ciphertext than anything else suggests that X corresponds to e in the plaintext, but this is not certain; t and a are also very common in English, so X might be either of them. It is unlikely to be a plaintext z or q, which are less common. Thus the cryptanalyst may need to try several combinations of mappings between ciphertext and plaintext letters. More complex use of statistics can be conceived, such as considering counts of pairs of letters (bigrams), triplets (trigrams), and so on. This is done to provide more information to the cryptanalyst; for instance, Q and U nearly always occur together in that order in English, even though Q itself is rare.

Suppose Eve has intercepted a cryptogram known to be encrypted using a simple substitution cipher. For this example, uppercase letters are used to denote ciphertext, lowercase letters are used to denote plaintext (or guesses at such), and X~t is used to express a guess that ciphertext letter X represents the plaintext letter t. Eve could use frequency analysis to help solve the message along the following lines: counts of the letters in the cryptogram show that I is the most common single letter,[2] XL the most common bigram, and XLI the most common trigram. e is the most common letter in the English language, th is the most common bigram, and the is the most common trigram. This strongly suggests that X~t, L~h and I~e. The second most common letter in the cryptogram is E; since the first and second most frequent letters in the English language, e and t, are accounted for, Eve guesses that E~a, the third most frequent letter. Tentatively making these assumptions, a partially decrypted message is obtained. Using these initial guesses, Eve can spot patterns that confirm her choices, such as "that". Moreover, other patterns suggest further guesses. "Rtate" might be "state", which would mean R~s. Similarly "atthattMZe" could be guessed as "atthattime", yielding M~i and Z~m. Furthermore, "heVe" might be "here", giving V~r.
Filling in these guesses, Eve obtains a fuller decryption. In turn, these guesses suggest still others (for example, "remarA" could be "remark", implying A~k) and so on, and it is relatively straightforward to deduce the rest of the letters, eventually yielding the plaintext. At this point, it would be a good idea for Eve to insert spaces and punctuation. In this example from "The Gold-Bug", Eve's guesses were all correct. This would not always be the case, however; the variation in statistics for individual plaintexts can mean that initial guesses are incorrect. It may be necessary to backtrack incorrect guesses or to analyze the available statistics in much more depth than the somewhat simplified justifications given in the above example. It is possible that the plaintext does not exhibit the expected distribution of letter frequencies. Shorter messages are likely to show more variation. It is also possible to construct artificially skewed texts. For example, entire novels have been written that omit the letter e altogether, a form of literature known as a lipogram.

The first known recorded explanation of frequency analysis (indeed, of any kind of cryptanalysis) was given in the 9th century by Al-Kindi, an Arab polymath, in A Manuscript on Deciphering Cryptographic Messages.[3] It has been suggested that a close textual study of the Qur'an first brought to light that Arabic has a characteristic letter frequency.[4] Its use spread, and similar systems were widely used in European states by the time of the Renaissance. By 1474, Cicco Simonetta had written a manual on deciphering encryptions of Latin and Italian text.[5]

Several schemes were invented by cryptographers to defeat this weakness in simple substitution encryptions. A disadvantage of all these attempts to defeat frequency counting attacks is that they increase the complication of both enciphering and deciphering, leading to mistakes. Famously, a British Foreign Secretary is said to have rejected the Playfair cipher because, even if schoolboys could cope successfully as Wheatstone and Playfair had shown, "our attachés could never learn it!". The rotor machines of the first half of the 20th century (for example, the Enigma machine) were essentially immune to straightforward frequency analysis. However, other kinds of analysis ("attacks") successfully decoded messages from some of those machines.[6]

Frequency analysis requires only a basic understanding of the statistics of the plaintext language and some problem-solving skills, and, if performed by hand, tolerance for extensive letter bookkeeping. During World War II, both the British and the Americans recruited codebreakers by placing crossword puzzles in major newspapers and running contests for who could solve them the fastest. Several of the ciphers used by the Axis powers were breakable using frequency analysis, for example, some of the consular ciphers used by the Japanese. Mechanical methods of letter counting and statistical analysis (generally IBM card type machinery) were first used in World War II, possibly by the US Army's SIS. Today, the work of letter counting and analysis is done by computer software, which can carry out such analysis in seconds. With modern computing power, classical ciphers are unlikely to provide any real protection for confidential data.
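As an illustration of the letter-counting step that such software automates, here is a minimal Python sketch that tallies single-letter frequencies in a ciphertext and proposes a first guess by aligning them with a typical English frequency ranking. The sample ciphertext string and the ETAOIN-style ordering are assumptions for the example, not data from the article.

```python
from collections import Counter
import string

ENGLISH_BY_FREQUENCY = "ETAOINSHRDLUCMFWYPVBGKQJXZ"  # a typical English ranking

def first_guess(ciphertext: str) -> dict:
    """Map each ciphertext letter to a plaintext guess by frequency rank."""
    letters = [c for c in ciphertext.upper() if c in string.ascii_uppercase]
    ranked = [letter for letter, _ in Counter(letters).most_common()]
    # Pair the most common ciphertext letter with 'e', the next with 't', ...
    return {c: p.lower() for c, p in zip(ranked, ENGLISH_BY_FREQUENCY)}

guesses = first_guess("XLI UYMGO FVSAR JSB NYQTW SZIV XLI PEDC HSK")
print(guesses)  # e.g. {'I': 'e', 'X': 't', ...}
```

As the worked example above shows, such rank-based mappings are only a starting point; the cryptanalyst refines them using bigrams, trigrams, and recognizable word patterns.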
Frequency analysis has been described in fiction. Edgar Allan Poe's "The Gold-Bug" and Sir Arthur Conan Doyle's Sherlock Holmes tale "The Adventure of the Dancing Men" are examples of stories which describe the use of frequency analysis to attack simple substitution ciphers. The cipher in the Poe story is encrusted with several deception measures, but this is more a literary device than anything significant cryptographically.
https://en.wikipedia.org/wiki/Frequency_analysis
TransferJet is a close proximity wireless transfer technology initially proposed by Sony and demonstrated publicly in early 2008.[1] By touching (or bringing very close together) two electronic devices, TransferJet allows high speed exchange of data. The concept of TransferJet consists of a touch-activated interface which can be applied to applications requiring high-speed data transfer between two devices in a peer-to-peer mode without the need for external physical connectors.[2]

TransferJet's maximum physical layer transmission rate is 560 Mbit/s. After allowing for error correction and other protocol overhead, the effective maximum throughput is 375 Mbit/s. TransferJet will adjust the data rate downward according to the wireless environment, thereby maintaining a robust link even when the surrounding wireless condition fluctuates. TransferJet has the capability of identifying the unique MAC addresses of individual devices, enabling users to choose which devices can establish a connection. By allowing only devices inside the household, for example, one can prevent data theft from strangers while riding a crowded train. If, on the other hand, one wishes to connect the device with any other device at a party, this can be done by simply disabling the filtering function.

TransferJet uses the same frequency spectrum as UWB, but occupies only a section of this band available as a common worldwide channel. Since the RF power is kept under −70 dBm/MHz, it can operate in the same manner as UWB devices equipped with DAA functionality. In addition, this low power level also ensures that there will be no interference to other wireless systems, including other TransferJet systems, operating nearby. By reducing the RF power and spatial reach down to a few centimeters (about an inch or less), a TransferJet connection in its most basic mode does not require any initial setup procedure by the user for either device, and the action of spontaneously touching one device with another will automatically trigger the data transfer. More complex usage scenarios will require various means to select the specific data to send as well as the location to store (or method to process) the received data.

TransferJet utilizes a newly developed TransferJet Coupler based on the principle of the electric induction field, as opposed to the radiation field of conventional antennas. The functional elements of a generic TransferJet Coupler consist of a coupling electrode or plate, a resonant stub, and ground. Compared to conventional radiating antennas, the TransferJet Coupler achieves higher transmission gain and more efficient coupling in the near field while providing sharp attenuation at longer distances. Because the Coupler generates longitudinal electric fields, there is no polarization and the devices can be aligned at any angle.

The published TransferJet specifications[3] note, in addition to the figures above, compliance with low-intensity radio wave regulation in Japan and Taiwan (and with local regulations in other countries and regions), the ability of the system to adjust the transmission rate depending on the wireless environment, and π/2-shift BPSK modulation.

Although sometimes confused with Near Field Communication, TransferJet depends on an entirely different technology and is also generally targeted at different usage scenarios focusing on high-speed data transfer.
Thus these two systems will not interfere with each other and can even co-exist in the same location, as already implemented in certain products.[4] Other recent products combine TransferJet with wireless power to allow both data transfer and wireless charging capability simultaneously in the same location.[5] TransferJet, NFC and wireless power are the three major near-field (contact-less) technologies that are expected to eliminate the physical connections and cables currently required to interface devices with each other. A comparison table with NFC listed typical use cases (audio/video streaming for TransferJet; electronic payment and ID tagging for NFC) and link symmetry.

The TransferJet Consortium[6] was established in July 2008 to advance and promote the TransferJet format by developing the technical specifications and compliance testing procedures as well as creating a market for TransferJet-compliant, interoperable products. In September 2011, the consortium was registered as an independent non-profit industry association. As of June 2015, the Consortium is led by five Promoter companies: JRC, NTT, Olympus, Sony (consortium administrator), and Toshiba. The Consortium currently also has around thirty Adopter companies.[7] The TransferJet regular typeface and TransferJet logos are trademarks managed and licensed by the TransferJet Consortium.

Commercial products have been introduced since January 2010, and the initial product categories include digital cameras,[8] laptop PCs,[9] USB cradle accessories,[10] USB dongle accessories[11] and office/business equipment.[12] Compliance testing equipment is provided by Agilent Technologies and certification services are offered by Allion Test Labs. The first commercially available TransferJet development platform for embedded systems was launched by Icoteq Ltd in February 2015.[13] Smartphones with integrated TransferJet functionality were launched in June 2015 from Fujitsu[14] and Bkav.[15] Other product vendors include Buffalo and E-Globaledge.[16]

TransferJet X[17] is a new second-generation TransferJet specification capable of data transfer speeds of 13.1 Gbit/s and above, or about 20 times the speed of current TransferJet. This specification uses the 60 GHz band and requires only 2 ms or less to establish a connection prior to the actual data transfer, thereby enabling the exchange of large content files even in the short amount of time it takes, for example, for a person to walk through a wicket gate. The TransferJet Consortium is currently defining the details of the TransferJet X ecosystem, based on the IEEE 802.15.3e standard[18] completed and published in June 2017. The HRCP Research and Development Partnership,[19] established in 2016, is developing an SoC solution for implementing TransferJet X in a variety of products and services to be released starting around 2020.
https://en.wikipedia.org/wiki/TransferJet
In mathematics, hyperbolic geometry (also called Lobachevskian geometry or Bolyai–Lobachevskian geometry) is a non-Euclidean geometry. The parallel postulate of Euclidean geometry is replaced with: for any given line R and point P not on R, in the plane containing both line R and point P there are at least two distinct lines through P that do not intersect R. (Compare the above with Playfair's axiom, the modern version of Euclid's parallel postulate.)

The hyperbolic plane is a plane where every point is a saddle point. Hyperbolic plane geometry is also the geometry of pseudospherical surfaces, surfaces with a constant negative Gaussian curvature. Saddle surfaces have negative Gaussian curvature in at least some regions, where they locally resemble the hyperbolic plane. The hyperboloid model of hyperbolic geometry provides a representation of events one temporal unit into the future in Minkowski space, the basis of special relativity. Each of these events corresponds to a rapidity in some direction.

When geometers first realised they were working with something other than the standard Euclidean geometry, they described their geometry under many different names; Felix Klein finally gave the subject the name hyperbolic geometry to include it in the now rarely used sequence elliptic geometry (spherical geometry), parabolic geometry (Euclidean geometry), and hyperbolic geometry. In the former Soviet Union, it is commonly called Lobachevskian geometry, named after one of its discoverers, the Russian geometer Nikolai Lobachevsky.

Hyperbolic geometry is more closely related to Euclidean geometry than it seems: the only axiomatic difference is the parallel postulate. When the parallel postulate is removed from Euclidean geometry the resulting geometry is absolute geometry. There are two kinds of absolute geometry, Euclidean and hyperbolic. All theorems of absolute geometry, including the first 28 propositions of book one of Euclid's Elements, are valid in Euclidean and hyperbolic geometry. Propositions 27 and 28 of Book One of Euclid's Elements prove the existence of parallel/non-intersecting lines.

This difference also has many consequences: concepts that are equivalent in Euclidean geometry are not equivalent in hyperbolic geometry; new concepts need to be introduced. Further, because of the angle of parallelism, hyperbolic geometry has an absolute scale, a relation between distance and angle measurements.

Single lines in hyperbolic geometry have exactly the same properties as single straight lines in Euclidean geometry. For example, two points uniquely define a line, and line segments can be infinitely extended. Two intersecting lines have the same properties as two intersecting lines in Euclidean geometry. For example, two distinct lines can intersect in no more than one point, intersecting lines form equal opposite angles, and adjacent angles of intersecting lines are supplementary. When a third line is introduced, then there can be properties of intersecting lines that differ from intersecting lines in Euclidean geometry. For example, given two intersecting lines there are infinitely many lines that do not intersect either of the given lines. These properties are all independent of the model used, even if the lines may look radically different.

Non-intersecting lines in hyperbolic geometry also have properties that differ from non-intersecting lines in Euclidean geometry: the modified parallel postulate implies that there are through P an infinite number of coplanar lines that do not intersect R. These non-intersecting lines are divided into two classes: two of them (one in each direction) are limiting parallel to R, and all the others are ultraparallel to R. Let B be the foot of the perpendicular dropped from P onto R. Some geometers simply use the phrase "parallel lines" to mean "limiting parallel lines", with ultraparallel lines meaning just non-intersecting.
These limiting parallels make an angle θ with PB; this angle depends only on the Gaussian curvature of the plane and the distance PB, and is called the angle of parallelism. For ultraparallel lines, the ultraparallel theorem states that there is a unique line in the hyperbolic plane that is perpendicular to each pair of ultraparallel lines.

In hyperbolic geometry, the circumference of a circle of radius r is greater than $2\pi r$. Let $R = \frac{1}{\sqrt{-K}}$, where $K$ is the Gaussian curvature of the plane. In hyperbolic geometry $K$ is negative, so the square root is of a positive number. Then the circumference of a circle of radius r is equal to

$$2\pi R \sinh\frac{r}{R},$$

and the area of the enclosed disk is

$$4\pi R^{2} \sinh^{2}\frac{r}{2R}.$$

Therefore, in hyperbolic geometry the ratio of a circle's circumference to its radius is always strictly greater than $2\pi$, though it can be made arbitrarily close by selecting a small enough circle. If the Gaussian curvature of the plane is −1, then the geodesic curvature of a circle of radius r is $\frac{1}{\tanh(r)}$.[1]

In hyperbolic geometry, there is no line whose points are all equidistant from another line. Instead, the points that are all the same distance from a given line lie on a curve called a hypercycle. Another special curve is the horocycle, whose normal radii (perpendicular lines) are all limiting parallel to each other (all converge asymptotically in one direction to the same ideal point, the centre of the horocycle). Through every pair of points there are two horocycles. The centres of the horocycles are the ideal points of the perpendicular bisector of the line segment between them. Given any three distinct points, they all lie on either a line, hypercycle, horocycle, or circle.

The length of a line segment is the shortest length between two points. The arc length of a hypercycle connecting two points is longer than that of the line segment and shorter than that of the arc of a horocycle connecting the same two points. The lengths of the arcs of both horocycles connecting two points are equal, and are longer than the arc length of any hypercycle connecting the points and shorter than the arc of any circle connecting the two points. If the Gaussian curvature of the plane is −1, then the geodesic curvature of a horocycle is 1 and that of a hypercycle is between 0 and 1.[1]

Unlike Euclidean triangles, where the angles always add up to π radians (180°, a straight angle), in hyperbolic space the sum of the angles of a triangle is always strictly less than π radians (180°). The difference is called the defect. Generally, the defect of a convex hyperbolic polygon with $n$ sides is its angle sum subtracted from $(n-2) \cdot 180^{\circ}$. The area of a hyperbolic triangle is given by its defect in radians multiplied by $R^{2}$, which is also true for all convex hyperbolic polygons.[2] Therefore all hyperbolic triangles have an area less than or equal to $R^{2}\pi$. The area of a hyperbolic ideal triangle, in which all three angles are 0°, is equal to this maximum.

As in Euclidean geometry, each hyperbolic triangle has an incircle. In hyperbolic space, if all three of its vertices lie on a horocycle or hypercycle, then the triangle has no circumscribed circle. As in spherical and elliptical geometry, in hyperbolic geometry if two triangles are similar, they must be congruent.

Special polygons in hyperbolic geometry are the regular apeirogon and pseudogon, uniform polygons with an infinite number of sides.
In Euclidean geometry, the only way to construct such a polygon is to make the side lengths tend to zero, so that the apeirogon is indistinguishable from a circle, or to make the interior angles tend to 180°, so that the apeirogon approaches a straight line. However, in hyperbolic geometry, a regular apeirogon or pseudogon has sides of any length (i.e., it remains a polygon with noticeable sides). The side and angle bisectors will, depending on the side length and the angle between the sides, be limiting or diverging parallel. If the bisectors are limiting parallel then it is an apeirogon, and it can be inscribed and circumscribed by concentric horocycles. If the bisectors are diverging parallel then it is a pseudogon, and it can be inscribed and circumscribed by hypercycles (since all its vertices are the same distance from a line, the axis, and the midpoints of its sides are also equidistant from that same axis).

Like the Euclidean plane, it is also possible to tessellate the hyperbolic plane with regular polygons as faces. There are an infinite number of uniform tilings based on the Schwarz triangles (p q r) where 1/p + 1/q + 1/r < 1, where p, q, r are each orders of reflection symmetry at three points of the fundamental domain triangle and the symmetry group is a hyperbolic triangle group. There are also infinitely many uniform tilings that cannot be generated from Schwarz triangles, some for example requiring quadrilaterals as fundamental domains.[3]

Though hyperbolic geometry applies for any surface with a constant negative Gaussian curvature, it is usual to assume a scale in which the curvature K is −1. This results in some formulas becoming simpler.

Compared to Euclidean geometry, hyperbolic geometry presents many difficulties for a coordinate system: the angle sum of a quadrilateral is always less than 360°; there are no equidistant lines, so a proper rectangle would need to be enclosed by two lines and two hypercycles; parallel-transporting a line segment around a quadrilateral causes it to rotate when it returns to the origin; etc. There are, however, different coordinate systems for hyperbolic plane geometry. All are based around choosing a point (the origin) on a chosen directed line (the x-axis); after that many choices exist.

The Lobachevsky coordinates x and y are found by dropping a perpendicular onto the x-axis. x will be the label of the foot of the perpendicular. y will be the distance along the perpendicular of the given point from its foot (positive on one side and negative on the other). Another coordinate system measures the distance from the point to the horocycle through the origin centered around $(0, +\infty)$ and the length along this horocycle.[5] Other coordinate systems use the Klein model or the Poincaré disk model described below, and take the Euclidean coordinates as hyperbolic.

A Cartesian-like coordinate system (x, y) on the oriented hyperbolic plane is constructed as follows. Choose a line in the hyperbolic plane together with an orientation and an origin o on this line. Then x is the signed distance along the chosen line from o to the foot of the perpendicular dropped from the given point, and y is the signed distance from the point to that foot. The distance between two points represented by $(x_i, y_i)$, $i = 1, 2$, in this coordinate system is

$$\operatorname{dist}(\langle x_1, y_1\rangle, \langle x_2, y_2\rangle) = \operatorname{arcosh}\left(\cosh y_1 \cosh(x_2 - x_1) \cosh y_2 - \sinh y_1 \sinh y_2\right).$$

This formula can be derived from the formulas about hyperbolic triangles.
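As a quick numerical check of this distance formula (a sketch; the function name and the sample points are ours, not the article's):

```python
import math

def hyperbolic_dist(p1, p2):
    """Distance between two points in the axial coordinates (x, y) described
    above, on the curvature K = -1 hyperbolic plane:
    dist = arcosh(cosh y1 * cosh(x2 - x1) * cosh y2 - sinh y1 * sinh y2)."""
    (x1, y1), (x2, y2) = p1, p2
    arg = (math.cosh(y1) * math.cosh(x2 - x1) * math.cosh(y2)
           - math.sinh(y1) * math.sinh(y2))
    return math.acosh(arg)

# Along the axis (y = 0) the formula reduces to the plain difference |x2 - x1|:
print(hyperbolic_dist((0.0, 0.0), (2.0, 0.0)))   # 2.0
# Off-axis points are farther apart than their x-difference suggests:
print(hyperbolic_dist((0.0, 1.0), (2.0, 1.0)))   # about 2.71 > 2
```

The second call illustrates the absence of equidistant lines mentioned above: two points at the same height y = 1 over the axis are farther apart than the corresponding points on the axis itself.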
The corresponding metric tensor field is $(\mathrm{d}s)^{2} = \cosh^{2} y \,(\mathrm{d}x)^{2} + (\mathrm{d}y)^{2}$.

In this coordinate system, straight lines take one of several forms, described by parameters $x_0$, $y_0$, $A$ and $\alpha$ (with $(x, y)$ a point on the line): lines ultraparallel to the x-axis, lines asymptotically parallel on the negative side, lines asymptotically parallel on the positive side, lines intersecting the x-axis perpendicularly, and lines intersecting it at an angle $\alpha$. Generally, these equations will only hold in a bounded domain (of x values); at the edge of that domain, the value of y blows up to ±infinity.

Since the publication of Euclid's Elements circa 300 BC, many geometers tried to prove the parallel postulate. Some tried to prove it by assuming its negation and trying to derive a contradiction. Foremost among these were Proclus, Ibn al-Haytham (Alhacen), Omar Khayyám,[6] Nasīr al-Dīn al-Tūsī, Witelo, Gersonides, Alfonso, and later Giovanni Gerolamo Saccheri, John Wallis, Johann Heinrich Lambert, and Legendre.[7] Their attempts were doomed to failure (as we now know, the parallel postulate is not provable from the other postulates), but their efforts led to the discovery of hyperbolic geometry.

The theorems of Alhacen, Khayyam and al-Tūsī on quadrilaterals, including the Ibn al-Haytham–Lambert quadrilateral and the Khayyam–Saccheri quadrilateral, were the first theorems on hyperbolic geometry. Their works on hyperbolic geometry had a considerable influence on its development among later European geometers, including Witelo, Gersonides, Alfonso, John Wallis and Saccheri.[8]

In the 18th century, Johann Heinrich Lambert introduced the hyperbolic functions[9] and computed the area of a hyperbolic triangle.[10]

In the 19th century, hyperbolic geometry was explored extensively by Nikolai Lobachevsky, János Bolyai, Carl Friedrich Gauss and Franz Taurinus. Unlike their predecessors, who just wanted to eliminate the parallel postulate from the axioms of Euclidean geometry, these authors realized they had discovered a new geometry.[11][12] Gauss wrote in an 1824 letter to Franz Taurinus that he had constructed it, but Gauss did not publish his work. Gauss called it "non-Euclidean geometry",[13] causing several modern authors to continue to consider "non-Euclidean geometry" and "hyperbolic geometry" to be synonyms. Taurinus published results on hyperbolic trigonometry in 1826, argued that hyperbolic geometry is self-consistent, but still believed in the special role of Euclidean geometry. The complete system of hyperbolic geometry was published by Lobachevsky in 1829/1830, while Bolyai discovered it independently and published in 1832.

In 1868, Eugenio Beltrami provided models of hyperbolic geometry, and used this to prove that hyperbolic geometry was consistent if and only if Euclidean geometry was. The term "hyperbolic geometry" was introduced by Felix Klein in 1871.[14] Klein followed an initiative of Arthur Cayley to use the transformations of projective geometry to produce isometries. The idea used a conic section or quadric to define a region, and used cross-ratio to define a metric. The projective transformations that leave the conic section or quadric stable are the isometries. "Klein showed that if the Cayley absolute is a real curve then the part of the projective plane in its interior is isometric to the hyperbolic plane..."[15]

The discovery of hyperbolic geometry had important philosophical consequences.
Before its discovery many philosophers (such as Hobbes and Spinoza) viewed philosophical rigor in terms of the "geometrical method", referring to the method of reasoning used in Euclid's Elements. Kant, in the Critique of Pure Reason, concluded that space (in Euclidean geometry) and time are not discovered by humans as objective features of the world, but are part of an unavoidable systematic framework for organizing our experiences.[16]

It is said that Gauss did not publish anything about hyperbolic geometry out of fear of the "uproar of the Boeotians" (stereotyped as dullards by the ancient Athenians[17]), which would ruin his status as princeps mathematicorum (Latin, "the Prince of Mathematicians").[18] The "uproar of the Boeotians" came and went, and gave an impetus to great improvements in mathematical rigour, analytical philosophy and logic. Hyperbolic geometry was finally proved consistent and is therefore another valid geometry.

Because Euclidean, hyperbolic and elliptic geometry are all consistent, the question arises: which is the real geometry of space, and if it is hyperbolic or elliptic, what is its curvature? Lobachevsky had already tried to measure the curvature of the universe by measuring the parallax of Sirius and treating Sirius as the ideal point of an angle of parallelism. He realized that his measurements were not precise enough to give a definite answer, but he did reach the conclusion that if the geometry of the universe is hyperbolic, then the absolute length is at least one million times the diameter of Earth's orbit (2000000 AU, 10 parsecs).[19] Some argue that his measurements were methodologically flawed.[20]

Henri Poincaré, with his sphere-world thought experiment, came to the conclusion that everyday experience does not necessarily rule out other geometries. The geometrization conjecture gives a complete list of eight possibilities for the fundamental geometry of our space. The problem in determining which one applies is that, to reach a definitive answer, we need to be able to look at extremely large shapes – much larger than anything on Earth or perhaps even in our galaxy.[21]

Special relativity places space and time on an equal footing, so that one considers the geometry of a unified spacetime instead of considering space and time separately.[22][23] Minkowski geometry replaces Galilean geometry (which is the 3-dimensional Euclidean space with the time of Galilean relativity).[24] In relativity, rather than Euclidean, elliptic and hyperbolic geometry, the appropriate geometries to consider are Minkowski space, de Sitter space and anti-de Sitter space,[25][26] corresponding to zero, positive and negative curvature respectively.

Hyperbolic geometry enters special relativity through rapidity, which stands in for velocity and is expressed by a hyperbolic angle. The study of this velocity geometry has been called kinematic geometry. The space of relativistic velocities has a three-dimensional hyperbolic geometry, where the distance function is determined from the relative velocities of "nearby" points (velocities).[27]

There exist various pseudospheres in Euclidean space that have a finite area of constant negative Gaussian curvature. By Hilbert's theorem, one cannot isometrically immerse a complete hyperbolic plane (a complete regular surface of constant negative Gaussian curvature) in a 3-D Euclidean space. Other useful models of hyperbolic geometry exist in Euclidean space, in which the metric is not preserved. A particularly well-known paper model based on the pseudosphere is due to William Thurston.
The art of crochet has been used to demonstrate hyperbolic planes, the first such demonstration having been made by Daina Taimiņa.[28] In 2000, Keith Henderson demonstrated a quick-to-make paper model dubbed the "hyperbolic soccerball" (more precisely, a truncated order-7 triangular tiling).[29][30] Instructions on how to make a hyperbolic quilt, designed by Helaman Ferguson,[31] have been made available by Jeff Weeks.[32]

Various pseudospheres – surfaces with constant negative Gaussian curvature – can be embedded in 3-D space under the standard Euclidean metric, and so can be made into tangible models. Of these, the tractoid (or pseudosphere) is the best known; using the tractoid as a model of the hyperbolic plane is analogous to using a cone or cylinder as a model of the Euclidean plane. However, the entire hyperbolic plane cannot be embedded into Euclidean space in this way, and various other models are more convenient for abstractly exploring hyperbolic geometry.

There are four models commonly used for hyperbolic geometry: the Klein model, the Poincaré disk model, the Poincaré half-plane model, and the Lorentz or hyperboloid model. These models define a hyperbolic plane which satisfies the axioms of a hyperbolic geometry. Despite their names, the first three mentioned above were introduced as models of hyperbolic space by Beltrami, not by Poincaré or Klein. All these models are extendable to more dimensions.

The Beltrami–Klein model, also known as the projective disk model, Klein disk model and Klein model, is named after Eugenio Beltrami and Felix Klein. In two dimensions this model uses the interior of the unit circle for the complete hyperbolic plane, and the chords of this circle are the hyperbolic lines. For higher dimensions this model uses the interior of the unit ball, and the chords of this n-ball are the hyperbolic lines.

The Poincaré disk model, also known as the conformal disk model, also employs the interior of the unit circle, but lines are represented by arcs of circles that are orthogonal to the boundary circle, plus diameters of the boundary circle.

The Poincaré half-plane model takes one-half of the Euclidean plane, bounded by a line B of the plane, to be a model of the hyperbolic plane. The line B is not included in the model. The Euclidean plane may be taken to be a plane with the Cartesian coordinate system, with the x-axis taken as line B and the half-plane being the upper half (y > 0) of this plane.

The hyperboloid model or Lorentz model employs a 2-dimensional hyperboloid of revolution (of two sheets, but using one) embedded in 3-dimensional Minkowski space. This model is generally credited to Poincaré, but Reynolds[33] says that Wilhelm Killing used this model in 1885.

The hemisphere model is not often used as a model by itself, but it functions as a useful tool for visualizing transformations between the other models. The hemisphere model uses the upper half of the unit sphere: $x^{2} + y^{2} + z^{2} = 1,\ z > 0$. The hyperbolic lines are half-circles orthogonal to the boundary of the hemisphere. The hemisphere model is part of a Riemann sphere, and different projections give different models of the hyperbolic plane.

All models essentially describe the same structure. The difference between them is that they represent different coordinate charts laid down on the same metric space, namely the hyperbolic plane. The characteristic feature of the hyperbolic plane itself is that it has a constant negative Gaussian curvature, which is indifferent to the coordinate chart used.
The geodesics are similarly invariant: that is, geodesics map to geodesics under coordinate transformation. Hyperbolic geometry is generally introduced in terms of the geodesics and their intersections on the hyperbolic plane.[34] Once we choose a coordinate chart (one of the "models"), we can always embed it in a Euclidean space of the same dimension, but the embedding is clearly not isometric (since the curvature of Euclidean space is 0). The hyperbolic space can be represented by infinitely many different charts, but the embeddings in Euclidean space due to these four specific charts show some interesting characteristics. Since the four models describe the same metric space, each can be transformed into the other.

In 1966 David Gans proposed a flattened hyperboloid model in the journal American Mathematical Monthly.[35] It is an orthographic projection of the hyperboloid model onto the xy-plane. This model is not as widely used as other models but nevertheless is quite useful in the understanding of hyperbolic geometry.

The conformal square model of the hyperbolic plane arises from using Schwarz–Christoffel mapping to convert the Poincaré disk into a square.[37] This model has finite extent, like the Poincaré disk, but all of the points are inside a square. This model is conformal, which makes it suitable for artistic applications.

The band model employs a portion of the Euclidean plane between two parallel lines.[38] Distance is preserved along one line through the middle of the band. Assuming the band is given by $\{z \in \mathbb{C} : |\operatorname{Im} z| < \pi/2\}$, the metric is given by $|dz| \sec(\operatorname{Im} z)$.

Every isometry (transformation or motion) of the hyperbolic plane to itself can be realized as the composition of at most three reflections. In n-dimensional hyperbolic space, up to n+1 reflections might be required. (These facts are also true for Euclidean and spherical geometries, but the classification of isometries is different.)

M. C. Escher's famous prints Circle Limit III and Circle Limit IV illustrate the conformal disc model (Poincaré disk model) quite well. The white lines in III are not quite geodesics (they are hypercycles), but are close to them. It is also possible to see the negative curvature of the hyperbolic plane through its effect on the sum of angles in triangles and squares. For example, in Circle Limit III every vertex belongs to three triangles and three squares. In the Euclidean plane, their angles would sum to 450°, i.e., a circle and a quarter. From this, we see that the sum of angles of a triangle in the hyperbolic plane must be smaller than 180°. Another visible property is exponential growth. In Circle Limit III, for example, one can see that the number of fishes within a distance of n from the center rises exponentially. The fishes have an equal hyperbolic area, so the area of a ball of radius n must rise exponentially in n.

The art of crochet has been used to demonstrate hyperbolic planes (pictured above), with the first being made by Daina Taimiņa,[28] whose book Crocheting Adventures with Hyperbolic Planes won the 2009 Bookseller/Diagram Prize for Oddest Title of the Year.[39] HyperRogue is a roguelike game set on various tilings of the hyperbolic plane.

Hyperbolic geometry is not limited to 2 dimensions; a hyperbolic geometry exists for every higher number of dimensions.
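Returning for a moment to the two-dimensional models: since they are charts of the same metric space, converting between them is a matter of coordinate changes. As a sketch (the function names and conventions are ours), here is the standard Cayley-type Möbius map between the Poincaré disk and the Poincaré half-plane, using complex coordinates:

```python
# Moebius maps between the Poincare disk |z| < 1 and the upper half-plane Im(w) > 0.

def disk_to_half_plane(z: complex) -> complex:
    """Cayley-type transformation w = i(1 + z)/(1 - z)."""
    return 1j * (1 + z) / (1 - z)

def half_plane_to_disk(w: complex) -> complex:
    """Inverse map: z = (w - i)/(w + i)."""
    return (w - 1j) / (w + 1j)

z = 0.3 + 0.2j                                   # a point of the Poincare disk
w = disk_to_half_plane(z)
assert abs(half_plane_to_disk(w) - z) < 1e-12    # round trip recovers z
print(w.imag > 0)                                # True: image lies in the upper half-plane
```

Because the map is a Möbius transformation, it sends the disk model's geodesics (circular arcs orthogonal to the boundary circle, plus diameters) to the half-plane model's geodesics (half-circles and vertical rays orthogonal to the real axis).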
Hyperbolic space of dimension n is a special case of a Riemannian symmetric space of noncompact type, as it is isomorphic to the quotient O(1, n)/(O(1) × O(n)). The orthogonal group O(1, n) acts by norm-preserving transformations on Minkowski space $\mathbb{R}^{1,n}$, and it acts transitively on the two-sheet hyperboloid of norm 1 vectors. Timelike lines (i.e., those with positive-norm tangents) through the origin pass through antipodal points in the hyperboloid, so the space of such lines yields a model of hyperbolic n-space. The stabilizer of any particular line is isomorphic to the product of the orthogonal groups O(n) and O(1), where O(n) acts on the tangent space of a point in the hyperboloid, and O(1) reflects the line through the origin. Many of the elementary concepts in hyperbolic geometry can be described in linear algebraic terms: geodesic paths are described by intersections with planes through the origin, dihedral angles between hyperplanes can be described by inner products of normal vectors, and hyperbolic reflection groups can be given explicit matrix realizations.

In small dimensions, there are exceptional isomorphisms of Lie groups that yield additional ways to consider symmetries of hyperbolic spaces. For example, in dimension 2, the isomorphisms SO+(1, 2) ≅ PSL(2, R) ≅ PSU(1, 1) allow one to interpret the upper half-plane model as the quotient SL(2, R)/SO(2) and the Poincaré disc model as the quotient SU(1, 1)/U(1). In both cases, the symmetry groups act by fractional linear transformations, since both groups are the orientation-preserving stabilizers in PGL(2, C) of the respective subspaces of the Riemann sphere. The Cayley transformation not only takes one model of the hyperbolic plane to the other, but realizes the isomorphism of symmetry groups as conjugation in a larger group.

In dimension 3, the fractional linear action of PGL(2, C) on the Riemann sphere is identified with the action on the conformal boundary of hyperbolic 3-space induced by the isomorphism O+(1, 3) ≅ PGL(2, C). This allows one to study isometries of hyperbolic 3-space by considering spectral properties of representative complex matrices. For example, parabolic transformations are conjugate to rigid translations in the upper half-space model, and they are exactly those transformations that can be represented by unipotent upper triangular matrices.

"Three scientists, Ibn al-Haytham, Khayyam and al-Tūsī, had made the most considerable contribution to this branch of geometry whose importance came to be completely recognized only in the 19th century. In essence their propositions concerning the properties of quadrangles which they considered assuming that some of the angles of these figures were acute or obtuse, embodied the first few theorems of the hyperbolic and the elliptic geometries. Their other proposals showed that various geometric statements were equivalent to the Euclidean postulate V. It is extremely important that these scholars established the mutual connection between this postulate and the sum of the angles of a triangle and a quadrangle. By their works on the theory of parallel lines Arab mathematicians directly influenced the relevant investigations of their European counterparts. The first European attempt to prove the postulate on parallel lines – made by Witelo, the Polish scientist of the 13th century, while revising Ibn al-Haytham's Book of Optics (Kitab al-Manazir) – was undoubtedly prompted by Arabic sources.
The proofs put forward in the 14th century by the Jewish scholar Levi ben Gerson, who lived in southern France, and by the above-mentioned Alfonso from Spain directly border on Ibn al-Haytham's demonstration. Above, we have demonstrated that Pseudo-Tusi's Exposition of Euclid had stimulated both J. Wallis's and G. Saccheri's studies of the theory of parallel lines."
https://en.wikipedia.org/wiki/Hyperbolic_geometry
In information theory, Fano's inequality (also known as the Fano converse and the Fano lemma) relates the average information lost in a noisy channel to the probability of the categorization error. It was derived by Robert Fano in the early 1950s while teaching a Ph.D. seminar in information theory at MIT, and later recorded in his 1961 textbook. It is used to find a lower bound on the error probability of any decoder, as well as lower bounds for minimax risks in density estimation.

Let the discrete random variables $X$ and $Y$ represent input and output messages with a joint probability $P(x, y)$. Let $e$ represent an occurrence of error; i.e., that $X \neq \tilde{X}$, with $\tilde{X} = f(Y)$ being an approximate version of $X$. Fano's inequality is

$$H(X \mid Y) \leq H_b(e) + e \, \log(|\mathcal{X}| - 1),$$

where $\mathcal{X}$ denotes the support of $X$, $|\mathcal{X}|$ denotes the cardinality of (number of elements in) $\mathcal{X}$, $H(X \mid Y)$ is the conditional entropy, $e = P(X \neq \tilde{X})$ is the probability of the communication error, and $H_b(e) = -e \log e - (1 - e) \log(1 - e)$ is the corresponding binary entropy.

Define an indicator random variable $E$ that indicates the event that our estimate $\tilde{X} = f(Y)$ is in error:

$$E = \begin{cases} 1 & \text{if } \tilde{X} \neq X, \\ 0 & \text{if } \tilde{X} = X. \end{cases}$$

Consider $H(E, X \mid \tilde{X})$. We can use the chain rule for entropies to expand this in two different ways:

$$H(E, X \mid \tilde{X}) = H(X \mid \tilde{X}) + H(E \mid X, \tilde{X}) = H(E \mid \tilde{X}) + H(X \mid E, \tilde{X}).$$

Since $E$ is a function of $X$ and $\tilde{X}$, the term $H(E \mid X, \tilde{X})$ is zero. Equating the two expansions,

$$H(X \mid \tilde{X}) = H(E \mid \tilde{X}) + H(X \mid E, \tilde{X}).$$

Expanding the rightmost term, $H(X \mid E, \tilde{X})$:

$$H(X \mid E, \tilde{X}) = P(E = 0)\, H(X \mid E = 0, \tilde{X}) + P(E = 1)\, H(X \mid E = 1, \tilde{X}).$$

Since $E = 0$ means $X = \tilde{X}$, being given the value of $\tilde{X}$ allows us to know the value of $X$ with certainty; this makes the term $H(X \mid E = 0, \tilde{X}) = 0$. On the other hand, $E = 1$ means that $\tilde{X} \neq X$, hence given the value of $\tilde{X}$ we can narrow down $X$ to one of $|\mathcal{X}| - 1$ different values, allowing us to upper bound the conditional entropy: $H(X \mid E = 1, \tilde{X}) \leq \log(|\mathcal{X}| - 1)$. Hence

$$H(X \mid E, \tilde{X}) \leq e \, \log(|\mathcal{X}| - 1).$$

For the other term, $H(E \mid \tilde{X}) \leq H(E)$, because conditioning reduces entropy. Because of the way $E$ is defined, $H(E) = H_b(e)$, meaning that $H(E \mid \tilde{X}) \leq H_b(e)$. Putting it all together,

$$H(X \mid \tilde{X}) \leq H_b(e) + e \, \log(|\mathcal{X}| - 1).$$

Because $X \rightarrow Y \rightarrow \tilde{X}$ is a Markov chain, we have $I(X; \tilde{X}) \leq I(X; Y)$ by the data processing inequality, and hence $H(X \mid \tilde{X}) \geq H(X \mid Y)$, giving us

$$H(X \mid Y) \leq H_b(e) + e \, \log(|\mathcal{X}| - 1).$$

Fano's inequality can be interpreted as a way of dividing the uncertainty of a conditional distribution into two questions given an arbitrary predictor. The first question, corresponding to the term $H_b(e)$, relates to the uncertainty of the predictor. If the prediction is correct, there is no more uncertainty remaining. If the prediction is incorrect, the uncertainty of any discrete distribution has an upper bound of the entropy of the uniform distribution over all choices besides the incorrect prediction. This has entropy $\log(|\mathcal{X}| - 1)$.
Looking at extreme cases, if the predictor is always correct, the first and second terms of the inequality are 0, and the existence of a perfect predictor implies that $X$ is totally determined by $Y$, so $H(X \mid Y) = 0$. If the predictor is always wrong, then the first term is 0, and $H(X \mid Y)$ can only be upper bounded by the entropy of a uniform distribution over the remaining choices.

Let $X$ be a random variable with density equal to one of $r + 1$ possible densities $f_1, \ldots, f_{r+1}$. Furthermore, suppose the Kullback–Leibler divergence between any pair of densities cannot be too large, say $KL(f_i \,\|\, f_j) \leq \beta$ for all $i \neq j$. Let $\psi(X) \in \{1, \ldots, r+1\}$ be an estimate of the index. Then

$$\sup_i P_i(\psi(X) \neq i) \geq 1 - \frac{\beta + \log 2}{\log r},$$

where $P_i$ is the probability induced by $f_i$.

The following generalization is due to Ibragimov and Khasminskii (1979), Assouad and Birge (1983). Let F be a class of densities with a subclass of r + 1 densities $f_\theta$ such that for any $\theta \neq \theta'$,

$$\|f_\theta - f_{\theta'}\|_{L_1} \geq \alpha, \qquad KL(f_\theta \,\|\, f_{\theta'}) \leq \beta.$$

Then in the worst case the expected value of the error of estimation is bounded from below:

$$\sup_{f \in \mathbf{F}} E \, \|f_n - f\|_{L_1} \geq \frac{\alpha}{2}\left(1 - \frac{n\beta + \log 2}{\log r}\right),$$

where $f_n$ is any density estimator based on a sample of size n.
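For the basic channel form of the inequality, a small numerical sketch (variable names ours): given an error probability e and an alphabet size |X|, the Fano bound caps how large the conditional entropy H(X | Y) can be.

```python
import math

def fano_bound(e: float, alphabet_size: int, base: float = 2.0) -> float:
    """Upper bound on H(X|Y) from Fano's inequality:
    H(X|Y) <= H_b(e) + e * log(|X| - 1)."""
    def h_b(p):  # binary entropy, with the 0 * log(0) = 0 convention
        if p in (0.0, 1.0):
            return 0.0
        return -p * math.log(p, base) - (1 - p) * math.log(1 - p, base)
    return h_b(e) + e * math.log(alphabet_size - 1, base)

# With a 10% error probability over a 16-symbol alphabet, at most about
# 0.469 + 0.1 * log2(15) ~= 0.86 bits of uncertainty about X can remain given Y.
print(round(fano_bound(0.1, 16), 3))
```

Read contrapositively, any decoder for which H(X | Y) exceeds this value must have an error probability larger than e, which is how the inequality is used to derive converse (lower-bound) results.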
https://en.wikipedia.org/wiki/Fano%27s_inequality
Runtime verification is a computing system analysis and execution approach based on extracting information from a running system and using it to detect, and possibly react to, observed behaviors satisfying or violating certain properties.[1] Some very particular properties, such as data race and deadlock freedom, are typically desired to be satisfied by all systems and may be best implemented algorithmically. Other properties can be more conveniently captured as formal specifications. Runtime verification specifications are typically expressed in trace predicate formalisms, such as finite-state machines, regular expressions, context-free patterns, linear temporal logics, etc., or extensions of these. This allows for a less ad-hoc approach than normal testing. However, any mechanism for monitoring an executing system is considered runtime verification, including verifying against test oracles and reference implementations. When formal requirements specifications are provided, monitors are synthesized from them and infused within the system by means of instrumentation. Runtime verification can be used for many purposes, such as security or safety policy monitoring, debugging, testing, verification, validation, profiling, fault protection, behavior modification (e.g., recovery), etc. Runtime verification avoids the complexity of traditional formal verification techniques, such as model checking and theorem proving, by analyzing only one or a few execution traces and by working directly with the actual system. It thus scales up relatively well and gives more confidence in the results of the analysis (because it avoids the tedious and error-prone step of formally modelling the system), at the expense of less coverage. Moreover, through its reflective capabilities runtime verification can be made an integral part of the target system, monitoring and guiding its execution during deployment.

Checking formally or informally specified properties against executing systems or programs is an old topic (notable examples are dynamic typing in software, or fail-safe devices and watchdog timers in hardware), whose precise roots are hard to identify. The terminology runtime verification was formally introduced as the name of a 2001 workshop[2] aimed at addressing problems at the boundary between formal verification and testing. For large code bases, manually writing test cases turns out to be very time consuming, and not all errors can be detected during development. Early contributions to automated verification were made at the NASA Ames Research Center by Klaus Havelund and Grigore Rosu, to achieve high safety standards in spacecraft, rovers and avionics technology.[3] They proposed a tool to verify specifications in temporal logic and to detect race conditions and deadlocks in Java programs by analyzing single execution paths.

Currently, runtime verification techniques are often presented under various alternative names, such as runtime monitoring, runtime checking, runtime reflection, runtime analysis, dynamic analysis, runtime/dynamic symbolic analysis, trace analysis, log file analysis, etc., all referring to instances of the same high-level concept applied either to different areas or by scholars from different communities. Runtime verification is intimately related to other well-established areas, such as testing (particularly model-based testing) when used before deployment, and fault-tolerant systems when used during deployment.
Within the broad area of runtime verification, one can distinguish several categories, and the field's methods can be classified by three dimensions.[9] Nevertheless, the basic process in runtime verification remains similar.[9]

The examples below discuss some simple properties that have been considered, possibly with small variations, by several runtime verification groups by the time of this writing (April 2011). To make them more interesting, each property below uses a different specification formalism and all of them are parametric. Parametric properties are properties about traces formed with parametric events, which are events that bind data to parameters. Here a parametric property has the form $\forall \text{parameters} : \varphi$, where $\varphi$ is a specification in some appropriate formalism referring to generic (uninstantiated) parametric events. The intuition for such parametric properties is that the property expressed by $\varphi$ must hold for all parameter instances encountered (through parametric events) in the observed trace. None of the following examples are specific to any particular runtime verification system, though support for parameters is obviously needed. In the following examples Java syntax is assumed, thus "==" is logical equality, while "=" is assignment. Some methods (e.g., update() in the UnsafeEnumExample) are dummy methods, which are not part of the Java API, that are used for clarity.

The Java Iterator interface requires that the hasNext() method be called and return true before the next() method is called. If this does not occur, it is very possible that a user will iterate "off the end of" a Collection. The figure to the right shows a finite-state machine that defines a possible monitor for checking and enforcing this property with runtime verification. From the unknown state, it is always an error to call the next() method, because such an operation could be unsafe. If hasNext() is called and returns true, it is safe to call next(), so the monitor enters the more state. If, however, the hasNext() method returns false, there are no more elements, and the monitor enters the none state. In the more and none states, calling the hasNext() method provides no new information. It is safe to call the next() method from the more state, but it becomes unknown whether more elements exist, so the monitor reenters the initial unknown state. Finally, calling the next() method from the none state results in entering the error state. What follows is a representation of this property using parametric past time linear temporal logic:

$$\forall \, \text{Iterator} \; i \quad i.\mathrm{next}() \rightarrow \odot\,(i.\mathrm{hasNext}() == \mathrm{true})$$

This formula says that any call to the next() method must be immediately preceded by a call to the hasNext() method that returns true. The property here is parametric in the Iterator i. Conceptually, this means that there will be one copy of the monitor for each possible Iterator in a test program, although runtime verification systems need not implement their parametric monitors this way. The monitor for this property would be set to trigger a handler when the formula is violated (equivalently, when the finite-state machine enters the error state), which will occur when either next() is called without first calling hasNext(), or when hasNext() is called before next() but returned false. A monitor of this kind can be sketched in a few lines of code, as shown below.
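The following is a minimal Python sketch (not taken from any particular runtime verification tool) of the HasNext finite-state monitor described above, wrapping an iterator so that property violations raise an error:

```python
class HasNextMonitor:
    """Finite-state monitor for the HasNext property:
    next() may only be called immediately after has_next() returned True.
    The states mirror the machine described above: unknown, more, none."""

    def __init__(self, items):
        self._it = iter(items)
        self._peeked = []          # buffer for the element has_next() looked at
        self.state = "unknown"

    def has_next(self) -> bool:
        if not self._peeked:
            try:
                self._peeked.append(next(self._it))
            except StopIteration:
                self.state = "none"
                return False
        self.state = "more"
        return True

    def next(self):
        if self.state != "more":   # calling next() in unknown or none is an error
            raise RuntimeError(f"property violated: next() called in state {self.state!r}")
        self.state = "unknown"     # after next(), existence of more elements is unknown
        return self._peeked.pop()

m = HasNextMonitor([1, 2])
while m.has_next():
    print(m.next())    # safe usage: prints 1, then 2
m.next()               # violation: raises RuntimeError
```

In a real system the monitor would be synthesized from the temporal-logic formula and attached through instrumentation rather than by hand-wrapping the iterator, but the state transitions are exactly those of the machine described above.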
The Vector class in Java has two means for iterating over its elements: one may use the Iterator interface, as seen in the previous example, or one may use the Enumeration interface. Besides the addition of a remove method in the Iterator interface, the main difference is that Iterator is "fail fast" while Enumeration is not. What this means is that if one modifies the Vector (other than by using the Iterator remove method) while iterating over the Vector using an Iterator, a ConcurrentModificationException is thrown. When using an Enumeration, however, this is not the case. This can result in non-deterministic results from a program, because the Vector is left in an inconsistent state from the perspective of the Enumeration. For legacy programs that still use the Enumeration interface, one may wish to enforce that Enumerations are not used when their underlying Vector is modified. A parametric regular pattern over the events of creating an Enumeration, updating the Vector, and using the Enumeration can be used to enforce this behavior. This pattern is parametric in both the Enumeration and the Vector. Intuitively (and, as above, runtime verification systems need not implement their parametric monitors this way), one may think of the parametric monitor for this property as creating and keeping track of a non-parametric monitor instance for each possible pair of Vector and Enumeration. Some events may concern several monitors at the same time, such as v.update(), so the runtime verification system must (again, conceptually) dispatch them to all interested monitors. Here the property is specified so that it states the bad behaviors of the program; this property, then, must be monitored for a match of the pattern. The figure to the right shows Java code that matches this pattern, thus violating the property: the Vector, v, is updated after the Enumeration, e, is created, and e is then used.

The previous two examples show finite-state properties, but properties used in runtime verification may be much more complex. The SafeLock property enforces the policy that the number of acquires and releases of a (reentrant) Lock class are matched within a given method call. This, of course, disallows releasing Locks in methods other than the ones that acquire them, but this is very possibly a desirable goal for the tested system to achieve. The property can be specified as a parametric context-free pattern: the pattern specifies balanced sequences of nested begin/end and acquire/release pairs for each Thread and Lock ($\epsilon$ is the empty sequence). Here begin and end refer to the beginning and end of every method in the program (except the calls to acquire and release themselves). They are parametric in the Thread because it is necessary to associate the beginning and end of methods if and only if they belong to the same Thread. The acquire and release events are also parametric in the Thread for the same reason. They are, additionally, parametric in the Lock because we do not wish to associate the releases of one Lock with the acquires of another. In the extreme, it is possible that there will be an instance of the property, i.e., a copy of the context-free parsing mechanism, for each possible combination of Thread with Lock; this happens, again, conceptually, because runtime verification systems may implement the same functionality differently.
The previous two examples show finite-state properties, but properties used in runtime verification may be much more complex. The SafeLock property enforces the policy that the number of acquires and releases of a (reentrant) Lock class be matched within any given method call. This, of course, disallows releasing a Lock in a method other than the one that acquired it, but that may well be a desirable goal for the system under test. The property can be specified using a parametric context-free pattern: the pattern specifies balanced sequences of nested begin/end and acquire/release pairs for each Thread and Lock ($\epsilon$ is the empty sequence). Here, begin and end refer to the beginning and end of every method in the program (except the calls to acquire and release themselves). They are parametric in the Thread because the beginning and end of a method should be associated if and only if they belong to the same Thread. The acquire and release events are also parametric in the Thread for the same reason; they are additionally parametric in the Lock because the releases of one Lock should not be associated with the acquires of another. In the extreme, there may be one instance of the property, i.e., one copy of the context-free parsing mechanism, for each possible combination of Thread and Lock; again, this is only the intuition, because runtime verification systems may implement the same functionality differently. For example, if a system has Threads $t_1$, $t_2$, and $t_3$ with Locks $l_1$ and $l_2$, then property instances may need to be maintained for the pairs $\langle t_1, l_1 \rangle$, $\langle t_1, l_2 \rangle$, $\langle t_2, l_1 \rangle$, $\langle t_2, l_2 \rangle$, $\langle t_3, l_1 \rangle$, and $\langle t_3, l_2 \rangle$. This property should be monitored for failures to match the pattern, because the pattern specifies correct behavior. The figure to the right shows a trace that produces two violations of this property. The steps down in the figure represent the beginning of a method, while the steps up are the end; the grey arrows show the matching between acquires and releases of the same Lock. For simplicity, the trace shows only one Thread and one Lock.
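A SafeLock monitor needs more than finite state: for each (Thread, Lock) pair it must remember the lock-hold depth at every method entry so it can compare it against the depth at the matching exit. The sketch below is an illustrative Java implementation of that idea; the class and method names are invented, and real tools generate such monitors from the pattern rather than writing them by hand:

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashMap;
import java.util.Map;

// Illustrative SafeLock monitor: per (thread, lock) pair, track the current
// hold count; on each method begin, push the count so it can be checked
// against the count at the matching method end.
public class SafeLockMonitor {
    private final Map<String, Integer> holdCount = new HashMap<>();
    private final Map<String, Deque<Integer>> entryCounts = new HashMap<>();

    private String key(long threadId, Object lock) {
        return threadId + "/" + System.identityHashCode(lock);
    }

    public void onAcquire(long threadId, Object lock) {
        holdCount.merge(key(threadId, lock), 1, Integer::sum);
    }

    public void onRelease(long threadId, Object lock) {
        holdCount.merge(key(threadId, lock), -1, Integer::sum);
    }

    // begin event: remember the hold count at method entry
    public void onMethodBegin(long threadId, Object lock) {
        String k = key(threadId, lock);
        entryCounts.computeIfAbsent(k, x -> new ArrayDeque<>())
                   .push(holdCount.getOrDefault(k, 0));
    }

    // end event: the hold count must equal what it was at the matching begin
    public void onMethodEnd(long threadId, Object lock) {
        String k = key(threadId, lock);
        Deque<Integer> stack = entryCounts.get(k);
        int atEntry = (stack == null || stack.isEmpty()) ? 0 : stack.pop();
        if (holdCount.getOrDefault(k, 0) != atEntry) {
            System.err.println("SafeLock violated: acquires/releases unmatched in method");
        }
    }
}
```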
Most runtime verification research addresses one or more of the following topics.
Observing an executing system typically incurs some runtime overhead (hardware monitors can be an exception). It is important to reduce the overhead of runtime verification tools as much as possible, particularly when the generated monitors are deployed with the system, and a number of overhead-reduction techniques have been developed.
One of the major practical impediments of all formal approaches is that users are reluctant to, or do not know how to, read or write specifications. In some cases the specifications are implicit, such as those for deadlocks and data races, but in most cases they must be produced. An additional inconvenience, particularly in the context of runtime verification, is that many existing specification languages are not expressive enough to capture the intended properties.
The capability of a runtime verifier to detect errors depends strictly on its capability to analyze execution traces. When monitors are deployed with the system, instrumentation is typically minimal and the execution traces are kept as simple as possible so that the runtime overhead stays low. When runtime verification is used for testing, one can afford more comprehensive instrumentation that augments events with important system information, which the monitors can use to construct and analyze more refined models of the executing system. For example, augmenting events with Vector clock information and with data-flow and control-flow information allows the monitors to construct a causal model of the running system in which the observed execution was only one possible instance. Any other permutation of events that is consistent with the model is a feasible execution of the system, which could happen under a different thread interleaving. Detecting property violations in such inferred executions (by monitoring them) lets the monitor predict errors that did not happen in the observed execution but can happen in another execution of the same system. An important research challenge is to extract models from execution traces that comprise as many other execution traces as possible.
Unlike testing or exhaustive verification, runtime verification holds the promise of allowing the system to recover from detected violations, through reconfiguration, micro-resets, or finer intervention mechanisms sometimes referred to as tuning or steering. Implementing these techniques within the rigorous framework of runtime verification gives rise to additional challenges.
Researchers in runtime verification recognized the potential of aspect-oriented programming as a technique for defining program instrumentation in a modular way. Aspect-oriented programming (AOP) generally promotes the modularization of crosscutting concerns; runtime verification is naturally one such concern and can hence benefit from certain properties of AOP. Aspect-oriented monitor definitions are largely declarative and hence tend to be simpler to reason about than instrumentation expressed through a program transformation written in an imperative programming language. Further, static analyses can reason about monitoring aspects more easily than about other forms of program instrumentation, as all instrumentation is contained within a single aspect. Many current runtime verification tools are hence built in the form of specification compilers that take an expressive high-level specification as input and produce as output code written in some aspect-oriented programming language (such as AspectJ).
Runtime verification, if used in combination with provably correct recovery code, can provide an invaluable infrastructure for program verification and can significantly lower the latter's complexity. For example, formally verifying a heap-sort algorithm is very challenging. A less challenging technique is to monitor its output for sortedness (a linear-complexity monitor) and, if the output is not sorted, to sort it using some easily verifiable procedure, say insertion sort. The resulting sorting program is now more easily verifiable: the only thing required of heap sort is that it does not destroy the original elements regarded as a multiset, which is much easier to prove. Looking from the other direction, one can use formal verification to reduce the overhead of runtime verification, as already mentioned above in the case of static analysis. Indeed, one can start with a fully runtime-verified, but probably slow, program, and then use formal verification (or static analysis) to discharge monitors, in the same way a compiler uses static analysis to discharge runtime checks of type correctness or memory safety.
Compared to more traditional verification approaches, an immediate disadvantage of runtime verification is its reduced coverage. This is not problematic when the runtime monitors are deployed with the system (together with appropriate recovery code to be executed when the property is violated), but it may limit the effectiveness of runtime verification when used to find errors in systems. Several techniques have been proposed to increase the coverage of runtime verification for error-detection purposes.
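The heap-sort example above reduces to a few lines of code. Here is a minimal Java sketch of the idea, assuming some hand-written heapSort implementation exists elsewhere (a standard-library call stands in for it below); the monitor and the verified fallback are the point:

```java
import java.util.Arrays;

public class MonitoredSort {
    // Sort with a complex, unverified algorithm, monitor the result, and
    // fall back to an easily verifiable procedure if the monitor fires.
    public static int[] sort(int[] input) {
        int[] result = heapSort(input.clone()); // hard to verify formally
        if (!isSorted(result)) {                // linear-complexity monitor
            insertionSort(result);              // simple, easy to verify
        }
        return result;
    }

    private static boolean isSorted(int[] a) {
        for (int i = 1; i < a.length; i++) {
            if (a[i - 1] > a[i]) return false;
        }
        return true;
    }

    private static void insertionSort(int[] a) {
        for (int i = 1; i < a.length; i++) {
            int key = a[i], j = i - 1;
            while (j >= 0 && a[j] > key) {
                a[j + 1] = a[j];
                j--;
            }
            a[j + 1] = key;
        }
    }

    // Placeholder for the unverified implementation under scrutiny.
    private static int[] heapSort(int[] a) {
        Arrays.sort(a); // imagine a hand-written heap sort here
        return a;
    }
}
```

As the text notes, the remaining proof obligation on heapSort is only that it preserves the input as a multiset, which is much easier than proving full functional correctness.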
https://en.wikipedia.org/wiki/Runtime_verification
Connascence is a software design metric introduced by Meilir Page-Jones that quantifies the degree and type of dependency between software components, evaluating their strength (difficulty of change) and locality (proximity in the codebase). It can be categorized as static (analyzable at compile time) or dynamic (detectable at runtime) and includes forms such as connascence of name, type, and position, each representing different dependency characteristics and levels of fragility.[1][2]
Coupling describes the degree and nature of dependency between software components, focusing on what they share (e.g., data, control flow, technology) and how tightly they are bound. It evaluates two key dimensions: strength, which measures how difficult it is to change the dependency, and scope (or visibility), which indicates how widely the dependency is exposed across modules or boundaries. Traditional coupling types typically include content coupling, common coupling, control coupling, stamp coupling, external coupling, and data coupling.[1][3][2]
Connascence, introduced by Meilir Page-Jones, provides a systematic framework for analyzing and measuring coupling dependencies. It evaluates dependencies along three dimensions: strength, which measures the effort required to refactor or modify the dependency; locality, which considers how physically or logically close dependent components are in the codebase; and degree, which measures how many components are affected by the dependency. Connascence can be categorized into static (detectable at compile time) and dynamic (detectable at runtime) forms. Static connascence refers to compile-time dependencies, such as method signatures, while dynamic connascence refers to runtime dependencies, which can manifest in forms like connascence of timing, values, or algorithm.[1][3][2]
Each coupling flavor can exhibit multiple types of connascence, a specific type, or, in rare cases, none at all, depending on how the dependency is implemented. Common types of connascence include connascence of name, type, position, and meaning. Certain coupling types naturally align with specific connascence types; for example, data coupling often involves connascence of name or type. However, not every combination of coupling and connascence is practically meaningful. Dependencies relying on parameter order in a method signature demonstrate connascence of position, which is fragile and difficult to refactor because reordering parameters breaks the interface. In contrast, connascence of name, which relies on field or parameter names, is generally more resilient to change (this contrast is illustrated in the sketch below). Connascence types themselves exhibit a natural hierarchy of strength, with connascence of name typically considered weaker than connascence of meaning.[1][3][2]
Dependencies spanning module boundaries or distributed systems typically have higher coordination costs, increasing the difficulty of refactoring and propagating changes across distant boundaries. Modern practices, such as dependency injection and interface-based programming, are often employed to reduce coupling strength and improve the maintainability of dependencies.[1][3][2]
While coupling identifies what is shared between components, connascence evaluates how those dependencies behave, how changes propagate, and how difficult they are to refactor. Strength, locality, and degree are interrelated; dependencies with high strength, wide scope, and spanning distant boundaries are significantly harder to refactor and maintain.
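To make the contrast concrete, here is a small, hypothetical Java sketch: the first API exhibits connascence of position (callers must know the argument order, and swapping two same-typed arguments still compiles), while the second replaces it with connascence of name via a builder, so callers name each value and ordering is harmless. All class and field names are invented for the illustration:

```java
// Connascence of position: callers must remember that the first String is
// the given name and the second the family name; swapping them compiles.
record PersonPositional(String givenName, String familyName, int age) {}

// Connascence of name: a builder makes each value explicit at the call site.
class Person {
    private final String givenName;
    private final String familyName;
    private final int age;

    private Person(Builder b) {
        this.givenName = b.givenName;
        this.familyName = b.familyName;
        this.age = b.age;
    }

    static class Builder {
        private String givenName;
        private String familyName;
        private int age;

        Builder givenName(String v) { this.givenName = v; return this; }
        Builder familyName(String v) { this.familyName = v; return this; }
        Builder age(int v) { this.age = v; return this; }
        Person build() { return new Person(this); }
    }
}

class Demo {
    void example() {
        // Positional: new PersonPositional("Page-Jones", "Meilir", 50)
        // would silently swap the names.
        PersonPositional p1 = new PersonPositional("Meilir", "Page-Jones", 50);

        // Named: the dependency is on names, not on order.
        Person p2 = new Person.Builder()
                .familyName("Page-Jones")
                .givenName("Meilir")
                .age(50)
                .build();
    }
}
```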
Together, coupling provides a high-level overview of dependency relationships, while connascence offers a granular framework for analyzing dependency strength, locality, degree, and resilience to change, supporting the design of maintainable and robust systems.[1][3][2]
https://en.wikipedia.org/wiki/Connascence_(computer_science)
PassMap (/ˈpæsmæp/) is a map-based graphical password method of authentication, similar to passwords, proposed by National Tsing Hua University researchers. The word PassMap originates from the word password by substituting word with map. PassMap was proposed by National Tsing Hua University researchers Hung-Min Sun, Yao-Hsin Chen, Chiung-Cheng Fang, and Shih-Ying Chang at the 7th Association for Computing Machinery Symposium on Information, Computer and Communications Security. They defined PassMap as letting a consumer be authenticated by choosing a series of points on a big world map. Their study showed that PassMap passwords are more user-friendly and memorable for people.[1]
Users are shown Google Maps on their screen, through which they can zoom in to choose any two points they want as their PassMap password. Since PassMap uses Google Maps, it cannot be used in applications that lack Internet access or Google Maps integration.[2] By default, PassMap's screen is set to the eighth zoom level and is centered on Taiwan. PassMap places no constraints on the zoom level, so users are allowed to select points at less safe, lower levels, such as level 8. It does not normalize error tolerance based on a screen's zoom position.[3] PassMap's effective login percentage is 92.59%.[4]
Ritika Sachdev wrote in the International Journal of Pure and Applied Research in Engineering and Technology that, based on psychological studies, people can effortlessly recall the milestones they have visited. Sachdev called PassMap a "highly subjective or customized based password to ensure security".[5]
S. Rajarajan, M. Prabhu, and S. Palanivel praised PassMap for having "good memorability due to the usage of map for the password mechanism", but they noted that, like many graphical passwords, PassMap is susceptible to a shoulder surfing intrusion.[2]
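The sources above do not specify PassMap's internal encoding, but the basic mechanics (two chosen map points compared against a stored secret with some tolerance for click error) can be sketched. The following hypothetical Java fragment snaps each point to a fixed-size grid cell before comparing, one simple way such a scheme could tolerate small click errors; all names and the cell size are invented, and this is not the authors' actual algorithm:

```java
// Hypothetical sketch: verify two chosen map points against a stored secret
// by snapping each point to a fixed-size grid cell (crude error tolerance).
public class PassMapSketch {
    private static final double CELL_DEGREES = 0.01; // invented tolerance

    record Point(double lat, double lon) {}

    private static long[] quantize(Point p) {
        return new long[] {
            Math.round(p.lat / CELL_DEGREES),
            Math.round(p.lon / CELL_DEGREES)
        };
    }

    // True if both chosen points fall in the same grid cells as the secret.
    public static boolean verify(Point a, Point b, Point secretA, Point secretB) {
        return java.util.Arrays.equals(quantize(a), quantize(secretA))
            && java.util.Arrays.equals(quantize(b), quantize(secretB));
    }

    public static void main(String[] args) {
        Point s1 = new Point(24.7961, 120.9967); // e.g., a campus location
        Point s2 = new Point(25.0330, 121.5654); // e.g., a city landmark
        // A click 0.001 degrees off still lands in the same cell.
        System.out.println(verify(new Point(24.7965, 120.9969), s2, s1, s2));
    }
}
```

A real implementation would store only a hash of the quantized cells rather than raw coordinates, and, as noted above, PassMap itself does not normalize this tolerance across zoom levels.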
https://en.wikipedia.org/wiki/PassMap
Real-time locating systems (RTLS), also known as real-time tracking systems, are used to automatically identify and track the location of objects or people in real time, usually within a building or other contained area. Wireless RTLS tags are attached to objects or worn by people, and in most RTLS, fixed reference points receive wireless signals from tags to determine their location.[1] Examples of real-time locating systems include tracking automobiles through an assembly line, locating pallets of merchandise in a warehouse, or finding medical equipment in a hospital.
The physical layer of RTLS technology is often radio frequency (RF) communication. Some systems use optical (usually infrared) or acoustic (usually ultrasound) technology with, or in place of, RF. RTLS tags and fixed reference points can be transmitters, receivers, or both, resulting in numerous possible technology combinations. RTLS are a form of local positioning system and do not usually refer to GPS or to mobile phone tracking. Location information usually does not include speed, direction, or spatial orientation.
The term RTLS was created (circa 1998) at the ID EXPO trade show by Tim Harrington (WhereNet), Jay Werb (PinPoint), and Bert Moore (Automatic Identification Manufacturers, Inc., AIM). It was created to describe and differentiate an emerging technology that not only provided the automatic identification capabilities of active RFID tags, but also added the ability to view the location on a computer screen. It was at this show that the first examples of a commercial radio-based RTLS were shown by PinPoint and WhereNet. Although this capability had been used previously by military and government agencies, the technology had been too expensive for commercial purposes. In the early 1990s, the first commercial RTLS were installed at three healthcare facilities in the United States and were based on the transmission and decoding of infrared light signals from actively transmitting tags. Since then, new technology has emerged that also enables RTLS to be applied to passive-tag applications.
RTLS are generally used in indoor and/or confined areas, such as buildings, and do not provide global coverage like GPS. RTLS tags are affixed to mobile items, such as equipment or personnel, to be tracked or managed. RTLS reference points, which can be either transmitters or receivers, are spaced throughout a building (or similar area of interest) to provide the desired tag coverage. In most cases, the more RTLS reference points are installed, the better the location accuracy, until the limitations of the technology are reached.
A number of disparate system designs are all referred to as "real-time locating systems". Two primary system design elements are locating at choke points and locating in relative coordinates. The simplest form of choke-point locating is where short-range ID signals from a moving tag are received by a single fixed reader in a sensory network, thus indicating the location coincidence of reader and tag. Alternatively, a choke-point identifier can be received by the moving tag and then relayed, usually via a second wireless channel, to a location processor. Accuracy is usually defined by the sphere spanned by the reach of the choke-point transmitter or receiver.
The use of directional antennas, or of technologies such as infrared or ultrasound that are blocked by room partitions, can support choke points of various geometries.[2]
For locating in relative coordinates, ID signals from a tag are received by a multiplicity of readers in a sensory network, and a position is estimated using one or more locating algorithms, such as trilateration, multilateration, or triangulation. Equivalently, ID signals from several RTLS reference points can be received by a tag and relayed back to a location processor. Localization with multiple reference points requires that the distances between reference points in the sensory network be known in order to precisely locate a tag; the determination of those distances is called ranging. Another way to calculate relative location is via mobile tags communicating with one another and relaying this information to a location processor.
RF trilateration uses estimated ranges from multiple receivers to estimate the location of a tag, while RF triangulation uses the angles at which the RF signals arrive at multiple receivers (a worked trilateration sketch appears below). Many obstructions, such as walls or furniture, can distort the estimated range and angle readings, leading to varying quality of the location estimate. Estimation-based locating is often quoted as an accuracy at a given range, such as 90% accuracy at a 10-meter range. Some systems use locating technologies that cannot pass through walls, such as infrared or ultrasound; these require line of sight (or near line of sight) to communicate properly and, as a result, tend to be more accurate in indoor environments.
RTLS can be used in numerous logistical and operational areas. However, RTLS may be seen as a threat to privacy when used to determine the location of people. The newly declared human right of informational self-determination gives the right to prevent one's identity and personal data from being disclosed to others and also covers the disclosure of locality, though this does not generally apply to the workplace. Several prominent labor unions have spoken out against the use of RTLS to track workers, calling it "the beginning of Big Brother" and "an invasion of privacy".[5]
Current location-tracking technologies can be used to pinpoint users of mobile devices in several ways. First, service providers have access to network-based and handset-based technologies that can locate a phone for emergency purposes. Second, historical location can frequently be discerned from service-provider records. Third, other devices such as Wi-Fi hotspots or IMSI catchers can be used to track nearby mobile devices in real time. Finally, hybrid positioning systems combine different methods in an attempt to overcome each individual method's shortcomings.[6]
There is a wide variety of system concepts and designs that provide real-time locating.[7] A general model for selecting the best solution to a locating problem has been constructed at the Radboud University of Nijmegen.[19] Many of these references do not comply with the definitions given in international standardization in ISO/IEC 19762-5[20] and ISO/IEC 24730-1;[21] however, some aspects of real-time performance are served, and aspects of locating are addressed in the context of absolute coordinates. Depending on the physical technology used, at least one, and often some combination, of ranging and angulating methods is used to determine location.
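As an illustration of range-based locating, the following Java sketch estimates a 2D tag position from three anchor points and measured distances by subtracting the circle equations pairwise, which linearizes the problem into a 2x2 linear system. The anchor coordinates and ranges are invented example values; real systems use more anchors and a least-squares fit to combat measurement noise:

```java
// Illustrative 2D trilateration: solve for (x, y) from three anchors
// (x_i, y_i) and measured ranges d_i by subtracting circle equations.
public class Trilateration2D {
    public static double[] locate(double[] x, double[] y, double[] d) {
        // Linear system A * [px, py]^T = b, from circles 1 and 2 minus circle 0.
        double a11 = 2 * (x[1] - x[0]), a12 = 2 * (y[1] - y[0]);
        double a21 = 2 * (x[2] - x[0]), a22 = 2 * (y[2] - y[0]);
        double b1 = d[0]*d[0] - d[1]*d[1] + x[1]*x[1] - x[0]*x[0] + y[1]*y[1] - y[0]*y[0];
        double b2 = d[0]*d[0] - d[2]*d[2] + x[2]*x[2] - x[0]*x[0] + y[2]*y[2] - y[0]*y[0];
        double det = a11 * a22 - a12 * a21;   // anchors must not be collinear
        return new double[] { (b1 * a22 - b2 * a12) / det,
                              (a11 * b2 - a21 * b1) / det };
    }

    public static void main(String[] args) {
        double[] x = {0, 10, 0}, y = {0, 0, 10};                    // three anchors
        double[] d = {Math.sqrt(18), Math.sqrt(58), Math.sqrt(58)}; // tag at (3, 3)
        double[] p = locate(x, y, d);
        System.out.printf("tag at (%.2f, %.2f)%n", p[0], p[1]);
    }
}
```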
Real-time locating is affected by a variety of errors, many of which relate to the physics of the locating system and cannot be reduced by improving the technical equipment.
Many RTLS require a direct, clear line of sight between mobile tags and fixed nodes; where there is no visibility, the locating engine returns no result or an invalid one. This applies to satellite locating as well as to other RTLS approaches such as angle of arrival and time of arrival. Fingerprinting is a way to overcome the visibility issue: if the locations in the tracking area have distinct measurement fingerprints, line of sight is not necessarily needed. For example, if each location contains a unique combination of signal-strength readings from transmitters, the location system will function properly. This is the case, for example, with some Wi-Fi based RTLS solutions. However, having distinct signal-strength fingerprints in each location typically requires a fairly high density of transmitters.
The measured location may appear entirely faulty. This is generally a result of overly simple operational models that fail to compensate for the plurality of error sources; a proper location cannot be reported if the errors are ignored.
Real time is not a registered or protected term and implies no inherent quality; a wide variety of offerings is sold under it. Since motion causes location changes, the latency needed to compute a new location may become dominant relative to the motion. Either an RTLS that forces waiting for new results is not worth the money, or an operational concept demanding faster location updates does not match the chosen system's approach.
Location will never be reported exactly: the terms real-time and precision directly contradict each other in measurement theory, just as precision and cost contradict each other economically. This does not rule out precision, but limitations at higher speed are inevitable.
A reported location that is steadily offset from the physical position generally indicates insufficient over-determination and missing visibility along at least one link from fixed anchors to mobile transponders. The effect can also be caused by insufficient compensation for calibration needs.
Noise from various sources has an erratic influence on the stability of results, and attempting to present a steady result increases latency, contradicting real-time requirements. Since physical objects cannot jump instantaneously, such reported effects mostly lie beyond physical reality: jumps in the reported location that are not visible on the object itself generally indicate improper modeling in the location engine, caused by the changing dominance of various secondary responses.
Stationary objects may be reported as moving when the measurements are increasingly biased over time by secondary-path reflections. This effect is caused by simple averaging and indicates insufficient discrimination of first echoes.
The basic issues of RTLS are standardized by the International Organization for Standardization and the International Electrotechnical Commission under the ISO/IEC 24730 series. In this series, the base standard ISO/IEC 24730-1 identifies the terms describing a form of RTLS used by a set of vendors, but it does not encompass the full scope of RTLS technology. Several standards in the series have been published. These standards do not stipulate any particular method of computing or measuring locations; such methods may be defined in specifications for trilateration, triangulation, or hybrid approaches to trigonometric computing for planar or spherical models of a terrestrial area.
In RTLS applications in the healthcare industry, various studies have discussed the limitations of currently adopted RTLS. The currently used technologies (RFID, Wi-Fi, UWB), all radio-based, can be hazardous in the sense of interfering with sensitive equipment. A study by Dr. Erik Jan van Lieshout of the Academic Medical Centre of the University of Amsterdam, published in JAMA (the Journal of the American Medical Association),[24] claimed that "RFID and UWB could shut down equipment patients rely on", as "RFID caused interference in 34 of the 123 tests they performed". The first Bluetooth RTLS provider in the medical industry supports this in its article: "The fact that RFID cannot be used near sensitive equipment should in itself be a red flag to the medical industry". The RFID Journal responded to the study, not negating it but explaining a real-case solution: "The Purdue study showed no effect when ultrahigh-frequency (UHF) systems were kept at a reasonable distance from medical equipment. So placing readers in utility rooms, near elevators and above doors between hospital wings or departments to track assets is not a problem".[25] However, "keeping at a reasonable distance" may remain an open question for RTLS adopters and providers in medical facilities.
In many applications it is difficult, yet important, to make a proper choice among the various communication technologies (e.g., RFID, Wi-Fi) an RTLS may include. Wrong design decisions made at an early stage can lead to catastrophic results for the system and a significant loss of money for fixing and redesign. To address this problem, a special methodology for RTLS design-space exploration was developed, combining modelling, requirements specification, and verification into a single efficient process.[26]
https://en.wikipedia.org/wiki/Real-time_locating
Digital anthropology is the anthropological study of the relationship between humans and digital-era technology. The field is new and thus goes by a variety of names with a variety of emphases, including techno-anthropology,[1] digital ethnography, cyberanthropology,[2] and virtual anthropology.[3]
Most anthropologists who use the phrase "digital anthropology" are specifically referring to online and Internet technology. The study of humans' relationship to a broader range of technology may fall under other subfields of anthropological study, such as cyborg anthropology.
The Digital Anthropology Group (DANG) is classified as an interest group in the American Anthropological Association. DANG's mission includes promoting the use of digital technology as a tool of anthropological research, encouraging anthropologists to share research using digital platforms, and outlining ways for anthropologists to study digital communities.
Cyberspace, or the "virtual world", can itself serve as a "field" site for anthropologists, allowing the observation, analysis, and interpretation of the sociocultural phenomena springing up and taking place in any interactive space. National and transnational communities, enabled by digital technology, establish a set of social norms, practices, traditions, storied history and associated collective memory,[4] migration periods, internal and external conflicts, potentially subconscious language features[5][6] and memetic dialects comparable to those of traditional, geographically confined communities. This includes the various communities built around free and open-source software, online platforms such as Facebook, Twitter/X, Instagram, 4chan and Reddit and their respective sub-sites, and politically motivated groups like Anonymous, WikiLeaks, or the Occupy movement.[7]
A number of academic anthropologists have conducted traditional ethnographies of virtual worlds, such as Bonnie Nardi's study of World of Warcraft[8] or Tom Boellstorff's study of Second Life.[9] Academic Gabriella Coleman has done ethnographic work on the Debian software community[10] and the Anonymous hacktivist network.[11] Theorist Nancy Mauro-Flude conducts ethnographic fieldwork on computing arts and computer subcultures such as systerserver.net, part of the communities of feminist web servers[12] and the FeministInternet network.[13] Eitan Y. Wilf[14] examines the intersection of artists' creativity with digital technology and artificial intelligence.[15] Yongming Zhou studied how the internet is used in China to participate in politics.[16] Eve M. Zucker and colleagues study the shift to digital memorialization of mass atrocities and the emergent role of artificial intelligence in these processes.[4][17] Victoria Bernal conducted ethnographic research on the themes of nationalism and citizenship among Eritreans participating in online political engagement with their homeland.[18]
Anthropological research can help designers adapt and improve technology. Australian anthropologist Genevieve Bell did extensive user-experience research at Intel that informed the company's approach to its technology, users, and market.[19]
Many digital anthropologists who study online communities use traditional methods of anthropological research. They participate in online communities in order to learn about their customs and worldviews, and back their observations with private interviews, historical research, and quantitative data. Their product is an ethnography, a qualitative description of their experience and analyses.
Other anthropologists and social scientists have conducted research that emphasizes data gathered by websites and servers, although academics often have trouble accessing user data on the same scale as social media corporations like Facebook and data-mining companies like Acxiom.
In terms of method, there is disagreement over whether it is possible to conduct research exclusively online or whether research is only complete when the subjects are studied holistically, both online and offline. Tom Boellstorff, who conducted three years of research as an avatar in the virtual world Second Life, defends the first approach, stating that it is not just possible but necessary to engage with subjects "in their own terms".[20][citation needed][21] Others, such as Daniel Miller, have argued that ethnographic research should not exclude learning about the subject's life outside the internet.[9]
The American Anthropological Association offers an online guide for students using digital technology to store and share data. Data can be uploaded to digital databases to be stored, shared, and interpreted. Text- and numerical-analysis software can help produce metadata, while a codebook may help organize data.
Online fieldwork offers new ethical challenges. According to the American Anthropological Association's ethics guidelines, anthropologists researching a community must make sure that all members of that community know they are being studied and have access to the data the anthropologist produces. However, many online communities' interactions are publicly available for anyone to read and may be preserved online for years. Digital anthropologists debate the extent to which lurking in online communities and sifting through public archives is ethical.[22]
The Association also asserts that anthropologists' ability to collect and store data at all is "a privilege", and researchers have an ethical duty to store digital data responsibly. This means protecting the identity of participants, sharing data with other anthropologists, and making backup copies of all data.[23]
https://en.wikipedia.org/wiki/Digital_anthropology
In ethics, evasion is an act of deception in which a true statement is irrelevant or leads to a false conclusion. For instance, a man knows that a woman is in a room in the building because he heard her, but in answer to a question as to whether she is present, says "I have not seen her", thereby avoiding both lying and making a revelation. Evasion is described[citation needed] as a way to fulfil an obligation to tell the truth while keeping secrets from those not entitled to know the truth. Evasions are closely related to equivocations and mental reservations; indeed, some statements fall under both descriptions.
Question dodging is a rhetorical technique involving the intentional avoidance of answering a question. This may occur when the person questioned either does not know the answer and wants to avoid embarrassment, or when the person is being interrogated or questioned in debate and wants to avoid giving a direct response.[1]
A famous example of question dodging in a UK context occurred in 1997, when Home Secretary Michael Howard was questioned by Jeremy Paxman on the BBC's Newsnight. While discussing a meeting Howard had had with the head of the Prison Service, Derek Lewis, about the possible dismissal of the head of Parkhurst Prison, Paxman asked Howard, "Did you threaten to overrule him?". Howard dodged the question by saying that he did not overrule him. Paxman repeated the question "Did you threaten to overrule him?" a total of 12 times during the interview, with Howard evading it each time.[2][3]
Overt question dodging can sometimes be employed humorously, in order to sidestep giving a public answer in a political discussion: when a reporter asked Mayor Richard J. Daley why Hubert Humphrey had lost the state of Illinois in the 1968 presidential election, Daley replied, "He lost it because he didn't get enough votes."[4] Similarly, when Larry King asked Vladimir Putin what had happened to the Kursk submarine, Putin answered: "She sank".[5] Often the aim of dodging a question is to make it seem as though the question was answered, leaving the person who asked it feeling satisfied, unaware that the question was not properly addressed.
A false accusation of question dodging can sometimes be made as a disingenuous tactic in debate, as in the informal fallacy of the loaded question. A common way out of this argument is not to answer the question (e.g. with a simple 'yes' or 'no') but to challenge the assumption behind the question; this can lead the person questioned to be accused of "dodging the question".
In the context of political discourse, evasion is a technique of equivocation that is important for face management.[6] Peter Bull identified a number of evasion techniques for answering questions.[7]
https://en.wikipedia.org/wiki/Evasion_(ethics)
Visual temporal attention is a special case of visual attention that involves directing attention to specific instants of time. Like its spatial counterpart, visual spatial attention, these attention modules have been widely implemented in video analytics in computer vision to provide enhanced performance and human-interpretable explanation[3] of deep learning models.
Just as visual spatial attention mechanisms allow human and/or computer vision systems to focus on semantically more substantial regions in space, visual temporal attention modules enable machine learning algorithms to emphasize the more critical video frames in video analytics tasks, such as human action recognition. In convolutional neural network-based systems, the prioritization introduced by the attention mechanism is regularly implemented as a linear weighting layer with parameters determined by labeled training data.[3]
Recent video segmentation algorithms often exploit both spatial and temporal attention mechanisms.[2][4] Research in human action recognition has accelerated significantly since the introduction of powerful tools such as convolutional neural networks (CNNs), but effective methods for incorporating temporal information into CNNs are still being actively explored. Motivated by the popular recurrent attention models in natural language processing, the Attention-aware Temporal Weighted CNN (ATW CNN) was proposed,[4] which embeds a visual attention model into a temporal weighted multi-stream CNN for videos. The attention model is implemented as temporal weighting, and it effectively boosts the recognition performance of video representations. In addition, each stream in the ATW CNN framework is capable of end-to-end training, with both network parameters and temporal weights optimized by stochastic gradient descent (SGD) with back-propagation. Experimental results show that the ATW CNN attention mechanism contributes substantially to the performance gains by focusing on the more relevant, discriminative video segments.
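The temporal-weighting idea described above (a learned score per frame, normalized into weights and used to pool frame features) can be sketched in a few lines. The following Java fragment is an illustrative, framework-free version, not the ATW CNN implementation itself; the frame features and the scoring vector are placeholders for values a real system would learn with SGD, as described:

```java
// Illustrative temporal attention pooling: score each frame feature with a
// learned vector, softmax the scores into weights, and take the weighted sum.
public class TemporalAttention {
    public static double[] pool(double[][] frames, double[] scoreVector) {
        int t = frames.length, d = frames[0].length;
        double[] scores = new double[t];
        for (int i = 0; i < t; i++) {
            for (int j = 0; j < d; j++) scores[i] += frames[i][j] * scoreVector[j];
        }
        // Softmax over time: critical frames receive larger weights.
        double max = Double.NEGATIVE_INFINITY, sum = 0;
        for (double s : scores) max = Math.max(max, s);
        double[] w = new double[t];
        for (int i = 0; i < t; i++) { w[i] = Math.exp(scores[i] - max); sum += w[i]; }
        for (int i = 0; i < t; i++) w[i] /= sum;
        // Weighted temporal pooling of the frame features.
        double[] pooled = new double[d];
        for (int i = 0; i < t; i++)
            for (int j = 0; j < d; j++) pooled[j] += w[i] * frames[i][j];
        return pooled;
    }
}
```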
https://en.wikipedia.org/wiki/Visual_temporal_attention
The eXtensible Access Control Markup Language (XACML) is an XML-based standard markup language for specifying access control policies. The standard, published by OASIS, defines a declarative, fine-grained, attribute-based access control policy language, an architecture, and a processing model describing how to evaluate access requests according to the rules defined in policies.[2]
XACML is primarily an attribute-based access control system. In XACML, attributes (information about the subject accessing a resource, the resource to be addressed, and the environment) act as inputs for the decision of whether access is granted or not.[3] XACML can also be used to implement role-based access control.[4]
In XACML, access control decisions to be taken are expressed as Rules. Each Rule comprises a series of conditions which decide whether a given request is approved or not. If a Rule is applicable to a request but the conditions within the Rule fail to evaluate, the result is Indeterminate. Rules are grouped together in Policies, and a PolicySet contains Policies and possibly other PolicySets. Each of these also includes a Target, a simple condition that determines whether it should be evaluated for a given request. Combining algorithms can be used to combine Rules and Policies with potentially differing results in various ways. XACML also supports obligations and advice expressions. Obligations specify actions which must be executed during the processing of a request, for example for logging. Advice expressions are similar, but may be ignored.[3]
XACML separates access control functionality into several components. Each operating environment in which access control is used has a Policy Enforcement Point (PEP), which implements the functionality to demand authorization and to grant or deny access to resources. PEPs refer to an environment-independent and central Policy Decision Point (PDP), which actually makes the decision on whether access is granted. The PDP refers to policies stored in the Policy Retrieval Point (PRP). Policies are managed through a Policy Administration Point (PAP).[3] In non-normative terminology (which follows RFC 2904, except for the PAP), the PEP is thus the point that intercepts an access request, obtains the decision (i.e. access to the resource is approved or rejected), and acts on the received decision.
Version 1.0 was ratified by the OASIS standards organization in 2003.[citation needed] Version 2.0 was ratified by OASIS on February 1, 2005.[citation needed] Version 3.0 was ratified by OASIS in January 2013.[5]
XACML is structured into three levels of elements: policy sets, policies, and rules. A policy set can contain any number of policy elements and policy set elements, and a policy can contain any number of rule elements. Policies, policy sets, rules and requests all use subjects, resources, environments, and actions.
XACML provides a target, which is basically a set of simplified conditions for the subject, resource, and action that must be met for a policy set, policy, or rule to apply to a given request. Once a policy or policy set is found to apply to a given request, its rules are evaluated to determine the access decision and response. In addition to being a way to check applicability, target information also provides a way to index policies, which is useful if many policies need to be stored and then quickly sifted through to find which ones apply.
When a request to access a service arrives, the PDP will know where to look for policies that might apply to it, because policies are indexed based on their target constraints. Note that a target may also specify that it applies to any request. Policy sets, policies and rules can all contain target elements.
Conditions only exist in rules. Conditions are essentially an advanced form of target which can use a broader range of functions and, more importantly, can be used to compare two or more attributes together, e.g. subject-id == doctor-id. With conditions, it is possible to implement segregation-of-duty checks or relationship-based access control.
Within XACML, a concept called obligations can be used. An obligation is a directive from the policy decision point (PDP) to the policy enforcement point (PEP) on what must be carried out before or after an access is approved. If the PEP is unable to comply with the directive, the approved access may, or must, not be realized. The augmentation of obligations eliminates a gap between formal requirements and policy enforcement. Obligations can be an effective way to meet formal requirements (non-repudiation, for example) that can be hard to implement as access control rules. Furthermore, any such formal requirements become part of the access control policy as obligations rather than as separate functions, which makes policies consistent and centralization of the IT environment easier to achieve. Obligations can be used for "break-the-glass" scenarios or trust elevation ("you cannot transfer $1,000 without two-factor authentication - here is the link to the 2FA page"). In addition to obligations, XACML supports advice, which is identical to obligations except that a PEP is not obligated to enforce it (hence its name).
What happens in XACML if two rules (or policies) contradict each other? Imagine, for instance, a first rule that says managers can view documents and a second rule that says no one can work before 9am. What if the request is about Alice trying to view a document at 8am? Which rule wins? This is what combining algorithms determine: they help resolve conflicts. XACML defines a number of combining algorithms that can be identified by a RuleCombiningAlgId or PolicyCombiningAlgId attribute of the <Policy> or <PolicySet> elements, respectively. The rule-combining algorithm defines a procedure for arriving at an access decision given the individual results of evaluating a set of rules; similarly, the policy-combining algorithm defines a procedure for arriving at an access decision given the individual results of evaluating a set of policies.
XACML defines a long list of functions (close to 300) to manipulate and compare attributes with other attributes and values; the functions and their identifiers are fully described in the standard. Functions are type-specific, i.e., there is a function for string equality and a different one for integer equality. Higher-order functions are also defined; refer to the standard for their formal definitions. The XACML 3.0 core schema is published at http://docs.oasis-open.org/xacml/3.0/xacml-core-v3-schema-wd-17.xsd
XACML 3.0 introduces administrative delegation, the JSON Profile of XACML (request/response), the REST Profile of XACML, the Multiple Decision Profile of XACML, and more.
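To illustrate how a combining algorithm resolves the conflict described above, here is a minimal Java sketch of deny-overrides, one of the standard XACML combining algorithms: if any rule evaluates to Deny, the combined result is Deny; otherwise a Permit wins over NotApplicable. This is a simplified model of the decision logic (real deny-overrides distinguishes several Indeterminate sub-cases), not an excerpt from any actual PDP:

```java
import java.util.List;

// Simplified model of XACML's deny-overrides combining algorithm.
public class DenyOverrides {
    enum Decision { PERMIT, DENY, NOT_APPLICABLE, INDETERMINATE }

    static Decision combine(List<Decision> ruleResults) {
        boolean sawPermit = false, sawIndeterminate = false;
        for (Decision d : ruleResults) {
            if (d == Decision.DENY) return Decision.DENY;   // deny always wins
            if (d == Decision.PERMIT) sawPermit = true;
            if (d == Decision.INDETERMINATE) sawIndeterminate = true;
        }
        if (sawIndeterminate) return Decision.INDETERMINATE; // simplified handling
        return sawPermit ? Decision.PERMIT : Decision.NOT_APPLICABLE;
    }

    public static void main(String[] args) {
        // "Managers can view documents" permits; "no one works before 9am" denies:
        System.out.println(combine(List.of(Decision.PERMIT, Decision.DENY))); // DENY
    }
}
```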
The implementation of delegation is new in XACML 3.0. The delegation mechanism is used to support the decentralized administration of access policies: it allows an authority (the delegator) to delegate all or part of its own authority, or someone else's authority, to another user (the delegate) without any need to modify the root policy. This is because, in this delegation model, the delegation rights are separated from the access rights; they are instead referred to as administrative control policies. Access control and administrative policies work together, as in the following scenario: a partnership of companies offers many services protected by an access control system, which implements central rules to protect its resources and to allow delegation (attributes can be fetched from an external source, e.g. an LDAP catalog). When a consultant enters the corporation, a delegation can be issued locally by the consultant's supervisor, authorizing the consultant to access systems directly. The delegator (the supervisor in this scenario) may only have the right to delegate a limited set of access rights to consultants.
Other new features of XACML 3.0 are listed at http://www.webfarmr.eu/2010/07/enhancements-and-new-features-in-xacml-3-axiomatics/ and the XACML TC publishes a list of changes at http://wiki.oasis-open.org/xacml/DifferencesBetweenXACML2.0AndXACML3.0
One example rule implements the use-it-lose-it access control paradigm: if a user does not log in for 30 days, they lose access. In pseudo-code: deny if currentDateTime > lastLogin + 30 days. Another example rule grants access if the current time is after 9am and before 5pm. A further example contains an Obligation block; obligations are statements that can be returned along with a decision to enrich the decision flow, and in that example the PEP must log that access was granted.
By default, a PDP processes a single request at a time, e.g. "Can Alice view item #1?", and replies with a single decision. At times, though, it is necessary to send multiple requests in one go, e.g. "Can Alice view / edit / delete items #1, #2, #3?". The Multiple Decision Profile (MDP) of XACML allows for this use case. The PDP will typically form the product of all combinations, i.e., in the aforementioned example, 1 x 3 x 3 = 9 decisions are returned in a single response. The way to enable the MDP is to send an array of objects for any of the categories rather than an array of one object (or simply an object): for instance, if AccessSubject is an object but Resource is an array of objects, the latter will trigger MDP processing in PDPs that support the profile. The IncludeInResult attribute tells the PDP to return a XACML attribute and its value in the response, so that decisions can be correlated with the relevant attribute values.
In 2013 and 2014, the XACML Technical Committee focused on designing new profiles to facilitate developer integration, and all three new profiles were showcased at the Cloud Identity Summit 2014 in Monterey, California. Using these profiles, integrating fine-grained authorization into applications becomes much easier. ALFA, the Abbreviated Language for Authorization, is a lightweight syntax used to implement policy-based access control policies; for examples, refer to the main article. The JSON Profile of XACML simplifies the integration between the PEP and the PDP.
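The use-it-lose-it pseudo-code above translates directly into executable form. The following Java sketch, using java.time, expresses the two example rules as plain predicates of the kind a condition evaluator might apply; it models only the rule logic, not XACML syntax:

```java
import java.time.Duration;
import java.time.Instant;
import java.time.LocalTime;

// The two example rules above, expressed as plain predicates.
public class ExampleRules {
    // deny if currentDateTime > lastLogin + 30 days
    static boolean denyUseItLoseIt(Instant now, Instant lastLogin) {
        return now.isAfter(lastLogin.plus(Duration.ofDays(30)));
    }

    // permit if the current time is after 9am and before 5pm
    static boolean permitBusinessHours(LocalTime now) {
        return now.isAfter(LocalTime.of(9, 0)) && now.isBefore(LocalTime.of(17, 0));
    }

    public static void main(String[] args) {
        Instant lastLogin = Instant.now().minus(Duration.ofDays(45));
        System.out.println(denyUseItLoseIt(Instant.now(), lastLogin)); // true: deny
        System.out.println(permitBusinessHours(LocalTime.of(8, 0)));   // false at 8am
    }
}
```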
XACML is almost entirely a policy definition language based on XML and XSLT, defined by an open OASIS specification. The XACML specification does not cover the design or implementation of Policy Decision Points (PDPs), only the policy language they consume. Many proprietary and open-source PDPs use XACML as their policy definition language.
Open Policy Agent (OPA) is an open-source Policy Decision Point (PDP) implementation capable of interpreting policy language to render policy decisions. OPA is a general-purpose PDP implementation that can be used in any scenario where a policy decision is required, much like PDP implementations that support the XACML specification. OPA's policy definition language is Rego, a JSON-based, Turing-incomplete language based on Datalog. Policies written in XACML can be translated to Rego, and vice versa.
SAML is an identity SSO and federation standard used for authentication, and it serves as a common identity token format between different applications. SAML and XACML are both defined by OASIS, and they were designed to interoperate: SAML is used to carry identity information and virtual identities, while XACML is used to drive the access control logic through policies.
OAuth 2.0 is considered to be an authorization standard, but it differs from XACML in its origin, its purpose, and its applications. OAuth is concerned with delegated access, i.e., a user approving one application's access to resources on their behalf, together with the associated tokens and consent flows; XACML does not handle user approval, delegated access, or password management. XACML instead provides an attribute-based policy language, an architecture, and a request/response processing model. XACML and OAuth can be combined to deliver a more comprehensive approach to authorization.
https://en.wikipedia.org/wiki/XACML
A canary trap is a method for exposing an information leak by giving different versions of a sensitive document to each of several suspects and seeing which version gets leaked. It could be a single false statement, used to see whether sensitive information gets out to other people as well. Special attention is paid to the quality of the prose of the unique language, in the hope that the suspect will repeat it verbatim in the leak, thereby identifying the version of the document.
The term was coined by Tom Clancy in his novel Patriot Games,[1][non-primary source needed] although Clancy did not invent the technique. The actual method (usually referred to as a barium meal test in espionage circles) has been used by intelligence agencies for many years. The fictional character Jack Ryan describes the technique he devised for identifying the sources of leaked classified documents:
Each summary paragraph has six different versions, and the mixture of those paragraphs is unique to each numbered copy of the paper. There are over a thousand possible permutations, but only ninety-six numbered copies of the actual document. The reason the summary paragraphs are so lurid is to entice a reporter to quote them verbatim in the public media. If he quotes something from two or three of those paragraphs, we know which copy he saw and, therefore, who leaked it.
A refinement of this technique uses a thesaurus program to shuffle through synonyms, thus making every copy of the document unique.[2]
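The scheme Ryan describes (several wording variants per paragraph, with the combination of variants identifying the copy) is easy to mechanize. Below is a small, hypothetical Java sketch of the idea; the paragraph variants and copy count are invented, and a real deployment would use many more paragraphs with subtler wording differences:

```java
// Hypothetical canary-trap generator: each copy number selects one variant
// per paragraph; the combination of chosen variants identifies the copy.
public class CanaryTrap {
    // Each row holds interchangeable wordings of the same paragraph.
    static final String[][] VARIANTS = {
        { "The operation begins at dawn.", "The operation commences at first light." },
        { "Funding was approved last week.", "Funding was authorized last week." },
        { "Two assets are already in place.", "Two operatives are already in position." },
    };

    // Build copy N by treating N as a mixed-radix number over the variant counts.
    static String buildCopy(int copyNumber) {
        StringBuilder doc = new StringBuilder();
        int n = copyNumber;
        for (String[] variants : VARIANTS) {
            doc.append(variants[n % variants.length]).append('\n');
            n /= variants.length;
        }
        return doc.toString();
    }

    // Given a leaked verbatim sentence, report which copies contain it.
    static void identify(String leakedSentence, int totalCopies) {
        for (int copy = 0; copy < totalCopies; copy++) {
            if (buildCopy(copy).contains(leakedSentence)) {
                System.out.println("Sentence appears in copy " + copy);
            }
        }
    }

    public static void main(String[] args) {
        identify("The operation commences at first light.", 8);
    }
}
```

With only two variants per paragraph, a single leaked sentence narrows the field to half the copies; with six variants per paragraph and verbatim quotes from two or three paragraphs, as in the novel, the intersection typically narrows to a single numbered copy.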
According to the book Spycatcher[3] by Peter Wright (published in 1987), the technique is standard practice that has been used by MI5 (and other intelligence agencies) for many years under the name "barium meal test", after the medical procedure. A barium meal test is more sophisticated than a canary trap because it is flexible and may take many different forms. The basic premise, however, is to reveal a supposed secret to a suspected enemy (but nobody else) and then monitor whether there is evidence of the fake information being used by the other side. For example, a suspected double agent could be offered some tempting "bait", e.g., be told that important information was stored at a dead drop site. The fake dead drop site could then be checked periodically for signs of disturbance. If the site showed signs of being disturbed (for instance, in order to copy microfilm stored there), this would confirm that the suspect really was an enemy, i.e., a double agent.
The technique of embedding significant information in a hidden form in a medium has been used in many ways, usually classified according to intent.
Following the troubled production of Star Trek: The Motion Picture in the late 1970s, Paramount Pictures effectively replaced Gene Roddenberry as producer of further movies in the franchise with Harve Bennett. Roddenberry was retained as an "executive consultant" because of the high regard the series' fans held him in; while he had little real authority, he was still kept involved in the creative process. The fans often complained about particular plot developments proposed for the films, such as the death of Spock in Star Trek II, that Roddenberry had opposed. So, before any drafts of the screenplay for Star Trek III: The Search for Spock were circulated, Bennett arranged for each individual copy to have subtle clues distinguishing it from the others. Shortly after Roddenberry opposed the destruction of the Enterprise at the climax of that film, fans began to complain to Paramount and Bennett. Bennett found that a leaked copy of the script was the one given to Roddenberry, but he was unable to do anything about it.[5]
After a series of leaks at Tesla Motors in 2008, CEO Elon Musk reportedly sent slightly different versions of an e-mail to each employee in an attempt to reveal potential leakers. The e-mail was disguised as a request to employees to sign a new non-disclosure agreement. The plan was undermined when the company's general counsel forwarded his own unique version of the e-mail with the attached agreement. As a result, Musk's scheme was discovered by employees, who now had a safe copy to leak.[6]
In October 2019, British celebrity Coleen Rooney used a barium meal test to identify who was leaking information from her private Instagram stories to the tabloid newspaper The Sun, by posting fake stories that were blocked from all but one account. When these details appeared in the press, she publicly identified the leaks as coming from the account of Rebekah Vardy, wife of the soccer player Jamie Vardy. The subsequent libel trial became known as the Wagatha Christie case.[7][8]
In December 2020, Andrew Lewer, a Member of Parliament and Parliamentary Private Secretary in the UK government, was fired after a canary trap, in the form of a letter reminding staff not to leak, was published on the website Guido Fawkes.[9]
https://en.wikipedia.org/wiki/Canary_trap
In computer programming, a self-relocating program is a program that relocates its own address-dependent instructions and data when run, and is therefore capable of being loaded into memory at any address.[1][2] In many cases, self-relocating code is also a form of self-modifying code. Self-relocation is similar to the relocation process employed by the linker-loader when a program is copied from external storage into main memory; the difference is that it is the loaded program itself, rather than the loader in the operating system or shell, that performs the relocation.
One form of self-relocation occurs when a program copies the code of its instructions from one sequence of locations to another sequence of locations within the main memory of a single computer, and then transfers processor control from the instructions found at the source locations to the instructions found at the destination locations. As such, the data operated upon by the algorithm of the program is the sequence of bytes that define the program. Static self-relocation typically happens at load time (after the operating system has loaded the software and passed control to it, but still before its initialization has finished), and sometimes also when the program's configuration is changed at a later stage during runtime.[3][4]
As an example, self-relocation is often employed in the early stages of bootstrapping operating systems on architectures like IBM PC compatibles, where lower-level chain boot loaders (like the master boot record (MBR), the volume boot record (VBR) and the initial boot stages of operating systems such as DOS) move themselves out of place in order to load the next stage into memory.
Under CP/M, the debugger Dynamic Debugging Tool (DDT) dynamically relocated itself to the top of available memory through page-boundary relocation in order to maximize the Transient Program Area (TPA) for programs to run in.[5][6] In 1988, the alternative command-line processor ZCPR 3.4 for the Z-System introduced so-called type-4 programs, which were self-relocatable through an embedded stub as well.[7][8][9][10][11]
Under DOS, self-relocation is sometimes also used by more advanced drivers and resident system extensions (RSXs) or terminate-and-stay-resident programs (TSRs) that load themselves "high" into upper memory more effectively than is possible for externally provided "high"-loaders (like LOADHIGH/HILOAD, INSTALLHIGH/HIINSTALL or DEVICEHIGH/HIDEVICE etc.,[12] since DOS 5), in order to maximize the memory available for applications. This is down to the fact that the operating system has no knowledge of the inner workings of a driver to be loaded and thus has to load it into a free memory area large enough to hold the whole driver as a block, including its initialization code, even though that memory would be freed after the initialization. For TSRs, the operating system also has to allocate a Program Segment Prefix (PSP) and an environment segment.[13] This might cause the driver not to be loaded into the most suitable free memory area, or even prevent it from being loaded high at all. In contrast, a self-relocating driver can be loaded anywhere (including into conventional memory) and then relocate only its (typically much smaller) resident portion into a suitable free memory area in upper memory.
In addition, advanced self-relocating TSRs (even if already loaded into upper memory by the operating system) can relocate over most of their own PSP segment and command-line buffer and free their environment segment in order to further reduce the resulting memory footprint and avoid fragmentation.[14] Some self-relocating TSRs can also dynamically change their "nature" and morph into device drivers, even if originally loaded as TSRs, thereby typically also freeing some memory.[4] Finally, it is technically impossible for an external loader to relocate drivers into expanded memory (EMS), the high memory area (HMA) or extended memory (via DPMS or CLOAKING), because these methods require small driver-specific stubs to remain in conventional or upper memory in order to coordinate access to the relocation target area,[15][nb 1][nb 2] and, in the case of device drivers, also because the driver's header must always remain in the first megabyte.[15][13] To achieve this, the drivers must be specially designed to support self-relocation into these areas.[15]
Some advanced DOS drivers also contain both a device driver (which would be loaded at offset +0000h by the operating system) and a TSR (loaded at offset +0100h), sharing a common code portion internally as a fat binary.[13] If the shared code is not designed to be position-independent, it requires some form of internal address fix-up similar to what would otherwise have been carried out by a relocating loader; this is similar to the fix-up stage of self-relocation, but with the code already loaded at the target location by the operating system's loader (instead of done by the driver itself).
IBM DOS/360 did not have the ability to relocate programs during loading. Sometimes multiple versions of a program were maintained, each built for a different load address (partition). A special class of programs, called self-relocating programs, were coded to relocate themselves after loading.[16] IBM OS/360 relocated executable programs when they were loaded into memory; only one copy of the program was required, but once loaded, the program could not be moved (so-called one-time position-independent code).
As an extreme example of (many-time) self-relocation, also called dynamic self-relocation, it is possible to construct a computer program so that it does not stay at a fixed address in memory even as it executes, as used, for example, in worm memory tests.[17][18][19][20] The Apple Worm is a dynamic self-relocator as well.[21]
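Self-relocation is inherently machine-level, but the fix-up step can be illustrated abstractly. The following Java sketch simulates a tiny code image whose absolute address references are patched after the image is copied to a new base address; the instruction words, addresses, and relocation table are entirely invented for the illustration and do not correspond to any real instruction set:

```java
// Toy model of relocation fix-up: a "code image" holds absolute addresses
// valid for its original base; after copying it to a new base, every entry
// listed in the relocation table is adjusted by the base difference.
public class SelfRelocationDemo {
    public static void main(String[] args) {
        int oldBase = 0x0100;   // where the image was built to run
        int newBase = 0x9000;   // where it is moved at run time

        // Image words; entries 1 and 3 hold absolute (oldBase-relative) addresses.
        int[] image = { 0x3E, oldBase + 0x10, 0xC3, oldBase + 0x04 };
        int[] relocationTable = { 1, 3 };   // indices that need fixing up

        int[] moved = image.clone();        // step 1: copy to the new location
        for (int index : relocationTable) { // step 2: fix up absolute references
            moved[index] += newBase - oldBase;
        }
        // step 3: (in real code) jump to the moved copy; here we just print it.
        for (int word : moved) System.out.printf("0x%04X%n", word);
    }
}
```

In a genuine self-relocating program, the copy, fix-up, and transfer of control are performed by the program itself rather than by a loader, which is precisely what distinguishes it from ordinary loader-based relocation.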
https://en.wikipedia.org/wiki/Self-relocation
Discrete mathematics is the study of mathematical structures that can be considered "discrete" (in a way analogous to discrete variables, having a bijection with the set of natural numbers) rather than "continuous" (analogously to continuous functions). Objects studied in discrete mathematics include integers, graphs, and statements in logic.[1][2][3] By contrast, discrete mathematics excludes topics in "continuous mathematics" such as real numbers, calculus or Euclidean geometry. Discrete objects can often be enumerated by integers; more formally, discrete mathematics has been characterized as the branch of mathematics dealing with countable sets[4] (finite sets or sets with the same cardinality as the natural numbers). However, there is no exact definition of the term "discrete mathematics".[5] The set of objects studied in discrete mathematics can be finite or infinite. The term finite mathematics is sometimes applied to parts of the field of discrete mathematics that deal with finite sets, particularly those areas relevant to business. Research in discrete mathematics increased in the latter half of the twentieth century partly due to the development of digital computers, which operate in "discrete" steps and store data in "discrete" bits. Concepts and notations from discrete mathematics are useful in studying and describing objects and problems in branches of computer science, such as computer algorithms, programming languages, cryptography, automated theorem proving, and software development. Conversely, computer implementations are significant in applying ideas from discrete mathematics to real-world problems. Although the main objects of study in discrete mathematics are discrete objects, analytic methods from "continuous" mathematics are often employed as well. In university curricula, discrete mathematics appeared in the 1980s, initially as a computer science support course; its contents were somewhat haphazard at the time. The curriculum has since developed in conjunction with efforts by ACM and MAA into a course that is basically intended to develop mathematical maturity in first-year students; it is therefore nowadays a prerequisite for mathematics majors in some universities as well.[6][7] Some high-school-level discrete mathematics textbooks have appeared as well.[8] At this level, discrete mathematics is sometimes seen as a preparatory course, like precalculus in this respect.[9] The Fulkerson Prize is awarded for outstanding papers in discrete mathematics. Theoretical computer science includes areas of discrete mathematics relevant to computing. It draws heavily on graph theory and mathematical logic. Included within theoretical computer science is the study of algorithms and data structures. Computability studies what can be computed in principle, and has close ties to logic, while complexity studies the time, space, and other resources taken by computations. Automata theory and formal language theory are closely related to computability. Petri nets and process algebras are used to model computer systems, and methods from discrete mathematics are used in analyzing VLSI electronic circuits. Computational geometry applies algorithms to geometrical problems and representations of geometrical objects, while computer image analysis applies them to representations of images. Theoretical computer science also includes the study of various continuous computational topics. Information theory involves the quantification of information.
Closely related is coding theory, which is used to design efficient and reliable data transmission and storage methods. Information theory also includes continuous topics such as analog signals, analog coding, and analog encryption. Logic is the study of the principles of valid reasoning and inference, as well as of consistency, soundness, and completeness. For example, in most systems of logic (but not in intuitionistic logic) Peirce's law (((P→Q)→P)→P) is a theorem. For classical logic, it can be easily verified with a truth table. The study of mathematical proof is particularly important in logic, and has applications to automated theorem proving and formal verification of software. Logical formulas are discrete structures, as are proofs, which form finite trees[10] or, more generally, directed acyclic graph structures[11][12] (with each inference step combining one or more premise branches to give a single conclusion). The truth values of logical formulas usually form a finite set, generally restricted to two values, true and false, but logic can also be continuous-valued, e.g., fuzzy logic. Concepts such as infinite proof trees or infinite derivation trees have also been studied,[13] e.g. infinitary logic. Set theory is the branch of mathematics that studies sets, which are collections of objects, such as {blue, white, red} or the (infinite) set of all prime numbers. Partially ordered sets and sets with other relations have applications in several areas. In discrete mathematics, countable sets (including finite sets) are the main focus. The beginning of set theory as a branch of mathematics is usually marked by Georg Cantor's work distinguishing between different kinds of infinite set, motivated by the study of trigonometric series, and further development of the theory of infinite sets is outside the scope of discrete mathematics. Indeed, contemporary work in descriptive set theory makes extensive use of traditional continuous mathematics. Combinatorics studies the ways in which discrete structures can be combined or arranged. Enumerative combinatorics concentrates on counting the number of certain combinatorial objects; e.g. the twelvefold way provides a unified framework for counting permutations, combinations and partitions. Analytic combinatorics concerns the enumeration (i.e., determining the number) of combinatorial structures using tools from complex analysis and probability theory. In contrast with enumerative combinatorics, which uses explicit combinatorial formulae and generating functions to describe the results, analytic combinatorics aims at obtaining asymptotic formulae. Topological combinatorics concerns the use of techniques from topology and algebraic topology/combinatorial topology in combinatorics. Design theory is a study of combinatorial designs, which are collections of subsets with certain intersection properties. Partition theory studies various enumeration and asymptotic problems related to integer partitions, and is closely related to q-series, special functions and orthogonal polynomials. Originally a part of number theory and analysis, partition theory is now considered a part of combinatorics or an independent field. Order theory is the study of partially ordered sets, both finite and infinite. Graph theory, the study of graphs and networks, is often considered part of combinatorics, but has grown large enough and distinct enough, with its own kind of problems, to be regarded as a subject in its own right.[14] Graphs are one of the prime objects of study in discrete mathematics.
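The truth-table verification of Peirce's law mentioned above is small enough to carry out mechanically. Here is a minimal Python check, enumerating all four assignments of truth values to P and Q:

```python
# Brute-force truth-table check of Peirce's law: ((P -> Q) -> P) -> P
# holds in classical two-valued logic for every assignment of P and Q.
from itertools import product

def implies(a, b):
    return (not a) or b

assert all(
    implies(implies(implies(p, q), p), p)
    for p, q in product([False, True], repeat=2)
)
print("Peirce's law holds in all four rows of the truth table")
```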
They are among the most ubiquitous models of both natural and human-made structures. They can model many types of relations and process dynamics in physical, biological and social systems. In computer science, they can represent networks of communication, data organization, computational devices, the flow of computation, etc. In mathematics, they are useful in geometry and certain parts of topology, e.g. knot theory. Algebraic graph theory has close links with group theory, and topological graph theory has close links to topology. There are also continuous graphs; however, for the most part, research in graph theory falls within the domain of discrete mathematics. Number theory is concerned with the properties of numbers in general, particularly integers. It has applications to cryptography and cryptanalysis, particularly with regard to modular arithmetic, diophantine equations, linear and quadratic congruences, prime numbers and primality testing. Other discrete aspects of number theory include geometry of numbers. In analytic number theory, techniques from continuous mathematics are also used. Topics that go beyond discrete objects include transcendental numbers, diophantine approximation, p-adic analysis and function fields. Algebraic structures occur as both discrete examples and continuous examples. Discrete algebras include: Boolean algebra, used in logic gates and programming; relational algebra, used in databases; discrete and finite versions of groups, rings and fields, which are important in algebraic coding theory; and discrete semigroups and monoids, which appear in the theory of formal languages. There are many concepts and theories in continuous mathematics which have discrete versions, such as discrete calculus, discrete Fourier transforms, discrete geometry, discrete logarithms, discrete differential geometry, discrete exterior calculus, discrete Morse theory, discrete optimization, discrete probability theory, discrete probability distributions, difference equations, discrete dynamical systems, and discrete vector measures. In discrete calculus and the calculus of finite differences, a function defined on an interval of the integers is usually called a sequence. A sequence could be a finite sequence from a data source or an infinite sequence from a discrete dynamical system. Such a discrete function could be defined explicitly by a list (if its domain is finite), or by a formula for its general term, or it could be given implicitly by a recurrence relation or difference equation. Difference equations are similar to differential equations, but replace differentiation by taking the difference between adjacent terms; they can be used to approximate differential equations or (more often) studied in their own right. Many questions and methods concerning differential equations have counterparts for difference equations. For instance, where there are integral transforms in harmonic analysis for studying continuous functions or analogue signals, there are discrete transforms for discrete functions or digital signals. As well as discrete metric spaces, there are more general discrete topological spaces, finite metric spaces, and finite topological spaces. The time scale calculus is a unification of the theory of difference equations with that of differential equations, which has applications to fields requiring simultaneous modelling of discrete and continuous data. Another way of modeling such a situation is the notion of hybrid dynamical systems. Discrete geometry and combinatorial geometry are about combinatorial properties of discrete collections of geometrical objects.
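The remark that difference equations can approximate differential equations is easy to make concrete. The sketch below is a minimal example (the rate k, step size h and time horizon are arbitrary choices of this sketch): replacing the derivative in y' = ky with a forward difference yields the recurrence y[n+1] = y[n] + hky[n], known as Euler's method.

```python
# Approximating the differential equation y' = k*y by a difference equation:
# replacing the derivative with the forward difference (y[n+1] - y[n]) / h
# gives the recurrence y[n+1] = y[n] + h*k*y[n].
import math

k, h, y = 0.5, 0.01, 1.0       # growth rate, step size, initial value y(0) = 1
for n in range(int(1.0 / h)):  # march the recurrence out to t = 1
    y = y + h * k * y

print(y)              # discrete solution at t = 1, approximately 1.6467
print(math.exp(k))    # exact solution e^0.5, approximately 1.6487
```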
A long-standing topic in discrete geometry is tiling of the plane. In algebraic geometry, the concept of a curve can be extended to discrete geometries by taking the spectra of polynomial rings over finite fields to be models of the affine spaces over that field, and letting subvarieties or spectra of other rings provide the curves that lie in that space. Although the space in which the curves appear has a finite number of points, the curves are not so much sets of points as analogues of curves in continuous settings. For example, every point of the form $V(x-c) \subset \operatorname{Spec} K[x] = \mathbb{A}^1$ for $K$ a field can be studied either as $\operatorname{Spec} K[x]/(x-c) \cong \operatorname{Spec} K$, a point, or as the spectrum $\operatorname{Spec} K[x]_{(x-c)}$ of the local ring at $(x-c)$, a point together with a neighborhood around it. Algebraic varieties also have a well-defined notion of tangent space, called the Zariski tangent space, making many features of calculus applicable even in finite settings. In applied mathematics, discrete modelling is the discrete analogue of continuous modelling. In discrete modelling, discrete formulae are fit to data. A common method in this form of modelling is to use recurrence relations. Discretization concerns the process of transferring continuous models and equations into discrete counterparts, often for the purposes of making calculations easier by using approximations. Numerical analysis provides an important example. The history of discrete mathematics has involved a number of challenging problems which have focused attention within areas of the field. In graph theory, much research was motivated by attempts to prove the four color theorem, first stated in 1852, but not proved until 1976 (by Kenneth Appel and Wolfgang Haken, using substantial computer assistance).[15] In logic, the second problem on David Hilbert's list of open problems presented in 1900 was to prove that the axioms of arithmetic are consistent. Gödel's second incompleteness theorem, proved in 1931, showed that this was not possible, at least not within arithmetic itself. Hilbert's tenth problem was to determine whether a given polynomial Diophantine equation with integer coefficients has an integer solution. In 1970, Yuri Matiyasevich proved that this could not be done. The need to break German codes in World War II led to advances in cryptography and theoretical computer science, with the first programmable digital electronic computer being developed at England's Bletchley Park with the guidance of Alan Turing and his seminal work, On Computable Numbers.[16] The Cold War meant that cryptography remained important, with fundamental advances such as public-key cryptography being developed in the following decades. The telecommunications industry has also motivated advances in discrete mathematics, particularly in graph theory and information theory. Formal verification of statements in logic has been necessary for software development of safety-critical systems, and advances in automated theorem proving have been driven by this need. Computational geometry has been an important part of the computer graphics incorporated into modern video games and computer-aided design tools.
Several fields of discrete mathematics, particularly theoretical computer science, graph theory, and combinatorics, are important in addressing the challenging bioinformatics problems associated with understanding the tree of life.[17] Currently, one of the most famous open problems in theoretical computer science is the P = NP problem, which involves the relationship between the complexity classes P and NP. The Clay Mathematics Institute has offered a $1 million USD prize for the first correct proof, along with prizes for six other mathematical problems.[18]
https://en.wikipedia.org/wiki/Discrete_mathematics
In computer science, LR parsers are a type of bottom-up parser that analyse deterministic context-free languages in linear time.[1] There are several variants of LR parsers: SLR parsers, LALR parsers, canonical LR(1) parsers, minimal LR(1) parsers, and generalized LR parsers (GLR parsers). LR parsers can be generated by a parser generator from a formal grammar defining the syntax of the language to be parsed. They are widely used for the processing of computer languages. An LR parser (left-to-right, rightmost derivation in reverse) reads input text from left to right without backing up (this is true for most parsers), and produces a rightmost derivation in reverse: it does a bottom-up parse, not a top-down LL parse or ad-hoc parse. The name "LR" is often followed by a numeric qualifier, as in "LR(1)" or sometimes "LR(k)". To avoid backtracking or guessing, the LR parser is allowed to peek ahead at k lookahead input symbols before deciding how to parse earlier symbols. Typically k is 1 and is not mentioned. The name "LR" is often preceded by other qualifiers, as in "SLR" and "LALR". The "LR(k)" notation for a grammar was suggested by Knuth to stand for "translatable from left to right with bound k".[1] LR parsers are deterministic; they produce a single correct parse without guesswork or backtracking, in linear time. This is ideal for computer languages, but LR parsers are not suited for human languages, which need more flexible but inevitably slower methods. Some methods which can parse arbitrary context-free languages (e.g., Cocke–Younger–Kasami, Earley, GLR) have worst-case performance of O(n³) time. Other methods which backtrack or yield multiple parses may even take exponential time when they guess badly.[2] The above properties of L, R, and k are actually shared by all shift-reduce parsers, including precedence parsers. But by convention, the LR name stands for the form of parsing invented by Donald Knuth, and excludes the earlier, less powerful precedence methods (for example, the operator-precedence parser).[1] LR parsers can handle a larger range of languages and grammars than precedence parsers or top-down LL parsing.[3] This is because the LR parser waits until it has seen an entire instance of some grammar pattern before committing to what it has found. An LL parser has to decide or guess what it is seeing much sooner, when it has only seen the leftmost input symbol of that pattern. An LR parser scans and parses the input text in one forward pass over the text. The parser builds up the parse tree incrementally, bottom up, and left to right, without guessing or backtracking. At every point in this pass, the parser has accumulated a list of subtrees or phrases of the input text that have already been parsed. Those subtrees are not yet joined together because the parser has not yet reached the right end of the syntax pattern that will combine them. At step 6 in an example parse, only "A * 2" has been parsed, incompletely. Only the shaded lower-left corner of the parse tree exists. None of the parse tree nodes numbered 7 and above exist yet. Nodes 3, 4, and 6 are the roots of isolated subtrees for variable A, operator *, and number 2, respectively. These three root nodes are temporarily held in a parse stack. The remaining unparsed portion of the input stream is "+ 1". As with other shift-reduce parsers, an LR parser works by doing some combination of Shift steps and Reduce steps.
If the input has no syntax errors, the parser continues with these steps until all of the input has been consumed and all of the parse trees have been reduced to a single tree representing an entire legal input. LR parsers differ from other shift-reduce parsers in how they decide when to reduce, and how to pick between rules with similar endings. But the final decisions and the sequence of shift or reduce steps are the same. Much of the LR parser's efficiency is from being deterministic. To avoid guessing, the LR parser often looks ahead (rightwards) at the next scanned symbol before deciding what to do with previously scanned symbols. The lexical scanner works one or more symbols ahead of the parser. The lookahead symbols are the 'right-hand context' for the parsing decision.[4] Like other shift-reduce parsers, an LR parser lazily waits until it has scanned and parsed all parts of some construct before committing to what the combined construct is. The parser then acts immediately on the combination instead of waiting any further. In the parse tree example, the phrase A gets reduced to Value and then to Products in steps 1-3 as soon as lookahead * is seen, rather than waiting any later to organize those parts of the parse tree. The decisions for how to handle A are based only on what the parser and scanner have already seen, without considering things that appear much later to the right. Reductions reorganize the most recently parsed things, immediately to the left of the lookahead symbol. So the list of already-parsed things acts like a stack. This parse stack grows rightwards. The base or bottom of the stack is on the left and holds the leftmost, oldest parse fragment. Every reduction step acts only on the rightmost, newest parse fragments. (This accumulative parse stack is very unlike the predictive, leftward-growing parse stack used by top-down parsers.) Step 6 applies a grammar rule with multiple parts: Products → Products * Value. This matches the stack top holding the parsed phrases "... Products * Value". The reduce step replaces this instance of the rule's right-hand side, "Products * Value", by the rule's left-hand side symbol, here a larger Products. If the parser builds complete parse trees, the three trees for inner Products, *, and Value are combined by a new tree root for Products. Otherwise, semantic details from the inner Products and Value are output to some later compiler pass, or are combined and saved in the new Products symbol.[5] In LR parsers, the shift and reduce decisions are potentially based on the entire stack of everything that has been previously parsed, not just on a single, topmost stack symbol. If done in an unclever way, that could lead to very slow parsers that get slower and slower for longer inputs. LR parsers do this with constant speed, by summarizing all the relevant left-context information into a single number called the LR(0) parser state. For each grammar and LR analysis method, there is a fixed (finite) number of such states. Besides holding the already-parsed symbols, the parse stack also remembers the state numbers reached by everything up to those points. At every parse step, the entire input text is divided into a stack of previously parsed phrases, a current lookahead symbol, and the remaining unscanned text. The parser's next action is determined by its current LR(0) state number (rightmost on the stack) and the lookahead symbol. In the steps below, all the black details are exactly the same as in other non-LR shift-reduce parsers.
LR parser stacks add the state information in purple, summarizing the black phrases to their left on the stack and what syntax possibilities to expect next. Users of an LR parser can usually ignore state information. These states are explained in a later section. At initial step 0, the input stream "A * 2 + 1" is divided into an empty parsed section, the lookahead symbol id (for "A"), and the remaining unscanned text "* 2 + 1". The parse stack begins by holding only initial state 0. When state 0 sees the lookahead id, it knows to shift that id onto the stack, scan the next input symbol *, and advance to state 9. At step 4, the total input stream "A * 2 + 1" is currently divided into the parsed section "A *" (held on the stack as the phrases Products and *), the lookahead symbol int (for "2"), and the remaining unscanned text "+ 1". The states corresponding to the stacked phrases are 0, 4, and 5. The current, rightmost state on the stack is state 5. When state 5 sees the lookahead int, it knows to shift that int onto the stack as its own phrase, scan the next input symbol +, and advance to state 8. At step 12, all of the input stream has been consumed but only partially organized. The current state is 3. When state 3 sees the lookahead eof, it knows to apply the completed grammar rule by combining the stack's rightmost three phrases for Sums, +, and Products into one thing. State 3 itself doesn't know what the next state should be. This is found by going back to state 0, just to the left of the phrase being reduced. When state 0 sees this new completed instance of a Sums, it advances to state 1 (again). This consulting of older states is why they are kept on the stack, instead of keeping only the current state. LR parsers are constructed from a grammar that formally defines the syntax of the input language as a set of patterns. The grammar doesn't cover all language rules, such as the size of numbers, or the consistent use of names and their definitions in the context of the whole program. LR parsers use a context-free grammar that deals just with local patterns of symbols. The example grammar used here is a tiny subset of the Java or C language, with the rules r0: Goal → Sums eof, r1: Sums → Sums + Products, r2: Sums → Products, r3: Products → Products * Value, r4: Products → Value, r5: Value → int, and r6: Value → id. The grammar's terminal symbols are the multi-character symbols or 'tokens' found in the input stream by a lexical scanner. Here these include + and *, int for any integer constant, id for any identifier name, and eof for the end of the input file. The grammar doesn't care what the int values or id spellings are, nor does it care about blanks or line breaks. The grammar uses these terminal symbols but does not define them. They are always leaf nodes (at the bottom bushy end) of the parse tree. The capitalized terms like Sums are nonterminal symbols. These are names for concepts or patterns in the language. They are defined in the grammar and never occur themselves in the input stream. They are always internal nodes (above the bottom) of the parse tree. They only happen as a result of the parser applying some grammar rule. Some nonterminals are defined with two or more rules; these are alternative patterns. Rules can refer back to themselves, in which case they are called recursive. This grammar uses recursive rules to handle repeated math operators. Grammars for complete languages use recursive rules to handle lists, parenthesized expressions, and nested statements. Any given computer language can be described by several different grammars. An LR(1) parser can handle many but not all common grammars. It is usually possible to manually modify a grammar so that it fits the limitations of LR(1) parsing and the generator tool. The grammar for an LR parser must be unambiguous itself, or must be augmented by tie-breaking precedence rules.
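A parser generator consumes such a grammar as plain data. The following Python sketch simply transcribes the rules above into that form (the names GRAMMAR, TERMINALS and NONTERMINALS are this sketch's own, and the r0-r6 numbering follows the rule names used in this article):

```python
# The example grammar transcribed as data, as a parser generator would see it.
# Each rule maps a name to (left-hand side, right-hand-side symbol list).
GRAMMAR = {
    "r0": ("Goal",     ["Sums", "eof"]),
    "r1": ("Sums",     ["Sums", "+", "Products"]),
    "r2": ("Sums",     ["Products"]),
    "r3": ("Products", ["Products", "*", "Value"]),
    "r4": ("Products", ["Value"]),
    "r5": ("Value",    ["int"]),
    "r6": ("Value",    ["id"]),
}
NONTERMINALS = {lhs for lhs, _ in GRAMMAR.values()}
TERMINALS = {sym for _, rhs in GRAMMAR.values()
             for sym in rhs if sym not in NONTERMINALS}

print(sorted(NONTERMINALS))  # ['Goal', 'Products', 'Sums', 'Value']
print(sorted(TERMINALS))     # ['*', '+', 'eof', 'id', 'int']
```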
This means there is only one correct way to apply the grammar to a given legal example of the language, resulting in a unique parse tree with just one meaning, and a unique sequence of shift/reduce actions for that example. LR parsing is not a useful technique for human languages with ambiguous grammars that depend on the interplay of words. Human languages are better handled by parsers like the generalized LR parser, the Earley parser, or the CYK algorithm, which can simultaneously compute all possible parse trees in one pass. Most LR parsers are table driven. The parser's program code is a simple generic loop that is the same for all grammars and languages. The knowledge of the grammar and its syntactic implications is encoded into unchanging data tables called parse tables (or parsing tables). Entries in a table show whether to shift or reduce (and by which grammar rule), for every legal combination of parser state and lookahead symbol. The parse tables also tell how to compute the next state, given just a current state and a next symbol. The parse tables are much larger than the grammar. LR tables are hard to accurately compute by hand for big grammars. So they are mechanically derived from the grammar by some parser generator tool like Bison.[6] Depending on how the states and parsing table are generated, the resulting parser is called either an SLR (simple LR) parser, LALR (look-ahead LR) parser, or canonical LR parser. LALR parsers handle more grammars than SLR parsers. Canonical LR parsers handle even more grammars, but use many more states and much larger tables. The example grammar is SLR. LR parse tables are two-dimensional. Each current LR(0) parser state has its own row. Each possible next symbol has its own column. Some combinations of state and next symbol are not possible for valid input streams. These blank cells trigger syntax error messages. The Action left half of the table has columns for lookahead terminal symbols. These cells determine whether the next parser action is shift (to state n), or reduce (by grammar rule rn). The Goto right half of the table has columns for nonterminal symbols. These cells show which state to advance to, after some reduction's left-hand side has created an expected new instance of that symbol. This is like a shift action but for nonterminals; the lookahead terminal symbol is unchanged. The table column "Current Rules" documents the meaning and syntax possibilities for each state, as worked out by the parser generator. It is not included in the actual tables used at parsing time. The • (pink dot) marker shows where the parser is now, within some partially recognized grammar rules. The things to the left of • have been parsed, and the things to the right are expected soon. A state has several such current rules if the parser has not yet narrowed possibilities down to a single rule. In state 2 above, the parser has just found and shifted-in the + of grammar rule r1: Sums → Sums + Products. The next expected phrase is Products. Products begins with terminal symbols int or id. If the lookahead is either of those, the parser shifts them in and advances to state 8 or 9, respectively. When a Products has been found, the parser advances to state 3 to accumulate the complete list of summands and find the end of rule r0. A Products can also begin with nonterminal Value. For any other lookahead or nonterminal, the parser announces a syntax error.
In state 3, the parser has just found a Products phrase that could complete either of two grammar rules, r1 (Sums → Sums + Products) or r3 (Products → Products * Value). The choice between r1 and r3 can't be decided just from looking backwards at prior phrases. The parser has to check the lookahead symbol to tell what to do. If the lookahead is *, the parser is in rule 3, so it shifts in the * and advances to state 5. If the lookahead is eof, the parser is at the end of rule 1 and rule 0, so it is done. In state 9 above, all the non-blank, non-error cells are for the same reduction r6. Some parsers save time and table space by not checking the lookahead symbol in these simple cases. Syntax errors are then detected somewhat later, after some harmless reductions, but still before the next shift action or parser decision. Individual table cells must not hold multiple, alternative actions, otherwise the parser would be nondeterministic with guesswork and backtracking. If the grammar is not LR(1), some cells will have shift/reduce conflicts between a possible shift action and reduce action, or reduce/reduce conflicts between multiple grammar rules. LR(k) parsers resolve these conflicts (where possible) by checking additional lookahead symbols beyond the first. The LR parser begins with a nearly empty parse stack containing just the start state 0, and with the lookahead holding the input stream's first scanned symbol. The parser then repeats the following loop step until done, or stuck on a syntax error: the topmost state on the parse stack is some state s, and the current lookahead is some terminal symbol t. Look up the next parser action from row s and column t of the Lookahead Action table. That action is either Shift, Reduce, Accept, or Error. The LR parser stack usually stores just the LR(0) automaton states, as the grammar symbols may be derived from them (in the automaton, all input transitions to some state are marked with the same symbol, which is the symbol associated with this state). Moreover, these symbols are almost never needed, as the state is all that matters when making the parsing decision.[7] This section of the article can be skipped by most users of LR parser generators. State 2 in the example parse table is for the partially parsed rule r1: Sums → Sums + • Products. This shows how the parser got here, by seeing Sums then + while looking for a larger Sums. The • marker has advanced beyond the beginning of the rule. It also shows how the parser expects to eventually complete the rule, by next finding a complete Products. But more details are needed on how to parse all the parts of that Products. The partially parsed rules for a state are called its "core LR(0) items". The parser generator adds additional rules or items for all the possible next steps in building up the expected Products: Products → • Products * Value, Products → • Value, Value → • int, and Value → • id. The • marker is at the beginning of each of these added rules; the parser has not yet confirmed and parsed any part of them. These additional items are called the "closure" of the core items. For each nonterminal symbol immediately following a •, the generator adds the rules defining that symbol. This adds more • markers, and possibly different follower symbols. This closure process continues until all follower symbols have been expanded. The follower nonterminals for state 2 begin with Products. Value is then added by closure. The follower terminals are int and id. The kernel and closure items together show all possible legal ways to proceed from the current state to future states and complete phrases.
If a follower symbol appears in only one item, it leads to a next state containing only one core item, with the • marker advanced. So int leads to next state 8 with core Value → int •. If the same follower symbol appears in several items, the parser cannot yet tell which rule applies here. So that symbol leads to a next state that shows all remaining possibilities, again with the • marker advanced. Products appears in both r1 and r3. So Products leads to next state 3 with cores Sums → Sums + Products • and Products → Products • * Value. In words, that means if the parser has seen a single Products, it might be done, or it might still have even more things to multiply together. All the core items have the same symbol preceding the • marker; all transitions into this state are always with that same symbol. Some transitions will be to cores and states that have been enumerated already. Other transitions lead to new states. The generator starts with the grammar's goal rule. From there it keeps exploring known states and transitions until all needed states have been found. These states are called "LR(0)" states because they use a lookahead of k=0, i.e. no lookahead. The only checking of input symbols occurs when the symbol is shifted in. Checking of lookaheads for reductions is done separately by the parse table, not by the enumerated states themselves. The parse table describes all possible LR(0) states and their transitions. They form a finite-state machine (FSM). An FSM is a simple engine for parsing simple unnested languages, without using a stack. In this LR application, the FSM's modified "input language" has both terminal and nonterminal symbols, and covers any partially parsed stack snapshot of the full LR parse. Recall step 5 of the Parse Steps Example: 0 Products 4 * 5 int 8. The parse stack shows a series of state transitions, from the start state 0, to state 4 and then on to 5 and current state 8. The symbols on the parse stack are the shift or goto symbols for those transitions. Another way to view this is that the finite-state machine can scan the stream "Products * int + 1" (without using yet another stack) and find the leftmost complete phrase that should be reduced next. And that is indeed its job! How can a mere FSM do this when the original unparsed language has nesting and recursion and definitely requires an analyzer with a stack? The trick is that everything to the left of the stack top has already been fully reduced. This eliminates all the loops and nesting from those phrases. The FSM can ignore all the older beginnings of phrases, and track just the newest phrases that might be completed next. The obscure name for this in LR theory is "viable prefix". The states and transitions give all the needed information for the parse table's shift actions and goto actions. The generator also needs to calculate the expected lookahead sets for each reduce action. In SLR parsers, these lookahead sets are determined directly from the grammar, without considering the individual states and transitions. For each nonterminal S, the SLR generator works out Follow(S), the set of all the terminal symbols which can immediately follow some occurrence of S. In the parse table, each reduction to S uses Follow(S) as its LR(1) lookahead set. Such follow sets are also used by generators for LL top-down parsers. A grammar that has no shift/reduce or reduce/reduce conflicts when using Follow sets is called an SLR grammar.
LALR parsers have the same states as SLR parsers, but use a more complicated, more precise way of working out the minimum necessary reduction lookaheads for each individual state. Depending on the details of the grammar, this may turn out to be the same as the Follow set computed by SLR parser generators, or it may turn out to be a subset of the SLR lookaheads. Some grammars are okay for LALR parser generators but not for SLR parser generators. This happens when the grammar has spurious shift/reduce or reduce/reduce conflicts using Follow sets, but no conflicts when using the exact sets computed by the LALR generator. The grammar is then called LALR(1) but not SLR. An SLR or LALR parser avoids having duplicate states. But this minimization is not necessary, and can sometimes create unnecessary lookahead conflicts. Canonical LR parsers use duplicated (or "split") states to better remember the left and right context of a nonterminal's use. Each occurrence of a symbol S in the grammar can be treated independently with its own lookahead set, to help resolve reduction conflicts. This handles a few more grammars. Unfortunately, this greatly magnifies the size of the parse tables if done for all parts of the grammar. This splitting of states can also be done manually and selectively with any SLR or LALR parser, by making two or more named copies of some nonterminals. A grammar that is conflict-free for a canonical LR generator but has conflicts in an LALR generator is called LR(1) but not LALR(1), and not SLR. SLR, LALR, and canonical LR parsers make exactly the same shift and reduce decisions when the input stream is syntactically correct. When the input has a syntax error, the LALR parser may do some additional (harmless) reductions before detecting the error, compared with the canonical LR parser. And the SLR parser may do even more. This happens because the SLR and LALR parsers are using a generous superset approximation to the true, minimal lookahead symbols for that particular state. LR parsers can generate somewhat helpful error messages for the first syntax error in a program, by simply enumerating all the terminal symbols that could have appeared next instead of the unexpected bad lookahead symbol. But this does not help the parser work out how to parse the remainder of the input program to look for further, independent errors. If the parser recovers badly from the first error, it is very likely to mis-parse everything else and produce a cascade of unhelpful spurious error messages. In the yacc and bison parser generators, the parser has an ad hoc mechanism to abandon the current statement, discard some parsed phrases and lookahead tokens surrounding the error, and resynchronize the parse at some reliable statement-level delimiter like semicolons or braces. This often works well for allowing the parser and compiler to look over the rest of the program. Many syntactic coding errors are simple typos or omissions of a trivial symbol. Some LR parsers attempt to detect and automatically repair these common cases. The parser enumerates every possible single-symbol insertion, deletion, or substitution at the error point. The compiler does a trial parse with each change to see if it worked okay. (This requires backtracking to snapshots of the parse stack and input stream, normally unneeded by the parser.) Some best repair is picked. This gives a very helpful error message and resynchronizes the parse well. However, the repair is not trustworthy enough to permanently modify the input file.
Repair of syntax errors is easiest to do consistently in parsers (like LR) that have parse tables and an explicit data stack. The LR parser generator decides what should happen for each combination of parser state and lookahead symbol. These decisions are usually turned into read-only data tables that drive a generic parser loop that is grammar- and state-independent. But there are also other ways to turn those decisions into an active parser. Some LR parser generators create separate tailored program code for each state, rather than a parse table. These parsers can run several times faster than the generic parser loop in table-driven parsers. The fastest parsers use generated assembler code. In the recursive ascent parser variation, the explicit parse stack structure is also replaced by the implicit stack used by subroutine calls. Reductions terminate several levels of subroutine calls, which is clumsy in most languages. So recursive ascent parsers are generally slower, less obvious, and harder to hand-modify than recursive descent parsers. Another variation replaces the parse table by pattern-matching rules in non-procedural languages such as Prolog. Generalized LR (GLR) parsers use LR bottom-up techniques to find all possible parses of input text, not just one correct parse. This is essential for ambiguous grammars such as those used for human languages. The multiple valid parse trees are computed simultaneously, without backtracking. GLR is sometimes helpful for computer languages that are not easily described by a conflict-free LALR(1) grammar. Left corner (LC) parsers use LR bottom-up techniques for recognizing the left end of alternative grammar rules. When the alternatives have been narrowed down to a single possible rule, the parser then switches to top-down LL(1) techniques for parsing the rest of that rule. LC parsers have smaller parse tables than LALR parsers and better error diagnostics. There are no widely used generators for deterministic LC parsers. Multiple-parse LC parsers are helpful with human languages with very large grammars. LR parsers were invented by Donald Knuth in 1965 as an efficient generalization of precedence parsers. Knuth proved that LR parsers were the most general-purpose parsers possible that would still be efficient in the worst cases.[citation needed] In other words, if a language was reasonable enough to allow an efficient one-pass parser, it could be described by an LR(k) grammar. And that grammar could always be mechanically transformed into an equivalent (but larger) LR(1) grammar. So an LR(1) parsing method was, in theory, powerful enough to handle any reasonable language. In practice, the natural grammars for many programming languages are close to being LR(1).[citation needed] The canonical LR parsers described by Knuth had too many states and very big parse tables that were impractically large for the limited memory of computers of that era. LR parsing became practical when Frank DeRemer invented SLR and LALR parsers with many fewer states.[10][11] For full details on LR theory and how LR parsers are derived from grammars, see The Theory of Parsing, Translation, and Compiling, Volume 1 (Aho and Ullman).[7][2] Earley parsers apply the techniques and • notation of LR parsers to the task of generating all possible parses for ambiguous grammars such as those for human languages. While LR(k) grammars have equal generative power for all k≥1, the case of LR(0) grammars is slightly different.
A language L is said to have the prefix property if no word in L is a proper prefix of another word in L.[12] A language L has an LR(0) grammar if and only if L is a deterministic context-free language with the prefix property.[13] As a consequence, a language L is deterministic context-free if and only if L$ has an LR(0) grammar, where "$" is not a symbol of L's alphabet.[14] This example of LR parsing uses the following small grammar with goal symbol E: (1) E → E * B, (2) E → E + B, (3) E → B, (4) B → 0, (5) B → 1, to parse the following input: 1 + 1. The two LR(0) parsing tables for this grammar look as follows (sn means shift to state n, rm means reduce by rule m, acc means accept):

state |  *    +    0    1    $   ||  E    B
  0   |            s1   s2       ||  3    4
  1   |  r4   r4   r4   r4   r4  ||
  2   |  r5   r5   r5   r5   r5  ||
  3   |  s5   s6             acc ||
  4   |  r3   r3   r3   r3   r3  ||
  5   |            s1   s2       ||       7
  6   |            s1   s2       ||       8
  7   |  r1   r1   r1   r1   r1  ||
  8   |  r2   r2   r2   r2   r2  ||

The action table (left half) is indexed by a state of the parser and a terminal (including a special terminal $ that indicates the end of the input stream) and contains three types of actions: shift (sn), which pushes the next state n onto the stack; reduce (rm), which applies grammar rule m; and accept (acc), which accepts the input string. The goto table (right half) is indexed by a state of the parser and a nonterminal and simply indicates what the next state of the parser will be if it has recognized a certain nonterminal. This table is important for finding the next state after every reduction. After a reduction, the next state is found by looking up the goto table entry for the top of the stack (i.e. the current state) and the reduced rule's LHS (i.e. the nonterminal). The steps below illustrate the process. Here the state refers to the element at the top of the stack (the right-most element), and the next action is determined by referring to the action table above. A $ is appended to the input string to denote the end of the stream. The parser starts out with the stack containing just the initial state ('0'): [0]. The first symbol from the input string that the parser sees is '1'. To find the next action (shift, reduce, accept or error), the action table is indexed with the current state (the "current state" is just whatever is on the top of the stack), which in this case is 0, and the current input symbol, which is '1'. The action table specifies a shift to state 2, and so state 2 is pushed onto the stack (again, all the state information is in the stack, so "shifting to state 2" is the same as pushing 2 onto the stack). The resulting stack is [0 '1' 2], where the top of the stack is 2. For the sake of explanation, the symbol (e.g., '1', B) that caused the transition to the next state is shown, although strictly speaking it is not part of the stack. In state 2, the action table says to reduce with grammar rule 5 (regardless of what terminal the parser sees on the input stream), which means that the parser has just recognized the right-hand side of rule 5. In this case, the parser writes 5 to the output stream, pops one state from the stack (since the right-hand side of the rule has one symbol), and pushes on the stack the state from the cell in the goto table for state 0 and B, i.e., state 4. The resulting stack is [0 B 4]. However, in state 4, the action table says the parser should now reduce with rule 3. So it writes 3 to the output stream, pops one state from the stack, and finds the new state in the goto table for state 0 and E, which is state 3. The resulting stack is [0 E 3]. The next terminal that the parser sees is a '+', and according to the action table it should then shift to state 6, giving [0 E 3 '+' 6]. The resulting stack can be interpreted as the history of a finite-state machine that has just read a nonterminal E followed by a terminal '+'. The transition table of this automaton is defined by the shift actions in the action table and the goto actions in the goto table.
The next terminal is now '1', and this means that the parser performs a shift and goes to state 2: [0 E 3 '+' 6 '1' 2]. Just as the previous '1', this one is reduced to B, giving the following stack: [0 E 3 '+' 6 B 8]. The stack corresponds with a list of states of a finite automaton that has read a nonterminal E, followed by a '+' and then a nonterminal B. In state 8 the parser always performs a reduce with rule 2. The top 3 states on the stack correspond with the 3 symbols in the right-hand side of rule 2. This time we pop 3 elements off of the stack (since the right-hand side of the rule has 3 symbols) and look up the goto state for E and 0, thus pushing state 3 back onto the stack: [0 E 3]. Finally, the parser reads a '$' (end of input symbol) from the input stream, which means that according to the action table (the current state is 3) the parser accepts the input string. The rule numbers that will then have been written to the output stream will be [5, 3, 5, 2], which is indeed a rightmost derivation of the string "1 + 1" in reverse. The construction of these parsing tables is based on the notion of LR(0) items (simply called items here), which are grammar rules with a special dot added somewhere in the right-hand side. For example, the rule E → E + B has the following four corresponding items: E → • E + B, E → E • + B, E → E + • B, and E → E + B •. Rules of the form A → ε have only a single item A → •. The item E → E • + B, for example, indicates that the parser has recognized a string corresponding with E on the input stream and now expects to read a '+' followed by another string corresponding with B. It is usually not possible to characterize the state of the parser with a single item, because it may not know in advance which rule it is going to use for reduction. For example, if there is also a rule E → E * B then the items E → E • + B and E → E • * B will both apply after a string corresponding with E has been read. Therefore, it is convenient to characterize the state of the parser by a set of items, in this case the set { E → E • + B, E → E • * B }. An item with a dot before a nonterminal, such as E → E + • B, indicates that the parser expects to parse the nonterminal B next. To ensure the item set contains all possible rules the parser may be in the midst of parsing, it must include all items describing how B itself will be parsed. This means that if there are rules such as B → 1 and B → 0 then the item set must also include the items B → • 1 and B → • 0. In general this can be formulated as follows: if there is an item of the form A → v • B w in an item set, and the grammar has a rule of the form B → w', then the item B → • w' must also be in the item set. Thus, any set of items can be extended by recursively adding all the appropriate items until all nonterminals preceded by dots are accounted for. The minimal extension is called the closure of an item set and written as clos(I), where I is an item set. It is these closed item sets that are taken as the states of the parser, although only the ones that are actually reachable from the begin state will be included in the tables. Before the transitions between the different states are determined, the grammar is augmented with an extra rule (0) S → E eof, where S is a new start symbol and E the old start symbol. The parser will use this rule for reduction exactly when it has accepted the whole input string. For this example, the same grammar as above is augmented thus: (0) S → E eof, (1) E → E * B, (2) E → E + B, (3) E → B, (4) B → 0, (5) B → 1. It is for this augmented grammar that the item sets and the transitions between them will be determined. The first step of constructing the tables consists of determining the transitions between the closed item sets. These transitions will be determined as if we are considering a finite automaton that can read terminals as well as nonterminals.
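The parse just traced is mechanical enough to run. The following Python sketch is a minimal table-driven LR(0) parser: the ACTION and GOTO dictionaries transcribe the tables above, the stack holds only state numbers (as discussed earlier), and the output reproduces the rule sequence [5, 3, 5, 2].

```python
# A minimal table-driven LR(0) parser for the example grammar.
# ('s', n) = shift to state n; ('r', m) = reduce by rule m; 'acc' = accept.

RULES = {            # rule number -> (LHS, length of RHS)
    1: ("E", 3),     # E -> E * B
    2: ("E", 3),     # E -> E + B
    3: ("E", 1),     # E -> B
    4: ("B", 1),     # B -> 0
    5: ("B", 1),     # B -> 1
}
ALL = ["*", "+", "0", "1", "$"]
ACTION = {
    0: {"0": ("s", 1), "1": ("s", 2)},
    1: {t: ("r", 4) for t in ALL},
    2: {t: ("r", 5) for t in ALL},
    3: {"*": ("s", 5), "+": ("s", 6), "$": "acc"},
    4: {t: ("r", 3) for t in ALL},
    5: {"0": ("s", 1), "1": ("s", 2)},
    6: {"0": ("s", 1), "1": ("s", 2)},
    7: {t: ("r", 1) for t in ALL},
    8: {t: ("r", 2) for t in ALL},
}
GOTO = {0: {"E": 3, "B": 4}, 5: {"B": 7}, 6: {"B": 8}}

def parse(tokens):
    stack, output = [0], []          # the stack holds state numbers only
    tokens = tokens + ["$"]          # mark the end of the input stream
    i = 0
    while True:
        action = ACTION[stack[-1]].get(tokens[i])
        if action == "acc":
            return output            # rule numbers: rightmost derivation in reverse
        if action is None:
            raise SyntaxError(f"unexpected {tokens[i]!r}")
        kind, n = action
        if kind == "s":              # shift: push the next state, consume a token
            stack.append(n)
            i += 1
        else:                        # reduce: pop the RHS, consult the goto table
            lhs, size = RULES[n]
            del stack[len(stack) - size:]
            stack.append(GOTO[stack[-1]][lhs])
            output.append(n)

print(parse(["1", "+", "1"]))        # -> [5, 3, 5, 2], as in the text
```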
The begin state of this automaton is always the closure of the first item of the added rule, S → • E eof. Its closure, item set 0, is: S → • E eof, + E → • E * B, + E → • E + B, + E → • B, + B → • 0, + B → • 1. The boldfaced "+" in front of an item indicates the items that were added for the closure (not to be confused with the mathematical '+' operator, which is a terminal). The original items without a "+" are called the kernel of the item set. Starting at the begin state (S0), all of the states that can be reached from this state are now determined. The possible transitions for an item set can be found by looking at the symbols (terminals and nonterminals) found following the dots; in the case of item set 0, those symbols are the terminals '0' and '1' and the nonterminals E and B. To find the item set that each symbol x ∈ {0, 1, E, B} leads to, the following procedure is followed for each of the symbols: take the subset of all items in the current item set where there is a dot in front of the symbol x, advance the dot over x in each of those items, and close the resulting item set. For the terminal '0' (i.e. where x = '0') this results in item set 1: B → 0 •, and for the terminal '1' (i.e. where x = '1') this results in item set 2: B → 1 •, and for the nonterminal E (i.e. where x = E) this results in item set 3: S → E • eof, E → E • * B, E → E • + B, and for the nonterminal B (i.e. where x = B) this results in item set 4: E → B •. The closure does not add new items in all cases; in the new sets above, for example, there are no nonterminals following the dot. The above procedure is continued until no more new item sets are found. For the item sets 1, 2, and 4 there will be no transitions, since the dot is not in front of any symbol. For item set 3 though, we have dots in front of the terminals '*' and '+'. For symbol x = '*' the transition goes to item set 5: E → E * • B, + B → • 0, + B → • 1, and for x = '+' the transition goes to item set 6: E → E + • B, + B → • 0, + B → • 1. Now, the third iteration begins. For item set 5, the terminals '0' and '1' and the nonterminal B must be considered, but the resulting closed item sets for the terminals are equal to the already found item sets 1 and 2, respectively. For the nonterminal B, the transition goes to item set 7: E → E * B •. For item set 6, the terminals '0' and '1' and the nonterminal B must be considered, but as before, the resulting item sets for the terminals are equal to the already found item sets 1 and 2. For the nonterminal B the transition goes to item set 8: E → E + B •. These final item sets 7 and 8 have no symbols beyond their dots, so no more new item sets are added, and the item generating procedure is complete. The finite automaton, with the item sets as its states, has the following transition table:

item set |  *    +    0    1    E    B
    0    |            1    2    3    4
    3    |  5    6
    5    |            1    2         7
    6    |            1    2         8

From this table and the found item sets, the action and goto table are constructed as follows: (1) the columns for the nonterminals are copied to the goto table; (2) the columns for the terminals are copied to the action table as shift actions; (3) an extra column for '$' (end of input) is added to the action table, containing acc for every item set that contains the item S → E • eof; and (4) if an item set i contains an item of the form A → w • and A → w is rule number m with m > 0, then the row for state i in the action table is completely filled with the reduce action rm. The reader may verify that these steps produce the action and goto table presented earlier. Only step 4 of the above procedure produces reduce actions, and so all reduce actions must occupy an entire table row, causing the reduction to occur regardless of the next symbol in the input stream. This is why these are LR(0) parse tables: they don't do any lookahead (that is, they look ahead zero symbols) before deciding which reduction to perform. A grammar that needs lookahead to disambiguate reductions would require a parse table row containing different reduce actions in different columns, and the above procedure is not capable of creating such rows. Refinements to the LR(0) table construction procedure (such as SLR and LALR) are capable of constructing reduce actions that do not occupy entire rows. Therefore, they are capable of parsing more grammars than LR(0) parsers. The automaton is constructed in such a way that it is guaranteed to be deterministic.
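The closure and dot-advancing steps just described are short enough to execute. The following Python sketch (its item encoding and the names AUG, clos and goto_ are this sketch's own) computes clos(I) and the symbol transitions for the augmented grammar, then enumerates the reachable item sets; the transition on eof is skipped, since that case becomes the accept action rather than a new state.

```python
# LR(0) item-set construction for the augmented grammar. An item is a pair
# (rule index, dot position). clos() adds B -> . w' for every nonterminal B
# that appears just after a dot; goto_() advances the dot over one symbol.

AUG = [                      # rule 0 is the added start rule
    ("S", ("E", "eof")),
    ("E", ("E", "*", "B")),
    ("E", ("E", "+", "B")),
    ("E", ("B",)),
    ("B", ("0",)),
    ("B", ("1",)),
]
NONTERMS = {"S", "E", "B"}

def clos(items):
    items = set(items)
    while True:
        new = set()
        for rule, dot in items:
            _, rhs = AUG[rule]
            if dot < len(rhs) and rhs[dot] in NONTERMS:
                for r, (lhs, _) in enumerate(AUG):
                    if lhs == rhs[dot]:
                        new.add((r, 0))
        if new <= items:
            return frozenset(items)
        items |= new

def goto_(items, x):
    moved = {(rule, dot + 1) for rule, dot in items
             if dot < len(AUG[rule][1]) and AUG[rule][1][dot] == x}
    return clos(moved)

# Enumerate all reachable item sets, starting from clos({S -> . E eof}).
states = []
todo = [clos({(0, 0)})]
while todo:
    s = todo.pop()
    if s in states:
        continue
    states.append(s)
    follow = {AUG[r][1][d] for r, d in s if d < len(AUG[r][1])}
    todo.extend(goto_(s, x) for x in follow - {"eof"})  # eof becomes 'accept'

print(len(states), "item sets found")  # prints 9, matching states 0-8 above
```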
However, when reduce actions are added to the action table, it can happen that the same cell is filled with a reduce action and a shift action (a shift-reduce conflict) or with two different reduce actions (a reduce-reduce conflict). However, it can be shown that when this happens the grammar is not an LR(0) grammar. A classic real-world example of a shift-reduce conflict is the dangling else problem. A small example of a non-LR(0) grammar with a shift-reduce conflict is: (1) E → 1 E, (2) E → 1. One of the item sets found is item set 1: E → 1 • E, E → 1 •, + E → • 1 E, + E → • 1. There is a shift-reduce conflict in this item set: when constructing the action table according to the rules above, the cell for [item set 1, terminal '1'] contains s1 (shift to state 1) and r2 (reduce with grammar rule 2). A small example of a non-LR(0) grammar with a reduce-reduce conflict is: (1) E → A 1, (2) E → B 2, (3) A → 1, (4) B → 1. In this case the item set { A → 1 •, B → 1 • } is obtained. There is a reduce-reduce conflict in this item set because the cells in the action table for this item set will contain both a reduce action for rule 3 and one for rule 4. Both examples above can be solved by letting the parser use the follow set (see LL parser) of a nonterminal A to decide if it is going to use one of A's rules for a reduction; it will only use the rule A → w for a reduction if the next symbol on the input stream is in the follow set of A. This solution results in so-called Simple LR parsers.
https://en.wikipedia.org/wiki/LR_parser
file is a shell command for reporting the type of data contained in a file. It is commonly supported in Unix and Unix-like operating systems. As the command uses relatively quick-running heuristics to determine file type, it can report misleading information. The command can be fooled, for example, by including a magic number in the content even if the rest of the content does not match what the magic number indicates. The command's report cannot be taken as completely trustworthy. The Single UNIX Specification (SUS) requires the command to exhibit a specific sequence of tests on the file specified via the command line. Position-sensitive tests are normally implemented by matching various locations within the file against a textual database of magic numbers (see the Usage section). This differs from other simpler methods such as file extensions and schemes like MIME. In the System V implementation, the Ian Darwin implementation, and the OpenBSD implementation, the command uses a database to drive the probing of the lead bytes. That database is stored as a file that is located in /etc/magic, /usr/share/file/magic or similar. The file command originated in Unix Research Version 4[2] in 1973. System V brought a major update with several important changes, most notably moving the file type information into an external text file rather than compiling it into the binary itself. Most major BSD and Linux distributions include a free, open-source implementation that was written from scratch by Ian Darwin in 1986–87.[3] It keeps file type information in a text file with a format based on that of the System V version. It was expanded by Geoff Collyer in 1989 and since then has had input from many others, including Guy Harris, Chris Lowth and Eric Fischer. From late 1993 onward, its maintenance has been organized by Christos Zoulas. The OpenBSD system has its own subset implementation written from scratch, but still uses the Darwin/Zoulas collection of magic-file-formatted information. The file command was ported to the IBM i operating system.[4] As of version 4.00 of the Ian Darwin/Christos Zoulas implementation of file, the functionality of the command is implemented in and exposed by a libmagic library that is accessible to consuming code via C (and compatible) linking.[5][6][7][8] The SUS[9] mandates a small set of command-line options; implementations may add extra options. Ian Darwin's implementation adds -s 'special files', -k 'keep-going' and -r 'raw', among many others.[10] For a C source code file, file main.c reports the file as C program text. For a compiled executable, file program reports information such as the executable format, target architecture, and linking details. For a block device such as /dev/hda1, file /dev/hda1 reports that it is a block special device. By default, file does not try to read a device file, due to potential undesirable effects. But using the non-standard option -s (available in the Ian Darwin branch), which requests reading device files to identify their content, file -s /dev/hda1 reports details such as the filesystem found on the device. Via Ian Darwin's non-standard option -k, the command does not stop after the first hit found, but looks for other matching patterns. The -r option, which is available in some versions, causes the newline character to be displayed in its raw form rather than in its octal representation.
On Linux, file -k -r libmagic-dev_5.35-4_armhf.deb reports several matching descriptions for the Debian package, since with -k the command keeps going after the first hit. For a compressed file, file compressed.gz reports the compression format together with details such as the original file name stored in the gzip header. With the MIME option, file -i compressed.gz reports a MIME type such as application/gzip instead of the human-readable description. For a PPM file, file data.ppm reports the image format along with its dimensions. For a Mach-O universal binary, file /bin/cat reports the architectures contained in the fat binary. For a symbolic link, file /usr/bin/vi reports the link together with its target. Identifying a symbolic link as such is not available on all platforms, and the link will be dereferenced if -L is passed or POSIXLY_CORRECT is set.
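Since version 4.00 the same detection logic is available programmatically through libmagic, as noted above. As a sketch, the third-party python-magic package (an assumption of this example; it wraps libmagic and must be installed separately, along with a magic database) exposes that library to Python:

```python
# Query libmagic through the third-party python-magic binding.
# from_file() returns the human-readable description (like plain `file`);
# mime=True asks for the MIME type instead (similar to `file -i`).
import magic

print(magic.from_file("/bin/ls"))             # e.g. an ELF executable description
print(magic.from_file("/bin/ls", mime=True))  # e.g. "application/x-executable"
```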
https://en.wikipedia.org/wiki/File_(command)
In mathematics, Light's associativity test is a procedure invented by F. W. Light for testing whether a binary operation defined in a finite set by a Cayley multiplication table is associative. The naive procedure for verifying the associativity of a binary operation specified by a Cayley table, which compares the two products that can be formed from each triple of elements, is cumbersome. Light's associativity test simplifies the task in some instances (although it does not improve the worst-case runtime of the naive algorithm, namely $\mathcal{O}(n^3)$ for sets of size $n$). Let a binary operation ' · ' be defined in a finite set A by a Cayley table. Choosing some element a in A, two new binary operations are defined in A as follows: x ⋆ y = x · (a · y) and x ∘ y = (x · a) · y. The Cayley tables of these operations are constructed and compared. If the tables coincide, then x · (a · y) = (x · a) · y for all x and y. This is repeated for every element of the set A. The example below illustrates a further simplification in the procedure for constructing and comparing the Cayley tables of the operations ' ⋆ ' and ' ∘ '. It is not even necessary to construct the Cayley tables of ' ⋆ ' and ' ∘ ' for all elements of A. It is enough to compare the Cayley tables of ' ⋆ ' and ' ∘ ' corresponding to the elements in a proper generating subset of A. When the operation ' · ' is commutative, x ⋆ y = y ∘ x. As a result, only part of each Cayley table must be computed, because x ⋆ x = x ∘ x always holds, and x ⋆ y = x ∘ y implies y ⋆ x = y ∘ x. When there is an identity element e, it does not need to be included in the Cayley tables, because x ⋆ y = x ∘ y always holds if at least one of x and y is equal to e. Consider the binary operation ' · ' in the set A = {a, b, c, d, e} defined by the following Cayley table (Table 1): The set {c, e} is a generating set for the set A under this binary operation, since a = e · e, b = c · c, and d = c · e. Thus it is enough to verify that the binary operations ' ⋆ ' and ' ∘ ' corresponding to c coincide, and that the binary operations ' ⋆ ' and ' ∘ ' corresponding to e coincide. To verify that the binary operations ' ⋆ ' and ' ∘ ' corresponding to c coincide, choose the row in Table 1 corresponding to the element c: This row is copied as the header row of a new table (Table 3). Under the header a, copy the corresponding column in Table 1; under the header b, copy the corresponding column in Table 1; and so on, constructing Table 4. The column headers of Table 4 are now deleted to get Table 5: The Cayley table of the binary operation ' ⋆ ' corresponding to the element c is given by Table 6. Next choose the c column of Table 1: Copy this column to the index column to get Table 8: Against the index entry a in Table 8, copy the corresponding row in Table 1; against the index entry b, copy the corresponding row in Table 1; and so on, constructing Table 9. The index entries in the first column of Table 9 are now deleted to get Table 10: The Cayley table of the binary operation ' ∘ ' corresponding to the element c is given by Table 11.
One can verify that the entries in the various cells of Table 6 agree with the entries in the corresponding cells of Table 11. This shows that x · (c · y) = (x · c) · y for all x and y in A. If there were some discrepancy, then it would not be true that x · (c · y) = (x · c) · y for all x and y in A. That x · (e · y) = (x · e) · y for all x and y in A can be verified in a similar way by constructing the corresponding tables (Table 12 and Table 13). It is not necessary to construct the full Cayley tables (Table 6 and Table 11) of the binary operations ' ⋆ ' and ' ∘ '. It is enough to copy the column corresponding to the header c in Table 1 to the index column in Table 5, form the resulting table (Table 14), and verify that the a-row of Table 14 is identical with the a-row of Table 1, that the b-row of Table 14 is identical with the b-row of Table 1, and so on. This is to be repeated mutatis mutandis for all the elements of the generating set of A. Computer software can be written to carry out Light's associativity test. Kehayopulu and Argyris have developed such a program for Mathematica.[1] Light's associativity test can be extended to test associativity in a more general context.[2][3] Let T = {t1, t2, …, tm} be a magma in which the operation is denoted by juxtaposition, and let X = {x1, x2, …, xn} be a set. Let there be a mapping from the Cartesian product T × X to X, denoted by (t, x) ↦ tx, and let it be required to test whether this map has the property (st)x = s(tx) for all s, t in T and all x in X. A generalization of Light's associativity test can be applied to verify whether this property holds. In mathematical notation, the generalization runs as follows: for each t in T, let L(t) be the m × n matrix of elements of X whose i-th row is ((t_i t)x_1, (t_i t)x_2, …, (t_i t)x_n), and let R(t) be the m × n matrix of elements of X the elements of whose j-th column are t_1(t x_j), t_2(t x_j), …, t_m(t x_j). According to the generalised test (due to Bednarek), the property to be verified holds if and only if L(t) = R(t) for all t in T. When X = T, Bednarek's test reduces to Light's test. There is a randomized algorithm by Rajagopalan and Schulman to test associativity in time proportional to the input size. (The method also works for testing certain other identities.) Specifically, the runtime is $O(n^2 \log\frac{1}{\delta})$ for an $n \times n$ table and error probability $\delta$. The algorithm can be modified to produce a triple $\langle a, b, c\rangle$ for which $(ab)c \neq a(bc)$, if there is one, in time $O(n^2 \log n \cdot \log\frac{1}{\delta})$.[4]
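The basic test is short to implement. The sketch below is an illustrative Python rendering (the dict-of-dicts Cayley table and the optional generators argument are conventions chosen here, not part of Light's original formulation): for each chosen element a it compares x ⋆ y = x · (a · y) with x ∘ y = (x · a) · y over the whole table.

```python
def is_associative(elements, op, generators=None):
    """Light's associativity test for a finite Cayley table.

    elements:   the finite set A (any iterable)
    op:         dict of dicts, op[x][y] = x . y
    generators: optional generating subset of A; by the simplification
                described above it suffices to test only these elements.
    """
    A = list(elements)
    for a in (A if generators is None else generators):
        for x in A:
            for y in A:
                # compare x * y = x . (a . y) with x o y = (x . a) . y
                if op[x][op[a][y]] != op[op[x][a]][y]:
                    return False
    return True

# Example: addition modulo 3, whose table is associative; {1} generates Z3.
Z3 = [0, 1, 2]
table = {x: {y: (x + y) % 3 for y in Z3} for x in Z3}
assert is_associative(Z3, table)
assert is_associative(Z3, table, generators=[1])
```

Checking one element a costs one pass over the n × n table, so testing all of A is $\mathcal{O}(n^3)$, in line with the worst-case bound quoted above; supplying a generating set of size g reduces the work to $\mathcal{O}(g\,n^2)$.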
https://en.wikipedia.org/wiki/Light%27s_associativity_test
A shipping container is a container with strength suitable to withstand shipment, storage, and handling. Shipping containers range from large reusable steel boxes used for intermodal shipments to the ubiquitous corrugated boxes. In the context of international shipping trade, "container" or "shipping container" is virtually synonymous with "intermodal freight container" (sometimes informally called a "sea can"), a container designed to be moved from one mode of transport to another without unloading and reloading.[1] Freight containers are a reusable transport and storage unit for moving products and raw materials between locations or countries. There are about seventeen million intermodal containers in the world, and a large proportion of the world's long-distance freight generated by international trade is transported in shipping containers. In addition, it is estimated that several million of these containers have now been discarded because the cost of shipping them back to their port of origin is too high. Their invention made a major contribution to the globalization of commerce in the second half of the 20th century, dramatically reducing the cost of transporting goods and hence of long-distance trade.[2][3] Specialized shipping containers include: high-cube containers (providing an extra 1 ft (305 mm) in height over standard shipping containers), pallet-wides, open tops, side loaders, double-door or tunnel-tainers, and temperature-controlled containers. Another specialized container, known as the Transtainer, is a portable fuel and oil freight container. This hybrid bulk fuel tank was originally intended for the construction, mining, logging and farming sectors; it can be used to transport and store bulk fuels as well as dangerous liquids by road, rail and sea.[4] Sea containers also serve modern logistics more broadly as a cost-effective storage and shipping solution: their robust steel construction provides secure storage for goods, and beyond shipping they find applications in on-site storage and in modular living or working spaces, a reuse that extends their service life. Reusable steel boxes for use as truck-sized shipping containers first came into use around 1956. It took some time for businesses to devise structured processes for getting optimal benefit from shipping containers. Over time, late-20th-century telecommunications made standardized shipping containers still more valuable, rendering shipping processes more standardized, modular, easier to schedule and easier to manage.[5] Corrugated boxes are commonly used as shipping containers[6] (more than 90% of all shipping containers are of this type).[6][7] They are made of corrugated fiberboard, which is lightweight, recyclable, and strong enough to ship a variety of products. Wooden boxes are often used for shipping heavy and dense products, and are sometimes specified for government or military shipments. A crate is a large container, often made of wood, used to transport large, heavy or awkward items; it has a self-supporting structure, with or without sheathing. Reusable plastic versions include: An intermediate bulk container (IBC, IBC tote, IBC tank) is a multi-use container employed for the general transport, storage, and handling of bulk fluids and materials.
IBC tanks are compatible with, and resistant to, an extensive list of chemicals, acids and caustics, as well as inert materials and food-grade consumables. IBCs are commonly manufactured from the following materials: Some IBC engineering models are foldable (collapsible) for space-saving breakdown after use. A flexible intermediate bulk container, FIBC, big bag, bulk bag, or super sack is a standardized container of large dimensions for storing and transporting granular products; it is often made of a woven synthetic material. A bulk box, bulk bin, skid box, or tote box is a pallet-size box used for the storage and shipping of bulk quantities. Drums are cylindrical shipping containers made of steel, plastic or fiber; they are often used for liquids and granular materials. Insulated shipping containers are a type of packaging used to ship temperature-sensitive products such as foods, pharmaceuticals, and chemicals. They are used as part of a cold chain to help maintain product freshness and efficacy. Some pails are used as shipping containers.[8] A unit load device (ULD) is a container used to transport cargo on commercial aircraft. It can be a pallet or container used to load luggage, freight, and mail on wide-body aircraft and specific narrow-body aircraft. It allows a large quantity of cargo to be bundled into a single unit; since this leads to fewer units to load, it saves ground crews time and effort and helps prevent delayed flights. Each ULD has its own packing list, manifest, or tracking identification to improve control and tracking of its contents. Custom containers are used for shipments of products such as scientific instruments, weapons and aviation components.[9] Customized cushioning, blocking and bracing, carrying handles, lift rings, locks, and the like are common, to facilitate handling and to protect the contents. Often, these shipping containers are reusable. The reusable IFCO tray ("international fruit container") is used in Europe for the transportation of fruit, vegetables, and fish. Flight cases and transit cases are usually custom-designed for shipping and carrying fragile equipment: audio-visual gear, cameras, instruments, and the like. Although generally light in construction, they tend to have reinforced edges and corners. Road cases are often used for shipping musical instruments and theater props. Many types of shipping containers are reusable. Steel drums are frequently reconditioned and reused. Gas cylinders, transit cases and sometimes even corrugated boxes are reused.
The widespread availability and relative cheapness of used intermodal shipping containers have led architects to consider them as an alternative to traditional building materials.[10] Used shipping containers have been converted for use as housing, and as retail and office spaces.[11][12] Examples of such use include the Cité A Docks student housing project in Le Havre, France;[13] the Wenckehof container village in Amsterdam;[14] the portable Puma City store in US cities;[15][16] the food and retail Boxpark in London;[17] the Dordoy Bazaar in Bishkek, Kyrgyzstan;[18] the temporary mall Re:START in Christchurch, New Zealand, built after the 2011 Christchurch earthquake;[19] and intensive-care units in temporary hospitals during the COVID-19 pandemic.[20] The Smoky Park Supper Club in Asheville, North Carolina, opened in 2015, was constructed from 19 containers and is considered "America's largest recycled shipping container restaurant".[21] It has, however, been pointed out that there are problems with recycling shipping containers, and that it may not be as ecologically friendly or as cheap an option as it might appear. The containers may be coated with harmful chemicals such as chromate, phosphorus, and lead-based paints, their wooden floors may be treated with toxic insecticides, and some cost and effort are involved in modifying containers to make them habitable.[10] Others have noted issues such as space constraints, insulation, and structural weakness if too much steel is cut out of the containers.[22][23] Shipping containers are used in the film and television industry for building temporary sets. They can be stacked on top of each other and used as reinforced scaffolding against which large-scale film sets can be built. An example can be seen at Leavesden Studios, England, where an area of the studio backlot is allocated to spare containers when not in use.[citation needed] Reefer containers, or refrigerated containers, are containers built to haul refrigerated or frozen products. These containers can be repurposed for container housing or prefabricated for housing purposes. Their advantage is the insulation in the walls, ceiling, and floor, compared with the corrugated metal of standard shipping containers, which can get very hot or cold depending on the weather outside. Prefabricated reefer containers, with the wiring run through the walls and the plumbing run through the ceiling and floor before the insulation, interior walls, and floors are installed, are more practical than attempting the same with a repurposed used reefer container.[24]
https://en.wikipedia.org/wiki/Shipping_container#Re-use
In mathematics, a Fourier–Bessel series is a particular kind of generalized Fourier series (an infinite series expansion on a finite interval) based on Bessel functions. Fourier–Bessel series are used in the solution of partial differential equations, particularly in cylindrical coordinate systems. The Fourier–Bessel series of a function $f : [0, b] \to \mathbb{R}$ satisfying $f(b) = 0$ is the representation of that function as a linear combination of many orthogonal versions of the same Bessel function of the first kind $J_\alpha$, where the argument to each version $n$ is differently scaled, according to[1][2]

$$(J_\alpha)_n(x) := J_\alpha\!\left(\frac{u_{\alpha,n}}{b}\,x\right),$$

where $u_{\alpha,n}$ is a root, numbered $n$, associated with the Bessel function $J_\alpha$, and $c_n$ are the assigned coefficients:[3]

$$f(x) \sim \sum_{n=1}^{\infty} c_n\, J_\alpha\!\left(\frac{u_{\alpha,n}}{b}\,x\right).$$

The Fourier–Bessel series may be thought of as a Fourier expansion in the ρ coordinate of cylindrical coordinates. Just as the Fourier series is defined for a finite interval and has a counterpart, the continuous Fourier transform over an infinite interval, so the Fourier–Bessel series has a counterpart over an infinite interval, namely the Hankel transform. As said, differently scaled Bessel functions are orthogonal with respect to the inner product

$$\langle f, g\rangle = \int_0^b x\, f(x)\, g(x)\, dx,$$

according to

$$\int_0^b x\, J_\alpha\!\left(\frac{x u_{\alpha,n}}{b}\right) J_\alpha\!\left(\frac{x u_{\alpha,m}}{b}\right) dx = \frac{b^2}{2}\,\delta_{mn}\,[J_{\alpha+1}(u_{\alpha,n})]^2,$$

where $\delta_{mn}$ is the Kronecker delta. The coefficients can be obtained by projecting the function $f(x)$ onto the respective Bessel functions:

$$c_n = \frac{\langle f, (J_\alpha)_n\rangle}{\langle (J_\alpha)_n, (J_\alpha)_n\rangle} = \frac{\int_0^b x\, f(x)\, (J_\alpha)_n(x)\, dx}{\tfrac{1}{2}\,(b\, J_{\alpha\pm1}(u_{\alpha,n}))^2},$$

where the plus or minus sign is equally valid. For the inverse transform, one makes use of the following representation of the Dirac delta function:[4]

$$\frac{2 x^{\alpha} y^{1-\alpha}}{b^2} \sum_{k=1}^{\infty} \frac{J_\alpha\!\left(\frac{x u_{\alpha,k}}{b}\right) J_\alpha\!\left(\frac{y u_{\alpha,k}}{b}\right)}{J_{\alpha+1}^2(u_{\alpha,k})} = \delta(x - y).$$

Fourier–Bessel series coefficients are unique for a given signal, and there is a one-to-one mapping between continuous frequency $F_n$ and order index $n$, which can be expressed as follows:

$$u_n = \frac{2\pi F_n L}{F_s}.$$

Since $u_n = u_{n-1} + \pi \approx n\pi$, the above equation can be rewritten as

$$F_n = \frac{F_s\, n}{2L},$$

where $L$ is the length of the signal and $F_s$ is the sampling frequency of the signal.
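As a numerical illustration of the coefficient formula above, the following sketch uses SciPy's Bessel routines (jn_zeros for the roots $u_{\alpha,n}$, jv for $J_\alpha$) with an arbitrary test function $f(x) = x(b - x)$, chosen here only because it satisfies $f(b) = 0$:

```python
import numpy as np
from scipy.special import jv, jn_zeros
from scipy.integrate import quad

b, alpha, N = 1.0, 0, 20            # interval [0, b], order alpha, number of terms
roots = jn_zeros(alpha, N)          # u_{alpha,1..N}: positive zeros of J_alpha

def f(x):
    return x * (b - x)              # test function, chosen so that f(b) = 0

# c_n = ( integral_0^b x f(x) J_alpha(u_n x / b) dx ) / ( (b^2/2) J_{alpha+1}(u_n)^2 )
c = np.empty(N)
for i, u in enumerate(roots):
    num, _ = quad(lambda x, u=u: x * f(x) * jv(alpha, u * x / b), 0, b)
    c[i] = num / (0.5 * b**2 * jv(alpha + 1, u) ** 2)

# The truncated series should approximate f in the interior of (0, b).
x = np.linspace(0.05 * b, 0.95 * b, 200)
series = sum(c[i] * jv(alpha, roots[i] * x / b) for i in range(N))
print(np.max(np.abs(series - f(x))))   # small residual
```

Increasing N shrinks the residual, in the same way that adding harmonics sharpens an ordinary Fourier partial sum.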
For an image $f(x, y)$ of size M×N, the synthesis equation for the order-0 2D Fourier–Bessel series expansion is as follows:

$$f(x, y) = \sum_{m=1}^{M} \sum_{n=1}^{N} F(m, n)\, J_0\!\left(\frac{u_{0,n}\, y}{N}\right) J_0\!\left(\frac{u_{0,m}\, x}{M}\right),$$

where $F(m, n)$ are the 2D Fourier–Bessel series expansion coefficients, given by

$$F(m, n) = \frac{4}{\alpha_1} \sum_{x=0}^{M-1} \sum_{y=0}^{N-1} x\, y\, f(x, y)\, J_0\!\left(\frac{u_{0,n}\, y}{N}\right) J_0\!\left(\frac{u_{0,m}\, x}{M}\right),$$

with $\alpha_1 = (NM)^2\, (J_1(u_{0,m})\, J_1(u_{0,n}))^2$.

For a signal of length $b$, Fourier–Bessel-based spectral entropies such as the Shannon spectral entropy ($H_{\text{SSE}}$), the log energy entropy ($H_{\text{LE}}$), and the Wiener entropy ($H_{\text{WE}}$) are defined as follows:

$$H_{\text{SSE}} = -\sum_{n=1}^{b} P(n)\, \log_2 P(n), \qquad H_{\text{WE}} = b\, \frac{\sqrt[b]{\prod_{n=1}^{b} E_n}}{\sum_{n=1}^{b} E_n}, \qquad H_{\text{LE}} = -\sum_{n=1}^{b} \log_2 P(n),$$

where $P(n)$ is the normalized energy distribution,

$$P(n) = \frac{E_n}{\sum_{n=1}^{b} E_n},$$

and $E_n$ is the energy spectrum,

$$E_n = \frac{c_n^2\, b^2\, [J_1(u_{1,n})]^2}{2}.$$

The empirical wavelet transform (EWT) is a multi-scale signal-processing approach for the decomposition of a multi-component signal into intrinsic mode functions (IMFs).[5] The EWT is based on the design of an empirical-wavelet-based filter bank derived from segmenting the Fourier spectrum of the multi-component signal. The segmentation of the Fourier spectrum is performed by detecting peaks and then evaluating boundary points.[5] For non-stationary signals, the Fourier–Bessel series expansion (FBSE) is a natural choice, as it uses Bessel functions as the basis for analysis and synthesis of the signal. The FBSE spectrum produces as many frequency bins as the length of the signal in the frequency range [0, $F_s/2$]. Therefore, in FBSE-EWT, the boundary points are detected using the FBSE-based spectrum of the non-stationary signal. Once the boundary points are obtained, the empirical-wavelet-based filter bank is designed in the Fourier domain of the multi-component signal to evaluate the IMFs. The FBSE-based method used in FBSE-EWT produces a higher number of boundary points than the FFT-based spectrum in the original EWT. Features extracted from the IMFs of EEG and ECG signals obtained with the FBSE-EWT approach have shown better performance for the automated detection of neurological and cardiac ailments.
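Returning to the spectral-entropy definitions above, the sketch below computes them for a 1-D signal. It is illustrative only, and it assumes the order-0 FBSE convention used in the FBSE-DST equations that follow (roots of $J_0$, normalization via $J_1$ evaluated at those roots); the test signal is arbitrary:

```python
import numpy as np
from scipy.special import jv, jn_zeros

def fbse_entropies(x):
    """Shannon, log-energy and Wiener spectral entropies of a 1-D signal,
    computed from FBSE coefficients (sketch; order-0 convention with
    roots of J_0, as in the FBSE-DST equations below)."""
    N = len(x)
    lam = jn_zeros(0, N)                      # first N positive roots of J_0
    n = np.arange(N)
    Y = np.array([2.0 / (N**2 * jv(1, l)**2) *
                  np.sum(n * x * jv(0, l * n / N)) for l in lam])
    E = Y**2 * N**2 * jv(1, lam)**2 / 2       # energy spectrum E_n
    P = E / E.sum()                           # normalized energy distribution
    h_sse = -np.sum(P * np.log2(P))           # Shannon spectral entropy
    h_le = -np.sum(np.log2(P))                # log energy entropy
    h_we = N * np.exp(np.log(E).mean()) / E.sum()   # Wiener entropy (flatness)
    return h_sse, h_le, h_we

rng = np.random.default_rng(0)
sig = np.sin(0.3 * np.arange(256)) + 0.01 * rng.standard_normal(256)
print(fbse_entropies(sig))   # a near-pure tone gives low spectral flatness
```

A broadband noise signal concentrates no energy in any bin, so all three entropies rise relative to the tonal example.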
For a discrete-time signal x(n), the FBSE-domain discrete Stockwell transform (FBSE-DST) is evaluated as follows:

$$T(n, l) = \sum_{m=1}^{L} Y(m + l)\, g(m, l)\, J_0\!\left(\frac{\lambda_l}{N}\, n\right),$$

where the Y(l) are the FBSE coefficients, calculated using the following expression:

$$Y(l) = \frac{2}{N^2 [J_1(\lambda_l)]^2} \sum_{n=0}^{N-1} n\, x(n)\, J_0\!\left(\frac{\lambda_l}{N}\, n\right).$$

Here $\lambda_l$ is termed the $l$-th root of the Bessel function, and it is evaluated in an iterative manner based on the solution of $J_0(\lambda_l) = 0$ using the Newton–Raphson method. Similarly, g(m, l) is the FBSE-domain Gaussian window, given as follows:

$$g(m, l) = e^{-\frac{2\pi^2 \lambda_m^2}{\lambda_l^2}}, \qquad l, m = 1, 2, \ldots, L.$$

For multicomponent amplitude- and frequency-modulated (AM–FM) signals, the discrete energy separation algorithm (DESA) together with Gabor filtering is a traditional approach for estimating the amplitude envelope (AE) and the instantaneous frequency (IF) functions.[6] It has been observed that the filtering operation distorts the amplitude and phase modulations in the separated monocomponent signals. The Fourier–Bessel series expansion does not require the use of a window function in order to obtain the spectrum of the signal. It represents a real signal in terms of real Bessel basis functions, and it provides a representation of real signals in terms of positive frequencies. The basis functions used are aperiodic in nature and convergent, and they include amplitude modulation in the representation. The Fourier–Bessel series expansion spectrum provides as many frequency points as the signal length. The Fourier–Bessel series expansion has been successfully applied in areas as diverse as gear fault diagnosis,[7] discrimination of odorants in a turbulent ambient,[8] postural stability analysis, detection of voice onset time, glottal closure instant (epoch) detection, separation of speech formants, speech enhancement,[9] and speaker identification.[10] The Fourier–Bessel series expansion has also been used to reduce cross terms in the Wigner–Ville distribution. A second Fourier–Bessel series, also known as the Dini series, is associated with the Robin boundary condition

$$b f'(b) + c f(b) = 0,$$

where $c$ is an arbitrary constant. The Dini series can be defined by

$$f(x) \sim \sum_{n=1}^{\infty} b_n\, J_\alpha(\gamma_n x / b),$$

where $\gamma_n$ is the $n$-th zero of $x J'_\alpha(x) + c J_\alpha(x)$. The coefficients $b_n$ are given by

$$b_n = \frac{2 \gamma_n^2}{b^2 (c^2 + \gamma_n^2 - \alpha^2)\, J_\alpha^2(\gamma_n)} \int_0^b J_\alpha(\gamma_n x / b)\, f(x)\, x\, dx.$$
https://en.wikipedia.org/wiki/Fourier%E2%80%93Bessel_series
In mathematics, the max–min inequality is as follows: for any function $f : Z \times W \to \mathbb{R}$,

$$\sup_{z \in Z} \inf_{w \in W} f(z, w) \leq \inf_{w \in W} \sup_{z \in Z} f(z, w).$$

When equality holds, one says that $f$, $W$, and $Z$ satisfy a strong max–min property (or a saddle-point property). The example function $f(z, w) = \sin(z + w)$ illustrates that the equality does not hold for every function. A theorem giving conditions on $f$, $W$, and $Z$ which guarantee the saddle-point property is called a minimax theorem. Define $g(z) \triangleq \inf_{w \in W} f(z, w)$. For all $z \in Z$, we get $g(z) \leq f(z, w)$ for all $w \in W$, by definition of the infimum being a lower bound. Next, for all $w \in W$, $f(z, w) \leq \sup_{z \in Z} f(z, w)$ for all $z \in Z$, by definition of the supremum being an upper bound. Thus, for all $z \in Z$ and $w \in W$,

$$g(z) \leq f(z, w) \leq \sup_{z \in Z} f(z, w),$$

making $h(w) \triangleq \sup_{z \in Z} f(z, w)$ an upper bound on $g(z)$ for any choice of $w \in W$. Because the supremum is the least upper bound, $\sup_{z \in Z} g(z) \leq h(w)$ holds for all $w \in W$. From this inequality, we also see that $\sup_{z \in Z} g(z)$ is a lower bound on $h(w)$. By the greatest-lower-bound property of the infimum, $\sup_{z \in Z} g(z) \leq \inf_{w \in W} h(w)$. Putting all the pieces together, we get

$$\sup_{z \in Z} \inf_{w \in W} f(z, w) = \sup_{z \in Z} g(z) \leq \inf_{w \in W} h(w) = \inf_{w \in W} \sup_{z \in Z} f(z, w),$$

which proves the desired inequality. ∎
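The strictness of the inequality for $f(z, w) = \sin(z + w)$ is easy to check numerically. The sketch below is illustrative only: grids over $[0, 2\pi)$ stand in for the continuous domains, so both sides are computed only approximately.

```python
import numpy as np

# f(z, w) = sin(z + w) on Z = W = [0, 2*pi)
z = np.linspace(0.0, 2.0 * np.pi, 400, endpoint=False)
w = np.linspace(0.0, 2.0 * np.pi, 400, endpoint=False)
F = np.sin(z[:, None] + w[None, :])      # F[i, j] = f(z_i, w_j)

max_min = F.min(axis=1).max()            # sup_z inf_w f(z, w)
min_max = F.max(axis=0).min()            # inf_w sup_z f(z, w)

print(max_min, min_max)                  # approximately -1.0 and +1.0
assert max_min <= min_max                # the max-min inequality
```

For this $f$, the inner infimum is $-1$ for every $z$ and the inner supremum is $+1$ for every $w$, so the gap between the two sides is as large as the range of $f$ allows.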
https://en.wikipedia.org/wiki/Max%E2%80%93min_inequality
The 3-subset meet-in-the-middle (hereafter shortened MITM) attack is a variant of the generic meet-in-the-middle attack, which is used in cryptology for hash and block cipher cryptanalysis. The 3-subset variant opens up the possibility of applying MITM attacks to ciphers where it is not trivial to divide the keybits into two independent key-spaces, as required by the MITM attack. The 3-subset variant relaxes the restriction that the key-spaces be independent by moving the intersecting parts of the keyspaces into a subset that contains the keybits common to the two key-spaces. The original MITM attack was first suggested in an article by Diffie and Hellman in 1977, where they discussed the cryptanalytic properties of DES.[1] They argued that the keysize of DES was too small, and that reapplying DES multiple times with different keys could be a solution; however, they advised against double-DES and suggested triple-DES as a minimum, due to MITM attacks (double-DES is very susceptible to a MITM attack, as DES can easily be split into two subciphers, the first and second DES encryptions, with keys independent of one another, allowing for a basic MITM attack that reduces the computational complexity from $2^{112} (= 2^{2 \times 56})$ to $2^{57} (= 2 \times 2^{56})$). Many variations have emerged since Diffie and Hellman suggested MITM attacks. These variations either make MITM attacks more effective or allow them to be used in situations where the basic variant cannot. The 3-subset variant was shown by Bogdanov and Rechberger in 2011,[2] and it has shown its use in the cryptanalysis of ciphers such as the lightweight block-cipher family KTANTAN. As with general MITM attacks, the attack is split into two phases: a key-reducing phase and a key-verification phase. In the first phase, the domain of key candidates is reduced by applying the MITM attack. In the second phase, the found key candidates are tested on another plain-/ciphertext pair to filter away the wrong key(s). In the key-reducing phase, the attacked cipher is split into two subciphers, $f$ and $g$, each with its own independent keybits, as is normal with MITM attacks. Instead of having to conform to the limitation that the keybits of the two subciphers be independent, the 3-subset attack allows the cipher to be split into two subciphers where some of the bits are used by both. This is done by splitting the key into three subsets: $A_0$, the keybits used by both $f$ and $g$; $A_1$, the keybits used only by $f$; and $A_2$, the keybits used only by $g$. To carry out the MITM attack, the three subsets are bruteforced individually: for each guess of the common bits $A_0$, the intermediate value $i = f(P)$ is computed forward from the plaintext for every choice of $A_1$ and stored in a table; then the intermediate value $j = g^{-1}(C)$ is computed backward from the ciphertext for every choice of $A_2$, and every combination for which $i = j$ is recorded as a key candidate. Each key candidate found in the key-reducing phase is then tested with another plain-/ciphertext pair. This is done simply by seeing whether the encryption of the plaintext, P, yields the known ciphertext, C. Usually only a few other pairs are needed, which gives the 3-subset MITM attack a very low data complexity. The following example is based on the attack done by Rechberger and Bogdanov on the KTANTAN cipher family. The naming conventions used in their paper are also used for this example. The attack reduces the computational complexity of KTANTAN32 to $2^{75.170}$, down from $2^{80}$ for a bruteforce attack. A computational complexity of $2^{75.170}$ was, as of 2014, still not practical to break, and the attack is thus not computationally feasible as of now.
The same goes for KTANTAN48 and KTANTAN64, whose complexities can be seen at the end of the example. The attack is possible due to weaknesses exploited in KTANTAN's bit-wise key schedule. It is applicable to KTANTAN32, KTANTAN48 and KTANTAN64, since all the variants use the same key schedule. It is not applicable to the related KATAN family of block ciphers, due to the differences in the key schedule between KTANTAN and KATAN. KTANTAN is a lightweight block cipher, meant for constrained platforms such as RFID tags, where a cryptographic primitive such as AES would be either impossible (given the hardware) or too expensive to implement. It was invented by De Cannière, Dunkelman and Knežević in 2009.[3] It takes a block size of either 32, 48 or 64 bits, and encrypts it using an 80-bit key over 254 rounds. Each round utilizes two bits of the key (selected by the key schedule) as the round key. In preparation for the attack, weaknesses in the key schedule of KTANTAN that allow the 3-subset MITM attack were identified. Since only two key bits are used each round, the diffusion of the key per round is small: the safety lies in the number of rounds. Due to this structure of the key schedule, it was possible to find a large number of consecutive rounds which never utilized certain key bits. More precisely, the authors of the attack found that: This characteristic of the key schedule is used to stage the 3-subset MITM attack, as we are now able to split the cipher into two blocks with independent key bits. The parameters for the attack are thus: One may notice a problem with the matching step of the key-reducing phase. It is not possible to compare the values of $i$ and $j$ directly, as $i$ is calculated at the end of round 111 and $j$ is calculated at the start of round 131. This is mitigated by another MITM technique called partial matching. The authors found, by calculating forwards from the intermediate value $i$ and backwards from the intermediate value $j$, that at round 127, 8 bits were still unchanged in both $i$ and $j$ with probability one. They thus compared only that part of the state, namely those 8 bits (it was 8 bits at round 127 for KTANTAN32; it was 10 bits at round 123 and 47 bits at round 131 for KTANTAN48 and KTANTAN64, respectively). Doing this yields more false positives, but nothing that increases the complexity of the attack noticeably. KTANTAN32 now requires on average 2 pairs to find the key candidate, due to the false positives from matching on only part of the state of the intermediate values. KTANTAN48 and KTANTAN64 on average still require only one plain-/ciphertext pair to test and find the correct key candidates. The complexities quoted for the three variants are taken from the article by Rechberger and Bogdanov. This is no longer the best attack on KTANTAN. The best attack as of 2011 is due to Wei, Rechberger, Guo, Wu, Wang and Ling, who improved upon the MITM attack on the KTANTAN family.[4] They arrived at a computational complexity of $2^{72.9}$ with 4 chosen plain-/ciphertext pairs, using indirect partial matching and splice-and-cut MITM techniques.
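To make the two phases concrete, here is a toy illustration in Python. The 16-bit cipher below is invented purely for this example (it has nothing to do with KTANTAN), with 4-bit subsets: A0 shared between the subciphers f and g, A1 used only by f, and A2 used only by g.

```python
MASK = 0xFFFF

def rol(x, r):                                  # 16-bit rotate left
    x &= MASK
    return ((x << r) | (x >> (16 - r))) & MASK

def ror(x, r):                                  # 16-bit rotate right
    x &= MASK
    return ((x >> r) | (x << (16 - r))) & MASK

def f(a0, a1, p):                               # first subcipher: keybits A0, A1
    return rol((p ^ a0) + a1, 5)

def g(a0, a2, v):                               # second subcipher: keybits A0, A2
    return ((rol(v, 3) ^ a2) + a0) & MASK

def g_inv(a0, a2, c):                           # g run backwards from the ciphertext
    return ror(((c - a0) & MASK) ^ a2, 3)

def encrypt(key, p):
    a0, a1, a2 = key                            # 4 bits each: shared, f-only, g-only
    return g(a0, a2, f(a0, a1, p))

def attack(pairs):
    (p0, c0), *rest = pairs
    candidates = []
    for a0 in range(16):                        # guess the common subset A0
        table = {}
        for a1 in range(16):                    # forward phase over A1
            table.setdefault(f(a0, a1, p0), []).append(a1)
        for a2 in range(16):                    # backward phase over A2
            for a1 in table.get(g_inv(a0, a2, c0), []):
                candidates.append((a0, a1, a2))  # intermediate values matched
    # key-verification phase: test the candidates on the remaining pairs
    return [k for k in candidates if all(encrypt(k, p) == c for p, c in rest)]

secret = (0x3, 0xA, 0x5)
pairs = [(p, encrypt(secret, p)) for p in (0x1234, 0xBEEF, 0x0042)]
print(attack(pairs))        # prints the surviving candidates, including the secret key
```

For each guess of A0 the work is about $2^{|A_1|} + 2^{|A_2|}$ instead of $2^{|A_1| + |A_2|}$, which is where the savings over plain bruteforce come from.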
https://en.wikipedia.org/wiki/3-subset_meet-in-the-middle_attack
Apriori[1] is an algorithm for frequent item set mining and association rule learning over relational databases. It proceeds by identifying the frequent individual items in the database and extending them to larger and larger item sets as long as those item sets appear sufficiently often in the database. The frequent item sets determined by Apriori can be used to determine association rules which highlight general trends in the database: this has applications in domains such as market basket analysis. The Apriori algorithm was proposed by Agrawal and Srikant in 1994. Apriori is designed to operate on databases containing transactions (for example, collections of items bought by customers, or details of website frequentation or IP addresses[2]). Other algorithms are designed for finding association rules in data having no transactions (Winepi and Minepi), or having no timestamps (DNA sequencing). Each transaction is seen as a set of items (an itemset). Given a threshold $C$, the Apriori algorithm identifies the item sets which are subsets of at least $C$ transactions in the database. Apriori uses a "bottom-up" approach, where frequent subsets are extended one item at a time (a step known as candidate generation), and groups of candidates are tested against the data. The algorithm terminates when no further successful extensions are found. Apriori uses breadth-first search and a hash tree structure to count candidate item sets efficiently. It generates candidate item sets of length $k$ from item sets of length $k-1$. Then it prunes the candidates which have an infrequent sub-pattern. According to the downward closure lemma, the candidate set then contains all frequent $k$-length item sets. After that, it scans the transaction database to determine frequent item sets among the candidates. The pseudocode for the algorithm is given below for a transaction database $T$ and a support threshold of $\varepsilon$. Usual set-theoretic notation is employed, though note that $T$ is a multiset. $C_k$ is the candidate set for level $k$. At each step, the algorithm is assumed to generate the candidate sets from the large item sets of the preceding level, heeding the downward closure lemma. $\mathrm{count}[c]$ accesses a field of the data structure that represents candidate set $c$, which is initially assumed to be zero. Many details are omitted; usually the most important part of the implementation is the data structure used for storing the candidate sets and counting their frequencies. Consider the following database, where each row is a transaction and each cell is an individual item of the transaction: The association rules that can be determined from this database are the following: We can also illustrate this through a variety of examples. Assume that a large supermarket tracks sales data by stock-keeping unit (SKU) for each item: each item, such as "butter" or "bread", is identified by a numerical SKU. The supermarket has a database of transactions where each transaction is a set of SKUs that were bought together. Let the database of transactions consist of the following itemsets: We will use Apriori to determine the frequent item sets of this database. To do this, we will say that an item set is frequent if it appears in at least 3 transactions of the database: the value 3 is the support threshold.
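Before tracing the counts, here is the algorithm itself rendered as a Python sketch (an illustration of the scheme just described, not Agrawal and Srikant's original pseudocode): generate level-k candidates from the frequent (k−1)-sets, prune by the downward-closure lemma, then scan the transactions to count support.

```python
from itertools import combinations

def apriori(transactions, min_support):
    """Return {frozenset: support} for all item sets with support >= min_support."""
    transactions = [frozenset(t) for t in transactions]
    # Level 1: count single items.
    counts = {}
    for t in transactions:
        for item in t:
            s = frozenset([item])
            counts[s] = counts.get(s, 0) + 1
    frequent = {s: c for s, c in counts.items() if c >= min_support}
    result = dict(frequent)
    k = 2
    while frequent:
        prev = set(frequent)
        # Candidate generation: join (k-1)-sets; keep size-k unions whose
        # (k-1)-subsets are all frequent (downward-closure pruning).
        candidates = {a | b for a in prev for b in prev if len(a | b) == k}
        candidates = {c for c in candidates
                      if all(frozenset(s) in prev for s in combinations(c, k - 1))}
        # Support counting: one scan of the database per level.
        counts = {c: 0 for c in candidates}
        for t in transactions:
            for c in candidates:
                if c <= t:
                    counts[c] += 1
        frequent = {c: n for c, n in counts.items() if n >= min_support}
        result.update(frequent)
        k += 1
    return result
```

A production implementation would replace the per-level dictionary with a hash tree, as noted above, so that each transaction is matched against candidates without a full scan of the candidate list.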
The first step of Apriori is to count up the number of occurrences, called the support, of each member item separately. By scanning the database for the first time, we obtain the following result: All the itemsets of size 1 have a support of at least 3, so they are all frequent. The next step is to generate a list of all pairs of the frequent items. For example, regarding the pair {1,2}: the first table of Example 2 shows items 1 and 2 appearing together in three of the itemsets; therefore, we say that the item set {1,2} has a support of three. The pairs {1,2}, {2,3}, {2,4}, and {3,4} all meet or exceed the minimum support of 3, so they are frequent. The pairs {1,3} and {1,4} are not. Now, because {1,3} and {1,4} are not frequent, any larger set which contains {1,3} or {1,4} cannot be frequent. In this way, we can prune sets: we will now look for frequent triples in the database, but we can already exclude all the triples that contain one of these two pairs. In the example, there are no frequent triplets: {2,3,4} is below the minimal threshold, and the other triplets were excluded because they are supersets of pairs that were already below the threshold. We have thus determined the frequent sets of items in the database, and illustrated how some items were not counted because one of their subsets was already known to be below the threshold. Apriori, while historically significant, suffers from a number of inefficiencies or trade-offs, which have spawned other algorithms. Candidate generation produces large numbers of subsets (the algorithm attempts to load up the candidate set with as many subsets as possible before each scan of the database). Bottom-up subset exploration (essentially a breadth-first traversal of the subset lattice) finds any maximal subset S only after considering all $2^{|S|} - 1$ of its proper subsets. The algorithm also scans the database many times, which reduces overall performance; because of this, the algorithm assumes that the database is permanently in memory. Moreover, both the time and space complexity of this algorithm are very high: $O(2^{|D|})$, thus exponential, where $|D|$ is the horizontal width (the total number of items) present in the database. Later algorithms such as Max-Miner[3] try to identify the maximal frequent item sets without enumerating their subsets, and perform "jumps" in the search space rather than taking a purely bottom-up approach.
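For completeness, the sketch above can be run on a small transaction list with a support threshold of 3. The exact transactions of the supermarket example are not reproduced here; the seven itemsets below are an assumption chosen to be consistent with the pair supports quoted in the walkthrough ({1,2}, {2,3}, {3,4} at 3, {2,4} at 4, {1,3} at 1, {1,4} at 2, and {2,3,4} at only 2):

```python
db = [{1, 2, 3, 4}, {1, 2, 4}, {1, 2}, {2, 3, 4}, {2, 3}, {3, 4}, {2, 4}]
freq = apriori(db, min_support=3)
for itemset in sorted(freq, key=lambda s: (len(s), sorted(s))):
    print(sorted(itemset), freq[itemset])
# {1},{2},{3},{4} are all frequent; the pairs {1,2},{2,3},{2,4},{3,4} survive;
# no triple reaches support 3, since {2,3,4} appears in only two transactions.
```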
https://en.wikipedia.org/wiki/Apriori_algorithm
Computational creativity (also known as artificial creativity, mechanical creativity, creative computing or creative computation) is a multidisciplinary endeavour located at the intersection of the fields of artificial intelligence, cognitive psychology, philosophy, and the arts (e.g., computational art as part of computational culture[1]). It is the application of computer systems to emulate human-like creative processes, facilitating the generation of artistic and design outputs that mimic innovation and originality. The goal of computational creativity is to model, simulate or replicate creativity using a computer, to achieve one of several ends:[2] The field of computational creativity concerns itself with theoretical and practical issues in the study of creativity. Theoretical work on the nature and proper definition of creativity is performed in parallel with practical work on the implementation of systems that exhibit creativity, with one strand of work informing the other. The applied form of computational creativity is known as media synthesis. Theoretical approaches concern the essence of creativity: in particular, under what circumstances it is possible to call a model "creative", if eminent creativity is about rule-breaking or the disavowal of convention. This is a variant of Ada Lovelace's objection to machine intelligence, as recapitulated by modern theorists such as Teresa Amabile.[3] If a machine can do only what it was programmed to do, how can its behavior ever be called creative? Indeed, not all computer theorists would agree with the premise that computers can only do what they are programmed to do,[4] a key point in favor of computational creativity. Because no single perspective or definition seems to offer a complete picture of creativity, the AI researchers Newell, Shaw and Simon[5] developed the combination of novelty and usefulness into the cornerstone of a multi-pronged view of creativity, one that uses the following four criteria to categorize a given answer or solution as creative: Margaret Boden focused on the first two of these criteria, arguing instead that creativity (at least when asking whether computers could be creative) should be defined as "the ability to come up with ideas or artifacts that are new, surprising, and valuable".[6] Mihaly Csikszentmihalyi argued that creativity had to be considered instead in a social context, and his DIFI (Domain-Individual-Field Interaction) framework has since strongly influenced the field.[7] In DIFI, an individual produces works whose novelty and value are assessed by the field (other people in society), which provides feedback and ultimately adds the work, now deemed creative, to the domain of societal works by which an individual might later be influenced. Whereas the above reflects a top-down approach to computational creativity, an alternative thread has developed among bottom-up computational psychologists involved in artificial neural network research. During the late 1980s and early 1990s, for example, such generative neural systems were driven by genetic algorithms.[8] Experiments involving recurrent nets[9] were successful in hybridizing simple musical melodies and predicting listener expectations. The use of computational processes to generate creative artifacts has been present since early in history.
During the late 1800s, methods for composing music combinatorially were explored, involving prominent figures like Mozart, Bach, Haydn, and Kirnberger.[10] This approach extended to analytical endeavors as early as 1934, when simple mechanical models were built to explore mathematical problem solving.[11] Professional interest in the creative aspect of computation was also commonly addressed in early discussions of artificial intelligence: the 1956 Dartmouth Conference listed creativity, invention, and discovery as key goals for artificial intelligence.[12] As the development of computers allowed systems of greater complexity, the 1970s and 1980s saw the invention of early systems that modelled creativity using symbolic or rule-based approaches. The field of creative storytelling investigated several such models. Meehan's TALE-SPIN (1977) generated narratives through the simulation of character goals and decision trees. Dehn's AUTHOR (1981) approached generation by simulating an author's process for crafting a story.[13] Beyond narrative generation, computational creativity expanded into artistic and scientific domains. Artistic image generation was one of the disciplines that saw early potential in artifacts generated through computational creativity. One of the most prominent examples was Harold Cohen's AARON, which produced art through the composition and adaptation of figures based on a large set of symbolic rules and heuristics for visual composition. Some systems also tackled creativity in scientific endeavors: BACON was said to rediscover natural laws like Boyle's law and Kepler's laws through hypothesis testing in constrained spaces. By the 1990s, the modeling techniques had become more adaptive, attempting to implement cognitive creative rules for generation. Turner's MINSTREL (1993) introduced TRAMs (Transform Recall Adapt Methods) to simulate the creative re-use of prior material in generative storytelling, while Pérez y Pérez's MEXICA (1999) modeled the creative writing process using cycles of engagement and reflection. As systems increasingly incorporated models of internal evaluation, another approach that emerged was to combine symbolic generation with domain-specific evaluation metrics, modeling generative and selective steps of creativity. In the field of humor generation, the JAPE system (1994) generated pun-based riddles using Prolog and WordNet, applying symbolic pattern-matching rules and a large lexical database (WordNet) to compose riddles involving wordplay.[14] WordNet is a system developed by George Miller and his team at Princeton; its platform and the word-mapping structures it inspired have been used as the backbone of several syntactic and semantic AI programs. A notable system for music generation was David Cope's EMI (Experiments in Musical Intelligence), or Emmy, which was trained on the styles of composers like Bach, Beethoven, and Chopin and generated novel pieces in their style through pattern abstraction and recomposition. In the 2000s and beyond, machine learning began influencing creative system design: researchers such as Mihalcea and Strapparava trained classifiers to distinguish humorous from non-humorous text using stylistic and semantic features.
Meanwhile, custom computational approaches led to chess systems like Deep Blue generating quasi-creative gameplay strategies through search algorithms and parallel processing constrained by specific rules and patterns for evaluation.[15] The institutional development of computational creativity grew alongside its technical advances. Dedicated workshops such as the IJWCC emerged in the 1990s, growing out of interdisciplinary conferences focused on AI and creativity. By the early 2000s, the field had coalesced around annual conferences like the International Conference on Computational Creativity (ICCC).[16] Recently, with the advent of deep learning, Transformers, and further refinements in machine learning architectures, computational creativity has gained new tools for development. While traditional computational approaches to creativity rely on the explicit formulation of prescriptions by developers and a certain degree of randomness in computer programs, machine learning methods allow computer programs to learn heuristics from input data, enabling creative capacities within the programs.[17] In particular, deep artificial neural networks make it possible to learn patterns from input data that allow for the non-linear generation of creative artefacts. Artificial neural networks were being used to model certain aspects of creativity by the late 1980s. Peter Todd (1989) first trained a neural network to reproduce musical melodies from a training set of musical pieces; he then used a change algorithm to modify the network's input parameters, and the network was able to randomly generate new music in a highly uncontrolled manner.[9][18][19] In 1992, Todd[20] extended this work, using the so-called distal teacher approach that had been developed by Paul Munro,[21] Paul Werbos,[22] D. Nguyen and Bernard Widrow,[23] and Michael I. Jordan and David Rumelhart.[24] In the new approach, there are two neural networks, one of which supplies training patterns to the other. In later efforts by Todd, a composer would select a set of melodies that define the melody space, position them on a 2-D plane with a mouse-based graphic interface, train a connectionist network to produce those melodies, and listen to the new "interpolated" melodies that the network generates for intermediate points in the 2-D plane. Language models like GPT and LSTM-based systems are used to generate texts for creative purposes, such as novels and scripts. These models demonstrate hallucination from time to time, where erroneous material is presented as factual, and creators make use of this hallucinatory tendency to capture unintended results. Ross Goodwin's 1 the Road, for example, uses an LSTM model trained on literature corpora to generate a novel that refers to Jack Kerouac's On the Road, based on multimodal input captured by a camera, a microphone, a laptop's inner clock, and a GPS throughout a road trip.[25][26] Brian Merchant commented on the novel as "pixelated poetry in its ragtag assemblage of modern American imagery".[26] Oscar Sharp and Ross Goodwin created the experimental sci-fi short film Sunspring in 2016, written with an LSTM model trained on their scripts and on 1980–1990 sci-fi movies.[25][27] Rodica Gotca critiqued their overall lack of focus on narrative and of intention to create against the background of human culture.[25] Nevertheless, researchers highlight the positive side of language models' hallucination for generating novel solutions, given that the correctness and consistency of the response can be controlled. Jiang et al.
propose the divergence-convergence flow model for harnessing these hallucinatory effects. They summarize the types of such effects in current research into factuality hallucinations and faithfulness hallucinations, which can be divided into smaller classes like factual fabrication and instruction inconsistency. While the divergence stage involves generating potentially hallucinatory content, the convergence stage focuses on filtering out, with intent recognition and evaluation metrics, those hallucinations that are useful for the user.[28] Some high-level and philosophical themes recur throughout the field of computational creativity, for example as follows. Margaret Boden[6][29] refers to creativity that is novel merely to the agent that produces it as "P-creativity" (or "psychological creativity"), and refers to creativity that is recognized as novel by society at large as "H-creativity" (or "historical creativity"). Boden also distinguishes between the creativity that arises from an exploration within an established conceptual space and the creativity that arises from a deliberate transformation or transcendence of this space. She labels the former exploratory creativity and the latter transformational creativity, seeing the latter as a form of creativity far more radical, challenging, and rare than the former. Following the criteria from Newell and Simon elaborated above, we can see that both forms of creativity should produce results that are appreciably novel and useful (criterion 1), but exploratory creativity is more likely to arise from a thorough and persistent search of a well-understood space (criterion 3), while transformational creativity should involve the rejection of some of the constraints that define this space (criterion 2) or of some of the assumptions that define the problem itself (criterion 4). Boden's insights have guided work in computational creativity at a very general level, providing an inspirational touchstone for development work more than a technical framework of algorithmic substance. However, Boden's insights are also the subject of formalization, most notably in the work by Geraint Wiggins.[30] The criterion that creative products should be novel and useful means that creative computational systems are typically structured into two phases, generation and evaluation. In the first phase, novel (to the system itself, thus P-creative) constructs are generated; unoriginal constructs that are already known to the system are filtered at this stage. This body of potentially creative constructs is then evaluated, to determine which are meaningful and useful and which are not. This two-phase structure conforms to the Geneplore model of Finke, Ward and Smith,[31] a psychological model of creative generation based on empirical observation of human creativity. Jordanous and Keller emphasize the need for a "tractable and well-articulated model of creativity." They extracted 694 creativity words from a corpus of empirical studies in psychology and creativity research spanning 60 years and clustered them based on lexical similarity. As a result, they identified 14 key components of creativity, which form the basis of the framework "Standardised Procedure for Evaluating Creative Systems" (SPECS).
These components include aspects like "dealing with uncertainty," "independence and freedom," "social interaction and communication," and "spontaneity & subconscious processing".[32] While much of computational creativity research focuses on independent, automatic machine-based creativity generation, many researchers are inclined towards a collaborative approach.[33] This human-computer interaction is sometimes categorized under the development of creativity support tools. These systems aim to provide an ideal framework for research, integration, decision-making, and idea generation.[34][35] Recently, deep learning approaches to imaging, sound, and natural language processing have resulted in the modeling of productive creativity-development frameworks.[36][37] Computational creativity is increasingly being discussed in the innovation and management literature, as recent developments in AI may disrupt entire innovation processes and fundamentally change how innovations will be created.[38][36] Philip Hutchinson[33] highlights the relevance of computational creativity for creating innovation and introduced the concept of "self-innovating artificial intelligence" (SAI) to describe how companies make use of AI in innovation processes to enhance their innovative offerings. SAI is defined as the organizational utilization of AI with the aim of incrementally advancing existing products or developing new ones, based on insights from continuously combining and analyzing multiple data sources. As AI becomes a general-purpose technology, the spectrum of products to be developed with SAI will broaden from simple to increasingly complex. This implies that computational creativity leads to a shift in creativity-related skills for humans. Veale and Pérez y Pérez consider "optimal innovation", proposed by Giora et al., a useful foundation for developing computational creativity.[39] Giora et al.'s experiment asks participants to give pleasure and familiarity ratings for verbal stimuli (e.g., "body and soul" vs. "body and sole") and non-verbal stimuli (e.g., a peace dove vs. a peace dove vertically posed so that it looks like a waving hand). It reveals that pleasing stimuli need to be innovative while preserving the salient meaning of the literal form. Veale and Pérez y Pérez highlight the need to develop computational systems that capture how meaning changes due to formal changes.[40] A great deal, perhaps all, of human creativity can be understood as a novel combination of pre-existing ideas or objects.[41] Common strategies for combinatorial creativity include: The combinatorial perspective allows us to model creativity as a search process through the space of possible combinations. The combinations can arise from the composition or concatenation of different representations, or through a rule-based or stochastic transformation of initial and intermediate representations. Genetic algorithms and neural networks can be used to generate blended or crossover representations that capture a combination of different inputs. Mark Turner and Gilles Fauconnier[42][43] propose a model called Conceptual Integration Networks that elaborates upon Arthur Koestler's ideas about creativity[44] as well as work by Lakoff and Johnson,[45] by synthesizing ideas from cognitive-linguistic research into mental spaces and conceptual metaphors. Their basic model defines an integration network as four connected spaces: Fauconnier and Turner describe a collection of optimality principles that are claimed to guide the construction of a well-formed integration network.
In essence, they see blending as a compression mechanism in which two or more input structures are compressed into a single blend structure. This compression operates on the level of conceptual relations: for example, a series of similarity relations between the input spaces can be compressed into a single identity relationship in the blend. Some computational success has been achieved with the blending model by extending pre-existing computational models of analogical mapping that are compatible by virtue of their emphasis on connected semantic structures.[46] In 2006, Francisco Câmara Pereira[47] presented an implementation of blending theory that employs ideas both from symbolic AI and from genetic algorithms to realize some aspects of blending theory in a practical form; his example domains range from the linguistic to the visual, the latter most notably including the creation of mythical monsters by combining 3-D graphical models. Language provides continuous opportunity for creativity, evident in the generation of novel sentences, phrasings, puns, neologisms, rhymes, allusions, sarcasm, irony, similes, metaphors, analogies, witticisms, and jokes.[48] Native speakers of morphologically rich languages frequently create new word-forms that are easily understood, and some have found their way into the dictionary.[49] The area of natural language generation has been well studied, but these creative aspects of everyday language have yet to be incorporated with any robustness or scale. In seminal work, the applied linguist Ronald Carter hypothesized two main creativity types involving words and word patterns: pattern-reforming creativity and pattern-forming creativity.[48] Pattern-reforming creativity refers to creativity by the breaking of rules, reforming and reshaping patterns of language, often through individual innovation, while pattern-forming creativity refers to creativity via conformity to language rules rather than the breaking of them, creating convergence, symmetry and greater mutuality between interlocutors through repetition in their interactions.[50] Substantial work has been conducted in this area of linguistic creation since the 1970s, beginning with the development of James Meehan's TALE-SPIN[51] system. TALE-SPIN viewed stories as narrative descriptions of a problem-solving effort, and created stories by first establishing a goal for the story's characters so that their search for a solution could be tracked and recorded. The MINSTREL[52] system represents a complex elaboration of this basic approach, distinguishing a range of character-level goals in the story from a range of author-level goals for the story. Systems like Bringsjord's BRUTUS[53] elaborate these ideas further to create stories with complex interpersonal themes like betrayal. Nonetheless, MINSTREL explicitly models the creative process with a set of Transform Recall Adapt Methods (TRAMs) to create novel scenes from old ones. The MEXICA[54] model of Rafael Pérez y Pérez and Mike Sharples is more explicitly interested in the creative process of storytelling, and implements a version of the engagement-reflection cognitive model of creative writing. Example of a metaphor: "She was an ape." Example of a simile: "Felt like a tiger-fur blanket." The computational study of these phenomena has mainly focused on interpretation as a knowledge-based process.
Computationalists such as Yorick Wilks, James Martin,[55] Dan Fass, John Barnden,[56] and Mark Lee have developed knowledge-based approaches to the processing of metaphors, either at a linguistic level or a logical level. Tony Veale and Yanfen Hao have developed a system, called Sardonicus, that acquires a comprehensive database of explicit similes from the web; these similes are then tagged as bona fide (e.g., "as hard as steel") or ironic (e.g., "as hairy as a bowling ball", "as pleasant as a root canal"); similes of either type can be retrieved on demand for any given adjective. They use these similes as the basis of an on-line metaphor generation system called Aristotle[57] that can suggest lexical metaphors for a given descriptive goal (e.g., to describe a supermodel as skinny, the source terms "pencil", "whip", "whippet", "rope", "stick-insect" and "snake" are suggested).

The process of analogical reasoning has been studied from both a mapping and a retrieval perspective, the latter being key to the generation of novel analogies. The dominant school of research, as advanced by Dedre Gentner, views analogy as a structure-preserving process; this view has been implemented in the structure mapping engine or SME,[58] the MAC/FAC retrieval engine (Many Are Called, Few Are Chosen), ACME (Analogical Constraint Mapping Engine) and ARCS (Analogical Retrieval Constraint System). Other mapping-based approaches include Sapper,[46] which situates the mapping process in a semantic-network model of memory. Analogy is a very active sub-area of creative computation and creative cognition; active figures in this sub-area include Douglas Hofstadter, Paul Thagard, and Keith Holyoak. Also worthy of note here is Peter Turney and Michael Littman's machine learning approach to the solving of SAT-style analogy problems; their approach achieves a score that compares well with average scores achieved by humans on these tests.

Humour is an especially knowledge-hungry process, and the most successful joke-generation systems to date have focussed on pun generation, as exemplified by the work of Kim Binsted and Graeme Ritchie.[59] This work includes the JAPE system, which can generate a wide range of puns that are consistently evaluated as novel and humorous by young children. An improved version of JAPE has been developed in the guise of the STANDUP system, which has been experimentally deployed as a means of enhancing linguistic interaction with children with communication disabilities. Some limited progress has been made in generating humour that involves other aspects of natural language, such as the deliberate misunderstanding of pronominal reference (in the work of Hans Wim Tinholt and Anton Nijholt), as well as in the generation of humorous acronyms in the HAHAcronym system[60] of Oliviero Stock and Carlo Strapparava.

The blending of multiple word forms is a dominant force for new word creation in language; these new words are commonly called "blends" or "portmanteau words" (after Lewis Carroll). Tony Veale has developed a system called ZeitGeist[61] that harvests neological headwords from Wikipedia and interprets them relative to their local context in Wikipedia and relative to specific word senses in WordNet. ZeitGeist has been extended to generate neologisms of its own; the approach combines elements from an inventory of word parts that are harvested from WordNet, and simultaneously determines likely glosses for these new words (e.g., "food traveller" for "gastronaut" and "time traveller" for "chrononaut").
It then uses Web search to determine which glosses are meaningful and which neologisms have not been used before; this search identifies the subset of generated words that are both novel ("H-creative") and useful.

A corpus-linguistic approach to the search for and extraction of neologisms has also been shown to be possible. Using the Corpus of Contemporary American English as a reference corpus, Locky Law has performed an extraction of neologisms, portmanteaus and slang words using the hapax legomena which appeared in the scripts of the American TV drama House M.D.[62]

In terms of linguistic research on neologism, Stefan Th. Gries has performed a quantitative analysis of blend structure in English and found that "the degree of recognizability of the source words and the similarity of source words to the blend play a vital role in blend formation." The results were validated through a comparison of intentional blends to speech-error blends.[63]

More than iron, more than lead, more than gold I need electricity.
I need it more than I need lamb or pork or lettuce or cucumber.
I need it for my dreams.

Like jokes, poems involve a complex interaction of different constraints, and no general-purpose poem generator adequately combines the meaning, phrasing, structure and rhyme aspects of poetry. Nonetheless, Pablo Gervás[64] has developed a noteworthy system called ASPERA that employs a case-based reasoning (CBR) approach to generating poetic formulations of a given input text via a composition of poetic fragments that are retrieved from a case-base of existing poems. Each poem fragment in the ASPERA case-base is annotated with a prose string that expresses the meaning of the fragment, and this prose string is used as the retrieval key for each fragment. Metrical rules are then used to combine these fragments into a well-formed poetic structure. Racter is an example of such a software project.

Computational creativity in the music domain has focused both on the generation of musical scores for use by human musicians, and on the generation of music for performance by computers. The domain of generation has included classical music (with software that generates music in the style of Mozart and Bach) and jazz.[65] Most notably, David Cope[66] has written a software system called "Experiments in Musical Intelligence" (or "EMI")[67] that is capable of analyzing and generalizing from existing music by a human composer to generate novel musical compositions in the same style. EMI's output is convincing enough to persuade human listeners that its music was generated by a competent human composer.[68]

In the field of contemporary classical music, Iamus is the first computer that composes from scratch and produces final scores that professional interpreters can play. The London Symphony Orchestra played a piece for full orchestra, included in Iamus' debut CD,[69] which New Scientist described as "The first major work composed by a computer and performed by a full orchestra".[70] Melomics, the technology behind Iamus, is able to generate pieces in different styles of music at a similar level of quality.
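Cope has not published EMI as a simple recipe, but the underlying idea of analyzing existing music and generalizing from it to new material in the same style can be illustrated, in a deliberately minimal way, with a first-order Markov chain over notes. The toy corpus and note names below are invented for the example; EMI's actual analysis and recombination machinery is far more sophisticated.

import random
from collections import defaultdict

# Toy corpus of melodies (invented for illustration).
corpus = [
    "C4 E4 G4 E4 C4 D4 E4 D4 C4".split(),
    "G4 E4 C4 D4 E4 G4 E4 D4 C4".split(),
]

# Learn transition counts: which notes follow which in the source material.
transitions = defaultdict(list)
for melody in corpus:
    for current, following in zip(melody, melody[1:]):
        transitions[current].append(following)

def generate(start, length, seed=None):
    """Generate a melody whose local transitions all occur in the corpus."""
    rng = random.Random(seed)
    melody = [start]
    for _ in range(length - 1):
        followers = transitions.get(melody[-1])
        if not followers:  # dead end: no observed continuation
            break
        melody.append(rng.choice(followers))
    return melody

print(" ".join(generate("C4", 12, seed=1)))

Because every transition in the output was observed in the source material, the result tends to sound stylistically consistent with it, while the stochastic choice of continuations yields sequences that appear in neither input melody.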
Creativity research in jazz has focused on the process of improvisation and the cognitive demands that this places on a musical agent: reasoning about time, remembering and conceptualizing what has already been played, and planning ahead for what might be played next.[71] The robot Shimon, developed by Gil Weinberg of Georgia Tech, has demonstrated jazz improvisation.[72] Virtual improvisation software based on research on stylistic modeling carried out by Gerard Assayag and Shlomo Dubnov, including OMax, SoMax and PyOracle, is used to create improvisations in real time by re-injecting variable-length sequences learned on the fly from the live performer.[73]

In the field of musical composition, the patented works[74] of René-Louis Baron made it possible to build a robot that can create and play a multitude of orchestrated melodies, so-called "coherent", in any musical style. Any external physical parameter associated with one or more specific musical parameters can influence and develop each of these songs in real time while the song is playing. The patented invention Medal-Composer raises problems of copyright.

Computational creativity in the generation of visual art has had some notable successes in the creation of both abstract art and representational art. A well-known program in this domain is Harold Cohen's AARON,[75] which has been continuously developed and augmented since 1973. Though formulaic, AARON exhibits a range of outputs, generating black-and-white drawings or colour paintings that incorporate human figures (such as dancers), potted plants, rocks, and other elements of background imagery. These images are of a sufficiently high quality to be displayed in reputable galleries.

Other software artists of note include the NEvAr system (for "Neuro-Evolutionary Art") of Penousal Machado.[76] NEvAr uses a genetic algorithm to derive a mathematical function that is then used to generate a coloured three-dimensional surface. A human user is allowed to select the best pictures after each phase of the genetic algorithm, and these preferences are used to guide successive phases, thereby pushing NEvAr's search into pockets of the search space that are considered most appealing to the user.

The Painting Fool, developed by Simon Colton, originated as a system for overpainting digital images of a given scene in a choice of different painting styles, colour palettes and brush types. Given its dependence on an input source image to work with, the earliest iterations of the Painting Fool raised questions about the extent of, or lack of, creativity in a computational art system. Nonetheless, The Painting Fool has been extended to create novel images, much as AARON does, from its own limited imagination. Images in this vein include cityscapes and forests, which are generated by a process of constraint satisfaction from some basic scenarios provided by the user (e.g., these scenarios allow the system to infer that objects closer to the viewing plane should be larger and more colour-saturated, while those further away should be less saturated and appear smaller). Artistically, the images now created by the Painting Fool appear on a par with those created by AARON, though the extensible mechanisms employed by the former (constraint satisfaction, etc.) may well allow it to develop into a more elaborate and sophisticated painter.
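NEvAr's precise function set and rendering pipeline are not described here, but the general approach of evolving a mathematical function and sampling it over a grid of pixel coordinates can be sketched as follows. The primitive operations, tree depth and greyscale rendering are illustrative assumptions, not NEvAr's actual design.

import math
import random

# Illustrative primitive operations over pixel coordinates in [-1, 1].
UNARY = [math.sin, math.cos, lambda v: math.sqrt(abs(v))]
BINARY = [lambda a, b: (a + b) / 2, lambda a, b: a * b,
          lambda a, b: math.sin(a * math.pi) * b]

def random_expr(depth, rng):
    """Build a random expression tree, returned as a function of (x, y)."""
    if depth == 0:
        return rng.choice([lambda x, y: x, lambda x, y: y])
    if rng.random() < 0.4:
        op, child = rng.choice(UNARY), random_expr(depth - 1, rng)
        return lambda x, y: op(child(x, y))
    op = rng.choice(BINARY)
    left, right = random_expr(depth - 1, rng), random_expr(depth - 1, rng)
    return lambda x, y: op(left(x, y), right(x, y))

def render(expr, size=32):
    """Sample the expression over a size x size grid, mapped to 0-255 greys."""
    coords = [2 * i / (size - 1) - 1 for i in range(size)]
    values = [[expr(x, y) for x in coords] for y in coords]
    lo = min(min(row) for row in values)
    hi = max(max(row) for row in values)
    span = (hi - lo) or 1.0
    return [[round(255 * (v - lo) / span) for v in row] for row in values]

rng = random.Random(7)
image = render(random_expr(4, rng))  # one candidate; a user keeps or discards it

In NEvAr proper, the user's picks after each generation act as the fitness signal, steering crossover and mutation toward regions of the expression space the user finds most appealing.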
The artist Krasi Dimtch (Krasimira Dimtchevska) and the software developer Svillen Ranev have created a computational system combining a rule-based generator of English sentences and a visual composition builder that converts sentences generated by the system into abstract art.[77] The software automatically generates an indefinite number of different images using different colour, shape and size palettes. It also allows the user to select the subject of the generated sentences and/or one or more of the palettes used by the visual composition builder.

An emerging area of computational creativity is that of video games. ANGELINA is a system by Michael Cook for creatively developing video games in Java. One important aspect is Mechanic Miner, a system that can generate short segments of code that act as simple game mechanics.[78] ANGELINA can evaluate these mechanics for usefulness by playing simple unsolvable game levels and testing whether the new mechanic makes the level solvable. Sometimes Mechanic Miner discovers bugs in the code and exploits these to make new mechanics for the player to solve problems with.[79]

In July 2015, Google released DeepDream, an open source[80] computer vision program created to detect faces and other patterns in images with the aim of automatically classifying them. It uses a convolutional neural network to find and enhance patterns in images via algorithmic pareidolia, thus creating a dreamlike psychedelic appearance in the deliberately over-processed images.[81][82][83]

In August 2015, researchers from Tübingen, Germany created a convolutional neural network that uses neural representations to separate and recombine the content and style of arbitrary images, and which is able to turn images into stylistic imitations of works of art by artists such as Picasso or Van Gogh in about an hour. Their algorithm is put to use on the website DeepArt, which allows users to create unique artistic images with it.[84][85][86][87]

In early 2016, a global team of researchers explained how a new computational creativity approach known as the Digital Synaptic Neural Substrate (DSNS) could be used to generate original chess puzzles that were not derived from endgame databases.[88] The DSNS is able to combine features of different objects (e.g., chess problems, paintings, music) using stochastic methods in order to derive new feature specifications which can be used to generate objects in any of the original domains. The generated chess puzzles have also been featured on YouTube.[89]

Creativity is also useful in allowing for unusual solutions in problem solving. In psychology and cognitive science, this research area is called creative problem solving. The Explicit-Implicit Interaction (EII) theory of creativity has been implemented using a CLARION-based computational model that allows for the simulation of incubation and insight in problem-solving.[90] The emphasis of this computational creativity project is not on performance per se (as in artificial intelligence projects) but rather on the explanation of the psychological processes leading to human creativity and the reproduction of data collected in psychology experiments. So far, this project has been successful in providing an explanation for incubation effects in simple memory experiments, insight in problem solving, and reproducing the overshadowing effect in problem solving.

Some researchers feel that creativity is a complex phenomenon whose study is further complicated by the plasticity of the language we use to describe it.
We can describe not just the agent of creativity as "creative" but also the product and the method. Consequently, it could be claimed that it is unrealistic to speak of a general theory of creativity.[citation needed] Nonetheless, some generative principles are more general than others, leading some advocates to claim that certain computational approaches are "general theories". Stephen Thaler, for instance, proposes that certain modalities of neural networks are generative enough, and general enough, to manifest a high degree of creative capabilities.[citation needed]

Traditional computers, as mainly used in computational creativity applications, do not support creativity, as they fundamentally transform a discrete, limited domain of input parameters into a discrete, limited domain of output parameters using a limited set of computational functions.[citation needed] As such, a computer cannot be creative, as everything in the output must have been already present in the input data or the algorithms.[citation needed] Related discussions and references to related work are captured in work on the philosophical foundations of simulation.[91]

Mathematically, the same set of arguments against creativity has been made by Chaitin.[92] Similar observations come from a Model Theory perspective. All this criticism emphasizes that computational creativity is useful and may look like creativity, but it is not real creativity, as nothing new is created, merely transformed by well-defined algorithms.

According to researchers like Mark Riedl, human creativity and computational creativity at their current state differ in several dimensions. While creativity can be viewed in the context of morality, Riedl considers the "educational, moralizing" aspect of stories one of the challenges in developing narrative-generating AI models, which may contribute to the underlying reasoning coherence of the text.[25] The lack of intention in AI models hinders them from making morally responsible choices, which often appear in human creativity.[93]

Michele Loi and Eleonora Vigano identified some potential threats to human creativity caused by AI development. For example, they considered the openness to "experiments of life," introduced by John Stuart Mill, an important factor in creativity. Society's overreliance on algorithms for making decisions would constrain utility functions, which may discourage people from exploring riskier solutions and decrease the diversity of exploration, and thus creativity.[94]

The International Conference on Computational Creativity (ICCC) occurs annually, organized by The Association for Computational Creativity.[95] Events in the series include:

Previously, the community of computational creativity held a dedicated workshop, the International Joint Workshop on Computational Creativity, every year since 1999. Previous events in this series include:[citation needed]

The 1st Conference on Computer Simulation of Musical Creativity will be held
https://en.wikipedia.org/wiki/Computational_creativity
Fritz Thiele (14 April 1894 – 4 September 1944) was a member of the German resistance who also served as the communications chief of the German Army during World War II.[1]

Thiele was born in Berlin and joined the Imperial Army in 1914. Working closely with Chief of Army communications General der Nachrichtentruppe Erich Fellgiebel, he was part of the assassination attempt against Adolf Hitler on 20 July 1944. As part of the coup attempt, he was responsible for the effort to sever communications between officers loyal to Hitler and armed forces units in the field and from the communications centre at the Bendlerstrasse in Berlin; he relayed a crucial message from Fellgiebel to General Friedrich Olbricht and the other conspirators that the assassination attempt had failed but the coup attempt should still proceed. There are differing accounts of the time at which he provided this report. Thiele himself did not want to proceed with the coup attempt once he knew that the assassination attempt had failed, and he left the Bendlerstrasse and visited Walter Schellenberg at the Reich Central Security Office in an attempt to extricate himself.[2]

Following Fellgiebel's arrest, Thiele was directed to assume his duties before he was himself arrested by the Gestapo on 11 August 1944. He was condemned to death on 21 August 1944 by the Volksgerichtshof and hanged on 4 September 1944 at Plötzensee prison in Berlin.
https://en.wikipedia.org/wiki/Fritz_Thiele
Google Authenticator is a software-based authenticator by Google. It implements multi-factor authentication services using the time-based one-time password (TOTP; specified in RFC 6238) and HMAC-based one-time password (HOTP; specified in RFC 4226) algorithms, for authenticating users of software applications.[5]

When logging into a site supporting Authenticator (including Google services) or using Authenticator-supporting third-party applications such as password managers or file hosting services, Authenticator generates a six- to eight-digit one-time password which users must enter in addition to their usual login details.

Google provides Android,[6] Wear OS,[7] BlackBerry, and iOS[8] versions of Authenticator. An official open source fork of the Android app is available on GitHub.[9] However, this fork was archived on April 6, 2021, and is now read-only.[10] Current software releases are proprietary freeware.[11]

To use Authenticator, the app is first installed on a smartphone. It must be set up for each site with which it is to be used: the site provides a shared secret key to the user over a secure channel, to be stored in the Authenticator app. This secret key will be used for all future logins to the site.

To log into a site or service that uses two-factor authentication and supports Authenticator, the user provides a username and password to the site. The site then computes (but does not display) the required six- to eight-digit one-time password and asks the user to enter it. The user runs the Authenticator app, which independently computes and displays the same password, which the user types in, authenticating their identity.[citation needed]

With this kind of two-factor authentication, mere knowledge of username and password is insufficient to break into a user's account; the attacker also needs knowledge of the shared secret key or physical access to the device running the Authenticator app. An alternative route of attack is a man-in-the-middle attack: if the device used for the login process is compromised by malware, the credentials and one-time password can be intercepted by the malware, which can then initiate its own login session to the site, or monitor and modify the communication between the user and the site.[12]

During setup, the service provider generates an 80-bit secret key for each user (whereas RFC 4226 §4 requires 128 bits and recommends 160 bits).[13] This is transferred to the Authenticator app as a 16-, 26-, or 32-character base32 string, or as a QR code. Subsequently, when the user opens the Authenticator app, it calculates an HMAC-SHA1 hash value using this secret key. The message can be either the number of 30-second periods since the Unix epoch (TOTP) or a counter that is incremented with each new code (HOTP). A portion of the HMAC is extracted and displayed to the user as a six- to eight-digit code: the last nibble (4 bits) of the hash is used as an offset into the result byte array, four bytes starting at that offset are read as a 32-bit integer whose top bit is masked out, and the low-order decimal digits of this integer form the code.

The Google Authenticator app for Android was originally open source, but later became proprietary.[11] Google made earlier source for their Authenticator app available on its GitHub repository; the associated development page stated: "This open source project allows you to download the code that powered version 2.21 of the application. Subsequent versions contain Google-specific workflows that are not part of the project."[14] The latest open-source release was in 2020.[9]
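The TOTP procedure described above is compact enough to sketch directly from the RFCs. The following is an illustrative implementation of RFC 6238 TOTP with RFC 4226 dynamic truncation, not Google's actual code; the example secret is arbitrary.

import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, digits=6, period=30):
    # The shared secret arrives as a base32 string (often via QR code).
    padded = secret_b32.upper() + "=" * (-len(secret_b32) % 8)
    key = base64.b32decode(padded)
    # The message is the number of `period`-second intervals since the epoch.
    counter = struct.pack(">Q", int(time.time()) // period)
    digest = hmac.new(key, counter, hashlib.sha1).digest()
    # Dynamic truncation (RFC 4226 section 5.3): the last nibble selects an
    # offset; four bytes from there form a 31-bit integer (top bit masked).
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    # The low-order decimal digits of that integer are the one-time password.
    return str(code % 10 ** digits).zfill(digits)

print(totp("JBSWY3DPEHPK3PXP"))  # example secret; output changes every 30 s

Because both the site and the app derive the code from the same secret and the current time, the two values match without any message being exchanged at login time, which is what allows the app to work offline.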
https://en.wikipedia.org/wiki/Google_Authenticator
In computing, Intel's Advanced Programmable Interrupt Controller (APIC) is a family of programmable interrupt controllers. As its name suggests, the APIC is more advanced than Intel's 8259 Programmable Interrupt Controller (PIC), particularly enabling the construction of multiprocessor systems. It is one of several architectural designs intended to solve interrupt routing efficiency issues in multiprocessor computer systems.

The APIC is a split architecture design, with a local component (LAPIC) usually integrated into the processor itself, and an optional I/O APIC on a system bus. The first APIC was the 82489DX, a discrete chip that functioned as both local and I/O APIC. The 82489DX enabled construction of symmetric multiprocessor (SMP) systems with the Intel 486 and early Pentium processors; for example, the reference two-way 486 SMP system used three 82489DX chips, two as local APICs and one as I/O APIC. Starting with the P54C processor, the local APIC functionality was integrated into the Intel processors' silicon. The first dedicated I/O APIC was the Intel 82093AA, which was intended for PIIX3-based systems.

There are two components in the Intel APIC system, the local APIC (LAPIC) and the I/O APIC. There is one LAPIC in each CPU in the system. In the very first implementation (82489DX), the LAPIC was a discrete circuit, as opposed to its later implementation in Intel processors' silicon. There is typically one I/O APIC for each peripheral bus in the system. In original system designs, LAPICs and I/O APICs were connected by a dedicated APIC bus. Newer systems use the system bus for communication between all APIC components.

Each APIC, whether a discrete chip or integrated in a CPU, has a version register containing a four-bit version number for its specific APIC implementation. For example, the 82489DX has an APIC version number of 0, while version 1 was assigned to the first generation of local APICs integrated in the Pentium 90 and 100 processors.[1]

In systems containing an 8259 PIC, the 8259 may be connected to the LAPIC in the system's bootstrap processor (BSP), one of the system's I/O APICs, or both. Logically, however, the 8259 is only connected once at any given time.

The first-generation Intel APIC chip, the 82489DX, which was meant to be used with Intel 80486 and early Pentium processors, is actually an external local and I/O APIC in one circuit. The Intel MP 1.4 specification refers to it as "discrete APIC", in contrast with the "integrated APIC" found in most Pentium processors.[2] The 82489DX had 16 interrupt lines;[3] it also had a quirk that it could lose some ISA interrupts.[4] In a multiprocessor 486 system, each CPU had to be paired with its own 82489DX; additionally, a supplementary 82489DX had to be used as I/O APIC. The 82489DX could not emulate the 8259A (XT-PIC), so these also had to be included as physical chips for backwards compatibility.[5] The 82489DX was packaged as a 132-pin PQFP.[3]

Local APICs (LAPICs) manage all external interrupts for some specific processor in an SMP system. In addition, they are able to accept and generate inter-processor interrupts (IPIs) between LAPICs. A single LAPIC may support up to 224 usable interrupt vectors from an I/O APIC. Vector numbers 0 to 31, out of 0 to 255, are reserved for exception handling by x86 processors.
All Intel processors starting with the P5 microarchitecture (P54C) have a built-in local APIC.[6][7] However, if the local APIC is disabled in a P5 processor, it cannot be re-enabled by software; this limitation no longer exists in the P6 processors and later ones.[7] With the introduction of Pentium 4 HT and Pentium D, each CPU core and each CPU thread has an integrated LAPIC.

The Message Signaled Interrupts (MSI) feature of the PCI 2.2 and later specifications cannot be used without the local APIC being enabled.[8] Use of MSI obviates the need for an I/O APIC. Additionally, up to 224 interrupts are supported in MSI mode, and IRQ sharing is not allowed.[9]

Another advantage of the local APIC is that it also provides a high-resolution (on the order of one microsecond or better) timer that can be used in both interval and one-off mode.[7] The APIC timer had its initial acceptance woes. A Microsoft document from 2002 (which advocated for the adoption of the High Precision Event Timer instead) criticized the LAPIC timer for having "poor resolution" and stated that "the clocks silicon is sometimes very buggy".[10] Nevertheless, the APIC timer is used, for example, by Windows 7 when profiling is enabled, and by Windows 8 in all circumstances. (Before Windows 8 claimed exclusive rights to this timer, it was also used by some programs like CPU-Z.) Under Microsoft Windows, the APIC timer is not a shareable resource.[11] The aperiodic interrupts offered by the APIC timer are used by the Linux kernel's tickless kernel feature, an optional but default feature new with kernel 2.6.18. When enabled on a computer with an APIC timer, the kernel does not use the 8253 programmable interval timer for timekeeping.[12] A VMware document notes that "software does not have a reliable way to determine its frequency. Generally, the only way to determine the local APIC timer's frequency is to measure it using the PIT or CMOS timer, which yields only an approximate result."[13]

I/O APICs contain a redirection table, which is used to route the interrupts they receive from peripheral buses to one or more local APICs. Early I/O APICs (like the 82489DX, SIO.A and PCEB/ESC) only had support for 16 interrupt lines, but later ones like the 82093AA (a separate chip for the PIIX3/PIIX4) supported 24 interrupt lines.[9] It was packaged as a 64-pin PQFP.[14] The 82093AA normally connected to the PIIX3/PIIX4 and used its integrated legacy 8259 PICs.[14] The ICH1 integrated the I/O APIC. An integrated I/O APIC of a modern chipset may provide more than 24 interrupt lines.[15] According to a 2009 Intel benchmark using Linux, the I/O APIC reduced interrupt latency by a factor of almost three relative to the 8259 emulation (XT-PIC), while using MSI reduced the latency even more, by a factor of nearly seven relative to the XT-PIC baseline.[16]

The xAPIC was introduced with the Pentium 4, while the x2APIC is the most recent generation of Intel's programmable interrupt controller, introduced with the Nehalem microarchitecture in November 2008.[17] The major improvements of the x2APIC address the number of supported CPUs and the performance of the interface. The x2APIC now uses 32 bits to address CPUs, allowing up to 2³² − 1 CPUs to be addressed using the physical destination mode. The logical destination mode now works differently and introduces clusters; using this mode, one can address up to 2²⁰ − 16 processors. The improved interface reduces the number of APIC register accesses needed for sending inter-processor interrupts (IPIs).
Because of this advantage, KVM can and does emulate the x2APIC for older processors that do not physically support it, and this support is exposed from QEMU going back to Conroe and even for AMD Opteron G-series processors (neither of which natively supports x2APIC).[18][19]

APICv is Intel's brand name for hardware virtualization support aimed at reducing interrupt overhead in guests. APICv was introduced in the Ivy Bridge-EP processor series, which is sold as Xeon E5-26xx v2 (launched in late 2013) and as Xeon E5-46xx v2 (launched in early 2014).[20][21] AMD announced a similar technology called AVIC;[22][23] it is available in Family 15h Models 6Xh (Carrizo) processors and newer.[24]

There are a number of known bugs in implementations of APIC systems, especially with regard to how the 8254 is connected. Defective BIOSes may not set up interrupt routing properly, or may provide incorrect ACPI tables and Intel MultiProcessor Specification (MPS) tables. The APIC can also be a cause of system failure when the operating system does not support it properly. On older operating systems, the I/O and local APICs often had to be disabled. While this is no longer possible due to the prevalence of symmetric multiprocessor and multi-core systems, bugs in firmware and operating systems are now a rare occurrence.

AMD and Cyrix once proposed a somewhat similar-in-purpose OpenPIC architecture supporting up to 32 processors;[25] it had at least declarative support from IBM and Compaq around 1995.[26] However, no x86 motherboard was released with OpenPIC.[27] After OpenPIC's failure in the x86 market, AMD licensed Intel's APIC for its AMD Athlon and later processors. IBM, however, developed their MultiProcessor Interrupt Controller (MPIC) based on the OpenPIC register specifications.[28] MPIC was used in PowerPC-based designs, including those of IBM, for instance in some RS/6000 systems,[29] but also by Apple, as late as their Power Mac G5s.[30][31]
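The x2APIC addressing limits quoted above follow from the register layout. As a worked example (assuming, as commonly summarized, that the all-ones destination values are reserved for broadcast and that a logical ID splits into a 16-bit cluster ID plus a 16-bit bitmask selecting CPUs within the cluster):

# Physical destination mode: a full 32-bit APIC ID, with the all-ones
# value 0xFFFFFFFF reserved for broadcast (an assumption noted above).
physical_max = 2**32 - 1
print(physical_max)   # 4294967295 addressable CPUs

# Logical destination mode: 16-bit cluster ID plus a 16-bit bitmask,
# i.e. one bit per CPU within a cluster of up to 16 CPUs.
clusters = 2**16
cpus_per_cluster = 16
# Reserving the all-ones cluster for broadcast costs 16 CPUs.
logical_max = clusters * cpus_per_cluster - cpus_per_cluster
print(logical_max)    # 1048560, i.e. 2**20 - 16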
https://en.wikipedia.org/wiki/Advanced_Programmable_Interrupt_Controller
A duty to warn is a concept that arises in the law of torts in a number of circumstances, indicating that a party will be held liable for injuries caused to another, where the party had the opportunity to warn the other of a hazard and failed to do so.

In the United States, two landmark legal cases established therapists' legal obligation to breach confidentiality if they believe a client poses a risk to himself or others. The first was Tarasoff v. Regents of the University of California (1976), where a therapist failed to inform a young woman and her parents of specific death threats made by a client. The other was Jablonski by Pahls v. United States (1983), which further extended the responsibilities of the duty to warn by including the review of previous records that might include a history of violent behavior.

The duty to warn arises in product liability cases, as manufacturers can be held liable for injuries caused by their products if the product causes an injury to a consumer and the manufacturer fails to supply adequate warnings about the risks of using the product (such as side effects from pharmacy prescriptions) or fails to supply adequate instructions for the proper use of the product (such as a precaution to use safety glasses when using a drill).[1] If the manufacturer fails to supply these warnings, the law will consider the product itself to be defective. A lawsuit by a party injured by a product, where the manufacturer failed to properly warn, is usually brought as a "negligence" action, but it could be filed as a "strict liability" claim or as a "breach of warranty of merchantability" case.[2]

Not long after launching its Note 7 smartphone in August 2016, Samsung received many reports of burning phones. Samsung had no choice but to recall all Galaxy Note 7 devices, which cost the company around $5.3 billion.[3] Following the recall, the Federal Aviation Administration prohibited people from turning the Galaxy Note 7 on, packing it in checked luggage, or charging it while on a plane.[4] On October 11, 2016, Samsung stopped production and issued a warning for people to turn the Galaxy Note 7 off and not use it any longer. Samsung also told all of its global partners to stop selling the phone because of concerns about the product's safety.[5] After testing 200,000 devices and 30,000 batteries, Samsung found that the overheating and burning phones resulted from errors in the design and manufacture of the batteries by its two suppliers.[6]

An issue in product liability cases is whether the product warranted a duty to warn about known dangers.[7] In the widely publicized 1994 Liebeck v. McDonald's Restaurants case, Liebeck sued McDonald's for damages for injuries caused by spilling hot coffee on her lap. McDonald's was cited for not properly warning consumers about the inherent danger of its coffee, which was heated well beyond the temperature of the average chain's coffee.[8] In addition, McDonald's was aware of previous injuries from hot coffee and had not properly warned consumers, which resulted in the court awarding Liebeck $640,000 in damages, later settled for an undisclosed amount.[9]

Most notably, a property owner has a duty to warn persons on the property of various hazards, depending on the status of the person on the property. For example, the property owner must warn an anticipated or discovered trespasser of deadly conditions known to the property owner that would be hidden from the trespasser.
The property owner must warn licensees of all known hazards (whether deadly or not), and must warn invitees of all dangers that the property owner can discover through a reasonable inspection of the property.[10][11]

In clinical psychological practice in the United States, the duty to warn requires a clinician who has reasonable grounds to believe that a client may be in imminent danger of harming themselves or others to warn the possible victims.[12] The duty to warn is among the few exceptions to a client's right to confidentiality and the therapist's ethical obligation to maintain confidential information related in the context of the therapeutic relationship. In the American Psychological Association's Ethical Principles of Psychologists and Code of Conduct, the therapist's duty to warn is implicitly contained within the guidelines for disclosure of confidential information without the consent of the client: "Psychologists disclose confidential information without the consent of the individual only as mandated by law, or where permitted by law for a valid purpose such as to … protect the client/patient, psychologist, or others from harm."[13] In situations where there is cause for serious concern about a client harming someone, the clinician must breach confidentiality to warn the identified victim/third party about imminent danger.[14][page needed] Although laws vary somewhat in different states, in general the danger must be imminent and the breach of confidentiality should be made to someone who is in a position to reduce the risk of the danger.[12] Appropriate recipients of such information include the intended victim and law enforcement.

The duty to warn is embedded in the historical context of two rulings (1974 and 1976) of the California Supreme Court in the case of Tarasoff v. Regents of the University of California.[15][page needed][16] The court held that mental health professionals have a duty to protect individuals who are being threatened with bodily harm by a patient. The original 1974 decision mandated warning the threatened individual, but a 1976 rehearing of the case by the California Supreme Court called for a "duty to protect" the intended victim. Explicit in the court's decision was the principle that the confidentiality of the therapeutic relationship is subordinate to the safety of society and its members.[16] Despite the value and importance of protecting the client and their feelings, and thus the physician-client relationship, the court decided that the clinician's duty to society as a citizen places certain limitations on the clinician's loyalty to a client's secrets, divulged in the context of the therapeutic relationship.

Limitations to confidentiality are a critical concern for clinicians, because a relationship of trust between therapist and client is the prerequisite context for therapeutic growth.[16] Without the client's expectation that the therapist will honor the confidences divulged in the therapeutic dialogue, the client will not have the freedom to unveil the most troublesome and private issues that are matters of the utmost concern and need for intervention.
Some argue that if clients cannot depend on confidentiality in all matters related in therapy, potentially dangerous clients, who may be most in need of psychological services, will avoid therapy, thus missing the opportunity for intervention.[12]

Other cases raising issues similar to those addressed in the Tarasoff case have been brought before the courts, such as Jablonski by Pahls v. United States. The conclusion of that case extended the responsibility entailed in the duty to warn with the judgment that the clinician may be liable for failure to review previous records, which may contain a history of previous violent behavior, a predictor of potential future violence. Recent[when?] consideration of applying the duty to warn has raised questions regarding therapists' responsibility to breach confidentiality in order to report clients' nonviolent behaviors that may pose danger to others, as in the case of clients with HIV/AIDS.[12]

The existence and extent of a contractual duty to warn in construction cases is discussed in the England and Wales High Court (Technology and Construction Court) case of Cleightonhills v Bembridge Marine Ltd and Others (2012).[17]

In Jane Doe No. 14 v. Internet Brands, Inc., the Jane Doe plaintiff alleged that Internet Brands, Inc.'s failure to warn users of its networking website, modelmayhem.com, caused her to be a victim of a rape scheme. She alleged that defendant Internet Brands knew about the rapists but did not warn her or the website's other users. She filed an action against Internet Brands alleging liability for negligence under California law based on that failure to warn. On May 31, 2016, the US Court of Appeals for the 9th Circuit ruled that the Communications Decency Act does not bar Jane Doe's failure-to-warn claim.[18]

In the early morning hours of August 24, 1986, a woman who lived in a second-floor apartment in Toronto was raped at knifepoint by Paul Callow, who had broken into her apartment from a balcony. At the time, the plaintiff was the fifth victim of similar crimes by Callow, who would become known as the "balcony rapist". In 1998, this woman was successful in her lawsuit against the Metropolitan Toronto Police Force for damages on the grounds that the police force had conducted a negligent investigation and failed to warn women of the risk of an attack by Callow.[19]

In December 2012, a woman, who later became a Jane Doe plaintiff, was attacked by Sofyan Boalag in St. John's, Newfoundland. This assault was the last of six assaults between September and December 2012. Boalag was charged with 23 criminal offences in relation to complaints from multiple victims. In 2016, he was convicted of multiple offenses including robbery, three counts of sexual assault with a weapon, and choking Doe until she passed out. In January 2016, Doe commenced a lawsuit against the Royal Newfoundland Constabulary, alleging police failed to properly warn the public that a predator was stalking young women. According to the statement of claim, all of the attacks took place in a similar part of the city and involved victims with similar characteristics: six young women, including one girl under 16 years of age.[20][21]

In 1986, 19-year-old Jeanne Clery was raped and murdered in her Lehigh University dorm room.
Her parents claimed that there was a lack of information provided to students and families about the rapid increase of violent and non-violent incidents on campuses, and that university administrators had failed to warn students and the public.[22] A result of these claims was the passage of the Clery Act, which requires colleges and universities in the United States to publish campus crime reports.[23] In 2008, Eastern Michigan University was fined $357,500 for violating the Clery Act.[24][25] US federal officials cited the university for "an egregious violation" for failing to notify the public of the murder of Laura Dickinson in her residence hall room.[26]

In July 2015, then-Director of National Intelligence James Clapper formally issued a directive to the agencies of the United States Intelligence Community that they had a "duty to warn" both U.S. and non-U.S. persons of impending harm against them. The directive included exemptions for occasions that required the protection of sensitive "sources and methods," cases where the intended victim was a member of a terrorist group or a violent criminal, or where the intended victim was already aware of the threat. Many U.S. intelligence agencies had informally observed such a practice for decades before Clapper's directive.[27]

In 2019, the Committee to Protect Journalists sued the Trump administration for information on whether the U.S. government had followed its "duty to warn" principle in the case of the murdered Saudi-American journalist Jamal Khashoggi.[28] In August 2021, a U.S. appeals court ruled that U.S. intelligence agencies were not required to disclose whether they had information about threats to Khashoggi's life before his assassination.[29]

Before the January 3, 2024, Kerman bombings, a terrorist attack carried out by ISIS-K suicide bombers that killed 94 people and injured 284 others, the U.S. intelligence community provided Iran, often considered an adversary of the U.S., with an early warning under its "duty to warn" policy. U.S. officials noted that the information given was sufficiently specific regarding the location, and timely enough, that it may have proved useful to Tehran in thwarting the attack.[30]

In March 2024, the United States privately warned Russian officials of the danger of an impending attack from Islamic State – Khorasan Province (IS-KP or ISIS–K), based on intelligence gathered earlier that month, under the US intelligence community's "duty to warn" requirement.[31] Later that month the group carried out the Crocus City Hall attack, which killed 139 people.[32]
https://en.wikipedia.org/wiki/Duty_to_warn
The Infrastructure Investment and Jobs Act (IIJA), also known as the Bipartisan Infrastructure Law (BIL) (H.R. 3684), is a United States federal statute enacted by the 117th United States Congress and signed into law by President Joe Biden on November 15, 2021. It was introduced in the House as the INVEST in America Act and nicknamed the Bipartisan Infrastructure Bill. The act was initially a $547–715 billion infrastructure package that included provisions related to federal highway aid, transit, highway safety, motor carrier, research, hazardous materials and rail programs of the Department of Transportation.[1][2] After congressional negotiations, it was amended and renamed the Infrastructure Investment and Jobs Act to add funding for broadband access, clean water and electric grid renewal in addition to the transportation and road proposals of the original House bill. This amended version included approximately $1.2 trillion in spending, with $550 billion in newly authorized spending on top of what Congress was planning to authorize regularly.[3][4]

The amended bill was passed 69–30 by the Senate on August 10, 2021. On November 5, it was passed 228–206 by the House, and ten days later it was signed into law by President Biden.[5]

On March 31, 2021,[6] President Joe Biden unveiled his $2.3 trillion American Jobs Plan (which, when combined with the American Families Plan, amounted to $4 trillion in infrastructure spending),[7] pitched by him as "a transformative effort to overhaul the nation's economy".[8] The detailed plan aimed to create millions of jobs, bolster labor unions, expand labor protections, and address climate change.[9][10]

In mid-April 2021, Republican lawmakers offered a $568 billion counterproposal to the American Jobs Plan.[11] On May 9, Senate Minority Leader Mitch McConnell said it should cost no more than $800 billion.[12] On May 21, the administration reduced the price tag to $1.7 trillion, which was quickly rejected by Republicans.[13] A day later, a bipartisan group within the Senate Environment and Public Works Committee announced that they had reached a deal for $304 billion in U.S. highway funding.[14] This was approved unanimously by the committee on May 26.[15] On June 4, House Transportation and Infrastructure Committee Chair Peter DeFazio announced a $547 billion plan, called the INVEST in America Act, which would address parts of the American Jobs Plan.[16][a] On July 1, the House passed an amended $715 billion infrastructure bill focused on land transportation and water.[17]

On May 27, Republican senator Shelley Moore Capito presented a $928 billion plan,[18][b][c] and on June 4 increased it by about $50 billion; this was quickly rejected by the Biden administration.[19] On June 8, the administration shifted its focus to a bipartisan group of 20 senators, which had been working on a package tentatively priced around $900 billion.[20][d] On June 10, a bipartisan group of 10 senators reached a deal costing $974 billion over five years, or about $1.2 trillion if stretched over eight years.[22] On June 16, the plan was endorsed by a bipartisan group of 21 senators.[23] On June 24, the bipartisan group met with the president and reached a compromise deal costing $1.2 trillion over eight years, which focuses on physical infrastructure (notably roads, bridges, railways, water, sewage, broadband, and electric vehicles).
This was planned to be paid for through reinforced Internal Revenue Service (IRS) collection, unspent COVID-19 relief funds, and other sources.[24] By July 2021, the IRS portion of the funding had reportedly been scrapped.[25] Biden stipulated that a separate "human infrastructure" bill (notably covering child care, home care, and climate change), later known as the Build Back Better Act, must also pass, whether through bipartisanship or reconciliation,[24] but he later walked back this position.[26] House Speaker Nancy Pelosi similarly stated that the House would not vote on the physical infrastructure bill until the larger bill passed in the Senate,[27] despite the fact that reconciliation overrides much of the obstructive power of the filibuster.[27][28]

White House officials stated on July 7 that legislative text was nearing completion.[29] On July 14, the Senate Energy and Natural Resources Committee advanced an energy bill expected to be included in the bipartisan package.[30] On July 21, Senate Majority Leader Charles Schumer put forward a "shell bill" for a vote to kick off debate in the Senate, intending to add the bipartisan text via an amendment.[31][e] On July 25, Republican senator Rob Portman stated that an agreement was "about 90%" complete, with mass transit being one remaining point of contention;[33] on July 30, he stated that this had been resolved.[34] On July 28, Senator Kyrsten Sinema stated that she did not support a reconciliation bill costing $3.5 trillion, breaking the stalemate and allowing the bipartisan bill to move forward.[35] That day, the Senate voted 67–32 to advance the bill,[36] and on July 30 voted 66–28 to proceed to its consideration.[37] The legislative text was completed and substituted into the bill on August 1.[38] On August 5, Schumer moved to truncate debate on the legislation, setting up a procedural vote on August 7,[39] which passed 67–27.[40] Fifteen or more amendments were expected to receive votes through the weekend.[40] On August 10, the bill was passed by the Senate 69–30.[41] It sets aside $550 billion in new spending.[42] A procedural vote on a House rule concerning passing both bills passed along party lines on August 24.[43]

In early August, nine moderate Democrats called for an immediate House vote on the bill, citing a desire not to lose the momentum from the Senate passage of the bill.
They committed to voting against taking up the reconciliation resolution until there was a vote on the bipartisan infrastructure bill.[44][45] While both Biden and House Speaker Nancy Pelosi had reversed earlier positions to support passing the bipartisan bill separately,[26][46] progressives including Congressional Progressive Caucus chairwoman Pramila Jayapal and Senator Bernie Sanders maintained that it be used as leverage to pass the most expensive reconciliation bill possible.[47][48][49] The lack of a deal caused a late-September House vote to be postponed.[49] On October 2, Pelosi set a new deadline of October 31.[50] By October 28, Jayapal and other progressive leaders indicated that they were willing to vote on the bill separately,[51] but Sanders and others opposed this.[52][53] On October 31, a majority of progressives signaled that they would support both bills.[54]

Votes on both bills were considered on November 5, but the hesitation of several moderates to pass the reconciliation bill before it could be scored by the Congressional Budget Office made passing the bipartisan bill unlikely.[55] Negotiations between centrist and progressive Democrats concluded with the centrists committing to passing the Build Back Better Act.[56] The bill ultimately went to a vote, as did a rule to vote on the larger bill once it was scored, passing 228–206; 13 Republicans joined all but six Democrats (members of "the Squad") in supporting the legislation.[57][58][59] The six Democrats who voted no stated that they opposed the legislation because it had been decoupled from the social-safety-net provisions of the Build Back Better bill.[60][61] Biden signed the bill into law at a signing ceremony on November 15.[62]

The following is the bill summary authored by the Congressional Research Service (CRS) for the INVEST in America Act, the original version which passed the House on July 1, 2021:

The specific amounts in surface transportation spending were $343 billion for roads, highways, bridges and motor safety, $109 billion for transit, and $95 billion for rail.[16] Provisions of the bill incentivized prioritizing maintenance and repair spending over spending on new infrastructure, holistically planning for all modes of transport when considering how to connect job centers to housing (including collecting data on reductions in vehicle miles traveled through transit-oriented development), and lowering speed limits to increase road safety and encourage building complete streets.
The Senate version, and the final bill, de-emphasized these incentives.[2][64][65][66][67][68] The final version restores the Superfund excise tax on certain chemicals,[69] which had expired in 1995.[70]

According to NPR, the version which passed the Senate on July 28 was set to include:

The law would also make the Minority Business Development Agency a permanent agency.[72] It authorizes the DOT to create an organization called the Advanced Research Projects Agency–Infrastructure (ARPA–I), with a broad remit over transportation research akin to DARPA, HSARPA, IARPA, ARPA-E, and ARPA-H,[73] with the first appropriations of $3.22 million being made in the Consolidated Appropriations Act, 2023.[74][75][76] Lastly, it broadens the powers of the Federal Permitting Improvement Steering Council to provide faster conflict resolution among agencies and thereby speed up infrastructure design approvals.[77]

An October 2021 report written by the REPEAT Project, a partnership between the Evolved Energy Research firm and Princeton University's ZERO Lab, said the Infrastructure Investment and Jobs Act alone will make only a small reduction in emissions, noting:[78]

We lack modeling capabilities to reflect the net effect of surface transportation investments in highways (which tend to increase on-road vehicle and freight miles traveled) and rail and public transit (which tend to reduce on-road vehicle and freight miles traveled). These significant programs are therefore not modeled in this analysis, an important limitation of our assessment of the Infrastructure Investment and Jobs Act.

The Georgetown Climate Center tried to estimate how the $599 billion investment for surface transportation in the law could impact emissions from transportation. It created two scenarios, "high emissions" and "low emissions". In the first scenario, more of the money dedicated to highways goes to building new highways, while in the second, more goes to repairing existing highways; the characteristics of the other spending areas are not very different. The first scenario increases cumulative emissions over the years 2022–2040 by more than 200 million tons, while the second decreases them by around 250 million tons.[79]

In August 2022, the Boston Consulting Group analyzed the Act and found that $41 billion of it would be spent on energy projects germane to climate action, $18 billion on similarly germane transportation projects, $18 billion on "clean tech" intended to cut hard-to-abate emissions, $0 on manufacturing, and $34 billion on other climate action provisions.[80]

The law includes the largest federal investment in public transit in history.[81] The law includes spending figures of $105 billion for public transport. It also spends $110 billion on fixing roads and bridges and includes measures for climate change mitigation and improving access for cyclists and pedestrians.[82] Increasing use of public transport and related transit-oriented development can reduce transportation emissions in human settlements by 78% and overall US emissions by 15%.[83]

The law includes spending:[84]

New or improved, affordable transportation options to increase safe mobility and connectivity for all, including for people with disabilities, through lower-carbon travel like walking, cycling, rolling, and transit that reduce greenhouse gas emissions and promote active travel.[91]

$73 billion will be spent on overhauling the energy policy of the United States.
The Boston Consulting Group projects that $41 billion of the Act will be germane to climate action in energy.[80] $11 billion of the $73 billion amount will be invested in the electrical grid's adjustment to renewable energy, with some of the money going to new loans for electric power transmission lines and required studies of future transmission needs.[92][93][94] $6 billion of that $73 billion will go to domestic nuclear power. Also of that $73 billion, the IIJA invests $45 billion in innovation and industrial policy for key emerging technologies in energy; $430 million[95]–$21 billion in new demonstration projects at the DOE; and nearly $24 billion in onshoring, supply chain resilience, and bolstering U.S.-held competitive advantages in energy; the latter amount is divided into an $8.6 billion investment in carbon capture and storage, $3 billion in battery material reprocessing, $3 billion in battery recycling, $1 billion in rare-earth minerals stockpiling, and $8 billion in new research hubs for green hydrogen.[85] The DOE has imposed grant requirements on $7 billion of the IIJA's battery and transportation spending, which are meant to promote community benefits agreements, social justice, and the formation of trade unions.[96] It created the $225 million Resilient and Efficient Codes Implementation program for cities, tribes and counties to revise building codes for electrical and heating work.[97] Finally, the law gives $4.7 billion to cap orphan wells abandoned by oil and gas companies.[86][87][88]

The law invests a total of $65 billion in advancing the U.S. quest for broadband universal service. Of this $65 billion, the law invests $42.45 billion in a new infrastructure grant program by the National Telecommunications and Information Administration called the Broadband Equity, Access, and Deployment Program, with highest priority going to communities with Internet speeds below 25 Mbps downstream and 3 Mbps upstream. $2 billion will go to the NTIA's Tribal Broadband Connectivity Program, $1 billion to a new middle-mile infrastructure program,[98] $1.44 billion in formula grants for state and territorial digital equity plan implementation, $60 million in formula grants for new digital equity plan development, and $1.25 billion in discretionary grants to "specific types of political subdivisions to implement digital equity projects".[99][100]

The law gives the USDA $5.5 billion of the $65 billion total to deliver broadband to rural communities smaller than 20,000 people, $5 million of which is obligated to utility cooperatives.[101][102]

The law invests $14.2 billion of the total in the Federal Communications Commission's Affordable Connectivity Program, the successor to the American Rescue Plan's broadband subsidies.
It gives a $30 monthly discount on internet services to qualifying low-income families ($75 on tribal lands), and provides a $100 discount on tablets, laptops and desktops for them.[103][104]The program ran out of funds on April 30, 2024.[105]The law also requires the FCC to return to statute the consumer broadband labels it developed in 2016, to revise its public comment process, and to issue rules and model policies for combating digital deployment discrimination with theUnited States Attorney General's cooperation; it further requires theGovernment Accountability Officeto deliver a report on updating broadband thresholds by November 2022.[106] To support safe drinking water programs, the law provides: For surface water programs, such aswatershed managementandpollution control, the law provides: The Act provides $8 billion for helping Western states deal with theSouthwestern North American megadrought. Spending for many related projects is included under the category "Western Water Infrastructure".[110][111] Prior to the enactment of the infrastructure law in 2021, no dedicated federal bridge funding had existed since fiscal year 2013. The law created two new programs specifically to fund bridge projects: the Bridge Formula Program (BFP) and the Bridge Investment Program (BIP).[112] With $27.5 billion over five years, the BFP distributes funds to every state, theDistrict of Columbia, andPuerto Ricobased on a formula that accounts for each state's cost to replace or rehabilitate its poor or fair condition bridges. Each state is guaranteed a minimum of $45 million per year from this program. At least 15% of each state's funds must be spent on off-system bridges (i.e., public bridges that are not on federal-aid highways), and 3% is set aside each year for bridges on tribal lands. Off-system and tribal bridge projects may be funded with a 100% federal share (as opposed to the standard 80% federal share).[113] With $12.5 billion over five years, the BIP is a competitive grant program to replace, rehabilitate, preserve, or make resiliency improvements to bridges. Half of the funding is reserved for large bridge projects, which are defined as projects that cost over $100 million.
Large projects are funded at a maximum 50% federal share, while other projects are funded at a maximum 80% federal share.[114] The infrastructure law is the largest investment in passenger rail since the 1971 creation ofAmtrak(which under the law will receive $22 billion in advance appropriations and $19 billion in fully authorized funds).[115][116]It directly appropriated $66 billion for rail over a five-year period (including the Amtrak appropriations), of which at least $18 billion is designated for expanding passenger rail service to new corridors, and it authorized an additional $36 billion.[116]Most of this funding for new passenger rail lines is implemented through the Federal-State Partnership for Intercity Passenger Rail program, which will receive $36 billion in advance appropriations and $7.5 billion in fully authorized funds.[116]The Consolidated Rail Infrastructure and Safety Improvements program will receive $5 billion in advance appropriations and $5 billion in fully authorized funds, while programs forgrade separationreplacinglevel crossingswill receive $3 billion in advance appropriations and $2.5 billion in fully authorized funds, and the Restoration and Enhancement Grant program intended to revive discontinued passenger rail services will receive $250 million in advance appropriations and $250 million in fully authorized funds.[116]Per the law's requirements, at least $12 billion is available and $3.4–4.1 billion authorized for expanding service outside of theNortheast Corridor, and $24 billion is available and $3.4–4.1 billion authorized to partially rebuild the Corridor.[117] To help plan and guide the expansion of passenger rail service beyond theNortheast Corridor, the infrastructure law also created a $1.8 billionCorridor Identification and Development Program.[118]The law also expands eligibility for a potential $23 billion in transit funding to these corridors and changes the allocation methods for state government-supported passenger rail shorter than 750 miles, to encourage states to implement more such service. The law established and authorized $1.75 billion over five years for a new All Stations Accessibility Program (ASAP).[119]This program is designed to improve the accessibility of rail system stations that were built before theAmericans with Disabilities Act of 1990(ADA). At the time of the infrastructure law's passage, over 900 transit stations were not fully ADA-compliant.[120] The law includes $1 billion over five years for Reconnecting Communities planning and construction grants intended to build marginalized community-recommended projects removing or capping highways and railroads, the first $185 million of which was awarded to 45 projects on February 28, 2023.[121]The program was later combined with the Neighborhood Equity and Access program from theInflation Reduction Actfor efficiency reasons, before the next 132 projects were given $3.3 billion in awards on March 13, 2024.[122] The Act creates the National Electric Vehicle Infrastructure (NEVI) formula program within theFederal Highway Administration, in coordination with the Department of Energy.
It provides funding of up to $4.155 billion[123]to state governments for up to 80 percent of eligible project costs, to add substantial open-accesselectric vehicle(EV)charging infrastructurealong major highway corridors.[124][125] The Infrastructure Investment and Jobs Act requires theNational Highway Traffic Safety Administration(NHTSA) to develop a safety mechanism to preventdrunk driving, which as of 2021 caused about 10,000 deaths each year in the United States. The mechanism will be rolled out in phases for retrofitting existing vehicles[126][127]and will become mandatory for all new vehicles in 2027.[128]The technology, which is being developed by NHTSA in cooperation with theAutomotive Coalition for Traffic Safetyand Swedish automobile safety companyAutoliv, consists of a breath-based and a touch-based sensor that stops the car if the driver is above the legalblood alcohol content, and will beopen-sourcedto automobile manufacturers.[129] Under the law, theUnited States Department of Transportation(DOT) will be required to develop regulations for a system that can detect distracted, fatigued, or impaired drivers.[126]The NHTSA has recommended implementing a camera-based warning system for distracted driving, similar to a technology mandated by theEuropean Unionin July 2022.[129] The law also requires the NHTSA'sNew Car Assessment Programto testcollision avoidance systemsin preparation for new federal regulations; new DOT reporting requirements for statistical data on crashes involvingmotorized scootersandelectric bicycles; new federal regulations on headlamps; research directives on technology to protect pedestrians and cyclists,advanced driver-assistance systems, federal hood and bumper regulations,smart cityinfrastructure, andself-driving cars; and a newFederal Highway Administration(FHWA) office specializing incybersecurity.[126] The infrastructure law created theWildlife CrossingsPilot Program with $350 million in funding over five years. This is a competitive grant program that funds planning and construction projects that prevent wildlife-vehicle collisions and improve the connectivity of animal habitats.[130] The law also allocated $1 billion to create the NationalCulvertRemoval, Replacement, and Restoration Grant program to improve the passage ofanadromousfish such assalmon.[131] Mitch Landrieuhas been identified as Biden's infrastructure advisor and the staffer in charge of implementing the law, while National Security AdvisorJake Sullivanhas been identified as the staffer in charge of ensuring the law does not conflict with American foreign policy interests.[132]To support the implementation of the Act, Biden issued Executive Order 14052, which establishes a task force comprising most of his Cabinet. Biden appointed Landrieu and then-United States National Economic CouncilchiefBrian Deeseas the task force co-chairs.[133][134][135] In May 2022, the Biden administration published a manual on the use of the law, aimed mainly at local authorities. The manual briefly describes the over 350 programs included in the law. Each description includes the aim of the program, its funding and possible recipients, its period of availability, and more. The programs are grouped into four categories: "Transportation", "Climate, Energy and the Environment", "Broadband", and "Other Programs".[136] By the law's second anniversary in November 2023, around $400 billion from the law, about a third of all IIJA funding, was allocated to more than 40,000 projects related toinfrastructure,transport, andsustainability.
By May 2024, the law's halfway mark, the numbers had increased to $454 billion (38 percent of the Act's funds) for more than 56,000 projects,[137]and by the third anniversary in November 2024, they had increased to $568 billion (47 percent) across 68,000 projects, leaving 53 percent of IIJA funds unallocated but showing the administration had been accelerating funding approvals.[138]Public attention has remained relatively low, due in part to slow implementation of projects.[139][140][141] The White House offers a "Map of Progress" which tracks all spending that resulted from the act.[142] According to theNew Democrat-linked think tankCenter for American Progress, the IIJA, theCHIPS and Science Act, and theInflation Reduction Acthave together catalyzed over 35,000 public and private investments.[143]EconomistsNoah Smithand Joseph Politano credited the three acts together for spurring booms in factory construction and utility jobs, as well as limiting geographic concentrations of key industries to ensure more dispersed job creation nationwide, though they raised questions about whether the three would serve to limit project delays and significantly increase labor productivity in the long term.[144][145]The Biden administration itself claimed that as of January 10, 2025, the IIJA, CaSA, and IRA together catalyzed $1 trillion in private investment (including $449 billion in electronics and semiconductors, $184 billion in electric vehicles and batteries, $215 billion in clean power, $93 billion in clean energy tech manufacturing and infrastructure, and $51 billion in heavy industry) and over $756.2 billion in public infrastructure spending (including $99 billion in energy aside from tax credits in the IRA).[146] In September 2023, White House data revealed that 60 percent of the Act's energy and transmission funding (up to that point, totaling $12.31 billion) had been awarded to states that voted majority Republican in the 2020 election cycle. Of the Act's top ten recipients, seven states had voted majority Republican, with Wyoming ($1.95 billion) and Texas ($1.71 billion) in the lead. The largest single energy project to receive Act funds was aGeneration IV reactorinKemmerer, Wyomingby the nuclear fission startupTerraPower.[147] In November 2022, the Biden administration announced it would furnish $550 million for the Energy Efficiency and Conservation Block Grant program for clean energy generators for low-income and minority communities, the first such appropriation since theRecovery Actin 2009.[148][149]The administration announced the competitive portion would award $8.8 million to 12 communities on October 12, 2023, with the next award applications due in April (later changed to October) 2024.[150][151]By June 28, 2024, the seventh tranche of funding had been awarded from the EECBG program, totaling about $150 million for 175 communities, with the June 28 tranche awarding $18.5 million to four states and 20 communities.[152] In April 2023, the Biden administration announced it would award $450 million from the Act to projects that built solar farms on abandoned coal mines.[153][154]Further support for coal communities followed.
In November 2023 the DOE's Office of Manufacturing and Energy Supply Chains announced that $275 million in IIJA grants would go to seven projects in coal communities, creating 1,500 jobs and leveraging $600 million in private investment.[155]The next October it announced $428 million in grants for 14 projects in coal communities, creating 1,900 jobs and leveraging $500 million in private investments.[156] On July 12, 2023, the Biden administration announced it would award $90 million from the Act's Resilient and Efficient Codes Implementation program[97]to 27 cities and counties to update building energy codes.[157]On March 4, 2024 the DOE announced $90 million more would be awarded from the program that October.[158] On October 24, 2023, the administration announced the first $3.46 billion in Grid Resilience and Innovation Partnerships grants from the Act's $11 billion grid rebuilding authorization would go to 58 projects in 44 states. A majority are categorized assmart gridprojects and eight are categorized as pursuing grid innovation. The investment is the largest in the American grid since theRecovery Act14 years earlier. According to Energy SecretaryJennifer Granholm, the projects could enable 35 gigawatts of renewable energy to come online by 2030, $8 billion in investments to be catalyzed, and 400microgridsto be built.[159][160]On August 6, 2024, the DOE announced the recipients of the next $2.2 billion in GRIP grants, eight grid innovation projects across 18 states adding a total of 13 gigawatts of capacity to the grid and catalyzing $10 billion in investments.[161]On October 18, 2024, the DOE announced nearly $2 billion more in GRIP grants would be awarded to 38 smaller projects in 42 states and the District of Columbia, altogether adding 7.5 gigawatts of capacity to the grid and catalyzing nearly $4.2 billion in investment.[162] On October 30, 2023, the DOE announced the results of a mandated triennial study that, for the first time in its history, included anticipation of future grid transmission needs; the Act had explicitly required this inclusion. The study found fewer infrastructure investments since 2015 and consistently high prices in the Rust Belt and California since 2018, and projected that a 20 to 128 percent increase in transmission would be needed within regions, while interregional transmission would need to increase by 25 to 412 percent. The DOE found the most potential was in better connecting Texas to the Southwest region, the Mississippi Delta and Midwest regions to the Great Plains region, and New York to New England.[93][163]The DOE also announced the first three recipients of a new $2.5 billion loan program called the Transmission Facilitation Program, created to provide funding to help build up the interstate power grid.
They are a line between Quebec, New Hampshire, and Vermont; a line between Utah and Nevada; and a line between Arizona and New Mexico.[94][92]The following April 25, the TFP announced the selection of an extension of theOne Nevada Transmission Linenorthward to Idaho.[164]The next October, the DOE announced that four projects in Maine, Oklahoma, New Mexico, and between Texas and Mississippi, were being awarded a total of $1.5 billion under the TFP; the DOE also released its first ever National Transmission Planning Study to follow up on the Needs Study, forecasting a needed national transmission capacity increase of 2.4 to 3.5 times the 2020 level by 2050 to keep costs low and facilitate the energy transition, with estimated cost savings ranging from $270 billion to $490 billion.[165] On November 16, 2023, the Biden administration announced the first recipients of $40.8 million in grants from a workforce training program the Act created, which will provide skills for industrial technology, the building trades andenergy auditing.[166][167]In December 2023 the DOE fulfilled the IIJA's requirement that the designation process forNational Interest Electric Transmission Corridorsbe revised.[168] On January 17, 2024, more than $104 million was allocated to 31 projects which are expected to increaseenergy conservationandclean energyuse in federal facilities and save $29 million in their first years. The projects advance, among other technologies,heat recovery ventilation,heat pumps,building insulation, andsolar thermal panels.[169]On February 13, the Biden administration announced thatChevron CorporationandFervo Energywould receive $74 million under the law to begin demonstrating the efficacy ofenhanced geothermal systems, at a site nearThe Geysers, California for Chevron, and a site nearMilford, Utahfor Fervo.[170]On February 27, the Department of Energy announced that under the Energy Improvements in Rural or Remote Areas program, 17 projects in rural areas across 20 states and 30 tribal communities had been approved to receive $366 million in grants to decarbonize and densify their grids. A majority of approved projects involved installation of solar panels, grid battery storage, and microgrids.[171] On March 21, the Biden administration announced that five projects in Arizona, Nevada, West Virginia, Kentucky, and Pennsylvania would receive $475 million from the Act, to build solar and geothermal power plants and energy storage on current and former mine lands.[172]On March 25, 2024, the Biden administration announced the first 33 grant recipients of the Department of Energy's $6 billion Industrial Demonstrations Program to reduce embedded emissions in factories and materials processing, of which the Infrastructure Investment and Jobs Act funds $489 million.Cementandconcreteindustry projects received $1.5 billion in total,steelmakingprojects received $1.5 billion, andchemical engineeringand refinery projects $1.2 billion. The Biden administration expects these projects to drive 1.4 million tons of carbon emissions cuts;[173]however, most of the grants had yet to be finalized by November 11.[174]On April 30, the Department of Energy announced 19 more recipients, across 12 states and 13 tribal communities, of $78 million in grants from the Act's Energy Improvements in Rural or Remote Areas program, with a majority of projects involving solar power.[175] On May 13, 2024, theFederal Energy Regulatory Commissionpublished Order No.
1977, clarifying a provision in the Act by stating that the Commission has 'backstop siting authority' in case a state agency fails to issue a construction permit for a new transmission project.[176] On September 5, 2024, the Energy Department announced the awarding of over $430 million in incentives to 293 existing hydroelectricity projects, under the Act's Section 40333.[177][178]On September 20, the DOE announced it would award $3 billion to, and leverage $13 billion in investments in, 25 battery manufacturing and supply chain projects, more than half of which had pledgedProject Labor Agreements. The projects were expected to create 12,000 new jobs across 14 states.[179] In December 2024, the DOE announced that the first three new NIETCs designated under the IIJA's revised process would move closer toward full eligibility for TFP funds: a corridor on the bed of Lake Erie between Ontario and Pennsylvania, a connector between Colorado, New Mexico and Oklahoma, and a connector between the Dakotas.[180]Notably, the sponsor of the Kansas-Indiana Grain Belt Express requested that it be taken off the eligibility list because the project had likely already secured sufficient funding.[181] The Biden administration awarded $7 billion of the $8 billion appropriation to seven hydrogen research hubs, based in California, eastern Washington, southeastern Pennsylvania, southeastern Texas, Illinois, Minnesota, and West Virginia and affecting projects there and in eight more states, on October 13, 2023. The remaining $1 billion will be used for demand-side economic policies to drive growth in hydrogen use.[182][183] Several criticisms of the hubs emerged. Jeff St. John, editor in chief ofCanary Media, noted that while the Act does mandate that the DOE create a clean hydrogen definitional standard (which as of October 2023 the DOE had not published), and while the DOE selected applicants who pledgedcommunity benefits agreements, the Act does not prescribe metrics or guidelines for measuring emissions from these hubs.[184]Researcher Hannah Story Brown of the watchdog group Revolving Door Project noted that the majority of hub projects announced are powered by fossil fuels, not renewable energy.[185]Staffers for California GovernorGavin Newsomrequested that the Treasury Department exempt the state's hub from emissions restrictions, citing poor alignment with the state's plans for100% renewable energy.[186] On the first anniversary of the October 2023 announcement, St. John reported that the Californian, Washingtonian, and West Virginian hub collaboratives were the farthest along in working towards finalizing their funding, and the DOE's Office of Clean Energy Demonstrations was optimistic, but also that all projects were lagging behind in transparency and community outreach, with several projects seeing corporate partners withdraw.[187]Jael Holzmanof the outlet Heatmap News reported soon after that experts in energy markets pointed to a lack of coordination between the Hub program and the IRA's hydrogen tax credits, price increases for electrolyzers, and the historically low cost of natural gas as additional reasons for the withdrawal of investment in Hub projects.[188] Later in 2024, the DOE selected the hubs based in California, Washington, Illinois, Texas and West Virginia for near-final deals that together would cost a total of $5.3 billion.
The final two hubs based in Minnesota and Pennsylvania were not far behind in negotiations.[189] The Act appropriates $3.5 billion to a new RegionalDirect Air CaptureHubs program as part of its $8.6 billion carbon capture and storage investment. In August 2023, the DOE selected two projects (leaving two more to be selected), together worth $1.2 billion. The projects together will remove 2 million metric tons of carbon dioxide and create 4,800 jobs.[190][191] In September 2024, the DOE announced it intended to fund up to $1.8 billion more in direct air capture projects, with the full solicitation released on December 17.[192][193] By April 2024, the Affordable Connectivity Program had seen 23 million households enroll in it.[105]As of June 2024, the program had ended. In May 2024, the Biden administration announced $3 billion in funding from the law had been allotted to replace lead water pipes.[194] The bill contains $27 billion in funding for specific, concrete programs within theFederal Highway Administrationthat are already implemented to reducegreenhouse gas emissionsfrom the transportation sector, all of which was allotted in November 2023. For example, $7.2 billion is allocated to the "Transportation Alternatives Set-Aside Program" (creating more possibilities forbikingandwalking), $6.4 billion to the "Carbon Reduction Program" (reducing emissions from highways), $69 million to the "Transit-Oriented Development Program" (enhancingtransit-oriented developmentand improving land use) and more.[195]However, because states have wide discretion over use of funds from other highway programs under the Act, which leads to states with fast population growth investing more in highway expansion, the Act has been projected byTransportation for Americato increase carbon emissions by 77 million metric tonnes by 2040 compared to a no-Act baseline.[196] On December 4, 2023, the Department of Energy released a proposed rule clarifying the definition of "foreign entities of concern" under the Act's car battery materials provisions, in line with theInflation Reduction Act's Section 30D.[197] On December 8, 2023, the Biden administration announced it would award $8.2 billion from the Act's Federal-State Partnership for Intercity Passenger Rail Program to ten construction projects, includingBrightline West, theSoutheast High Speed Rail Corridor, theKeystone Corridor,California High-Speed Rail, theDowneasterandEmpire Builderservices, a partial rebuilding ofChicago Union Station, and a bridge replacement nearWillowon theAlaska Railroad. It also announced the first results of the Act'sCorridor ID Program, with $34.5 million being distributed to 15 existing rail upgrades, 47 extensions of rail corridors, and 7 newhigh-speed railstudies.[198][199] The bill included $7.5 billion for electric vehicle charging. As of December 2024, 37 charging stations with a total of 226 charging spots had been built.[200] On April 2, 2024, an award announcement was made for the transit-oriented development program, which was expanded under the Act.[201] In 2023, seven states reached an agreement aiming to preserve theColorado Riverwater system from collapse due to poor management and climate change. The United States is heavily dependent on the river for power generation, drinking water, agriculture, wildlands restoration, and native cultural practices. Some states will reduce water use, receiving compensation for it (totaling $1.2 billion) from the federal government.
Many other projects for preserving the river, such aswater recyclingandrainwater harvesting, are also being advanced. The funding comes from the Infrastructure Investment and Jobs Act and theInflation Reduction Act.[202][203] In February 2024, $157 million was allocated to 206 projects linked toecosystem restoration. The projects are spread across the United States and are being advanced in cooperation withstates,tribes,nonprofits, andterritories. More than half of them benefit underserved communities. The projects include cleaning uppollution, restoring Central U.S.grasslandsincludingbisonpopulations, protecting birds inHawaiifrom extinction, stoppinginvasive species, restoringsalmonpopulations inAlaska, restoringsagebrush steppesand more. On this occasionUnited States Secretary of the InteriorDeb Haalandremarked, "Nature is our best ally in the fight against climate change."[204] The bill provides around $7 billion to theFederal Emergency Management Agencyfor helping communities adapt to different climate-related disasters such ashurricanes,droughts, andheat waves. In August 2023, $3 billion was allocated to different related projects, including 124 projects related to resilient infrastructure and communities (located in "38 states, one tribe and the District of Columbia") and 149 projects related to protection from flooding (located in "28 states and the District of Columbia"). Of the infrastructure-related projects, 64 usenature-based solutions. Some of the most vulnerable communities will receive help for free.[205] In November 2023, the Biden administration announced that $300 million from FEMA's new Swift Current Initiative created by the Act would go to helping communities impacted by floods recover and grow their resiliency.[206][207]It also announced that it would award "$50 million in project awards to improve the reliability of water resources and support ecosystem health in Western states, along with an additional $50 million funding opportunity for water conservation projects and hydropower upgrades."[206] In March 2024, $120 million was delivered to helpindigenous peoplesin the U.S. adapt to climate change. Of this number, $26 million was allocated from the Infrastructure Investment and Jobs Act. The efforts will include planning, ecosystem management and restoration, planned relocation, and promotion and use of indigenous knowledge.[208][209] In January 2025, the incoming Trump administration froze selected IIJA grants. However, that April, federal judge Mary McElroy ruled, in a case brought by Rhode Island conservation groups, that the IIJA grants had to be unfrozen, citing constitutionality concerns.[210] Around $1.1 billion was allocated for restoration of theEvergladesecosystems.[211]In March 2024,Marco Rubio, supported by a bipartisan group of lawmakers, demanded $725 million more, as rising water levels inLake Okeechobeecreated additional problems.[212] In October 2023, $450 million (including $275 million from the bill) was delivered to clean theMilwaukee Riverestuary ofpolychlorinated biphenyls, heavy metals, and oil products. This pollution had long had negative effects on surrounding communities.
This is the most funding ever distributed by a Great Lakes cleanup program.[213] Republican senators balked at Biden's tandem plan to pass both a bipartisan plan and a separate Democratic-supported reconciliation bill.[214]McConnell criticized Biden for "caving" to his own party by issuing an "ultimatum" that he would not sign the bipartisan bill without a separate reconciliation package.[215]After Biden walked back his comments, Republican senators restated their confidence in the bipartisan bill.[26]AYahoo! News/YouGovpoll conducted in late June found that 60% of Republican voters favored the plan.[216] On June 20, 2021, SenatorBernie Sandersstated that he would not support paying for the bill via a proposed gas tax or a surcharge on electric vehicles.[217] On June 28, 2021,Sunrise Movementand several progressive representatives performed a protest at the White House in criticism of the size and scope of Biden's Civilian Climate Corps. Several protesters were arrested for blocking White House entrances.[218] On July 6, the 58-member bipartisan HouseProblem Solvers Caucusstated their support for the bipartisan bill and called for an expeditious and independent House vote.[219]On July 21, a group of 65 former governors and mayors endorsed the plan.[220] Ahead of a procedural vote on August 7, former presidentDonald Trumpattacked the bill and said he would support Republicanprimarychallengers of senators who vote for it.[40]He reiterated his criticisms following the bill's passage by Congress.[221] Following the bill's passage by Congress in November, Trump criticized it as containing "only 11% for real Infrastructure", calling it "the Elect Democrats in 2022/24 Act", and attacked Republicans who had supported it, saying in particular that McConnell had lent "lifelines to those who are destroying" the country.[221]Various House Republicans also criticized the 13 Republican representatives who voted for the bill.[222]Lauren Boebertdescribed them as "RINOS" (Republican in Name Only).[222]Mary Millercalled them "spineless" and said they helped enact a "socialist takeover".[222]Marjorie Taylor Greenecalled them "traitors" and "American job & energy killers", who "are China-First and America-Last", because they "agree with Globalist Joe [Biden] that America must depend on China to drive" electric vehicles.[223]Gary Palmerwas criticized for touting funding for the Birmingham Northern Beltline that he added to the bill, while neglecting to mention that he voted against the final bill.[224]Paul Gosarwas also criticized for taking credit for the bill's funding forKingman Airportdespite voting against it.[225]Several Republican governors who condemned the bill, includingKristi NoemofSouth DakotaandGreg GianforteofMontana, accepted the funding and directed it to various programs.[226] On June 22, theU.S. Chamber of Commerce,Business RoundtableandNo Labelsmade a joint statement urging the president to consider a bipartisan bill.[227]The former two groups have lobbied for the plan not to raise corporate taxes, and to instead impose user fees and borrow from other federal funds.[227] According to an early AugustHarvard CAPS-Harris Pollsurvey, about 72% of voters support the bill.[228] On September 24, leaders from theU.S. Conference of Mayors, theNational League of Cities, theNational Urban League, and other Black American advocacy groups signaled their support for the bill.[72] On September 25,Peter J. 
Wallisonauthored an opinion piece forThe Hillin which he argued that Republicans should try to pass the bipartisan bill to prevent it from being used as further leverage to pass the reconciliation bill.[229]Subsequently, Republican House leaders formally opposed the bipartisan bill.[47] "Historians, economists and engineers interviewed by The Associated Press welcomed Biden's efforts. But they stressed that $1 trillion was not nearly enough to overcome the government's failure for decades to maintain and upgrade the country's infrastructure."[230] The think tankTransportation for Americapraised the House version of the bill,[64]but heavily criticized the Senate version for its shortcomings on safety, climate resilience, long-term transit and rail funding and transit-oriented development, and maintenance spending, though it later noted that the final version that became law made small steps to address them.[66][65][67][68] The nuclear industry favored the legislation as it signaled continued federal government support.[231] Polling from Third Way and Impact Research released in July 2022 showed that only 24% of voters were aware the bill was signed into law, despite House Democrats holding over 1,000 events to promote it.[232] Reception to the drunk driver detection and distraction detection requirements has been mixed.Mothers Against Drunk Drivingpraised the requirement as "the beginning of the end of drunk driving".[233]In contrast, theAmerican Civil Liberties Unionhas expressed concern that the technology developed could pose a severe privacy risk to drivers if it collects or stores unnecessary data.[234]Writing forVice, Aaron Gordon also argued that the technology is likely to have an unacceptably high false-positive rate: existingignition interlock devicesthat are sometimes installed after drunk driving convictions are prone to catastrophic failures.[235] In October 2023, theNatural Resources Defense Councilcriticized the IIJA's hydrogen hubs program for its lack of transparency, emphasizing the need for detailed technical reports, public hearings to thwart localNIMBYismand skepticism of hydrogen, and incorporation of environmental justice advocates into project leadership.[236]
https://en.wikipedia.org/wiki/Infrastructure_Investment_and_Jobs_Act#Overview
Incomputer graphics,tessellationis the dividing of datasets ofpolygons(sometimes calledvertex sets) presenting objects in a scene into suitable structures forrendering. Especially forreal-time rendering, data istessellated into triangles, for example inOpenGL 4.0andDirect3D 11.[1][2] A key advantage of tessellation forrealtime graphicsis that it allows detail to be dynamically added and subtracted from a3D polygon meshand its silhouette edges based on control parameters (often camera distance). In previously leading realtime techniques such asparallax mappingandbump mapping, surface details could be simulated at the pixel level, but silhouette edge detail was fundamentally limited by the quality of the original dataset.[3] In theDirect3D 11pipeline (a part of DirectX 11), thegraphics primitiveis thepatch.[4]Thetessellatorgenerates a triangle-basedtessellationof the patch according to tessellation parameters such as theTessFactor, which controls the degree of fineness of themesh. The tessellation, along withshaderssuch as aPhong shader, allows for producing smoother surfaces than would be generated by the original mesh.[4]By offloading the tessellation process onto theGPUhardware, smoothing can be performed in real time. Tessellation can also be used for implementingsubdivision surfaces,level of detailscaling and finedisplacement mapping.[5]OpenGL 4.0uses a similar pipeline, where tessellation into triangles is controlled by theTessellation Control Shaderand a set of four tessellation parameters.[6] Incomputer-aided design, the constructed design is represented by aboundary representationtopological model, where analytical 3D surfaces and curves, limited to faces, edges, and vertices, constitute a continuous boundary of a 3D body. Arbitrary 3D bodies are often too complicated to analyze directly, so they are approximated (tessellated) with ameshof small, easy-to-analyze pieces of 3D volume—usually either irregulartetrahedraor irregularhexahedra. The mesh is used forfinite element analysis. The mesh of a surface is usually generated per individual face and edge (with edges approximated aspolylines) so that the original limit vertices are included in the mesh. To ensure that the approximation of the original surface suits the needs of further processing, three basic parameters are usually defined for the surface mesh generator: An algorithm generating a mesh is typically controlled by the above three and other parameters. Some types of computer analysis of a constructed design require anadaptive mesh refinement, which is a mesh made finer (using stronger parameters) in regions where the analysis needs more detail.[1][2]
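The camera-distance detail control described above can be illustrated with a small CPU-side sketch in Python. This is only a demonstration of the idea, with illustrative names (subdivide, tessellate); real engines perform the refinement on the GPU through the Direct3D 11 hull/domain shaders or the OpenGL 4.0 tessellation control/evaluation shaders mentioned above:

    import math

    def midpoint(a, b):
        # Midpoint of two 3D points.
        return tuple((a[i] + b[i]) / 2.0 for i in range(3))

    def subdivide(triangles):
        # Split each triangle into four by inserting edge midpoints.
        out = []
        for v0, v1, v2 in triangles:
            m01, m12, m20 = midpoint(v0, v1), midpoint(v1, v2), midpoint(v2, v0)
            out += [(v0, m01, m20), (m01, v1, m12), (m20, m12, v2), (m01, m12, m20)]
        return out

    def tessellate(triangles, camera_pos, base_level=4):
        # Choose a refinement level from camera distance, a stand-in for a
        # TessFactor-style control parameter: nearer meshes get more triangles.
        centroid = [sum(v[i] for tri in triangles for v in tri) / (3 * len(triangles))
                    for i in range(3)]
        level = max(0, base_level - int(math.log2(1 + math.dist(camera_pos, centroid))))
        for _ in range(level):
            triangles = subdivide(triangles)
        return triangles

    # One triangle viewed from nearby is refined 3 times into 4**3 = 64 triangles.
    mesh = [((0, 0, 0), (1, 0, 0), (0, 1, 0))]
    print(len(tessellate(mesh, camera_pos=(0, 0, 1))))

Each subdivision level quadruples the triangle count, which is one reason hardware tessellators expose fineness as a tunable per-patch parameter rather than a fixed whole-mesh constant.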
https://en.wikipedia.org/wiki/Tessellation_(computer_graphics)
Theforward–backward algorithmis aninferencealgorithmforhidden Markov modelswhich computes theposteriormarginalsof all hidden state variables given a sequence of observations/emissions $o_{1:T} := o_1, \dots, o_T$, i.e. it computes, for all hidden state variables $X_t \in \{X_1, \dots, X_T\}$, the distribution $P(X_t \mid o_{1:T})$. This inference task is usually calledsmoothing. The algorithm makes use of the principle ofdynamic programmingto efficiently compute the values that are required to obtain the posterior marginal distributions in two passes. The first pass goes forward in time while the second goes backward in time; hence the nameforward–backward algorithm. The termforward–backward algorithmis also used to refer to any algorithm belonging to the general class of algorithms that operate on sequence models in a forward–backward manner. In this sense, the descriptions in the remainder of this article refer only to one specific instance of this class. In the first pass, the forward–backward algorithm computes a set of forward probabilities which provide, for all $t \in \{1, \dots, T\}$, the probability of ending up in any particular state given the first $t$ observations in the sequence, i.e. $P(X_t \mid o_{1:t})$. In the second pass, the algorithm computes a set of backward probabilities which provide the probability of observing the remaining observations given any starting point $t$, i.e. $P(o_{t+1:T} \mid X_t)$. These two sets of probability distributions can then be combined to obtain the distribution over states at any specific point in time given the entire observation sequence: $P(X_t \mid o_{1:T}) = P(X_t \mid o_{1:t}, o_{t+1:T}) \propto P(o_{t+1:T} \mid X_t) \, P(X_t \mid o_{1:t})$. The last step follows from an application ofBayes' ruleand theconditional independenceof $o_{t+1:T}$ and $o_{1:t}$ given $X_t$. As outlined above, the algorithm involves three steps: computing the forward probabilities, computing the backward probabilities, and computing the smoothed values. The forward and backward steps may also be called "forward message pass" and "backward message pass"; these terms are due to themessage-passingused in generalbelief propagationapproaches. At each single observation in the sequence, probabilities to be used for calculations at the next observation are computed. The smoothing step can be calculated simultaneously during the backward pass. This step allows the algorithm to take into account any past observations of output for computing more accurate results. The forward–backward algorithm can be used to find the most likely state for any point in time. It cannot, however, be used to find the most likely sequence of states (seeViterbi algorithm). The following description will use matrices of probability values instead of probability distributions. However, it is important to note that the forward-backward algorithm can generally be applied to both continuous and discrete probability models. We transform the probability distributions related to a givenhidden Markov modelinto matrix notation as follows. The transition probabilities $P(X_t \mid X_{t-1})$ of a given random variable $X_t$ representing all possible states in the hidden Markov model will be represented by the matrix $\mathbf{T}$, where the column index $j$ represents the target state and the row index $i$ represents the start state.
A transition from row-vector state $\pi_t$ to the incremental row-vector state $\pi_{t+1}$ is written as $\pi_{t+1} = \pi_t \mathbf{T}$. The example below represents a system where the probability of staying in the same state after each step is 70% and the probability of transitioning to the other state is 30%. The transition matrix is then: $\mathbf{T} = \begin{pmatrix} 0.7 & 0.3 \\ 0.3 & 0.7 \end{pmatrix}$. In a typical Markov model, we would multiply a state vector by this matrix to obtain the probabilities for the subsequent state. In a hidden Markov model the state is unknown, and we instead observe events associated with the possible states. An event matrix of the form: $\mathbf{B} = \begin{pmatrix} 0.9 & 0.1 \\ 0.2 & 0.8 \end{pmatrix}$ provides the probabilities for observing events given a particular state. In the above example, event 1 will be observed 90% of the time if we are in state 1 while event 2 has a 10% probability of occurring in this state. In contrast, event 1 will only be observed 20% of the time if we are in state 2 and event 2 has an 80% chance of occurring. Given an arbitrary row-vector describing the state of the system ($\pi$), the probability of observing event $j$ is then the $j$-th entry of the product $\pi \mathbf{B}$. The probability of a given state leading to the observed event $j$ can be represented in matrix form by multiplying the state row-vector ($\pi$) with an observation matrix ($\mathbf{O_j} = \mathrm{diag}(B_{*,o_j})$) containing only diagonal entries. Continuing the above example, the observation matrix for event 1 would be: $\mathbf{O_1} = \begin{pmatrix} 0.9 & 0 \\ 0 & 0.2 \end{pmatrix}$. This allows us to calculate the new unnormalized probabilities state vector $\pi'$ through Bayes rule, weighting by the likelihood that each element of $\pi$ generated event 1, as: $\pi' = \pi \mathbf{O_1}$. We can now make this general procedure specific to our series of observations. Assuming an initial state vector $\pi_0$ (which can be optimized as a parameter through repetitions of the forward-backward procedure), we begin with $\mathbf{f_{0:0}} = \pi_0$, then update the state distribution and weight by the likelihood of the first observation: $\mathbf{f_{0:1}} = \pi_0 \mathbf{T} \mathbf{O_{o_1}}$. This process can be carried forward with additional observations using: $\mathbf{f_{0:t}} = \mathbf{f_{0:t-1}} \mathbf{T} \mathbf{O_{o_t}}$. This value is the forward unnormalizedprobability vector. The $i$'th entry of this vector provides: $\mathbf{f_{0:t}}(i) = P(o_1, \dots, o_t, X_t = x_i \mid \pi_0)$. Typically, we will normalize the probability vector at each step so that its entries sum to 1. A scaling factor is thus introduced at each step such that: $\mathbf{\hat{f}_{0:t}} = c_t^{-1} \, \mathbf{\hat{f}_{0:t-1}} \mathbf{T} \mathbf{O_{o_t}}$, where $\mathbf{\hat{f}_{0:t-1}}$ represents the scaled vector from the previous step and $c_t$ represents the scaling factor that causes the resulting vector's entries to sum to 1. The product of the scaling factors is the total probability for observing the given events irrespective of the final states: $P(o_1, \dots, o_t) = \prod_{s=1}^{t} c_s$. This allows us to interpret the scaled probability vector as: $\mathbf{\hat{f}_{0:t}}(i) = P(X_t = x_i \mid o_1, \dots, o_t)$. We thus find that the product of the scaling factors provides us with the total probability for observing the given sequence up to time $t$ and that the scaled probability vector provides us with the probability of being in each state at this time. A similar procedure can be constructed to find backward probabilities. These intend to provide the probabilities: $\mathbf{b_{t:T}}(i) = P(o_{t+1}, \dots, o_T \mid X_t = x_i)$. That is, we now want to assume that we start in a particular state ($X_t = x_i$), and we are now interested in the probability of observing all future events from this state.
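Before turning to the backward recursion in detail, the scaled forward recursion just derived can be written compactly in NumPy. This is a minimal sketch under this section's notation (T is the row-stochastic transition matrix and B[i, o] the probability of emitting observation o from state i; the name forward_pass is illustrative):

    import numpy as np

    def forward_pass(pi0, T, B, observations):
        # Scaled forward recursion. Returns f_hat[0..T], where f_hat[t] is the
        # posterior P(X_t | o_{1:t}) as a row vector, plus the scaling factors
        # c[1..T], whose running product equals P(o_1, ..., o_t).
        f_hat = [np.asarray(pi0, dtype=float)]
        c = []
        for o in observations:
            unnormalized = f_hat[-1] @ T @ np.diag(B[:, o])   # f T O_{o_t}
            c.append(unnormalized.sum())                      # scaling factor c_t
            f_hat.append(unnormalized / c[-1])
        return f_hat, c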
Since the initial state is assumed as given (i.e. the prior probability of this state = 100%), we begin with the column vector of ones: $\mathbf{b_{T:T}} = (1, 1, \dots, 1)^{\mathsf{T}}$. Notice that we are now using acolumn vectorwhile the forward probabilities used row vectors. We can then work backwards using: $\mathbf{b_{t:T}} = \mathbf{T} \mathbf{O_{o_{t+1}}} \mathbf{b_{t+1:T}}$. While we could normalize this vector as well so that its entries sum to one, this is not usually done. Noting that each entry contains the probability of the future event sequence given a particular initial state, normalizing this vector would be equivalent to applying Bayes' theorem to find the likelihood of each initial state given the future events (assuming uniform priors for the final state vector). However, it is more common to scale this vector using the same $c_t$ constants used in the forward probability calculations. $\mathbf{b_{T:T}}$ is not scaled, but subsequent operations use: $\mathbf{\hat{b}_{t:T}} = c_{t+1}^{-1} \mathbf{T} \mathbf{O_{o_{t+1}}} \mathbf{\hat{b}_{t+1:T}}$, where $\mathbf{\hat{b}_{t+1:T}}$ represents the previous, scaled vector. The result is that the scaled probability vector is related to the backward probabilities by: $\mathbf{\hat{b}_{t:T}}(i) = \mathbf{b_{t:T}}(i) \big/ \prod_{s=t+1}^{T} c_s$. This is useful because it allows us to find the total probability of being in each state at a given time, $t$, by multiplying these values: $\gamma_t(i) = P(X_t = x_i \mid o_{1:T}) = \frac{\mathbf{f_{0:t}}(i) \cdot \mathbf{b_{t:T}}(i)}{P(o_{1:T})} = \mathbf{\hat{f}_{0:t}}(i) \cdot \mathbf{\hat{b}_{t:T}}(i)$. To understand this, we note that $\mathbf{f_{0:t}}(i) \cdot \mathbf{b_{t:T}}(i)$ provides the probability for observing the given events in a way that passes through state $x_i$ at time $t$. This probability includes the forward probabilities covering all events up to time $t$ as well as the backward probabilities which include all future events. This is the numerator we are looking for in our equation, and we divide by the total probability of the observation sequence to normalize this value and extract only the probability that $X_t = x_i$. These values are sometimes called the "smoothed values" as they combine the forward and backward probabilities to compute a final probability. The values $\gamma_t(i)$ thus provide the probability of being in each state at time $t$. As such, they are useful for determining the most probable state at any time. The term "most probable state" is somewhat ambiguous. While the most probable state is the most likely to be correct at a given point, the sequence of individually probable states is not likely to be the most probable sequence. This is because the probabilities for each point are calculated independently of each other. They do not take into account the transition probabilities between states, and it is thus possible to get states at two moments (t and t+1) that are both most probable at those time points but which have very little probability of occurring together, i.e. $P(X_t = x_i, X_{t+1} = x_j) \neq P(X_t = x_i) \, P(X_{t+1} = x_j)$. The most probable sequence of states that produced an observation sequence can be found using theViterbi algorithm. This example takes as its basis the umbrella world inRussell & Norvig 2010 Chapter 15 p. 567, in which we would like to infer the weather given observation of another person either carrying or not carrying an umbrella. We assume two possible states for the weather: state 1 = rain, state 2 = no rain. We assume that the weather has a 70% chance of staying the same each day and a 30% chance of changing. The transition probabilities are then: $\mathbf{T} = \begin{pmatrix} 0.7 & 0.3 \\ 0.3 & 0.7 \end{pmatrix}$. We also assume each state generates one of two possible events: event 1 = umbrella, event 2 = no umbrella.
The conditional probabilities for these occurring in each state are given by the probability matrix: $\mathbf{B} = \begin{pmatrix} 0.9 & 0.1 \\ 0.2 & 0.8 \end{pmatrix}$. We then observe the following sequence of events: {umbrella, umbrella, no umbrella, umbrella, umbrella}, which we will represent in our calculations by the diagonal observation matrices $\mathbf{O_1} = \mathbf{O_2} = \mathbf{O_4} = \mathbf{O_5} = \mathrm{diag}(0.9, 0.2)$ and $\mathbf{O_3} = \mathrm{diag}(0.1, 0.8)$. Note that $\mathbf{O_3}$ differs from the others because of the "no umbrella" observation. In computing the forward probabilities we begin with: $\mathbf{f_{0:0}} = \pi_0 = (0.5, 0.5)$, which is our prior state vector indicating that we don't know which state the weather is in before our observations. While a state vector should be given as a row vector, we will use the transpose of the matrix so that the calculations below are easier to read. Our calculations are then written in the form: $(\mathbf{\hat{f}_{0:t}})^{\mathsf{T}} = c_t^{-1} \mathbf{O_{o_t}} \mathbf{T}^{\mathsf{T}} (\mathbf{\hat{f}_{0:t-1}})^{\mathsf{T}}$ instead of: $\mathbf{\hat{f}_{0:t}} = c_t^{-1} \mathbf{\hat{f}_{0:t-1}} \mathbf{T} \mathbf{O_{o_t}}$. Notice that thetransformation matrixis also transposed, but in our example the transpose is equal to the original matrix. Performing these calculations and normalizing the results yields the scaled forward probability vector for each day. For the backward probabilities, we start with: $\mathbf{b_{5:5}} = (1, 1)^{\mathsf{T}}$. We are then able to compute the backward vectors (using the observations in reverse order and normalizing with different constants). Finally, we will compute the smoothed probability values. These results must also be scaled so that their entries sum to 1 because we did not scale the backward probabilities with the $c_t$'s found earlier. The backward probability vectors above thus actually represent the likelihood of each state at time $t$ given the future observations. Because these vectors are proportional to the actual backward probabilities, the result has to be scaled an additional time. Notice that the value of $\gamma_0$ is equal to $\mathbf{\hat{b}_{0:5}}$ and that $\gamma_5$ is equal to $\mathbf{\hat{f}_{0:5}}$. This follows naturally because both $\mathbf{\hat{f}_{0:5}}$ and $\mathbf{\hat{b}_{0:5}}$ begin with uniform priors over the initial and final state vectors (respectively) and take into account all of the observations. However, $\gamma_0$ will only be equal to $\mathbf{\hat{b}_{0:5}}$ when our initial state vector represents a uniform prior (i.e. all entries are equal). When this is not the case $\mathbf{\hat{b}_{0:5}}$ needs to be combined with the initial state vector to find the most likely initial state. We thus find that the forward probabilities by themselves are sufficient to calculate the most likely final state. Similarly, the backward probabilities can be combined with the initial state vector to provide the most probable initial state given the observations. The forward and backward probabilities need only be combined to infer the most probable states between the initial and final points. The calculations above reveal that the most probable weather state on every day except for the third one was "rain". They tell us more than this, however, as they now provide a way to quantify the probabilities of each state at different times. Perhaps most importantly, our value at $\gamma_5$ quantifies our knowledge of the state vector at the end of the observation sequence. We can then use this to predict the probability of the various weather states tomorrow as well as the probability of observing an umbrella.
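The umbrella example can be checked numerically. Continuing the forward_pass sketch above, the backward recursion reuses the forward scaling factors, and the smoothed values are the entrywise products; with this scaling the products already form the posteriors, so the final normalization only guards against floating-point drift:

    def backward_pass(T, B, observations, c):
        # Backward recursion b_t = T O_{o_{t+1}} b_{t+1}, scaled with the
        # forward factors c; b_hat[T] is an unscaled vector of ones.
        b_hat = [np.ones(T.shape[0])]
        for o, c_t in zip(reversed(observations), reversed(c)):
            b_hat.insert(0, (T @ np.diag(B[:, o]) @ b_hat[0]) / c_t)
        return b_hat

    def smooth(f_hat, b_hat):
        # gamma_t(i) proportional to f_hat_t(i) * b_hat_t(i).
        return [f * b / np.sum(f * b) for f, b in zip(f_hat, b_hat)]

    # Umbrella world, state order (rain, no rain); observation 0 = umbrella.
    T = np.array([[0.7, 0.3], [0.3, 0.7]])
    B = np.array([[0.9, 0.1], [0.2, 0.8]])
    observations = [0, 0, 1, 0, 0]      # umbrella, umbrella, no umbrella, ...
    f_hat, c = forward_pass(np.array([0.5, 0.5]), T, B, observations)
    gammas = smooth(f_hat, backward_pass(T, B, observations, c))
    for t, g in enumerate(gammas):
        print(t, np.round(g, 4))        # P(rain) > 0.5 on every day except t = 3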
The forward–backward algorithm runs with time complexity $O(S^2 T)$ in space $O(ST)$, where $T$ is the length of the time sequence and $S$ is the number of symbols in the state alphabet.[1]The algorithm can also run in constant space with time complexity $O(S^2 T^2)$ by recomputing values at each step.[2]For comparison, abrute-force procedurewould generate all possible $S^T$ state sequences and calculate the joint probability of each state sequence with the observed series of events, which would havetime complexity $O(T \cdot S^T)$. Brute force is intractable for realistic problems, as the number of possible hidden node sequences typically is extremely high. An enhancement to the general forward-backward algorithm, called theIsland algorithm, trades smaller memory usage for longer running time, taking $O(S^2 T \log T)$ time and $O(S \log T)$ memory. Furthermore, it is possible to invert the process model to obtain an $O(S)$ space, $O(S^2 T)$ time algorithm, although the inverted process may not exist or beill-conditioned.[3] In addition, algorithms have been developed to compute $\mathbf{f_{0:t+1}}$ efficiently through online smoothing such as the fixed-lag smoothing (FLS) algorithm.[4] Given an HMM (just like in theViterbi algorithm) represented in thePython programming language, we can write the implementation of the forward-backward algorithm like this: The function fwd_bkw takes the following arguments: x is the sequence of observations, e.g. ['normal', 'cold', 'dizzy']; states is the set of hidden states; a_0 is the start probability; a are the transition probabilities; and e are the emission probabilities. For simplicity of code, we assume that the observation sequence x is non-empty and that a[i][j] and e[i][j] are defined for all states i, j. In the running example, the forward-backward algorithm is used as follows:
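A sketch of fwd_bkw consistent with that description, using nested dictionaries for the probability tables; the healthy/fever HMM at the end is illustrative only, with assumed numbers standing in for the running example's tables:

    def fwd_bkw(x, states, a_0, a, e):
        # Forward-backward with dict-based tables. Returns, for each
        # position t in x, the posterior distribution P(X_t | x).
        T = len(x)
        # Forward pass: fwd[t][s] = P(x_1..x_t, X_t = s).
        fwd = []
        for t in range(T):
            curr = {}
            for s in states:
                prev = a_0[s] if t == 0 else sum(fwd[t - 1][k] * a[k][s] for k in states)
                curr[s] = e[s][x[t]] * prev
            fwd.append(curr)
        p_x = sum(fwd[T - 1][s] for s in states)   # total evidence P(x)

        # Backward pass: bkw[t][s] = P(x_{t+1}..x_T | X_t = s).
        bkw = [dict() for _ in range(T)]
        bkw[T - 1] = {s: 1.0 for s in states}
        for t in range(T - 2, -1, -1):
            bkw[t] = {s: sum(a[s][k] * e[k][x[t + 1]] * bkw[t + 1][k] for k in states)
                      for s in states}

        # Smoothing: P(X_t = s | x) = fwd[t][s] * bkw[t][s] / P(x).
        return [{s: fwd[t][s] * bkw[t][s] / p_x for s in states} for t in range(T)]

    # Illustrative usage (the probability values below are assumptions):
    states = ('Healthy', 'Fever')
    a_0 = {'Healthy': 0.6, 'Fever': 0.4}
    a = {'Healthy': {'Healthy': 0.7, 'Fever': 0.3},
         'Fever':   {'Healthy': 0.4, 'Fever': 0.6}}
    e = {'Healthy': {'normal': 0.5, 'cold': 0.4, 'dizzy': 0.1},
         'Fever':   {'normal': 0.1, 'cold': 0.3, 'dizzy': 0.6}}
    for dist in fwd_bkw(['normal', 'cold', 'dizzy'], states, a_0, a, e):
        print({s: round(p, 4) for s, p in dist.items()})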
https://en.wikipedia.org/wiki/Forward-backward_algorithm
Anintermodal container, often called ashipping containerorfreight container(or simply a "container"), is a large metal crate designed and built forintermodal freight transport, meaning these containers can be used across differentmodes of transport– such as fromshipstotrainstotrucks– without unloading and reloading their cargo.[1]Intermodal containers are primarily used to store and transport materials and products efficiently and securely in the globalcontainerizedintermodal freight transport system, but smaller numbers are in regional use as well. A container is, in effect, aboxcarthat does not have wheels. Based on size alone, up to 95% of intermodal containers comply with ISO standards,[2]and can officially be calledISO containers. These containers are known by many names:cargo container,sea container,ocean container,container vanorsea van,sea canorC can, orMILVAN[3][4]orSEAVAN. The termCONEX (Box)is a technically incorrect carry-over usage of the name of an important predecessor of the ISO containers: the much smaller steelCONEX boxesused by theU.S. Army. Intermodal containers exist in many types and standardized sizes, but 90 percent of the global container fleet are "dry freight" or "general purpose" containers:[2][5]durableclosed rectangular boxes, made of rust-retardantweathering steel; almost all 8 feet (2.4 m) wide, and of either 20 or 40 feet (6.1 or 12.2 m) standard length, as defined byInternational Organization for Standardization(ISO)standard 668:2020.[2][6]The worldwide standard heights are 8 feet 6 inches (2.6 m) and 9 feet 6 inches (2.9 m) – the latter are known asHigh CubeorHi-Cube(HCorHQ) containers.[7]Container capacity is commonly measured in twenty-foot equivalent units (TEUs): a standard 20-foot container is one TEU, and a 40-foot container counts as two. Invented in the early 20th century, 40-foot intermodal containers proliferated during the 1960s and 1970s under thecontainerizationinnovations of the American shipping companySeaLand. Likecardboard boxesandpallets, these containers are a means to bundle cargo and goods into larger,unitized loadsthat can be easily handled, moved, and stacked, and that will pack tightly in a ship or yard. Intermodal containers share a number of construction features to withstand the stresses of intermodal shipping, to facilitate their handling, and to allow stacking. Each has a uniqueISO 6346reporting mark. In 2012, there were about 20.5 million intermodal containers in the world of varying types to suit different cargoes.[6][nb 1]Containers have largely supplanted traditionalbreak bulk cargo; in 2010, containers accounted for 60% of the world's seaborne trade.[9][10]The predominant alternative methods of transport carrybulk cargo, whether gaseous, liquid, or solid—e.g., bybulk carrierortank ship,tank car, ortruck. Forair freight, the lighter-weightIATA-definedunit load devicesare used. Containerization has its origins in earlycoal mining regions in Englandbeginning in the late 18th century. In 1766James Brindleydesigned the box boat 'Starvationer' with ten wooden containers, to transport coal fromWorsleyDelph (quarry) to Manchester by theBridgewater Canal. In 1795,Benjamin Outramopened the Little Eaton Gangway, upon which coal was carried inwagonsbuilt at his Butterley Ironworks.
The horse-drawn wheeled wagons on the gangway took the form of containers, which, loaded with coal, could be transshipped from canalbargeson theDerby Canal, which Outram had also promoted.[12] By the 1830s, railways were carrying containers that could be transferred to other modes of transport. TheLiverpool and Manchester Railwayin the UK was one of these, making use of "simple rectangular timber boxes" to convey coal from Lancashire collieries to Liverpool, where a crane transferred them to horse-drawn carriages.[13]Originally used for moving coal on and off barges, "loose boxes" were used to containerize coal from the late 1780s, at places like theBridgewater Canal. By the 1840s, iron boxes were in use as well as wooden ones. The early 1900s saw the adoption of closed container boxes designed for movement between road and rail. The first international standard for containers was established by theBureau International des Containerset du Transport Intermodal in 1933, and a second one in 1935, primarily for transport between European countries. American containers at this time were not standardized, and these early containers were not yet stackable – neither in the U.S. nor Europe. In November 1932, the first container terminal in the world was opened by the Pennsylvania Railroad Company inEnola, Pennsylvania. Containerization was developed in Europe and the US as a way to revitalize rail companies after theWall Street crash of 1929, which resulted in economic collapse and a drop in the use of all modes of transport.[14] In April 1951 atZürich Tiefenbrunnen railway station, theSwiss Museum of Transportand theBureau International des Containers(BIC) held demonstrations of container systems for representatives from a number of European countries, and from the United States. A system was selected for Western Europe, based on the Netherlands' system for consumer goods and waste transportation calledLaadkisten(lit. "Loading chests"), in use since 1934. This system usedroller containersfor transport by rail, truck and ship, in various configurations up to 12,100 pounds (5,500 kg) capacity, and up to 10 ft 2 in × 7 ft 6+1⁄2 in × 6 ft 6+3⁄4 in (3.1 m × 2.3 m × 2 m) in size.[15][16]This became the first post-World War II European railway standard of theInternational Union of Railways–UIC-590, known as "pa-Behälter". It was implemented in the Netherlands, Belgium, Luxembourg, West Germany, Switzerland, Sweden and Denmark.[17] The use of standardized steelshipping containersbegan during the late 1940s and early 1950s, when commercial shipping operators and the US military started developing such units.[18]In 1948 theU.S. ArmyTransportation Corpsdeveloped the "Transporter", a rigid, corrugated steel container able to carry 9,000 pounds (4,100 kg). It was 8 ft 6 in (2.6 m) long, 6 ft 3 in (1.9 m) wide, and 6 ft 10 in (2.1 m) high, with double doors on one end, was mounted on skids, and had lifting rings on the top four corners.[19]After proving successful in Korea, the Transporter was developed into the Container Express (CONEX) box system in late 1952.
Based on the Transporter, the size and capacity of the Conex were about the same,[nb 2] but the system was made modular by the addition of a smaller, half-size unit 6 ft 3 in (1.9 m) long, 4 ft 3 in (1.3 m) wide and 6 ft 10+1⁄2 in (2.1 m) high.[22][23][nb 3] Conexes could be stacked three high and protected their contents from the elements.[20] By 1965 the US military used some 100,000 Conex boxes, and more than 200,000 by 1967,[22][26] making this the first worldwide application of intermodal containers.[20] Their invention made a major contribution to the globalization of commerce in the second half of the 20th century, dramatically reducing the cost of transporting goods and hence of long-distance trade.[27][28]

From 1949 onward, engineer Keith Tantlinger repeatedly contributed to the development of containers, as well as of their handling and transportation equipment. In 1949, while at Brown Trailers Inc. of Spokane, Washington, he modified the design of their stressed-skin aluminum 30-foot trailer to fulfil an order of two hundred 30-by-8-by-8.5-foot (9.1 m × 2.4 m × 2.6 m) containers that could be stacked two high, for Alaska-based Ocean Van Lines. Steel castings on the top corners provided lifting and securing points.[29]

In 1955, trucking magnate Malcom McLean bought Pan-Atlantic Steamship Company to form a container shipping enterprise, later known as Sea-Land. The first containers were supplied by Brown Trailers Inc., where McLean met Keith Tantlinger and hired him as vice-president of engineering and research.[30] Under Tantlinger's supervision, a new 35 ft × 8 ft × 8.5 ft (10.7 m × 2.4 m × 2.6 m) Sea-Land container was developed, the length determined by the maximum length of trailers then allowed on Pennsylvania highways. Each container had a frame with eight corner castings that could withstand stacking loads.[31] Tantlinger also designed automatic spreaders for handling the containers, as well as the twistlock mechanism that connects with the corner castings.

Containers in their modern form first began to gain widespread use around 1956, and businesses devised structured processes to get the most benefit from shipping containers. Over time, the modern telecommunications of the late 20th century made standardized shipping containers even more valuable, rendering shipping processes more standardized, modular, easier to schedule, and easier to manage.[32]

Two years after McLean's first container ship, the Ideal X, started container shipping on the US East Coast,[33] Matson Navigation followed suit between California and Hawaii. Just like Pan-Atlantic's containers, Matson's were 8 ft (2.44 m) wide and 8 ft 6 in (2.59 m) high, but due to California's different traffic code Matson chose to make theirs 24 ft (7.32 m) long.[34] In 1968, McLean began container service to South Vietnam for the US military with great success. ISO standards for containers were published between 1968 and 1970.[35] These standards allow for more consistent loading, transporting, and unloading of goods in ports throughout the world, thus saving time and resources.

The International Convention for Safe Containers (CSC) is a 1972 regulation by the Inter-governmental Maritime Consultative Organization on the safe handling and transport of containers.
It decrees that every container traveling internationally be fitted with a CSC safety-approval plate.[36][37] This holds essential information about the container, including its age, registration number, dimensions and weights, as well as its strength and maximum stacking capability.

Longshoremen and related unions around the world struggled with this revolution in shipping goods.[38][39] For example, by 1971 a clause in the International Longshoremen's Association (ILA) contract stipulated that the work of "stuffing" (filling) or "stripping" (emptying) a container within 50 miles (80 km) of a port must be done by ILA workers, or, if not done by ILA labor, that the shipper needed to pay royalties and penalties to the ILA. Unions for truckers and consolidators argued that the ILA rules were not valid work-preservation clauses, because the work of stuffing and stripping containers away from the pier had not traditionally been done by ILA members.[38][39] In 1980 the Supreme Court of the United States heard this case and ruled against the ILA.[38][39]

Some experts have said that the centralized, continuous shipping process made possible by containers has created dangerous liabilities: one bottleneck, delay, or other breakdown at any point in the process can easily cause major delays everywhere up and down the supply chain.[32] The reliance on containers exacerbated some of the economic and societal damage from the global supply chain crisis of 2020 and 2021, and the resulting shortages related to the COVID-19 pandemic. In January 2021, for example, a shortage of shipping containers at ports caused shipping to be backlogged.[40][41][42]

Marc Levinson, author of Outside the Box: How Globalization Changed from Moving Stuff to Spreading Ideas and The Box: How the Shipping Container Made the World Smaller and the World Economy Bigger, said in an interview:[32]

Because of delays in the process, it's taking a container longer to go from its origin to its final destination where it's unloaded, so the container is in use longer for each trip. You've just lost a big hunk of the total capacity because the containers can't be used as intensively. We've had in the United States an additional problem, which is that the ship lines typically charge much higher rates on services from Asia to North America than from North America to Asia. This has resulted in complaints, for example, from farmers and agricultural companies, that it's hard to get containers in some parts of the country because the ship lines want to ship them empty back to Asia, rather than letting them go to South Dakota and load over the course of several days.
So we've had exporters in the United States complaining that they have a hard time finding a container that they can use to send their own goods abroad.[32]

Ninety percent of the global container fleet consists of "dry freight" or "general purpose" containers, of both standard and special sizes.[2][5] And although lengths of containers vary from 8 to 56 feet (2.4 to 17.1 m), according to two 2012 container census reports[nb 4] about 80% of the world's containers are either 20- or 40-foot standard-length boxes of the dry freight design.[6] These typical containers are rectangular, closed box models, with doors fitted at one end, and made of corrugated weathering steel (commonly known as CorTen)[nb 5] with a plywood floor.[44] Although corrugating the sheet metal used for the sides and roof contributes significantly to the container's rigidity and stacking strength, just as in corrugated iron or in cardboard boxes, the corrugated sides cause aerodynamic drag, and up to 10% fuel economy loss in road or rail transport, compared to smooth-sided vans.[45]

Standard containers are 8 feet (2.4 m) wide by 8 ft 6 in (2.6 m) high,[nb 6] although the taller "High Cube" or "hi-cube" units, measuring 9 feet 6 inches (2.9 m), have become very common. By the end of 2013, high-cube 40 ft containers represented almost 50% of the world's maritime container fleet, according to Drewry's Container Census report.[47]

About 90% of the world's containers are either nominal 20-foot (6.1 m) or 40-foot (12.2 m) long,[6][48] although the United States and Canada also use longer units of 45 ft (13.7 m), 48 ft (14.6 m) and 53 ft (16.2 m).

ISO containers have castings with openings for twistlock fasteners at each of the eight corners, to allow gripping the box from above, below, or the side, and they can be stacked up to ten units high.[49] Although ISO standard 1496 of 1990 only required nine-high stacking, and only of containers rated at 53,000 pounds (24,000 kg),[50] current Ultra Large Container Vessels of the Post-New-Panamax and Maersk Triple E class stack them ten or eleven high.[51][52] Moreover, vessels like the Marie Maersk no longer use separate stacks in their holds and other stacks above deck – instead they maximize their capacity by stacking continuously from the bottom of the hull, to as much as 21 high.[53] This requires automated planning to keep heavy containers at the bottom of the stack and light ones on top, to stabilize the ship and to prevent crushing the bottom containers. Regional intermodal containers, such as European, Japanese and U.S. domestic units, however, are mainly transported by road and rail, and can frequently only be stacked up to two or three laden units high.[49] Although the two ends are quite rigid, containers flex somewhat during transport.[54]

Container capacity is often expressed in twenty-foot equivalent units (TEU, or sometimes teu). A twenty-foot equivalent unit is a measure of containerized cargo capacity equal to one standard 20-foot (6.1 m) long container. This is an approximate measure, wherein the height of the box is not considered. For example, the 9 ft 6 in (2.9 m) tall high-cube, as well as 4-foot-3-inch (1.3 m) half-height 20-foot (6.1 m) containers, are equally counted as one TEU. Similarly, extra-long 45 ft (13.7 m) containers are commonly counted as just two TEU, no different from standard 40-foot (12.2 m) long units. Two TEU are equivalent to one forty-foot equivalent unit (FEU).[55][56]
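This counting convention reduces to a small lookup over nominal length. The following minimal Java sketch makes it concrete; the class and method names are invented for illustration, and the 45 ft = 2 TEU rule reflects the common practice described above rather than a formal standard:

```java
// Toy TEU calculator following the convention described above:
// only nominal length matters; height is ignored, and both 40 ft
// and extra-long 45 ft boxes are commonly counted as two TEU.
public class TeuCalculator {
    static int teu(int nominalLengthFt) {
        switch (nominalLengthFt) {
            case 20: return 1;   // the defining unit
            case 40:
            case 45: return 2;   // 45 ft commonly counted the same as 40 ft
            default: throw new IllegalArgumentException(
                    "no common TEU convention handled for " + nominalLengthFt + " ft");
        }
    }

    public static void main(String[] args) {
        // A high-cube 40-footer and a half-height 20-footer still count as 2 and 1 TEU.
        System.out.println(teu(20) + " " + teu(40) + " " + teu(45)); // prints: 1 2 2
    }
}
```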
In 2014 the global container fleet grew to a volume of 36.6 million TEU, based on Drewry Shipping Consultants' Container Census.[57][nb 7] Moreover, in 2014, for the first time in history, 40-foot high-cube containers accounted for the majority of boxes in service, measured in TEU.[57] In 2019 the global logistics data analysis startup Upply[58] noted that China's role as 'factory of the world' is further incentivizing the use of 40-foot containers, that standard 1 TEU boxes make up only 20% of units on major east–west liner routes, and that demand for shipping them keeps dropping.[59] In the 21st century, the market has shifted predominantly toward 40-foot high-cube dry and refrigerated containers. Forty-foot units have become the standard to such an extent that the sea freight industry now charges less than 30% more for moving a 40-ft unit than for a 1 TEU box. Although 20-ft units mostly carry heavy cargo, and are useful for stabilizing both ships and revenue,[nb 8] carriers financially penalize 1 TEU boxes by comparison.[59]

For container manufacturers, 40-foot high-cubes now dominate market demand both for dry and for refrigerated units.[59] Manufacturing prices for regular dry freight containers are typically in the range of US$1750–$2000 per CEU (container equivalent unit),[57] and about 90% of the world's containers are made in China.[48] The average age of the global container fleet was a little over 5 years from end 1994 to end 2009, meaning containers remain in shipping use for well over 10 years.[8]

A gooseneck tunnel, an indentation in the floor structure that meshes with the gooseneck on dedicated container semi-trailers, is a mandatory feature in the bottom structure of 1AAA and 1EEE (40- and 45-ft high-cube) containers, and optional but typical on standard-height, forty-foot and longer containers.[62]

Other than the standard, general purpose container, many variations exist for use with different cargoes. The most prominent of these are refrigerated containers (also called reefers) for perishable goods, which make up 6% of the world's shipping boxes.[5][48] Tanks in a frame, for bulk liquids, account for another 0.75% of the global container fleet.[5]

Although these variations are not of the standard type, they mostly are ISO standard containers – in fact, the ISO 6346 standard classifies a broad spectrum of container types in great detail. Aside from different size options, the most important container types are:[63][nb 10]

Containers for offshore use have a few different features, like pad eyes, and must meet additional strength and design requirements, standards and certification, such as DNV 2.7-1 by Det Norske Veritas, LRCCS by Lloyd's Register, the Guide for Certification of Offshore Containers by the American Bureau of Shipping, and the international standard ISO 10855, Offshore containers and associated lifting sets, in support of IMO MSC/Circ. 860.[71]

A multitude of equipment, such as generators, has been installed in containers of different types to simplify logistics – see § Containerized equipment for more details.

Swap body units usually have the same bottom corner fixtures as intermodal containers, and often have folding legs under their frame so that they can be moved between trucks without using a crane.
However, they frequently do not have the upper corner fittings of ISO containers and are not stackable, nor can they be lifted and handled by the usual equipment, such as reach stackers or straddle carriers. They are generally more expensive to procure.[72]

Basic terminology of globally standardized intermodal shipping containers is set out in an ISO standard. From their inception, the ISO standards on international shipping containers have consistently spoken of them as 'Series 1' containers – deliberately so conceived, to leave room for another such series of interrelated container standards in the future.[nb 11]

Basic dimensions and permissible gross weights of intermodal containers are largely determined by two ISO standards. Weights and dimensions of the most common (standardized) types of containers are given below.[nb 12] Forty-eight-foot and fifty-three-foot containers have not yet been incorporated in the latest, 2020 edition of ISO 668.[74] The ISO standard maximum gross mass for all standard sizes except 10-ft boxes was raised to 79,000 lb (36,000 kg) in 2016, per Amendment 1 to ISO 668:2013.[75] Draft Amendment 1 of ISO 668:2020 – for the eighth edition – maintains this.[76] Given the average container lifespan, the majority of the global container fleet has not caught up with this change yet.

Values vary slightly from manufacturer to manufacturer, but must stay within the tolerances dictated by the standards. Empty weight (tare weight) is not determined by the standards but by the container's construction, and is therefore only indicative; it is nevertheless needed to calculate a net load figure, obtained by subtracting the tare weight from the maximum permitted gross weight.

The bottom row in the table gives the legal maximum cargo weights for U.S. highway transport, and those based on use of an industry-common tri-axle chassis. Cargo must also be loaded evenly inside the container, to avoid axle-weight violations.[77] The maximum gross weights that U.S. railroads accept or deliver are 52,900 lb (24,000 kg) for 20-foot containers and 67,200 lb (30,500 kg) for 40-foot containers,[78] in contrast to the global ISO standard gross weight for 20-footers, which was raised in 2005 to the same figure as for 40-footers.[79] In the U.S., containers loaded up to the rail cargo weight limit cannot move over the road, as they would exceed the U.S. 80,000 lb (36,000 kg) highway limit.[78]

Australian RACE containers are also slightly wider, to optimise them for the use of Australia Standard Pallets, or are 41 ft (12.5 m) long and 8 ft 2 in (2.5 m) wide to be able to fit up to 40 pallets.[86][87]

European pallet-wide (or PW) containers are minimally wider and have shallow side corrugation, to offer just enough internal width to allow common European Euro-pallets – 47+1⁄4 in (1.20 m) long by 31+1⁄2 in (0.80 m) wide[88] – to be loaded with significantly greater efficiency and capacity. A typical internal width of 96+1⁄8 in (2.44 m)[89] – a gain of about 3+15⁄16 inches (10 cm) over the ISO-usual 92+1⁄8 in (2.34 m)[90] – gives pallet-wide containers a usable internal floor width of 94+1⁄2 in (2.40 m), compared to 78+3⁄4 in (2.00 m) in standard containers. The extra width enables their users to load either two Euro-pallets end to end across the container's width, or three of them side by side (provided the pallets are neatly stacked, without overspill), whereas in standard ISO containers a strip of internal floor width of about 13 inches (33 cm) cannot be used by Euro-pallets.
As a result, the two types remain virtually interchangeable.[89] Some pallet-wides are simply manufactured with the same ISO-standard floor structure, but with the side panels welded on such that the ribs/corrugations are embossed outwards, instead of indenting to the inside.[91] This makes it possible for some pallet-wides to be just 96+7⁄8 in (2.462 m) wide,[89] but others can be 98+3⁄8 in (2.50 m) wide.[92]

The 45 ft (13.72 m) pallet-wide high-cube container has gained particularly wide acceptance, as these containers can replace the 44 ft 7+3⁄8 in (13.6 m) swap bodies that are common for truck transport in Europe. The EU has started a standardization effort for pallet-wide containerization in the European Intermodal Loading Unit (EILU) initiative.[93]

Many sea shipping providers in Europe allow these on board, as their external width overhang over standard containers is sufficiently minor that they fit in the usual interlock spaces in ships' holds,[91] as long as their corner-casting patterns (both in the floor and the top) still match those of regular 40-foot units, for stacking and securing.

The North American market has widely adopted containerization, especially for domestic shipments that need to move between road and rail transport.[94] While North American domestic containers appear similar to ISO-standard containers, there are several significant differences: they are considered high-cubes based on their 9 ft 6 in (2.90 m) ISO-standard height; their 102-inch (2.6 m) width matches the maximum width of road vehicles in the region, but is 6 inches (15 cm) wider than that of ISO-standard containers;[95] and they are often not built strongly enough to endure the rigors of ocean transport.[94]

The first North American containers to come to market were 48 feet (15 m) long. This size was introduced by the container shipping company American President Lines (APL) in 1986.[94] The size matched new federal regulations, passed in 1983, which prohibited states from outlawing the operation of single trailers shorter than 48 feet (15 m) long or narrower than 102 inches (260 cm) wide.[96] Being 8 feet (2.44 m) longer and 6 inches (15 cm) wider, this size has 29% more volume capacity than the standard 40-ft high-cube,[97] yet the costs of moving it by truck or rail are almost the same. In the late 1980s, the federal government announced it would once again allow an increase in the length of trailers, to 53 feet (16 m), at the start of 1990. Anticipating this change, 53-foot containers were introduced in 1989.
These large boxes have 60% more capacity than 40-foot containers, enabling shippers to consolidate more cargo into fewer containers.[97][98][99]

In 2007, APL introduced the first 53-foot ocean-capable containers, designed to withstand voyages on its South China-to-Los Angeles service.[94] In 2013, APL stopped offering vessel space for 53-foot containers on its trans-Pacific ships.[100] In 2015, Crowley and TOTE Maritime each announced the construction of their respective second combined container and roll-on/roll-off ships for the Puerto Rico trade, specifically designed to maximize cubic cargo capacity by carrying 53-foot, 102-inch-wide (2,591 mm) containers.[101][102] Within Canada, Oceanex offers 53-foot-container ocean service to and from Newfoundland.[103] 53-foot containers are also being used on some Asia-Pacific international shipping routes.[73]

In April 2017, Canadian Tire and Canadian Pacific Railway announced the deployment of what they claimed to be the first 60-foot intermodal containers in North America.[104] The containers are transportable on the road using specially configured trucks and telescoping trailers (where vehicle size limits permit it), and on the railway using the top positions of double-stack container cars.[105] According to initial projections, Canadian Tire believed the containers would allow it to increase the volume of goods shipped per container by 13%.[104] Five years after the deployment of the containers, analyst Larry Gross observed that United States truck size regulations are more constraining than those in Canada, and predicted that for the foreseeable future these larger containers would remain exclusive to Canada.[106]

The ISO 668 standard has so far never standardized 10 ft (3 m) containers at the same height as the so-called "standard-height", 8 ft 6 in (2.59 m), 20- and 40-foot containers. By the ISO standard, 10-foot boxes (and the previously included 5-ft and 6+1⁄2-ft boxes) are only of an unnamed 8-foot (2.44 m) height. But industry more frequently makes 10-foot units of 8 ft 6 in (2.59 m) height,[90] to mix, match (and stack) better in a fleet of longer, 8 ft 6 in tall containers. Smaller units, on the other hand, are no longer standardized, leading to deviating lengths, like 8 ft (2.44 m) or 6+1⁄2 ft (1.98 m), with non-standard widths of 7 ft 3 in (2.20 m) and 6 ft 5 in (1.95 m) respectively, and non-standard heights of 7 ft 5 in (2.26 m) and 6 ft 3 in (1.91 m) respectively,[90] for storage or offshore use.

The United States military continues to use small containers, strongly reminiscent of its Transporter and Conex boxes of the 1950s and 1960s. These mostly comply with (previous) ISO standard dimensions, or are a direct derivative thereof. Current terminology of the United States armed forces calls these small containers Bicon, Tricon and Quadcon, with sizes that correspond to (previous) ISO 668 standard sizes 1D, 1E and 1F respectively. These containers are of a standard 8 ft (2.44 m) height, with a footprint either one half (Bicon), one third (Tricon) or one quarter (Quadcon) the size of a standard 20-foot, one-TEU container.[107][108][109] At a nominal length of 10 feet (3.05 m), two Bicons coupled together lengthwise match one 20-foot ISO container, but their height is 6 inches (152 mm) shy of the more commonly available 10-foot ISO containers of so-called 'standard' height, which are 8 ft 6 in (2.59 m) tall.
Tricons and Quadcons, however, have to be coupled transversely – either three or four in a row – to be stackable with twenty-foot containers.[110] Their length of 8 ft (2.44 m) corresponds to the width of a standard 20-foot container, which is why there are forklift pockets at their ends as well as in their sides, and the doors only have one locking bar each. The smallest of these, the Quadcon, exists in two heights: 96 in (2.44 m) or 82 in (2.08 m).[111] Only the first conforms to ISO 668 standard dimensions (size 1F).

ABC containers are small containers, typically 20 ft long and 5 ft high, used for hauling dense materials. The smaller size reduces the tare weight (as compared to using a half-full standard-height container). They are normally shipped on specialized railroad flatcars, where 6 containers can be carried in the space of 4 standard containers.[112]

In Japan's domestic freight rail transport, most containers are 12 ft (3.66 m) long, in order to fit Japan's unique standard pallet sizes.[113]

Each container is allocated a standardized ISO 6346 reporting mark (ownership code), four letters long ending in either U, J or Z, followed by six digits and a check digit.[114] (A simple format check is sketched below.) The ownership code for intermodal containers is issued by the Bureau International des Containers (International Container Bureau, or BIC) in France, hence the name "BIC code" for the intermodal container reporting mark. So far there exist only four-letter BIC codes ending in "U". The placement and registration of BIC codes is standardized by the ISO commissions TC104 and TC122, which are dominated by shipping companies. Shipping containers are labelled with a series of identification codes that includes the manufacturer code, the ownership code, the usage classification code, the UN placard for hazardous goods, and reference codes for additional transport control and security.

Following the extended usage of pallet-wide containers in Europe, the EU started the Intermodal Loading Unit (ILU) initiative, which showed advantages for the intermodal transport of containers and swap bodies. This led to the introduction of ILU codes, defined by the standard EN 13044, which have the same format as the earlier BIC codes. The Bureau International des Containers agreed to only issue ownership codes ending with U, J or Z. The new allocation office of the UIRR (International Union of Combined Road-Rail Transport Companies) agreed to only issue ownership reporting marks for swap bodies ending with A, B, C, D or K; companies holding a BIC code ending with U can be allocated an ILU code ending with K with the same preceding letters. Since July 2011 the new ILU codes can be registered; beginning with July 2014 all intermodal ISO containers and intermodal swap bodies must have an ownership code, and by July 2019 all of them must bear a standard-conforming placard.[115]

Containers are transferred between rail, truck, and ship by container cranes at container terminals. Forklifts, reach stackers, straddle carriers, container jacks and cranes may be used to load and unload trucks or trains outside of container terminals. Swap bodies, sidelifters, tilt-deck trucks, and hook trucks allow transfer to and from trucks with no extra equipment.
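The reporting-mark layout described above – three owner letters, a category letter that must be U, J or Z, six serial digits, and a check digit – can be verified mechanically. A minimal Java sketch follows; it checks the format only, not the check-digit arithmetic, and the class name and sample marks are illustrative assumptions:

```java
import java.util.regex.Pattern;

// Format check for an ISO 6346 / BIC-style reporting mark: four letters
// ending in U, J or Z, then six serial digits, then one check digit.
public class ReportingMark {
    private static final Pattern MARK =
            Pattern.compile("[A-Z]{3}[UJZ][0-9]{6}[0-9]");

    static boolean looksValid(String mark) {
        return MARK.matcher(mark).matches();
    }

    public static void main(String[] args) {
        System.out.println(looksValid("CSQU3054383")); // true: well-formed mark
        System.out.println(looksValid("ABC1234567"));  // false: 4th char not U/J/Z
    }
}
```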
ISO-standard containers can be handled and lifted in a variety of ways by their corner fixtures, but the structure and strength of 45-foot (14 m) (type E) containers limits their tolerance of side-lifting, and they cannot be forklifted, according to ISO 3874 (1997).[116]

Containers can be transported by container ship, truck and freight train as part of a single journey without unpacking. Units can be secured in transit using "twistlock" points located at each corner of the container. Every container has a unique BIC code painted on the outside for identification and tracking, and is capable of carrying up to 20–25 tonnes. Costs for transport are calculated in twenty-foot equivalent units (TEU).

When carried by rail, containers may be loaded on spine cars, flatcars, or well cars. The latter are specially designed for container transport and can accommodate double-stacked containers. However, the loading gauge of a rail system may restrict the modes and types of container shipment. The smaller loading gauges often found in European railroads will only accommodate single-stacked containers. In some countries, such as the United Kingdom, there are sections of the rail network through which high-cube containers cannot pass, or can pass only on well cars. On the other hand, Indian Railways runs double-stacked containers on flatcars under 25 kV overhead electrical wires, which must be at least 24 feet 5 inches (7.45 m) above the track. China Railway also runs double-stacked containers under overhead wires, but must use well cars to do so, since the wires are only 21 feet 8 inches (6.6 m) above the track.[117]

About 90% of non-bulk cargo worldwide is transported by container, and the largest container ships can carry over 19,000 TEU. Between 2011 and 2013, an average of 2,683 containers per year were reported lost at sea.[118] Other estimates go up to 10,000; of these, 10% are expected to contain chemicals toxic to marine life.[119] Various systems are used for securing containers on ships.[120][121] Relative to the enormous volumes shipped, losses of containers at sea are low.[122]

Containers can also be transported by plane, as within intermodal freight transport, but this is typically avoided due to the cost and the scarcity of aircraft that can accommodate such awkwardly sized cargo. For air freight there are special aviation containers, smaller than intermodal containers, called unit load devices.

There are many established methods and materials for stabilizing and securing intermodal containers loaded on ships, as well as the internal cargo inside the boxes. Conventional restraint methods and materials, such as steel strapping and wood blocking and bracing, have been around for decades and are still widely used. Polyester strapping and lashing, and synthetic webbings, are also common today. Dunnage bags (also known as "air bags") are used to keep unit loads in place. Flexi-bags can also be directly loaded, stacked in food-grade containers; indeed, their standard shape fills the entire ground surface of a 20 ft ISO container.

Container-sized units are also often used for moving large pieces of equipment to temporary sites. Specialised containers are particularly attractive to militaries already using containerisation to move much of their freight around. Shipment of specialized equipment in this way simplifies logistics and may prevent identification of high-value equipment by enemies.
Such systems may include command and control facilities, mobile operating theatres,[124] or even missile launchers[125] (such as the Russian 3M-54 Klub surface-to-surface missile). Complete water treatment systems can be installed in containers and shipped around the world.[126] Electric generators can be permanently installed in containers to be used for portable power.[127]

Containers have also been used by contemporary artists, exhibitions, and galleries.[128] Artists may conduct residencies inside stationary or traveling containers.[129] Containers may also be used to install temporary art exhibitions, in one or many containers at a site, such as the 2005–2012 Containerart project.

Half the containers that enter the United States leave empty.[130] Their value in the US is lower than in China, so they are sometimes used for other purposes, typically (but not always) at the end of their voyaging lives. The US military often used its Conex containers as on-site storage, or as easily transportable housing for command staff and medical clinics.[131] Nearly all of the more than 150,000 Conex containers shipped to Vietnam remained in the country, primarily as storage or other mobile facilities.[26] Permanent or semi-permanent placement of containers for storage is common. A regular forty-foot container contains about 9,000 pounds (4,000 kg) of steel, which takes 8,000 kWh (28,800 MJ) of energy to melt down. Repurposing used shipping containers is increasingly seen as a practical solution to both social and ecological problems.

Shipping container architecture employs used shipping containers as the main framing of modular home designs, where the steel may be an integrated part of the design or be camouflaged into a traditional-looking home. Containers have also been used to make temporary shops, cafes, and computer data centers, e.g. the Sun Modular Datacenter.

Intermodal containers are not strong enough for conversion to underground bunkers without additional bracing, as the walls cannot sustain much lateral pressure and will collapse. Also, the wooden floor of many used containers may contain fumigation residues, rendering them unsuitable as confined spaces, such as prison cells or bunkers. Cleaning or replacing the wood floor can make these used containers habitable, with proper attention to such essential issues as ventilation and insulation.

The city of Göttingen has deployed containers for the disablement of unexploded ordnance: either FIBCs filled with sand or IBCs filled with water. When the bomb squad performs controlled detonations, such prepared containers absorb shock and fragments.[132] This use requires level, load-bearing ground. The deformed containers are unsuitable for further circulation.
https://en.wikipedia.org/wiki/Intermodal_container
In DOS memory management, extended memory refers to memory above the first megabyte (2²⁰ bytes) of address space in an IBM PC or compatible with an 80286 or later processor. The term is mainly used under the DOS and Windows operating systems. DOS programs, running in real mode or virtual x86 mode, cannot directly access this memory, but are able to do so through an application programming interface (API) called the Extended Memory Specification (XMS). This API is implemented by a driver (such as HIMEM.SYS) or the operating system kernel, which takes care of memory management and of copying memory between conventional and extended memory, by temporarily switching the processor into protected mode. In this context, the term "extended memory" may refer to either the whole of the extended memory or only the portion available through this API.

Extended memory can also be accessed directly by DOS programs running in protected mode using VCPI or DPMI, two different and incompatible methods of using protected mode under DOS.

Extended memory should not be confused with expanded memory (EMS), an earlier method for expanding the IBM PC's memory capacity beyond 640 KB (655,360 bytes) using an expansion card with bank-switched memory modules. Because of the available support for expanded memory in popular applications, device drivers were developed that emulated expanded memory using extended memory. Later, two additional methods were developed allowing direct access to small portions of additional memory above 640 KB from real mode. One of these is referred to as the high memory area (HMA), consisting of the first nearly 64 KB of extended memory; the other is referred to as the upper memory area (UMA; also referred to as upper memory blocks, or UMBs), located in the address range between 640 KB and 1 MB, which the IBM PC architecture designates for hardware adapters and ROM.

On x86-based PCs, extended memory is only available with an Intel 80286 processor or higher, such as in the IBM PC AT.[1] Only these chips can directly address more than 1 megabyte of RAM. The earlier 8086/8088 processors can make use of more than 1 MB of RAM only if special hardware is employed to make selectable parts of it appear at addresses below 1 MB. On a 286 or better PC equipped with more than 640 KB of RAM, the additional memory would generally be re-mapped above the 1 MB boundary, since the IBM PC architecture reserves addresses between 640 KB and 1 MB for system ROM and peripherals.

Extended memory is not accessible in real mode (except for a small portion called the high memory area). Only applications executing in protected mode can use extended memory directly. A supervising protected-mode operating system, such as Microsoft Windows, manages application programs' access to memory. The processor makes this memory available through the Global Descriptor Table (GDT) and one or more Local Descriptor Tables (LDTs). The memory is "protected" in the sense that memory segments assigned a local descriptor cannot be accessed by another program, because that program uses a different LDT, and memory segments assigned a global descriptor can have their access rights restricted, causing a processor exception (e.g., a general protection fault, or GPF) on violation. This prevents programs running in protected mode from interfering with each other's memory.[2]

Extended memory went unused at first, because no software ran in the 80286's protected mode.
By contrast, the industry quickly adopted 1985's expanded memory standard, which works with all PCs regardless of processor.[1] A protected-mode operating system such as Microsoft Windows can also run real-mode programs and provide expanded memory to them. The DOS Protected Mode Interface (DPMI) is Microsoft's prescribed method for a DOS program to access extended memory under a multitasking environment.[2]

The Extended Memory Specification (XMS) is the specification describing the use of IBM PC extended memory in real mode for storing data (but not for running executable code in it). Memory is made available by extended memory manager (XMM) software such as HIMEM.SYS. The XMM functions are accessible through direct calls to a variable address that can be found via software interrupt 2Fh, function 4310h.

XMS version 2.0, released in July 1988, allowed for up to 64 MB of memory.[3] With XMS version 3.0 this increased to 4 GB (2³² bytes).[4] The difference is a direct result of the sizes of the values used to report the amounts of total and unallocated (free) extended memory in 1 KB (1024-byte) units: XMS 2.0 uses 16-bit unsigned integers, capable of representing a maximum of 65535 × 1 KB = 64 MB, while XMS 3.0 adds new alternate functions that use 32-bit unsigned integers, capable of representing 4 G × 1 KB = 4 TB (4 terabytes), but limited by the specification to 4 GB.[3][4] (4 GB is the address range of the 80386 and the 80486, the only 32-bit Intel x86 CPUs that existed when XMS 3.0 was published in 1991.) XMS 3.0 retains the original XMS 2.0 API functions, with their original 64 MB limit, but adds new "super extended memory" functions that support 4 GB of extended memory (minus the first 1 MB) and can be called only on a 32-bit CPU (since these "super" functions use 32-bit CPU registers to pass values).[4] To differentiate the possibly different amounts of memory available to applications, depending on which version of the specification they were developed against, the latter may be referred to as super extended memory (SXMS). The two ceilings follow directly from this integer arithmetic, as the sketch below illustrates.

The extended memory manager is also responsible for managing allocations in the high memory area (HMA) and the upper memory area (UMA; also referred to as upper memory blocks, or UMBs). In practice, the upper memory area will be provided by the expanded memory manager (EMM), after which DOS will try to allocate it all and manage it itself.
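A short calculation makes the 64 MB and 4 GB limits concrete. This is a minimal Java sketch of the arithmetic only, not of the real-mode XMS calling convention; the class name is invented for illustration:

```java
public class XmsLimits {
    public static void main(String[] args) {
        final long KB = 1024;
        // XMS 2.0 reports free/total extended memory in 1 KB units using
        // 16-bit unsigned integers, so the largest reportable amount is:
        long xms2Max = 65535L * KB;                // just under 64 MB
        // XMS 3.0 adds alternate functions using 32-bit unsigned integers,
        // which could in principle represent 4 G * 1 KB = 4 TB...
        long xms3Representable = (1L << 32) * KB;  // 4 TB
        // ...but the specification caps extended memory at 4 GB, the address
        // range of the 32-bit 80386/80486 CPUs of the time.
        long xms3Cap = 4L * KB * KB * KB;          // 4 GB
        System.out.printf("XMS 2.0 maximum: %,d bytes%n", xms2Max);
        System.out.printf("XMS 3.0: %,d representable, capped at %,d bytes%n",
                xms3Representable, xms3Cap);
    }
}
```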
https://en.wikipedia.org/wiki/Extended_memory
A free-net was originally a computer system or network that provided public access to digital resources and community information, including personal communications, through modem dialup via the public switched telephone network. The concept originated in the health sciences, to provide online help for medical patients.[1][2] With the development of the Internet, free-net systems became the first to offer limited Internet access to the general public, in support of non-profit community work. The Cleveland Free-Net (cleveland.freenet.edu), founded in 1986, was the pioneering community network of this kind in the world.[3][4]

Any person with a personal computer, or with access from a public terminal in a library, could register for an account on a free-net and was assigned an email address. Other services often included Usenet newsgroups, chat rooms, IRC, telnet, and archives of community information, delivered either with text-based Gopher software or, later, the World Wide Web.

The word mark Free-Net was a registered trademark of the National Public Telecomputing Network (NPTN), founded in 1989 by Tom Grundner at Case Western Reserve University. NPTN was a non-profit organization dedicated to establishing and developing free, public-access digital information and communication services for the general public.[5] It closed operations in 1996, filing for Chapter 7 bankruptcy.[6] However, prior use of the term created some conflicts.[7] NPTN distributed the software package FreePort, developed at Case Western Reserve, which was used and licensed by many of the free-net sites.

The Internet domain name freenet.org was first registered by the Greater Detroit Free-Net (detroit.freenet.org), a non-profit community system in Detroit, Michigan, and a member of the NPTN. The Greater Detroit Free-Net provided subdomains to several other free-net systems during its operation, from 1993 to approximately 2001.

Unlike commercial Internet service providers, free-nets originally provided direct terminal-based dialup, instead of other networked connections such as the Point-to-Point Protocol (PPP). The development of Internet access with cheaper and faster connections, and the advent of the World Wide Web, made the original free-net community concept obsolete. A number of free-nets, including the original Cleveland Free-Net, have shut down or changed their focus.

Free-nets have always been locally governed, so interpretation of their mission – to remove barriers to access and provide a forum for community information – as well as the services offered, can vary widely. As text-based Internet access became less popular, some of the original free-nets made PPP dialup and, more recently, DSL services available as a revenue-generating mechanism, with some now transitioning into the community wireless movement. Several free-net systems continue under new mission statements. Rochester Free-Net (Rochester, New York), for instance, focuses on hosting community service organizations (over 500 to date), as well as offering seminars about Internet use to the community at no charge. Austin FreeNet (Austin, Texas) now provides technology training and access to residents of the city, "fostering skills that enable people to succeed in a digital age."[8]
https://en.wikipedia.org/wiki/Free-net
The look-elsewhere effect is a phenomenon in the statistical analysis of scientific experiments where an apparently statistically significant observation may actually have arisen by chance, because of the sheer size of the parameter space to be searched.[1][2][3][4][5] Once the possibility of look-elsewhere error in an analysis is acknowledged, it can be compensated for by careful application of standard mathematical techniques.[6][7][8] More generally known in statistics as the problem of multiple comparisons, the term gained some media attention in 2011, in the context of the search for the Higgs boson at the Large Hadron Collider.[9]

Many statistical tests deliver a p-value, the probability that a given result could be obtained by chance, assuming the hypothesis one seeks to prove is in fact false. When asking "does X affect Y?", it is common to vary X and see if there is significant variation in Y as a result. If this p-value is less than some predetermined statistical significance threshold α, one considers the result "significant". However, if one is performing multiple tests ("looking elsewhere" if the first test fails), then a p-value of 1/n is expected to occur about once per n tests. For example, when there is no real effect, an event with p < 0.05 will still occur once, on average, for each 20 tests performed. In order to compensate for this, one can divide the threshold α by the number of tests n, so that a result is significant when p < α/n; or, equivalently, one can multiply the observed p-value by the number of tests (significant when np < α).

This is a simplified case; the number n is actually the number of degrees of freedom in the tests, or the number of effectively independent tests. If the tests are not fully independent, this number may be lower than the number of tests.

The look-elsewhere effect is a frequent cause of "significance inflation" when the number of independent tests n is underestimated because failed tests are not published. One paper may fail to mention the alternative hypotheses that were considered, and a paper producing no result may simply not be published at all, leading to journals dominated by statistical outliers.
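The correction described above (a Bonferroni-style adjustment) is a one-line computation. A minimal Java sketch, with invented names, showing how the same p-value passes a single test but fails once twenty comparisons are accounted for:

```java
public class LookElsewhere {
    // Bonferroni-style correction from the text: with n effectively
    // independent tests, require p < alpha / n (equivalently n * p < alpha).
    static boolean significant(double p, double alpha, int nTests) {
        return p < alpha / nTests;
    }

    public static void main(String[] args) {
        double alpha = 0.05;
        double p = 0.01;
        System.out.println(significant(p, alpha, 1));  // true: one test, p < 0.05
        System.out.println(significant(p, alpha, 20)); // false: threshold is now 0.0025
    }
}
```

If the tests are correlated, n here should be the number of effectively independent tests, which, as noted above, may be smaller than the raw count.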
https://en.wikipedia.org/wiki/Look-elsewhere_effect
The formal equivalence checking process is a part of electronic design automation (EDA), commonly used during the development of digital integrated circuits, to formally prove that two representations of a circuit design exhibit exactly the same behavior. In general, there is a wide range of possible definitions of functional equivalence, covering comparisons between different levels of abstraction and varying granularity of timing details.

The register-transfer-level (RTL) behavior of a digital chip is usually described with a hardware description language, such as Verilog or VHDL. This description is the golden reference model, describing in detail which operations will be executed during which clock cycle and by which pieces of hardware. Once the logic designers have verified the register transfer description by simulations and other verification methods, the design is usually converted into a netlist by a logic synthesis tool. Equivalence is not to be confused with functional correctness, which must be determined by functional verification.

The initial netlist will usually undergo a number of transformations, such as optimization and the addition of Design For Test (DFT) structures, before it is used as the basis for the placement of the logic elements into a physical layout. Contemporary physical design software will occasionally also make significant modifications to the netlist, such as replacing logic elements with functionally equivalent elements that have a different drive strength and/or area. Throughout every step of this very complex, multi-step procedure, the original functionality and the behavior described by the original code must be maintained. By the time the final tape-out of a digital chip is made, many different EDA programs, and possibly some manual edits, will have altered the netlist.

In theory, a logic synthesis tool guarantees that the first netlist is logically equivalent to the RTL source code, and all the programs later in the process that make changes to the netlist likewise ensure, in theory, that their changes are logically equivalent to the previous version. In practice, programs have bugs, and it would be a major risk to assume that all steps from RTL through the final tape-out netlist have been performed without error. In addition, it is common for designers to make manual changes to a netlist, commonly known as Engineering Change Orders (ECOs), thereby introducing a major additional error factor. Therefore, instead of blindly assuming that no mistakes were made, a verification step is needed to check the logical equivalence of the final version of the netlist against the original description of the design (the golden reference model).

Historically, one way to check equivalence was to re-simulate, using the final netlist, the test cases that were developed for verifying the correctness of the RTL. This process is called gate-level logic simulation. The problem with this approach is that the quality of the check is only as good as the quality of the test cases. Moreover, gate-level simulations are notoriously slow to execute, which is a major problem as the size of digital designs continues to grow exponentially.

An alternative is to formally prove that the RTL code and the netlist synthesized from it have exactly the same behavior in all (relevant) cases. This process is called formal equivalence checking, and it is a problem studied under the broader area of formal verification.
A formal equivalence check can be performed between any two representations of a design: RTL <> netlist, netlist <> netlist, or RTL <> RTL, though the latter is rare compared to the first two. Typically, a formal equivalence checking tool will also indicate with great precision at which point a difference between the two representations exists. Two basic technologies are used for Boolean reasoning in equivalence checking programs: binary decision diagrams (BDDs) and SAT solvers. Several major products compete in the Logic Equivalence Checking (LEC) area of EDA.
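At toy scale, the contract of such a check – identical outputs for every input assignment, or a counterexample pinpointing a difference – can be sketched in a few lines. The Java sketch below uses brute-force enumeration rather than the BDD or SAT techniques real tools rely on, and all names are invented for illustration:

```java
import java.util.function.IntPredicate;

public class MiniEquivalenceCheck {
    // Exhaustively compare two single-output combinational functions over all
    // input assignments (inputs packed into the low bits of an int) and return
    // the first mismatching assignment, or -1 if the functions are equivalent.
    // Real equivalence checkers avoid this exponential enumeration via BDDs/SAT.
    static int firstMismatch(IntPredicate golden, IntPredicate revised, int nInputs) {
        for (int v = 0; v < (1 << nInputs); v++) {
            if (golden.test(v) != revised.test(v)) return v; // counterexample
        }
        return -1;
    }

    public static void main(String[] args) {
        // "Golden" spec: a XOR b.  "Netlist": (a AND NOT b) OR (NOT a AND b).
        IntPredicate spec = v -> ((v & 1) ^ ((v >> 1) & 1)) != 0;
        IntPredicate netlist = v -> (((v & 1) != 0) && (((v >> 1) & 1) == 0))
                                 || (((v & 1) == 0) && (((v >> 1) & 1) != 0));
        System.out.println(firstMismatch(spec, netlist, 2)); // -1: equivalent
    }
}
```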
https://en.wikipedia.org/wiki/Formal_equivalence_checking
Java Management Extensions (JMX) is a Java technology that supplies tools for managing and monitoring applications, system objects, devices (such as printers) and service-oriented networks. Those resources are represented by objects called MBeans (for Managed Bean). In the API, classes can be dynamically loaded and instantiated. Managing and monitoring applications can be designed and developed using the Java Dynamic Management Kit.[1]

JSR 003[2] of the Java Community Process defined JMX 1.0, 1.1 and 1.2. JMX 2.0 was being developed under JSR 255, but this JSR was subsequently withdrawn.[3] The JMX Remote API 1.0 for remote management and monitoring is specified by JSR 160.[4] An extension of the JMX Remote API for Web Services was being developed under JSR 262.[5]

Adopted early on by the J2EE community, JMX has been a part of J2SE since version 5.0. "JMX" is a trademark of Oracle Corporation.

JMX uses a three-level architecture: the instrumentation level (the MBeans), the agent level (the MBean server), and the remote management level (connectors and protocol adapters). Applications can be generic consoles (such as JConsole[6] and MC4J[7]) or domain-specific (monitoring) applications. External applications can interact with the MBeans through the use of JMX connectors and protocol adapters. Connectors serve to connect an agent with a remote JMX-enabled management application. This form of communication involves a connector in the JMX agent and a connector client in the management application. The Java Platform, Standard Edition ships with one connector, the RMI connector, which uses the Java Remote Method Protocol that is part of the Java remote method invocation (RMI) API. This is the connector that most management applications use. Protocol adapters provide a management view of the JMX agent through a given protocol; management applications that connect to a protocol adapter are usually specific to the given protocol.

A managed bean – sometimes simply referred to as an MBean – is a type of JavaBean, created with dependency injection. Managed beans are particularly used in the Java Management Extensions technology, but with Java EE 6 the specification provides a more detailed meaning of a managed bean.

An MBean represents a resource running in the Java virtual machine, such as an application or a Java EE technical service (transactional monitor, JDBC driver, etc.). MBeans can be used for collecting statistics on concerns like performance, resource usage, or problems (pull); for getting and setting application configurations or properties (push/pull); and for notifying of events like faults or state changes (push).

Java EE 6 provides that a managed bean is a bean that is implemented by a Java class, which is called its bean class. A top-level Java class is a managed bean if it is defined to be a managed bean by any other Java EE technology specification (for example, the JavaServer Faces technology specification), or if it meets certain conditions laid out in the specification. No special declaration, such as an annotation, is required to define a managed bean.

An MBean can notify the MBeanServer of its internal changes (for its attributes) by implementing javax.management.NotificationEmitter. An application interested in the MBean's changes registers a listener (javax.management.NotificationListener) with the MBeanServer. Note that JMX does not guarantee that listeners will receive all notifications.[8]

There are two basic types of MBean: standard MBeans and dynamic MBeans. Additional types are open MBeans, model MBeans and monitor MBeans. Open MBeans are dynamic MBeans that rely on basic data types; they are self-explanatory and more user-friendly. Model MBeans are dynamic MBeans that can be configured during runtime.
A generic MBean class is also provided for dynamically configuring resources during program runtime. An MXBean (platform MBean) is a special type of MBean that reifies Java virtual machine subsystems such as garbage collection, JIT compilation, memory pools, multi-threading, etc. An MLet (management applet) is a utility MBean used to load, instantiate and register MBeans in an MBeanServer from an XML description.[9] JMX is supported at various levels by different vendors.
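A minimal standard MBean illustrates the conventions described above: the management interface is, by convention, the implementation class name with the "MBean" suffix, and the bean is registered with the platform MBean server under an ObjectName. The sketch below uses only the standard java.lang.management and javax.management APIs; the Counter example and its ObjectName are invented for illustration:

```java
import java.lang.management.ManagementFactory;
import javax.management.MBeanServer;
import javax.management.ObjectName;

// Management interface for a standard MBean; naming convention: <Class>MBean.
interface CounterMBean {
    int getCount();   // exposed as the readable attribute "Count"
    void reset();     // exposed as a management operation
}

class Counter implements CounterMBean {
    private int count;
    public synchronized int getCount() { return count; }
    public synchronized void reset()   { count = 0; }
    public synchronized void increment() { count++; } // not exposed via JMX
}

public class JmxDemo {
    public static void main(String[] args) throws Exception {
        // Register the MBean with the platform MBean server; a console such
        // as JConsole can then read "Count" and invoke "reset" remotely.
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();
        server.registerMBean(new Counter(),
                new ObjectName("com.example:type=Counter"));
        Thread.sleep(Long.MAX_VALUE); // keep the JVM alive for inspection
    }
}
```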
https://en.wikipedia.org/wiki/Java_Management_Extensions
In computer networking, a wireless access point (WAP), or just access point (AP), is a networking hardware device that allows other Wi-Fi devices to connect to a wired or wireless network. As a standalone device, the AP may have a wired or wireless connection to a switch or router, but in a wireless router it can also be an integral component of the networking device itself. An AP is differentiated from a hotspot, which is a physical or digital location where Wi-Fi or WAP access is available.[1][2]

An AP connects directly to a wired local area network, typically Ethernet, and then provides wireless connections using wireless LAN technology, typically Wi-Fi, for other devices to use that wired connection. APs support the connection of multiple wireless devices through their one wired connection.[3]

Many wireless data standards have been introduced for wireless access point and wireless router technology, and new standards continue to be created to accommodate the increasing need for faster wireless connections. Access points can provide backward compatibility with older Wi-Fi protocols, as many devices were manufactured for use with older standards.[3]

Some people confuse wireless access points with wireless ad hoc networks. An ad hoc network uses a connection between two or more devices without using a wireless access point; the devices communicate directly. Because setup is easy and does not require an access point, an ad hoc network is used in situations such as a quick data exchange or a multiplayer video game. Due to its peer-to-peer layout, an ad hoc Wi-Fi connection is similar to a connection made using Bluetooth. Ad hoc connections are generally not recommended for a permanent installation.[1] Internet access via ad hoc networks, using features like Windows' Internet Connection Sharing or dedicated software such as WiFi Direct Access Point, may work well with a small number of devices that are close to each other, but ad hoc networks do not scale well: Internet traffic will converge on the nodes with a direct Internet connection, potentially congesting those nodes. For Internet-enabled nodes, access points have a clear advantage, including the possibility of a wired LAN.

It is generally recommended that one IEEE 802.11 AP should have, at a maximum, 10–25 clients.[4] However, the actual maximum number of clients that can be supported varies significantly depending on several factors, such as the type of APs in use, the density of the client environment, and the desired client throughput. The range of communication can also vary significantly, depending on variables such as indoor or outdoor placement, height above ground, nearby obstructions, other electronic devices that might actively interfere with the signal by broadcasting on the same frequency, the type of antenna, the current weather, the operating radio frequency, and the power output of devices. Network designers can extend the range of APs through the use of repeaters, which amplify a radio signal, and reflectors, which only bounce it. In experimental conditions, wireless networking has operated over distances of several hundred kilometers.[5]

Most jurisdictions have only a limited number of frequencies legally available for use by wireless networks. Usually, adjacent APs will use different frequencies (channels) to communicate with their clients, in order to avoid interference between the two nearby systems. Wireless devices can "listen" for data traffic on other frequencies, and can rapidly switch from one frequency to another to achieve better reception. However, the limited number of frequencies becomes problematic in crowded downtown areas with tall buildings using multiple APs. In such an environment, signal overlap becomes an issue causing interference, which results in signal degradation and data errors.[6]

Wireless networking lags behind wired networking in terms of bandwidth and throughput. While (as of 2013) high-density 256-QAM, 3-antenna wireless devices for the consumer market can reach sustained real-world speeds of some 240 Mbit/s at 13 m behind two standing walls (NLOS), depending on their nature, or 360 Mbit/s at 10 m line of sight, or 380 Mbit/s at 2 m line of sight (IEEE 802.11ac), or 20 to 25 Mbit/s at 2 m line of sight (IEEE 802.11g), wired hardware of similar cost reaches closer to 1000 Mbit/s up to a specified distance of 100 m with twisted-pair cabling in optimal conditions (Category 5, known as Cat 5, or better cabling with Gigabit Ethernet).

One impediment to increasing the speed of wireless communications comes from Wi-Fi's use of a shared communications medium: two stations in infrastructure mode that are communicating with each other, even over the same AP, must have each and every frame transmitted twice – from the sender to the AP, then from the AP to the receiver. This approximately halves the effective bandwidth, so an AP is only able to use somewhat less than half the actual over-the-air rate for data throughput. Thus a typical 54 Mbit/s wireless connection actually carries TCP/IP data at 20 to 25 Mbit/s.
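The halving argument can be made concrete with a short calculation. In this Java sketch, the halving follows the relay behavior described above, while the 0.8 overhead factor is an assumption chosen only to land in the 20–25 Mbit/s range quoted for a 54 Mbit/s link; it is not a figure from any standard:

```java
public class WifiThroughputEstimate {
    public static void main(String[] args) {
        double airRateMbps = 54.0;            // nominal over-the-air rate
        // Station-to-station traffic relayed through an AP is sent twice
        // (sender -> AP, then AP -> receiver), so at most about half the
        // air rate is available end to end.
        double relayedCap = airRateMbps / 2.0;   // 27 Mbit/s upper bound
        // Protocol overhead reduces usable TCP/IP throughput further;
        // the factor below is an illustrative assumption.
        double overheadFactor = 0.8;
        System.out.printf("~%.0f Mbit/s usable TCP/IP throughput%n",
                relayedCap * overheadFactor);    // ~22 Mbit/s
    }
}
```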
Users of legacy wired networks expect faster speeds, and people using wireless connections keenly want to see the wireless networks catch up. By 2012, 802.11n-based access points and client devices had already taken a fair share of the marketplace, and with the finalization of the 802.11n standard in 2009, the inherent problems of integrating products from different vendors became less prevalent.

Wireless access has special security considerations. Many wired networks base their security on physical access control, trusting all the users on the local network, but if wireless access points are connected to the network, anybody within range of the AP (which typically extends farther than the intended area) can attach to the network. The most common solution is wireless traffic encryption, and modern access points come with built-in encryption. The first-generation encryption scheme, WEP, proved easy to crack; the second- and third-generation schemes, WPA and WPA2, are considered secure[7] if a strong enough password or passphrase is used. Some APs support hotspot-style authentication using RADIUS and other authentication servers.

Opinions about wireless network security vary widely. For example, in a 2008 article for Wired magazine, Bruce Schneier asserted that the net benefits of open Wi-Fi without passwords outweigh the risks,[8] a position supported in 2014 by Peter Eckersley of the Electronic Frontier Foundation.[9] The opposite position was taken by Nick Mediati in an article for PC World, in which he advocates that every wireless access point should be protected with a password.[10]
https://en.wikipedia.org/wiki/Wireless_Access_Point
A document-term matrix is a mathematical matrix that describes the frequency of terms that occur in each document in a collection. In a document-term matrix, rows correspond to documents in the collection and columns correspond to terms. This matrix is a specific instance of a document-feature matrix, where "features" may refer to properties of a document other than terms.[1] It is also common to encounter the transpose, or term-document matrix, where documents are the columns and terms are the rows. Such matrices are useful in the fields of natural language processing and computational text analysis.[2] While the value of the cells is commonly the raw count of a given term, there are various schemes for weighting the raw counts, such as row normalizing (i.e. relative frequency/proportions) and tf-idf. Terms are commonly single words separated by whitespace or punctuation on either side (a.k.a. unigrams). In such a case, this is also referred to as a "bag of words" representation, because the counts of individual words are retained, but not the order of the words in the document. When creating a data set of terms that appear in a corpus of documents, the document-term matrix contains rows corresponding to the documents and columns corresponding to the terms. Each cell ij, then, is the number of times term j occurs in document i. As such, each row is a vector of term counts that represents the content of the document corresponding to that row. For instance, given the two (short) documents D1 = "I like databases" and D2 = "I dislike databases", the document-term matrix would be:

         I   like   dislike   databases
    D1   1    1        0          1
    D2   1    0        1          1

which shows which documents contain which terms and how many times they appear. Note that, unlike representing a document as just a token-count list, the document-term matrix includes all terms in the corpus (i.e. the corpus vocabulary), which is why there are zero counts for terms in the corpus which do not occur in a specific document. For this reason, document-term matrices are usually stored in a sparse matrix format. As a result of the power-law distribution of tokens in nearly every corpus (see Zipf's law), it is common to weight the counts. This can be as simple as dividing counts by the total number of tokens in a document (called relative frequency or proportions), dividing by the maximum frequency in each document (called prop max), or taking the log of frequencies (called log count). If one desires to weight most heavily the words most unique to an individual document as compared to the corpus as a whole, it is common to use tf-idf, which divides the term frequency by the term's document frequency. The document-term matrix emerged in the earliest years of the computerization of text. The increasing capacity for storing documents created the problem of retrieving a given document in an efficient manner. While previously the work of classifying and indexing was accomplished by hand, researchers explored the possibility of doing this automatically using word-frequency information. One of the first published document-term matrices appeared in Harold Borko's 1962 article "The construction of an empirically based mathematically derived classification system" (page 282; see also his 1965 article[3]). Borko references two computer programs: "FEAT", which stood for "Frequency of Every Allowable Term", written by John C.
Olney of the System Development Corporation, and the Descriptor Word Index Program, written by Eileen Stone, also of the System Development Corporation: Having selected the documents which were to make up the experimental library, the next step consisted of keypunching the entire body of text preparatory to computer processing. The program used for this analysis was FEAT (Frequency of Every Allowable Term). It was written by John C. Olney of the System Development Corporation and is designed to perform frequency and summary counts of individual words and of word pairs. The output of this program is an alphabetical listing, by frequency of occurrence, of all word types which appeared in the text. Certain function words such as and, the, at, a, etc., were placed in a "forbidden word list" table, and the frequency of these words was recorded in a separate listing... A special computer program, called the Descriptor Word Index Program, was written to provide this information and to prepare a document-term matrix in a form suitable for input to the Factor Analysis Program. The Descriptor Word Index program was prepared by Eileen Stone of the System Development Corporation.[4] Shortly thereafter, Gerard Salton published "Some hierarchical models for automatic document retrieval" in 1963, which also included a visual depiction of a document-term matrix.[5] Salton was at Harvard University at the time, and his work was supported by the Air Force Cambridge Research Laboratories and Sylvania Electric Products, Inc. In this paper, Salton introduces the document-term matrix by comparison to a kind of term-context matrix used to measure similarities between words: If it is desired to generate document associations or document clusters instead of word associations, the same procedures can be used with slight modifications. Instead of starting with a word-sentence matrix C, ... it is now convenient to construct a word-document matrix F, listing the frequency of occurrence of word W_i in document D_j... Document similarities can now be computed as before by comparing pairs of rows and by obtaining similarity coefficients based on the frequency of co-occurrences of the content words included in the given document. This procedure produces a document-document similarity matrix which can in turn be used for the generation of document clusters...[5] In addition to Borko and Salton, in 1964 F. W. Lancaster published a comprehensive review of automated indexing and retrieval. While the work was published while he worked at Herner and Company in Washington, D.C., the paper was written while he was "employed in research work at Aslib, on the Aslib Cranfield Project."[6] Lancaster credits Borko with the document-term matrix: Harold Borko, of the System Development Corporation, has carried this operation a little further. A significant group of clue words is chosen from the vocabulary of an experimental collection. These are arranged in a document/term matrix to show the frequency of occurrence of each term in each document.... A correlation coefficient for each word pair is then computed, based on their co-occurrence in the document set. The resulting term/term matrix... is then factor analysed and a series of factors are isolated. These factors, when interpreted and named on the basis of the terms with high loadings which appear in each of the factors, become the classes of an empirical classification. The terms with high loadings in each factor are the clue words or predictors of the categories.
One point of view on the matrix is that each row represents a document. In the vectorial semantic model, which is normally the one used to compute a document-term matrix, the goal is to represent the topic of a document by the frequency of semantically significant terms. The terms are semantic units of the documents. It is often assumed, for Indo-European languages, that nouns, verbs and adjectives are the more significant categories, and that words from those categories should be kept as terms. Adding collocations as terms improves the quality of the vectors, especially when computing similarities between documents. Latent semantic analysis (LSA, performing singular-value decomposition on the document-term matrix) can improve search results by disambiguating polysemous words and searching for synonyms of the query. However, searching in the high-dimensional continuous space is much slower than searching the standard trie data structure of search engines. Multivariate analysis of the document-term matrix can reveal topics/themes of the corpus. Specifically, latent semantic analysis and data clustering can be used; more recently, probabilistic latent semantic analysis, its generalization latent Dirichlet allocation, and non-negative matrix factorization have been found to perform well for this task.
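To make the above concrete, here is a minimal sketch in plain Python (no external libraries; the toy corpus and variable names are invented for illustration) that builds a raw-count document-term matrix and applies the relative-frequency and tf-idf weightings described above:

    import math
    from collections import Counter

    docs = ["I like databases", "I dislike databases"]   # toy corpus
    tokenized = [d.lower().split() for d in docs]        # unigram bag of words
    vocab = sorted({t for doc in tokenized for t in doc})

    # Raw-count document-term matrix: rows = documents, columns = terms.
    dtm = [[Counter(doc)[term] for term in vocab] for doc in tokenized]

    # Relative frequency: divide each count by the document's token total.
    rel = [[c / sum(row) for c in row] for row in dtm]

    # tf-idf: down-weight terms that occur in many documents.
    df = [sum(1 for row in dtm if row[j] > 0) for j in range(len(vocab))]
    tfidf = [[row[j] * math.log(len(docs) / df[j]) for j in range(len(vocab))]
             for row in dtm]

    print(vocab)  # ['databases', 'dislike', 'i', 'like']
    print(dtm)    # [[1, 0, 1, 1], [1, 1, 1, 0]]

Note that terms occurring in every document (here "i" and "databases") receive a tf-idf weight of zero, which is exactly the down-weighting of corpus-wide words discussed above.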
https://en.wikipedia.org/wiki/Term-document_matrix
In mathematics, a natural number a is a unitary divisor (or Hall divisor) of a number b if a is a divisor of b and if a and b/a are coprime, having no common factor other than 1. Equivalently, a divisor a of b is a unitary divisor if and only if every prime factor of a has the same multiplicity in a as it has in b. The concept of a unitary divisor originates from R. Vaidyanathaswamy (1931),[1] who used the term block divisor. The integer 5 is a unitary divisor of 60, because 5 and 60/5 = 12 have only 1 as a common factor. On the contrary, 6 is a divisor but not a unitary divisor of 60, as 6 and 60/6 = 10 have a common factor other than 1, namely 2. The sum-of-unitary-divisors function is denoted by the lowercase Greek letter sigma with an asterisk: σ*(n). The sum of the k-th powers of the unitary divisors is denoted by σ*_k(n); if n = p_1^{a_1} ··· p_r^{a_r}, then σ*_k(n) = (1 + p_1^{a_1 k}) ··· (1 + p_r^{a_r k}). It is a multiplicative function. If the proper unitary divisors of a given number add up to that number, then that number is called a unitary perfect number. The number 1 is a unitary divisor of every natural number. The number of unitary divisors of a number n is 2^k, where k is the number of distinct prime factors of n. This is because each integer N > 1 is the product of positive powers p^{r_p} of distinct prime numbers p. Thus every unitary divisor of N is the product, over a given subset S of the prime divisors {p} of N, of the prime powers p^{r_p} for p ∈ S. If there are k prime factors, then there are exactly 2^k subsets S, and the statement follows. The sum of the unitary divisors of n is odd if n is a power of 2 (including 1), and even otherwise. Both the count and the sum of the unitary divisors of n are multiplicative functions of n that are not completely multiplicative. The Dirichlet generating function is ∑_{n≥1} σ*_k(n)/n^s = ζ(s) ζ(s−k) / ζ(2s−k). Every divisor of n is unitary if and only if n is square-free. The set of all unitary divisors of n forms a Boolean algebra with meet given by the greatest common divisor and join by the least common multiple. Equivalently, the set of unitary divisors of n forms a Boolean ring, where the addition and multiplication are given by a ⊕ b = ab/(a, b)^2 and a ⊗ b = (a, b), where (a, b) denotes the greatest common divisor of a and b.[2] The sum of the k-th powers of the odd unitary divisors is also multiplicative, with a Dirichlet generating function of an analogous form. A divisor d of n is a bi-unitary divisor if the greatest common unitary divisor of d and n/d is 1. This concept originates from D. Suryanarayana (1972) [The number of bi-unitary divisors of an integer, in The Theory of Arithmetic Functions, Lecture Notes in Mathematics 251: 273–282, New York, Springer-Verlag]. The number of bi-unitary divisors of n is a multiplicative function of n with average order A log x for an explicit constant A.[3] A bi-unitary perfect number is one equal to the sum of its bi-unitary aliquot divisors. The only such numbers are 6, 60 and 90.[4]
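A short sketch (Python; the function name is my own) that enumerates unitary divisors directly from the definition and checks the 2^k count and σ*(n) for n = 60:

    from math import gcd

    def unitary_divisors(n):
        # d is a unitary divisor of n when d divides n and gcd(d, n/d) == 1.
        return [d for d in range(1, n + 1) if n % d == 0 and gcd(d, n // d) == 1]

    print(unitary_divisors(60))       # [1, 3, 4, 5, 12, 15, 20, 60]
    # 60 = 2^2 * 3 * 5 has k = 3 distinct prime factors, hence 2^3 = 8 of them.
    print(len(unitary_divisors(60)))  # 8
    print(sum(unitary_divisors(60)))  # sigma*(60) = (1+4)(1+3)(1+5) = 120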
https://en.wikipedia.org/wiki/Unitary_divisor
The following tables compare general and technical information for a number of online analytical processing (OLAP) servers; please see the individual products' articles for further information. The comparison covers the APIs and query languages the OLAP servers support; OLAP features that are not supported by all vendors (all vendors support features such as parent-child hierarchies, multilevel hierarchies, and drilldown); and the operating systems the OLAP servers can run on. [Comparison tables not reproduced.] Note (1): The server availability depends on the Java Virtual Machine, not on the operating system.
https://en.wikipedia.org/wiki/Comparison_of_OLAP_servers
A graceful exit[1] (or graceful handling) is a simple programming idiom wherein a program detects a serious error condition and "exits gracefully" in a controlled manner as a result. Often the program prints a descriptive error message to a terminal or log as part of the graceful exit. Usually, code for a graceful exit exists when the alternative, allowing the error to go undetected and unhandled, would produce spurious errors or later anomalous behavior that would be more difficult for the programmer to debug. The code associated with a graceful exit may also take additional steps, such as closing files, to ensure that the program leaves data in a consistent, recoverable state. Graceful exits are not always desired. In many cases, an outright crash can give the software developer the opportunity to attach a debugger or collect important information, such as a core dump or stack trace, to diagnose the root cause of the error. In a language that supports formal exception handling, a graceful exit may be the final step in the handling of an exception. In other languages graceful exits can be implemented with additional statements at the locations of possible errors. The phrase "graceful exit" has also been generalized to refer to letting go from a job or relationship in life that has ended.[2][3] In the Perl programming language, graceful exits are generally implemented via the die operator; the code for opening a file often reads like the idiom sketched below. If the attempt to open the file myresults fails, the containing program will terminate with an error message and an exit status indicating abnormal termination. In the Java programming language, the try...catch block is often used to catch exceptions: all potentially dangerous code is placed inside the block, and, if an exception occurs, it is caught. In C, one can use the error(3) function, provided in GNU systems by the GNU C Library. If its first parameter is non-zero, this function exits the calling process with that parameter as the exit status.
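A typical form of the Perl idiom described above (the filename myresults comes from the text; the exact message wording is an assumption):

    open(my $fh, '<', 'myresults') or die "Cannot open myresults: $!";

If open fails, die prints the message (with the system error in $!) and terminates the program with a non-zero exit status.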
https://en.wikipedia.org/wiki/Graceful_exit
A privacy policy is a statement or legal document (in privacy law) that discloses some or all of the ways a party gathers, uses, discloses, and manages a customer's or client's data.[1] Personal information can be anything that can be used to identify an individual, including but not limited to the person's name, address, date of birth, marital status, contact information, ID issue and expiry dates, financial records, credit information, medical history, where one travels, and intentions to acquire goods and services.[2] In the case of a business, it is often a statement that declares a party's policy on how it collects, stores, and releases the personal information it collects. It informs the client what specific information is collected, and whether it is kept confidential, shared with partners, or sold to other firms or enterprises.[3][4] Privacy policies typically represent a broader, more generalized treatment, as opposed to data use statements, which tend to be more detailed and specific. The exact contents of a certain privacy policy will depend upon the applicable law and may need to address requirements across geographical boundaries and legal jurisdictions. Most countries have their own legislation and guidelines covering who is covered, what information can be collected, and what it can be used for. In general, data protection laws in Europe cover the private sector as well as the public sector; their privacy laws apply not only to government operations but also to private enterprises and commercial transactions. In 1968, the Council of Europe began to study the effects of technology on human rights, recognizing the new threats posed by computer technology that could link and transmit in ways not widely available before. In 1969 the Organisation for Economic Co-operation and Development (OECD) began to examine the implications of personal information leaving the country. All this led the council to recommend that policy be developed to protect personal data held by both the private and public sectors, leading to Convention 108. In 1981, the Convention for the Protection of Individuals with regard to Automatic Processing of Personal Data (Convention 108) was introduced. One of the first privacy laws ever enacted was the Swedish Data Act in 1973, followed by the West German Data Protection Act in 1977 and the French Law on Informatics, Data Banks and Freedoms in 1978.[5] In the United States, concern over privacy policy starting around the late 1960s and 1970s led to the passage of the Fair Credit Reporting Act. Although this act was not designed to be a privacy law, it gave consumers the opportunity to examine their credit files and correct errors. It also placed restrictions on the use of information in credit records. Several congressional study groups in the late 1960s examined the growing ease with which automated personal information could be gathered and matched with other information. One such group was an advisory committee of the United States Department of Health and Human Services, which in 1973 drafted a code of principles called the Fair Information Practices. The work of the advisory committee led to the Privacy Act in 1974. The United States signed the Organisation for Economic Co-operation and Development guidelines in 1980.[5] In Canada, a Privacy Commissioner of Canada was established under the Canadian Human Rights Act in 1977. In 1982, the appointment of a Privacy Commissioner was part of the new Privacy Act.
Canada signed the OECD guidelines in 1984.[5] There are significant differences between EU data protection and US data privacy laws. These standards must be met not only by businesses operating in the EU but also by any organization that transfers personal information collected concerning citizens of the EU. In 2001 the United States Department of Commerce worked to ensure legal compliance for US organizations under an opt-in Safe Harbor Program. The FTC approved eTRUST to certify streamlined compliance with the US-EU Safe Harbor. In 1995 the European Union (EU) introduced the Data Protection Directive[6] for its member states. As a result, many organizations doing business within the EU began to draft policies to comply with this Directive. In the same year, the U.S. Federal Trade Commission (FTC) published the Fair Information Principles,[7] which provided a set of non-binding governing principles for the commercial use of personal information. While not mandating policy, these principles provided guidance on the developing concerns of how to draft privacy policies. The United States does not have a specific federal regulation establishing universal implementation of privacy policies. Congress has, at times, considered comprehensive laws regulating the collection of information online, such as the Consumer Internet Privacy Enhancement Act[8] and the Online Privacy Protection Act of 2001,[9] but none have been enacted. In 2001, the FTC stated an express preference for "more law enforcement, not more laws"[10] and promoted continued focus on industry self-regulation. In many cases, the FTC enforces the terms of privacy policies as promises made to consumers, using the authority granted by Section 5 of the FTC Act, which prohibits unfair or deceptive marketing practices.[11] The FTC's powers are statutorily restricted in some cases; for example, airlines are subject to the authority of the Federal Aviation Administration (FAA),[12] and cell phone carriers are subject to the authority of the Federal Communications Commission (FCC).[13] In some cases, private parties enforce the terms of privacy policies by filing class action lawsuits, which may result in settlements or judgments. However, such lawsuits are often not an option, due to arbitration clauses in the privacy policies or other terms of service agreements.[14] While no generally applicable law exists, some federal laws govern privacy policies in specific circumstances. Some states have implemented more stringent regulations for privacy policies. The California Online Privacy Protection Act of 2003 (Business and Professions Code sections 22575-22579) requires "any commercial websites or online services that collect personal information on California residents through a web site to conspicuously post a privacy policy on the site".[26] Both Nebraska and Pennsylvania have laws treating misleading statements in privacy policies published on websites as deceptive or fraudulent business practices.[27] Canada's federal privacy law applicable to the private sector is formally referred to as the Personal Information Protection and Electronic Documents Act (PIPEDA). The purpose of the act is to establish rules to govern the collection, use, and disclosure of personal information by commercial organizations.
The organization is allowed to collect, disclose and use the amount of information for purposes that a reasonable person would consider appropriate in the circumstances.[28] The Act establishes the Privacy Commissioner of Canada as the ombudsman for addressing any complaints that are filed against organizations. The Commissioner works to resolve problems through voluntary compliance, rather than heavy-handed enforcement. The Commissioner investigates complaints, conducts audits, promotes awareness of, and undertakes research about privacy matters.[29] The right to privacy is a highly developed area of law in Europe. All the member states of the European Union (EU) are also signatories of the European Convention on Human Rights (ECHR). Article 8 of the ECHR provides a right to respect for one's "private and family life, his home and his correspondence", subject to certain restrictions. The European Court of Human Rights has given this article a very broad interpretation in its jurisprudence.[30] In 1980, in an effort to create a comprehensive data protection system throughout Europe, the Organisation for Economic Co-operation and Development (OECD) issued its "Recommendations of the Council Concerning Guidelines Governing the Protection of Privacy and Trans-Border Flows of Personal Data",[31] which set out seven principles governing the protection of personal data. The OECD guidelines, however, were nonbinding, and data privacy laws still varied widely across Europe. The US, while endorsing the OECD's recommendations, did nothing to implement them within the United States.[32] However, all seven principles were incorporated into the EU Directive.[32] In 1995, the EU adopted the Data Protection Directive, which regulates the processing of personal data within the EU. There were significant differences between the EU data protection and equivalent U.S. data privacy laws. These standards must be met not only by businesses operating in the EU but also by any organization that transfers personal information collected concerning a citizen of the EU. In 2001 the United States Department of Commerce worked to ensure legal compliance for US organizations under an opt-in Safe Harbor Program.[33] The FTC has approved a number of US providers to certify compliance with the US-EU Safe Harbor. Since 2010, Safe Harbor has been criticised, especially by Germany's publicly appointed privacy protection officials, because the FTC had not asserted the defined rules in a proper manner even after discrepancies were revealed.[34] Effective 25 May 2018, the Data Protection Directive was superseded by the General Data Protection Regulation (GDPR), which harmonizes privacy rules across all EU member states. The GDPR imposes more stringent rules on the collection of personal information belonging to EU data subjects, including a requirement for privacy policies to be more concise, clearly worded, and transparent in their disclosure of any collection, processing, storage, or transfer of personally identifiable information.
Data controllers must also provide the opportunity for data to be made portable in a common format, and for it to be erased under certain circumstances.[35][36] The Privacy Act 1988 provides the legal framework for privacy in Australia.[37] It includes a number of national privacy principles;[38] there are thirteen privacy principles under the Privacy Act.[39] It oversees and regulates the collection, use and disclosure of people's private information, establishes who is responsible if there is a violation, and sets out the rights of individuals to access their information.[39] The Information Technology (Amendment) Act, 2008 made significant changes to the Information Technology Act, 2000, introducing Section 43A. This section provides compensation in the case where a corporate body is negligent in implementing and maintaining reasonable security practices and procedures and thereby causes wrongful loss or wrongful gain to any person. This applies when a corporate body possesses, deals with or handles any sensitive personal data or information in a computer resource that it owns, controls or operates. In 2011, the Government of India prescribed the Information Technology (Reasonable security practices and procedures and sensitive personal data or information) Rules, 2011[40] by publishing it in the Official Gazette.[41] These rules require a body corporate to provide a privacy policy for handling of or dealing in personal information, including sensitive personal data or information.[42] Such a privacy policy should contain the information specified in the rules, should be published on the website of the body corporate, and should be made available for viewing by providers of information who have provided personal information under lawful contract. Online certification or "seal" programs are an example of industry self-regulation of privacy policies. Seal programs usually require implementation of fair information practices as determined by the certification program, and may require continued compliance monitoring. TrustArc (formerly TRUSTe),[43] the first online privacy seal program, included more than 1,800 members by 2007.[44] Other online seal programs include the Trust Guard Privacy Verified program,[45] eTrust,[46] and Webtrust.[47] Some websites also define their privacy policies using P3P or the Internet Content Rating Association (ICRA), allowing browsers to automatically assess the level of privacy offered by the site and allowing access only when the site's privacy practices are in line with the user's privacy settings. However, these technical solutions do not guarantee that websites actually follow the claimed privacy policies. These implementations also require users to have a minimum level of technical knowledge to configure their own browser privacy settings.[48] These automated privacy policies have not been popular either with websites or with their users.[49] To reduce the burden of interpreting individual privacy policies, re-usable, certified policies available from a policy server have been proposed by Jøsang, Fritsch and Mahler.[50] Many critics have attacked the efficacy and legitimacy of privacy policies found on the Internet. Concerns exist about the effectiveness of industry-regulated privacy policies. For example, a 2000 FTC report, Privacy Online: Fair Information Practices in the Electronic Marketplace, found that while the vast majority of websites surveyed had some manner of privacy disclosure, most did not meet the standard set in the FTC Principles.
In addition, many organizations reserve the express right to unilaterally change the terms of their policies. In June 2009 the EFF website TOSback began tracking such changes on 56 popular internet services, including monitoring the privacy policies of Amazon, Google and Facebook.[51] There are also questions about whether consumers understand privacy policies and whether they help consumers make more informed decisions. A 2002 report from the Stanford Persuasive Technology Lab contended that a website's visual design had more influence than the website's privacy policy when consumers assessed the website's credibility.[52] A 2007 study by Carnegie Mellon University claimed that "when not presented with prominent privacy information...", consumers were "...likely to make purchases from the vendor with the lowest price, regardless of that site's privacy policies".[53] However, the same study also showed that when information about privacy practices is clearly presented, consumers prefer retailers who better protect their privacy, and some are willing to "pay a premium to purchase from more privacy protective websites". Furthermore, a 2007 study at the University of California, Berkeley found that "75% of consumers think as long as a site has a privacy policy it means it won't share data with third parties," confusing the existence of a privacy policy with extensive privacy protection.[54] Based on the common nature of this misunderstanding, researcher Joseph Turow argued to the U.S. Federal Trade Commission that the term "privacy policy" thus constitutes a deceptive trade practice, and that alternative phrasing like "how we use your information" should be used instead.[55] Privacy policies suffer generally from a lack of precision, especially when compared with the emerging form of the data use statement. Where privacy statements provide a more general overview of data collection and use, data use statements represent a much more specific treatment. As a result, privacy policies may not meet the increased demand for transparency that data use statements provide. Critics also question whether consumers even read privacy policies or can understand what they read. A 2001 study by the Privacy Leadership Initiative claimed only 3% of consumers read privacy policies carefully, and 64% briefly glanced at, or never read, privacy policies.[56] The average website user, having once read a privacy statement, may have more uncertainty about the trustworthiness of the website than before.[57][58] One possible issue is the length and complexity of policies. According to a 2008 Carnegie Mellon study, the average length of a privacy policy is 2,500 words and requires an average of 10 minutes to read. The study found that "privacy policies are hard to read" and, as a result, "read infrequently".[59] However, any effort to make the information more presentable risks simplifying it to the point that it does not convey the extent to which users' data is being shared and sold.[60] This is known as the "transparency paradox". There have been many studies carried out by researchers to evaluate the privacy policies of the websites of companies. One study uses natural language processing and deep learning as a proposed solution to automatically assess the efficiency of companies' privacy policies, in order to help users become more aware.[61]
https://en.wikipedia.org/wiki/Privacy_policy
In combinatorics, the twelvefold way is a systematic classification of 12 related enumerative problems concerning two finite sets, which include the classical problems of counting permutations, combinations, multisets, and partitions either of a set or of a number. The idea of the classification is credited to Gian-Carlo Rota, and the name was suggested by Joel Spencer.[1] Let N and X be finite sets, and let n = |N| and x = |X| be their cardinalities. Thus N is a set with n elements, and X is a set with x elements. The general problem we consider is the enumeration of equivalence classes of functions f : N → X. The functions are subject to one of the three following restrictions: f is unrestricted, f is injective, or f is surjective. (The condition "f is bijective" is only an option when n = x; but then it is equivalent to both "f is injective" and "f is surjective".) There are four different equivalence relations which may be defined on the set of functions f from N to X: equality, equality up to a permutation of N, equality up to a permutation of X, and equality up to permutations of both N and X. The three conditions on the functions and the four equivalence relations can be paired in 3 × 4 = 12 ways. The twelve problems of counting equivalence classes of functions do not involve the same difficulties, and there is not one systematic method for solving them. Two of the problems are trivial (the number of equivalence classes is 0 or 1), five problems have an answer in terms of a multiplicative formula in n and x, and the remaining five problems have an answer in terms of combinatorial functions (Stirling numbers and the partition function for a given number of parts). The incorporation of classical enumeration problems into this setting is as follows. The various problems in the twelvefold way may be considered from different points of view. Traditionally, many of the problems in the twelvefold way have been formulated in terms of placing balls in boxes (or some similar visualization) instead of defining functions. The set N can be identified with a set of balls, and X with a set of boxes; the function f : N → X then describes a way to distribute the balls into the boxes, namely by putting each ball a into box f(a). A function ascribes a unique image to each value in its domain; this property is reflected by the property that any ball can go into only one box (together with the requirement that no ball should remain outside of the boxes), whereas any box can accommodate an arbitrary number of balls. Requiring in addition f to be injective means forbidding the placement of more than one ball in any one box, while requiring f to be surjective means insisting that every box contain at least one ball. Counting modulo permutations of N or X is reflected by calling the balls or the boxes, respectively, "indistinguishable". This is an imprecise formulation, intended to indicate that different configurations are not to be counted separately if one can be transformed into the other by some interchange of balls or of boxes; this possibility of transformation is formalized by the action by permutations. Another way to think of some of the cases is in terms of sampling, in statistics. Imagine a population of X items (or people), of which we choose N. Two different schemes are normally described, known as "sampling with replacement" and "sampling without replacement". In the former case (sampling with replacement), once we've chosen an item, we put it back in the population, so that we might choose it again.
The result is that each choice is independent of all the other choices, and the set of samples is technically referred to as independent identically distributed. In the latter case, however, once we have chosen an item, we put it aside so that we cannot choose it again. This means that the act of choosing an item has an effect on all the following choices (the particular item cannot be seen again), so our choices are dependent on one another. A second distinction among sampling schemes is whether ordering matters. For example, if we have ten items, of which we choose two, then the choice (4, 7) is different from (7, 4) if ordering matters; on the other hand, if ordering does not matter, then the choices (4, 7) and (7, 4) are equivalent. The first two rows and columns of the table below correspond to sampling with and without replacement, with and without consideration of order. The cases of sampling with replacement are found in the column labeled "Any f", while the cases of sampling without replacement are found in the column labeled "Injective f". The cases where ordering matters are found in the row labeled "Distinct", and the cases where ordering does not matter are found in the row labeled "S_n orbits". Each table entry indicates how many different sets of choices there are, in a particular sampling scheme. Three of these table entries also correspond to probability distributions. Sampling with replacement where ordering matters is comparable to describing the joint distribution of N separate random variables, each with an X-fold categorical distribution. Sampling with replacement where ordering does not matter, however, is comparable to describing a single multinomial distribution of N draws from an X-fold category, where only the number seen of each category matters. Sampling without replacement where ordering does not matter is comparable to a single multivariate hypergeometric distribution. Sampling without replacement where order does matter does not seem to correspond to a probability distribution.[2] In all the injective cases (sampling without replacement), the number of sets of choices is zero unless N ≤ X. ("Comparable" in the above cases means that each element of the sample space of the corresponding distribution corresponds to a separate set of choices, and hence the number in the appropriate box indicates the size of the sample space for the given distribution.) From the perspective of sampling, the column labeled "Surjective f" is somewhat strange: essentially, we keep sampling with replacement until we have chosen each item at least once. Then, we count how many choices we have made, and if it is not equal to N, throw out the entire set and repeat. This is vaguely comparable to the coupon collector's problem, where the process involves "collecting" (by sampling with replacement) a set of X coupons until each coupon has been seen at least once. In all surjective cases, the number of sets of choices is zero unless N ≥ X. A function f : N → X can be considered from the perspective of X or of N. This leads to different views: f can be seen as labelling each element of N by an element of X, as selecting an element of X for each element of N, or as grouping the elements of N by their common image. These points of view are not equally suited to all cases. The labelling and selection points of view are not well compatible with permutation of the elements of X, since this changes the labels or the selection; on the other hand the grouping point of view does not give complete information about the configuration unless the elements of X may be freely permuted.
The labelling and selection points of view are more or less equivalent when N is not permuted, but when it is, the selection point of view is more suited. The selection can then be viewed as an unordered selection: a single choice of a (multi-)set of n elements from X is made. When viewing f as a labelling of the elements of N, the latter may be thought of as arranged in a sequence, with the labels from X successively assigned to them. A requirement that f be injective means that no label can be used a second time; the result is a sequence of labels without repetition. In the absence of such a requirement, the terminology "sequences with repetition" is used, meaning that labels may be used more than once (although sequences that happen to be without repetition are also allowed). When viewing f as an unordered selection of the elements of X, the same kind of distinction applies. If f must be injective, then the selection must involve n distinct elements of X, so it is a subset of X of size n, also called an n-combination. Without the requirement, one and the same element of X may occur multiple times in the selection, and the result is a multiset of size n of elements from X, also called an n-multicombination or n-combination with repetition. The requirement that f be surjective means, from the viewpoint of labelling elements of N, that every label is to be used at least once; from the viewpoint of selection from X, it means that every element of X must be included in the selection at least once. Labelling with surjection is equivalent to a grouping of elements of N followed by labelling each group by an element of X, and is accordingly somewhat more complicated to describe mathematically. When viewing f as a grouping of the elements of N (which assumes one identifies under permutations of X), requiring f to be surjective means the number of groups must be exactly x. Without this requirement the number of groups can be at most x. The requirement of injective f means each element of N must be a group in itself, which leaves at most one valid grouping and therefore gives a rather uninteresting counting problem. When in addition one identifies under permutations of N, this amounts to forgetting the groups themselves but retaining only their sizes. These sizes moreover do not come in any definite order, while the same size may occur more than once; one may choose to arrange them into a weakly decreasing list of numbers whose sum is the number n. This gives the combinatorial notion of a partition of the number n into exactly x (for surjective f) or at most x (for arbitrary f) parts. Formulas for the different cases of the twelvefold way are summarized in the following table; each entry is explained in a subsection below:

                       Any f                  Injective f           Surjective f
    Distinct           x^n                    x(x−1)···(x−n+1)      x!·S(n, x)
    S_n orbits         C(x+n−1, n)            C(x, n)               C(n−1, n−x)
    S_x orbits         ∑_{k=0}^{x} S(n, k)    [n ≤ x]               S(n, x)
    S_n × S_x orbits   ∑_{k=0}^{x} p_k(n)     [n ≤ x]               p_x(n)

Here S(n, k) denotes the Stirling number of the second kind, p_k(n) the number of partitions of n into exactly k non-zero parts, C(x, n) the binomial coefficient, and [·] the Iverson bracket. This is a quick summary of what the different cases mean; the cases are described in detail below. Think of a set of X numbered items (numbered from 1 to x), from which we choose n, yielding an ordered list of the items: e.g. if there are x = 10 items of which we choose n = 3, the result might be the list (5, 2, 10). We then count how many different such lists exist, sometimes first transforming the lists in ways that reduce the number of distinct possibilities.
The columns correspond to the restriction on f (any f, injective f, or surjective f), and the rows to the equivalence considered (distinct, S_n orbits, S_x orbits, or S_n × S_x orbits). The chart below is similar to the chart above, but instead of showing the formulas, it gives an intuitive understanding of their meaning using the familiar balls-and-boxes example. The rows represent the distinctness of the balls and boxes; the columns represent whether multi-packs (more than one ball in one box) or empty boxes are allowed. The cells in the chart show the question answered by the corresponding formula in the chart above:

- How many ways can you place n marked balls into x marked boxes, with no other rules on placement? with no multi-packs allowed? with no empty boxes allowed?
- How many ways can you place n plain balls into x marked boxes, with no other rules on placement? with no multi-packs allowed? with no empty boxes allowed?
- How many ways can you place n marked balls into x plain boxes, with no other rules on placement? with no multi-packs allowed? with no empty boxes allowed?
- How many ways can you place n plain balls into x plain boxes, with no other rules on placement? with no multi-packs allowed? with no empty boxes allowed?

The cases below are ordered in such a way as to group those cases for which the arguments used in counting are related, which is not the ordering in the table given. The first case is equivalent to counting sequences of n elements of X with no restriction: a function f : N → X is determined by the n images of the elements of N, which can each be independently chosen among the elements of X. This gives a total of x^n possibilities. Example: for X = {a, b, c} and N = {1, 2}, |{(a,a), (a,b), (a,c), (b,a), (b,b), (b,c), (c,a), (c,b), (c,c)}| = 3^2 = 9. The next case is equivalent to counting sequences of n distinct elements of X, also called n-permutations of X, or sequences without repetitions; again this sequence is formed by the n images of the elements of N. This case differs from the one of unrestricted sequences in that there is one choice fewer for the second element, two fewer for the third element, and so on. Therefore, instead of an ordinary power of x, the value is given by a falling factorial power of x, in which each successive factor is one fewer than the previous one: x(x − 1)···(x − n + 1). Note that if n > x then one obtains a factor zero, so in this case there are no injective functions N → X at all; this is just a restatement of the pigeonhole principle. Example: for X = {a, b, c, d} and N = {1, 2}, |{(a,b), (a,c), (a,d), (b,a), (b,c), (b,d), (c,a), (c,b), (c,d), (d,a), (d,b), (d,c)}| = 4 × 3 = 12. The next case is equivalent to counting subsets with n elements of X, also called n-combinations of X: among the sequences of n distinct elements of X, those that differ only in the order of their terms are identified by permutations of N.
Since in all cases this groups together exactly n! different sequences, we can divide the number of such sequences by n! to get the number of n-combinations of X. This number is known as the binomial coefficient C(x, n), which is therefore given by C(x, n) = x(x − 1)···(x − n + 1)/n!. Example: for X = {a, b, c, d} and N = {1, 2}, |{{a,b}, {a,c}, {a,d}, {b,c}, {b,d}, {c,d}}| = (4 × 3)/2! = 6. The next case is equivalent to counting multisets with n elements from X (also called n-multicombinations). The reason is that for each element of X it is determined how many elements of N are mapped to it by f, while two functions that give the same such "multiplicities" to each element of X can always be transformed into one another by a permutation of N. The formula counting all functions N → X is not useful here, because the number of them grouped together by permutations of N varies from one function to another. Rather, as explained under combinations, the number of n-multicombinations from a set with x elements can be seen to be the same as the number of n-combinations from a set with x + n − 1 elements. This reduces the problem to another one in the twelvefold way, and gives as result C(x + n − 1, n). Example: for X = {a, b, c} and N = {1, 2}, |{{a,a}, {a,b}, {a,c}, {b,b}, {b,c}, {c,c}}| = (3 × 4)/2! = 6. The next case is equivalent to counting multisets with n elements from X for which each element of X occurs at least once. This is also equivalent to counting the compositions of n with x (non-zero) terms, by listing the multiplicities of the elements of x in order. The correspondence between functions and multisets is the same as in the previous case, and the surjectivity requirement means that all multiplicities are at least one. By decreasing all multiplicities by 1, this reduces to the previous case; since the change decreases the value of n by x, the result is C(n − 1, n − x). Note that when n < x there are no surjective functions N → X at all (a kind of "empty pigeonhole" principle); this is taken into account in the formula, by the convention that binomial coefficients are always 0 if the lower index is negative. The same value is also given by the expression C(n − 1, x − 1), except in the extreme case n = x = 0, where the former expression correctly gives C(−1, 0) = 1, while the latter incorrectly gives C(−1, −1) = 0. The form of the result suggests looking for a manner to associate a class of surjective functions N → X directly to a subset of n − x elements chosen from a total of n − 1, which can be done as follows. First choose a total ordering of the sets N and X, and note that by applying a suitable permutation of N, every surjective function N → X can be transformed into a unique weakly increasing (and of course still surjective) function. If one connects the elements of N in order by n − 1 arcs into a linear graph, then choosing any subset of n − x arcs and removing the rest, one obtains a graph with x connected components, and by sending these to the successive elements of X, one obtains a weakly increasing surjective function N → X; also the sizes of the connected components give a composition of n into x parts.
This argument is basically the one given at stars and bars, except that there the complementary choice of x − 1 "separations" is made. Example: for X = {a, b} and N = {1, 2, 3}, |{{a,a,b}, {a,b,b}}| = C(3 − 1, 3 − 2) = C(2, 1) = 2. In the next case we consider sequences of n distinct elements from X, but identify those obtained from one another by applying to each element a permutation of X. It is easy to see that two different such sequences can always be identified: the permutation must map term i of the first sequence to term i of the second sequence, and since no value occurs twice in either sequence these requirements do not contradict each other; it remains to map the elements not occurring in the first sequence bijectively to those not occurring in the second sequence in an arbitrary way. The only fact that makes the result depend on n and x at all is that the existence of any such sequences to begin with requires n ≤ x, by the pigeonhole principle. The number is therefore expressed as [n ≤ x], using the Iverson bracket. The next case is reduced to the previous one: since all sequences of n distinct elements from X can already be transformed into each other by applying a permutation of X to each of their terms, also allowing reordering of the terms does not give any new identifications; the number remains [n ≤ x]. The next case is equivalent to counting partitions of N into x (non-empty) subsets, or counting equivalence relations on N with exactly x classes. Indeed, for any surjective function f : N → X, the relation of having the same image under f is such an equivalence relation, and it does not change when a permutation of X is subsequently applied; conversely one can turn such an equivalence relation into a surjective function by assigning the elements of X in some manner to the x equivalence classes. The number of such partitions or equivalence relations is by definition the Stirling number of the second kind S(n, x). Its value can be described using a recursion relation or using generating functions, but unlike binomial coefficients there is no closed formula for these numbers that does not involve a summation. For each surjective function f : N → X, its orbit under permutations of X has x! elements, since composition (on the left) with two distinct permutations of X never gives the same function on N (the permutations must differ at some element of X, which can always be written as f(i) for some i ∈ N, and the compositions will then differ at i). It follows that the number for this case is x! times the number for the previous case, that is x!·S(n, x). Example: for X = {a, b} and N = {1, 2, 3}, |{(a,a,b), (a,b,a), (a,b,b), (b,a,a), (b,a,b), (b,b,a)}| = 2!·S(3, 2) = 2 × 3 = 6. The next case is like the corresponding one for surjective functions, but some elements of x might not correspond to any equivalence class at all (since one considers functions up to a permutation of X, it does not matter which elements are concerned, just how many).
As a consequence one is counting equivalence relations on N with at most x classes, and the result is obtained from the mentioned case by summation over values up to x, giving ∑_{k=0}^{x} S(n, k). In case x ≥ n, the size of x poses no restriction at all, and one is counting all equivalence relations on a set of n elements (equivalently, all partitions of such a set); therefore ∑_{k=0}^{n} S(n, k) gives an expression for the Bell number B_n. The next case is equivalent to counting partitions of the number n into x non-zero parts. Compared to the case of counting surjective functions up to permutations of X only (S(n, x)), one only retains the sizes of the equivalence classes that the function partitions N into (including the multiplicity of each size), since two equivalence relations can be transformed into one another by a permutation of N if and only if the sizes of their classes match. This is precisely what distinguishes the notion of partition of n from that of partition of N, so as a result one gets by definition the number p_x(n) of partitions of n into x non-zero parts. The final case is equivalent to counting partitions of the number n into at most x parts. The association is the same as for the previous case, except that now some parts of the partition may be equal to 0. (Specifically, they correspond to elements of X not in the image of the function.) Each partition of n into at most x non-zero parts can be extended to such a partition by adding the required number of zeroes, and this accounts for all possibilities exactly once, so the result is given by ∑_{k=0}^{x} p_k(n). By adding 1 to each of the x parts, one obtains a partition of n + x into x non-zero parts, and this correspondence is bijective; hence the expression given can be simplified by writing it as p_x(n + x). The above formulas give the proper values for all finite sets N and X. In some cases there are alternative formulas which are almost equivalent, but which do not give the correct result in some extremal cases, such as when N or X are empty. The following considerations apply to such cases. In particular, in the case of counting multisets with n elements taken from X, the given expression C(n + x − 1, n) is equivalent in most cases to C(n + x − 1, x − 1), but the latter expression would give 0 for the case n = x = 0 (by the usual convention that binomial coefficients with a negative lower index are always 0). Similarly, for the case of counting compositions of n with x non-zero parts, the given expression C(n − 1, n − x) is almost equivalent to the expression C(n − 1, x − 1) given by the stars and bars argument, but the latter gives incorrect values for n = 0 and all values of x. For the cases where the result involves a summation, namely those of counting partitions of N into at most x non-empty subsets or partitions of n into at most x non-zero parts, the summation index is taken to start at 0; although the corresponding term is zero whenever n > 0, it is the unique non-zero term when n = 0, and the result would be wrong for those cases if the summation were taken to start at 1. We can generalize further by allowing other groups of permutations to act on N and X. If G is a group of permutations of N, and H is a group of permutations of X, then we count equivalence classes of functions f : N → X.
Two functions f and F are considered equivalent if, and only if, there exist g ∈ G and h ∈ H so that F = h ∘ f ∘ g. This extension leads to notions such as cyclic and dihedral permutations, as well as cyclic and dihedral partitions of numbers and sets. Another generalization, called the twentyfold way, was developed by Kenneth P. Bogart in his book Combinatorics Through Guided Discovery. In the problem of distributing objects to boxes, both the objects and the boxes may be identical or distinct; Bogart identifies 20 cases.[3] Robert A. Proctor has constructed the thirtyfold way.[4]
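The twelve formulas above can be collected into a short program. The sketch below (Python; function names are mine, and for simplicity it assumes n, x ≥ 1, sidestepping the extremal conventions for n = x = 0 discussed above) computes every entry of the table:

    from functools import lru_cache
    from math import comb, factorial, perm

    @lru_cache(maxsize=None)
    def stirling2(n, k):
        # Stirling number of the second kind: partitions of an n-set into k blocks.
        if n == 0 and k == 0:
            return 1
        if n == 0 or k == 0:
            return 0
        return k * stirling2(n - 1, k) + stirling2(n - 1, k - 1)

    @lru_cache(maxsize=None)
    def p_exact(n, k):
        # Partitions of the integer n into exactly k non-zero parts.
        if n == 0 and k == 0:
            return 1
        if n <= 0 or k <= 0:
            return 0
        return p_exact(n - 1, k - 1) + p_exact(n - k, k)

    def twelvefold(n, x):
        # Keys: (equivalence used, restriction on f).
        return {
            ("distinct", "any"): x ** n,
            ("distinct", "injective"): perm(x, n),          # falling factorial
            ("distinct", "surjective"): factorial(x) * stirling2(n, x),
            ("S_n orbits", "any"): comb(x + n - 1, n),      # n-multicombinations
            ("S_n orbits", "injective"): comb(x, n),        # n-combinations
            ("S_n orbits", "surjective"): comb(n - 1, n - x) if n >= x else 0,
            ("S_x orbits", "any"): sum(stirling2(n, k) for k in range(x + 1)),
            ("S_x orbits", "injective"): 1 if n <= x else 0,
            ("S_x orbits", "surjective"): stirling2(n, x),
            ("S_n x S_x orbits", "any"): sum(p_exact(n, k) for k in range(x + 1)),
            ("S_n x S_x orbits", "injective"): 1 if n <= x else 0,
            ("S_n x S_x orbits", "surjective"): p_exact(n, x),
        }

    # Reproduces the worked example above: 2! * S(3, 2) = 6.
    print(twelvefold(3, 2)[("distinct", "surjective")])  # 6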
https://en.wikipedia.org/wiki/Twelvefold_way
Biomedical text mining (including biomedical natural language processing or BioNLP) refers to the methods and study of how text mining may be applied to texts and literature of the biomedical domain. As a field of research, biomedical text mining incorporates ideas from natural language processing, bioinformatics, medical informatics and computational linguistics. The strategies in this field have been applied to the biomedical literature available through services such as PubMed. In recent years, the scientific literature has shifted to electronic publishing, but the volume of information available can be overwhelming. This revolution in publishing has caused a high demand for text mining techniques. Text mining offers information retrieval (IR) and entity recognition (ER).[1] IR allows the retrieval of relevant papers according to the topic of interest, e.g. through PubMed. ER is practiced when certain biological terms are recognized (e.g. proteins or genes) for further processing. Applying text mining approaches to biomedical text requires specific considerations common to the domain. Large annotated corpora used in the development and training of general-purpose text mining methods (e.g., sets of movie dialogue,[3] product reviews,[4] or Wikipedia article text) are not specific to biomedical language. While they may provide evidence of general text properties such as parts of speech, they rarely contain concepts of interest to biologists or clinicians. Development of new methods to identify features specific to biomedical documents therefore requires assembly of specialized corpora.[5] Resources designed to aid in building new biomedical text mining methods have been developed through the Informatics for Integrating Biology and the Bedside (i2b2) challenges[6][7][8] and by biomedical informatics researchers.[9][10] Text mining researchers frequently combine these corpora with the controlled vocabularies and ontologies available through the National Library of Medicine's Unified Medical Language System (UMLS) and Medical Subject Headings (MeSH). Machine learning-based methods often require very large data sets as training data to build useful models.[11] Manual annotation of large text corpora is not realistically possible, so training data may instead be the product of weak supervision[12][13] or purely statistical methods. Like other text documents, biomedical documents contain unstructured data.[14] Research publications follow different formats, contain different types of information, and are interspersed with figures, tables, and other non-text content. Both unstructured text and semi-structured document elements, such as tables, may contain important information that should be text mined.[15] Clinical documents may vary in structure and language between departments and locations. Other types of biomedical text, such as drug labels,[16] may follow general structural guidelines but lack further details. Biomedical literature contains statements about observations that may not be statements of fact. This text may express uncertainty or skepticism about claims. Without specific adaptations, text mining approaches designed to identify claims within text may mischaracterize these "hedged" statements as facts.[17] Biomedical text mining applications developed for clinical use should ideally reflect the needs and demands of clinicians.[5] This is a concern in environments where clinical decision support is expected to be informative and accurate.
A comprehensive overview of the development and uptake of NLP methods applied to free-text clinical notes related to chronic diseases has been published.[18]

New text mining systems must work with existing standards, electronic medical records, and databases.[5] Methods for interfacing with clinical systems such as LOINC have been developed[19] but require extensive organizational effort to implement and maintain.[20][21] Text mining systems operating with private medical data must respect its security and ensure it is rendered anonymous where appropriate.[22][23][24]

Specific subtasks are of particular concern when processing biomedical text.[14] Developments in biomedical text mining have incorporated identification of biological entities with named entity recognition, or NER. Names and identifiers for biomolecules such as proteins and genes,[25] chemical compounds and drugs,[26] and disease names[27] have all been used as entities. Most entity recognition methods are supported by pre-defined linguistic features or vocabularies, though methods incorporating deep learning and word embeddings have also been successful at biomedical NER.[28][29]

Biomedical documents may be classified or clustered based on their contents and topics. In classification, document categories are specified manually,[30] while in clustering, documents form algorithm-dependent, distinct groups.[31] These two tasks are representative of supervised and unsupervised methods, respectively, yet the goal of both is to produce subsets of documents based on their distinguishing features. Methods for biomedical document clustering have relied upon k-means clustering.[31]

Biomedical documents describe connections between concepts, whether they are interactions between biomolecules, events occurring subsequently over time (i.e., temporal relationships), or causal relationships. Text mining methods may perform relation discovery to identify these connections, often in concert with named entity recognition.[32] The challenge of identifying uncertain or "hedged" statements has been addressed through hedge cue detection in biomedical literature.[17]

Multiple researchers have developed methods to identify specific scientific claims from literature.[33][34] In practice, this process involves both isolating phrases and sentences denoting the core arguments made by the authors of a document (a process known as argument mining, employing tools used in fields such as political science) and comparing claims to find potential contradictions between them.[34]

Information extraction, or IE, is the process of automatically identifying structured information from unstructured or partially structured text. IE processes can involve several or all of the above activities, including named entity recognition, relationship discovery, and document classification, with the overall goal of translating text to a more structured form, such as the contents of a template or knowledge base. In the biomedical domain, IE is used to generate links between concepts described in text, such as gene A inhibits gene B and gene C is involved in disease G.[35] Biomedical knowledge bases containing this type of information are generally products of extensive manual curation, so replacement of manual efforts with automated methods remains a compelling area of research.[36][37]
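As a toy illustration of the information extraction task just described (a deliberately naive sketch; real biomedical IE systems combine NER, parsing and learned models), a single regular expression can pull "gene A inhibits gene B"-style relations out of free text:

```python
# Pattern-based relation extraction over a tiny invented text sample.
import re

text = ("TP53 inhibits MDM2 in several cell lines. "
        "BRCA1 activates RAD51 during DNA repair.")

# Subject gene, relation verb, object gene.
pattern = re.compile(r"(\w+)\s+(inhibits|activates)\s+(\w+)")
for subject, relation, obj in pattern.findall(text):
    print((subject, relation, obj))
# ('TP53', 'inhibits', 'MDM2')
# ('BRCA1', 'activates', 'RAD51')
```

Each extracted triple has the template-like structure that a curated knowledge base could store; the hard problems in practice are recognizing entity mentions, normalizing them to identifiers, and handling hedged or negated statements.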
Biomedical text mining supports applications for identifying documents and concepts matching search queries. Search engines such as PubMed search allow users to query literature databases with words or phrases present in document contents, metadata, or indices such as MeSH. Similar approaches may be used for medical literature retrieval. For more fine-grained results, some applications permit users to search with natural language queries and identify specific biomedical relationships.[38]

On 16 March 2020, the National Library of Medicine and others launched the COVID-19 Open Research Dataset (CORD-19) to enable text mining of the current literature on the novel virus. The dataset is hosted by the Semantic Scholar project[39] of the Allen Institute for AI.[40] Other participants include Google, Microsoft Research, the Center for Security and Emerging Technology, and the Chan Zuckerberg Initiative.[41]

The following table lists a selection of biomedical text corpora and their contents. These items include annotated corpora, sources of biomedical research literature, and resources frequently used as vocabulary and/or ontology references, such as MeSH. Items marked "Yes" under "Freely Available" can be downloaded from a publicly accessible location.

Several groups have developed sets of biomedical vocabulary mapped to vectors of real numbers, known as word vectors or word embeddings. Sources of pre-trained embeddings specific to biomedical vocabulary are listed in the table below. The majority are results of the word2vec model developed by Mikolov et al.[86] or variants of word2vec.

Text mining applications in the biomedical field include computational approaches to assist with studies in protein docking,[91] protein interactions,[92][93] and protein-disease associations.[94] Text mining techniques have several advantages over traditional manual curation for identifying associations. Text mining algorithms can identify and extract information from a vast amount of literature, and do so more efficiently than manual curation. This includes the integration of data from different sources, including literature, databases, and experimental results. These algorithms have transformed the process of identifying and prioritizing novel genes and gene-disease associations that had previously been overlooked.[95] These methods are the foundation for systematic searches of overlooked scientific and biomedical literature that could carry significant associations between research findings. The combination of information can generate new discoveries and hypotheses, especially with the integration of datasets; the quality of a database is as important as its size. Promising text mining methods such as iProLINK (integrated Protein Literature Information and Knowledge) have been developed to curate data sources that can aid text mining research in areas of bibliography mapping, annotation extraction, protein named entity recognition, and protein ontology development.[96] Curated databases such as UniProt can accelerate the accessibility of targeted information not only for genetic sequences, but also for literature and phylogeny.
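A toy illustration of how word embeddings of the kind mentioned above are produced (a sketch assuming the gensim library; real biomedical embeddings are trained on large corpora such as PubMed abstracts, and the tokenized sentences here are invented):

```python
# Train a tiny word2vec model and inspect the resulting vectors.
from gensim.models import Word2Vec

sentences = [
    ["brca1", "mutations", "increase", "breast", "cancer", "risk"],
    ["tp53", "mutations", "are", "common", "in", "many", "cancers"],
    ["aspirin", "reduces", "fever", "and", "pain"],
]

model = Word2Vec(sentences, vector_size=50, window=3, min_count=1, seed=0)
print(model.wv["brca1"][:5])                  # a 50-dimensional real vector
print(model.wv.similarity("brca1", "tp53"))   # cosine similarity of two terms
```

After training on a realistic corpus, similarity between gene or drug symbols reflects the contexts they share in the literature, which is what makes such vectors useful features for biomedical NER and relation discovery.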
Methods for determining the association of gene clusters obtained by microarray experiments with the biological context provided by the corresponding literature have been developed.[97] Automatic extraction of protein interactions[98] and associations of proteins with functional concepts (e.g. gene ontology terms) has been explored.[citation needed] The search engine PIE was developed to identify and return protein-protein interaction mentions from MEDLINE-indexed articles.[99] The extraction of kinetic parameters from text and of the subcellular location of proteins has also been addressed by information extraction and text mining technology.[citation needed]

Computational gene prioritization is an essential step in understanding the genetic basis of diseases, particularly within genetic linkage analysis. Text mining and other computational tools extract relevant information, including gene-disease associations, from numerous data sources, then apply different ranking algorithms to prioritize the genes based on their relevance to the specific disease.[100] Text mining and gene prioritization allow researchers to focus their efforts on the most promising candidates for further research. Computational tools for gene prioritization continue to be developed and analyzed. One group studied the performance of various text-mining techniques for disease gene prioritization, investigating different domain vocabularies, text representation schemes, and ranking algorithms in order to find the best approach for identifying disease-causing genes and to establish a benchmark.[101] An agricultural genomics group identified genes related to bovine reproductive traits using text mining, among other approaches.[102]

A text mining study assembled a collection of 709 core extracellular matrix proteins and associated proteins based on two databases: MatrixDB (matrixdb.univ-lyon1.fr) and UniProt. This set of proteins had a manageable size and a rich body of associated information, making it suitable for the application of text mining tools. The researchers conducted phrase-mining analysis to cross-examine individual extracellular matrix proteins across the biomedical literature concerned with six categories of cardiovascular diseases. They used a phrase-mining pipeline, Context-aware Semantic Online Analytical Processing (CaseOLAP),[103] and semantically scored all 709 proteins according to their Integrity, Popularity, and Distinctiveness using the CaseOLAP pipeline. The text mining study validated existing relationships and informed previously unrecognized biological processes in cardiovascular pathophysiology.[94]

Search engines designed to retrieve biomedical literature relevant to a user-provided query frequently rely upon text mining approaches. Publicly available tools specific for research literature include PubMed search, Europe PubMed Central search, GeneView,[104] and APSE.[105] Similarly, search engines and indexing systems specific for biomedical data have been developed, including DataMed[106] and OmicsDI.[107] Some search engines, such as Essie,[108] OncoSearch,[109] PubGene,[110][111] and GoPubMed,[112] were previously public but have since been discontinued, rendered obsolete, or integrated into commercial products.

Electronic medical records (EMRs) and electronic health records (EHRs) are collected by clinical staff in the course of diagnosis and treatment.
Though these records generally include structured components with predictable formats and data types, the remainder of the reports are often free text and difficult to search, leading to challenges with patient care.[113] Numerous complete systems and tools have been developed to analyse these free-text portions.[114] The MedLEE system was originally developed for analysis of chest radiology reports but was later extended to other report topics.[115] The clinical Text Analysis and Knowledge Extraction System, or cTAKES, annotates clinical text using a dictionary of concepts.[116] The CLAMP system offers similar functionality with a user-friendly interface.[117]

Computational frameworks have been developed to rapidly build tools for biomedical text mining tasks. SwellShark[118] is a framework for biomedical NER that requires no human-labeled data but does make use of resources for weak supervision (e.g., UMLS semantic types). The SparkText framework[119] uses Apache Spark data streaming, a NoSQL database, and basic machine learning methods to build predictive models from scientific articles. Some biomedical text mining and natural language processing tools are available through application programming interfaces, or APIs. NOBLE Coder performs concept recognition through an API.[120]

The following academic conferences and workshops host discussions and presentations on advances in biomedical text mining. Most publish proceedings. A variety of academic journals publishing manuscripts on biology and medicine include topics in text mining and natural language processing software. Some journals, including the Journal of the American Medical Informatics Association (JAMIA) and the Journal of Biomedical Informatics, are popular publications for these topics.
https://en.wikipedia.org/wiki/Biomedical_text_mining
"Flash Crowd" is a1973English-languagenovellabyscience fiction authorLarry Niven,[1]one of a series about the social consequence of inventing an instant, practically freedisplacement booth.[2] One consequence not foreseen by the builders of the system was that with the almost immediate reporting of newsworthy events, tens of thousands of people worldwide – along with criminals – wouldteleportto the scene of anything interesting, thus creating disorder and confusion. The plot centers around a television journalist who, after being fired for his inadvertent role in inciting a post-robbery riot in Los Angeles, seeks to independently investigate the teleportation system for the flaws in its design allowing for such spontaneous riots to occur. His investigation takes him to destinations and people around the world within the matter of less than 12 hours before he gets his chance to plead his case on television, and he encounters the wide-ranging effects of displacements upon aspects of human behavior such as settlement, crime, natural resources, agriculture, waste management and tourism. In various other books, for exampleRingworld, Niven suggests that easy transportation might be disruptive to traditional behavior and open the way for new forms of parties, spontaneous congregations, or shopping trips around the world. The central character inRingworld, celebrating his birthday, teleports across time-zones to "lengthen" his birthday multiple times (particularly notable since the first edition had the error of the character heading the wrong direction, increasing that edition's value). Niven's essay "Exercise in Speculation: The Theory and Practice of Teleportation" was published in the collectionAll the Myriad Ways[8]In it he discusses the ideas that underlie his teleportation stories. On theWorld Wide Web, a similar phenomenon can occur, when a web site catches the attention of a large number of people, and gets an unexpected and overloading surge of traffic. This usage was first coined by John Pettitt of Beyond.com in 1996.[citation needed]Multiple other terms for the phenomenon exist, often coming from the name of a particular prominent, high-traffic site whose normal base of viewers can constitute a flash crowd when directed to a less famous website. Notorious examples include the "Slashdot effect",[9]the "Instalanche" (when a smaller site gets links by the popular blogInstapundit), or a website being "Farked" orDrudged(where the target site is crashed due to the large number of hits in a short time).
https://en.wikipedia.org/wiki/Flash_Crowd
Social marketing intelligence is the method of extrapolating valuable information from social network interactions and data flows, enabling companies to launch new products and services into the market at greater speed and lower cost. This is still an area of active research; however, companies using social marketing intelligence have reported significant improvements in marketing campaigns.[citation needed]

Through social marketing intelligence, companies can identify the people who are most influential within their communities. These are the most connected people within any given social network. These people, sometimes called alpha users or hubs, as in small-world network theory, have considerable influence over the spread of information within their social network.[1]

Alpha users are key elements of any social network, managing the connectivity of the core members of the community. Similar to how viruses spread in nature, there is an initial starting point to communications in social networks, and the originators of such communications are alpha users. They tend to be highly connected users with exceptional influence over the other thought-leaders of a social network.

Before digital communications, it was only possible to isolate the most influential members of a community by interviewing every member and tracing their full communication patterns within the social network. Traditional fixed landline telephone and internet use did not give enough accuracy to pinpoint alpha users to a meaningful degree. With the advent of mobile phones, a personal digital communication channel became available to study. Early research by mathematicians at Xtract[1] in Finland produced models suggesting that mobile networks could indeed track the full communication and isolate the alpha users. Since then, several companies including Xtract have launched commercial tools to detect alpha users, usually using mobile operator billing and telecoms traffic data.

Engagement marketing campaigns attempt to use alpha users as spokespersons in marketing and advertising. The idea is that consumers will trust the opinion of a friend or known contact from a social network more than the random marketing and advertising messages of companies and brands. The desire is to achieve viral marketing effects by which the alpha users spread the messages further.

Alpha users were first briefly discussed in public in the book 3G Marketing in 2004.[2] The first industry article about alpha users was by Ahonen and Ahvenainen in Total Telecom in February 2005. The first telecoms conference where the alpha user concept was explained was the 3G Mobile World Congress in Tokyo in January 2005, and the topic was part of the strategy keynote address at the 3GSM World Congress in Cannes in February 2005. The first book to discuss alpha users at length was Communities Dominate Brands in 2005.[3]
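The notion of alpha users as the best-connected members of a network can be illustrated with a toy centrality computation (a sketch assuming the networkx library; commercial systems use operator billing and traffic data, as noted above, and far richer measures than degree):

```python
# Rank members of a small invented call graph by degree centrality.
import networkx as nx

calls = [("ann", "bob"), ("ann", "cat"), ("ann", "dan"),
         ("bob", "cat"), ("dan", "eve"), ("eve", "fay")]
G = nx.Graph(calls)

centrality = nx.degree_centrality(G)
alpha_users = sorted(centrality, key=centrality.get, reverse=True)[:2]
print(alpha_users)  # e.g. ['ann', 'bob'] -- the best-connected members
```

In a viral marketing setting, messages seeded at such hubs are expected to reach more of the network in fewer hops than messages seeded at random members.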
https://en.wikipedia.org/wiki/Social_marketing_intelligence#Alpha_users
A document type definition (DTD) is a specification file that contains a set of markup declarations that define a document type for an SGML-family markup language (GML, SGML, XML, HTML). The DTD specification file can be used to validate documents.

A DTD defines the valid building blocks of an XML document. It defines the document structure with a list of validated elements and attributes. A DTD can be declared inline inside an XML document, or as an external reference.[1]

A namespace-aware version of DTDs is being developed as Part 9 of ISO DSDL. DTDs persist in applications that need special publishing characters, such as the XML and HTML Character Entity References, which derive from larger sets defined as part of the ISO SGML standard effort. XML uses a subset of SGML DTD. As of 2009, newer XML namespace-aware schema languages (such as W3C XML Schema and ISO RELAX NG) have largely superseded DTDs as a better way to validate XML structure.

A DTD is associated with an XML or SGML document by means of a document type declaration (DOCTYPE). The DOCTYPE appears in the syntactic fragment doctypedecl near the start of an XML document.[2] The declaration establishes that the document is an instance of the type defined by the referenced DTD.

DOCTYPEs make two sorts of declarations: the declarations in the internal subset form part of the DOCTYPE in the document itself, while the declarations in the external subset are located in a separate text file. The external subset may be referenced via a public identifier and/or a system identifier. Programs for reading documents may not be required to read the external subset.

Any valid SGML or XML document that references an external subset in its DTD, or whose body contains references to parsed external entities declared in its DTD (including those declared within its internal subset), may only be partially parsed but cannot be fully validated by validating SGML or XML parsers in their standalone mode (this means that these validating parsers do not attempt to retrieve these external entities, and their replacement text is not accessible). However, such documents are still fully parsable in the non-standalone mode of validating parsers, which signal an error if they cannot locate these external entities via their specified public identifier (FPI) or system identifier (a URI), or if these entities are inaccessible. (Notations declared in the DTD also reference external entities, but these unparsed entities are not needed for the validation of documents in the standalone mode of these parsers: the validation of all external entities referenced by notations is left to the application using the SGML or XML parser.) Non-validating parsers may eventually attempt to locate these external entities in the non-standalone mode (by partially interpreting the DTD only to resolve their declared parsable entities), but they do not validate the content model of these documents.

All HTML 4.01 documents conform to one of three SGML DTDs. The public identifiers of these DTDs are constant and are as follows: "-//W3C//DTD HTML 4.01//EN" (Strict), "-//W3C//DTD HTML 4.01 Transitional//EN", and "-//W3C//DTD HTML 4.01 Frameset//EN". The system identifiers of these DTDs, if present in the DOCTYPE, are URI references. A system identifier usually points to a specific set of declarations in a resolvable location. SGML allows mapping public identifiers to system identifiers in catalogs that are optionally available to the URI resolvers used by document parsing software. An example of a DOCTYPE containing both public and system identifiers is reconstructed below.
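For instance, the standard XHTML 1.0 Strict document type declaration pairs a public identifier (an FPI) with a system identifier (a URI):

```xml
<!DOCTYPE html PUBLIC
  "-//W3C//DTD XHTML 1.0 Strict//EN"
  "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
```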
This DOCTYPE can only appear after the optional XML declaration, and before the document body, if the document syntax conforms to XML; this includes XHTML documents. An additional internal subset can also be provided after the external subset. Alternatively, only the internal subset may be provided. Finally, the document type definition may include no subset at all; in that case, it just specifies that the document has a single top-level element (this is an implicit requirement for all valid XML and HTML documents, but not for document fragments or for all SGML documents, whose top-level elements may be different from the implied root element), and it indicates the type name of the root element.

DTDs describe the structure of a class of documents via element and attribute-list declarations. Element declarations name the allowable set of elements within the document, and specify whether and how declared elements and runs of character data may be contained within each element. Attribute-list declarations name the allowable set of attributes for each declared element, including the type of each attribute value, if not an explicit set of valid values. DTD markup declarations declare which element types, attribute lists, entities, and notations are allowed in the structure of the corresponding class of XML documents.[3]

An element type declaration defines an element and its possible content. A valid XML document contains only elements that are defined in the DTD. Various keywords and characters specify an element's content: EMPTY (no content), ANY (any content), #PCDATA (parsed character data), and parenthesized content models using the operators "?", "*", "+", "," and "|". Element type declarations are ignored by non-validating SGML and XML parsers (in which case any elements are accepted in any order, and in any number of occurrences in the parsed document), but these declarations are still checked for form and validity.

An attribute list specifies, for a given element type, the list of all possible attributes associated with that type. For each possible attribute, it contains the attribute's name, its type, and its default behaviour. Attribute types supported by both SGML and XML include CDATA, ID, IDREF and IDREFS, NMTOKEN and NMTOKENS, ENTITY and ENTITIES, NOTATION, and enumerated lists of values. A default value can define whether an attribute must occur (#REQUIRED) or not (#IMPLIED), whether it has a fixed value (#FIXED), or which value should be used as a default value ("…") in case the given attribute is left out in an XML tag. Attribute list declarations are ignored by non-validating SGML and XML parsers (in which case any attribute is accepted within all elements of the parsed document), but these declarations are still checked for well-formedness and validity.

An entity is similar to a macro. The entity declaration assigns it a value that is retained throughout the document. A common use is to have a name more recognizable than a numeric character reference for an unfamiliar character.[5] Entities improve the legibility of an XML text. In general, there are two types: internal and external. Internal entities may be defined in any order, as long as each one is not referenced and parsed, in the DTD or in the body of the document, before it has been defined in parsing order: it is valid to include a reference to a still-undefined entity within the content of a parsed entity, but it is invalid to include anywhere else a named entity reference before this entity has been fully defined, including all other internal entities referenced in its defined content (this also prevents circular or recursive definitions of internal entities). An example of these declarations, gathered in the internal DTD subset of an SGML document, follows.
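A small reconstructed SGML document gathering the kinds of declarations just described (the entity names follow the discussion below; their values are illustrative):

```xml
<!DOCTYPE sgml [
  <!ELEMENT sgml ANY>              <!-- element type declaration -->
  <!ATTLIST sgml
    version CDATA #IMPLIED>        <!-- attribute-list declaration -->
  <!ENTITY author "John Doe">      <!-- internal general entity -->
  <!ENTITY signature "&author;.">  <!-- refers to another entity -->
]>
<sgml version="1.0">&signature;</sgml>
```

A parser that substitutes the entity references reads this instance as if its body were <sgml version="1.0">John Doe.</sgml>.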
The reference to the "author" internal entity is not substituted in the replacement text of the "signature" internal entity. Instead, it is replaced only when the "signature" entity reference is parsed within the content of the "sgml" element, and only by validating parsers (non-validating parsers do not substitute entity references occurring within the contents of elements or within attribute values in the body of the document). This is possible because the replacement text specified in internal entity definitions permits a distinction between parameter entity references (which are introduced by the "%" character and whose replacement applies to the parsed DTD contents) and general entity references (which are introduced by the "&" character and whose replacement is delayed until they are effectively parsed and validated). The "%" character for introducing parameter entity references in the DTD loses its special role outside the DTD, where it becomes a literal character. The references to predefined character entities, however, are substituted wherever they occur, without needing a validating parser (they are only introduced by the "&" character).

Notations are used in SGML or XML. They provide a complete reference to unparsed external entities whose interpretation is left to the application (which interprets them directly or retrieves the external entity itself), by assigning them a simple name which is usable in the body of the document. For example, notations may be used to reference non-XML data in an XML 1.1 document, such as annotating SVG images to associate them with a specific renderer (see the reconstruction after this paragraph). Such a declaration assigns a type to external images of this kind and associates it with a notation name such as "type-image-svg". However, notation names usually follow a naming convention that is specific to the application generating or using the notation: notations are interpreted as additional metadata whose effective content is an external entity and either a PUBLIC FPI, registered in the catalogs used by XML or SGML parsers, or a SYSTEM URI, whose interpretation is application-dependent (here a MIME type, interpreted as a relative URI, but it could be an absolute URI to a specific renderer, or a URN indicating an OS-specific object identifier such as a UUID). The declared notation name must be unique within the whole document type declaration, i.e. in the external subset as well as the internal subset, at least for conformance with XML.[6][7]

Notations can be associated with unparsed external entities included in the body of the SGML or XML document. The PUBLIC or SYSTEM parameter of these external entities specifies the FPI and/or the URI where the unparsed data of the external entity is located, and the additional NDATA parameter of these defined entities specifies the additional notation (i.e., effectively the MIME type here). Within the body of the SGML document, these referenced external entities (whose name is specified between "&" and ";") are not replaced like usual named entities (defined with a CDATA value), but are left as distinct unparsed tokens that may be used either as the value of an element attribute (as in the reconstruction below) or within the element contents, provided that either the DTD allows such external entities in the declared content type of elements or in the declared type of attributes (here the ENTITY type for the data attribute), or the SGML parser is not validating the content.
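A reconstruction of the kind of notation and unparsed-entity declarations described here (the notation and entity names follow the surrounding discussion; the file names are illustrative):

```xml
<!NOTATION type-image-svg SYSTEM "image/svg+xml">
<!NOTATION type-image-gif SYSTEM "image/gif">
<!ENTITY example1SVG SYSTEM "example1.svg" NDATA type-image-svg>
<!ENTITY example1GIF SYSTEM "example1.gif" NDATA type-image-gif>
<!ATTLIST img data ENTITY #IMPLIED>
```

An attribute value such as data="example1GIF" then names the unparsed entity without its content ever being parsed.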
Notations may also be associated directly with elements as additional metadata, without associating them with another external entity, by giving their names as possible values of some additional attributes (also declared in the DTD within the <!ATTLIST ...> declaration of the element); see the reconstruction after this paragraph. The example discussed here uses a notation named "type-image-svg" that references the standard public FPI and the system identifier (the standard URI) of an SVG 1.1 document, instead of specifying just a system identifier as in the first example (which was a relative URI interpreted locally as a MIME type). This notation is referenced directly within the unparsed "type" attribute of the "img" element, but its content is not retrieved. It also declares another notation for a vendor-specific application, to annotate the "sgml" root element in the document. In both cases, the declared notation name is used directly in a declared "type" attribute, whose content is specified in the DTD with the "NOTATION" attribute type (this "type" attribute is declared for the "sgml" element, as well as for the "img" element). However, the "title" attribute of the "img" element specifies the internal entity "example1SVGTitle", whose declaration does not define a notation, so it is parsed by validating parsers and the entity replacement text is "Title of example1.svg". The content of the "img" element references another external entity "example1SVG" whose declaration also does not define a notation, so it too is parsed by validating parsers and the entity replacement text is located by its defined SYSTEM identifier "example1.svg" (also interpreted as a relative URI). The effective content of the "img" element will be the content of this second external resource. The difference from the GIF image is that the SVG image is parsed within the SGML document, according to the declarations in the DTD, whereas the GIF image is just referenced as an opaque external object (which is not parsable with SGML) via its "data" attribute (whose value type is an opaque ENTITY).

Only one notation name may be specified in the value of ENTITY attributes (there is no support in SGML, XML 1.0 or XML 1.1 for multiple notation names in the same declared external ENTITY, so separate attributes are needed). However, multiple external entities may be referenced (in a space-separated list of names) in attributes declared with type ENTITIES, where each named external entity is also declared with its own notation.

Notations are also completely opaque for XML and SGML parsers, so they are not differentiated by the type of the external entity that they may reference (for these parsers, they just have a unique name associated with a public identifier (an FPI) and/or a system identifier (a URI)).

Some applications (but not XML or SGML parsers themselves) also allow referencing notations indirectly by naming them in the "URN:name" value of a standard CDATA attribute, everywhere a URI can be specified. However, this behaviour is application-specific and requires that the application maintain a catalog of known URNs to resolve them into the notations that have been parsed in a standard SGML or XML parser. This use allows notations to be defined only in a DTD stored as an external entity and referenced only as the external subset of documents, and allows these documents to remain compatible with validating XML or SGML parsers that have no direct support for notations.
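A compressed reconstruction of the markup this passage walks through (the SVG 1.1 public and system identifiers are the standard ones; the vendor notation, entity values and instance markup are illustrative):

```xml
<!NOTATION type-image-svg PUBLIC
  "-//W3C//DTD SVG 1.1//EN"
  "http://www.w3.org/Graphics/SVG/1.1/DTD/svg11.dtd">
<!NOTATION type-vendor-specific SYSTEM "application/x-vendor-app">
<!ENTITY example1SVGTitle "Title of example1.svg">  <!-- parsed internal entity -->
<!ENTITY example1SVG SYSTEM "example1.svg">         <!-- parsed external entity -->
<!ATTLIST sgml type NOTATION (type-vendor-specific) #IMPLIED>
<!ATTLIST img
  type  NOTATION (type-image-svg | type-image-gif) #IMPLIED
  title CDATA  #IMPLIED
  data  ENTITY #IMPLIED>
```

In an instance, <img type="type-image-svg" title="&example1SVGTitle;" data="example1GIF">&example1SVG;</img> uses the notation name in the unparsed "type" attribute, expands the parsed entities, and names the unparsed GIF entity in the ENTITY-typed "data" attribute.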
Notations are not used in HTML, or in basic profiles for XHTML and SVG. Even in validating SGML or XML 1.0 or XML 1.1 parsers, the external entities referenced by an FPI and/or URI in declared notations are not retrieved automatically by the parsers themselves. Instead, these parsers just provide to the application the parsed FPI and/or URI associated with the notations found in the parsed SGML or XML document, with a facility for a dictionary containing all notation names declared in the DTD; these validating parsers also check the uniqueness of notation name declarations, and report a validation error if some notation names are used anywhere in the DTD or in the document body but not declared.

The XML DTD syntax is one of several XML schema languages. However, many of the schema languages do not fully replace the XML DTD. Notably, the XML DTD allows defining entities and notations that have no direct equivalents in DTD-less XML (because internal entities and parsable external entities are not part of XML schema languages, and because other unparsed external entities and notations have no simple equivalent mappings in most XML schema languages). Most XML schema languages are only replacements for element declarations and attribute list declarations, in such a way that it becomes possible to parse XML documents with non-validating XML parsers (if the only purpose of the external DTD subset was to define the schema). In addition, documents for these XML schema languages must be parsed separately, so validating the schema of XML documents in pure standalone mode is not really possible with these languages: the document type declaration remains necessary for at least identifying (with an XML catalog) the schema used in the parsed XML document that is validated in another language.

A common misconception holds that a non-validating XML parser does not have to read document type declarations, when in fact the document type declarations must still be scanned for correct syntax as well as validity of declarations, and the parser must still parse all entity declarations in the internal subset and substitute the replacement texts of internal entities occurring anywhere in the document type declaration or in the document body. A non-validating parser may, however, elect not to read parsable external entities (including the external subset), and does not have to honor the content model restrictions defined in element declarations and in attribute list declarations.

If the XML document depends on parsable external entities (including the specified external subset, or parsable external entities declared in the internal subset), it should assert standalone="no" in its XML declaration. The validating DTD may be identified by using XML catalogs to retrieve its specified external subset. If the XML document type declaration includes any SYSTEM identifier for the external subset, it cannot be safely processed as standalone: the URI should be retrieved, otherwise there may be unknown named character entities whose definition may be needed to correctly parse the effective XML syntax in the internal subset or in the document body (the XML syntax parsing is normally performed after the substitution of all named entities, excluding the five entities that are predefined in XML, which are implicitly substituted after parsing the XML document into lexical tokens). In the example below, the XML document is declared with standalone="no" because it has an external subset in its document type declaration:
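```xml
<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<!DOCTYPE people_list SYSTEM "example.dtd">
<people_list/>
```

(This declaration is reconstructed to match the "people_list" example developed in the next passage.)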
If it just includes any PUBLIC identifier, it may be processed as standalone, if the XML processor knows this PUBLIC identifier in its local catalog, from where it can retrieve an associated DTD entity.

An example of a very simple external XML DTD to describe the schema of a list of persons, together with an XML file that uses and conforms to it, is reconstructed below. The DTD is referenced as an external subset, via the SYSTEM specifier and a URI; it assumes that we can identify the DTD with the relative URI reference "example.dtd". The "people_list" after "!DOCTYPE" tells us that the root element, i.e. the first element defined in the DTD, is called "people_list". One can render this in an XML-enabled browser (such as Internet Explorer or Mozilla Firefox) by pasting and saving the DTD component to a text file named example.dtd and the XML file to a differently-named text file, and opening the XML file with the browser. The files should both be saved in the same directory. However, many browsers do not check that an XML document conforms to the rules in the DTD; they are only required to check that the DTD is syntactically correct. For security reasons, they may also choose not to read the external DTD. The same DTD can also be embedded directly in the XML document itself as an internal subset, by encasing it within [square brackets] in the document type declaration, in which case the document no longer depends on external entities and can be processed in standalone mode.

Alternatives to DTDs (for specifying schemas) are available. An XML DTD can be used to create a denial-of-service (DoS) attack by defining nested entities that expand exponentially, or by sending the XML parser to an external resource that never returns.[10] For this reason, the .NET Framework provides a property that allows prohibiting or skipping DTD parsing,[10] and recent versions of Microsoft Office applications (Microsoft Office 2010 and higher) refuse to open XML files that contain DTD declarations.
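A reconstruction of that example follows; the element names are those the text cites, while the sample person data is illustrative. First, the external DTD (example.dtd), read line by line: people_list holds zero or more person elements; a person holds a name followed by optional fields; and each field contains parsed character data.

```xml
<!ELEMENT people_list (person*)>
<!ELEMENT person (name, birthdate?, gender?, socialsecuritynumber?)>
<!ELEMENT name (#PCDATA)>
<!ELEMENT birthdate (#PCDATA)>
<!ELEMENT gender (#PCDATA)>
<!ELEMENT socialsecuritynumber (#PCDATA)>
```

A conforming XML file that references this DTD as its external subset:

```xml
<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<!DOCTYPE people_list SYSTEM "example.dtd">
<people_list>
  <person>
    <name>Fred Bloggs</name>
    <birthdate>2008-11-27</birthdate>
    <gender>Male</gender>
  </person>
</people_list>
```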
https://en.wikipedia.org/wiki/Document_type_definition
The term boundary tone refers to a rise or fall in pitch that occurs in speech at the end of a sentence or other utterance or, if a sentence is divided into two or more intonational phrases, at the end of each intonational phrase. It can also refer to a low or high intonational tone at the beginning of an utterance or intonational phrase.

The term was first introduced in a PhD thesis on English intonation by Mark Liberman in 1975, but without being developed further.[1] It was taken up again in 1980 in another PhD thesis on English intonation by Janet Pierrehumbert.[2] In Pierrehumbert's model, which later developed into the ToBI system of intonational transcription, every intonational phrase is marked as ending in a boundary tone, written either H% when the speaker's voice rises up or remains high, or L% when it falls or remains low. In modern intonational studies the term "boundary tone" replaces the notion of "terminal junctures" (falling #, rising //, and level /) used in earlier American studies of intonation.[3]

Pierrehumbert gives the example of the sentence This is my sister Mary. This can be pronounced in two ways: either as a single intonational phrase with a single high pitch on the first syllable of Mary (L L L L L H L), or as two intonational phrases with a high pitch both on sister and on Mary (L L L H L H L). If it is pronounced the second way, the words sister and Mary both have a falling intonation, and each one is transcribed by Pierrehumbert as H* L−L%.[4] Here the asterisk (*) indicates a pitch accent, the hyphen (−) indicates a phrase accent, which fills the interval between the last pitch accent and the final boundary tone, and the percent symbol (%) indicates the boundary tone itself.[5]

In another example, in response to the question "What about Anna? Who did she come with?", a speaker may reply Anna came with Manny. Again there are two possible pronunciations: the speaker can either say this as a single intonational phrase with a single high pitch on Manny (L L L L H L), or as two intonational phrases with one high pitch on the first syllable of Anna and another on the first syllable of Manny (H L L L H L). If the sentence is pronounced in the second way, because the word Anna is the topic of the sentence and does not give new information, it will have a slight rise in pitch on the second syllable. In this case it is transcribed by Pierrehumbert as H* L−H%.[6]

A boundary tone can also begin a sentence or intonational phrase. For example, the phrase Another orange would usually be pronounced with a low pitch on the first syllable. However, it can sometimes be pronounced with a high pitch on the vowel A-. Pierrehumbert marks this high pitch also with H%.[7] (A low boundary tone at the beginning of an utterance is usually not marked by Pierrehumbert.)

Because of its simplicity compared with previous attempts at transcribing English intonation, Pierrehumbert's model has been influential[8] and has been successfully adapted to several other languages, for example Persian,[9] German,[10] and Dutch.[11] Some analyses use a larger number of boundary tones than L% and H%; for example, for Dutch, Gussenhoven uses L%, H%, and % (no boundary tone) at the end of an utterance, and %L, %H, and %HL at the beginning,[11] while for Italian, Frota and Prieto posit six boundary tones, written L%, H%, LH%, HL%, L!H%, and H!H% (where !H represents a downstepped high tone, i.e.
one slightly lower in pitch than the previous one).[12]

A rising boundary tone can often be heard internally in a sentence in some languages, for example to mark a topic,[13] to mark off items in a list, or following the subordinate clause in a sentence such as "If you like it, please buy it".[14] (See further: Chichewa tones#Boundary tones.)

Boundary tones are also used to mark questions in many languages. For example, in Chichewa, a yes–no question may be indicated either by a rising tone on the final syllable, or by a high-low falling tone (e.g. mwalandirâ? "have you received it?").[15] In Luganda, a related language spoken in Uganda, on the contrary, a yes–no question is indicated by a low tone on the final syllable (e.g. ssóméró "it is a school" vs. ssóméro "is it a school?").[16] (See Chichewa tones and Luganda tones.)

A corpus-based study of yes–no questions in American English found that the great majority of them (approximately 90%) ended in a high boundary tone (H%), most frequently (80%) using a "low-rise" final contour transcribed L*H-H%. The next most common contour is H*H-H%, which is described as "high-rise". A typical low-rise question transcribed in the study is And do you still work for a veterinarian?, with the syllable ve- marked as L* followed by a smooth rise to a high pitch at the end.[17] Less commonly, a yes–no question will end in a "high-fall", for example Is it treatable?, in which the word treatable is marked H*L-L%.[18]
https://en.wikipedia.org/wiki/Boundary_tone_(linguistics)
The hysteron proteron (from the Greek: ὕστερον πρότερον, hýsteron próteron, "later earlier") is a rhetorical device. It occurs when the first key word of the idea refers to something that happens temporally later than the second key word. The goal is to call attention to the more important idea by placing it first.[1]

The standard example comes from the Aeneid of Virgil: "Moriamur, et in media arma ruamus" ("Let us die, and charge into the thick of the fight"; ii. 353).[2] An example of hysteron proteron encountered in everyday life is the common reference to putting on one's "shoes and socks", rather than "socks and shoes". By this deliberate reversal, hysteron proteron draws attention to the important point, giving it primacy. Hysteron proteron is a form of hyperbaton, which describes general rearrangements of the sentence.[3] It can also be defined as a figure of speech consisting of the reversal of a natural or rational order (as in "then came the thunder and the lightning").[4]

An example from the Quran that demonstrates hysteron proteron is verses (ayat) 89–90 of Sura 21, which say that God granted Zechariah's prayer for a son even though Zechariah was very old and his wife was sterile: We granted his prayer and gave him John, and we made his wife fertile for him. A more conventional phrasing would be: "We granted his prayer; we made his wife fertile for him; and [having done so] we gave him John." The reversal of the expected sequence (hysteron proteron) in the verse suggests immediacy: Zechariah's prayer was granted without any delay at all, so much so that the detail itself, "We made his wife fertile for him," was not allowed to intervene between the prayer and its acceptance.[5]
https://en.wikipedia.org/wiki/Hysteron_proteron
Deadloch is an Australian black comedy crime mystery television series that premiered on Amazon Prime Video on 2 June 2023. Created by Kate McCartney and Kate McLennan, the series is set in Deadloch, a fictional town in Tasmania, and stars Kate Box, Madeleine Sami, Alicia Gardiner, and Nina Oyama. Deadloch was produced by Amazon Studios. The series was renewed for a second season in July 2024.[1]

The beguilingly sleepy settlement of Deadloch, on Tasmania's coastline, is shaken when the body of a local man turns up on the beach. Two female detectives reluctantly take charge of the investigation together: the fastidious Senior Sergeant Dulcie Collins, and the brash and reckless Detective Eddie Redcliffe from Darwin, aided by overeager Constable Abby Matsuda and ditsy Officer Sven Alderman. The murder coincides with the town's annual "Winter Feastival", a celebration of local arts, cuisine and culture. The investigation forces Dulcie and Eddie to cope with each other's drastically opposite investigation styles as they discover secrets being hidden in a town struggling to disguise the deep rift that is slowly splitting it and the lives of its residents.[2][need quotation to verify]

Kate McCartney and Kate McLennan are the showrunners and producers of the series. "The Kates", as they are nicknamed, were inspired to write a comedy from a set-up similar to that of the UK series Broadchurch after they both watched it, so much so that the working title of the project was "Funny Broadchurch". Actress Nina Oyama told the Sydney Morning Herald: "The show is first and foremost a crime show, because of the way it's laid out, and the way people will keep returning to it will be for the crime-based and mystery-based reasons... But it's also very funny." There was also an intention in the production to subvert some of the typical genre tropes, and to reverse who are usually considered the victims in society. There is a sub-plot of a First Nations storyline around local teenagers played by Leonie Whyman and Kartanya Maynard.[6]

Deadloch was written by McCartney and McLennan along with Kim Wilson, Christian White, Anchuli Felicia King, Kirsty Fisher, and Madeleine Sami. Production on the series got underway in February 2022. Directors on the series included Ben Chessell, Gracie Otto, and Beck Cole. Production is by Andy Walker for Prime Video, Guesswork Television, and OK Great Productions, with Fiona McConaghy as co-producer. McCartney, McLennan, Kevin Whyte, and Tanya Phegan were executive producers.[7] The score was written by Amanda Brown.[8] The series was renewed for a second season on 8 July 2024.[1]

Filming took place in southern Tasmania, outside Hobart, around Cygnet, Snug and Kingston.[9] Filming for season 2 moved from Tasmania to the Northern Territory.[1]

The series premiered on Amazon Prime Video on 2 June 2023 with three episodes, with new episodes available weekly up to 7 July 2023.[10][11]

The series was received positively. On the review aggregator website Rotten Tomatoes, 100% of 22 critics' reviews are positive, with an average rating of 8.0/10.
The website's consensus reads: "An irreverent twist on the crime procedural, Deadloch's addictive mixture of mystery and mordant humor makes most of its corpse-strewn competition look comparably stiff."[12] In a favourable review, Luke Buckmaster of The Guardian gave the series four out of five stars and praised creators Kate McCartney and Kate McLennan: "They are moving into the next phase of their career, with Deadloch, a narratively richer series that's dark and dramatic, and often also very funny."[13] In a positive review, Pemi Bakshi of Grazia magazine said that "the eight-part series blends humour and commentary to bring us a wickedly entertaining take on the detective show genre."[14] In a somewhat more mixed review for the website ScreenHub, Stephen A. Russell gave a rating of three stars out of five and commented that "While Deadloch's far from dead on arrival, its enervating lack of structural ambition did kill a lot of the buzz I had going in."[15]
https://en.wikipedia.org/wiki/Deadloch
The Hilbert–Huang transform (HHT) is a way to decompose a signal into so-called intrinsic mode functions (IMF) along with a trend, and obtain instantaneous frequency data. It is designed to work well for data that is nonstationary and nonlinear.

The Hilbert–Huang transform (HHT), a NASA-designated name,[1] was proposed by Norden E. Huang. It is the result of the empirical mode decomposition (EMD) and the Hilbert spectral analysis (HSA). The HHT uses the EMD method to decompose a signal into so-called intrinsic mode functions (IMF) with a trend, and applies the HSA method to the IMFs to obtain instantaneous frequency data. Since the signal is decomposed in the time domain and the length of the IMFs is the same as that of the original signal, HHT preserves the characteristics of the varying frequency. This is an important advantage of HHT, since a real-world signal usually has multiple causes happening in different time intervals. The HHT provides a new method of analyzing nonstationary and nonlinear time series data.

The fundamental part of the HHT is the empirical mode decomposition (EMD) method. Breaking down signals into various components, EMD can be compared with other analysis methods such as the Fourier transform and the wavelet transform. Using the EMD method, any complicated data set can be decomposed into a finite and often small number of components. These components form a complete and nearly orthogonal basis for the original signal. In addition, they can be described as intrinsic mode functions (IMF).[2]

Because the first IMF usually carries the most oscillating (high-frequency) components, it can be rejected to remove high-frequency components (e.g., random noise).[3][4] EMD-based smoothing algorithms have been widely used in seismic data processing, where high-quality seismic records are highly demanded.[5][6]

Without leaving the time domain, EMD is adaptive and highly efficient.[7] Since the decomposition is based on the local characteristic time scale of the data, it can be applied to nonlinear and nonstationary processes.[7]

An intrinsic mode function (IMF) is defined as a function that satisfies two requirements: (1) in the whole data set, the number of extrema and the number of zero crossings must either be equal or differ at most by one; and (2) at any point, the mean value of the envelope defined by the local maxima and the envelope defined by the local minima is zero. It represents a generally simple oscillatory mode as a counterpart to the simple harmonic function. By definition, an IMF is any function with the same number of extrema and zero crossings, whose envelopes are symmetric with respect to zero.[7] This definition guarantees a well-behaved Hilbert transform of the IMF.

Hilbert spectral analysis (HSA) is a method for examining each IMF's instantaneous frequency as a function of time. The final result is a frequency-time distribution of signal amplitude (or energy), designated as the Hilbert spectrum, which permits the identification of localized features. The IMF amplitude and frequency can vary with time, subject to the conditions above.

The empirical mode decomposition (EMD) method is a necessary step to reduce any given data into a collection of intrinsic mode functions (IMF) to which the Hilbert spectral analysis can be applied. An IMF represents a simple oscillatory mode as a counterpart to the simple harmonic function, but it is much more general: instead of constant amplitude and frequency, as in a simple harmonic component, an IMF can have variable amplitude and frequency along the time axis.

The procedure of extracting an IMF is called sifting. The sifting process begins by identifying all the local extrema in the data, connecting all the local maxima (e.g. by a cubic spline) to form the upper envelope, and repeating the procedure for the local minima to produce the lower envelope. The upper and lower envelopes should cover all the data between them. Their mean is m1.
The difference between the data and m1 is the first component h1:

h1 = X(t) − m1.

Ideally, h1 should satisfy the definition of an IMF, since the construction of h1 described above should have made it symmetric, with all maxima positive and all minima negative. After the first round of sifting, a crest may become a local maximum. New extrema generated in this way actually reveal the proper modes lost in the initial examination. In the subsequent sifting process, h1 can only be treated as a proto-IMF. In the next step, h1 is treated as the data:

h11 = h1 − m11.

After repeated sifting, up to k times, h1k becomes an IMF, that is

h1k = h1(k−1) − m1k.

Then h1k is designated as the first IMF component of the data:

c1 = h1k.

The stoppage criterion determines the number of sifting steps needed to produce an IMF. Four stoppage criteria exist. The first, proposed by Huang et al. (1998), is similar to the Cauchy convergence test: a normalized sum of squared differences between consecutive sifting results,

SD = Σt |h1(k−1)(t) − h1k(t)|² / h1(k−1)²(t),

is computed, and sifting stops when SD falls below a preset threshold. The second criterion is based on the so-called S-number, defined as the number of consecutive siftings for which the number of zero-crossings and extrema are equal or at most differ by one. Specifically, an S-number is pre-selected, and the sifting process stops only if, for S consecutive siftings, the numbers of zero-crossings and extrema stay the same and are equal or at most differ by one. The third, the threshold method proposed by Rilling, Flandrin and Gonçalvés, sets two threshold values to guarantee globally small fluctuations while taking into account locally large excursions.[8] The fourth, the energy difference tracking method proposed by Cheng, Yu and Yang, uses the assumption that the original signal is a composition of orthogonal signals and calculates the energy based on that assumption; if the result of EMD is not an orthogonal basis of the original signal, the amount of energy will differ from the original energy.[9]

Once a stoppage criterion is selected, the first IMF, c1, can be obtained. Overall, c1 should contain the finest-scale or shortest-period component of the signal. We can then separate c1 from the rest of the data by

X(t) − c1 = r1.

Since the residue, r1, still contains longer-period variations in the data, it is treated as the new data and subjected to the same sifting process as described above. This procedure can be repeated for all the subsequent rj's, and the result is

r1 − c2 = r2, …, r(n−1) − cn = rn.

The sifting process finally stops when the residue, rn, becomes a monotonic function from which no more IMFs can be extracted. From the above equations, we can deduce that

X(t) = Σ(j=1..n) cj + rn.

Thus, a decomposition of the data into n empirical modes is achieved. The components of the EMD are usually physically meaningful, for the characteristic scales are defined by the physical data. Flandrin et al. (2003) and Wu and Huang (2004) have shown that the EMD is equivalent to a dyadic filter bank.[6][10]

Having obtained the intrinsic mode function components, the instantaneous frequency can be computed using the Hilbert transform. After performing the Hilbert transform on each IMF component, the original data can be expressed as the real part of a sum of amplitude- and phase-modulated components:

X(t) = Re Σ(j=1..n) aj(t) exp(i ∫ ωj(t) dt).

In the above examples, all signals are one-dimensional; in the case of two-dimensional signals, the Hilbert–Huang transform can be applied for image and video processing.
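A compact sketch of the sifting loop just described, assuming NumPy and SciPy (production implementations, such as the PyEMD package, add boundary handling and the stoppage criteria discussed above; the test signal is illustrative):

```python
import numpy as np
from scipy.interpolate import CubicSpline
from scipy.signal import argrelextrema

def sift_once(x, t):
    """One sifting step: subtract the mean of the two spline envelopes."""
    maxima = argrelextrema(x, np.greater)[0]
    minima = argrelextrema(x, np.less)[0]
    upper = CubicSpline(t[maxima], x[maxima])(t)  # upper envelope
    lower = CubicSpline(t[minima], x[minima])(t)  # lower envelope
    return x - (upper + lower) / 2                # h = x - m

t = np.linspace(0, 1, 1000)
x = np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 40 * t)

h = x
for _ in range(10):     # repeated sifting toward the first IMF
    h = sift_once(h, t)
c1, r1 = h, x - h       # first IMF c1 and the residue r1 = X(t) - c1
```

On this two-tone signal, c1 approximates the fast 40 Hz oscillation and r1 the slower 5 Hz component; repeating the whole procedure on r1 extracts the remaining modes.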
Chen and Feng [2003] proposed a technique to improve the HHT procedure.[28] The authors noted that the EMD is limited in distinguishing different components in narrow-band signals. The narrow band may contain either (a) components that have adjacent frequencies or (b) components that are not adjacent in frequency but for which one of the components has a much higher energy intensity than the others. The improved technique is based on beating-phenomenon waves.

Datig and Schlurmann [2004][29] conducted a comprehensive study on the performance and limitations of HHT, with particular application to irregular water waves. The authors carried out an extensive investigation into the spline interpolation, discussing the use of additional points, both forward and backward, to determine better envelopes. They also performed a parametric study on the proposed improvement and showed significant improvement in the overall EMD computations. The authors noted that HHT is capable of differentiating time-variant components in any given data; their study also showed that HHT was able to distinguish between riding and carrier waves.

Huang and Wu [2008][30] reviewed applications of the Hilbert–Huang transformation, emphasizing that the HHT theoretical basis is purely empirical and noting that "one of the main drawbacks of EMD is mode mixing". They also outlined outstanding open problems with HHT, which include end effects of the EMD, spline problems, best IMF selection, and uniqueness, although the ensemble EMD (EEMD) may help mitigate the latter.

The end effect occurs at the beginning and end of the signal because there is no point before the first data point or after the last data point to be considered together. In most cases, these endpoints are not extreme values of the signal. Therefore, during the EMD process of the HHT, the extreme envelope diverges at the endpoints and causes significant error. This error distorts the IMF waveform at its endpoints, and the error in the decomposition result accumulates with each repetition of the sifting process.[31] When computing the instantaneous frequency and amplitude of IMFs, the Fast Fourier Transform (FFT) may additionally introduce the Gibbs phenomenon and frequency leakage, leading to information loss.

Several methods have been proposed to solve the end effect in HHT. One family of methods leverages the inherent variation trend of the signal to extend it, resulting in extensions that closely resemble the characteristics of the original data. Another designs and computes the needed parameters from the original signal to build a particular mathematical model, which then predicts the trend beyond the two endpoints.

The mode mixing problem happens during the EMD process. A straightforward implementation of the sifting procedure produces mode mixing due to IMF mode rectification: specific signals may not be separated into the same IMFs every time. This problem makes it hard to implement feature extraction, model training, and pattern recognition, since the feature is no longer fixed in one labeling index. The mode mixing problem can be avoided by including an intermittence test during the HHT process.[32]

The masking method[33] improves EMD by allowing the separation of components with similar frequencies: a known masking signal is added to the data before sifting and removed from the resulting components afterwards, with the optimal choice of masking amplitude depending on the frequencies involved. Overall, the masking method enhances EMD by providing a means to prevent mode mixing, improving the accuracy and applicability of EMD in signal analysis.

EEMD[34] adds finite-amplitude white noise to the original signal and then decomposes the result into IMFs using EMD.
The ensemble EMD (EEMD)[34] adds finite-amplitude white noise to the original signal and then decomposes the noisy signal into IMFs using EMD. The processing steps of EEMD are developed as follows: add a white noise series to the original data; decompose the data with added white noise into IMFs; repeat these two steps with a different white noise realization each time; and obtain the ensemble means of the corresponding IMFs as the final result. The effects of the decomposition using the EEMD are that the added white noise series cancel each other (or fill the whole scale space uniformly). The noise also enables the EMD method to be a truly dyadic filter bank for any data, which means that a signal of a similar scale in a noisy data set could be contained in one IMF component, significantly reducing the chance of mode mixing. This approach preserves the physical uniqueness of decomposition and represents a major improvement over the EMD method.
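A minimal sketch of the EEMD ensemble loop, assuming an `emd` routine like the earlier sketch; the ensemble size and noise amplitude are illustrative tuning parameters, and real implementations must align differing IMF counts across trials more carefully than the simple truncation used here.

```python
import numpy as np

def eemd(x, t, emd, n_ensemble=100, noise_std=0.2, seed=0):
    """Ensemble EMD: decompose many noisy copies of x and average the IMFs."""
    rng = np.random.default_rng(seed)
    sigma = noise_std * np.std(x)  # noise amplitude relative to the signal
    trials = []
    for _ in range(n_ensemble):
        noisy = x + rng.normal(0.0, sigma, size=x.shape)
        imfs, _ = emd(noisy, t)    # decompose each noisy realization
        trials.append(imfs)
    n_imfs = min(len(tr) for tr in trials)  # common IMF count across trials
    return [np.mean([tr[k] for tr in trials], axis=0) for k in range(n_imfs)]
```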
https://en.wikipedia.org/wiki/Empirical_mode_decomposition
In geometry, an incidence relation is a heterogeneous relation that captures the idea being expressed when phrases such as "a point lies on a line" or "a line is contained in a plane" are used. The most basic incidence relation is that between a point, P, and a line, l, sometimes denoted P I l. If P and l are incident, P I l, the pair (P, l) is called a flag. There are many expressions used in common language to describe incidence (for example, a line passes through a point, a point lies in a plane, etc.), but the term "incidence" is preferred because it does not have the additional connotations that these other terms have, and it can be used in a symmetric manner. Statements such as "line l1 intersects line l2" are also statements about incidence relations, but in this case, it is because this is a shorthand way of saying that "there exists a point P that is incident with both line l1 and line l2". When one type of object can be thought of as a set of the other type of object (viz., a plane is a set of points), then an incidence relation may be viewed as containment. Statements such as "any two lines in a plane meet" are called incidence propositions. This particular statement is true in a projective plane, though not true in the Euclidean plane, where lines may be parallel. Historically, projective geometry was developed in order to make the propositions of incidence true without exceptions, such as those caused by the existence of parallels. From the point of view of synthetic geometry, projective geometry should be developed using such propositions as axioms. This is most significant for projective planes, due to the universal validity of Desargues' theorem in higher dimensions. In contrast, the analytic approach is to define projective space based on linear algebra and utilizing homogeneous co-ordinates. The propositions of incidence are derived from the following basic result on vector spaces: given subspaces U and W of a (finite-dimensional) vector space V, the dimension of their intersection is dim U + dim W − dim(U + W). Bearing in mind that the geometric dimension of the projective space P(V) associated to V is dim V − 1 and that the geometric dimension of any subspace is positive, the basic proposition of incidence in this setting can take the form: linear subspaces L and M of projective space P meet provided dim L + dim M ≥ dim P.[1] The following sections are limited to projective planes defined over fields, often denoted by PG(2, F), where F is a field, or P2F. However these computations can be naturally extended to higher-dimensional projective spaces, and the field may be replaced by a division ring (or skewfield) provided that one pays attention to the fact that multiplication is not commutative in that case. Let V be the three-dimensional vector space defined over the field F. The projective plane P(V) = PG(2, F) consists of the one-dimensional vector subspaces of V, called points, and the two-dimensional vector subspaces of V, called lines. Incidence of a point and a line is given by containment of the one-dimensional subspace in the two-dimensional subspace. Fix a basis for V so that we may describe its vectors as coordinate triples (with respect to that basis). A one-dimensional vector subspace consists of a non-zero vector and all of its scalar multiples. The non-zero scalar multiples, written as coordinate triples, are the homogeneous coordinates of the given point, called point coordinates. With respect to this basis, the solution space of a single linear equation {(x, y, z) | ax + by + cz = 0} is a two-dimensional subspace of V, and hence a line of P(V).
This line may be denoted by line coordinates [a, b, c], which are also homogeneous coordinates since non-zero scalar multiples would give the same line. Other notations are also widely used. Point coordinates may be written as column vectors, (x, y, z)^T, with colons, (x : y : z), or with a subscript, (x, y, z)_P. Correspondingly, line coordinates may be written as row vectors, (a, b, c), with colons, [a : b : c], or with a subscript, (a, b, c)_L. Other variations are also possible. Given a point P = (x, y, z) and a line l = [a, b, c], written in terms of point and line coordinates, the point is incident with the line (often written as P I l) if and only if ax + by + cz = 0. This can be expressed in several other notations, but no matter what notation is employed, when the homogeneous coordinates of the point and line are just considered as ordered triples, their incidence is expressed as having their dot product equal 0. Let P1 and P2 be a pair of distinct points with homogeneous coordinates (x1, y1, z1) and (x2, y2, z2) respectively. These points determine a unique line l with an equation of the form ax + by + cz = 0, whose coefficients must satisfy the equations ax1 + by1 + cz1 = 0 and ax2 + by2 + cz2 = 0. In matrix form, this system of simultaneous linear equations (together with the equation for a generic point (x, y, z) of the line) can be expressed as {\displaystyle {\begin{pmatrix}x&y&z\\x_{1}&y_{1}&z_{1}\\x_{2}&y_{2}&z_{2}\end{pmatrix}}{\begin{pmatrix}a\\b\\c\end{pmatrix}}={\begin{pmatrix}0\\0\\0\end{pmatrix}}.} This system has a nontrivial solution if and only if the determinant {\displaystyle {\begin{vmatrix}x&y&z\\x_{1}&y_{1}&z_{1}\\x_{2}&y_{2}&z_{2}\end{vmatrix}}=0.} Expansion of this determinantal equation produces a homogeneous linear equation, which must be the equation of line l. Therefore, up to a common non-zero constant factor we have l = [a, b, c], where a = y1z2 − y2z1, b = x2z1 − x1z2, and c = x1y2 − x2y1. In terms of the scalar triple product notation for vectors, the equation of this line may be written as P ⋅ (P1 × P2) = 0, where P = (x, y, z) is a generic point. Points that are incident with the same line are said to be collinear. The set of all points incident with the same line is called a range. If P1 = (x1, y1, z1), P2 = (x2, y2, z2), and P3 = (x3, y3, z3), then these points are collinear if and only if {\displaystyle {\begin{vmatrix}x_{1}&y_{1}&z_{1}\\x_{2}&y_{2}&z_{2}\\x_{3}&y_{3}&z_{3}\end{vmatrix}}=0,} i.e., if and only if the determinant of the homogeneous coordinates of the points is equal to zero. Let l1 = [a1, b1, c1] and l2 = [a2, b2, c2] be a pair of distinct lines. Then the intersection of lines l1 and l2 is the point P = (x0, y0, z0) that is the simultaneous solution (up to a scalar factor) of the system of linear equations a1x + b1y + c1z = 0 and a2x + b2y + c2z = 0. The solution of this system gives x0 = b1c2 − b2c1, y0 = a2c1 − a1c2, and z0 = a1b2 − a2b1. Alternatively, consider another line l = [a, b, c] passing through the point P; that is, the homogeneous coordinates of P satisfy the equation ax + by + cz = 0. Combining this equation with the two that define P, we can seek a non-trivial solution of the matrix equation {\displaystyle {\begin{pmatrix}a&b&c\\a_{1}&b_{1}&c_{1}\\a_{2}&b_{2}&c_{2}\end{pmatrix}}{\begin{pmatrix}x\\y\\z\end{pmatrix}}={\begin{pmatrix}0\\0\\0\end{pmatrix}}.} Such a solution exists provided the determinant {\displaystyle {\begin{vmatrix}a&b&c\\a_{1}&b_{1}&c_{1}\\a_{2}&b_{2}&c_{2}\end{vmatrix}}=0.} The coefficients of a, b and c in this equation give the homogeneous coordinates of P. The equation of the generic line passing through the point P in scalar triple product notation is l ⋅ (l1 × l2) = 0. Lines that meet at the same point are said to be concurrent. The set of all lines in a plane incident with the same point is called a pencil of lines centered at that point. The computation of the intersection of two lines shows that the entire pencil of lines centered at a point is determined by any two of the lines that intersect at that point. It immediately follows that the algebraic condition for three lines, [a1, b1, c1], [a2, b2, c2], [a3, b3, c3], to be concurrent is that the determinant {\displaystyle {\begin{vmatrix}a_{1}&b_{1}&c_{1}\\a_{2}&b_{2}&c_{2}\\a_{3}&b_{3}&c_{3}\end{vmatrix}}=0.}
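Because incidence in PG(2, F) reduces to dot products, cross products, and determinants of coordinate triples, the computations above are easy to carry out numerically. The following sketch works over the reals (floating point standing in for an arbitrary field); the function names and the tolerance are illustrative.

```python
import numpy as np

def line_through(p1, p2):
    """Homogeneous line coordinates [a, b, c] of the line joining two points."""
    return np.cross(p1, p2)

def meet(l1, l2):
    """Homogeneous point coordinates of the intersection of two lines."""
    return np.cross(l1, l2)

def incident(p, l, tol=1e-9):
    """P I l holds exactly when the dot product of the triples vanishes."""
    return abs(np.dot(p, l)) < tol

def collinear(p1, p2, p3, tol=1e-9):
    """Points are collinear iff the determinant of their coordinates is zero."""
    return abs(np.linalg.det(np.array([p1, p2, p3]))) < tol

# Example: the line through the points (1 : 0 : 1) and (0 : 1 : 1).
p1, p2 = np.array([1.0, 0.0, 1.0]), np.array([0.0, 1.0, 1.0])
l = line_through(p1, p2)  # [-1, -1, 1], i.e. x + y - z = 0 up to scale
assert incident(p1, l) and incident(p2, l)
```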
https://en.wikipedia.org/wiki/Incidence_(geometry)
A neural network is a group of interconnected units called neurons that send signals to one another. Neurons can be either biological cells or mathematical models. While individual neurons are simple, many of them together in a network can perform complex tasks. There are two main types of neural networks. In the context of biology, a neural network is a population of biological neurons chemically connected to each other by synapses. A given neuron can be connected to hundreds of thousands of synapses.[1] Each neuron sends and receives electrochemical signals called action potentials to its connected neighbors. A neuron can serve an excitatory role, amplifying and propagating the signals it receives, or an inhibitory role, suppressing signals instead.[1] Populations of interconnected neurons that are smaller than neural networks are called neural circuits. Very large interconnected networks are called large-scale brain networks, and many of these together form brains and nervous systems. Signals generated by neural networks in the brain eventually travel through the nervous system and across neuromuscular junctions to muscle cells, where they cause contraction and thereby motion.[2] In machine learning, a neural network is an artificial mathematical model used to approximate nonlinear functions. While early artificial neural networks were physical machines,[3] today they are almost always implemented in software. Neurons in an artificial neural network are usually arranged into layers, with information passing from the first layer (the input layer) through one or more intermediate layers (the hidden layers) to the final layer (the output layer).[4] The "signal" input to each neuron is a number, specifically a linear combination of the outputs of the connected neurons in the previous layer. The signal each neuron outputs is calculated from this number, according to its activation function. The behavior of the network depends on the strengths (or weights) of the connections between neurons. A network is trained by modifying these weights through empirical risk minimization or backpropagation in order to fit some preexisting dataset.[5] The term deep neural network refers to neural networks that have more than three layers, typically including at least two hidden layers in addition to the input and output layers. Neural networks are used to solve problems in artificial intelligence, and have thereby found applications in many disciplines, including predictive modeling, adaptive control, facial recognition, handwriting recognition, general game playing, and generative AI. The theoretical base for contemporary neural networks was independently proposed by Alexander Bain in 1873[6] and William James in 1890.[7] Both posited that human thought emerged from interactions among large numbers of neurons inside the brain. In 1949, Donald Hebb described Hebbian learning, the idea that neural networks can change and learn over time by strengthening a synapse every time a signal travels along it.[8] Artificial neural networks were originally used to model biological neural networks starting in the 1930s under the approach of connectionism. However, starting with the artificial neuron model proposed by Warren McCulloch and Walter Pitts in 1943,[9] followed by Frank Rosenblatt's perceptron, a simple artificial neural network implemented in hardware in 1957,[3] artificial neural networks became increasingly used for machine learning applications instead, and increasingly different from their biological counterparts.
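As a minimal sketch of the layered forward pass described above (not taken from any particular library), each layer applies an activation function to a linear combination of the previous layer's outputs; the layer sizes and the ReLU activation are illustrative choices.

```python
import numpy as np

def layer(x, W, b, activation):
    """One layer: each neuron's input is a linear combination (W @ x + b)
    of the previous layer's outputs; its output is the activation of that sum."""
    return activation(W @ x + b)

relu = lambda z: np.maximum(z, 0.0)

rng = np.random.default_rng(0)
x = rng.normal(size=3)                          # input layer: 3 features
W1, b1 = rng.normal(size=(4, 3)), np.zeros(4)   # hidden layer: 4 neurons
W2, b2 = rng.normal(size=(2, 4)), np.zeros(2)   # output layer: 2 neurons
h = layer(x, W1, b1, relu)                      # hidden activations
y = layer(h, W2, b2, lambda z: z)               # linear output
```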
https://en.wikipedia.org/wiki/Neural_network
A cant is the jargon or language of a group, often employed to exclude or mislead people outside the group.[1] It may also be called a cryptolect, argot, pseudo-language, anti-language or secret language. Each term differs slightly in meaning; their uses are inconsistent. There are two main schools of thought on the origin of the word cant. An argot (English: /ˈɑːrɡoʊ/; from French argot [aʁɡo] 'slang') is a language used by various groups to prevent outsiders from understanding their conversations. The term argot is also used to refer to the informal specialized vocabulary from a particular field of study, occupation, or hobby, in which sense it overlaps with jargon. In his 1862 novel Les Misérables, Victor Hugo refers to that argot as both "the language of the dark" and "the language of misery".[4] The earliest known record of the term argot in this context was in a 1628 document. The word was probably derived from the contemporary name les argotiers, given to a group of thieves at that time.[5] Under the strictest definition, an argot is a proper language with its own grammatical system.[6] Such complete secret languages are rare because the speakers usually have some public language in common, on which the argot is largely based. Such argots are lexically divergent forms of a particular language, with a part of its vocabulary replaced by words unknown to the larger public; argot used in this sense is synonymous with cant. For example, argot in this sense is used for systems such as verlan and louchébem, which retain French syntax and apply transformations only to individual words (and often only to a certain subset of words, such as nouns, or semantic content words).[7] Such systems are examples of argots à clef, or "coded argots".[7] Specific words can go from argot into everyday speech or the other way. For example, modern French loufoque 'crazy', 'goofy', now common usage, originated in the louchébem transformation of Fr. fou 'crazy'. In the field of medicine, physicians have been said to have their own spoken argot, cant, or slang, which incorporates commonly understood abbreviations and acronyms, frequently used technical colloquialisms, and much everyday professional slang (that may or may not be institutionally or geographically localized).[8] While many of these colloquialisms may prove impenetrable to most lay people, few seem to be specifically designed to conceal meaning from patients (perhaps because standard medical terminology would usually suffice anyway).[8] The concept of the anti-language was first defined and studied by the linguist Michael Halliday, who used the term to describe the lingua franca of an anti-society.[9] An anti-society is a small, separate community intentionally created within a larger society as an alternative to or resistance of it.[9] For example, Adam Podgórecki studied one anti-society composed of Polish prisoners; Bhaktiprasad Mallik of Sanskrit College studied another composed of criminals in Calcutta.[9] These societies develop anti-languages as a means to prevent outsiders from understanding their communication and as a manner of establishing a subculture that meets the needs of their alternative social structure.[10] Anti-languages differ from slang and jargon in that they are used solely among ostracized social groups, including prisoners,[11] criminals, homosexuals,[10] and teenagers.[12] Anti-languages use the same basic vocabulary and grammar as their native language in an unorthodox fashion.
For example, anti-languages borrow words from other languages, create unconventional compounds, or utilize new suffixes for existing words. Anti-languages may also change words using metathesis, reversal of sounds or letters (e.g., apple to elppa), or substitution of their consonants.[9] Therefore, anti-languages are distinct and unique and are not simply dialects of existing languages. In his essay "Anti-Language", Halliday synthesized the research of Thomas Harman, Adam Podgórecki, and Bhaktiprasad Mallik to explore anti-languages and the connection between verbal communication and the maintenance of a social structure. For this reason, the study of anti-languages is both a study of sociology and of linguistics. Halliday's findings can be compiled as a list of nine criteria that a language must meet to be considered an anti-language. Examples of anti-languages include Cockney rhyming slang, CB slang, verlan, the grypsera of Polish prisons, thieves' cant,[13] Polari,[14] and Bangime.[15] Anti-languages are sometimes created by authors and used by characters in novels. These anti-languages do not have complete lexicons, cannot be observed in use for linguistic description, and therefore cannot be studied in the same way a language spoken by an existing anti-society would. However, they are still used in the study of anti-languages. Roger Fowler's "Anti-Languages in Fiction" analyzes Anthony Burgess's A Clockwork Orange and William S. Burroughs' Naked Lunch to redefine the nature of the anti-language and to describe its ideological purpose.[16] A Clockwork Orange is a popular example of a novel in which the main character is a teenage boy who speaks an anti-language called Nadsat. This language is often referred to as an argot, but it has been argued that it is an anti-language because of the social structure it maintains through the social class of the droogs.[12] In parts of Connacht, in Ireland, cant mainly refers to an auction, typically on fair day ("Cantmen and Cantwomen, some from as far away as Dublin, would converge on Mohill on a Fair Day, ... set up their stalls ... and immediately start auctioning off their merchandise"), and secondly means talk ("very entertaining conversation was often described as 'great cant'" or "crosstalk").[17][18] In Scotland, two unrelated creole languages are termed cant. Scottish Cant (a mixed language, primarily Scots and Romani with Scottish Gaelic influences) is spoken by lowland Roma groups. Highland Traveller's Cant (or Beurla Reagaird) is a Gaelic-based cant of the Indigenous Highland Traveller population.[2] The cants are mutually unintelligible. The word has also been used as a suffix to coin names for modern-day jargons such as "medicant", a term used to refer to the type of language employed by members of the medical profession that is largely unintelligible to lay people.[1] The thieves' cant was a feature of popular pamphlets and plays, particularly between 1590 and 1615, but continued to feature in literature through the 18th century. There are questions about how genuinely the literature reflected vernacular use in the criminal underworld. A thief in 1839 claimed that the cant he had seen in print was nothing like the cant then used by "gypsies, thieves, and beggars." He also said that each of these used distinct vocabularies, which overlapped, the gypsies having a cant word for everything, and the beggars using a lower style than the thieves.[23]
https://en.wikipedia.org/wiki/Argot
High Speed Packet Access (HSPA)[1] is an amalgamation of two mobile protocols—High Speed Downlink Packet Access (HSDPA) and High Speed Uplink Packet Access (HSUPA)—that extends and improves the performance of existing 3G mobile telecommunication networks using the WCDMA protocols. A further-improved 3GPP standard called Evolved High Speed Packet Access (also known as HSPA+) was released late in 2008, with subsequent worldwide adoption beginning in 2010. The newer standard allows bit rates to reach as high as 337 Mbit/s in the downlink and 34 Mbit/s in the uplink; however, these speeds are rarely achieved in practice.[2] The first HSPA specifications supported increased peak data rates of up to 14 Mbit/s in the downlink and 5.76 Mbit/s in the uplink. They also reduced latency and provided up to five times more system capacity in the downlink and up to twice as much system capacity in the uplink compared with the original WCDMA protocol. High Speed Downlink Packet Access (HSDPA) is an enhanced 3G (third-generation) mobile communications protocol in the High-Speed Packet Access (HSPA) family. HSDPA is also known as 3.5G and 3G+. It allows networks based on the Universal Mobile Telecommunications System (UMTS) to have higher data speeds and capacity. HSDPA also decreases latency, and therefore the round-trip time for applications. HSDPA was introduced in 3GPP Release 5. It was accompanied by an improvement to the uplink that provided a new bearer of 384 kbit/s (the previous maximum bearer was 128 kbit/s). Evolved High Speed Packet Access (HSPA+), introduced in 3GPP Release 7, further increased data rates by adding 64QAM modulation, MIMO, and Dual-Carrier HSDPA operation. Under 3GPP Release 11, even higher speeds of up to 337.5 Mbit/s were possible.[3] The first phase of HSDPA was specified in 3GPP Release 5. This phase introduced new basic functions and was aimed at achieving peak data rates of 14.0 Mbit/s with significantly reduced latency. The improvement in speed and latency reduced the cost per bit and enhanced support for high-performance packet data applications. HSDPA is based on shared channel transmission, and its key features are shared channel and multi-code transmission, higher-order modulation, a short Transmission Time Interval (TTI), fast link adaptation and scheduling, and fast hybrid automatic repeat request (HARQ). Additional new features include the High Speed Downlink Shared Channels (HS-DSCH), quadrature phase-shift keying, 16-quadrature amplitude modulation, and the High Speed Medium Access protocol (MAC-hs) in base stations. The upgrade to HSDPA is often just a software update for WCDMA networks. In HSDPA, voice calls are usually prioritized over data transfer. The following table is derived from table 5.1a of Release 11 of 3GPP TS 25.306[4] and shows maximum data rates of different device classes and by what combination of features they are achieved. The per-cell, per-stream data rate is limited by the "maximum number of bits of an HS-DSCH transport block received within an HS-DSCH TTI" and the "minimum inter-TTI interval". The TTI is 2 milliseconds. So, for example, Cat 10 can decode 27,952 bits / 2 ms = 13.976 Mbit/s (and not 14.4 Mbit/s as often claimed incorrectly). Categories 1–4 and 11 have inter-TTI intervals of 2 or 3, which reduce the maximum data rate by that factor. Dual-Cell and 2x2 MIMO each multiply the maximum data rate by 2, because multiple independent transport blocks are transmitted over different carriers or spatial streams, respectively.
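The Cat 10 calculation above generalizes: divide the transport-block size by the TTI and by the minimum inter-TTI interval, then multiply by the number of independent transport blocks (carriers × spatial streams). A small sketch; the helper name is illustrative, and the Cat 10 transport-block size is the figure quoted above.

```python
def hsdpa_peak_rate_mbps(transport_block_bits, tti_ms=2.0, inter_tti=1,
                         carriers=1, mimo_streams=1):
    """Peak rate = bits per TTI / TTI / inter-TTI interval,
    times the number of independent transport blocks."""
    per_stream = transport_block_bits / (tti_ms * 1e-3) / inter_tti
    return per_stream * carriers * mimo_streams / 1e6

print(hsdpa_peak_rate_mbps(27_952))                   # Cat 10: 13.976 Mbit/s
print(hsdpa_peak_rate_mbps(27_952, mimo_streams=2))   # doubled with 2x2 MIMO
```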
The data rates given in the table are rounded to one decimal point. Further UE categories were defined from 3GPP Release 7 onwards as Evolved HSPA (HSPA+) and are listed in Evolved HSDPA UE Categories. As of 28 August 2009, 250 HSDPA networks had commercially launched mobile broadband services in 109 countries. 169 HSDPA networks supported 3.6 Mbit/s peak downlink data throughput, and a growing number delivered 21 Mbit/s peak downlink data rates. CDMA2000-EVDO networks had the early lead on performance. In particular, Japanese providers were highly successful benchmarks for this network standard. However, this later changed in favor of HSDPA, as an increasing number of providers worldwide began adopting it. In 2007, an increasing number of telcos worldwide began selling HSDPA USB modems to provide mobile broadband connections. In addition, the popularity of HSDPA landline replacement boxes grew—these provided HSDPA for data via Ethernet and Wi-Fi, as well as ports for connecting traditional landline telephones. Some were marketed with connection speeds of "up to 7.2 Mbit/s"[5] under ideal conditions. However, these services could be slower, such as when in fringe coverage indoors. High-Speed Uplink Packet Access (HSUPA) is a 3G mobile telephony protocol in the HSPA family. It is specified and standardized in 3GPP Release 6 to improve the uplink data rate to 5.76 Mbit/s, extend capacity, and reduce latency. Together with additional improvements, this allows for new features such as Voice over Internet Protocol (VoIP), uploading pictures, and sending large e-mail messages. HSUPA was the second major step in the UMTS evolution process. It has since been superseded by newer technologies with higher transfer rates, such as LTE (150 Mbit/s for downlink and 50 Mbit/s for uplink) and LTE Advanced (maximum downlink rates of over 1 Gbit/s). HSUPA adds a new transport channel to WCDMA, called the Enhanced Dedicated Channel (E-DCH). It also features several improvements similar to those of HSDPA, including multi-code transmission, a shorter transmission time interval enabling faster link adaptation, fast scheduling, and fast hybrid automatic repeat request (HARQ) with incremental redundancy, making retransmissions more effective. Similar to HSDPA, HSUPA uses a "packet scheduler", but it operates on a "request-grant" principle where the user equipment (UE) requests permission to send data and the scheduler decides when and how many UEs will be allowed to do so. A request for transmission contains data about the state of the transmission buffer and the queue at the UE and its available power margin. However, unlike HSDPA, uplink transmissions are not orthogonal to each other. In addition to this "scheduled" mode of transmission, the standards allow a self-initiated transmission mode from the UEs, denoted "non-scheduled". The non-scheduled mode can, for example, be used for VoIP services, for which even the reduced TTI and the Node B-based scheduler are unable to provide the necessary short delay time and constant bandwidth. Each MAC-d flow (i.e., QoS flow) is configured to use either the scheduled or the non-scheduled mode. The UE adjusts the data rate for scheduled and non-scheduled flows independently. The maximum data rate of each non-scheduled flow is configured at call setup, and is typically not changed frequently. The power used by the scheduled flows is controlled dynamically by the Node B through absolute grant (consisting of an actual value) and relative grant (consisting of a single up/down bit) messages.
At the physical layer, HSUPA introduces several new channels. Uplink speeds differ between the various HSUPA UE categories; further UE categories were defined from 3GPP Release 7 onwards as Evolved HSPA (HSPA+) and are listed in Evolved HSUPA UE Categories. Evolved HSPA (also known as HSPA Evolution or HSPA+) is a wireless broadband standard defined in 3GPP Release 7 of the WCDMA specification. It provides extensions to the existing HSPA definitions and is therefore backward compatible all the way to the original Release 99 WCDMA network releases. Evolved HSPA provides data rates between 42.2 and 56 Mbit/s in the downlink and 22 Mbit/s in the uplink (per 5 MHz carrier) with multiple-input multiple-output (2x2 MIMO) technologies and higher-order modulation (64 QAM). With Dual Cell technology, these rates can be doubled. Since 2011, HSPA+ has been widely deployed among WCDMA operators, with nearly 200 commitments.[6]
https://en.wikipedia.org/wiki/High_Speed_Packet_Access
IEEE 802.11n-2009, or 802.11n, is a wireless-networking standard that uses multiple antennas to increase data rates. The Wi-Fi Alliance has also retroactively labelled the technology for the standard as Wi-Fi 4.[4][5] It standardized support for multiple-input multiple-output (MIMO), frame aggregation, and security improvements, among other features, and can be used in the 2.4 GHz or 5 GHz frequency bands. Being the first Wi-Fi standard to introduce MIMO support, devices and systems which supported the 802.11n standard (or draft versions thereof) were sometimes referred to as MIMO Wi-Fi products, especially prior to the introduction of the next-generation standard.[6] The use of MIMO-OFDM (orthogonal frequency division multiplexing) to increase the data rate while maintaining the same spectrum as 802.11a was first demonstrated by Airgo Networks.[7] The purpose of the standard is to improve network throughput over the two previous standards—802.11a and 802.11g—with a significant increase in the maximum net data rate from 54 Mbit/s to 72 Mbit/s with a single spatial stream in a 20 MHz channel, and 600 Mbit/s (a slightly higher gross bit rate, including for example error-correction codes, and a slightly lower maximum throughput) with the use of four spatial streams at a channel width of 40 MHz.[8][9] IEEE 802.11n-2009 is an amendment to the IEEE 802.11-2007 wireless-networking standard. 802.11 is a set of IEEE standards that govern wireless networking transmission methods. They are commonly used today in their 802.11a, 802.11b, 802.11g, 802.11n, 802.11ac and 802.11ax versions to provide wireless connectivity in homes and businesses. Development of 802.11n began in 2002, seven years before publication. The 802.11n protocol is now Clause 20 of the published IEEE 802.11-2012 standard, subsequently renumbered to Clause 19 of the published IEEE 802.11-2020 standard. IEEE 802.11n is an amendment to IEEE 802.11-2007 as amended by IEEE 802.11k-2008, IEEE 802.11r-2008, IEEE 802.11y-2008, and IEEE 802.11w-2009, and builds on previous 802.11 standards by adding a multiple-input multiple-output (MIMO) system and 40 MHz channels to the PHY (physical layer) and frame aggregation to the MAC layer. There were older proprietary implementations of MIMO and 40 MHz channels, such as Xpress, Super G and Nitro, which were based upon 802.11g and 802.11a technology, but this was the first time these techniques were standardized across all radio manufacturers. MIMO is a technology that uses multiple antennas to coherently resolve more information than is possible using a single antenna. One way it provides this is through spatial division multiplexing (SDM), which spatially multiplexes multiple independent data streams, transferred simultaneously within one spectral channel of bandwidth. MIMO SDM can significantly increase data throughput as the number of resolved spatial data streams is increased. Each spatial stream requires a discrete antenna at both the transmitter and the receiver. In addition, MIMO technology requires a separate radio-frequency chain and analog-to-digital converter for each antenna, making it more expensive to implement than non-MIMO systems. Channels operating with a width of 40 MHz are another feature incorporated into 802.11n; this doubles the channel width from the 20 MHz of previous 802.11 PHYs, and provides twice the PHY data rate available over a single 20 MHz channel.
It can be enabled in the 5 GHz mode, or within the 2.4 GHz mode if there is knowledge that it will not interfere with any other 802.11 or non-802.11 (such as Bluetooth) system using the same frequencies.[10] The MIMO architecture, together with the wider channels, offers an increased physical transfer rate over standard 802.11a (5 GHz) and 802.11g (2.4 GHz).[11] The transmitter and receiver use precoding and postcoding techniques, respectively, to achieve the capacity of a MIMO link. Precoding includes spatial beamforming and spatial coding, where spatial beamforming improves the received signal quality at the decoding stage. Spatial coding can increase data throughput via spatial multiplexing and increase range by exploiting spatial diversity, through techniques such as Alamouti coding. The number of simultaneous data streams is limited by the minimum number of antennas in use on both sides of the link. However, the individual radios often further limit the number of spatial streams that may carry unique data. The a×b:c notation helps identify what a given radio is capable of. The first number (a) is the maximum number of transmit antennas or transmitting RF chains that can be used by the radio. The second number (b) is the maximum number of receive antennas or receiving RF chains that can be used by the radio. The third number (c) is the maximum number of data spatial streams the radio can use. For example, a radio that can transmit on two antennas and receive on three, but can only send or receive two data streams, would be 2×3:2. The 802.11n draft allows up to 4×4:4. Common configurations of 11n devices are 2×2:2, 2×3:2, and 3×2:2. All three configurations have the same maximum throughputs and features, and differ only in the amount of diversity the antenna systems provide. In addition, a fourth configuration, 3×3:3, is becoming common; it has a higher throughput, due to the additional data stream.[12] Assuming equal operating parameters to an 802.11g network achieving 54 megabits per second (on a single 20 MHz channel with one antenna), an 802.11n network can achieve 72 megabits per second (on a single 20 MHz channel with one antenna and a 400 ns guard interval); 802.11n's speed may go up to 150 megabits per second if there are no other Bluetooth, microwave or Wi-Fi emissions in the neighborhood, by using two 20 MHz channels in 40 MHz mode. If more antennas are used, then 802.11n can go up to 288 megabits per second in 20 MHz mode with four antennas, or 600 megabits per second in 40 MHz mode with four antennas and a 400 ns guard interval. Because the 2.4 GHz band is seriously congested in most urban areas, 802.11n networks usually have more success in increasing data rate by utilizing more antennas in 20 MHz mode rather than by operating in the 40 MHz mode, as the 40 MHz mode requires a relatively free radio spectrum which is only available in rural areas away from cities. Thus, network engineers installing an 802.11n network should strive to select routers and wireless clients with the most antennas possible (one, two, three or four, as specified by the 802.11n standard) and try to make sure that the network's bandwidth will be satisfactory even in the 20 MHz mode. Data rates up to 600 Mbit/s are achieved only with the maximum of four spatial streams using one 40 MHz-wide channel. Various modulation schemes and coding rates are defined by the standard, which also assigns an arbitrary number to each; this number is the modulation and coding scheme index, or MCS index.
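As a rough illustration of how the MCS variables combine, the PHY rate is the number of data bits per OFDM symbol (data subcarriers × bits per subcarrier × coding rate) times the number of spatial streams, divided by the symbol duration. The sketch below uses the 20 MHz parameters given in the next paragraph (52 data subcarriers, 3.6 μs symbol with short guard interval) and assumes 108 data subcarriers for a 40 MHz channel; the function name is illustrative.

```python
def phy_rate_mbps(data_subcarriers, bits_per_subcarrier, coding_rate,
                  streams, symbol_us):
    """802.11n PHY rate: data bits per OFDM symbol, per stream, per microsecond."""
    bits_per_symbol = data_subcarriers * bits_per_subcarrier * coding_rate
    return bits_per_symbol * streams / symbol_us  # Mbit/s

# MCS 7: 64-QAM (6 bits), rate 5/6, one stream, 20 MHz, short GI -> 72.2
print(phy_rate_mbps(52, 6, 5 / 6, 1, 3.6))
# Four streams on a 40 MHz channel (assumed 108 data subcarriers) -> 600.0
print(phy_rate_mbps(108, 6, 5 / 6, 4, 3.6))
```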
The table below shows the relationships between the variables that allow for the maximum data rate. GI (Guard Interval): the timing between symbols.[13] A 20 MHz channel uses an FFT of 64, of which 56 are OFDM subcarriers: 52 carry data and 4 are pilot tones, with a carrier separation of 0.3125 MHz (20 MHz/64) and a useful symbol duration of 3.2 μs. Each of these subcarriers can carry BPSK, QPSK, 16-QAM or 64-QAM modulation. The total bandwidth is 20 MHz, with an occupied bandwidth of 17.8 MHz. The total symbol duration is 3.6 or 4 microseconds, which includes a guard interval of 0.4 microseconds (also known as the short guard interval (SGI)) or 0.8 microseconds. PHY-level data rate does not match user-level throughput because of 802.11 protocol overheads, like the contention process, interframe spacing, PHY-level headers (Preamble + PLCP) and acknowledgment frames. The main media access control (MAC) feature that provides a performance improvement is aggregation. Two types of aggregation are defined: aggregation of MAC service data units (A-MSDU) and aggregation of MAC protocol data units (A-MPDU). Frame aggregation is a process of packing multiple MSDUs or MPDUs together to reduce the overheads and average them over multiple frames, thereby increasing the user-level data rate. A-MPDU aggregation requires the use of block acknowledgement, or BlockAck, which was introduced in 802.11e and has been optimized in 802.11n. When 802.11g was released to share the band with existing 802.11b devices, it provided ways of ensuring backward compatibility between legacy and successor devices. 802.11n extends the coexistence management to protect its transmissions from legacy devices, which include 802.11g, 802.11b and 802.11a, with protection mechanisms at both the MAC and PHY levels. To achieve maximum output, a pure 802.11n 5 GHz network is recommended. The 5 GHz band has substantial capacity due to many non-overlapping radio channels and less radio interference as compared to the 2.4 GHz band.[14] An 802.11n-only network may be impractical for many users because they need to support legacy equipment that is still 802.11b/g only. In a mixed-mode system, an optimal solution would be to use a dual-radio access point and place the 802.11b/g traffic on the 2.4 GHz radio and the 802.11n traffic on the 5 GHz radio.[15] This setup assumes that all the 802.11n clients are 5 GHz capable, which is not a requirement of the standard. 5 GHz support is optional in Wi-Fi 4; quite a few Wi-Fi 4-capable devices support only 2.4 GHz, and there is no practical way to upgrade them to support 5 GHz. Some enterprise-grade APs use band steering to send 802.11n clients to the 5 GHz band, leaving the 2.4 GHz band for legacy clients. Band steering works by responding only to 5 GHz association requests and not the 2.4 GHz requests from dual-band clients.[16] The 2.4 GHz ISM band is fairly congested. With 802.11n, there is the option to double the bandwidth per channel to 40 MHz (a "fat channel"), which results in slightly more than double the data rate. However, in North America, when in 2.4 GHz, enabling this option takes up to 82% of the unlicensed band. For example, channel 3 SCA (secondary channel above), also known as 3+7, reserves the first 9 of the 11 available channels. In Europe and other places where channels 1–13 are available, allocating 1+5 uses slightly more than 50% of the channels, but the overlap with 9+13 is not usually significant, as it lies at the edges of the bands, and so two 40 MHz bands typically work unless the transmitters are physically very closely spaced. The specification calls for one primary 20 MHz channel as well as a secondary adjacent channel spaced ±20 MHz away.
The primary channel is used for communications with clients incapable of 40 MHz mode. When in 40 MHz mode, the center frequency is actually the mean of the primary and secondary channels. Local regulations may restrict certain channels from operation. For example, channels 12 and 13 are normally unavailable for use as either a primary or secondary channel in North America. For further information, see List of WLAN channels. The Wi-Fi Alliance has upgraded its suite of compatibility tests for some enhancements that were finalized after draft 2.0. Furthermore, it has affirmed that all draft-n certified products remain compatible with the products conforming to the final standards.[17] After the first draft of the IEEE 802.11n standard was published in 2006, many manufacturers began producing so-called "draft-n" products that claimed to comply with the draft standard, even before the standard was finalized, which meant they might not interoperate with products built to the published IEEE 802.11n standard, or even with each other.[18] The Wi-Fi Alliance began certifying products based on IEEE 802.11n draft 2.0 in mid-2007.[19][20] This certification program established a set of features and a level of interoperability across vendors supporting those features, thus providing one definition of "draft n" to ensure compatibility and interoperability. The baseline certification covers both 20 MHz and 40 MHz wide channels, and up to two spatial streams, for maximum throughputs of 144.4 Mbit/s for 20 MHz and 300 Mbit/s for 40 MHz (with short guard interval). A number of vendors in both the consumer and enterprise spaces have built products that have achieved this certification.[21] A number of milestones marked the development of 802.11n.[22]
https://en.wikipedia.org/wiki/IEEE_802.11n-2009#Data_rates
In the theory of cluster analysis, the nearest-neighbor chain algorithm is an algorithm that can speed up several methods for agglomerative hierarchical clustering. These are methods that take a collection of points as input, and create a hierarchy of clusters of points by repeatedly merging pairs of smaller clusters to form larger clusters. The clustering methods that the nearest-neighbor chain algorithm can be used for include Ward's method, complete-linkage clustering, and single-linkage clustering; these all work by repeatedly merging the closest two clusters but use different definitions of the distance between clusters. The cluster distances for which the nearest-neighbor chain algorithm works are called reducible and are characterized by a simple inequality among certain cluster distances. The main idea of the algorithm is to find pairs of clusters to merge by following paths in the nearest neighbor graph of the clusters. Every such path will eventually terminate at a pair of clusters that are nearest neighbors of each other, and the algorithm chooses that pair of clusters as the pair to merge. In order to save work by re-using as much as possible of each path, the algorithm uses a stack data structure to keep track of each path that it follows. By following paths in this way, the nearest-neighbor chain algorithm merges its clusters in a different order than methods that always find and merge the closest pair of clusters. However, despite that difference, it always generates the same hierarchy of clusters. The nearest-neighbor chain algorithm constructs a clustering in time proportional to the square of the number of points to be clustered. This is also proportional to the size of its input, when the input is provided in the form of an explicit distance matrix. The algorithm uses an amount of memory proportional to the number of points, when it is used for clustering methods such as Ward's method that allow constant-time calculation of the distance between clusters. However, for some other clustering methods it uses a larger amount of memory in an auxiliary data structure with which it keeps track of the distances between pairs of clusters. Many problems in data analysis concern clustering, grouping data items into clusters of closely related items. Hierarchical clustering is a version of cluster analysis in which the clusters form a hierarchy or tree-like structure rather than a strict partition of the data items. In some cases, this type of clustering may be performed as a way of performing cluster analysis at multiple different scales simultaneously. In others, the data to be analyzed naturally has an unknown tree structure and the goal is to recover that structure by performing the analysis. Both of these kinds of analysis can be seen, for instance, in the application of hierarchical clustering to biological taxonomy. In this application, different living things are grouped into clusters at different scales or levels of similarity (species, genus, family, etc.). This analysis simultaneously gives a multi-scale grouping of the organisms of the present age, and aims to accurately reconstruct the branching process or evolutionary tree that in past ages produced these organisms.[1] The input to a clustering problem consists of a set of points.[2] A cluster is any proper subset of the points, and a hierarchical clustering is a maximal family of clusters with the property that any two clusters in the family are either nested or disjoint.
Alternatively, a hierarchical clustering may be represented as a binary tree with the points at its leaves; the clusters of the clustering are the sets of points in subtrees descending from each node of the tree.[3] In agglomerative clustering methods, the input also includes a distance function defined on the points, or a numerical measure of their dissimilarity. The distance or dissimilarity should be symmetric: the distance between two points does not depend on which of them is considered first. However, unlike the distances in a metric space, it is not required to satisfy the triangle inequality.[2] Next, the dissimilarity function is extended from pairs of points to pairs of clusters. Different clustering methods perform this extension in different ways. For instance, in the single-linkage clustering method, the distance between two clusters is defined to be the minimum distance between any two points from each cluster. Given this distance between clusters, a hierarchical clustering may be defined by a greedy algorithm that initially places each point in its own single-point cluster and then repeatedly forms a new cluster by merging the closest pair of clusters.[2] The bottleneck of this greedy algorithm is the subproblem of finding which two clusters to merge in each step. Known methods for repeatedly finding the closest pair of clusters in a dynamic set of clusters either require superlinear space to maintain a data structure that can find closest pairs quickly, or they take greater than linear time to find each closest pair.[4][5] The nearest-neighbor chain algorithm uses a smaller amount of time and space than the greedy algorithm by merging pairs of clusters in a different order. In this way, it avoids the problem of repeatedly finding closest pairs. Nevertheless, for many types of clustering problem, it can be guaranteed to come up with the same hierarchical clustering as the greedy algorithm despite the different merge order.[2] Intuitively, the nearest neighbor chain algorithm repeatedly follows a chain of clusters A → B → C → ... where each cluster is the nearest neighbor of the previous one, until reaching a pair of clusters that are mutual nearest neighbors.[2] In more detail, the algorithm maintains a stack of clusters and performs the following steps:[2][6] if the stack is empty, push an arbitrary cluster onto it; otherwise, find the nearest neighbor of the cluster on top of the stack; if that nearest neighbor is the cluster immediately beneath the top of the stack, the two are mutual nearest neighbors, so pop them both and merge them; otherwise, push the nearest neighbor onto the stack. When it is possible for one cluster to have multiple equal nearest neighbors, the algorithm requires a consistent tie-breaking rule. For instance, one may assign arbitrary index numbers to all of the clusters, and then select (among the equal nearest neighbors) the one with the smallest index number. This rule prevents certain kinds of inconsistent behavior in the algorithm; for instance, without such a rule, the neighboring cluster D might occur earlier in the stack than as the predecessor of C.[7] Each iteration of the loop performs a single search for the nearest neighbor of a cluster, and either adds one cluster to the stack or removes two clusters from it. Every cluster is only ever added once to the stack, because when it is removed again it is immediately made inactive and merged. There are a total of 2n − 2 clusters that ever get added to the stack: n single-point clusters in the initial set, and n − 2 internal nodes other than the root in the binary tree representing the clustering. Therefore, the algorithm performs 2n − 2 pushing iterations and n − 1 popping iterations.[2] Each of these iterations may spend time scanning as many as n − 1 inter-cluster distances to find the nearest neighbor. The total number of distance calculations it makes is therefore less than 3n².
For the same reason, the total time used by the algorithm outside of these distance calculations is O(n²).[2] Since the only data structures are the set of active clusters and the stack containing a subset of the active clusters, the space required is linear in the number of input points.[2] For the algorithm to be correct, it must be the case that popping and merging the top two clusters from the algorithm's stack preserves the property that the remaining clusters on the stack form a chain of nearest neighbors. Additionally, it should be the case that all of the clusters produced during the algorithm are the same as the clusters produced by a greedy algorithm that always merges the closest two clusters, even though the greedy algorithm will in general perform its merges in a different order than the nearest-neighbor chain algorithm. Both of these properties depend on the specific choice of how to measure the distance between clusters.[2] The correctness of this algorithm relies on a property of its distance function called reducibility. This property was identified by Bruynooghe (1977) in connection with an earlier clustering method that used mutual nearest neighbor pairs but not chains of nearest neighbors.[8] A distance function d on clusters is defined to be reducible if, for every three clusters A, B and C in the greedy hierarchical clustering such that A and B are mutual nearest neighbors, the following inequality holds:[2] {\displaystyle d(A\cup B,C)\geq \min\{d(A,C),d(B,C)\}.} If a distance function has the reducibility property, then merging two clusters C and D can only cause the nearest neighbor of another cluster E to change if that nearest neighbor was one of C and D. This has two important consequences for the nearest neighbor chain algorithm. First, it can be shown using this property that, at each step of the algorithm, the clusters on the stack S form a valid chain of nearest neighbors, because whenever a nearest neighbor becomes invalidated it is immediately removed from the stack.[2] Second, and even more importantly, it follows from this property that, if two clusters C and D both belong to the greedy hierarchical clustering, and are mutual nearest neighbors at any point in time, then they will be merged by the greedy clustering, for they must remain mutual nearest neighbors until they are merged. It follows that each mutual nearest neighbor pair found by the nearest neighbor chain algorithm is also a pair of clusters found by the greedy algorithm, and therefore that the nearest neighbor chain algorithm computes exactly the same clustering (although in a different order) as the greedy algorithm.[2] Ward's method is an agglomerative clustering method in which the dissimilarity between two clusters A and B is measured by the amount by which merging the two clusters into a single larger cluster would increase the total squared distance of the points to their cluster centroids.[9] That is, {\displaystyle d(A,B)=\sum _{x\in A\cup B}\|x-c_{A\cup B}\|^{2}-\sum _{x\in A}\|x-c_{A}\|^{2}-\sum _{x\in B}\|x-c_{B}\|^{2}.} Expressed in terms of the centroids c_A and c_B and cardinalities n_A and n_B of the two clusters, it has the simpler formula {\displaystyle d(A,B)={\frac {n_{A}n_{B}}{n_{A}+n_{B}}}\|c_{A}-c_{B}\|^{2},} allowing it to be computed in constant time per distance calculation. Although highly sensitive to outliers, Ward's method is the most popular variation of agglomerative clustering, both because of the round shape of the clusters it typically forms and because of its principled definition as the clustering that at each step has the smallest variance within its clusters.[10] Alternatively, this distance can be seen as the difference in k-means cost between the new cluster and the two old clusters.
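Putting the pieces together, the following is a minimal Python sketch of the nearest-neighbor chain algorithm with Ward's distance, computed in constant time from centroids and sizes as above. It is illustrative rather than optimized: nearest neighbors are found by linear scan, and ties are broken by cluster index as described earlier.

```python
import numpy as np

def nn_chain_ward(points):
    """Agglomerative clustering via the nearest-neighbor chain algorithm,
    using Ward's (reducible) distance. Returns the list of merges."""
    points = np.asarray(points, dtype=float)
    # Active clusters: id -> (centroid, size).
    clusters = {i: (points[i], 1) for i in range(len(points))}
    merges, stack, next_id = [], [], len(points)

    def ward(a, b):
        (ca, na), (cb, nb) = clusters[a], clusters[b]
        return na * nb / (na + nb) * np.sum((ca - cb) ** 2)

    while len(clusters) > 1:
        if not stack:
            stack.append(next(iter(clusters)))  # start a new chain anywhere
        top = stack[-1]
        # Nearest neighbor of the top cluster (smallest id breaks ties).
        nn = min((c for c in clusters if c != top),
                 key=lambda c: (ward(top, c), c))
        if len(stack) >= 2 and nn == stack[-2]:
            stack.pop(); stack.pop()  # mutual nearest neighbors: merge them
            (ca, na), (cb, nb) = clusters.pop(top), clusters.pop(nn)
            clusters[next_id] = ((na * ca + nb * cb) / (na + nb), na + nb)
            merges.append((top, nn, next_id))
            next_id += 1
        else:
            stack.append(nn)          # extend the chain of nearest neighbors
    return merges
```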
Ward's distance is also reducible, as can be seen more easily from a different formula for calculating the distance of a merged cluster from the distances of the clusters it was merged from:[9][11] {\displaystyle d(A\cup B,C)={\frac {(n_{A}+n_{C})\,d(A,C)+(n_{B}+n_{C})\,d(B,C)-n_{C}\,d(A,B)}{n_{A}+n_{B}+n_{C}}}.} Distance update formulas such as this one are called formulas "of Lance–Williams type", after the work of Lance & Williams (1967). If d(A, B) is the smallest of the three distances on the right hand side (as would necessarily be true if A and B are mutual nearest neighbors), then the negative contribution from its term is cancelled by the n_C coefficient of one of the two other terms, leaving a positive value added to the weighted average of the other two distances. Therefore, the combined distance is always at least as large as the minimum of d(A, C) and d(B, C), meeting the definition of reducibility. Because Ward's distance is reducible, the nearest-neighbor chain algorithm using Ward's distance calculates exactly the same clustering as the standard greedy algorithm. For n points in a Euclidean space of constant dimension, it takes time O(n²) and space O(n).[6] Complete-linkage or furthest-neighbor clustering is a form of agglomerative clustering that defines the dissimilarity between clusters to be the maximum distance between any two points from the two clusters. Similarly, average-distance clustering uses the average pairwise distance as the dissimilarity. Like Ward's distance, these two forms of clustering obey a formula of Lance–Williams type. In complete linkage, the distance d(A ∪ B, C) is the maximum of the two distances d(A, C) and d(B, C). Therefore, it is at least equal to the minimum of these two distances, the requirement for being reducible. For average distance, d(A ∪ B, C) is just a weighted average of the distances d(A, C) and d(B, C). Again, this is at least as large as the minimum of the two distances. Thus, in both of these cases, the distance is reducible.[9][11] Unlike Ward's method, these two forms of clustering do not have a constant-time method for computing distances between pairs of clusters. Instead, it is possible to maintain an array of distances between all pairs of clusters. Whenever two clusters are merged, the formula can be used to compute the distance between the merged cluster and all other clusters. Maintaining this array over the course of the clustering algorithm takes time and space O(n²). The nearest-neighbor chain algorithm may be used in conjunction with this array of distances to find the same clustering as the greedy algorithm for these cases. Its total time and space, using this array, is also O(n²).[12] The same O(n²) time and space bounds can also be achieved in a different way, by a technique that overlays a quadtree-based priority queue data structure on top of the distance matrix and uses it to perform the standard greedy clustering algorithm. This quadtree method is more general, as it works even for clustering methods that are not reducible.[4] However, the nearest-neighbor chain algorithm matches its time and space bounds while using simpler data structures.[12] In single-linkage or nearest-neighbor clustering, the oldest form of agglomerative hierarchical clustering,[11] the dissimilarity between clusters is measured as the minimum distance between any two points from the two clusters.
With this dissimilarity, d(A ∪ B, C) = min(d(A, C), d(B, C)), meeting the requirement of reducibility as an equality rather than an inequality. (Single-linkage also obeys a Lance–Williams formula,[9][11] but with a negative coefficient from which it is more difficult to prove reducibility.) As with complete linkage and average distance, the difficulty of calculating cluster distances causes the nearest-neighbor chain algorithm to take time and space O(n²) to compute the single-linkage clustering. However, the single-linkage clustering can be found more efficiently by an alternative algorithm that computes the minimum spanning tree of the input distances using Prim's algorithm, and then sorts the minimum spanning tree edges and uses this sorted list to guide the merger of pairs of clusters. Within Prim's algorithm, each successive minimum spanning tree edge can be found by a sequential search through an unsorted list of the smallest edges connecting the partially constructed tree to each additional vertex. This choice saves the time that the algorithm would otherwise spend adjusting the weights of vertices in its priority queue. Using Prim's algorithm in this way would take time O(n²) and space O(n), matching the best bounds that could be achieved with the nearest-neighbor chain algorithm for distances with constant-time calculations.[13] Another distance measure commonly used in agglomerative clustering is the distance between the centroids of pairs of clusters, also known as the weighted group method.[9][11] It can be calculated easily in constant time per distance calculation. However, it is not reducible. For instance, if the input forms the set of three points of an equilateral triangle, merging two of these points into a larger cluster causes the inter-cluster distance to decrease, a violation of reducibility. Therefore, the nearest-neighbor chain algorithm will not necessarily find the same clustering as the greedy algorithm. Nevertheless, Murtagh (1983) writes that the nearest-neighbor chain algorithm provides "a good heuristic" for the centroid method.[2] A different algorithm by Day & Edelsbrunner (1984) can be used to find the greedy clustering in O(n²) time for this distance measure.[5] The above presentation explicitly disallowed distances sensitive to merge order. Indeed, allowing such distances can cause problems. In particular, there exist order-sensitive cluster distances which satisfy reducibility, but for which the above algorithm will return a hierarchy with suboptimal costs. Therefore, when cluster distances are defined by a recursive formula (as some of the ones discussed above are), care must be taken that they do not use the hierarchy in a way which is sensitive to merge order.[14] The nearest-neighbor chain algorithm was developed and implemented in 1982 by Jean-Paul Benzécri[15] and J. Juan.[16] They based this algorithm on earlier methods that constructed hierarchical clusterings using mutual nearest neighbor pairs without taking advantage of nearest neighbor chains.[8][17]
https://en.wikipedia.org/wiki/Nearest-neighbor_chain_algorithm
In set theory and related branches of mathematics, a family (or collection) can mean, depending upon the context, any of the following: set, indexed set, multiset, or class. A collection F of subsets of a given set S is called a family of subsets of S, or a family of sets over S. More generally, a collection of any sets whatsoever is called a family of sets, a set family, or a set system. Additionally, a family of sets may be defined as a function from a set I, known as the index set, to F, in which case the sets of the family are indexed by members of I.[1] In some contexts, a family of sets may be allowed to contain repeated copies of any given member,[2][3][4] and in other contexts it may form a proper class. A finite family of subsets of a finite set S is also called a hypergraph. The subject of extremal set theory concerns the largest and smallest examples of families of sets satisfying certain restrictions. The set of all subsets of a given set S is called the power set of S and is denoted by ℘(S). The power set ℘(S) of a given set S is a family of sets over S. A subset of S having k elements is called a k-subset of S. The k-subsets S^(k) of a set S form a family of sets. Let S = {a, b, c, 1, 2}. An example of a family of sets over S (in the multiset sense) is given by F = {A1, A2, A3, A4}, where A1 = {a, b, c}, A2 = {1, 2}, A3 = {1, 2}, and A4 = {a, b, 1}. The class Ord of all ordinal numbers is a large family of sets; that is, it is not itself a set but instead a proper class. Any family of subsets of a set S is itself a subset of the power set ℘(S) if it has no repeated members. Any family of sets without repetitions is a subclass of the proper class of all sets (the universe). Hall's marriage theorem, due to Philip Hall, gives necessary and sufficient conditions for a finite family of non-empty sets (repetitions allowed) to have a system of distinct representatives. If F is any family of sets, then ∪F := ⋃_{A∈F} A denotes the union of all sets in F, where in particular ∪∅ = ∅. Any family F of sets is a family over ∪F and also a family over any superset of ∪F. Certain types of objects from other areas of mathematics are equivalent to families of sets, in that they can be described purely as a collection of sets of objects of some type. A family of sets is said to cover a set X if every point of X belongs to some member of the family. A subfamily of a cover of X that is also a cover of X is called a subcover. A family is called a point-finite collection if every point of X lies in only finitely many members of the family.
A cover of X is a partition of X if every point of X lies in exactly one member of the family. When X is a topological space, a cover whose members are all open sets is called an open cover. A family is called locally finite if each point in the space has a neighborhood that intersects only finitely many members of the family. A σ-locally finite or countably locally finite collection is a family that is the union of countably many locally finite families. A cover $\mathcal{F}$ is said to refine another (coarser) cover $\mathcal{C}$ if every member of $\mathcal{F}$ is contained in some member of $\mathcal{C}$. A star refinement is a particular type of refinement.

A Sperner family is a set family in which none of the sets contains any of the others. Sperner's theorem bounds the maximum size of a Sperner family. A Helly family is a set family such that any minimal subfamily with empty intersection has bounded size. Helly's theorem states that convex sets in Euclidean spaces of bounded dimension form Helly families. An abstract simplicial complex is a set family F (consisting of finite sets) that is downward closed; that is, every subset of a set in F is also in F. A matroid is an abstract simplicial complex with an additional property called the augmentation property. Every filter is a family of sets. A convexity space is a set family closed under arbitrary intersections and unions of chains (with respect to the inclusion relation). Other examples of set families are independence systems, greedoids, antimatroids, and bornological spaces. Additionally, a semiring is a π-system where every complement $B \setminus A$ is equal to a finite disjoint union of sets in $\mathcal{F}$, and a semialgebra is a semiring where every complement $\Omega \setminus A$ is equal to a finite disjoint union of sets in $\mathcal{F}$. (Here $A, B, A_1, A_2, \ldots$ are arbitrary elements of $\mathcal{F}$ and it is assumed that $\mathcal{F} \neq \varnothing$.)
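For finite families, the cover, partition, and Sperner conditions defined above are straightforward to test. A minimal Python sketch (helper names are mine):

```python
def is_cover(family, X):
    """Does every point of X belong to some member of the family?"""
    members = [set(A) for A in family]
    union = set().union(*members) if members else set()
    return union >= set(X)

def is_partition(family, X):
    """A partition: non-empty, pairwise-disjoint members whose union is X."""
    members = [set(A) for A in family]
    if any(not A for A in members):
        return False
    union = set().union(*members) if members else set()
    # disjointness: total size equals size of the union
    return union == set(X) and sum(map(len, members)) == len(union)

def is_sperner(family):
    """No member of the family properly contains another member."""
    sets = [set(A) for A in family]
    return not any(a < b for a in sets for b in sets)

X = {1, 2, 3, 4}
print(is_partition([{1, 2}, {3, 4}], X))     # True
print(is_sperner([{1, 2}, {2, 3}, {1, 3}]))  # True: pairwise incomparable
```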
https://en.wikipedia.org/wiki/Family_of_sets
The grandmother cell, sometimes called the "Jennifer Aniston neuron", is a hypothetical neuron that represents a complex but specific concept or object.[1] It activates when a person "sees, hears, or otherwise sensibly discriminates"[2] a specific entity, such as their grandmother. It contrasts with the concept of ensemble coding (or "coarse" coding), where the unique set of features characterizing the grandmother is detected as a particular activation pattern across an ensemble of neurons, rather than being detected by a specific "grandmother cell".[1]

The term was coined around 1969 by cognitive scientist Jerry Lettvin.[1] Rather than serving as a serious hypothesis, the "grandmother cell" concept was initially largely used in jokes and came to be used as a "straw man or foil" for discussions of ensemble theories in introductory textbooks.[1] However, a similar concept, that of the gnostic neuron, was introduced several years earlier by Jerzy Konorski as a serious proposal.[3][1]

In 1953, Horace Barlow described cells in a frog retina as "bug detectors", but the term did not gain wide usage.[4][1] Several years later, Jerome (Jerry) Lettvin and others also studied these and other cells, eventually resulting in their widely known 1959 paper "What the frog’s eye tells the frog’s brain".[1]

Around 1969, Lettvin introduced the term "grandmother cell" in a course he was teaching at MIT, telling a fictitious anecdote about a neurosurgeon who had discovered a group of "mother cells" in the brain that "responded uniquely only to a mother... whether animate or stuffed, seen from before or behind, upside down or on a diagonal or offered by caricature, photograph or abstraction".[1] In Lettvin's story, the neurosurgeon went on to remove (ablate) all these "several thousand separate neurons" from the brain of Portnoy, the title character of Philip Roth's 1969 novel Portnoy's Complaint, thus curing him of his obsession with his mother, and then went on to study "grandmother cells" instead.[1]

By 2005, Ed Connor observed that the term had "become a shorthand for invoking all of the overwhelming practical arguments against a one-to-one object coding scheme. No one wants to be accused of believing in grandmother cells."[5] However, in that year UCLA neurosurgeon Itzhak Fried, his mentee Rodrigo Quian Quiroga, and others published findings on what they would come to call the "Jennifer Aniston neuron".[5][6] After operating on patients who experienced epileptic seizures, the researchers showed the patients photos of celebrities such as Jennifer Aniston. The patients, who were fully conscious, often had a particular neuron fire, suggesting that the brain has Aniston-specific neurons.[6][7]

Visual neurons in the inferior temporal cortex of the monkey fire selectively to hands and faces.[8][9][10][11] These cells are selective in that they do not fire for other visual objects important to monkeys, such as fruit and genitalia.
Research finds that some of these cells can be trained to show high specificity for arbitrary visual objects, and these would seem to fit the requirements of gnostic/grandmother cells.[12][13] In addition, evidence exists for cells in the human hippocampus that have highly selective responses to different categories of stimuli,[14][15] including highly selective responses to individual human faces.[16]

However, most of the reported face-selective cells are not grandmother/gnostic cells, since they do not represent a specific percept; that is, they are not cells narrowly selective in their activations for one face and only one face, irrespective of transformations of size, orientation, and color. Even the most selective face cells usually also discharge, if more weakly, to a variety of individual faces. Furthermore, face-selective cells often vary in their responsiveness to different aspects of faces. This suggests that the cells' responsiveness arises from a monkey's need to differentiate among individual faces rather than among other categories of stimuli, such as bananas; their discrimination properties reflect the fact that individual faces are much more similar to one another in overall organization and fine detail than other kinds of stimuli are.[1] Moreover, it has been suggested that these cells might in fact be responding as specialized feature-detector neurons that only function in the holistic context of a face construct.[17][18]

One idea has been that such cells form ensembles for the coarse or distributed coding of faces, rather than detectors for specific faces. Thus, a specific grandmother may be represented by a specialized ensemble of grandmother or near-grandmother cells.[1]

In 2005, a UCLA and Caltech study found evidence of different cells that fire in response to particular people, such as Bill Clinton or Jennifer Aniston. A neuron for Halle Berry, for example, might respond "to the concept, the abstract entity, of Halle Berry", and would fire not only for images of Halle Berry, but also for the actual name "Halle Berry".[19] However, there is no suggestion in that study that only the cell being monitored responded to that concept, nor was it suggested that no other actress would cause that cell to respond (although several other presented images of actresses did not cause it to respond).[19] The researchers believe that they have found evidence for sparseness, rather than for grandmother cells.[20]

Further evidence for the theory that a small neural network provides facial recognition was found from analysis of cell-recording studies of macaque monkeys. By formatting faces as points in a high-dimensional linear space, the scientists discovered that each face cell's firing rate is proportional to the projection of an incoming face stimulus onto a single axis in this space, allowing a face-cell ensemble of about 200 cells to encode the location of any face in the space.[21]

The grandmother cell hypothesis is an extreme version of the idea of sparseness[22][5] and is not without critics. The opposite of the grandmother cell theory is the distributed representation theory, which holds that a specific stimulus is coded by its unique pattern of activity over a large group of neurons widely distributed in the brain. Several arguments have been raised against extreme sparseness. William James in 1890 proposed a related idea of a pontifical cell.[23] The pontifical cell is defined as a putative, and implausible, cell which had all our experiences.
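The axis-coding result just described has a simple linear-algebra core. Here is an illustrative numpy sketch (the dimensions, the random axes, and the noiseless-decoding setup are my assumptions, not the study's actual data): each cell fires in proportion to the projection of the face vector onto its axis, and about 200 such axes suffice to recover a face in a 50-dimensional face space by least squares.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_cells = 50, 200                        # face-space dimension; number of cells
axes = rng.standard_normal((n_cells, d))    # each cell's preferred axis

def firing_rates(face):
    """Axis coding: each cell's rate is the projection of the face
    vector onto that cell's axis (baseline firing omitted here)."""
    return axes @ face

face = rng.standard_normal(d)
rates = firing_rates(face)

# With ~200 axes spanning a 50-dimensional space, the face can be
# decoded from the population response by least squares.
decoded, *_ = np.linalg.lstsq(axes, rates, rcond=None)
print(np.allclose(decoded, face))           # True
```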
This is different from a concept-specific cell in that it is the site of experience of sense data. James's 1890 pontifical cell was instead a cell "to which the rest of the brain provided a representation" of a grandmother; the experience of the grandmother occurred in this cell.
https://en.wikipedia.org/wiki/Grandmother_cell
In mathematics, a singularity is a point at which a given mathematical object is not defined, or a point where the mathematical object ceases to be well-behaved in some particular way, such as by lacking differentiability or analyticity.[1][2][3] For example, the reciprocal function $f(x) = 1/x$ has a singularity at $x = 0$, where the value of the function is not defined, as it involves a division by zero. The absolute value function $g(x) = |x|$ also has a singularity at $x = 0$, since it is not differentiable there.[4] The algebraic curve defined by $\{(x, y) : y^3 - x^2 = 0\}$ in the $(x, y)$ coordinate system has a singularity (called a cusp) at $(0, 0)$. For singularities in algebraic geometry, see singular point of an algebraic variety. For singularities in differential geometry, see singularity theory.

In real analysis, singularities are either discontinuities, or discontinuities of the derivative (sometimes also discontinuities of higher-order derivatives). There are four kinds of discontinuities: type I, which has two subtypes, and type II, which can also be divided into two subtypes (though usually is not). To describe the way these two types of limits are being used, suppose that $f(x)$ is a function of a real argument $x$; then for any value of its argument, say $c$, the left-handed limit and the right-handed limit are defined by $f(c^-) = \lim_{x \to c^-} f(x)$ and $f(c^+) = \lim_{x \to c^+} f(x)$. The value $f(c^-)$ is the value that the function $f(x)$ tends towards as the value $x$ approaches $c$ from below, and the value $f(c^+)$ is the value that the function $f(x)$ tends towards as the value $x$ approaches $c$ from above, regardless of the actual value the function has at the point where $x = c$. There are some functions for which these limits do not exist at all. For example, the function $g(x) = \sin(1/x)$ does not tend towards anything as $x$ approaches $c = 0$. The limits in this case are not infinite, but rather undefined: there is no value that $g(x)$ settles in on. Borrowing from complex analysis, this is sometimes called an essential singularity. The possible cases at a given value $c$ for the argument are then: a point of continuity, where $f(c^-) = f(c) = f(c^+)$; a type I discontinuity, where both one-sided limits exist and are finite, either differing from each other (a jump discontinuity) or agreeing with each other but not with $f(c)$ (a removable discontinuity); and a type II (essential) discontinuity, where at least one of the two one-sided limits does not exist or is infinite. In real analysis, a singularity or discontinuity is a property of a function alone. Any singularities that may exist in the derivative of a function are considered as belonging to the derivative, not to the original function.

A coordinate singularity occurs when an apparent singularity or discontinuity occurs in one coordinate frame but can be removed by choosing a different frame. An example of this is the apparent singularity at the 90-degree latitude in spherical coordinates. An object moving due north (for example, along the line 0 degrees longitude) on the surface of a sphere will suddenly experience an instantaneous change in longitude at the pole (in the case of the example, jumping from longitude 0 to longitude 180 degrees). This discontinuity, however, is only apparent; it is an artifact of the coordinate system chosen, which is singular at the poles. A different coordinate system would eliminate the apparent discontinuity (e.g., by replacing the latitude/longitude representation with an n-vector representation).
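As a worked illustration of this classification (the examples are mine, stated in the article's notation):

```latex
\[
f(x) = \tfrac{1}{x}:\quad f(0^-) = -\infty,\ f(0^+) = +\infty
\ \Longrightarrow\ \text{type II (infinite discontinuity);}
\]
\[
f(x) = \operatorname{sgn}(x):\quad f(0^-) = -1,\ f(0^+) = +1
\ \Longrightarrow\ \text{type I (jump discontinuity);}
\]
\[
f(x) = \tfrac{x^2}{x}:\quad f(0^-) = f(0^+) = 0,\ f(0)\ \text{undefined}
\ \Longrightarrow\ \text{type I (removable discontinuity).}
\]
```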
In complex analysis, there are several classes of singularities. These include the isolated singularities, the nonisolated singularities, and the branch points. Suppose that $f$ is a function that is complex differentiable in the complement of a point $a$ in an open subset $U$ of the complex numbers $\mathbb{C}$. Then the point $a$ is an isolated singularity of $f$, classified as removable, a pole, or an essential singularity according to whether $f$ extends holomorphically to $a$, $|f(z)| \to \infty$ as $z \to a$, or neither. Other than isolated singularities, complex functions of one variable may exhibit other singular behaviour. These are termed nonisolated singularities, of which there are two types: cluster points (limit points of isolated singularities) and natural boundaries (non-isolated sets past which no analytic continuation is possible). Branch points are generally the result of a multi-valued function, such as $\sqrt{z}$ or $\log(z)$, which are defined within a certain limited domain so that the function can be made single-valued within the domain. The cut is a line or curve excluded from the domain to introduce a technical separation between discontinuous values of the function. When the cut is genuinely required, the function will have distinctly different values on each side of the branch cut. The shape of the branch cut is a matter of choice, even though it must connect two different branch points (such as $z = 0$ and $z = \infty$ for $\log(z)$) which are fixed in place.

A finite-time singularity occurs when one input variable is time and an output variable increases towards infinity at a finite time. These are important in kinematics and partial differential equations; infinities do not occur physically, but the behavior near the singularity is often of interest. Mathematically, the simplest finite-time singularities are power laws for various exponents, of the form $x^{-\alpha}$, of which the simplest is hyperbolic growth, where the exponent is (negative) 1: $x^{-1}$. More precisely, in order to get a singularity at positive time as time advances (so the output grows to infinity), one instead uses $(t_0 - t)^{-\alpha}$ (using $t$ for time, reversing direction to $-t$ so that time increases to infinity, and shifting the singularity forward from 0 to a fixed time $t_0$). An example would be the bouncing motion of an inelastic ball on a plane. If idealized motion is considered, in which the same fraction of kinetic energy is lost on each bounce, the frequency of bounces becomes infinite as the ball comes to rest in a finite time. Other examples of finite-time singularities include the various forms of the Painlevé paradox (for example, the tendency of a piece of chalk to skip when dragged across a blackboard), and how the precession rate of a coin spun on a flat surface accelerates towards infinity before abruptly stopping (as studied using the Euler's Disk toy). Hypothetical examples include Heinz von Foerster's facetious "Doomsday's equation" (simplistic models yield infinite human population in finite time).

In algebraic geometry, a singularity of an algebraic variety is a point of the variety where the tangent space may not be regularly defined. The simplest examples of singularities are curves that cross themselves. But there are other types of singularities, like cusps. For example, the equation $y^2 - x^3 = 0$ defines a curve that has a cusp at the origin $x = y = 0$. One could define the x-axis as a tangent at this point, but this definition cannot be the same as the definition at other points. In fact, in this case, the x-axis is a "double tangent."
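The bouncing-ball example can be made quantitative with a few lines of Python (the numbers are my own toy parameters): if each flight takes a constant factor r times as long as the previous one, the bounce times form a geometric series converging to a finite instant t0, while the bounce frequency diverges as that instant approaches.

```python
# Finite-time singularity sketch: an idealized ball loses a fixed
# fraction of its kinetic energy per bounce, so each flight time
# shrinks by a constant factor r; the bounce times sum to a finite t0
# even though infinitely many bounces occur before it.
t_first, r = 1.0, 0.8           # first flight time; per-bounce time ratio
t0 = t_first / (1.0 - r)        # sum of the geometric series: finite

t, flight = 0.0, t_first
for k in range(1, 60):
    t += flight
    if k % 10 == 0:
        print(f"bounce {k:2d}: time = {t:.6f}, frequency = {1/flight:.1f}")
    flight *= r
print(f"all infinitely many bounces complete by t0 = {t0:.6f}")
```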
For affine and projective varieties, the singularities are the points where the Jacobian matrix has a rank which is lower than at other points of the variety. An equivalent definition in terms of commutative algebra may be given, which extends to abstract varieties and schemes: a point is singular if the local ring at this point is not a regular local ring.
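For a hypersurface such as the cuspidal cubic above, the Jacobian criterion reduces to a one-row matrix of partial derivatives, which is easy to check symbolically. A small sympy sketch (my own, not from the article):

```python
import sympy as sp

x, y = sp.symbols('x y')
f = y**2 - x**3                      # the cuspidal cubic from above

# For the hypersurface f = 0, the Jacobian matrix is the row of
# partial derivatives; singular points are where it drops rank,
# i.e. where both partials vanish together with f.
J = sp.Matrix([[sp.diff(f, x), sp.diff(f, y)]])
print(J)                             # Matrix([[-3*x**2, 2*y]])
print(sp.solve([f, sp.diff(f, x), sp.diff(f, y)], [x, y]))   # [(0, 0)]
```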
https://en.wikipedia.org/wiki/Mathematical_singularity
Where a device needs a username and/or password to log in, a default password is usually provided to access the device during its initial setup, or after resetting to factory defaults. Manufacturers of such equipment typically use a simple password, such as admin or password, on all equipment they ship, expecting users to change the password during configuration. The default username and password are usually found in the instruction manual (common for all devices) or on the device itself.[citation needed]

Default passwords are one of the major contributing factors to large-scale compromises of home routers.[1] Leaving such a password on devices available to the public is a major security risk.[2][3][4][5] There are several proof-of-concept (PoC) worms, as well as real-world worms running across the internet, that are configured to search for systems set up with a default username and password. Voyager Alpha Force, Zotob, and MySpooler are a few examples of PoC malware which scan the Internet for specific devices and try to log in using the default credentials.[6]

In the real world, many forms of malware, such as Mirai, have used this vulnerability. Once devices have been compromised by exploiting the default-credential vulnerability, they can themselves be used for various harmful purposes, such as carrying out distributed denial-of-service (DDoS) attacks. In one particular incident, a hacker was able to gain access to and control of a large number of networks, including those of the University of Maryland, Baltimore County, Imagination, and Capital Market Strategies L, by leveraging the fact that they were using the default credentials for their NetGear switch.[7]

Some devices (such as wireless routers) will have unique default router usernames and passwords printed on a sticker, which is more secure than a common default password. Some vendors will, however, derive the password from the device's MAC address using a known algorithm, in which case the password can also be easily reproduced by attackers.[8]
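Worms such as Mirai essentially automate a login loop over a list of factory defaults. For the defensive direction, here is a deliberately minimal Python sketch for auditing a device you own (the address, endpoint, and credential list are placeholders, and real devices may use form- or token-based logins rather than HTTP basic auth):

```python
import requests  # third-party: pip install requests

# Try a handful of well-known factory defaults against a device's HTTP
# admin interface on your own network, flagging any that still work.
DEVICE = "http://192.0.2.1/"    # TEST-NET placeholder; replace with your device
DEFAULTS = [("admin", "admin"), ("admin", "password"), ("root", "root")]

for user, pw in DEFAULTS:
    try:
        r = requests.get(DEVICE, auth=(user, pw), timeout=3)
    except requests.RequestException:
        print("device unreachable; stopping")
        break
    if r.status_code == 200:
        print(f"WARNING: default credentials still active: {user}/{pw}")
        break
else:
    print("no tested default credential was accepted")
```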
https://en.wikipedia.org/wiki/Default_Credential_vulnerability
In mathematics, an open set is a generalization of an open interval in the real line. In a metric space (a set with a distance defined between every two points), an open set is a set that, with every point P in it, contains all points of the metric space that are sufficiently near to P (that is, all points whose distance to P is less than some value depending on P). More generally, an open set is a member of a given collection of subsets of a given set, a collection that has the property of containing every union of its members, every finite intersection of its members, the empty set, and the whole set itself. A set in which such a collection is given is called a topological space, and the collection is called a topology. These conditions are very loose, and allow enormous flexibility in the choice of open sets. For example, every subset can be open (the discrete topology), or no subset can be open except the space itself and the empty set (the indiscrete topology).[1] In practice, however, open sets are usually chosen to provide a notion of nearness that is similar to that of metric spaces, without having a notion of distance defined. In particular, a topology allows defining properties such as continuity, connectedness, and compactness, which were originally defined by means of a distance. The most common case of a topology without any distance is given by manifolds, which are topological spaces that, near each point, resemble an open set of a Euclidean space, but on which no distance is defined in general. Less intuitive topologies are used in other branches of mathematics; for example, the Zariski topology, which is fundamental in algebraic geometry and scheme theory.

Intuitively, an open set provides a method to distinguish two points. For example, if about one of two points in a topological space, there exists an open set not containing the other (distinct) point, the two points are referred to as topologically distinguishable. In this manner, one may speak of whether two points, or more generally two subsets, of a topological space are "near" without concretely defining a distance. Therefore, topological spaces may be seen as a generalization of spaces equipped with a notion of distance, which are called metric spaces.

In the set of all real numbers, one has the natural Euclidean metric; that is, a function which measures the distance between two real numbers: $d(x, y) = |x - y|$. Therefore, given a real number x, one can speak of the set of all points close to that real number; that is, within ε of x. In essence, points within ε of x approximate x to an accuracy of degree ε. Note that ε > 0 always, but as ε becomes smaller and smaller, one obtains points that approximate x to a higher and higher degree of accuracy. For example, if x = 0 and ε = 1, the points within ε of x are precisely the points of the interval (−1, 1); that is, the set of all real numbers between −1 and 1. However, with ε = 0.5, the points within ε of x are precisely the points of (−0.5, 0.5). Clearly, these points approximate x to a greater degree of accuracy than when ε = 1. The previous discussion shows, for the case x = 0, that one may approximate x to higher and higher degrees of accuracy by defining ε to be smaller and smaller. In particular, sets of the form (−ε, ε) give us a lot of information about points close to x = 0. Thus, rather than speaking of a concrete Euclidean metric, one may use sets to describe points close to x.
This innovative idea has far-reaching consequences; in particular, by defining different collections of sets containing 0 (distinct from the sets (−ε, ε)), one may find different results regarding the distance between 0 and other real numbers. For example, if we were to define R as the only such set for "measuring distance", all points are close to 0, since there is only one possible degree of accuracy one may achieve in approximating 0: being a member of R. Thus, we find that in some sense, every real number is distance 0 away from 0. It may help in this case to think of the measure as being a binary condition: all things in R are equally close to 0, while any item that is not in R is not close to 0. In general, one refers to the family of sets containing 0, used to approximate 0, as a neighborhood basis; a member of this neighborhood basis is referred to as an open set. In fact, one may generalize these notions to an arbitrary set X, rather than just the real numbers. In this case, given a point x of that set, one may define a collection of sets "around" (that is, containing) x, used to approximate x. Of course, this collection would have to satisfy certain properties (known as axioms), for otherwise we may not have a well-defined method to measure distance. For example, every point in X should approximate x to some degree of accuracy, so X should be in this family. Once we begin to define "smaller" sets containing x, we tend to approximate x to a greater degree of accuracy. Bearing this in mind, one may define the remaining axioms that the family of sets about x is required to satisfy.

Several definitions are given here, in an increasing order of technicality. Each one is a special case of the next one. A subset U of the Euclidean n-space $\mathbb{R}^n$ is open if, for every point x in U, there exists a positive real number ε (depending on x) such that any point in $\mathbb{R}^n$ whose Euclidean distance from x is smaller than ε belongs to U.[2] Equivalently, a subset U of $\mathbb{R}^n$ is open if every point in U is the center of an open ball contained in U. An example of a subset of R that is not open is the closed interval [0, 1], since neither $0 - \varepsilon$ nor $1 + \varepsilon$ belongs to [0, 1] for any ε > 0, no matter how small. A subset U of a metric space (M, d) is called open if, for any point x in U, there exists a real number ε > 0 such that any point $y \in M$ satisfying $d(x, y) < \varepsilon$ belongs to U. Equivalently, U is open if every point in U has a neighborhood contained in U. This generalizes the Euclidean space example, since Euclidean space with the Euclidean distance is a metric space.

A topology τ on a set X is a set of subsets of X with the following properties: τ contains the empty set and X itself, the union of any collection of members of τ is a member of τ, and the intersection of any finite number of members of τ is a member of τ. Each member of τ is called an open set.[3] X together with τ is called a topological space. Infinite intersections of open sets need not be open. For example, the intersection of all intervals of the form (−1/n, 1/n), where n is a positive integer, is the set {0}, which is not open in the real line. A metric space is a topological space whose topology consists of the collection of all subsets that are unions of open balls. There are, however, topological spaces that are not metric spaces. The union of any collection of open sets, even infinitely many, is open.[4] The intersection of a finite number of open sets is open.[4] A complement of an open set (relative to the space that the topology is defined on) is called a closed set.
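For a finite collection of subsets, these axioms can be checked directly; closure under pairwise unions and intersections is enough, since every subfamily of a finite collection is finite. A small Python sketch (names are mine):

```python
from itertools import combinations

def is_topology(tau, X):
    """Check the open-set axioms for a finite collection tau of subsets
    of X: tau contains the empty set and X, and is closed under unions
    and intersections (pairwise closure suffices for finite tau)."""
    tau = {frozenset(U) for U in tau}
    X = frozenset(X)
    if frozenset() not in tau or X not in tau:
        return False
    for U, V in combinations(tau, 2):
        if U | V not in tau or U & V not in tau:
            return False
    return True

X = {1, 2, 3}
print(is_topology([set(), {1}, {1, 2}, X], X))   # True: a nested chain
print(is_topology([set(), {1}, {2}, X], X))      # False: {1} | {2} is missing
```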
A set may be both open and closed (a clopen set). The empty set and the full space are examples of sets that are both open and closed.[5] A set can never be considered open by itself; this notion is relative to a containing set and a specific topology on it. Whether a set is open depends on the topology under consideration. Having opted for greater brevity over greater clarity, we refer to a set X endowed with a topology τ as "the topological space X" rather than "the topological space (X, τ)", despite the fact that all the topological data is contained in τ. If there are two topologies on the same set, a set U that is open in the first topology might fail to be open in the second topology.

For example, if X is any topological space and Y is any subset of X, the set Y can be given its own topology (called the "subspace topology") defined by "a set U is open in the subspace topology on Y if and only if U is the intersection of Y with an open set from the original topology on X".[6] This potentially introduces new open sets: if V is open in the original topology on X, but $V \cap Y$ isn't open in the original topology on X, then $V \cap Y$ is open in the subspace topology on Y. As a concrete example of this, if U is defined as the set of rational numbers in the interval (0, 1), then U is an open subset of the rational numbers, but not of the real numbers. This is because when the surrounding space is the rational numbers, for every point x in U, there exists a positive number a such that all rational points within distance a of x are also in U. On the other hand, when the surrounding space is the reals, then for every point x in U there is no positive a such that all real points within distance a of x are in U (because U contains no non-rational numbers).

Open sets have a fundamental importance in topology. The concept is required to define and make sense of topological spaces and other topological structures that deal with the notions of closeness and convergence for spaces such as metric spaces and uniform spaces. Every subset A of a topological space X contains a (possibly empty) open set; the maximum (ordered under inclusion) such open set is called the interior of A. It can be constructed by taking the union of all the open sets contained in A.[7] A function $f : X \to Y$ between two topological spaces X and Y is continuous if the preimage of every open set in Y is open in X.[8] The function $f : X \to Y$ is called open if the image of every open set in X is open in Y. An open set on the real line has the characteristic property that it is a countable union of disjoint open intervals.

A set might be open, closed, both, or neither. In particular, open and closed sets are not mutually exclusive, meaning that it is in general possible for a subset of a topological space to simultaneously be both an open subset and a closed subset. Such subsets are known as clopen sets. Explicitly, a subset S of a topological space (X, τ) is called clopen if both S and its complement $X \setminus S$ are open subsets of (X, τ); or equivalently, if $S \in \tau$ and $X \setminus S \in \tau$. In any topological space (X, τ), the empty set ∅ and the set X itself are always clopen.
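The subspace construction is mechanical enough to compute for finite examples. A short Python sketch (names mine), intersecting each ambient open set with Y:

```python
def subspace_topology(tau, Y):
    """Open sets of Y in the subspace topology: intersections of Y
    with the open sets of the ambient space."""
    Y = frozenset(Y)
    return {frozenset(U) & Y for U in tau}

X = {1, 2, 3, 4}
tau = [frozenset(s) for s in [set(), {1, 2}, {3, 4}, X]]
for U in sorted(subspace_topology(tau, {2, 3}), key=lambda s: sorted(s)):
    print(set(U))
# {2} and {3} are open in the subspace even though neither is open in X
```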
These two sets are the most well-known examples of clopen subsets, and they show that clopen subsets exist in every topological space. To see this, it suffices to remark that, by definition of a topology, X and ∅ are both open, and that they are also closed, since each is the complement of the other.

The open sets of the usual Euclidean topology of the real line $\mathbb{R}$ are the empty set, the open intervals, and every union of open intervals. If a topological space X is endowed with the discrete topology (so that by definition, every subset of X is open), then every subset of X is a clopen subset. For a more advanced example reminiscent of the discrete topology, suppose that $\mathcal{U}$ is an ultrafilter on a non-empty set X. Then the union $\tau := \mathcal{U} \cup \{\varnothing\}$ is a topology on X with the property that every non-empty proper subset S of X is either an open subset or else a closed subset, but never both; that is, if $\varnothing \neq S \subsetneq X$ (where $S \neq X$), then exactly one of the following two statements is true: either (1) $S \in \tau$ or else (2) $X \setminus S \in \tau$. Said differently, every subset is open or closed, but the only subsets that are both (that is, that are clopen) are ∅ and X.

A subset S of a topological space X is called a regular open set if $\operatorname{Int}(\overline{S}) = S$ or, equivalently, if $\operatorname{Bd}(\overline{S}) = \operatorname{Bd} S$, where $\operatorname{Bd} S$, $\operatorname{Int} S$, and $\overline{S}$ denote, respectively, the topological boundary, interior, and closure of S in X. A topological space for which there exists a base consisting of regular open sets is called a semiregular space. A subset of X is a regular open set if and only if its complement in X is a regular closed set, where by definition a subset S of X is called a regular closed set if $\overline{\operatorname{Int} S} = S$ or, equivalently, if $\operatorname{Bd}(\operatorname{Int} S) = \operatorname{Bd} S$. Every regular open set (resp. regular closed set) is an open subset (resp. a closed subset), although in general[note 1] the converses are not true.

Throughout, (X, τ) will be a topological space. A subset $A \subseteq X$ of a topological space X is called, among other generalized-openness variants: preopen if $A \subseteq \operatorname{Int}(\overline{A})$; semi-open if $A \subseteq \overline{\operatorname{Int} A}$; α-open if $A \subseteq \operatorname{Int}(\overline{\operatorname{Int} A})$; b-open if $A \subseteq \operatorname{Int}(\overline{A}) \cup \overline{\operatorname{Int} A}$; and β-open (or semi-preopen) if $A \subseteq \overline{\operatorname{Int}(\overline{A})}$. The complement of a preopen set is called preclosed. The complement of a β-open set is called β-closed. The complement of a sequentially open set is called sequentially closed. A subset $S \subseteq X$ is sequentially closed in X if and only if S is equal to its sequential closure, which by definition is the set $\operatorname{SeqCl}_X S$ consisting of all $x \in X$ for which there exists a sequence in S that converges to x (in X).
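A quick worked example for the regular open sets defined above (the example is mine): an open set can fail to be regular open because taking the closure and then the interior fills in a deleted point.

```latex
% S = (0,1) \cup (1,2) is open in \mathbb{R} but not regular open:
\[
S = (0,1) \cup (1,2), \qquad \overline{S} = [0,2], \qquad
\operatorname{Int}\bigl(\overline{S}\bigr) = (0,2) \neq S .
\]
% By contrast, S = (0,1) satisfies \operatorname{Int}(\overline{S}) = (0,1) = S,
% so it is regular open.
```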
Using the fact that whenever two subsets $A, B \subseteq X$ satisfy $A \subseteq B$, then $\operatorname{Int} A \subseteq \operatorname{Int} B$ and $\overline{A} \subseteq \overline{B}$, the basic containments among these classes may be deduced. Moreover, a subset is a regular open set if and only if it is preopen and semi-closed.[10] The intersection of an α-open set and a semi-preopen (resp. semi-open, preopen, b-open) set is a semi-preopen (resp. semi-open, preopen, b-open) set.[10] Preopen sets need not be semi-open, and semi-open sets need not be preopen.[10] Arbitrary unions of preopen (resp. α-open, b-open, semi-preopen) sets are once again preopen (resp. α-open, b-open, semi-preopen).[10] However, finite intersections of preopen sets need not be preopen.[13] The set of all α-open subsets of a space (X, τ) forms a topology on X that is finer than τ.[9] A topological space X is Hausdorff if and only if every compact subspace of X is θ-closed.[13] A space X is totally disconnected if and only if every regular closed subset is preopen or, equivalently, if every semi-open subset is preopen. Moreover, the space is totally disconnected if and only if the closure of every preopen subset is open.[9]
https://en.wikipedia.org/wiki/Open_set