Personal rapid transit (PRT), also referred to as podcars or guided/railed taxis, is a public transport mode featuring a network of specially built guideways on which small automated vehicles carry a few (generally fewer than six) passengers each. PRT is a type of automated guideway transit (AGT), a class of system that also includes larger vehicles, all the way up to small subway systems.[1] In terms of routing, it tends towards personal public transport systems. PRT vehicles are sized for individual or small-group travel, typically carrying no more than three to six passengers per vehicle.[2] Guideways are arranged in a network topology, with all stations located on sidings and with frequent merge/diverge points. This allows for nonstop, point-to-point travel, bypassing all intermediate stations. The point-to-point service has been compared to a taxi or a horizontal lift (elevator). Numerous PRT systems have been proposed but most have not been implemented. As of November 2016[update], only a handful of PRT systems are operational: Morgantown Personal Rapid Transit (the oldest and most extensive), in Morgantown, West Virginia, has been in continuous operation since 1975. Since 2010 a 10-vehicle 2getthere system has operated at Masdar City, UAE, and since 2011 a 21-vehicle Ultra PRT system has run at London Heathrow Airport. A 40-vehicle Vectus system with in-line stations officially opened in Suncheon,[3] South Korea, in April 2014.[4][5] A PRT system connecting the terminals and parking has been built at the new Chengdu Tianfu International Airport, which opened in 2021.[6][7] Most mass transit systems move people in groups over scheduled routes. This has inherent inefficiencies.[8] For passengers, time is wasted waiting for the next vehicle to arrive, following indirect routes to their destination, stopping for passengers with other destinations, and deciphering often confusing or inconsistent schedules.
Slowing and accelerating large weights can undermine public transport's benefit to the environment while slowing other traffic.[8] Personal rapid transit systems attempt to eliminate these wastes by moving small groups nonstop in automated vehicles on fixed tracks. Passengers can ideally board a pod immediately upon arriving at a station, and can – with a sufficiently extensive network of tracks – take relatively direct routes to their destination without stops.[8][9] The low weight of PRT's small vehicles allows smaller guideways and support structures than mass transit systems like light rail.[8] The smaller structures translate into lower construction costs, smaller easements, and less visually obtrusive infrastructure.[8] As it stands, a citywide deployment with many lines and closely spaced stations, as envisioned by proponents, has yet to be constructed. Past projects have failed because of financing, cost overruns, regulatory conflicts, political issues, misapplied technology, and flaws in design, engineering or review.[8] However, the theory remains active. For example, from 2002 to 2005 the EDICT project, sponsored by the European Union, conducted a study on the feasibility of PRT in four European cities. The study involved 12 research organizations and concluded that PRT:[10] The report also concluded that, despite these advantages, public authorities will not commit to building PRT because of the risks associated with being the first public implementation.[10][11] The PRT acronym was introduced formally in 1978 by J. Edward Anderson.[12] The Advanced Transit Association (ATRA), a group which advocates the use of technological solutions to transit problems, compiled a definition in 1988.[13] Currently, five advanced transit network (ATN) systems are operational, and several more are in the planning stage.[14] Morgantown, West Virginia, US (1975)[15] In addition, one PRT system has completed construction but has not been commissioned.
The following list summarizes several well-known automated transit network (ATN) suppliers as of 2014, with subsequent amendments.[34] Modern PRT concepts began around 1953, when Donn Fichter, a city transportation planner, began research on PRT and alternative transportation methods. In 1964, Fichter published a book[38] which proposed an automated public transit system for areas of medium to low population density. One of the key points made in the book was Fichter's belief that people would not leave their cars in favor of public transit unless the system offered flexibility and end-to-end transit times that were much better than existing systems – flexibility and performance he felt only a PRT system could provide. Several other urban and transit planners also wrote on the topic and some early experimentation followed, but PRT remained relatively unknown. Around the same time, Edward Haltom was studying monorail systems. Haltom noticed that the time to start and stop a conventional large monorail train, like those of the Wuppertal Schwebebahn, meant that a single line could only support between 20 and 40 vehicles an hour. To achieve reasonable passenger movements on such a system, the trains had to be large enough to carry hundreds of passengers (see headway for a general discussion). This, in turn, demanded large guideways that could support the weight of these large vehicles, driving up capital costs to the point where he considered them unattractive.[39] Haltom turned his attention to developing a system that could operate with shorter timings, thereby allowing the individual cars to be smaller while preserving the same overall route capacity. Smaller cars would mean less weight at any given point, which meant smaller and less expensive guideways. To eliminate the backup at stations, the system used "offline" stations that allowed the mainline traffic to bypass the stopped vehicles.
He designed the Monocab system using six-passenger cars suspended on wheels from an overhead guideway. Like most suspended systems, it suffered from the problem of difficult switching arrangements. Since the car rode on a rail, switching from one path to another required the rail to be moved, a slow process that limited the possible headways.[39] By the late 1950s the problems with urban sprawl were becoming evident in the United States. When cities improved roads and transit times were lowered, suburbs developed at ever increasing distances from the city cores, and people moved out of the downtown areas. Lacking pollution control systems, the rapid rise in car ownership and the longer trips to and from work were causing significant air quality problems. Additionally, movement to the suburbs led to a flight of capital from the downtown areas, one cause of the rapid urban decay seen in the US. Mass transit systems were one way to combat these problems. Yet during this period, the federal government was feeding the problems by funding the development of the Interstate Highway System, while at the same time funding for mass transit was being rapidly scaled back. Public transit ridership in most cities plummeted.[40] In 1962, President John F. Kennedy charged Congress with the task of addressing these problems. These plans came to fruition in 1964, when President Lyndon B. Johnson signed the Urban Mass Transportation Act of 1964 into law, thereby forming the Urban Mass Transportation Administration (UMTA).[41] UMTA was set up to fund mass transit developments in the same fashion that the earlier Federal Aid Highway Act of 1956 had helped create the Interstate Highways. That is, UMTA would help cover the capital costs of building out new infrastructure. However, planners who were aware of the PRT concept were worried that building more systems based on existing technologies would not help the problem, as Fichter had earlier noted.
Proponents suggested that systems would have to offer the flexibility of a car: The reason for the sad state of public transit is a very basic one – the transit systems just do not offer a service which will attract people away from their automobiles. Consequently, their patronage comes very largely from those who cannot drive, either because they are too young, too old, or because they are too poor to own and operate an automobile. Look at it from the standpoint of a commuter who lives in a suburb and is trying to get to work in the central business district (CBD). If he is going to go by transit, a typical scenario might be the following: he must first walk to the closest bus stop, let us say a five or ten minute walk, and then he may have to wait up to another ten minutes, possibly in inclement weather, for the bus to arrive. When it arrives, he may have to stand unless he is lucky enough to find a seat. The bus will be caught up in street congestion and move slowly, and it will make many stops completely unrelated to his trip objective. The bus may then let him off at a terminal to a suburban train. Again he must wait, and, after boarding the train, again experience a number of stops on the way to the CBD, and possibly again he may have to stand in the aisle. He will get off at the station most convenient to his destination and possibly have to transfer again onto a distribution system. It is no wonder that in those cities where ample inexpensive parking is available, most of those who can drive do drive.[42] In 1966, the United States Department of Housing and Urban Development was asked to "undertake a project to study ... new systems of urban transportation that will carry people and goods ... speedily, safely, without polluting the air, and in a manner that will contribute to sound city planning." The resulting report was published in 1968[43] and proposed the development of PRT, as well as other systems such as dial-a-bus and high-speed interurban links.
In the late 1960s, the Aerospace Corporation, an independent non-profit corporation set up by the US Congress, spent substantial time and money on PRT and performed much of the early theoretical and systems analysis. However, this corporation is not allowed to sell to non-federal-government customers. In 1969, members of the study team published the first widely publicized description of PRT in Scientific American.[44] In 1978 the team also published a book.[45] These publications sparked off a sort of "transit race" in the same fashion as the space race, with countries around the world rushing to join what appeared to be a future market of immense size. The oil crisis of 1973 made vehicle fuels more expensive, which naturally interested people in alternative transportation. In 1967, aerospace giant Matra started the Aramis project in Paris. After spending about 500 million francs, the project was canceled when it failed its qualification trials in November 1987. The designers tried to make Aramis work like a "virtual train", but control software issues caused cars to bump unacceptably, and the project ultimately failed.[46] Between 1970 and 1978, Japan operated a project called "Computer-controlled Vehicle System" (CVS). In a full-scale test facility, 84 vehicles operated at speeds up to 60 kilometres per hour (37.3 mph) on a 4.8 km (3.0 mi) guideway; one-second headways were achieved during tests. Another version of CVS was in public operation for six months from 1975 to 1976. This system had 12 single-mode vehicles and four dual-mode vehicles on a 1.6 km (1.0 mi) track with five stations, and carried over 800,000 passengers. CVS was cancelled when Japan's Ministry of Land, Infrastructure and Transport declared it unsafe under existing rail safety regulations, specifically in respect of braking and headway distances. On March 23, 1973, U.S.
Urban Mass Transportation Administration (UMTA) administrator Frank Herringer testified before Congress: "A DOT program leading to the development of a short, one-half to one-second headway, high-capacity PRT (HCPRT) system will be initiated in fiscal year 1974."[47] According to PRT supporter J. Edward Anderson, this program was never pursued "because of heavy lobbying from interests fearful of becoming irrelevant if a genuine PRT program became visible." From that time forward, people interested in HCPRT were unable to obtain UMTA research funding.[48] In 1975, the Morgantown Personal Rapid Transit project was completed. It has five off-line stations that enable nonstop, individually programmed trips along an 8.7-mile (14.0 km) track serviced by a fleet of 71 cars – a crucial characteristic of PRT. However, it is not considered a true PRT system because its vehicles are too heavy and carry too many people. When it carries many people, it operates in a point-to-point fashion instead of running like an automated people mover from one end of the line to the other. During periods of low usage all cars make a full circuit, stopping at every station in both directions. Morgantown PRT is still in continuous operation at West Virginia University in Morgantown, West Virginia, with about 15,000 riders per day (as of 2003[update]). The steam-heated track has proven expensive, and the system requires an operation and maintenance budget of $5 million annually.[49] Although it successfully demonstrated automated control and is still operating, it was not sold to other sites. A 2010 report concluded that replacing the system with buses on roads would provide unsatisfactory service and create congestion.[50][51] Subsequently, the forty-year-old computer and vehicle control systems were replaced in the 2010s, and there are plans to replace the vehicles. From 1969 to 1980, Mannesmann Demag and MBB cooperated to build the Cabinentaxi urban transportation system in Germany.
Together the firms formed the Cabintaxi Joint Venture. They created an extensive PRT technology, including a test track, that was considered fully developed by the German government and its safety authorities. The system was to have been installed in Hamburg, but budget cuts stopped the proposed project before the start of construction. With no other potential projects on the horizon, the joint venture disbanded, and the fully developed PRT technology was never installed. Cabintaxi Corporation, a US-based company, obtained the technology in 1985 and remains active in the private-sector market trying to sell the system, but so far there have been no installations. In 1979 the three-station Duke University Medical Center Patient Rapid Transit system was commissioned. Uniquely, the cars could move sideways as well as backwards and forwards, and the system was described as a "horizontal elevator". The system was closed in 2009 to allow for expansion of the hospital. In the 1990s, Raytheon invested heavily in a system called PRT 2000, based on technology developed by J. Edward Anderson at the University of Minnesota. Raytheon failed to install a contracted system in Rosemont, Illinois, near Chicago, when estimated costs escalated to US$50 million per mile, allegedly due to design changes that increased the weight and cost of the system relative to Anderson's original design. In 2000, rights to the technology reverted to the University of Minnesota, and were subsequently purchased by Taxi2000.[52][53] In 1999 the 2getthere-designed ParkShuttle system was opened in the Kralingen neighbourhood of eastern Rotterdam, using 12-seater driverless buses. The system was extended in 2005, and new second-generation vehicles were introduced to serve five stations over 1.8 kilometres (1.1 mi), with five grade crossings over ordinary roads.
Operation is scheduled in peak periods and on demand at other times.[54] In 2002, 2getthere operated twenty-five 4-passenger "CyberCabs" at Holland's 2002 Floriade horticultural exhibition. These transported passengers along a track spiraling up to the summit of Big Spotters Hill. The track was approximately 600 metres (1,969 ft) long (one-way) and featured only two stations. The six-month operation was intended to research the public acceptance of PRT-like systems. In 2010 a 10-vehicle (four seats each), two-station 2getthere system was opened to connect a parking lot to the main area at Masdar City, UAE. The system runs in an undercroft beneath the city and was supposed to be a pilot project for a much larger network, which would also have included transport of freight. Expansion of the system was cancelled just after the pilot scheme opened, due to the cost of constructing the undercroft, and other electric vehicles have since been proposed.[22] In January 2003, the prototype ULTra ("Urban Light Transport") system in Cardiff, Wales, was certified to carry passengers by the UK Railway Inspectorate on a 1 km (0.6 mi) test track. ULTra was selected in October 2005 by BAA plc for London's Heathrow Airport.[55] Since May 2011 a three-station system has been open to the public, transporting passengers from a remote parking lot to terminal 5.[26] During the deployment of the system, the owners of Heathrow became owners of the UltraPRT design. In May 2013 Heathrow Airport Limited included in its draft five-year (2014–2019) master plan a scheme to use the PRT system to connect terminal 2 and terminal 3 to their respective business car parks. The proposal was not included in the final plan, due to spending priority given to other capital projects, and has been deferred.[56] If a third runway is constructed at Heathrow, it will be built over the existing system, which will be destroyed and replaced by another PRT.
In June 2006, a Korean/Swedish consortium, Vectus Ltd, started constructing a 400 m (1,312 ft) test track in Uppsala, Sweden.[57] This test system was presented at the 2007 PodCar City conference in Uppsala.[58] A 40-vehicle, 2-station, 4.46 km (2.8 mi) system called "SkyCube" was opened in Suncheon, South Korea, in April 2014.[59] In the 2010s the Mexican Western Institute of Technology and Higher Education began research into project LINT ("Lean Intelligent Network Transportation") and built a 1/12 operational scale model.[60] This was further developed into the Modutram[61] system, and a full-scale test track was built in Guadalajara, which was operational by 2014.[62] In 2018 it was announced that a PRT system would be installed at the new Chengdu Tianfu International Airport.[6] The system will include 6 miles of guideway, 4 stations and 22 pods, and will connect airport parking to two terminal buildings. It is supplied by Ultra MTS. The airport is due to open in 2021.[63] Among the handful of prototype systems (and the larger number that exist on paper) there is a substantial diversity of design approaches, some of which are controversial. Vehicle weight influences the size and cost of a system's guideways, which are in turn a major part of the capital cost of the system. Larger vehicles are more expensive to produce, require larger and more expensive guideways, and use more energy to start and stop. If vehicles are too large, point-to-point routing also becomes more expensive. Against this, smaller vehicles have more surface area per passenger (and thus higher total air resistance, which dominates the energy cost of keeping vehicles moving at speed), and larger motors are generally more efficient than smaller ones. The number of riders who will share a vehicle is a key unknown.
In the U.S., the average car carries 1.16 persons,[64] and most industrialized countries commonly average below two people; not having to share a vehicle with strangers is a key advantage of private transport. Based on these figures, some have suggested that two passengers per vehicle (as with skyTran, EcoPRT and Glydways), or even a single passenger per vehicle, is optimum. Other designs use the car as a model and choose larger vehicles, making it possible to accommodate families with small children, riders with bicycles, disabled passengers with wheelchairs, or a pallet or two of freight. All current designs (except for the human-powered Shweeb) are powered by electricity. To reduce vehicle weight, power is generally transmitted via lineside conductors, although two of the operating systems use on-board batteries. According to the designer of Skyweb/Taxi2000, J. Edward Anderson, the lightest system uses a linear induction motor (LIM) on the vehicle for both propulsion and braking, which also makes manoeuvres consistent regardless of the weather, especially rain or snow. LIMs are used in a small number of rapid transit applications, but most designs use rotary motors. Most such systems retain a small on-board battery to reach the next stop after a power failure. CabinTaxi uses a LIM and was able to demonstrate 0.5-second headways on its test track. The Vectus prototype system used continuous track-mounted LIMs with the reaction plate on the vehicle, eliminating the active propulsion system (and the power it requires) on the vehicle. ULTra and 2getthere use on-board batteries, recharged at stations. This increases safety and reduces the complexity, cost and maintenance of the guideway. As a result, the ULTra guideway resembles a sidewalk with curbs and is inexpensive to construct. ULTra and 2getthere vehicles resemble small automated electric cars, and use similar components.
(The ULTra POD chassis and cabin have been used as the basis of a shared autonomous vehicle for running in mixed traffic.[65]) Almost all designs avoid track switching, instead advocating vehicle-mounted switches (which engage with special guiderails at the junctions) or conventional steering. Advocates say that vehicle-switching permits faster routing, so vehicles can run closer together, which increases capacity. It also simplifies the guideway, makes junctions less visually obtrusive and reduces the impact of malfunctions, because a failed switch on one vehicle is less likely to affect other vehicles. Track switching greatly increases headway distance: a vehicle must wait for the previous vehicle to clear the junction, for the track to switch and for the switch to be verified. Communication between the vehicle and wayside controllers adds both delays and more points of failure. If the track switching is faulty, vehicles must be able to stop before reaching the switch, and all vehicles approaching the failed junction would be affected. Mechanical vehicle switching minimizes inter-vehicle spacing or headway distance, but it increases the minimum distance between consecutive junctions. A mechanically switching vehicle maneuvering between two adjacent junctions with different switch settings cannot proceed from one junction to the next until it has adopted the new switch position and the in-vehicle switch's locking mechanism has been verified. If the vehicle switching is faulty, that vehicle must be able to stop before reaching the next switch, and all vehicles approaching the failed vehicle would be affected. Conventional steering allows a simpler 'track' consisting only of a road surface with some form of reference for the vehicle's steering sensors.
Switching would be accomplished by the vehicle following the appropriate reference line – maintaining a set distance from the left roadway edge would cause the vehicle to diverge left at a junction, for example. Several types of guideways have been proposed or implemented, including beams similar to monorails, bridge-like trusses supporting internal tracks, and cables embedded in a roadway. Most designs put the vehicle on top of the track, which reduces visual intrusion and cost, as well as easing ground-level installation. An overhead track is necessarily higher, but may also be narrower. Most designs use the guideway to distribute power and data communications, including to the vehicles. The Morgantown PRT failed to meet its cost targets because of the steam-heated track required to keep the large channel guideway free of frequent snow and ice. Heating uses up to four times as much energy as propelling the vehicles.[66] Most proposals plan to resist snow and ice in ways that should be less expensive. The Heathrow system has a special de-icing vehicle. Masdar's system has been limited because the exclusive right-of-way for the PRT was gained by running the vehicles in an undercroft at ground level while building an elevated "street level" between all the buildings. This made the buildings and roads unrealistically expensive.[22] Proposals usually have stations close together and located on side tracks, so that through traffic can bypass vehicles picking up or dropping off passengers. Each station might have multiple berths, with perhaps one-third of the vehicles in a system being stored at stations waiting for passengers. Stations are envisioned to be minimalistic, without facilities such as rest rooms. For elevated stations, an elevator may be required for accessibility. At least one system, Metrino, provides wheelchair and freight access by using a cogway in the track, so that the vehicle itself can travel from a street-level stop to an overhead track.
Some designs have included substantial extra expense for the track needed to decelerate into and accelerate away from stations. In at least one system, Aramis, this nearly doubled the width and cost of the required right-of-way and caused the nonstop passenger delivery concept to be abandoned. Other designs have schemes to reduce this cost, for example merging vertically to reduce the footprint. Spacing of vehicles on the guideway influences the maximum passenger capacity of a track, so designers prefer smaller headway distances. Computerized control and active electronic braking (of motors) theoretically permit much closer spacing than the two-second headways recommended for cars at speed. In these arrangements, multiple vehicles operate in "platoons" and can be braked simultaneously. There are prototypes for automatic guidance of private cars based on similar principles. Very short headways are controversial. The UK Railway Inspectorate has evaluated the ULTra design and is willing to accept one-second headways, pending successful completion of initial operational tests at more than two seconds.[67] In other jurisdictions, preexisting rail regulations apply to PRT systems (see CVS, above); these typically calculate headways for absolute stopping distances with standing passengers. Such rules severely restrict capacity and make PRT systems infeasible. Another standard required trailing vehicles to be able to stop even if the vehicle in front stopped instantaneously (like hitting a "brick wall"). In 2018 a committee of the American Society of Mechanical Engineers considered replacing the "brick wall" standard with a requirement for vehicles to maintain a safe "separation zone", based on the minimum stopping distance of the lead vehicle and the maximum stopping distance of the trailing vehicle.[68] These changes were introduced into the standard in 2021. PRT is usually proposed as an alternative to rail systems, so comparisons tend to be with rail.
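The practical effect of moving from a "brick wall" rule to a separation-zone rule can be sketched with elementary stopping-distance arithmetic. This is an illustrative model only: the speed, decelerations and reaction time below are assumed round numbers, not figures from the ASME standard.

```python
def stopping_distance(v, decel):
    """Distance (m) needed to stop from speed v (m/s) at constant deceleration (m/s^2)."""
    return v * v / (2 * decel)

def min_headway(v, follower_decel, lead_stop_distance, reaction=0.5):
    """Minimum time headway (s): the follower, after a reaction delay, must
    come to rest no later than the point where the lead vehicle stops."""
    required_gap = reaction * v + stopping_distance(v, follower_decel) - lead_stop_distance
    return max(required_gap, 0.0) / v

v = 10.0  # m/s (36 km/h), assumed cruise speed

# "Brick wall" rule: the lead vehicle is treated as stopping instantaneously.
brick_wall = min_headway(v, follower_decel=2.5, lead_stop_distance=0.0)

# Separation-zone rule: the lead vehicle is credited with its own minimum
# stopping distance (assumed emergency deceleration of 5 m/s^2).
separation = min_headway(v, follower_decel=2.5,
                         lead_stop_distance=stopping_distance(v, 5.0))

print(brick_wall)  # 2.5 s
print(separation)  # 1.5 s
```

Under these assumed figures, the separation-zone rule shortens the required headway from 2.5 s to 1.5 s, which is why the 2021 change matters for line capacity.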
PRT vehicles seat fewer passengers than trains and buses, and must offset this by combining higher average speeds, diverse routes, and shorter headways. Proponents assert that equivalent or higher overall capacity can be achieved by these means. With two-second headways and four-person vehicles, a single PRT line can achieve a theoretical maximum capacity of 7,200 passengers per hour. However, most estimates assume that vehicles will not generally be filled to capacity, due to the point-to-point nature of PRT. At a more typical average vehicle occupancy of 1.5 persons per vehicle, the maximum capacity is 2,700 passengers per hour. Some researchers have suggested that rush hour capacity can be improved if operating policies support ridesharing.[69] Capacity is inversely proportional to headway: moving from two-second headways to one-second headways would double PRT capacity, and half-second headways would quadruple it. Theoretical minimum PRT headways would be based on the mechanical time to engage brakes, which is much less than half a second. Researchers suggest that high-capacity PRT (HCPRT) designs could operate safely at half-second headways, as already achieved in practice on the Cabintaxi test track in the late 1970s.[70] Using the above figures, capacities above 10,000 passengers per hour seem within reach. In simulations of rush hour or high-traffic events, about one-third of vehicles on the guideway need to travel empty to resupply stations with vehicles in order to minimize response time. This is analogous to trains and buses travelling nearly empty on the return trip to pick up more rush hour passengers. Fully grade-separated light rail systems can move 15,000 passengers per hour on a fixed route. Street-level systems typically move up to 7,500 passengers per hour. Heavy rail subways can move 50,000 passengers per hour per direction.
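The capacity figures above all follow from one relation: vehicles per hour past a point is 3600 divided by the headway in seconds, and passenger capacity is that times the average occupancy. A minimal sketch:

```python
def line_capacity(headway_s, avg_occupancy):
    """Passengers per hour past a fixed point on one guideway."""
    vehicles_per_hour = 3600 / headway_s
    return vehicles_per_hour * avg_occupancy

print(line_capacity(2.0, 4.0))   # 7200.0  (theoretical max: full 4-seat pods)
print(line_capacity(2.0, 1.5))   # 2700.0  (typical 1.5 persons per pod)
print(line_capacity(0.5, 1.5))   # 10800.0 (HCPRT half-second headways)
```

The last line shows why half-second headways put "capacities above 10,000 passengers per hour" within reach even at typical occupancy.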
As with PRT, these estimates depend on having enough trains. Neither light nor heavy rail operates efficiently off-peak, when capacity utilization is low but a schedule must be maintained. In a PRT system, when demand is low, surplus vehicles will be configured to wait at empty stations at strategically placed points around the network. This enables an empty vehicle to be quickly dispatched to wherever it is required, with minimal waiting time for the passenger. PRT systems will have to re-circulate empty vehicles if there is an imbalance in demand along a route, as is common in peak periods. The above discussion compares line or corridor capacity and may therefore not be relevant for a networked PRT system, where several parallel lines (or parallel components of a grid) carry traffic. In addition, Muller estimated[71] that while PRT may need more than one guideway to match the capacity of a conventional system, the capital cost of the multiple guideways may still be less than that of the single-guideway conventional system. Thus comparisons of line capacity should also consider the cost per line. PRT systems should require much less horizontal space than existing metro systems, with individual cars typically being around 50% as wide for side-by-side seating configurations, and less than 33% as wide for single-file configurations. This is an important factor in densely populated, high-traffic areas. For a given peak speed, nonstop journeys are about three times as fast as those with intermediate stops. This is not just because of the time spent starting and stopping: scheduled vehicles are also slowed by boardings and exits for multiple destinations. Therefore, a given PRT seat transports about three times as many passenger-miles per day as a seat performing scheduled stops, so PRT should also reduce the number of needed seats threefold for a given number of passenger-miles.
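The speed advantage of nonstop service can be seen in a simple trip-time model: each intermediate stop costs the dwell time plus the time lost braking and re-accelerating (v/a for a symmetric speed profile). The distance, dwell time and acceleration below are assumed values chosen for illustration, not data from any cited system:

```python
def average_speed(dist_km, cruise_kmh, n_stops, dwell_s, accel_ms2=1.0):
    """Average speed (km/h) over a trip with n_stops intermediate stops."""
    v = cruise_kmh / 3.6                          # cruise speed in m/s
    cruise_time = dist_km * 1000.0 / v            # seconds if run nonstop
    time_lost_per_stop = dwell_s + v / accel_ms2  # dwell + accel/brake penalty
    total_hours = (cruise_time + n_stops * time_lost_per_stop) / 3600.0
    return dist_km / total_hours

nonstop   = average_speed(10, 50, n_stops=0,  dwell_s=0)   # 50 km/h average
scheduled = average_speed(10, 50, n_stops=20, dwell_s=60)  # ~16 km/h average
```

With these assumed figures, the scheduled vehicle averages roughly a third of its cruise speed, consistent with the threefold ratio claimed above; the exact factor depends on stop spacing and dwell times.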
While a few PRT designs have operating speeds of 100 km/h (62 mph), and one as high as 241 km/h (150 mph),[72] most are in the region of 40–70 km/h (25–43 mph). Rail systems generally have higher maximum speeds, typically 90–130 km/h (56–81 mph) and sometimes well in excess of 160 km/h (99 mph), but average travel speed is reduced about threefold by scheduled stops and passenger transfers. If PRT designs deliver the claimed benefit of being substantially faster than cars in areas with heavy traffic, simulations suggest that PRT could attract many more car drivers than other public transit systems. Standard mass transit simulations accurately predict that 2% of trips (including car trips) will switch to trains. Similar methods predict that 11% to 57% of trips would switch to PRT, depending on its costs and delays.[10][73][74] The typical control algorithm places vehicles in imaginary moving "slots" that go around the loops of track. Real vehicles are allocated a slot by track-side controllers. Traffic jams are prevented by placing north–south vehicles in even slots and east–west vehicles in odd slots. At intersections, the traffic in these systems can interpenetrate without slowing. On-board computers maintain their position by using a negative feedback loop to stay near the center of the commanded slot. Early PRT vehicles measured their position by adding up distance using odometers, with periodic checkpoints to compensate for cumulative errors.[45] Next-generation GPS and radio location could measure positions as well. Another system, "pointer-following control", assigns a path and speed to a vehicle, after verifying that the path does not violate the safety margins of other vehicles.
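The even/odd slot scheme described above can be sketched as a tiny allocation routine. This is a toy illustration of the idea, not the algorithm of any deployed system, and the slot numbering is hypothetical:

```python
def allocate_slot(occupied, direction):
    """Assign the first free moving slot of the correct parity: north-south
    traffic gets even slots and east-west traffic gets odd slots, so the two
    streams can interleave at crossings without slowing."""
    slot = 0 if direction in ("N", "S") else 1
    while slot in occupied:
        slot += 2          # skip to the next slot of the same parity
    occupied.add(slot)
    return slot

occupied = set()
print(allocate_slot(occupied, "N"))  # 0 (first even slot)
print(allocate_slot(occupied, "E"))  # 1 (first odd slot)
print(allocate_slot(occupied, "S"))  # 2 (next even slot)
```

Because the two parities never collide, a north-south vehicle and an east-west vehicle can cross the same junction in adjacent slots without either one braking.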
This permits system speeds and safety margins to be adjusted to design or operating conditions, and may use slightly less energy.[75] The maker of the ULTra PRT system reports that testing of its control system shows lateral (side-to-side) accuracy of 1 cm, and docking accuracy better than 2 cm. Computer control eliminates errors from human drivers, so PRT designs in a controlled environment should be much safer than private motoring on roads. Most designs enclose the running gear in the guideway to prevent derailments. Grade-separated guideways would prevent conflict with pedestrians or manually controlled vehicles. Other public transit safety engineering approaches, such as redundancy and self-diagnosis of critical systems, are also included in designs. The Morgantown system, more correctly described as a Group Rapid Transit (GRT) type of Automated Guideway Transit (AGT) system, has completed 110 million passenger-miles without serious injury. According to the U.S. Department of Transportation, AGT systems as a group have higher injury rates than any other form of rail-based transit (subway, metro, light rail, or commuter rail), though still much lower than ordinary buses or cars. More recent research by the British company ULTra PRT reported that AGT systems have a better safety record than more conventional, non-automated modes.[citation needed] As with many current transit systems, personal passenger safety concerns are likely to be addressed through CCTV monitoring[76] and communication with a central command center, from which engineering or other assistance may be dispatched. The energy efficiency advantages claimed by PRT proponents rest on two basic operational characteristics of PRT: an increased average load factor, and the elimination of intermediate starting and stopping.[77] Average load factor, in transit systems, is the ratio of the total number of riders to the total theoretical capacity. 
A transit vehicle running at full capacity has a 100% load factor, while an empty vehicle has a 0% load factor. If a transit vehicle spends half the time running at 100% and half the time running at 0%, the average load factor is 50%. A higher average load factor corresponds to lower energy consumption per passenger, so designers attempt to maximize this metric. Scheduled mass transit (i.e. buses or trains) trades off service frequency against load factor. Buses and trains must run on a predefined schedule, even during off-peak times when demand is low and vehicles are nearly empty. To increase load factor, transportation planners try to predict times of low demand, and run reduced schedules or smaller vehicles at these times. This increases passengers' wait times. In many cities, trains and buses do not run at all at night or on weekends. PRT vehicles, in contrast, would move only in response to demand, which places a theoretical lower bound on their average load factor. This allows 24-hour service without many of the costs of scheduled mass transit.[78] ULTra PRT estimates its system will consume 839 BTU per passenger-mile (0.55 MJ per passenger-km).[79][80] By comparison, cars consume 3,496 BTU and personal trucks 4,329 BTU per passenger-mile.[81] Due to PRT's efficiency, some proponents say solar power becomes a viable source.[82] PRT elevated structures provide a ready platform for solar collectors, so some proposed designs include solar power as a characteristic of their networks. For bus and rail transit, the energy per passenger-mile depends on ridership and the frequency of service, and can therefore vary significantly between peak and non-peak times. In the US, buses consume an average of 4,318 BTU per passenger-mile, transit rail 2,750 BTU, and commuter rail 2,569 BTU.[81] Opponents of PRT schemes have expressed a number of concerns. Vukan R. 
Vuchic, professor of Transportation Engineering at the University of Pennsylvania and a proponent of traditional forms of transit, has stated his belief that the combination of small vehicles and expensive guideway makes PRT highly impractical in both cities (not enough capacity) and suburbs (guideway too expensive). According to Vuchic: "...the PRT concept combines two mutually incompatible elements of these two systems: very small vehicles with complicated guideways and stations. Thus, in central cities, where heavy travel volumes could justify investment in guideways, vehicles would be far too small to meet the demand. In suburbs, where small vehicles would be ideal, the extensive infrastructure would be economically unfeasible and environmentally unacceptable."[83] PRT supporters claim that Vuchic's conclusions are based on flawed assumptions. PRT proponent J. E. Anderson wrote, in a rebuttal to Vuchic: "I have studied and debated with colleagues and antagonists every objection to PRT, including those presented in papers by Professor Vuchic, and find none of substance. Among those willing to be briefed in detail and to have all of their questions and concerns answered, I find great enthusiasm to see the system built."[83] The manufacturers of ULTra acknowledge that current forms of their system would provide insufficient capacity in high-density areas such as central London, and that the investment costs for the tracks and stations are comparable to building new roads, making the current version of ULTra more suitable for suburbs and other moderate-capacity applications, or as a supplementary system in larger cities.[citation needed] Possible regulatory concerns include emergency safety, headways, and accessibility for the disabled. Many jurisdictions regulate PRT systems as if they were trains. 
At least one successful prototype, CVS, failed deployment because it could not obtain permits from regulators.[84] Several PRT systems have been proposed for California,[85][86] but the California Public Utilities Commission (CPUC) states that its rail regulations apply to PRT, and these require railway-sized headways.[87] The degree to which CPUC would hold PRT to "light rail" and "rail fixed guideway" safety standards is not clear, because it can grant particular exemptions and revise regulations.[88] Other forms of automated transit have been approved for use in California, notably the AirTrain system at SFO. CPUC decided not to require compliance with General Order 143-B (for light rail), since AirTrain has no on-board operators. It did require compliance with General Order 164-D, which mandates a safety and security plan, as well as periodic on-site visits by an oversight committee.[89] If safety or access considerations require the addition of walkways, ladders, platforms or other emergency/disabled access to or egress from PRT guideways, the size of the guideway may be increased. This may impact the feasibility of a PRT system, though the degree of impact would depend on both the PRT design and the municipality. Wayne D. Cottrell of the University of Utah conducted a critical review of PRT academic literature since the 1960s. He concluded that several issues would benefit from more research, including urban integration, the risks of PRT investment, bad publicity, technical problems, and competing interests from other transport modes. He suggests that these issues, "while not unsolvable, are formidable," and that the literature might be improved by better introspection and criticism of PRT. He also suggests that more government funding is essential for such research to proceed, especially in the United States.[90] Several proponents of new urbanism, an urban design movement that advocates for walkable cities, have expressed opinions on PRT. 
Peter Calthorpe and Sir Peter Hall have supported[91][92] the concept, but James Howard Kunstler disagrees.[93] As the development of self-steering technology for autonomous cars and shuttles advances,[94] the guideway technology of PRT seems obsolete at first glance. Automated operation might become feasible on existing roads too. On the other hand, PRT systems can also make use of self-steering technology, and significant benefits remain from operating on a segregated route network.
https://en.wikipedia.org/wiki/Personal_rapid_transit
Non-separable wavelets are multi-dimensional wavelets that are not directly implemented as tensor products of wavelets on some lower-dimensional space. They have been studied since 1992.[1] They offer a few important advantages. Notably, using non-separable filters leads to more parameters in design, and consequently better filters.[2] The main difference, compared to one-dimensional wavelets, is that multi-dimensional sampling requires the use of lattices (e.g., the quincunx lattice). The wavelet filters themselves can be separable or non-separable regardless of the sampling lattice, so in some cases non-separable wavelets can be implemented in a separable fashion. Unlike separable wavelets, non-separable wavelets are capable of detecting structures that are not only horizontal, vertical or diagonal (they show less anisotropy).
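As a concrete instance of the lattices mentioned above, the quincunx lattice can be described by a standard generator matrix (one common choice; other generators of the same lattice exist):

```latex
% One common generator of the quincunx lattice: samples are x = D n, n \in \mathbb{Z}^2
D = \begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix},
\qquad \lvert \det D \rvert = 2 ,
```

so the retained points of \(\mathbb{Z}^2\) are exactly those \((n_1, n_2)\) with \(n_1 + n_2\) even: a checkerboard keeping half the samples, which halves the sampling rate without favouring the horizontal or vertical direction.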
https://en.wikipedia.org/wiki/Non-separable_wavelet
A graphical user interface, or GUI[a], is a form of user interface that allows users to interact with electronic devices through graphical icons and visual indicators such as secondary notation. In many applications, GUIs are used instead of text-based UIs, which are based on typed command labels or text navigation. GUIs were introduced in reaction to the perceived steep learning curve of command-line interfaces (CLIs),[4][5][6] which require commands to be typed on a computer keyboard. The actions in a GUI are usually performed through direct manipulation of the graphical elements.[7][8][9] Beyond computers, GUIs are used in many handheld mobile devices such as MP3 players, portable media players, gaming devices, smartphones, and smaller household, office and industrial controls. The term GUI tends not to be applied to other, lower-display-resolution types of interface, such as video games (where head-up displays (HUDs)[10] are preferred), or to displays that are not flat screens, like volumetric displays,[11] because the term is restricted to the scope of 2D display screens able to describe generic information, in the tradition of the computer science research at the Xerox Palo Alto Research Center. Designing the visual composition and temporal behavior of a GUI is an important part of software application programming in the area of human–computer interaction. Its goal is to enhance the efficiency and ease of use for the underlying logical design of a stored program, a design discipline named usability. Methods of user-centered design are used to ensure that the visual language introduced in the design is well tailored to the tasks. The visible graphical interface features of an application are sometimes referred to as chrome or GUI.[12][13][14] Typically, users interact with information by manipulating visual widgets that allow for interactions appropriate to the kind of data they hold. The widgets of a well-designed interface are selected to support the actions necessary to achieve the goals of users. 
A model–view–controller architecture allows flexible structures in which the interface is independent of, and indirectly linked to, application functions, so the GUI can be customized easily. This allows users to select or design a different skin or theme at will, and eases the designer's work to change the interface as user needs evolve. Good GUI design relates more to the user, and less to the system architecture. Large widgets, such as windows, usually provide a frame or container for the main presentation content, such as a web page, email message, or drawing. Smaller ones usually act as user-input tools. A GUI may be designed for the requirements of a vertical market as an application-specific GUI. Examples include automated teller machines (ATMs), point-of-sale (POS) touchscreens at restaurants,[15] self-service checkouts used in retail stores, airline self-ticketing and check-in, information kiosks in public spaces such as train stations or museums, and monitors or control screens in embedded industrial applications which employ a real-time operating system (RTOS). Cell phones and handheld game systems also employ application-specific touchscreen GUIs. Newer automobiles use GUIs in their navigation systems and multimedia centers, or navigation/multimedia center combinations. A GUI uses a combination of technologies and devices to provide a platform that users can interact with, for the tasks of gathering and producing information. A series of elements conforming to a visual language has evolved to represent information stored in computers. This makes it easier for people with few computer skills to work with and use computer software. The most common combination of such elements in GUIs is the windows, icons, text fields, canvases, menus, pointer (WIMP) paradigm, especially in personal computers.[16] The WIMP style of interaction uses a virtual input device to represent the position of a pointing device's interface, most often a mouse, and presents information organized in windows and represented with icons. 
Available commands are compiled together in menus, and actions are performed by making gestures with the pointing device. A window manager facilitates the interactions between windows, applications, and the windowing system. The windowing system handles hardware devices such as pointing devices and graphics hardware, and the positioning of the pointer. In personal computers, all these elements are modeled through a desktop metaphor to produce a simulation called a desktop environment, in which the display represents a desktop on which documents and folders of documents can be placed. Window managers and other software combine to simulate the desktop environment with varying degrees of realism. Entries may appear in a list to make space for text and details, or in a grid for compactness and larger icons with little space underneath for text. Variations in between exist, such as a list with multiple columns of items, or a grid of items with rows of text extending sideways from the icon.[17] Multi-row and multi-column layouts commonly found on the web are "shelf" and "waterfall". The former is found on image search engines, where images appear with a fixed height but variable width, and is typically implemented with the CSS declaration display: inline-block;. A waterfall layout, found on Imgur and TweetDeck, with fixed width but variable height per item, is usually implemented by specifying column-width:. Smaller mobile devices such as personal digital assistants (PDAs) and smartphones typically use the WIMP elements with different unifying metaphors, due to constraints in space and available input devices. Applications for which WIMP is not well suited may use newer interaction techniques, collectively termed post-WIMP UIs.[18] As of 2011, some touchscreen-based operating systems such as Apple's iOS (iPhone) and Android use the class of GUIs named post-WIMP. 
These support styles of interaction using more than one finger in contact with a display, which allows actions such as pinching and rotating, which are unsupported by one pointer and mouse.[19] Human interface devices for efficient interaction with a GUI include a computer keyboard, especially used together with keyboard shortcuts, and pointing devices for cursor (or rather pointer) control: mouse, pointing stick, touchpad, trackball, joystick, virtual keyboards, and head-up displays (translucent information devices at eye level). There are also actions performed by programs that affect the GUI. For example, there are components like inotify or D-Bus to facilitate communication between computer programs. Ivan Sutherland developed Sketchpad in 1963, widely held as the first graphical computer-aided design program. It used a light pen to create and manipulate objects in engineering drawings in real time with coordinated graphics. In the late 1960s, researchers at the Stanford Research Institute, led by Douglas Engelbart, developed the On-Line System (NLS), which used text-based hyperlinks manipulated with a then-new device: the mouse. (A 1968 demonstration of NLS became known as "The Mother of All Demos".) In the 1970s, Engelbart's ideas were further refined and extended to graphics by researchers at Xerox PARC, and specifically Alan Kay, who went beyond text-based hyperlinks and used a GUI as the main interface for the Smalltalk programming language, which ran on the Xerox Alto computer, released in 1973. Most modern general-purpose GUIs are derived from this system. The Xerox PARC GUI consisted of graphical elements such as windows, menus, radio buttons, and check boxes. The concept of icons was later introduced by David Canfield Smith, who had written a thesis on the subject under the guidance of Kay.[20][21][22] The PARC GUI employs a pointing device along with a keyboard. These aspects can be emphasized by using the alternative term and acronym for windows, icons, menus, pointing device (WIMP). 
This effort culminated in the 1973 Xerox Alto, the first computer with a GUI, though the system never reached commercial production. The first commercially available computer with a GUI was the 1979 PERQ workstation, manufactured by Three Rivers Computer Corporation. Its design was heavily influenced by the work at Xerox PARC. In 1981, Xerox eventually commercialized the ideas from the Alto in the form of a new and enhanced system – the Xerox 8010 Information System – more commonly known as the Xerox Star.[23][24] These early systems spurred many other GUI efforts, including Lisp machines by Symbolics and other manufacturers, the Apple Lisa (which introduced the concepts of the menu bar and window controls) in 1983, the Apple Macintosh 128K in 1984, and the Atari ST with Digital Research's GEM and the Commodore Amiga in 1985. Visi On was released in 1983 for IBM PC compatible computers, but was never popular due to its high hardware demands.[25] Nevertheless, it was a crucial influence on the contemporary development of Microsoft Windows.[26] Apple, Digital Research, IBM and Microsoft used many of Xerox's ideas to develop products, and IBM's Common User Access specifications formed the basis of the GUIs used in Microsoft Windows, IBM OS/2 Presentation Manager, and the Unix Motif toolkit and window manager. These ideas evolved to create the interface found in current versions of Microsoft Windows, and in various desktop environments for Unix-like operating systems, such as macOS and Linux. Thus most current GUIs have largely common idioms. GUIs were a hot topic in the early 1980s. The Apple Lisa was released in 1983, and various windowing systems existed for DOS operating systems (including PC GEM and PC/GEOS). Individual applications for many platforms presented their own GUI variants.[27] Despite the GUI's advantages, many reviewers questioned the value of the entire concept,[28] citing hardware limits and problems in finding compatible software. 
In 1984, Apple released a television commercial which introduced the Apple Macintosh during the telecast of Super Bowl XVIII by CBS,[29] with allusions to George Orwell's noted novel Nineteen Eighty-Four. The goal of the commercial was to make people think about computers, identifying the user-friendly interface as a personal computer which departed from prior business-oriented systems,[30] and becoming a signature representation of Apple products.[31] In 1985, Commodore released the Amiga 1000, along with Workbench and Kickstart 1.0 (which contained Intuition). This interface ran as a separate task, meaning it was very responsive and, unlike other GUIs of the time, it did not freeze up when a program was busy. Additionally, it was the first GUI to introduce something resembling Virtual Desktops. Windows 95, accompanied by an extensive marketing campaign,[32] was a major success in the marketplace at launch and shortly became the most popular desktop operating system.[33] In 2007, with the iPhone,[34] and later in 2010 with the introduction of the iPad,[35] Apple popularized the post-WIMP style of interaction for multi-touch screens, and those devices were considered to be milestones in the development of mobile devices.[36][37] The GUIs familiar to most people as of the mid-to-late 2010s are Microsoft Windows, macOS, and the X Window System interfaces for desktop and laptop computers, and Android, Apple's iOS, Symbian, BlackBerry OS, Windows Phone/Windows 10 Mobile, Tizen, WebOS, and Firefox OS for handheld (smartphone) devices.[38][39] "People said it's more of a right-brain machine and all that—I think there is some truth to that. I think there is something to dealing in a graphical interface and a more kinetic interface—you're really moving information around, you're seeing it move as though it had substance. And you don't see that on a PC. The PC is very much of a conceptual machine; you move information around the way you move formulas, elements on either side of an equation. I think there's a difference." 
Since the commands available in command-line interfaces can be many, complex operations can be performed using a short sequence of words and symbols. Custom functions may be used to facilitate access to frequent actions. Command-line interfaces are more lightweight, as they only recall information necessary for a task; for example, no preview thumbnails or graphical rendering of web pages. This allows greater efficiency and productivity once many commands are learned.[4] But reaching this level takes some time, because the command words may not be easily discoverable or mnemonic. Also, using the command line can become slow and error-prone when users must enter long commands comprising many parameters or several different filenames at once. However, windows, icons, menus, pointer (WIMP) interfaces present users with many widgets that represent and can trigger some of the system's available commands. GUIs can be made quite hard to use when dialogs are buried deep in a system or moved about to different places during redesigns. Also, icons and dialog boxes are usually harder for users to script. WIMPs extensively use modes, as the meaning of all keys and clicks on specific positions on the screen is redefined all the time. Command-line interfaces use modes only in limited forms, such as for the current directory and environment variables. Most modern operating systems provide both a GUI and some level of a CLI, although the GUIs usually receive more attention. GUI wrappers find a way around the command-line interface (CLI) versions of (typically) Linux and Unix-like software applications and their text-based UIs or typed command labels. While command-line or text-based applications allow users to run a program non-interactively, GUI wrappers atop them avoid the steep learning curve of the command line, which requires commands to be typed on the keyboard. 
By starting a GUI wrapper, users can intuitively interact with, start, stop, and change the working parameters of a program through graphical icons and the visual indicators of a desktop environment, for example. Applications may also provide both interfaces, and when they do the GUI is usually a WIMP wrapper around the command-line version. This is especially common with applications designed for Unix-like operating systems. The latter used to be implemented first, because it allowed the developers to focus exclusively on their product's functionality without bothering about interface details such as designing icons and placing buttons. Designing programs this way also allows users to run the program in a shell script. Many environments and games use the methods of 3D graphics to project 3D GUI objects onto the screen. The use of 3D graphics has become increasingly common in mainstream operating systems (e.g. Windows Aero, and Aqua in macOS) to create attractive interfaces, termed eye candy (which includes, for example, the use of drop shadows underneath windows and the cursor), or for functional purposes only possible using three dimensions. For example, user switching may be represented by rotating a cube whose faces represent each user's workspace, and window management may be represented via a Rolodex-style flipping mechanism in Windows Vista (see Windows Flip 3D). In both cases, the operating system transforms windows on the fly while continuing to update the content of those windows. The GUI is usually WIMP-based, although occasionally other metaphors surface, such as those used in Microsoft Bob, 3dwm, File System Navigator, File System Visualizer, 3D Mailbox,[41][42] and GopherVR. Zooming user interfaces (ZUIs) are a related technology that promises to deliver the representation benefits of 3D environments without their usability drawbacks of orientation problems and hidden objects. 
In 2006, Hillcrest Labs introduced the first ZUI for television.[43] Other innovations include the menus on the PlayStation 2; the menus on the Xbox; Sun's Project Looking Glass; Metisse, which was similar to Project Looking Glass;[44] BumpTop, where users can manipulate documents and windows with realistic movement and physics as if they were physical documents; Croquet OS, which is built for collaboration;[45] and compositing window managers such as Enlightenment and Compiz. Augmented reality and virtual reality also make use of 3D GUI elements.[46] 3D GUIs have appeared in science fiction literature and films, even before certain technologies were feasible or in common use.[47]
https://en.wikipedia.org/wiki/Graphical_user_interface#Comparison_to_other_interfaces
An Irish bull is a ludicrous, incongruent or logically absurd statement, generally unrecognized as such by its author. The inclusion of the epithet Irish is a late addition.[1] John Pentland Mahaffy, Provost of Trinity College, Dublin, observed that "an Irish bull is always pregnant", i.e. with truthful meaning.[2] The "father" of the Irish bull is often said to be Sir Boyle Roche,[3] who once asked "Why should we put ourselves out of our way to do anything for posterity, for what has posterity ever done for us?"[4] Roche may have been Sheridan's model for Mrs Malaprop.[5] The derivation of "bull" in this sense is unclear. It may be related to Old French boul "fraud, deceit, trickery", Icelandic bull "nonsense", Middle English bull "falsehood", or the verb bull "befool, mock, cheat".[6] As the Oxford English Dictionary points out, the epithet "Irish" is a more recent addition, the original word bull for such nonsense having been traced back at least to the early 17th century.[1] By the late 19th century the expression Irish bull was well known, but writers were expressing reservations such as: "But it is a cruel injustice to poor Paddy to speak of the genuine 'bull' as something distinctly Irish, when countless examples of the same kind of blunder, not a whit less startling, are to be found elsewhere." The passage continues, presenting Scottish, English and French specimens in support.[7]
https://en.wikipedia.org/wiki/Irish_bull
In the context of the C or C++ programming languages, a library is called header-only if the full definitions of all macros, functions and classes comprising the library are visible to the compiler in header file form.[1] Header-only libraries do not need to be separately compiled, packaged and installed in order to be used. All that is required is to point the compiler at the location of the headers, and then #include the header files into the application source. Another advantage is that the compiler's optimizer can do a much better job when all of the library's source code is available. The form also has disadvantages; nonetheless, it is popular because it avoids the (often much more serious) problem of packaging. For C++ templates, including the definitions in the header is the only way to compile, since the compiler needs to know the full definition of a template in order to instantiate it.
https://en.wikipedia.org/wiki/Header-only
Structure from motion (SfM)[1] is a photogrammetric range imaging technique for estimating three-dimensional structures from two-dimensional image sequences that may be coupled with local motion signals. It is a classic problem studied in the fields of computer vision and visual perception. In computer vision, the problem of SfM is to design an algorithm to perform this task. In visual perception, the problem of SfM is to find the algorithm by which biological creatures perform this task. Humans perceive a great deal of information about the three-dimensional structure of their environment by moving around it. When the observer moves, objects around them move different amounts depending on their distance from the observer. This is known as motion parallax, and this depth information can be used to generate an accurate 3D representation of the world around them.[2] Finding structure from motion presents a similar problem to finding structure from stereo vision. In both instances, the correspondence between images and the reconstruction of the 3D object need to be found. To find correspondence between images, features such as corner points (edges with gradients in multiple directions) are tracked from one image to the next. One of the most widely used feature detectors is the scale-invariant feature transform (SIFT). It uses the maxima from a difference-of-Gaussians (DoG) pyramid as features. The first step in SIFT is finding a dominant gradient direction; to make the descriptor rotation-invariant, it is rotated to fit this orientation.[3] Another common feature detector is SURF (speeded-up robust features).[4] In SURF, the DoG is replaced with a Hessian matrix-based blob detector. 
Also, instead of evaluating gradient histograms, SURF computes the sums of gradient components and the sums of their absolute values.[5] Its use of integral images allows features to be detected extremely quickly with a high detection rate.[6] Therefore, compared to SIFT, SURF is a faster feature detector, with the drawback of less accuracy in feature positions.[5] Another type of feature recently made practical for structure from motion is general curves (e.g., locally an edge with gradients in one direction), part of a technology known as pointless SfM,[7][8] useful when point features are insufficient, as is common in man-made environments.[9] The features detected in all the images are then matched. One of the matching algorithms that track features from one image to another is the Lucas–Kanade tracker.[10] Sometimes some of the features are incorrectly matched, which is why the matches should also be filtered. RANSAC (random sample consensus) is the algorithm usually used to remove the outlier correspondences. In the paper of Fischler and Bolles, RANSAC is used to solve the location determination problem (LDP), where the objective is to determine the points in space that project onto an image into a set of landmarks with known locations.[11] The feature trajectories over time are then used to reconstruct their 3D positions and the camera's motion.[12] An alternative is given by so-called direct approaches, where geometric information (3D structure and camera motion) is directly estimated from the images, without intermediate abstraction to features or corners.[13] There are several approaches to structure from motion. In incremental SfM,[14] camera poses are solved for and added one by one to the collection. In global SfM,[15][16] the poses of all cameras are solved for at the same time. A somewhat intermediate approach is out-of-core SfM, where several partial reconstructions are computed and then integrated into a global solution. 
Structure-from-motion photogrammetry with multi-view stereo provides hyperscale landform models using images acquired from a range of digital cameras and, optionally, a network of ground control points. The technique is not limited in temporal frequency and can provide point cloud data comparable in density and accuracy to those generated by terrestrial and airborne laser scanning, at a fraction of the cost.[17][18][19] Structure from motion is also useful in remote or rugged environments where terrestrial laser scanning is limited by equipment portability and airborne laser scanning is limited by terrain roughness causing loss of data and image foreshortening. The technique has been applied in many settings such as rivers,[20] badlands,[21] sandy coastlines,[22][23] fault zones,[24] landslides,[25][26] and coral reef settings.[27] SfM has also been successfully applied to the assessment of changes,[28] large wood accumulation volume[29] and porosity[30] in fluvial systems, to the characterization of rock masses through the determination of properties such as the orientation and persistence of discontinuities,[31][32] and to the evaluation of the stability of rock-cut slopes.[33] A full range of digital cameras can be utilized, including digital SLRs, compact digital cameras and even smartphones. Generally, though, higher-accuracy data will be achieved with more expensive cameras, which include lenses of higher optical quality. The technique therefore offers exciting opportunities to characterize surface topography in unprecedented detail and, with multi-temporal data, to detect elevation, position and volumetric changes that are symptomatic of earth surface processes. Structure from motion can be placed in the context of other digital surveying methods. Cultural heritage is present everywhere. Its structural control, documentation and conservation is one of humanity's main duties (UNESCO). 
From this point of view, SfM is used to assess the state of a site and to plan and cost maintenance, monitoring and restoration work. Serious constraints often exist: sites may be hard to access, and it may be impossible to install the invasive surveying pillars that traditional surveying routines (such as total stations) require. SfM provides a non-invasive approach, requiring no direct interaction between the structure and any operator. It is sufficiently accurate when only qualitative considerations are needed, and fast enough to respond to a monument's immediate management needs.[34] The first operational phase is a careful preparation of the photogrammetric survey, establishing the relation between the best distance from the object, the focal length, the ground sampling distance (GSD) and the sensor's resolution. With this information, the planned photographic acquisitions are made with a vertical overlap of at least 60% (figure 02).[35] Furthermore, structure-from-motion photogrammetry represents a non-invasive, highly flexible and low-cost methodology for digitizing historical documents.[36]
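The relation between object distance, focal length, pixel size and GSD mentioned in the survey-preparation phase can be sketched numerically. The formula GSD = pixel size × distance / focal length is the standard pinhole relation; the function name and the example numbers are illustrative, not taken from the cited survey protocol:

```python
def survey_plan(pixel_size_mm, focal_mm, target_gsd_mm, image_height_px, overlap=0.6):
    """Plan a photogrammetric acquisition: from the sensor's pixel
    pitch, the focal length and a target ground sampling distance,
    derive the camera-to-object distance and the spacing between
    consecutive exposures for a given fractional overlap."""
    # Pinhole relation: GSD / distance = pixel_size / focal_length
    distance_mm = target_gsd_mm * focal_mm / pixel_size_mm
    # Ground extent covered along the overlap axis of one image
    footprint_mm = target_gsd_mm * image_height_px
    # At 60% overlap, advance 40% of the footprint per exposure
    spacing_mm = (1.0 - overlap) * footprint_mm
    return distance_mm, spacing_mm

# A 5 um pixel pitch, 50 mm lens and 1 mm target GSD put the camera
# about 10 m from the object.
distance, spacing = survey_plan(0.005, 50.0, 1.0, 4000)
```
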
https://en.wikipedia.org/wiki/Structure_from_motion
Emergentismis thephilosophical theorythat higher-level properties or phenomenaemergefrom more basic components, and that these emergent properties are not fully reducible to or predictable from those lower-level parts. A property of asystemis said to be emergent if it is a new outcome of some other properties of the system and their interaction, while it is itself different from them.[1]Within thephilosophy of science, emergentism is analyzed both as it contrasts with and parallelsreductionism.[1][2]This philosophical theory suggests that higher-level properties and phenomena arise from the interactions and organization of lower-level entities yet are not reducible to these simpler components. It emphasizes the idea that the whole is more than the sum of its parts. The concept of emergence can be traced back to ancient philosophical traditions.Aristotle, in particular, suggested that the whole could possess properties that its individual parts did not, laying an early foundation for emergentist thought. This idea persisted through the ages, influencing various schools of thought.[3] The term "emergence" was formally introduced in the 19th century by the philosopher George Henry Lewes. He distinguished between "resultant" and "emergent" properties, where resultant properties could be predicted from the properties of the parts, whereas emergent properties could not. This distinction was crucial in differentiating emergent phenomena from simple aggregative effects.[4] In the early 20th century, emergentism gained further traction through the works of British emergentists like C.D. Broad and Samuel Alexander. C.D. 
Broad, in his 1925 bookThe Mind and Its Place in Nature, argued thatmental stateswere emergent properties of brain processes.[5]Samuel Alexander, in his workSpace, Time, and Deity, suggested that emergent qualities likeconsciousnessandlifecould not be fully explained by the underlying physical processes alone.[6] These philosophers were reacting against the reductionist view that all phenomena could be fully explained by their constituent parts. They argued that emergent properties such as consciousness have their own causal powers and cannot be reduced to or predicted from their base components. This period also saw the influence ofGestalt psychology, which emphasized that psychological phenomena cannot be understood solely by analyzing their component parts, further supporting emergentist ideas.[3] During the mid-20th century, emergentism was somewhat overshadowed by the rise ofbehaviorismand later thecognitive sciences, which often leaned towards more reductionist explanations. However, the concept ofemergencefound renewed interest towards the late 20th century with the advent ofcomplex systemstheory andnon-linear dynamics.[4] In this period, scientists and philosophers began to explore how complex behaviors and properties could arise from relatively simple interactions in systems as diverse as ant colonies, economic markets, andneural networks. This interdisciplinary approach highlighted the ubiquity and importance of emergent phenomena across different domains, fromphysicstobiologytosocial sciences.[3] In recent years, emergentism has continued to evolve, integrating insights from various scientific fields. 
For example, in physics, the study of phenomena such assuperconductivityand the behavior of complexquantum systemshas provided empirical examples of emergent properties.[7]In biology, the study of complexbiological networksand the dynamics ofecosystemshas further illustrated how emergent properties play a crucial role in natural systems.[8] The resurgence of interest inartificial intelligenceandmachine learninghas also contributed to contemporary discussions on emergentism. Researchers in these fields are particularly interested in how intelligent behavior and consciousness might emerge from artificial systems, providing new perspectives and challenges for emergentist theories.[9] Emergentism can be compatible withphysicalism,[10]the theory that the universe is composed exclusively of physical entities, and in particular with the evidence relating changes in the brain with changes in mental functioning. Some varieties of emergentism are not specifically concerned with themind–body problembut constitute a theory of the nature of the universe comparable topantheism.[11]They suggest ahierarchicalor layered view of the whole of nature, with the layers arranged in terms of increasingcomplexitywith each requiring its ownspecial science. Emergentism is underpinned by several core principles that define its theoretical framework and distinguish it from other philosophical doctrines such asreductionismandholism. Emergence refers to the arising of novel andcoherent structures,patterns, and properties during the process ofself-organizationin complex systems. These emergent properties are not predictable from the properties of the individual components alone. Emergent properties are seen as a result of the interactions and relationships between the components of a system, which produce new behaviors and characteristics that are not present in the isolated parts. 
This concept is crucial in understanding why certain phenomena cannot be fully explained by analyzing their parts independently.[3] Emergentism distinguishes between two main types of emergence: weak and strong. Emergent properties are characterized by several key features that distinguish them from simple aggregative properties. The theoretical foundations of emergentism are deeply intertwined with various philosophical theories and debates, particularly those concerning the nature ofreality, the relationship between parts and wholes, and the nature ofcausality. Emergentism contrasts sharply withreductionism, which attempts to explain complex phenomena entirely in terms of their simpler components, andholism, which emphasizes the whole without necessarily addressing the emergence of properties.[3] Emergentism stands in contrast to reductionism, which holds that all phenomena can be fully explained by their constituent parts. Reductionists argue that understanding the basic building blocks of a system provides a complete understanding of the system itself. However, emergentists contend that this approach overlooks the novel properties that arise from complex interactions within a system. For example, while the properties of water can be traced back tohydrogenandoxygenatoms, the wetness ofwatercannot be fully explained by examining theseatomsin isolation.[4] Holism, on the other hand, emphasizes the significance of the whole system, suggesting that the properties of the whole are more important than the properties of the parts. Emergentism agrees with holism to some extent but differs in that it specifically focuses on how new properties emerge from the interactions within the system.
Holism often overlooks the dynamic processes that lead to the emergence of new properties, which are central to emergentism.[3] Emmecheet al.(1998) state that "there is a very important difference between the vitalists and the emergentists: the vitalist's creative forces were relevant only in organic substances, not in inorganic matter. Emergence hence is creation of new properties regardless of the substance involved." "The assumption of an extra-physical vitalis (vital force,entelechy,élan vital, etc.), as formulated in most forms (old or new) of vitalism, is usually without any genuine explanatory power. It has served altogether too often as anintellectual tranquilizer or verbal sedative—stifling scientific inquiry rather than encouraging it to proceed in new directions."[13] Emergentism can be divided into ontological and epistemological categories, each addressing different aspects of emergent properties. A crucial aspect of emergentism is its treatment ofcausality, particularly the concept ofdownward causation. Downward causation refers to the influence that higher-level properties can exert on the behavior of lower-level entities within a system. This idea challenges the traditional view that causation only works from the bottom up, from simpler to more complex levels.[4] Emergentism finds its scientific support and application across various disciplines, illustrating how complex behaviors and properties arise from simpler interactions. These scientific perspectives demonstrate the practical significance of emergentist theories. Inphysics, emergence is observed in phenomena where macroscopic properties arise from the interactions of microscopic components. A classic example issuperconductivity, where the collective behavior ofelectronsin certain materials leads to the phenomenon of zeroelectrical resistance. 
This emergent property cannot be fully explained by the properties of individual electrons alone, but rather by their interactions within the lattice structure of the material.[7] Another significant example isquantum entanglement, where particles become interconnected in such a way that the state of one particle instantly influences the state of another, regardless of the distance between them. This non-local property emerges from the quantum interactions and cannot be predicted merely by understanding the individual particles separately. Such emergent properties challenge classical notions of locality and causality, showcasing the profound implications of emergentism in modern physics.[3] Inthermodynamics, emergent behaviors are observed innon-equilibrium systemswhere patterns and structures spontaneously form. For instance,Bénard cells— a phenomenon where heated fluid formshexagonalconvectioncells — arise fromthermal gradientsandfluid dynamics. Thisself-organizationis an emergent property of the system, highlighting how macro-level order can emerge from micro-level interactions.[4] Emergent phenomena are prevalent inbiology, particularly in the study of life and evolutionary processes. One of the most fundamental examples is the emergence oflifefrom non-living chemical compounds. This process, often studied through the lens ofabiogenesis, involves complexchemical reactionsthat lead toself-replicatingmolecules and eventuallyliving organisms. The properties of life — such asmetabolism,growth, andreproduction— emerge from these molecular interactions and cannot be fully understood by examining individual molecules in isolation.[15] In evolutionary biology, the diversity of life forms arises fromgenetic mutations,natural selection, and environmental interactions. Complex traits such as the eye or the brain emerge over time through evolutionary processes. 
These traits exhibit novel properties that are not predictable from the genetic components alone but result from the dynamic interplay between genes and the environment.[3] Systems biology further illustrates emergent properties in biological networks. For example, metabolic networks whereenzymesand substrates interact exhibit emergent behaviors likerobustnessandadaptability. These properties are crucial for the survival of organisms in changing environments and arise from the complex interconnections within the network.[4] Incognitive science, emergentism plays a crucial role in understandingconsciousnessandcognitive processes. Consciousness is often cited as a paradigmatic example of an emergent property. While neural processes in thebraininvolve electrochemical interactions amongneurons, the subjective experience of consciousness arises from these processes in a way that is not directly reducible to them. This emergence of conscious experience from neural substrates is a central topic in thephilosophy of mindand cognitive science.[16] Artificial intelligence(AI) andmachine learningprovide contemporary examples of emergent behavior in artificial systems. Complex algorithms andneural networkscan learn, adapt, and exhibit intelligent behavior that is not explicitly programmed. For instance,deep learningmodels can recognize patterns and make decisions based on vast amounts of data, demonstrating emergentintelligencefrom simpler computational rules. This emergent behavior in AI systems reflects the principles of emergentism, where higher-level functions arise from the interaction of lower-level components.[9] Emergentism andlanguageare intricately connected through the concept that linguistic properties and structures arise from simpler interactions among cognitive, communicative and social processes. 
This perspective provides a dynamic view oflanguage development,structure, andevolution, emphasizing the role of interaction andadaptationover innate or static principles. This connection can be explored from several angles: Literary emergentism is a trend in literary theory. It arises as a reaction against traditional interpretive approaches –hermeneutics,structuralism,semiotics, etc., accusing them of analyticalreductionismand lack of hierarchy. Literary emergentism claims to describe the emergence of a text as contemplative logic consisting of seven degrees, similar to the epistemological doctrine ofRudolf Steinerin hisPhilosophy of Freedom.[17]There are also references toTerrence Deacon, author of the theory of Incomplete nature, according to whom the emergent perspective is metaphysical, whereas the human consciousness emerges as an incessant creation of something from nothing.[18]According toDimitar Kalev, in all modern literary-theoretical discourses, there is an epistemological "gap" present between the sensory-imagery phenomena of reading and their proto-phenomena from the text.[19]Therefore, in any attempt at literary reconstructions, certain "destruction" is reached, which, from an epistemological point of view, is a designation of the existing transcendence as some "interruption" of the divine "top-down". The emergentist approach does not interpret the text but rather reconstructs its becoming, identifying itself with the contemplative logic of the writer, claiming that it possesses a being of ideal objectivity and universal accessibility. Emergentism, like any philosophical theory, has been subject to various criticisms and debates. These discussions revolve around the validity of emergent properties, the explanatory power of emergentism, and its implications for other areas of philosophy and science. 
These criticisms and debates highlight the dynamic and evolving nature of emergentism, reflecting its impact and relevance across various fields of inquiry. By addressing these challenges, proponents of emergentism continue to refine and strengthen their theoretical framework. Emergentism finds applications across various scientific and philosophical domains, illustrating how complex behaviors and properties can arise from simpler interactions. These applications underscore the practical relevance of emergentist theories and their impact on understanding complex systems, highlighting the interdisciplinary impact of emergentist theories. Emergentism has been significantly shaped and debated by numerous philosophers and scientists over the years. Here are notable figures who have contributed to the development and discourse of emergentism, providing a rich tapestry of ideas and empirical evidence that support the theory's application across various domains:

Aristotle. Contribution: One of the earliest thinkers to suggest that the whole could possess properties that its individual parts did not. This idea laid the foundational groundwork for emergentist thought by emphasizing that certain phenomena cannot be fully explained by their individual components alone. Major Work: Metaphysics[22]

George Henry Lewes. Contribution: Formally introduced the term "emergence" in the 19th century. He distinguished between "resultant" and "emergent" properties, where emergent properties could not be predicted from the properties of the parts, a critical distinction in emergentist theory. Major Work: Problems of Life and Mind[23]

John Stuart Mill. Contribution: Early proponent of emergentism in social and political contexts. Mill's work emphasized the importance of understanding social phenomena as more than the sum of individual actions, highlighting the emergent properties in societal systems. Major Work: A System of Logic[24]

C. D. Broad. Contribution: In his 1925 book The Mind and Its Place in Nature, Broad argued that mental states were emergent properties of brain processes. He developed a comprehensive philosophical framework for emergentism, advocating for the irreducibility of higher-level properties. Major Work: The Mind and Its Place in Nature[5]

Samuel Alexander. Contribution: In his work Space, Time, and Deity, Alexander suggested that emergent qualities like consciousness and life could not be fully explained by underlying physical processes alone, emphasizing the novelty and unpredictability of emergent properties. Major Work: Space, Time, and Deity[6]

Jaegwon Kim. Contribution: A prominent critic and commentator on emergentism. Kim extensively analyzed the limits and scope of emergent properties, particularly in the context of mental causation and the philosophy of mind, questioning the coherence and causal efficacy of emergent properties. Major Work: Mind in a Physical World[14]

Michael Polanyi. Contribution: Advanced the idea that emergent properties are irreducible and possess their own causal powers. Polanyi's work in chemistry and the philosophy of science provided empirical and theoretical support for emergentist concepts, especially in complex systems and hierarchical structures. Major Work: Personal Knowledge[25]

Philip W. Anderson. Contribution: Nobel laureate in physics, Anderson's work on condensed-matter physics and the theory of superconductivity provided significant empirical examples of emergent phenomena. His famous essay "More Is Different" argued for the necessity of emergentist explanations in physics. Major Work: More Is Different[26]

Stuart Kauffman. Contribution: A theoretical biologist whose work on complex systems and self-organization highlighted the role of emergence in biological evolution and the origin of life. Kauffman emphasized the unpredictability and novelty of emergent biological properties. Major Work: The Origins of Order[8]

Roger Sperry. Contribution: Neuropsychologist and Nobel laureate, Sperry's split-brain research contributed to the understanding of consciousness as an emergent property of brain processes. He argued that emergent mental properties have causal efficacy, influencing the lower-level neural processes. Major Work: Science and Moral Priority[27]

Terrence Deacon. Contribution: Anthropologist and neuroscientist, Deacon's work on the evolution of language and human cognition explored how emergent properties arise from neural and social interactions. His book Incomplete Nature delves into the emergentist explanation of life and mind. Major Work: Incomplete Nature: How Mind Emerged from Matter[28]

Steven Johnson. Contribution: An author and theorist whose popular science books, such as Emergence: The Connected Lives of Ants, Brains, Cities, and Software, have brought the concept of emergentism to a broader audience. Johnson illustrates how complex systems in nature and society exhibit emergent properties. Major Work: Emergence: The Connected Lives of Ants, Brains, Cities, and Software[9]

Emergentism offers a valuable framework for understanding complex systems and phenomena that cannot be fully explained by their constituent parts. Its interdisciplinary nature and broad applicability make it a significant area of study in both philosophy and science. Future research will continue to explore the implications and potential of emergent properties, contributing to our understanding of the natural world.
https://en.wikipedia.org/wiki/Emergentism
In computer science, array-access analysis is a compiler analysis approach used to determine the read and write access patterns to elements or portions of arrays.[1] The major data type manipulated in scientific programs is the array. Define/use analysis on a whole array is insufficient for aggressive compiler optimizations such as auto-parallelization and array privatization. Array-access analysis aims to determine which portions, or even which elements, of an array are accessed by a given code segment (a basic block, a loop, or even at the procedure level). Array-access analysis can be broadly categorized into exact (or reference-list-based) methods and summary methods, offering different trade-offs of accuracy and complexity. Exact methods are precise but very costly in computation and storage, while summary methods are approximate but can be computed quickly and economically. Typical exact array-access analyses include linearization and atom images. Summary methods can be further divided into array sections, bounded regular sections using triplet notation, and linear-constraint methods such as data-access descriptors and array-region analysis.
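Bounded regular sections in triplet notation can be sketched as follows; the class and helper names are illustrative, not taken from any particular compiler:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Triplet:
    """A bounded regular section [lower : upper : stride] summarizing
    which elements of one array dimension a code segment may touch."""
    lower: int
    upper: int
    stride: int

def affine_access(a, b, i_lo, i_hi):
    """Summarize the accesses A[a*i + b] for i in [i_lo, i_hi], a > 0."""
    return Triplet(a * i_lo + b, a * i_hi + b, a)

def may_overlap(s, t):
    """Conservative overlap test between two sections: exact when the
    strides agree, otherwise assume overlap if the ranges intersect."""
    if s.upper < t.lower or t.upper < s.lower:
        return False                      # disjoint index ranges
    if s.stride == t.stride:
        return (t.lower - s.lower) % s.stride == 0
    return True

# In "for i in 0..9: A[2*i] = A[2*i+1]" the loop writes the even
# elements and reads the odd ones; the summaries never meet, so the
# iterations are independent and the loop can be parallelized.
writes = affine_access(2, 0, 0, 9)   # elements 0, 2, ..., 18
reads  = affine_access(2, 1, 0, 9)   # elements 1, 3, ..., 19
```

This is the accuracy/cost trade-off in miniature: the triplet summary is tiny and cheap to intersect, but any access pattern that is not a single affine stride must be over-approximated.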
https://en.wikipedia.org/wiki/Array_access_analysis
Incomputer programming,machine codeiscomputer codeconsisting ofmachine languageinstructions, which are used to control a computer'scentral processing unit(CPU). For conventionalbinary computers, machine code is the binary[nb 1]representation of a computer program that is actually read and interpreted by the computer. A program in machine code consists of a sequence of machine instructions (possibly interspersed with data).[1] Each machine code instruction causes the CPU to perform a specific task. Examples of such tasks include: In general, each architecture family (e.g.,x86,ARM) has its owninstruction set architecture(ISA), and hence its own specific machine code language. There are exceptions, such as theVAXarchitecture, which includes optional support of thePDP-11instruction set; theIA-64architecture, which includes optional support of theIA-32instruction set; and thePowerPC 615microprocessor, which can natively process bothPowerPCand x86 instruction sets. Machine code is a strictly numerical language, and it is the lowest-level interface to the CPU intended for a programmer.Assembly languageprovides a direct map between the numerical machine code and a human-readable mnemonic. In assembly, numericalopcodesand operands are replaced with mnemonics and labels. For example, thex86architecture has available the 0x90 opcode; it is represented asNOPin the assemblysource code. While it is possible to write programs directly in machine code, managing individual bits and calculating numericaladdressesis tedious and error-prone. Therefore, programs are rarely written directly in machine code. However, an existing machine code program may be edited if the assembly source code is not available. The majority of programs today are written in ahigh-level language. A high-level program may be translated into machine code by acompiler. Every processor or processor family has its owninstruction set. 
Machine instructions are patterns ofbits[nb 2]that specify some particular action.[2]An instruction set is described by itsinstruction format, and instruction formats may differ in several ways.[2] A processor's instruction set is executed by the circuits of a computer'sdigital logic level. At the digital level, the program needs to control the computer's registers, bus, memory, ALU, and other hardware components.[3]To control a computer'sarchitecturalfeatures, machine instructions are created. One criterion for instruction formats is the size of the address field, a choice between space and speed.[7]On some computers, the number of bits in the address field may be too small to access all of the physical memory. Also,virtual address spaceneeds to be considered. Another constraint may be a limitation on the size of registers used to construct the address. Whereas a shorter address field allows the instructions to execute more quickly, other physical properties need to be considered when designing the instruction format. Instructions can be separated into two types: general-purpose and special-purpose. Special-purpose instructions exploit architectural features that are unique to a computer.
General-purpose instructions control architectural features common to all computers.[8] A much more human-friendly rendition of machine language, namedassembly language, usesmnemonic codesto refer to machine code instructions, rather than using the instructions' numeric values directly, and usessymbolic namesto refer to storage locations and sometimesregisters.[9]For example, on theZilog Z80processor, the machine code00000101, which causes the CPU to decrement theBgeneral-purpose register, would be represented in assembly language asDEC B.[10] TheIBM 704, 709, 704x and 709xstore one instruction in each instruction word; IBM numbers the bits from the left as S, 1, ..., 35. Most instructions have one of two formats. For all but theIBM 7094and 7094 II, there are three index registers designated A, B and C; indexing with multiple 1 bits in the tag subtracts thelogical orof the selected index registers, and loading with multiple 1 bits in the tag loads all of the selected index registers. The 7094 and 7094 II have seven index registers, but when they are powered on they are inmultiple tag mode, in which they use only three of the index registers, in a fashion compatible with earlier machines, and require a Leave Multiple Tag Mode (LMTM) instruction in order to access the other four index registers. The effective address is normally Y-C(T), where C(T) is either 0 for a tag of 0, the logical or of the selected index registers in multiple tag mode, or the selected index register if not in multiple tag mode. However, the effective address for index register control instructions is just Y. A flag with both bits 1 selects indirect addressing; the indirect address word has both a tag and a Y field.
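The one-to-one mapping between mnemonics and machine code can be sketched with a tiny lookup table. The opcodes below are real single-byte Z80 encodings, but the table and function names are illustrative, not any actual assembler's API:

```python
# A sliver of the Zilog Z80 instruction set: one-byte opcodes and
# their assembly mnemonics (no-operand encodings only).
Z80_OPCODES = {
    0x00: "NOP",
    0x04: "INC B",
    0x05: "DEC B",   # 00000101: decrement the B register
    0x76: "HALT",
}
# The mapping is one-to-one, so assembling is just the inverse table.
Z80_MNEMONICS = {mnem: op for op, mnem in Z80_OPCODES.items()}

def disassemble(code):
    """Decode a byte string of (single-byte) instructions to mnemonics."""
    return [Z80_OPCODES[b] for b in code]
```

For example, `disassemble(bytes([0x05, 0x76]))` yields `["DEC B", "HALT"]`; a real disassembler must additionally handle multi-byte opcodes and operands.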
In addition totransfer(branch) instructions, these machines have skip instructions that conditionally skip one or two words; e.g., Compare Accumulator with Storage (CAS) does a three-way compare and conditionally skips to NSI, NSI+1 or NSI+2, depending on the result. TheMIPS architectureprovides a specific example of a machine code whose instructions are always 32 bits long.[11]: 299 The general type of instruction is given by theop(operation) field, the highest 6 bits. J-type (jump) and I-type (immediate) instructions are fully specified byop. R-type (register) instructions include an additional fieldfunctto determine the exact operation. The fields used in these types are: rs,rt, andrdindicate register operands;shamtgives a shift amount; and theaddressorimmediatefields contain an operand directly.[11]: 299–301 For example, adding the registers 1 and 2 and placing the result in register 6 is encoded:[11]: 554

000000 00001 00010 00110 00000 100000  (op, rs, rt, rd, shamt, funct)

Loading a value into register 8, taken from the memory cell 68 cells after the location listed in register 3:[11]: 552

100011 00011 01000 0000000001000100  (op, rs, rt, immediate)

Jumping to the address 1024, where the 26-bit address field holds the word address 1024/4 = 256:[11]: 552

000010 00000000000000000100000000  (op, address)

On processor architectures withvariable-length instruction sets[12](such asIntel'sx86processor family) it is, within the limits of the control-flowresynchronizingphenomenon known as theKruskal count,[13][12][14][15][16]sometimes possible through opcode-level programming to deliberately arrange the resulting code so that two code paths share a common fragment of opcode sequences.[nb 3]These are calledoverlapping instructions,overlapping opcodes,overlapping code,overlapped code,instruction scission, orjump into the middle of an instruction.[17][18][19] In the 1970s and 1980s, overlapping instructions were sometimes used to preserve memory space.
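The three MIPS formats can be packed with plain bit shifts. Below is a minimal sketch of the field layout described above; the helper names are illustrative, while the field widths and the examples' register numbers follow the text:

```python
def r_type(op, rs, rt, rd, shamt, funct):
    """R-type: op(6) rs(5) rt(5) rd(5) shamt(5) funct(6)."""
    return (op << 26) | (rs << 21) | (rt << 16) | (rd << 11) | (shamt << 6) | funct

def i_type(op, rs, rt, imm):
    """I-type: op(6) rs(5) rt(5) immediate(16)."""
    return (op << 26) | (rs << 21) | (rt << 16) | (imm & 0xFFFF)

def j_type(op, target):
    """J-type: op(6) address(26); the field holds the word address."""
    return (op << 26) | ((target >> 2) & 0x3FFFFFF)

add_6_1_2 = r_type(0b000000, 1, 2, 6, 0, 0b100000)  # add $6, $1, $2
lw_8_68_3 = i_type(0b100011, 3, 8, 68)              # lw  $8, 68($3)
j_1024    = j_type(0b000010, 1024)                  # j   1024
```
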
One example was the implementation of error tables inMicrosoft'sAltair BASIC, whereinterleaved instructionsmutually shared their instruction bytes.[20][12][17]The technique is rarely used today, but may still be necessary where extreme byte-level optimization for size is required, such as in the implementation ofboot loaderswhich have to fit intoboot sectors.[nb 4] It is also sometimes used as acode obfuscationtechnique as a measure againstdisassemblyand tampering.[12][15] The principle is also used in shared code sequences offat binarieswhich must run on multiple instruction-set-incompatible processor platforms.[nb 3] This property is also used to findunintended instructionscalledgadgetsin existing code repositories and is used inreturn-oriented programmingas an alternative tocode injectionfor exploits such asreturn-to-libc attacks.[21][12] In some computers, the machine code of thearchitectureis implemented by an even more fundamental underlying layer calledmicrocode, providing a common machine language interface across a line or family of different models of computer with widely different underlyingdataflows. This is done to facilitateportingof machine language programs between different models. An example of this use is the IBMSystem/360family of computers and their successors. Machine code is generally different frombytecode(also known as p-code), which is either executed by an interpreter or itself compiled into machine code for faster (direct) execution. An exception is when a processor is designed to use a particular bytecode directly as its machine code, such as is the case withJava processors. Machine code and assembly code are sometimes callednativecodewhen referring to platform-dependent parts of language features or libraries.[22] From the point of view of the CPU, machine code is stored in RAM, but is typically also kept in a set of caches for performance reasons.
There may be different caches for instructions and data, depending on the architecture. The CPU knows what machine code to execute based on its internal program counter. The program counter points to a memory address and is changed based on special instructions which may cause programmatic branches. The program counter is typically set to a hard-coded value when the CPU is first powered on, and will hence execute whatever machine code happens to be at this address. Similarly, the program counter can be set to execute whatever machine code is at some arbitrary address, even if this is not valid machine code. This will typically trigger an architecture-specific protection fault. In a paging-based system, page permissions often tell the CPU, via an execute bit, whether the current page actually holds machine code; pages have multiple such permission bits (readable, writable, etc.) for various housekeeping functionality. For example, onUnix-likesystems memory pages can be toggled to be executable with themprotect()system call, and on Windows,VirtualProtect()can be used to achieve a similar result. If an attempt is made to execute machine code on a non-executable page, an architecture-specific fault will typically occur. Treatingdata as machine code, or finding new ways to use existing machine code, by various techniques, is the basis of some security vulnerabilities. Similarly, in a segment-based system, segment descriptors can indicate whether a segment can contain executable code and in whatringsthat code can run. From the point of view of aprocess, thecode spaceis the part of itsaddress spacewhere the code in execution is stored. Inmultitaskingsystems this comprises the program'scode segmentand usuallyshared libraries. In amulti-threadingenvironment, different threads of one process share code space along with data space, which reduces the overhead ofcontext switchingconsiderably as compared to process switching.
Machine code can be seen as a set of electrical pulses that make the instructions readable to the computer; it is not readable by humans,[23] with Douglas Hofstadter comparing it to examining the atoms of a DNA molecule.[24] However, various tools and methods exist to decode machine code to human-readable source code. One such method is disassembly, which decodes machine code back to its corresponding assembly-language source code, because assembly language forms a one-to-one mapping to machine code.[25] Machine code may also be decoded to a high-level language under two conditions. The first condition is to accept an obfuscated reading of the source code. An obfuscated version of source code is displayed if the machine code is sent to a decompiler of the source language. The second condition requires the machine code to have information about the source code encoded within. This information includes a symbol table that contains debug symbols. The symbol table may be stored within the executable, or it may exist in separate files. A debugger can then read the symbol table to help the programmer interactively debug the machine code in execution.
https://en.wikipedia.org/wiki/Overlapping_code
In predicate logic, an existential quantification is a type of quantifier, a logical constant which is interpreted as "there exists", "there is at least one", or "for some". It is usually denoted by the logical operator symbol ∃, which, when used together with a predicate variable, is called an existential quantifier ("∃x" or "∃(x)" or "(∃x)"[1]). Existential quantification is distinct from universal quantification ("for all"), which asserts that the property or relation holds for all members of the domain.[2][3] Some sources use the term existentialization to refer to existential quantification.[4] Quantification in general is covered in the article on quantification (logic). The existential quantifier is encoded as U+2203 ∃ THERE EXISTS in Unicode, and as \exists in LaTeX and related formula editors. Consider the formal sentence "For some natural number n, n×n = 25." This is a single statement using existential quantification. It is roughly analogous to the informal sentence "Either 0×0 = 25, or 1×1 = 25, or 2×2 = 25, or ... and so on," but more precise, because it doesn't need us to infer the meaning of the phrase "and so on." (In particular, the sentence explicitly specifies its domain of discourse to be the natural numbers, not, for example, the real numbers.) This particular example is true, because 5 is a natural number, and when we substitute 5 for n, we produce the true statement 5×5 = 25. It does not matter that "n×n = 25" is true only for that single natural number, 5; the existence of a single solution is enough to prove this existential quantification to be true. In contrast, "For some even number n, n×n = 25" is false, because there are no even solutions.
The domain of discourse, which specifies the values the variable n is allowed to take, is therefore critical to a statement's truth or falsity. Logical conjunctions are used to restrict the domain of discourse to fulfill a given predicate: a sentence quantifying over a restricted domain is logically equivalent to a sentence quantifying over the full domain with the restriction added as a conjunct. The mathematical proof of an existential statement about "some" object may be achieved either by a constructive proof, which exhibits an object satisfying the "some" statement, or by a nonconstructive proof, which shows that there must be such an object without concretely exhibiting one. In symbolic logic, "∃" (a turned letter "E" in a sans-serif font, Unicode U+2203) is used to indicate existential quantification. For example, the notation ∃n∈N : n×n = 25 represents the (true) statement "There exists some natural number n such that n×n = 25." The symbol's first usage is thought to be by Giuseppe Peano in Formulario mathematico (1896). Afterwards, Bertrand Russell popularised its use as the existential quantifier. Through his research in set theory, Peano also introduced the symbols ∩ and ∪ to respectively denote the intersection and union of sets.[5] A quantified propositional function is a statement; thus, like statements, quantified functions can be negated. The ¬ symbol is used to denote negation. For example, if P(x) is the predicate "x is greater than 0 and less than 1", then, for a domain of discourse X of all natural numbers, the existential quantification "There exists a natural number x which is greater than 0 and less than 1" can be symbolically stated as ∃x∈X P(x). This can be demonstrated to be false. Truthfully, it must be said, "It is not the case that there is a natural number x that is greater than 0 and less than 1", or, symbolically, ¬∃x∈X P(x). If there is no element of the domain of discourse for which the statement is true, then it must be false for all of those elements.
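Over a finite domain, existential quantification corresponds directly to Python's any(); the sketch below (with bounded ranges standing in for the naturals, an assumption made for computability) shows how the domain of discourse decides the truth value of the n×n = 25 example.

```python
# Existential quantification over a finite domain with any():
# "there exists n in D such that n*n == 25".
def exists_square_root_of_25(domain):
    return any(n * n == 25 for n in domain)

naturals = range(0, 100)    # domain: natural numbers up to 99
evens = range(0, 100, 2)    # restricted domain: even numbers only

print(exists_square_root_of_25(naturals))  # True  (witness: n = 5)
print(exists_square_root_of_25(evens))     # False (no even solution)
```

A single witness (n = 5) makes the first quantification true; restricting the domain to even numbers removes the witness and the statement becomes false.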
That is, the negation of ∃x∈X P(x) is logically equivalent to "For any natural number x, x is not greater than 0 and less than 1", or ∀x∈X ¬P(x). Generally, then, the negation of a propositional function's existential quantification is a universal quantification of that propositional function's negation; symbolically, ¬∃x∈X P(x) ≡ ∀x∈X ¬P(x). (This is a generalization of De Morgan's laws to predicate logic.) A common error is stating "all persons are not married" (i.e., "there exists no person who is married"), when "not all persons are married" (i.e., "there exists a person who is not married") is intended. Negation is also expressible through a statement of "for no", as opposed to "for some". Unlike the universal quantifier, the existential quantifier distributes over logical disjunctions: ∃x∈X (P(x) ∨ Q(x)) → (∃x∈X P(x) ∨ ∃x∈X Q(x)). A rule of inference is a rule justifying a logical step from hypothesis to conclusion. There are several rules of inference which utilize the existential quantifier. Existential introduction (∃I) concludes that, if the propositional function is known to be true for a particular element of the domain of discourse, then it must be true that there exists an element for which the propositional function is true. Existential instantiation, when conducted in a Fitch-style deduction, proceeds by entering a new sub-derivation while substituting an existentially quantified variable for a subject, which does not appear within any active sub-derivation. If a conclusion can be reached within this sub-derivation in which the substituted subject does not appear, then one can exit that sub-derivation with that conclusion.
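The quantifier form of De Morgan's law can be checked mechanically on a finite domain with any() and all(), using the "greater than 0 and less than 1" predicate from the running example (the bounded range standing in for the naturals is an assumption for computability):

```python
# De Morgan for quantifiers, checked on a finite domain:
#   not (exists x: P(x))  <=>  forall x: not P(x)
def P(x):                  # "x is greater than 0 and less than 1"
    return 0 < x < 1

domain = range(0, 50)      # natural numbers 0..49

lhs = not any(P(x) for x in domain)   # negated existential
rhs = all(not P(x) for x in domain)   # universal of the negation
print(lhs, rhs)  # both True: no natural number lies strictly in (0, 1)
```

The two sides agree for every predicate and finite domain, which is exactly the equivalence ¬∃x P(x) ≡ ∀x ¬P(x).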
The reasoning behind existential elimination (∃E) is as follows: If it is given that there exists an element for which the propositional function is true, and if a conclusion can be reached by giving that element an arbitrary name, that conclusion is necessarily true, as long as it does not contain the name. Symbolically, for an arbitrary c and for a proposition Q in which c does not appear: P(c) → Q must be true for all values of c over the same domain X; else, the logic does not follow: If c is not arbitrary, and is instead a specific element of the domain of discourse, then stating P(c) might unjustifiably give more information about that object. The formula ∃x∈∅ P(x) is always false, regardless of P(x). This is because ∅ denotes the empty set, and no x of any description (let alone an x fulfilling a given predicate P(x)) exists in the empty set. See also Vacuous truth for more information. In category theory and the theory of elementary topoi, the existential quantifier can be understood as the left adjoint of a functor between power sets, the inverse image functor of a function between sets; likewise, the universal quantifier is the right adjoint.[6]
https://en.wikipedia.org/wiki/Existential_quantification
In probability theory and statistics, the Conway–Maxwell–Poisson (CMP or COM–Poisson) distribution is a discrete probability distribution named after Richard W. Conway, William L. Maxwell, and Siméon Denis Poisson that generalizes the Poisson distribution by adding a parameter to model overdispersion and underdispersion. It is a member of the exponential family,[1] has the Poisson distribution and geometric distribution as special cases, and has the Bernoulli distribution as a limiting case.[2] The CMP distribution was originally proposed by Conway and Maxwell in 1962[3] as a solution to handling queueing systems with state-dependent service rates. The CMP distribution was introduced into the statistics literature by Boatwright et al. (2003)[4] and Shmueli et al. (2005).[2] The first detailed investigation into the probabilistic and statistical properties of the distribution was published by Shmueli et al. (2005).[2] Some theoretical probability results of the COM-Poisson distribution, especially its characterizations, are studied and reviewed by Li et al. (2019).[5] The CMP distribution is defined to be the distribution with probability mass function P(X = x) = λ^x / ((x!)^ν Z(λ, ν)), for x = 0, 1, 2, …. The function Z(λ, ν) serves as a normalization constant so the probability mass function sums to one. Note that Z(λ, ν) does not have a closed form. The domain of admissible parameters is λ > 0, ν > 0, together with the boundary case 0 < λ < 1, ν = 0. The additional parameter ν, which does not appear in the Poisson distribution, allows for adjustment of the rate of decay. This rate of decay is a non-linear decrease in ratios of successive probabilities, specifically P(X = x − 1)/P(X = x) = x^ν/λ. When ν = 1, the CMP distribution becomes the standard Poisson distribution, and as ν → ∞, the distribution approaches a Bernoulli distribution with parameter λ/(1 + λ).
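The definition can be sketched numerically: with the pmf λ^x/((x!)^ν Z(λ, ν)) and Z approximated by a truncated sum (the truncation level of 60 terms is a practical assumption, ample for these parameters), the ν = 1 case matches the ordinary Poisson pmf.

```python
from math import exp, factorial

def cmp_pmf(x, lam, nu, terms=60):
    """P(X = x) for CMP(lam, nu); Z(lam, nu) approximated by truncation."""
    z, t = 0.0, 1.0                   # t = lam**i / (i!)**nu, starting at i = 0
    for i in range(terms):
        z += t
        t *= lam / (i + 1) ** nu      # ratio of successive terms is lam / (i+1)**nu
    return lam ** x / (factorial(x) ** nu * z)

def poisson_pmf(x, lam):
    return exp(-lam) * lam ** x / factorial(x)

# nu = 1 recovers the ordinary Poisson distribution:
lam = 3.0
max_diff = max(abs(cmp_pmf(x, lam, 1.0) - poisson_pmf(x, lam)) for x in range(10))
print(f"largest |CMP(3,1) - Poisson(3)| pmf difference: {max_diff:.2e}")
```

Computing successive terms by their ratio avoids overflowing large factorial powers, mirroring the successive-probability-ratio form of the definition.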
When ν = 0, the CMP distribution reduces to a geometric distribution with probability of success 1 − λ, provided λ < 1.[2] For the CMP distribution, moments can be found through a recursive formula.[2] For general ν, there does not exist a closed-form formula for the cumulative distribution function of X ~ CMP(λ, ν). If ν ≥ 1 is an integer, however, a formula can be obtained in terms of the generalized hypergeometric function.[6] Many important summary statistics, such as moments and cumulants, of the CMP distribution can be expressed in terms of the normalizing constant Z(λ, ν).[2][7] Indeed, the probability generating function is E[s^X] = Z(sλ, ν)/Z(λ, ν), and the mean, variance, cumulant generating function and cumulants can all be written in terms of Z as well. Whilst the normalizing constant Z(λ, ν) = Σ_{i=0}^∞ λ^i/(i!)^ν does not in general have a closed form, there are some noteworthy special cases. Because the normalizing constant does not in general have a closed form, the following asymptotic expansion is of interest. Fix ν > 0. Then, as λ → ∞, Z(λ, ν) admits an asymptotic expansion whose coefficients c_j are uniquely determined;[8] in particular, c_0 = 1, c_1 = (ν² − 1)/24, and c_2 = ((ν² − 1)/1152)(ν² + 23). Further coefficients are given in.[8] For general values of ν, there do not exist closed-form formulas for the mean, variance and moments of the CMP distribution.
We do, however, have the following neat formula.[7] Let (j)_r = j(j−1)⋯(j−r+1) denote the falling factorial. Let X ~ CMP(λ, ν), λ, ν > 0. Then E[((X)_r)^ν] = λ^r for r ∈ N. Since in general closed-form formulas are not available for moments and cumulants of the CMP distribution, the following asymptotic formulas are of interest. Let X ~ CMP(λ, ν), where ν > 0. Denote the skewness γ₁ = κ₃/σ³ and excess kurtosis γ₂ = κ₄/σ⁴, where σ² = Var(X). Then asymptotic formulas are available as λ → ∞.[8] The asymptotic series for κ_n holds for all n ≥ 2, and κ₁ = E[X]. When ν is an integer, explicit formulas for moments can be obtained. The case ν = 1 corresponds to the Poisson distribution. Suppose now that ν = 2. For m ∈ N, moments can be expressed in terms of I_r(x), the modified Bessel function of the first kind.[7] Using the connecting formula for moments and factorial moments gives the mean of X; also, since E[X²] = λ, the variance follows. Suppose now that ν ≥ 1 is an integer. Then[6] the moments can be expressed via generalized hypergeometric functions; in particular, Var(X) = (λ²/2^{ν−1}) · ₀F_{ν−1}(; 3, …, 3; λ)/₀F_{ν−1}(; 1, …, 1; λ) + E[X] − (E[X])². Let X ~ CMP(λ, ν).
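The neat formula referenced here reads, in the notation of [7], E[((X)_r)^ν] = λ^r; it follows by induction from the Stein-type identity E[λf(X+1)] = E[X^ν f(X)] with f(x) = ((x−1)_{r−1})^ν. A numerical spot-check against a truncated pmf (truncation at 60 terms is a practical assumption):

```python
def cmp_pmf_table(lam, nu, terms=60):
    """Truncated, normalized pmf table for CMP(lam, nu)."""
    w, t = [], 1.0                    # t = lam**i / (i!)**nu
    for _ in range(terms):
        w.append(t)
        t *= lam / len(w) ** nu
    z = sum(w)
    return [p / z for p in w]

def falling(j, r):
    """Falling factorial (j)_r = j(j-1)...(j-r+1)."""
    prod = 1
    for k in range(r):
        prod *= j - k
    return prod

lam, nu = 2.5, 1.7
pmf = cmp_pmf_table(lam, nu)
moments = [sum(p * falling(x, r) ** nu for x, p in enumerate(pmf))
           for r in range(1, 5)]
print([round(m, 6) for m in moments])  # close to lam**r = 2.5, 6.25, 15.625, 39.0625
```

Note that (x)_r vanishes for x < r, so the summand is well defined even for non-integer ν.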
Then the mode of X is ⌊λ^{1/ν}⌋ if λ^{1/ν} is not an integer. Otherwise, the modes of X are λ^{1/ν} and λ^{1/ν} − 1.[7] A formula is available for the mean deviation of X^ν about its mean λ.[7] No explicit formula is known for the median of X, but an asymptotic result as λ → ∞ is available.[7] Let X ~ CMP(λ, ν), and suppose that f : Z⁺ → R is such that E|f(X+1)| < ∞ and E|X^ν f(X)| < ∞. Then E[λf(X+1) − X^ν f(X)] = 0. Conversely, suppose now that W is a real-valued random variable supported on Z⁺ such that E[λf(W+1) − W^ν f(W)] = 0 for all bounded f : Z⁺ → R. Then W ~ CMP(λ, ν).[7] Let Y_n have the Conway–Maxwell–binomial distribution with parameters n, p = λ/n^ν and ν. Fix λ > 0 and ν > 0. Then Y_n converges in distribution to the CMP(λ, ν) distribution as n → ∞.[7] This result generalises the classical Poisson approximation of the binomial distribution.
More generally, the CMP distribution arises as a limiting distribution of the Conway–Maxwell–Poisson binomial distribution.[7] Apart from the fact that the COM-binomial distribution approximates the COM-Poisson, Zhang et al. (2018)[9] show that the COM-negative binomial distribution converges to a limiting distribution, the COM-Poisson, as r → +∞. There are a few methods of estimating the parameters of the CMP distribution from data. Two methods will be discussed: weighted least squares and maximum likelihood. The weighted least squares approach is simple and efficient but lacks precision. Maximum likelihood, on the other hand, is precise, but is more complex and computationally intensive. Weighted least squares provides a simple, efficient method to derive rough estimates of the parameters of the CMP distribution and determine whether the distribution would be an appropriate model. Following the use of this method, an alternative method should be employed to compute more accurate estimates of the parameters if the model is deemed appropriate. This method uses the relationship of successive probabilities discussed above. By taking logarithms of both sides of this equation, the following linear relationship arises: log(p_{x−1}/p_x) = ν log x − log λ, where p_x denotes P(X = x). When estimating the parameters, the probabilities can be replaced by the relative frequencies of x and x − 1. To determine whether the CMP distribution is an appropriate model, these values should be plotted against log x for all ratios without zero counts. If the data appear to be linear, then the model is likely to be a good fit. Once the appropriateness of the model is determined, the parameters can be estimated by fitting a regression of log(p̂_{x−1}/p̂_x) on log x.
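The regression idea can be sketched in a few lines. With exact probabilities, the relation log(p_{x−1}/p_x) = ν log x − log λ holds exactly (the normalizing constant cancels in the ratio), so even a plain unweighted least-squares line recovers the parameters; the weighting discussed next matters only when the probabilities are replaced by sampled relative frequencies.

```python
from math import exp, log

def cmp_weights(lam, nu, terms=40):
    """Unnormalized CMP weights lam**x / (x!)**nu, computed by term ratios."""
    w, t = [], 1.0
    for _ in range(terms):
        w.append(t)
        t *= lam / len(w) ** nu
    return w

lam, nu = 1.8, 2.3
w = cmp_weights(lam, nu)

# log(p_{x-1}/p_x) = nu*log(x) - log(lam); Z cancels in the ratio
xs = [log(x) for x in range(1, 15)]
ys = [log(w[x - 1] / w[x]) for x in range(1, 15)]

# ordinary least-squares slope and intercept
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
slope = sum((a - mx) * (b - my) for a, b in zip(xs, ys)) \
        / sum((a - mx) ** 2 for a in xs)
nu_hat, lam_hat = slope, exp(-(my - slope * mx))
print(f"recovered nu = {nu_hat:.4f}, lambda = {lam_hat:.4f}")
```

The slope estimates ν and the intercept estimates −log λ, exactly as the plotting procedure in the text prescribes.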
However, the basic assumption of homoscedasticity is violated, so a weighted least squares regression must be used. The inverse weight matrix has the variances of each ratio on the diagonal and the one-step covariances on the first off-diagonal. The CMP likelihood function involves the statistics S₁ = Σ_{i=1}^n x_i and S₂ = Σ_{i=1}^n log x_i!. Maximizing the likelihood yields two equations which do not have an analytic solution. Instead, the maximum likelihood estimates are approximated numerically by the Newton–Raphson method. In each iteration, the expectations, variances, and covariance of X and log X! are approximated by using the estimates for λ and ν from the previous iteration. This is continued until convergence of λ̂ and ν̂.
The basic CMP distribution discussed above has also been used as the basis for a generalized linear model (GLM) using a Bayesian formulation. A dual-link GLM based on the CMP distribution has been developed,[10] and this model has been used to evaluate traffic accident data.[11][12] The CMP GLM developed by Guikema and Coffelt (2008) is based on a reformulation of the CMP distribution above, replacing λ with μ = λ^{1/ν}. The integral part of μ is then the mode of the distribution. A full Bayesian estimation approach has been used with MCMC sampling implemented in WinBUGS, with non-informative priors for the regression parameters.[10][11] This approach is computationally expensive, but it yields the full posterior distributions for the regression parameters and allows expert knowledge to be incorporated through the use of informative priors. A classical GLM formulation for a CMP regression has also been developed which generalizes Poisson regression and logistic regression.[13] This takes advantage of the exponential family properties of the CMP distribution to obtain elegant model estimation (via maximum likelihood), inference, diagnostics, and interpretation. This approach requires substantially less computational time than the Bayesian approach, at the cost of not allowing expert knowledge to be incorporated into the model.[13] It yields standard errors for the regression parameters (via the Fisher information matrix), as opposed to the full posterior distributions obtainable via the Bayesian formulation, and it also provides a statistical test for the level of dispersion compared to a Poisson model. Code for fitting a CMP regression, testing for dispersion, and evaluating fit is available.[14] The two GLM frameworks developed for the CMP distribution significantly extend the usefulness of this distribution for data analysis problems.
https://en.wikipedia.org/wiki/Conway%E2%80%93Maxwell%E2%80%93Poisson_distribution
In mathematics, a character sum is a sum Σ χ(n) of values of a Dirichlet character χ modulo N, taken over a given range of values of n. Such sums are basic in a number of questions, for example in the distribution of quadratic residues, and in particular in the classical question of finding an upper bound for the least quadratic non-residue modulo N. Character sums are often closely linked to exponential sums by the Gauss sums (this is like a finite Mellin transform). Assume χ is a non-principal Dirichlet character to the modulus N. The sum taken over all residue classes mod N is then zero. This means that the cases of interest will be sums Σ over relatively short ranges, of length R < N, say. A fundamental improvement on the trivial estimate Σ = O(N) is the Pólya–Vinogradov inequality, established independently by George Pólya and I. M. Vinogradov in 1918,[1][2] stating in big O notation that Σ = O(√N log N). Assuming the generalized Riemann hypothesis, Hugh Montgomery and R. C. Vaughan have shown[3] that there is the further improvement Σ = O(√N log log N). Another significant type of character sum is that formed by Σ χ(F(n)) for some function F, generally a polynomial. A classical result is the case of a quadratic polynomial F and χ a Legendre symbol. Here the sum can be evaluated (as −1), a result that is connected to the local zeta-function of a conic section. More generally, such sums for the Jacobi symbol relate to local zeta-functions of elliptic curves and hyperelliptic curves; this means that by means of André Weil's results, for N = p a prime number, there are non-trivial bounds. The constant implicit in the notation is linear in the genus of the curve in question, and so (in the Legendre symbol or hyperelliptic case) can be taken as the degree of F. (More general results, for other values of N, can be obtained starting from there.) Weil's results also led to the Burgess bound,[4] applying to give non-trivial results beyond Pólya–Vinogradov, for R a power of N greater than 1/4. Assume the modulus N is a prime.
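The cancellation over a full period and the Pólya–Vinogradov bound on partial sums can be illustrated empirically for the Legendre symbol modulo the prime 1009, computed via Euler's criterion (the choice of prime is arbitrary):

```python
from math import log, sqrt

def legendre(n, p):
    """Legendre symbol (n/p) for an odd prime p, via Euler's criterion."""
    r = pow(n, (p - 1) // 2, p)
    return -1 if r == p - 1 else r    # r is 0, 1, or p - 1

p = 1009                               # an odd prime
s, max_partial = 0, 0
for n in range(1, p):
    s += legendre(n, p)                # running character sum
    max_partial = max(max_partial, abs(s))

bound = sqrt(p) * log(p)               # Polya-Vinogradov scale sqrt(N) log N
print(f"full-period sum = {s}, largest partial sum = {max_partial}, "
      f"sqrt(p)*log(p) = {bound:.1f}")
```

The full-period sum is exactly zero (equally many residues and non-residues), while every partial sum stays far below the √N log N scale, as the inequality predicts.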
for any integerr≥ 3.[5]
https://en.wikipedia.org/wiki/Character_sum
In mathematics, especially in probability theory and ergodic theory, the invariant sigma-algebra is a sigma-algebra formed by sets which are invariant under a group action or dynamical system. It can be interpreted as being "indifferent" to the dynamics. The invariant sigma-algebra appears in the study of ergodic systems, as well as in theorems of probability theory such as de Finetti's theorem and the Hewitt–Savage law. Let (X, F) be a measurable space, and let T : (X, F) → (X, F) be a measurable function. A measurable subset S ∈ F is called invariant if and only if T⁻¹(S) = S.[1][2][3] Equivalently, for every x ∈ X, we have that x ∈ S if and only if T(x) ∈ S. More generally, let M be a group or a monoid, let α : M × X → X be a monoid action, and denote the action of m ∈ M on X by α_m : X → X. A subset S ⊆ X is α-invariant if for every m ∈ M, α_m⁻¹(S) = S. Let (X, F) be a measurable space, and let T : (X, F) → (X, F) be a measurable function. A measurable subset (event) S ∈ F is called almost surely invariant if and only if its indicator function 1_S is almost surely equal to the indicator function 1_{T⁻¹(S)}.[4][5][3] Similarly, given a measure-preserving Markov kernel k : (X, F, p) → (X, F, p), we call an event S ∈ F almost surely invariant if and only if k(S | x) = 1_S(x) for almost all x ∈ X.
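For a finite toy system the invariance condition T⁻¹(S) = S can be checked exhaustively. The sketch below uses the rotation x ↦ x + 2 on Z/6Z (a bijection whose orbits are {0, 2, 4} and {1, 3, 5}); the invariant sets turn out to be exactly the unions of orbits, and they form a sigma-algebra.

```python
from itertools import combinations

X = range(6)

def T(x):
    return (x + 2) % 6                # rotation of Z/6Z; orbits {0,2,4} and {1,3,5}

def invariant(S):
    """S is invariant iff T^{-1}(S) = S, i.e. x in S <=> T(x) in S."""
    return all((x in S) == (T(x) in S) for x in X)

subsets = [frozenset(c) for r in range(7) for c in combinations(X, r)]
inv_sets = {S for S in subsets if invariant(S)}
print(sorted(sorted(S) for S in inv_sets))  # the four unions of the two orbits
```

Closure under complements and (countable, here finite) unions can be verified directly on this family, exhibiting the invariant sigma-algebra in miniature.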
As for the case of strictly invariant sets, the definition can be extended to an arbitrary group or monoid action. In many cases, almost surely invariant sets differ from invariant sets only by a null set (see below). Both strictly invariant sets and almost surely invariant sets are closed under taking countable unions and complements, and hence they form sigma-algebras. These sigma-algebras are usually called either the invariant sigma-algebra or the sigma-algebra of invariant events, both in the strict case and in the almost sure case, depending on the author.[1][2][3][4][5] For the purposes of this article, denote by I the sigma-algebra of strictly invariant sets, and by Ĩ the sigma-algebra of almost surely invariant sets. Given a measurable space (X, A), denote by (X^N, A^⊗N) the countable cartesian power of X, equipped with the product sigma-algebra. We can view X^N as the space of infinite sequences of elements of X. Consider now the group S_∞ of finite permutations of N, i.e. bijections σ : N → N such that σ(n) ≠ n only for finitely many n ∈ N. Each finite permutation σ acts measurably on X^N by permuting the components, and so we have an action of the countable group S_∞ on X^N. An invariant event for this action is often called an exchangeable event or symmetric event, and the sigma-algebra of invariant events is often called the exchangeable sigma-algebra. A random variable on X^N is exchangeable (i.e.
permutation-invariant) if and only if it is measurable with respect to the exchangeable sigma-algebra. The exchangeable sigma-algebra plays a role in the Hewitt–Savage zero-one law, which can be equivalently stated by saying that for every probability measure p on (X, A), the product measure p^⊗N on X^N assigns to each exchangeable event probability either zero or one.[9] Equivalently, for the measure p^⊗N, every exchangeable random variable on X^N is almost surely constant. It also plays a role in the de Finetti theorem.[9] As in the example above, given a measurable space (X, A), consider the countably infinite cartesian product (X^N, A^⊗N). Consider now the shift map T : X^N → X^N given by mapping (x₀, x₁, x₂, …) to (x₁, x₂, x₃, …). An invariant event for the shift is called a shift-invariant event, and the resulting sigma-algebra is sometimes called the shift-invariant sigma-algebra. This sigma-algebra is related to the sigma-algebra of tail events, which is given by an intersection over m of the sigma-algebras generated by the coordinates from position m onward, where A_m ⊆ A^⊗N is the sigma-algebra induced on X^N by the projection on the m-th component π_m : (X^N, A^⊗N) → (X, A). Every shift-invariant event is a tail event, but the converse is not true.
https://en.wikipedia.org/wiki/Invariant_sigma-algebra
In vector calculus, an invex function is a differentiable function f from Rⁿ to R for which there exists a vector-valued function η such that f(x) − f(u) ≥ η(x, u) · ∇f(u) for all x and u. Invex functions were introduced by Hanson as a generalization of convex functions.[1] Ben-Israel and Mond provided a simple proof that a function is invex if and only if every stationary point is a global minimum, a theorem first stated by Craven and Glover.[2][3] Hanson also showed that if the objective and the constraints of an optimization problem are invex with respect to the same function η(x, u), then the Karush–Kuhn–Tucker conditions are sufficient for a global minimum. A slight generalization of invex functions, called Type I invex functions, are the most general class of functions for which the Karush–Kuhn–Tucker conditions are necessary and sufficient for a global minimum.[4] Consider a mathematical program of the form min f(x) s.t. g(x) ≤ 0, where f : Rⁿ → R and g : Rⁿ → Rᵐ are differentiable functions. Let F = {x ∈ Rⁿ | g(x) ≤ 0} denote the feasible region of this program. The function f is a Type I objective function and the function g is a Type I constraint function at x₀ with respect to η if there exists a vector-valued function η defined on F such that f(x) − f(x₀) ≥ η(x) · ∇f(x₀) and −g(x₀) ≥ η(x) · ∇g(x₀) for all x ∈ F.[5] Note that, unlike invexity, Type I invexity is defined relative to a point x₀.
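As a sanity check of the defining inequality, here is a small grid test for the convex (hence invex) function f(x, y) = x² + y², with the standard choice η(x, u) = x − u. This verifies the inequality only on the sampled grid; it is an illustration, not a proof of invexity.

```python
from itertools import product

# f(x, y) = x^2 + y^2 is convex, hence invex with eta(x, u) = x - u.
def f(p):
    return p[0] ** 2 + p[1] ** 2

def grad_f(p):
    return (2 * p[0], 2 * p[1])

def eta(x, u):                        # the vector-valued function witnessing invexity
    return (x[0] - u[0], x[1] - u[1])

grid = [(-2 + 0.5 * i, -2 + 0.5 * j) for i in range(9) for j in range(9)]
holds = all(
    f(x) - f(u) >= sum(e * g for e, g in zip(eta(x, u), grad_f(u))) - 1e-12
    for x, u in product(grid, grid)
)
print("invexity inequality holds on the grid:", holds)
```

For this quadratic, f(x) − f(u) − ∇f(u)·(x − u) equals the squared distance between x and u, so the inequality in fact holds with equality only at x = u.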
Theorem (Theorem 2.1 in [4]): If f and g are Type I invex at a point x* with respect to η, and the Karush–Kuhn–Tucker conditions are satisfied at x*, then x* is a global minimizer of f over F. Let E be a function from Rⁿ to Rⁿ, and let f from M to R be an E-differentiable function on a nonempty open set M ⊂ Rⁿ. Then f is said to be an E-invex function at u if there exists a vector-valued function η such that f(E(x)) − f(E(u)) ≥ η(E(x), E(u)) · ∇f(E(u)) for all x and u in M. E-invex functions were introduced by Abdulaleem as a generalization of differentiable convex functions.[6] Let E : Rⁿ → Rⁿ, and let M ⊂ Rⁿ be an open E-invex set.
A vector-valued pair (f, g), where f and g represent the objective and constraint functions respectively, is said to be E-type I with respect to a vector-valued function η : M × M → Rⁿ, at u ∈ M, if the following inequalities hold for all x ∈ F_E = {x ∈ Rⁿ | g(E(x)) ≤ 0}: f_i(E(x)) − f_i(E(u)) ≥ ∇f_i(E(u)) · η(E(x), E(u)) and −g_j(E(u)) ≥ ∇g_j(E(u)) · η(E(x), E(u)). If f and g are differentiable functions and E(x) = x (E is the identity map), then the definition of E-type I functions[7] reduces to the definition of type I functions introduced by Rueda and Hanson.[8]
https://en.wikipedia.org/wiki/Invex_function
In computer science, in particular in concurrency theory, a dependency relation is a binary relation on a finite domain Σ,[1]: 4 symmetric, and reflexive;[1]: 6 i.e. a finite tolerance relation. That is, it is a finite set of ordered pairs D such that (a, b) ∈ D implies (b, a) ∈ D (symmetry) and (a, a) ∈ D for all a ∈ Σ (reflexivity).

In general, dependency relations are not transitive; thus, they generalize the notion of an equivalence relation by discarding transitivity. Σ is also called the alphabet on which D is defined. The independency induced by D is the binary relation I = (Σ × Σ) ∖ D. That is, the independency is the set of all ordered pairs that are not in D. The independency relation is symmetric and irreflexive. Conversely, given any symmetric and irreflexive relation I on a finite alphabet, the relation D = (Σ × Σ) ∖ I is a dependency relation.

The pair (Σ, D) is called the concurrent alphabet.[2]: 6 The pair (Σ, I) is called the independency alphabet or reliance alphabet, but this term may also refer to the triple (Σ, D, I) (with I induced by D).[3]: 6 Elements x, y ∈ Σ are called dependent if x D y holds, and independent otherwise (i.e. if x I y holds).[1]: 6

Given a reliance alphabet (Σ, D, I), a symmetric and irreflexive relation ≐ can be defined on the free monoid Σ* of all possible strings of finite length by: xaby ≐ xbay for all strings x, y ∈ Σ* and all independent symbols a, b (i.e. (a, b) ∈ I). The equivalence closure of ≐ is denoted ≡ or ≡_(Σ,D,I) and called (Σ, D, I)-equivalence.
Informally, p ≡ q holds if the string p can be transformed into q by a finite sequence of swaps of adjacent independent symbols. The equivalence classes of ≡ are called traces,[1]: 7–8 and are studied in trace theory.

Given the alphabet Σ = {a, b, c}, a possible dependency relation is D = {(a, b), (b, a), (a, c), (c, a), (a, a), (b, b), (c, c)}, see picture. The corresponding independency is I = {(b, c), (c, b)}. Then e.g. the symbols b, c are independent of one another, and e.g. a, b are dependent. The string acbba is equivalent to abcba and to abbca, but to no other string.
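The equivalence class of acbba under this independency can be computed by exhaustively applying adjacent swaps of independent symbols; a small Python sketch (the function name is illustrative):

```python
from collections import deque

# Independency from the example above: only b and c may swap when adjacent.
I = {("b", "c"), ("c", "b")}

def trace_class(s, I):
    """All strings reachable from s by swapping adjacent independent symbols."""
    seen = {s}
    queue = deque([s])
    while queue:
        w = queue.popleft()
        for i in range(len(w) - 1):
            if (w[i], w[i + 1]) in I:               # adjacent independent pair
                swapped = w[:i] + w[i + 1] + w[i] + w[i + 2:]
                if swapped not in seen:
                    seen.add(swapped)
                    queue.append(swapped)
    return seen

print(sorted(trace_class("acbba", I)))  # → ['abbca', 'abcba', 'acbba']
```

The result is exactly the three-element trace named in the example: no other string is reachable, because every swap must involve the independent pair b, c.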
https://en.wikipedia.org/wiki/Dependency_relation
The neocortex, also called the neopallium, isocortex, or the six-layered cortex, is a set of layers of the mammalian cerebral cortex involved in higher-order brain functions such as sensory perception, cognition, generation of motor commands,[1] spatial reasoning, and language.[2] The neocortex is further subdivided into the true isocortex and the proisocortex.[3]

In the human brain, the cerebral cortex consists of the larger neocortex and the smaller allocortex, respectively taking up 90% and 10%.[4] The neocortex is made up of six layers, labelled from the outermost inwards, I to VI.

The term is from cortex, Latin, "bark" or "rind", combined with neo-, Greek, "new". Neopallium is a similar hybrid, from Latin pallium, "cloak". Isocortex and allocortex are hybrids with Greek isos, "same", and allos, "other".

The neocortex is the most developed, in its organisation and number of layers, of the cerebral tissues.[5] The neocortex consists of the grey matter, or neuronal cell bodies and unmyelinated fibers, surrounding the deeper white matter (myelinated axons) in the cerebrum. This is a very thin layer though, about 2–4 mm thick.[6] There are two types of cortex in the neocortex, the proisocortex and the true isocortex. The pro-isocortex is a transitional area between the true isocortex and the periallocortex (part of the allocortex). It is found in the cingulate cortex (part of the limbic system), in Brodmann's areas 24, 25, 30 and 32, the insula and the parahippocampal gyrus.

Of all the mammals studied to date (including humans), a species of oceanic dolphin known as the long-finned pilot whale has been found to have the most neocortical neurons.[7] The neocortex is smooth in rodents and other small mammals, whereas in elephants, dolphins, primates and other larger mammals it has deep grooves (sulci) and ridges (gyri). These folds allow the surface area of the neocortex to be greatly increased.
All human brains have the same overall pattern of main gyri and sulci, although they differ in detail from one person to another.[8] The mechanism by which the gyri form during embryogenesis is not entirely clear, and there are several competing hypotheses that explain gyrification, such as axonal tension,[9] cortical buckling[10] or differences in cellular proliferation rates in different areas of the cortex.[11]

The neocortex contains both excitatory (~80%) and inhibitory (~20%) neurons, named for their effect on other neurons.[12] The human neocortex consists of hundreds of different types of cells.[13] The structure of the neocortex is relatively uniform (hence the alternative names "iso-" and "homotypic" cortex), consisting of six horizontal layers segregated principally by cell type and neuronal connections.[14] However, there are many exceptions to this uniformity; for example, layer IV is small or missing in the primary motor cortex.

There is some canonical circuitry within the cortex; for example, pyramidal neurons in the upper layers II and III project their axons to other areas of neocortex, while those in the deeper layers V and VI often project out of the cortex, e.g. to the thalamus, brainstem, and spinal cord. Neurons in layer IV receive the majority of the synaptic connections from outside the cortex (mostly from thalamus), and themselves make short-range, local connections to other cortical layers.[12] Thus, layer IV is the main recipient of incoming sensory information and distributes it to the other layers for further processing.

The neocortex is often described as being arranged in vertical structures called cortical columns, patches of neocortex with a diameter of roughly 0.5 mm (and a depth of 2 mm, i.e., spanning all six layers).
These columns are often thought of as the basic repeating functional units of the neocortex, but their many definitions, in terms of anatomy, size, or function, are generally not consistent with each other, leading to a lack of consensus regarding their structure or function, or even whether it makes sense to try to understand the neocortex in terms of columns.[15]

The neocortex is derived embryonically from the dorsal telencephalon, which is the rostral part of the forebrain. The neocortex is divided into the frontal, parietal, occipital, and temporal lobes, regions demarcated by the cranial sutures in the skull above, which perform different functions. For example, the occipital lobe contains the primary visual cortex, and the temporal lobe contains the primary auditory cortex. Further subdivisions or areas of neocortex are responsible for more specific cognitive processes.

In humans, the frontal lobe contains areas devoted to abilities that are enhanced in or unique to our species, such as complex language processing localized to the ventrolateral prefrontal cortex (Broca's area).[12] In humans and other primates, social and emotional processing is localized to the orbitofrontal cortex.

The neocortex has also been shown to play an influential role in sleep, memory and learning processes. Semantic memories appear to be stored in the neocortex, specifically the anterolateral temporal lobe of the neocortex.[16] It is also involved in instrumental conditioning, being responsible for transmitting sensory information and information about plans for movement to the basal ganglia.[16] The firing rate of neurons in the neocortex also has an effect on slow-wave sleep. When the neurons are at rest and are hyperpolarizing, a period of inhibition occurs during a slow oscillation, called the down state.
When the neurons of the neocortex are in the excitatory depolarizing phase and are firing briefly at a high rate, a period of excitation occurs during a slow oscillation, called the up state.[16]

Lesions that develop in neurodegenerative disorders, such as Alzheimer's disease, interrupt the transfer of information from the sensory neocortex to the prefrontal neocortex. This disruption of sensory information contributes to the progressive symptoms seen in neurodegenerative disorders such as changes in personality, decline in cognitive abilities, and dementia.[17] Damage to the neocortex of the anterolateral temporal lobe results in semantic dementia, which is the loss of memory of factual information (semantic memories). These symptoms can also be replicated by transcranial magnetic stimulation of this area. If damage is sustained to this area, patients do not develop anterograde amnesia and are able to recall episodic information.[18]

The neocortex is the newest part of the cerebral cortex to evolve (hence the prefix neo-, meaning new); the other part of the cerebral cortex is the allocortex. The cellular organization of the allocortex is different from that of the six-layered neocortex. In humans, 90% of the cerebral cortex and 76% of the entire brain is neocortex.[12]

For a species to develop a larger neocortex, the brain must evolve in size so that it is large enough to support the region. Body size, basal metabolic rate and life history are factors affecting brain evolution and the coevolution of neocortex size and group size.[19] The neocortex increased in size in response to pressures for greater cooperation and competition in early ancestors.
With the size increase, there was greater voluntary inhibitory control of social behaviors, resulting in increased social harmony.[20]

The six-layer cortex appears to be a distinguishing feature of mammals; it has been found in the brains of all mammals, but not in any other animals.[2] There is some debate,[21][22] however, as to the cross-species nomenclature for neocortex. In avians, for instance, there are clear examples of cognitive processes that are thought to be neocortical in nature, despite the lack of the distinctive six-layer neocortical structure.[23] Evidence suggests the avian pallium to be broadly equivalent to the mammalian neocortex.[24][25][26] In a similar manner, reptiles, such as turtles, have primary sensory cortices. A consistent, alternative name has yet to be agreed upon.

The neocortex ratio of a species is the ratio of the size of the neocortex to the rest of the brain. A high neocortex ratio is thought to correlate with a number of social variables such as group size and the complexity of social mating behaviors.[27] Humans have a large neocortex as a percentage of total brain matter when compared with other mammals. For example, there is only a 30:1 ratio of neocortical gray matter to the size of the medulla oblongata in the brainstem of chimpanzees, while the ratio is 60:1 in humans.[28]
https://en.wikipedia.org/wiki/Neocortex#Layers
There are many longstanding unsolved problems in mathematics for which a solution has yet to be found. The notable unsolved problems in statistics are generally of a different flavor; according to John Tukey,[1] "difficulties in identifying problems have delayed statistics far more than difficulties in solving problems." A list of "one or two open problems" (in fact 22 of them) was given by David Cox.[2]
https://en.wikipedia.org/wiki/List_of_unsolved_problems_in_statistics
Gorō Shimura (志村 五郎, Shimura Gorō, 23 February 1930 – 3 May 2019) was a Japanese mathematician and Michael Henry Strater Professor Emeritus of Mathematics at Princeton University who worked in number theory, automorphic forms, and arithmetic geometry.[1] He was known for developing the theory of complex multiplication of abelian varieties and Shimura varieties, as well as posing the Taniyama–Shimura conjecture, which ultimately led to the proof of Fermat's Last Theorem.

Gorō Shimura was born in Hamamatsu, Japan, on 23 February 1930.[2] Shimura graduated with a B.A. in mathematics and a D.Sc. in mathematics from the University of Tokyo in 1952 and 1958, respectively.[3][2]

After graduating, Shimura became a lecturer at the University of Tokyo, then worked abroad, including ten months in Paris and a seven-month stint at Princeton's Institute for Advanced Study, before returning to Tokyo, where he married Chikako Ishiguro.[4][2] He then moved from Tokyo to join the faculty of Osaka University, but growing unhappy with his funding situation, he decided to seek employment in the United States.[4][2] Through André Weil he obtained a position at Princeton University.[4] Shimura joined the Princeton faculty in 1964 and retired in 1999, during which time he advised over 28 doctoral students and received the Guggenheim Fellowship in 1970, the Cole Prize for number theory in 1977, the Asahi Prize in 1991, and the Steele Prize for lifetime achievement in 1996.[1][5]

Shimura described his approach to mathematics as "phenomenological": his interest was in finding new types of interesting behavior in the theory of automorphic forms.
He also argued for a "romantic" approach, something he found lacking in the younger generation of mathematicians.[6] Shimura used a two-part process for research, using one desk in his home dedicated to working on new research in the mornings and a second desk for perfecting papers in the afternoon.[2]

Shimura had two children, Tomoko and Haru, with his wife Chikako.[2] Shimura died on 3 May 2019 in Princeton, New Jersey, at the age of 89.[1][2]

Shimura was a colleague and a friend of Yutaka Taniyama, with whom he wrote the first book on the complex multiplication of abelian varieties and formulated the Taniyama–Shimura conjecture.[7] Shimura then wrote a long series of major papers, extending the phenomena found in the theory of complex multiplication of elliptic curves and the theory of modular forms to higher dimensions (e.g. Shimura varieties). This work provided examples for which the equivalence between motivic and automorphic L-functions postulated in the Langlands program could be tested: automorphic forms realized in the cohomology of a Shimura variety have a construction that attaches Galois representations to them.[8]

In 1958, Shimura generalized the initial work of Martin Eichler on the Eichler–Shimura congruence relation between the local L-function of a modular curve and the eigenvalues of Hecke operators.[9][10] In 1959, Shimura extended the work of Eichler on the Eichler–Shimura isomorphism between Eichler cohomology groups and spaces of cusp forms, which would be used in Pierre Deligne's proof of the Weil conjectures.[11][12]

In 1971, Shimura's work on explicit class field theory in the spirit of Kronecker's Jugendtraum resulted in his proof of Shimura's reciprocity law.[13] In 1973, Shimura established the Shimura correspondence between modular forms of half integral weight k + 1/2 and modular forms of even weight 2k.[14]

Shimura's formulation of the Taniyama–Shimura conjecture (later known as the modularity theorem) in the 1950s played a key role in the proof of Fermat's Last Theorem by Andrew Wiles in
1995. In 1990, Kenneth Ribet proved Ribet's theorem, which demonstrated that Fermat's Last Theorem followed from the semistable case of this conjecture.[15] Shimura dryly commented that his first reaction on hearing of Andrew Wiles's proof of the semistable case was "I told you so".[16]

His hobbies were shogi problems of extreme length and collecting Imari porcelain. The Story of Imari: The Symbols and Mysteries of Antique Japanese Porcelain is a non-fiction work about the Imari porcelain that he collected over 30 years; it was published by Ten Speed Press in 2008.[2][17]
https://en.wikipedia.org/wiki/Goro_Shimura
Elementary arithmetic is a branch of mathematics involving addition, subtraction, multiplication, and division. Due to its low level of abstraction, broad range of application, and position as the foundation of all mathematics, elementary arithmetic is generally the first branch of mathematics taught in schools.[1][2]

In numeral systems, digits are characters used to represent the value of numbers. An example of a numeral system is the predominantly used Indo-Arabic numeral system (0 to 9), which uses a decimal positional notation.[3] Other numeral systems include the Kaktovik system (often used in the Eskimo-Aleut languages of Alaska, Canada, and Greenland), which is a vigesimal positional notation system.[4] Regardless of the numeral system used, the results of arithmetic operations are unaffected.

In elementary arithmetic, the successor of a natural number (including zero) is the next natural number and is the result of adding one to that number. The predecessor of a natural number (excluding zero) is the previous natural number and is the result of subtracting one from that number. For example, the successor of zero is one, and the predecessor of eleven is ten (0 + 1 = 1 and 11 − 1 = 10). Every natural number has a successor, and every natural number except 0 has a predecessor.[5]

The natural numbers have a total ordering. If one number is greater than (>) another number, then the latter is less than (<) the former. For example, three is less than eight (3 < 8), thus eight is greater than three (8 > 3). The natural numbers are also well-ordered, meaning that any subset of the natural numbers has a least element.

Counting assigns a natural number to each object in a set, starting with 1 for the first object and increasing by 1 for each subsequent object. The number of objects in the set is the count. This is also known as the cardinality of the set.
Counting can also be the process of tallying, drawing a mark for each object in a set.

Addition is a mathematical operation that combines two or more numbers (called addends or summands) to produce a combined number (called the sum). The addition of two numbers is expressed with the plus sign (+).[6] It is performed column by column: when the sum of a pair of digits results in a two-digit number, the "tens" digit is referred to as the "carry digit".[9] In elementary arithmetic, students typically learn to add whole numbers and may also learn about topics such as negative numbers and fractions.

Subtraction evaluates the difference between two numbers, where the minuend is the number being subtracted from, and the subtrahend is the number being subtracted. It is represented using the minus sign (−). The minus sign is also used to notate negative numbers.[10]

Subtraction is not commutative, which means that the order of the numbers can change the final value; 3 − 5 is not the same as 5 − 3. In elementary arithmetic, the minuend is always larger than the subtrahend to produce a positive result. Subtraction is also used to separate, combine (e.g., find the size of a subset of a specific set), and find quantities in other contexts.

There are several methods to accomplish subtraction. The traditional mathematics method subtracts using methods suitable for hand calculation.[11] Reform mathematics is distinguished generally by the lack of preference for any specific technique, replaced by guiding students to invent their own methods of computation.

American schools teach a method of subtraction using borrowing.[12] A subtraction problem such as 86 − 39 is solved by borrowing a 10 from the tens place to add to the ones place in order to facilitate the subtraction. Subtracting 9 from 6 involves borrowing a 10 from the tens place, making the problem into 70 + 16 − 39.
This is indicated by crossing out the 8, writing a 7 above it, and writing a 1 above the 6. These markings are called "crutches", which were invented by William A. Brownell, who used them in a study in November 1937.[13]

The Austrian method, also known as the additions method, is taught in certain European countries[which?]. In contrast to the previous method, no borrowing is used, although there are crutches that vary according to certain countries.[14][15] The method involves augmenting the subtrahend instead. This transforms the previous problem into (80 + 16) − (39 + 10). A small 1 is marked below the subtrahend digit as a reminder.

Subtracting 308 from 792, starting with the ones column, 2 is smaller than 8. Using the borrowing method, 10 is borrowed from 90, reducing 90 to 80. This changes the problem to 12 − 8. In the tens column, the difference between 80 and 0 is 80. In the hundreds column, the difference between 700 and 300 is 400. The result:

792 − 308 = 484

Multiplication is a mathematical operation of repeated addition. When two numbers are multiplied, the resulting value is a product. The numbers being multiplied are multiplicands, multipliers, or factors. Multiplication can be expressed as "five times three equals fifteen," "five times three is fifteen," or "fifteen is the product of five and three." Multiplication is represented using the multiplication sign (×), the asterisk (*), parentheses (), or a dot (⋅). The statement "five times three equals fifteen" can be written as "5 × 3 = 15", "5 ∗ 3 = 15", "(5)(3) = 15", or "5 ⋅ 3 = 15".

In elementary arithmetic, multiplication satisfies several basic properties.[a] In the multiplication algorithm, the "tens" digit of the product of a pair of digits is referred to as the "carry digit".
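The borrowing procedure used in the subtraction examples above can be sketched digit by digit (an illustrative sketch, not a method prescribed by the article; the function name is made up):

```python
def subtract_with_borrowing(minuend, subtrahend):
    """Digit-wise subtraction with borrowing; assumes minuend >= subtrahend >= 0."""
    top = [int(d) for d in str(minuend)][::-1]      # ones digit first
    bottom = [int(d) for d in str(subtrahend)][::-1]
    bottom += [0] * (len(top) - len(bottom))        # pad with leading zeros
    digits = []
    for i in range(len(top)):
        if top[i] < bottom[i]:                      # borrow a 10 from the next column
            top[i] += 10
            top[i + 1] -= 1
        digits.append(top[i] - bottom[i])
    return int("".join(str(d) for d in digits[::-1]))

print(subtract_with_borrowing(86, 39))    # → 47
print(subtract_with_borrowing(792, 308))  # → 484
```

Both examples from the text come out as expected: 86 − 39 borrows once (70 + 16 − 39 = 47), and 792 − 308 borrows from the tens column to give 484.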
Multiplying 729 by 3, starting on the ones column, the product of 9 and 3 is 27. 7 is written under the ones column and 2 is written above the tens column as a carry digit. The product of 2 and 3 is 6, and the carry digit adds 2 to 6, so 8 is written under the tens column. The product of 7 and 3 is 21, and since this is the last digit, the 2 is not written as a carry digit, but instead beside the 1. The result:

729 × 3 = 2187

Multiplying 789 by 345, starting with the ones column, the product of 789 and 5 is 3945. 4 is in the tens digit. The multiplier is 40, not 4. The product of 789 and 40 is 31560. 3 is in the hundreds digit. The multiplier is 300. The product of 789 and 300 is 236700. Adding all the products gives the result:

789 × 345 = 272205

Division is an arithmetic operation, and the inverse of multiplication, given that c × b = a. Division can be written as a ÷ b, a/b, or a⁄b. This can be read verbally as "a divided by b" or "a over b". In some non-English-speaking cultures[which?], "a divided by b" is written a : b. In English usage, the colon is restricted to the concept of ratios ("a is to b").

In an equation a ÷ b = c, a is the dividend, b the divisor, and c the quotient. Division by zero is considered impossible at an elementary arithmetic level.

Two numbers can be divided on paper using long division. An abbreviated version of long division, short division, can be used for smaller divisors. A less systematic method involves the concept of chunking, subtracting more multiples from the partial remainder at each stage.

Dividing 272 by 8, starting with the hundreds digit, 2 is not divisible by 8. Add 20 and 7 to get 27. The largest number that the divisor of 8 can be multiplied by without exceeding 27 is 3, so it is written under the tens column. Subtracting 24 (the product of 3 and 8) from 27 gives 3 as the remainder. Going to the ones digit, the number is 2. Adding 30 (the remainder, 3, times 10) and 2 gets 32.
The quotient of 32 and 8 is 4, which is written under the ones column. The result: 272 ÷ 8 = 34

Another method of dividing taught in some schools is the bus stop method, notated with the divisor written outside a half-box (the "bus stop") enclosing the dividend, and the quotient built up above it. Using the same steps as the example above, the result is the same: 272 ÷ 8 = 34

Elementary arithmetic is typically taught at the primary or secondary school levels and is governed by local educational standards. There has been debate about the content and methods used to teach elementary arithmetic in the United States and Canada.[16][17]

+ Addition (+)
− Subtraction (−)
× Multiplication (× or ·)
÷ Division (÷ or ∕)
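The short-division procedure walked through above (divide at each digit, carry the remainder into the next column) can be sketched as follows; this is an illustrative sketch with made-up names, not an algorithm given in the article:

```python
def short_division(dividend, divisor):
    """Digit-by-digit short division, carrying the remainder to the next column."""
    remainder = 0
    quotient_digits = []
    for digit in str(dividend):                # work left to right
        partial = remainder * 10 + int(digit)  # e.g. 27, then 32, for 272 / 8
        quotient_digits.append(partial // divisor)
        remainder = partial % divisor
    quotient = int("".join(str(d) for d in quotient_digits))
    return quotient, remainder

print(short_division(272, 8))  # → (34, 0)
```

The intermediate values match the worked example: the partial dividends are 2, 27, and 32, yielding quotient digits 0, 3, and 4.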
https://en.wikipedia.org/wiki/Elementary_arithmetic
The International Student Identity Card (ISIC) serves as internationally recognized proof of student status and offers access to various benefits and discounts globally, including travel, accommodation, and cultural institutions. The ISIC Association also issues the International Youth Travel Card (IYTC) for non-students, and the International Teacher Identity Card (ITIC) for teachers and professors. Membership fees for these cards vary by country. The ISIC Association is a non-profit membership organisation legally registered in Denmark.[1]

The ISIC card is administered and managed at a global level by the ISIC Service Office d.o.o., a company seated in Belgrade, Serbia, which is wholly owned by the ISIC Association.[1]

ISIC Exclusive Representatives, who have the exclusive rights to issue ISIC cards in their respective countries, make up a global distribution network for ISIC cards. The ISIC card is available in over 130 countries. In each country, the ISIC Exclusive Representative is exclusively responsible for ISIC card distribution, promotion and development, including developing and managing a portfolio of local and national benefits, discounts, and services available to ISIC holders.[2]

Eligibility for the International Student Identity Card (ISIC) is limited to students in higher, tertiary, or full-time secondary education, with a minimum age requirement of 12 years. There is no upper age limit for obtaining an ISIC card. The validity of an ISIC card spans 16 months, aligning with the academic year of the country where it is purchased.[3]

The idea to conduct an ISIC brand refresh originated in 2017, and the refreshed brand was subsequently launched in May 2019.[4]

The United Nations Educational, Scientific and Cultural Organization (UNESCO) has been involved in ISIC development almost since the beginning. UNESCO joined the International Student Travel Conference in 1995 and supported the ISIC card.
In 1968, UNESCO issued an official endorsement in full support of the ISIC card, recognising it as the only internationally accepted proof of full-time student status and a unique document encouraging cultural exchange and international understanding. A renewed Memorandum of Understanding was signed in 1993, and the UNESCO logo has appeared on the ISIC card since that year. UNESCO continues to recognise the ISIC card as a unique document encouraging cultural exchange and international understanding.[5]

This initiative was launched by the British Council IELTS, Studyportals and the ISIC Association in 2015. The goal of these annual awards is to encourage and support more students undertaking study abroad. The award is available in all countries worldwide. In total there have been 17 winners in 5 rounds.[6]
https://en.wikipedia.org/wiki/International_Student_Identity_Card
On Unix and Unix-like computer operating systems, a zombie process or defunct process is a process that has completed execution (via the exit system call) but still has an entry in the process table: it is a process in the "terminated state". This occurs for child processes, where the entry is still needed to allow the parent process to read its child's exit status: once the exit status is read via the wait system call, the zombie's entry is removed from the process table and it is said to be "reaped". A child process initially becomes a zombie, and only then is it removed from the process table.

Under normal system operation, zombies are immediately waited on by their parent and then reaped by the system. Processes that stay zombies for a long time are usually an error and can cause a resource leak. Generally, the only kernel resource they occupy is the process table entry, their process ID. However, zombies can also hold buffers open, consuming memory. Zombies can hold handles to file descriptors, which prevents the space for those files from being available to the filesystem. This effect can be seen as a discrepancy between du and df: while du may report comparatively little space in use, df will show a full partition. If the zombies are not cleaned up, this can fill the root partition and crash the system.

The term zombie process derives from the common definition of zombie, an undead person. In the term's metaphor, the child process has "died" but has not yet been "reaped". Unlike normal processes, the kill command has no effect on a zombie process.

Zombie processes should not be confused with orphan processes: an orphan process is still executing, but its parent has died. When the parent dies, the orphaned child process is adopted by init. When orphan processes die, they do not remain as zombie processes; instead, they are waited on by init.

When a process ends via exit, all of the memory and resources associated with it are deallocated so they can be used by other processes.
However, the process's entry in the process table remains. The parent can read the child's exit status by executing the wait system call, whereupon the zombie is removed. The wait call may be executed in sequential code, but it is commonly executed in a handler for the SIGCHLD signal, which the parent receives whenever a child has died.

After the zombie is removed, its process identifier (PID) and entry in the process table can then be reused. However, if a parent fails to call wait, the zombie will be left in the process table, causing a resource leak. In some situations this may be desirable, with the parent process intentionally continuing to hold the resource: for example, if the parent creates another child process, it ensures that the new child will not be allocated the same PID.

On modern UNIX-like systems (that comply with the SUSv3 specification in this respect), the following special case applies: if the parent explicitly ignores SIGCHLD by setting its handler to SIG_IGN (rather than simply ignoring the signal by default) or has the SA_NOCLDWAIT flag set, all child exit status information will be discarded and no zombie processes will be left.[1]

Zombies can be identified in the output from the Unix ps command by the presence of a "Z" in the "STAT" column.[2] Zombies that exist for more than a short period of time typically indicate a bug in the parent program, or just an uncommon decision to not reap children (see example). If the parent program is no longer running, zombie processes typically indicate a bug in the operating system. As with other resource leaks, the presence of a few zombies is not worrisome in itself, but may indicate a problem that would grow serious under heavier loads. Since there is no memory allocated to zombie processes (the only system memory usage is for the process table entry itself), the primary concern with many zombies is not running out of memory, but rather running out of process table entries, concretely process ID numbers.
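The SIG_IGN special case described above can be demonstrated with a short sketch (an illustration assuming a Unix-like system, not code from the article): after explicitly ignoring SIGCHLD, a terminated child is reaped automatically and there is nothing left for waitpid to collect.

```python
import os
import signal
import time

# Explicitly ignore SIGCHLD: per SUSv3, terminated children are then
# reaped automatically and never linger as zombies.
signal.signal(signal.SIGCHLD, signal.SIG_IGN)

pid = os.fork()
if pid == 0:
    os._exit(0)            # child exits immediately

time.sleep(0.5)            # give the child time to terminate

# With SIG_IGN in effect, the child's exit status was discarded, so
# waitpid() fails with ECHILD (ChildProcessError in Python).
try:
    os.waitpid(pid, 0)
    auto_reaped = False
except ChildProcessError:
    auto_reaped = True

print("no zombie left" if auto_reaped else "child was still waitable")
```

Removing the signal.signal line restores the default behavior, in which the child would remain a zombie until explicitly waited on.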
However, zombies can hold open buffers that are associated with file descriptors, and thereby cause memory to be consumed by the zombie. Zombies can also hold a file descriptor to a file that has been deleted. This prevents the file system from recovering the i-nodes for the deleted file. Therefore, the command to show disk usage will not count the deleted files whose space cannot be reused, due to the zombie holding the file descriptor.

To remove zombies from a system, the SIGCHLD signal can be sent to the parent manually, using the kill command. If the parent process still refuses to reap the zombie, and if it would be fine to terminate the parent process, the next step can be to remove the parent process. When a process loses its parent, init becomes its new parent. init periodically executes the wait system call to reap any zombies with init as parent.

Synchronously waiting for specific child processes in a specific order may leave zombies present longer than the above-mentioned "short period of time"; it is not necessarily a program bug.

In the first loop of the example program, the original (parent) process forks 10 copies of itself. Each of these child processes (detected by the fact that fork() returned zero) prints a message, sleeps, and exits. All of the children are created at essentially the same time (since the parent is doing very little in the loop), so it is somewhat random when each of them gets scheduled for the first time, and thus the order of their messages is scrambled. During the loop, an array of child process IDs is built. There is a copy of the pids[] array in all 11 processes, but only in the parent is it complete; the copy in each child will be missing the lower-numbered child PIDs, and have zero for its own PID. (Not that this really matters, as only the parent process actually uses this array.)

The second loop executes only in the parent process (because every child exits before reaching it), and waits for each child to exit.
It waits for the child that slept 10 seconds first; all the others have long since exited, so all of the messages (except the first) appear in quick succession. There is no possibility of random ordering here, since the sequence is driven by a loop in a single process. The first parent message actually appeared before any of the child messages – the parent was able to continue into the second loop before any of the child processes were able to start. This again is just the random behavior of the process scheduler – the "parent9" message could have appeared anywhere in the sequence prior to "parent8". Child0 through Child8 spend one or more seconds in this state, between the time they exited and the time the parent did a waitpid() on them. The parent was already waiting on Child9 before it exited, so that one process spent essentially no time as a zombie.[3]
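The program described above is not reproduced in this excerpt. The following is a reconstruction of its structure (a sketch in Python rather than C; the parameters n and unit are our additions so that shorter runs are possible, with child i sleeping (i + 1) × unit seconds):

```python
import os
import time

def fork_demo(n=10, unit=1.0):
    """First loop: fork n children; each prints, sleeps, and exits.
    Second loop: the parent prints, then blocks in waitpid(), reaping
    the longest sleeper first, so lower-numbered children sit as
    zombies until their turn comes."""
    pids = []
    for i in range(n):
        pid = os.fork()
        if pid == 0:                        # child: fork() returned zero
            print(f"child{i}", flush=True)
            time.sleep((i + 1) * unit)
            os._exit(0)
        pids.append(pid)                    # only the parent's copy is complete
    order = []
    for i in reversed(range(n)):
        print(f"parent{i}", flush=True)     # printed before the blocking wait
        os.waitpid(pids[i], 0)              # child i was a zombie until here
        order.append(i)
    return order

# fork_demo() with the defaults reproduces the 10-child, 10-second
# behaviour described in the text.
```

Because the parent prints before each blocking waitpid(), "parent9" appears early while the parent then sits for the full sleep of child 9; the remaining parent messages follow in quick succession, as described above.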
https://en.wikipedia.org/wiki/Zombie_process
George Edward Moore OM FBA (4 November 1873 – 24 October 1958) was an English philosopher who, with Bertrand Russell, Ludwig Wittgenstein and earlier Gottlob Frege, was among the initiators of analytic philosophy. He and Russell began de-emphasizing the idealism which was then prevalent among British philosophers and became known for advocating common-sense concepts and contributing to ethics, epistemology and metaphysics. He was said to have had an "exceptional personality and moral character".[6] Ray Monk dubbed him "the most revered philosopher of his era".[7]

As Professor of Philosophy at the University of Cambridge, he influenced but abstained from the Bloomsbury Group, an informal set of intellectuals. He edited the journal Mind. He was a member of the Cambridge Apostles from 1894 to 1901,[8] a fellow of the British Academy from 1918, and chairman of the Cambridge University Moral Sciences Club from 1912 to 1944.[9][10] A humanist, he presided over the British Ethical Union (now Humanists UK) in 1935–1936.[11]

George Edward Moore was born in Upper Norwood, in south-east London, on 4 November 1873, the middle child of the seven children of Daniel Moore, a medical doctor, and Henrietta Sturge.[12][13][14] His grandfather was the author George Moore. His eldest brother was Thomas Sturge Moore, a poet, writer and engraver.[12][15][16]

He was educated at Dulwich College[17] and, in 1892, began attending Trinity College, Cambridge, to study classics and moral sciences. His tripos results were a double first.[18] He became a Fellow of Trinity in 1898 and was later University of Cambridge Professor of Mental Philosophy and Logic from 1925 to 1939.

Moore is now best known for defending ethical non-naturalism, his emphasis on common sense as a philosophical method, and the paradox that bears his name. He was admired by, and influenced, other philosophers and some of the Bloomsbury Group.
But unlike his colleague and admirer Bertrand Russell, who for some years thought Moore fulfilled his "ideal of genius",[19] he is mostly unknown today outside academic philosophy. Moore's essays are known for the clarity and circumspection of their writing style and for their methodical and patient treatment of philosophical problems. He was critical of modern philosophy for its lack of progress, which he saw as a stark contrast to the dramatic advances in the natural sciences since the Renaissance. Among Moore's most famous works are his Principia Ethica[20] and his essays "The Refutation of Idealism", "A Defence of Common Sense", and "A Proof of the External World".

Moore was an important and admired member of the secretive Cambridge Apostles, a discussion group drawn from the British intellectual elite. At the time another member, the 22-year-old Bertrand Russell, wrote "I almost worship him as if he were a god. I have never felt such an extravagant admiration for anybody",[7] and would later write that "for some years he fulfilled my ideal of genius. He was in those days beautiful and slim, with a look almost of inspiration as deeply passionate as Spinoza's".[21]

From 1918 to 1919, Moore was chairman of the Aristotelian Society, a group committed to the systematic study of philosophy, its historical development and its methods and problems.[22] He was appointed to the Order of Merit in 1951.[23]

Moore died in the Evelyn Nursing Home in England on 24 October 1958.[24] He was cremated at Cambridge Crematorium on 28 October 1958 and his ashes interred at the Parish of the Ascension Burial Ground in the city. His wife, Dorothy Ely (1892–1977), was buried there.
Together, they had two sons, the poet Nicholas Moore and the composer Timothy Moore.[25][26]

His influential work Principia Ethica is one of the main inspirations of the reaction against ethical naturalism (see ethical non-naturalism) and is partly responsible for the twentieth-century concern with meta-ethics.[27]

Moore asserted that philosophical arguments can suffer from a confusion between the use of a term in a particular argument and the definition of that term (in all arguments). He named this confusion the naturalistic fallacy. For example, an ethical argument may claim that if an item has certain properties, then that item is 'good'. A hedonist may argue that 'pleasant' items are 'good' items. Other theorists may argue that 'complex' things are 'good' things. Moore contends that, even if such arguments are correct, they do not provide definitions for the term 'good'. The property of 'goodness' cannot be defined. It can only be shown and grasped. Any attempt to define it (X is good if it has property Y) will simply shift the problem (why is Y-ness good in the first place?).

Moore's argument for the indefinability of 'good' (and thus for the fallaciousness of the "naturalistic fallacy") is often termed the open-question argument; it is presented in § 13 of Principia Ethica. The argument concerns the nature of statements such as "Anything that is pleasant is also good" and the possibility of asking questions such as "Is it good that x is pleasant?". According to Moore, these questions are open and these statements are significant, and they will remain so no matter what is substituted for "pleasure". Moore concludes from this that any analysis of value is bound to fail. In other words, if value could be analysed, then such questions and statements would be trivial and obvious. Since they are anything but trivial and obvious, value must be indefinable.

Critics of Moore's arguments sometimes claim that he is appealing to general puzzles concerning analysis (cf.
the paradox of analysis), rather than revealing anything special about value. The argument clearly depends on the assumption that if 'good' were definable, it would be an analytic truth about 'good', an assumption that many contemporary moral realists such as Richard Boyd and Peter Railton reject. Other responses appeal to the Fregean distinction between sense and reference, allowing that value concepts are special and sui generis, but insisting that value properties are nothing but natural properties (this strategy is similar to that taken by non-reductive materialists in the philosophy of mind).

Moore contended that goodness cannot be analysed in terms of any other property. In Principia Ethica, he writes:

Therefore, we cannot define 'good' by explaining it in other words. We can only indicate a thing or an action and say "That is good". Similarly, we cannot describe to a person born totally blind exactly what yellow is. We can only show a sighted person a piece of yellow paper or a yellow scrap of cloth and say "That is yellow".

In addition to categorising 'good' as indefinable, Moore also emphasized that it is a non-natural property. This means that it cannot be empirically or scientifically tested or verified – it is not analysable by "natural science".

Moore argued that, once arguments based on the naturalistic fallacy had been discarded, questions of intrinsic goodness could be settled only by appeal to what he (following Sidgwick) termed "moral intuitions": self-evident propositions which recommend themselves to moral thought, but which are not susceptible to either direct proof or disproof (Principia, § 45). As a result of this view, he has often been described by later writers as an advocate of ethical intuitionism.
Moore, however, wished to distinguish his opinions from the views usually described as "Intuitionist" when Principia Ethica was written:

In order to express the fact that ethical propositions of my first class [propositions about what is good as an end in itself] are incapable of proof or disproof, I have sometimes followed Sidgwick's usage in calling them 'Intuitions.' But I beg that it may be noticed that I am not an 'Intuitionist,' in the ordinary sense of the term. Sidgwick himself seems never to have been clearly aware of the immense importance of the difference which distinguishes his Intuitionism from the common doctrine, which has generally been called by that name. The Intuitionist proper is distinguished by maintaining that propositions of my second class – propositions which assert that a certain action is right or a duty – are incapable of proof or disproof by any enquiry into the results of such actions. I, on the contrary, am no less anxious to maintain that propositions of this kind are not 'Intuitions,' than to maintain that propositions of my first class are Intuitions.

Moore distinguished his view from that of deontological intuitionists, who claimed that "intuitions" could determine questions about what actions are right or required by duty. Moore, as a consequentialist, argued that "duties" and moral rules could be determined by investigating the effects of particular actions or kinds of actions (Principia, § 89), and so were matters for empirical investigation rather than direct objects of intuition (Principia, § 90). According to Moore, "intuitions" revealed not the rightness or wrongness of specific actions, but only what things were good in themselves, as ends to be pursued.

Moore holds that right actions are those producing the most good.[28] The difficulty with this is that the consequences of most actions are too complex for us to take properly into account, especially the long-term consequences.
Because of this, Moore suggests that the definition of duty is limited to what generally produces better results than the probable alternatives in the comparatively near future.[29]: § 109 Whether a given rule of action is also a duty depends to some extent on the conditions of the corresponding society, but duties agree mostly with what common sense recommends.[29]: § 95 Virtues, such as honesty, can in turn be defined as permanent dispositions to perform duties.[29]: § 109

One of the most important parts of Moore's philosophical development was his break with the idealism that dominated British philosophy (as represented by the works of his former teachers F. H. Bradley and John McTaggart), and his defence of what he regarded as a "common sense" form of realism. In his 1925 essay "A Defence of Common Sense", he argued against idealism and scepticism toward the external world on the grounds that they could not give reasons to accept that their metaphysical premises were more plausible than the reasons we have for accepting the common sense claims about our knowledge of the world which sceptics and idealists must deny. He famously put the point into dramatic relief with his 1939 essay "Proof of an External World", in which he gave a common sense argument against scepticism by raising his right hand and saying "Here is one hand", then raising his left and saying "And here is another", then concluding that there are at least two external objects in the world, and therefore that he knows (by this argument) that an external world exists. Not surprisingly, not everyone inclined to sceptical doubts found Moore's method of argument entirely convincing; Moore, however, defended his argument on the grounds that sceptical arguments seem invariably to require an appeal to "philosophical intuitions" that we have considerably less reason to accept than the common sense claims they supposedly refute.
The "Here is one hand" argument also influenced Ludwig Wittgenstein, who spent his last years working out a new approach to Moore's argument in the remarks that were published posthumously as On Certainty.

Moore is also remembered for drawing attention to the peculiar inconsistency involved in uttering a sentence such as "It is raining, but I do not believe it is raining", a puzzle now commonly termed "Moore's paradox". The puzzle arises because it seems inconsistent for anyone to assert such a sentence, yet there is no logical contradiction between "It is raining" and "I don't believe that it is raining": the former is a statement about the weather and the latter a statement about a person's belief about the weather, and it is perfectly logically possible that it may rain while a person does not believe that it is raining.

In addition to Moore's own work on the paradox, the puzzle also inspired a great deal of work by Ludwig Wittgenstein, who described the paradox as the most impressive philosophical insight that Moore had ever introduced. It is said[by whom?] that when Wittgenstein first heard this paradox one evening (Moore had earlier stated it in a lecture), he rushed round to Moore's lodgings, got him out of bed and insisted that Moore repeat the entire lecture to him.

Moore's description of the principle of the organic whole is nonetheless extremely straightforward, and a variant on a pattern that began with Aristotle. According to Moore, a moral actor cannot survey the 'goodness' inherent in the various parts of a situation, assign a value to each of them, and then generate a sum in order to get an idea of its total value. A moral scenario is a complex assembly of parts, and its total value is often created by the relations between those parts, and not by their individual value. The organic metaphor is thus very appropriate: biological organisms seem to have emergent properties which cannot be found anywhere in their individual parts.
For example, a human brain seems to exhibit a capacity for thought when none of its neurons exhibits any such capacity. In the same way, a moral scenario can have a value different from the sum of its component parts.

To understand the application of the organic principle to questions of value, it is perhaps best to consider Moore's primary example, that of a consciousness experiencing a beautiful object. To see how the principle works, a thinker engages in "reflective isolation", the act of isolating a given concept in a kind of null context and determining its intrinsic value. In our example, we can easily see that, in themselves, beautiful objects and consciousnesses are not particularly valuable things. They might have some value, but when we consider the total value of a consciousness experiencing a beautiful object, it seems to exceed the simple sum of these values. Hence the value of a whole must not be assumed to be the same as the sum of the values of its parts.
https://en.wikipedia.org/wiki/G._E._Moore#Organic_wholes
e2fsprogs (sometimes called the e2fs programs) is a set of utilities for maintaining the ext2, ext3 and ext4 file systems. Since those file systems are often the default for Linux distributions, it is commonly considered essential software. Many of the included utilities are based on the libext2fs library.

Despite what its name might suggest, e2fsprogs works not only with ext2 but also with ext3 and ext4. Although ext3's journaling capability can reduce the need to use e2fsck, it is sometimes still necessary to help protect against kernel bugs or bad hardware.

As the userspace companion of the ext2, ext3, and ext4 drivers in the Linux kernel, the e2fsprogs are most commonly used with Linux. However, they have been ported to other systems, such as FreeBSD and Darwin.
https://en.wikipedia.org/wiki/E2fsprogs
An escrow is a contractual arrangement in which a third party (the stakeholder or escrow agent) receives and disburses money or property for the primary transacting parties, with the disbursement dependent on conditions agreed to by the transacting parties. Examples include an account established by a broker for holding funds on behalf of the broker's principal or some other person until the consummation or termination of a transaction,[1] or a trust account held in the borrower's name to pay obligations such as property taxes and insurance premiums. The word derives from the Old French word escroue, meaning a scrap of paper or a scroll of parchment; this indicated the deed that a third party held until a transaction was completed.[2]

Escrow generally refers to money held by a third party on behalf of transacting parties. It is mostly used regarding the purchase of shares of a company. It is best known in the United States in the context of the real estate industry (specifically in mortgages, where the mortgage company establishes an escrow account to pay property tax and insurance during the term of the mortgage).[3] Escrow is an account separate from the mortgage account into which funds are deposited to pay certain obligations that attach to the mortgage, usually property taxes and insurance. The escrow agent has the duty to properly account for the escrow funds and to ensure that the funds are used only for their intended purpose. Since a mortgage lender is not willing to take the risk that a homeowner may not pay property tax, escrow is usually required under the mortgage terms.

Escrow companies are also commonly used in the transfer of high-value personal and business property, such as websites and businesses, and in the completion of person-to-person remote auctions (such as eBay), although the advent of new low-cost online escrow services has meant that even low-cost transactions are now starting to benefit from the use of escrow.
In the UK, escrow accounts are often used during private property transactions to hold solicitors' clients' money, such as the deposit, until such time as the transaction completes.[4]

Internet escrow has existed since the beginning of Internet auctions and commerce. It was one of the many developments that allowed trust to be established in the online sphere.[5] As with traditional escrow, Internet escrow works by placing money in the control of an independent and licensed third party in order to protect both the buyer and seller in a transaction. When both parties verify that the transaction has been completed per the terms set, the money is released. If at any point there is a dispute between the parties, the process moves to dispute resolution, the outcome of which decides what happens to the money in escrow.

With the growth of both business and individual commerce on the web, traditional escrow companies have been supplanted by new technologies. In the US, the California Department of Business Oversight enacted Internet escrow companies as a licensed class effective 1 July 2001.[6] The first Internet escrow company to be licensed was Escrow.com,[7] founded by Fidelity National Financial in 1999.[8]

In the European Union, the Payment Services Directive, which commenced on 1 November 2009, has for the first time allowed the introduction of very low-cost Internet escrow services that are properly licensed and government-regulated. The regulatory framework in the EU gives these web-based escrow services – which operate along the lines of the expensive letter of credit services run by banks for international buyers and sellers, but at a cost in cents rather than thousands of euros – the ability to enhance security in commercial transactions.[9]

Bogus escrow methods have been employed online.
In an effort to persuade a wary Internet auction participant, the perpetrator will propose the use of a third-party escrow service. The victim is unaware that the perpetrator has actually created an escrow site that closely resembles a legitimate escrow service. The victim sends payment to the fraudulent escrow company and ends up receiving nothing in return. Alternatively, a victim may send merchandise to the subject and wait for payment through the escrow site, which is never received because it is illegitimate.[10] Genuine online escrow companies will be listed on a government register, and users are generally advised not to use an online escrow service without first verifying that it is genuine by independently consulting a government online register. Currently, the US federal government does not offer a license for online escrow services. However, certain states offer their own licenses for online escrow services, such as the California Department of Business[11] and the Arizona Department of Financial Institutions.[12]

Escrow is also used in the field of automatic banking and vending equipment. One example is automated teller machines (ATMs), where an escrow function allows the machine to hold the money deposited by the customer separately, so that if he or she challenges the counting result, the money can be returned. Another example is a vending machine, where the customer's money is held in a separate escrow area pending successful completion of the transaction. If a problem occurs and the customer presses the refund button, the coins are returned from escrow; if no problem occurs, they fall into the coin vault of the machine.[13]

Source code escrow agents hold the source code of software in escrow, just as other escrow companies hold cash. Sometimes one may not own or have any rights to the software (including source code) that they are accessing under the terms of a regular SaaS or desktop software agreement.
This arrangement does not usually become an issue until technical problems start to arise, i.e. unexpected service interruptions, downtime, loss of application functionality and loss of data. These can add significant costs to one's business, as one remains reliant upon the software supplier to resolve them, unless an escrow agreement is in place. Escrow is when the software source code is held by a third party – an escrow agent – on behalf of the customer and the supplier.[citation needed]

Information escrow agents, such as the International Creative Registry, hold intellectual property and other information in escrow. Examples include song music and lyrics, manufacturing designs and laboratory notebooks, and television and movie treatments and scripts. This is done to establish legal ownership rights, with the independent escrow agents attesting to the information's ownership, contents, and creation date.

Escrow is also known in the judicial context. So-called escrow funds are commonly used to distribute money from a cash settlement in a class action or environmental enforcement action. This way the defendant is not responsible for distribution of judgment monies to the individual plaintiffs or to the court-determined use (such as environmental remediation or mitigation). The defendant pays the total amount of the judgment (or settlement) to the court-administered or appointed escrow fund, and the fund distributes the money (often reimbursing its expenses from the judgment funds).

In the US, escrow payment is a common term referring to the portion of a mortgage payment that is designated to pay for real property taxes and hazard insurance. It is an amount "over and above" the principal and interest portion of a mortgage payment. Since the escrow payment is used to pay taxes and insurance, it is referred to as "T&I", while the mortgage payment consisting of principal and interest is called "P&I".
The sum total of all elements is then referred to as "PITI", for "Principal, Interest, Tax, and Insurance". Some mortgage companies require customers to maintain an escrow account that pays the property taxes and hazard insurance; others offer it as an option for customers. Some types of loans, most notably Federal Housing Administration (FHA) loans, require the lender to maintain an escrow account for the life of the loan.

Even with a fixed interest rate, monthly mortgage payments may change over the life of the loan due to changes in property taxes and insurance premiums. For instance, if a hazard insurance premium increases by $120 per year, the escrow payment will need to increase by $10 per month to account for this difference (in addition to collection for the resulting escrow shortage, since the mortgage company paid $120 more for the hazard insurance premium than was anticipated). By RESPA guidelines, the escrow payment must be recomputed at least once every 12 months to account for increases in property taxes or insurance; this is called an escrow analysis.

The escrow payment used to pay taxes and insurance is a long-term escrow account that may last for years or for the life of the loan. Escrow can also refer to a shorter-term account used to facilitate the closing of a real estate transaction. In this type of escrow, the escrow company holds all documents and money related to closing the transaction, rather than having the buyer and the seller deal directly with each other. When and if the transaction is ready to close, the escrow company distributes all funds and documents to their rightful recipients and records the deed with the appropriate authorities.[14]

Courts sometimes act as stakeholders, holding property while litigation between the possible owners resolves the issue of which one is entitled to the property.
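The escrow arithmetic above is simple division of annual obligations over 12 monthly payments; a minimal sketch (the function name is ours, and the figures are illustrative only):

```python
def monthly_escrow(annual_property_tax, annual_hazard_insurance):
    """Monthly T&I portion of PITI: annual obligations spread over 12 payments."""
    return (annual_property_tax + annual_hazard_insurance) / 12.0

# A $120/year increase in the hazard insurance premium raises the
# monthly escrow payment by $120 / 12 = $10, as in the example above:
print(monthly_escrow(0, 120))
```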
An escrow arrangement is often used as part of mergers and acquisitions, as a supplement to the warranties and indemnities offered by the seller(s).[15] This will be particularly likely where the credit risk of the seller(s) is of poor quality and the buyer is concerned about their ability to recover any sums that may become due. Unlike many other forms of escrow, escrow arrangements in corporate transactions are often designed to last for extended periods rather than simply to complete the transfer of an asset. There is also commonly a requirement for the escrow agent to adjudicate on the validity of a claim on the escrow funds, which can lead to the risk of a dispute between the parties. Because of the length of time for which the funds are held, these escrow arrangements need to take into account different considerations from other escrow arrangements, for example: (i) provision of information to the parties; (ii) application of interest earned on the funds; and (iii) creditworthiness of the financial institution.

As another example, two people may bet on the outcome of a future event. They ask a third, disinterested, neutral person – the stakeholder – to hold the money ("stakes") they have wagered ("staked"). After the event occurs, the stakeholder distributes the stakes to one or both of the original (or other) parties according to the outcome of the event and according to the previously decided conditions. Trustees also often act as stakeholders, holding property until beneficiaries come of age, for example.

Not all escrow agreements impose the duties of a legal trustee on the escrow agent, and in many such agreements escrow agents are held to a mere gross negligence standard and benefit from indemnity and hold harmless provisions. If the escrow agent is licensed by a governmental authority,[where?] then much higher legal standards may apply.
https://en.wikipedia.org/wiki/Escrow
In communications, Circuit Switched Data (CSD), also known as GSM data, is the original form of data transmission developed for the time-division multiple access (TDMA)-based mobile phone systems like the Global System for Mobile Communications (GSM). In later years, High Speed Circuit Switched Data (HSCSD) was developed, providing increased data rates over conventional CSD. After 2010 many telecommunication carriers dropped support for CSD and HSCSD, which had been superseded by GPRS and EDGE (E-GPRS).

CSD uses a single radio time slot to deliver 9.6 kbit/s data transmission to the GSM network switching subsystem, where it can be connected through the equivalent of a normal modem to the Public Switched Telephone Network (PSTN), allowing direct calls to any dial-up service.

For backwards compatibility, the IS-95 standard also supports CDMA Circuit Switched Data. However, unlike TDMA, there are no time slots, and all CDMA radios can be active all the time to deliver up to 14.4 kbit/s data transmission speeds. With the evolution of CDMA to CDMA2000 and 1xRTT, the use of IS-95 CDMA Circuit Switched Data declined in favour of the faster data transmission speeds available with the newer technologies.

Prior to CSD, data transmission over mobile phone systems was done by using a modem, either built into the phone or attached to it. Such systems were limited by the quality of the audio signal to 2.4 kbit/s or less. With the introduction of digital transmission in TDMA-based systems like GSM, CSD provided almost direct access to the underlying digital signal, allowing for higher speeds. At the same time, the speech-oriented audio compression used in GSM meant that data rates through a traditional modem connected to the phone would have been even lower than with older analog systems.

A CSD call functions in a very similar way to a normal voice call in a GSM network. A single dedicated radio time slot is allocated between the phone and the base station.
A dedicated "sub-time slot" (16 kbit/s) is allocated from the base station to the transcoder, and finally, another time slot (64 kbit/s) is allocated from the transcoder to the Mobile Switching Centre (MSC). At the MSC it is possible to use a modem to convert to an analog signal, though this will typically actually be encoded as a digital pulse-code modulation (PCM) signal when sent from the MSC. It is also possible to use the digital signal directly as an Integrated Services Digital Network (ISDN) data signal and feed it into the equivalent of a remote access server.

High Speed Circuit Switched Data (HSCSD) is an enhancement to CSD designed to provide higher data rates by means of more efficient channel coding and/or multiple (up to 4) time slots. It requires the time slots being used to be fully reserved for a single user. A transfer rate of up to 57.6 kbit/s (i.e., 4 × 14.4 kbit/s) can be reached, or even 115 kbit/s if a network allows combining 8 slots instead of 4. It is possible that, either at the beginning of a call or at some point during it, the user's full request cannot be satisfied, since the network is often configured to allow normal voice calls to take precedence over additional time slots for HSCSD users.

An innovation in HSCSD is to allow different error correction methods to be used for data transfer. The original error correction used in GSM was designed to work at the limits of coverage and in the worst case that GSM will handle, which means that a large part of the GSM transmission capacity is taken up by error correction codes. HSCSD provides different levels of possible error correction which can be used according to the quality of the radio link. This means that in the best conditions 14.4 kbit/s can be put through a single time slot that under CSD would carry only 9.6 kbit/s, i.e. a 50% improvement in throughput.
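The rate figures above follow directly from multiplying the per-slot rate by the number of combined slots; a small illustrative helper (the function name is ours):

```python
def hscsd_rate_kbps(slots, per_slot_kbps=14.4):
    """Aggregate HSCSD throughput: each fully reserved time slot carries
    per_slot_kbps (14.4 kbit/s with the lightest coding; 9.6 under CSD)."""
    return slots * per_slot_kbps

print(hscsd_rate_kbps(4))   # 4 slots at 14.4 kbit/s each -> 57.6 kbit/s
print(hscsd_rate_kbps(8))   # 8 combined slots -> roughly the 115 kbit/s figure
```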
The user is typically charged for HSCSD at a rate higher than a normal phone call (e.g., by the number of time slots allocated) for the total period of time that the connection is active. This makes HSCSD relatively expensive in many GSM networks, and is one of the reasons that the packet-switched General Packet Radio Service (GPRS), which typically has lower pricing (based on the amount of data transferred rather than the duration of the connection), has become more common than HSCSD.

Apart from the fact that the full allocated bandwidth of the connection is available to the HSCSD user, HSCSD also has an advantage in GSM systems in terms of lower average radio interface latency than GPRS, because the user of an HSCSD connection does not have to wait for permission from the network to send a packet.

HSCSD is also an option in Enhanced Data Rates for GSM Evolution (EDGE) and Universal Mobile Telecommunications System (UMTS) systems, where packet data transmission rates are much higher. In the UMTS system, the advantages of HSCSD over packet data are even smaller, since the UMTS radio interface has been specifically designed to support high-bandwidth, low-latency packet connections. This means that the primary reason to use HSCSD in this environment would be access to legacy dial-up systems.

HSCSD was specified in 1997.[1] The Nokia 6210 was the first mobile phone from Nokia to support HSCSD. GSM data transmission has advanced since the introduction of CSD. In some places CSD services continued to operate on 2G networks for a long time; in the Netherlands, operator KPN switched the service off in 2021.[3]
https://en.wikipedia.org/wiki/Circuit_Switched_Data
A modchip (short for modification chip) is a small electronic device used to alter or disable artificial restrictions of computers or entertainment devices. Modchips are mainly used in video game consoles, but also in some DVD or Blu-ray players. They introduce various modifications to the host system's function, including the circumvention of region coding, digital rights management, and copy protection checks, for the purpose of using media intended for other markets, copied media, or unlicensed third-party (homebrew) software.

Modchips operate by replacing or overriding a system's protection hardware or software. They achieve this by either exploiting existing interfaces in an unintended or undocumented manner, or by actively manipulating the system's internal communication, sometimes to the point of re-routing it to substitute parts provided by the modchip. Most modchips consist of one or more integrated circuits (microcontrollers, FPGAs, or CPLDs), often complemented with discrete parts, usually packaged on a small PCB to fit within the console system they are designed for. Although there are modchips that can be reprogrammed for different purposes, most modchips are designed to work within only one console system or even only one specific hardware version.

Modchips typically require some degree of technical skill to install, since they must be connected to a console's circuitry, most commonly by soldering wires to select traces or chip legs on a system's circuit board. Some modchips allow for installation by directly soldering the modchip's contacts to the console's circuit ("quicksolder"), by the precise positioning of electrical contacts ("solderless"), or, in rare cases, by plugging them into a system's internal or external connector. Memory cards or cartridges that offer functions similar to modchips work on a completely different concept, namely by exploiting flaws in the system's handling of media.
Such devices are not referred to as modchips, even if they are frequently traded under this umbrella term. The diversity of hardware modchips operate on, and the varying methods they use, mean that while modchips are often used for the same goal, they may work in vastly different ways, even if they are intended for use on the same console. Some of the first modchips for the Wii, known as drive chips, modify the behaviour and communication of the optical drive to bypass security. On the Xbox 360, a common modchip took advantage of the fact that short periods of instability in the CPU could be used to fairly reliably lead it to incorrectly compare security signatures. The precision required in this attack meant that the modchip had to make use of a CPLD. Other modchips, such as the XenoGC and clones for the GameCube, invoke a debug mode where security measures are reduced or absent (in which case a stock Atmel AVR microcontroller was used). A more recent innovation is the optical disc drive emulator (ODDE), which replaces the optical disc drive and allows data to come from another source, bypassing the need to circumvent any security. These often make use of FPGAs to accurately emulate the timing and performance characteristics of the optical drives.

Most cartridge-based console systems did not have modchips produced for them. They usually implemented copy protection and regional lockout with game cartridges, both at the hardware and software level. Converters or passthrough devices have been used to circumvent the restrictions, while flash memory devices (game backup devices) were widely adopted in later years to copy game media. Early in the transition from solid-state to optical media, CD-based console systems did not have regional market segmentation or copy protection measures, due to the rarity and high cost of user-writable media at the time.
Modchips started to surface with the PlayStation system, due to the increasing availability and affordability of CD writers and the increasing sophistication of DRM protocols. At the time, a modchip's sole purpose was to allow the use of imported and copied game media. Today, modchips are available for practically every current console system, often in a great number of variations. In addition to circumventing regional lockout and copy protection mechanisms, modern modchips may introduce more sophisticated modifications to the system, such as allowing the use of user-created software (homebrew), expanding the hardware capabilities of the host system, or even installing an alternative operating system to completely re-purpose the host system (e.g. for use as a home theater PC).

Most modchips open the system to copied media, so the availability of a modchip for a console system is undesirable for console manufacturers. They react by removing the intrusion points exploited by a modchip from subsequent hardware or software versions, changing the PCB layout the modchips are customized for, or by having the firmware or software detect an installed modchip and refuse operation as a consequence. Since modchips often hook into fundamental functions of the host system that cannot be removed or adjusted, these measures may not completely prevent a modchip from functioning, but only prompt an adjustment of its installation process or programming, e.g. to include measures to make it undetectable ("stealth") to its host system. With the advent of online services used by video game consoles, some manufacturers have used the service's license agreement to ban consoles equipped with modchips from those services.[1] In an effort to dissuade modchip creation, some console manufacturers included the option to run homebrew software or even an alternative operating system on their consoles, such as Linux for PlayStation 2.
However, some of these features have been withdrawn at a later date.[2][3][4] An argument can be made that a console system remains largely untouched by modchips as long as its manufacturer provides an official way of running unlicensed third-party software.[5]

One of the most prominent functions of many modchips, the circumvention of copy protection mechanisms, is outlawed by many countries' copyright laws, such as the Digital Millennium Copyright Act in the United States, the European Copyright Directive and its various implementations by the EU member countries, and the Australian Copyright Act. Other laws may apply to the many diversified functions of a modchip; Australian law, for example, specifically allows the circumvention of region coding. The ambiguity of applicable law, its nonuniform interpretation by the courts, and constant profound changes and amendments to copyright law do not allow for a definitive statement on the legality of modchips. A modchip's legality under a country's legislation can only be asserted individually in court. Most of the very few cases that have been brought before a court ended with the conviction of the modchip merchant or manufacturer under the respective country's anti-circumvention laws. A small number of cases in the United Kingdom and Australia were dismissed under the argument that a system's copy protection mechanism was not able to prevent the actual infringement of copyright (the actual process of copying game media) and therefore could not be considered an effective technical protection measure protected by anti-circumvention laws.[6][7] In 2006, Australian copyright law was amended to effectively close this legal loophole.[8]

In a 2017 lawsuit against a retailer, a Canadian court ruled in favor of Nintendo under anti-circumvention provisions in Canadian copyright law, which prohibit any breaching of technical protection measures.
The court ruled that even though the retailer claimed the products could be used for homebrew, thus asserting exemptions for maintaining interoperability, interoperability could be achieved without breaching TPMs because Nintendo offers development kits for its platforms, and thus the defence was invalid.[9] In Japan, modchips were outlawed as part of new legislation in 2018 which made savegame editing and console modding illegal.[10]

An alternative to installing a modchip is the process of softmodding a device. A softmodded device does not need any additional hardware pieces permanently installed inside it. Instead, the software of the device, or an internal part of it, is modified in order to change the device's behaviour.
https://en.wikipedia.org/wiki/Modchip
Disk encryption is a technology which protects information by converting it into code that cannot be deciphered easily by unauthorized people or processes. Disk encryption uses disk encryption software or hardware to encrypt every bit of data that goes on a disk or disk volume. It is used to prevent unauthorized access to data storage.[1]

The expression full disk encryption (FDE) (or whole disk encryption) signifies that everything on the disk is encrypted, but the master boot record (MBR), or similar area of a bootable disk, with code that starts the operating system loading sequence, is not encrypted. Some hardware-based full disk encryption systems can truly encrypt an entire boot disk, including the MBR.

Transparent encryption, also known as real-time encryption and on-the-fly encryption (OTFE), is a method used by some disk encryption software. "Transparent" refers to the fact that data is automatically encrypted or decrypted as it is loaded or saved. With transparent encryption, the files are accessible immediately after the key is provided, and the entire volume is typically mounted as if it were a physical drive, making the files just as accessible as any unencrypted ones. No data stored on an encrypted volume can be read (decrypted) without using the correct password/keyfile(s) or correct encryption keys. The entire file system within the volume is encrypted (including file names, folder names, file contents, and other meta-data).[2]

To be transparent to the end user, transparent encryption usually requires the use of device drivers to enable the encryption process. Although administrator access rights are normally required to install such drivers, encrypted volumes can typically be used by normal users without these rights.[3] In general, every method in which data is seamlessly encrypted on write and decrypted on read, in such a way that the user and/or application software remains unaware of the process, can be called transparent encryption.
Disk encryption does not replace file encryption in all situations. Disk encryption is sometimes used in conjunction with filesystem-level encryption with the intention of providing a more secure implementation. Since disk encryption generally uses the same key for encrypting the whole drive, all of the data can be decrypted when the system runs (however, some disk encryption solutions use multiple keys for encrypting different volumes). If an attacker gains access to the computer at run-time, the attacker has access to all files. Conventional file and folder encryption instead allows different keys for different portions of the disk, so an attacker cannot extract information from still-encrypted files and folders. Unlike disk encryption, filesystem-level encryption does not typically encrypt filesystem metadata, such as the directory structure, file names, modification timestamps or sizes.

A Trusted Platform Module (TPM) is a secure cryptoprocessor embedded in the motherboard that can be used to authenticate a hardware device. Since each TPM chip is unique to a particular device, it is capable of performing platform authentication. It can be used to verify that the system seeking access is the expected system.[4] A limited number of disk encryption solutions have support for TPM. These implementations can wrap the decryption key using the TPM, thus tying the hard disk drive (HDD) to a particular device. If the HDD is removed from that particular device and placed in another, the decryption process will fail. Recovery is possible with the decryption password or token. The TPM can impose a limit on decryption attempts per unit time, making brute-forcing harder. The TPM itself is intended to be impossible to duplicate, so that the brute-force limit is not trivially bypassed.[5] Although this has the advantage that the disk cannot be removed from the device, it might create a single point of failure in the encryption.
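The key-wrapping behaviour described above can be sketched abstractly. This is a minimal illustration, assuming the TPM is modeled as a per-device secret; a real TPM seals keys inside hardware, and the XOR-plus-HMAC construction below is a toy, not an actual key-wrap algorithm:

```python
import hashlib, hmac, secrets

def wrap_key(disk_key: bytes, device_secret: bytes) -> bytes:
    """Toy key wrap: XOR the 32-byte disk key with a hash of the
    device-bound secret, then append an HMAC tag over the result."""
    pad = hashlib.sha256(device_secret).digest()
    wrapped = bytes(a ^ b for a, b in zip(disk_key, pad))
    return wrapped + hmac.new(device_secret, wrapped, hashlib.sha256).digest()

def unwrap_key(blob: bytes, device_secret: bytes) -> bytes:
    """Recover the disk key; fails if run with another device's secret."""
    wrapped, tag = blob[:32], blob[32:]
    expect = hmac.new(device_secret, wrapped, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expect):
        raise ValueError("wrong device: unwrap refused")
    pad = hashlib.sha256(device_secret).digest()
    return bytes(a ^ b for a, b in zip(wrapped, pad))

tpm_a = secrets.token_bytes(32)   # secret unique to device A
disk_key = secrets.token_bytes(32)
blob = wrap_key(disk_key, tpm_a)  # only the wrapped blob is stored
assert unwrap_key(blob, tpm_a) == disk_key
```

Moving the drive to another machine corresponds to calling unwrap_key with a different device secret, which raises an error instead of yielding the key, mirroring the failed decryption described above.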
For example, if something happens to the TPM or the motherboard, a user would not be able to access the data by connecting the hard drive to another computer, unless that user has a separate recovery key.

There are multiple tools available on the market that allow for disk encryption; however, they vary greatly in features and security. They are divided into three main categories: software-based, hardware-based within the storage device, and hardware-based elsewhere (such as the CPU or host bus adaptor). Hardware-based full disk encryption within the storage device is found in so-called self-encrypting drives, which have no impact on performance whatsoever. Furthermore, the media-encryption key never leaves the device itself and is therefore not available to any malware in the operating system. The Trusted Computing Group Opal Storage Specification provides industry-accepted standardization for self-encrypting drives. External hardware is considerably faster than the software-based solutions, although CPU versions may still have a performance impact, and the media encryption keys are not as well protected. There are other (non-TCG/OPAL based) self-encrypting drives (SEDs) that don't have the known vulnerabilities of the TCG/OPAL based drives (see section below).[6] They are host/OS and BIOS independent, do not rely on the TPM module or the motherboard BIOS, and their encryption key never leaves the crypto-boundary of the drive.

All solutions for the boot drive require a pre-boot authentication component, which is available for all types of solutions from a number of vendors. In all cases, the authentication credentials are usually a major potential weakness, since the symmetric cryptography itself is usually strong. Secure and safe recovery mechanisms are essential to the large-scale deployment of any disk encryption solutions in an enterprise.
The solution must provide an easy but secure way to recover passwords (most importantly data) in case the user leaves the company without notice or forgets the password. A challenge–response password recovery mechanism allows the password to be recovered in a secure manner; it is offered by a limited number of disk encryption solutions. An emergency recovery information (ERI) file provides an alternative for recovery if a challenge–response mechanism is unfeasible, for example due to the cost of helpdesk operatives for small companies or implementation challenges.

Most full disk encryption schemes are vulnerable to a cold boot attack, whereby encryption keys can be stolen by cold-booting a machine already running an operating system, then dumping the contents of memory before the data disappears. The attack relies on the data remanence property of computer memory, whereby data bits can take up to several minutes to degrade after power has been removed.[7] Even a Trusted Platform Module (TPM) is not effective against the attack, as the operating system needs to hold the decryption keys in memory in order to access the disk.[7]

Full disk encryption is also vulnerable when a computer is stolen while suspended. As wake-up does not involve a BIOS boot sequence, it typically does not ask for the FDE password. Hibernation, in contrast, goes via a BIOS boot sequence and is safe. All software-based encryption systems are vulnerable to various side channel attacks such as acoustic cryptanalysis and hardware keyloggers. In contrast, self-encrypting drives are not vulnerable to these attacks, since the hardware encryption key never leaves the disk controller.
Also, most full disk encryption schemes don't protect against data tampering (or silent data corruption, i.e. bitrot).[8] That means they only provide privacy, but not integrity. Block cipher-based encryption modes used for full disk encryption are not authenticated encryption themselves, because of concerns about the storage overhead needed for authentication tags. Thus, if data on the disk were tampered with, the data would be decrypted to garbled random data when read, and hopefully errors would be indicated, depending on which data is tampered with (for OS metadata, by the file system; for file data, by the corresponding program that processes the file). One way to mitigate these concerns is to use file systems with full data integrity checks via checksums (like Btrfs or ZFS) on top of full disk encryption. However, cryptsetup has started to experimentally support authenticated encryption.[9]

Full disk encryption has several benefits compared to regular file or folder encryption, or encrypted vaults. One issue to address in full disk encryption is that the blocks where the operating system is stored must be decrypted before the OS can boot, meaning that the key has to be available before there is a user interface to ask for a password. Most full disk encryption solutions utilize pre-boot authentication by loading a small, highly secure operating system which is strictly locked down and hashed against system variables to check the integrity of the pre-boot kernel. Some implementations, such as BitLocker Drive Encryption, can make use of hardware such as a Trusted Platform Module to ensure the integrity of the boot environment, and thereby frustrate attacks that target the boot loader by replacing it with a modified version. This ensures that authentication can take place in a controlled environment, without the possibility of a bootkit being used to subvert the pre-boot decryption.
With a pre-boot authentication environment, the key used to encrypt the data is not decrypted until an external key is input into the system. Solutions for storing the external key exist with varying degrees of security; however, most are better than an unencrypted disk.
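The integrity gap noted above (privacy without tamper detection) can be illustrated with a toy encrypt-then-MAC sketch. The "cipher" here is a stand-in keystream built from SHA-256, not a real disk cipher such as AES-XTS, and the 32-byte tag per sector is exactly the storage overhead that unauthenticated full-disk-encryption modes avoid:

```python
import hashlib, hmac

ENC_KEY = b"k" * 32   # fixed toy keys, for illustration only
MAC_KEY = b"m" * 32

def xor_encrypt(data: bytes) -> bytes:
    """Toy symmetric cipher: XOR with a repeating SHA-256 pad."""
    pad = hashlib.sha256(ENC_KEY).digest() * (len(data) // 32 + 1)
    return bytes(a ^ b for a, b in zip(data, pad))

def seal(sector: bytes) -> bytes:
    """Encrypt-then-MAC: ciphertext plus a 32-byte authentication tag."""
    ct = xor_encrypt(sector)
    return ct + hmac.new(MAC_KEY, ct, hashlib.sha256).digest()

def open_sealed(blob: bytes) -> bytes:
    """Verify the tag before decrypting; reject tampered sectors."""
    ct, tag = blob[:-32], blob[-32:]
    expect = hmac.new(MAC_KEY, ct, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expect):
        raise ValueError("sector tampered with")
    return xor_encrypt(ct)

blob = seal(b"filesystem sector data")
tampered = bytes([blob[0] ^ 1]) + blob[1:]   # flip one ciphertext bit
```

Without the tag, the flipped bit would simply decrypt to garbled data, which is the silent-corruption behaviour the text describes; with the tag, reading the tampered sector fails outright.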
https://en.wikipedia.org/wiki/Disk_encryption
Crypto-shredding or crypto erase (cryptographic erasure) is the practice of rendering encrypted data unusable by deliberately deleting or overwriting the encryption keys: assuming the key is not later recovered and the encryption is not broken, the data should become irrecoverable, effectively permanently deleted or "shredded".[1] This requires that the data have been encrypted.

Data may be considered to exist in three states: data at rest, data in transit and data in use. General data security principles, such as the CIA triad of confidentiality, integrity, and availability, require that all three states be adequately protected. Deleting data at rest on storage media such as backup tapes, data stored in the cloud, computers, phones, or multi-function printers can present challenges when confidentiality of information is of concern. When encryption is in place, data disposal is more secure, as less data (only the key material) needs to be destroyed.

There are various reasons for using crypto-shredding, including when the data is contained in defective or out-of-date systems, there is no further use for the data, or the circumstances are such that there are no longer legal rights to use or retain the data. Legal obligations may also come from regulations such as the right to be forgotten, the General Data Protection Regulation, and others.

Data security is largely influenced by confidentiality and privacy concerns. In some cases all data storage is encrypted, such as encrypting entire hard disks, computer files, or databases. Alternatively, only specific data may be encrypted, such as passport numbers, social security numbers, bank account numbers, person names, or records in a database. Additionally, data in one system may be encrypted with separate keys when that same data is contained in multiple systems. When specific pieces of data are encrypted (possibly with different keys), it allows for more targeted data shredding.
There is no need to have access to the data (like an encrypted backup tape); only the encryption keys need to be shredded.[2] iOS devices and Macintosh computers with an Apple silicon chip use crypto-shredding when performing the "Erase all content and settings" action, by discarding all the keys in 'effaceable storage'. This renders all user data on the device cryptographically inaccessible in a very short amount of time.[3]

There are many security issues that should be considered when securing data; some examples are listed in this section. The security issues listed here are not specific to crypto-shredding, and in general they may apply to all types of data encryption. In addition to crypto-shredding, data erasure, degaussing and physically shredding the physical device (disk) can mitigate the risk further.
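The crypto-shredding idea can be demonstrated end to end with a toy stream cipher. The cipher below (a SHA-256 counter-mode keystream) is an illustration, not a vetted algorithm; real deployments use standard ciphers, but the shredding step is the same: destroy the key, and the stored ciphertext becomes unreadable without ever touching the data itself:

```python
import hashlib, secrets

def keystream_xor(key: bytes, data: bytes) -> bytes:
    """Toy stream cipher: SHA-256 in counter mode (illustration only)."""
    out = bytearray()
    counter = 0
    while len(out) < len(data):
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(b ^ k for b, k in zip(data, out))

record = b"passport no. X1234567"
key = secrets.token_bytes(32)        # per-record encryption key
stored = keystream_xor(key, record)  # only the ciphertext is stored

# Crypto-shredding: discard the key. The backup tape, cloud copy, or
# printer spool holding `stored` never needs to be located or erased.
key = None
```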
https://en.wikipedia.org/wiki/Crypto-shredding
A large language model (LLM) is a type of machine learning model designed for natural language processing tasks such as language generation. LLMs are language models with many parameters, and are trained with self-supervised learning on a vast amount of text. This page lists notable large language models. For the training cost column, 1 petaFLOP-day = 1 petaFLOP/sec × 1 day = 8.64E19 FLOP; only the largest model's cost is listed.
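The unit conversion in the note above is a one-liner to verify: a petaFLOP-day is one petaFLOP per second sustained for a day.

```python
# 1 petaFLOP/s sustained for one day, as used in the training-cost column.
PETA = 10 ** 15
SECONDS_PER_DAY = 24 * 60 * 60           # 86,400 seconds
petaflop_day = PETA * SECONDS_PER_DAY    # = 8.64e19 FLOP, matching the text
```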
https://en.wikipedia.org/wiki/List_of_large_language_models
Bayesian optimization is a sequential design strategy for global optimization of black-box functions[1][2][3] that does not assume any functional form. It is usually employed to optimize expensive-to-evaluate functions. With the rise of artificial intelligence innovation in the 21st century, Bayesian optimization has found prominent use in machine learning problems for optimizing hyperparameter values.[4][5]

The term is generally attributed to Jonas Mockus, who coined it in a series of publications on global optimization in the 1970s and 1980s.[6][7][1] The earliest idea behind Bayesian optimization[8] dates to 1964 and a paper by the American applied mathematician Harold J. Kushner,[9] “A New Method of Locating the Maximum Point of an Arbitrary Multipeak Curve in the Presence of Noise”. Although it did not directly propose Bayesian optimization, this paper's method of locating the maximum of a multipeak curve in a noisy environment provided an important theoretical foundation for subsequent Bayesian optimization.

By the 1980s, the framework we now use for Bayesian optimization was explicitly established. In 1978, the Lithuanian scientist Jonas Mockus,[10] in his paper “The Application of Bayesian Methods for Seeking the Extremum”, discussed how to use Bayesian methods to find the extreme value of a function under various uncertain conditions. In this paper, Mockus first proposed the Expected Improvement (EI) principle, which is one of the core sampling strategies of Bayesian optimization. This criterion balances exploration against exploitation by maximizing the expected improvement at each step. Because of the usefulness and profound impact of this principle, Jonas Mockus is widely regarded as the founder of Bayesian optimization.
Although the Expected Improvement (EI) principle is one of the earliest proposed core sampling strategies for Bayesian optimization, it is not the only one; other criteria such as Probability of Improvement (PI) and the Upper Confidence Bound (UCB)[11] have since been developed. In the 1990s, Bayesian optimization began to gradually transition from pure theory to real-world applications. In 1998, Donald R. Jones[12] and his coworkers published the paper “Efficient Global Optimization of Expensive Black-Box Functions”.[13] In this paper, they combined a Gaussian process (GP) surrogate model with an elaboration of the Expected Improvement (EI) principle proposed by Jonas Mockus in 1978. Through the efforts of Donald R. Jones and his colleagues, Bayesian optimization began to shine in fields like computer science and engineering. However, the computational complexity of Bayesian optimization, relative to the computing power of the time, still limited its development to a large extent.

In the 21st century, with the gradual rise of artificial intelligence and bionic robots, Bayesian optimization has been widely used in machine learning and deep learning, and has become an important tool for hyperparameter tuning.[14] Companies such as Google, Facebook and OpenAI have added Bayesian optimization to their deep learning frameworks to improve search efficiency. However, Bayesian optimization still faces many challenges. For example, because a Gaussian process[15] is used as the surrogate model, training the Gaussian process becomes very slow and computationally expensive when there is a lot of data, which makes it difficult for this optimization method to work well in more complex applications such as drug development and medical experiments.
Bayesian optimization is used on problems of the form $\max_{x\in X} f(x)$, with $X$ being the set of all possible parameters $x$, typically with at most 20 dimensions for optimal usage ($X\subseteq \mathbb{R}^{d}$, $d\leq 20$), and whose membership can easily be evaluated. Bayesian optimization is particularly advantageous for problems where $f(x)$ is difficult to evaluate due to its computational cost. The objective function $f$ is continuous and takes the form of some unknown structure, referred to as a "black box". Upon its evaluation, only $f(x)$ is observed and its derivatives are not evaluated.[17]

Since the objective function is unknown, the Bayesian strategy is to treat it as a random function and place a prior over it. The prior captures beliefs about the behavior of the function. After gathering the function evaluations, which are treated as data, the prior is updated to form the posterior distribution over the objective function. The posterior distribution, in turn, is used to construct an acquisition function (often also referred to as infill sampling criteria) that determines the next query point.

There are several methods used to define the prior/posterior distribution over the objective function. The two most common methods use Gaussian processes in a method called kriging. Another, less expensive method uses the Parzen-tree estimator to construct two distributions for 'high' and 'low' points, and then finds the location that maximizes the expected improvement.[18]

Standard Bayesian optimization relies upon each $x\in X$ being easy to evaluate, and problems that deviate from this assumption are known as exotic Bayesian optimization problems.
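The Expected Improvement acquisition mentioned above has a closed form under a Gaussian posterior, which makes the exploration/exploitation trade-off concrete. The sketch below assumes (for maximization) a surrogate that reports a posterior mean and standard deviation at each candidate point; the candidate values are made up for illustration:

```python
import math

def expected_improvement(mu: float, sigma: float, f_best: float) -> float:
    """Closed-form EI for maximization under a Gaussian posterior:
    (mu - f_best) * Phi(z) + sigma * phi(z), z = (mu - f_best) / sigma."""
    if sigma <= 0.0:
        return max(mu - f_best, 0.0)
    z = (mu - f_best) / sigma
    cdf = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))  # standard normal CDF
    pdf = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)
    return (mu - f_best) * cdf + sigma * pdf

# Hypothetical (mean, std) posteriors at three candidate points, with the
# best observation so far f_best = 1.0. A confident point slightly below
# the incumbent loses to an uncertain one: exploration wins here.
candidates = [(0.9, 0.05), (0.7, 0.6), (1.0, 0.0)]
best = max(candidates, key=lambda ms: expected_improvement(*ms, f_best=1.0))
```

The acquisition step of the loop described above then queries $f$ at the chosen point, updates the posterior, and repeats.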
Optimization problems can become exotic if it is known that there is noise, the evaluations are being done in parallel, the quality of evaluations relies upon a tradeoff between difficulty and accuracy, random environmental conditions are present, or the evaluation involves derivatives.[17]

Examples of acquisition functions include probability of improvement, expected improvement, and upper confidence bounds, as well as hybrids of these.[19] They all trade off exploration and exploitation so as to minimize the number of function queries. As such, Bayesian optimization is well suited for functions that are expensive to evaluate. The maximum of the acquisition function is typically found by resorting to discretization or by means of an auxiliary optimizer: acquisition functions are maximized using a numerical optimization technique, such as Newton's method or quasi-Newton methods like the Broyden–Fletcher–Goldfarb–Shanno algorithm.

The approach has been applied to solve a wide range of problems,[20] including learning to rank,[21] computer graphics and visual design,[22][23][24] robotics,[25][26][27][28] sensor networks,[29][30] automatic algorithm configuration,[31][32] automatic machine learning toolboxes,[33][34][35] reinforcement learning,[36] planning, visual attention, architecture configuration in deep learning, static program analysis, experimental particle physics,[37][38] quality-diversity optimization,[39][40][41] chemistry, material design, and drug development.[17][42][43]

Bayesian optimization has been applied in the field of facial recognition.[44] The performance of the Histogram of Oriented Gradients (HOG) algorithm, a popular feature extraction method, heavily relies on its parameter settings.
Optimizing these parameters can be challenging but crucial for achieving high accuracy.[44] A novel approach to optimize the HOG algorithm parameters and image size for facial recognition, using a Tree-structured Parzen Estimator (TPE) based Bayesian optimization technique, has been proposed.[44] This optimized approach has the potential to be adapted for other computer vision applications, and contributes to the ongoing development of hand-crafted parameter-based feature extraction algorithms in computer vision.[44]
https://en.wikipedia.org/wiki/Bayesian_Optimization
The CheiRank is an eigenvector with a maximal real eigenvalue of the Google matrix $G^{*}$ constructed for a directed network with the inverted directions of links. It is similar to the PageRank vector, which ranks the network nodes on average proportionally to their number of incoming links, being the maximal eigenvector of the Google matrix $G$ with the given initial direction of links. Due to the inversion of link directions, the CheiRank ranks the network nodes on average proportionally to their number of outgoing links. Since each node belongs both to the CheiRank and the PageRank vector, the ranking of information flow on a directed network becomes two-dimensional.

For a given directed network, the Google matrix is constructed in the way described in the article Google matrix. The PageRank vector is the eigenvector with the maximal real eigenvalue $\lambda = 1$. It was introduced in[1] and is discussed in the article PageRank. In a similar way, the CheiRank is the eigenvector with the maximal real eigenvalue of the matrix $G^{*}$, built in the same way as $G$ but using the inverted direction of links in the initially given adjacency matrix. Both matrices $G$ and $G^{*}$ belong to the class of Perron–Frobenius operators, and according to the Perron–Frobenius theorem the CheiRank $P_i^{*}$ and PageRank $P_i$ eigenvectors have nonnegative components which can be interpreted as probabilities.[2][3] Thus all $N$ nodes $i$ of the network can be ordered in decreasing probability order with ranks $K_i^{*}$, $K_i$ for CheiRank and PageRank $P_i^{*}$, $P_i$ respectively.
On average, the PageRank probability $P_i$ is proportional to the number of ingoing links, with $P_i \propto 1/K_i^{\beta}$.[4][5][6] For the World Wide Web (WWW) network, the exponent $\beta = 1/(\nu - 1) \approx 0.9$, where $\nu \approx 2.1$ is the exponent of the ingoing links distribution.[4][5] In a similar way, the CheiRank probability is on average proportional to the number of outgoing links, with $P_i^{*} \propto 1/{K_i^{*}}^{\beta^{*}}$, where $\beta^{*} = 1/(\nu^{*} - 1) \approx 0.6$ and $\nu^{*} \approx 2.7$ is the exponent of the outgoing links distribution of the WWW.[4][5]

The CheiRank was introduced for the procedure call network of Linux Kernel software in,[7] and the term itself was used in Zhirov.[8] While the PageRank highlights very well known and popular nodes, the CheiRank highlights very communicative nodes. Top PageRank and CheiRank nodes have a certain analogy to the authorities and hubs appearing in the HITS algorithm,[9] but HITS is query dependent, while the rank probabilities $P_i$ and $P_i^{*}$ classify all nodes of the network. Since each node belongs both to CheiRank and PageRank, we obtain a two-dimensional ranking of network nodes. There had been early studies of PageRank in networks with inverted direction of links,[10][11] but the properties of two-dimensional ranking had not been analyzed in detail. An example of the node distribution in the plane of PageRank and CheiRank is shown in Fig.1 for the procedure call network of Linux Kernel software.[7] The dependence of $P$, $P^{*}$ on $K$, $K^{*}$ for the hyperlink network of Wikipedia English articles is shown in Fig.2 from Zhirov. The distribution of these articles in the plane of PageRank and CheiRank is shown in Fig.3 from Zhirov.
The difference between PageRank and CheiRank is clearly seen from the names of the Wikipedia articles (2009) with the highest rank. At the top of PageRank we have 1. United States, 2. United Kingdom, 3. France, while for CheiRank we find 1. Portal:Contents/Outline of knowledge/Geography and places, 2. List of state leaders by year, 3. Portal:Contents/Index/Geography and places. Clearly PageRank selects first the articles on broadly known subjects with a large number of ingoing links, while CheiRank selects first highly communicative articles with many outgoing links. Since the articles are distributed in two dimensions, they can be ranked in various ways, corresponding to projections of the 2D set onto a line. The horizontal and vertical lines correspond to PageRank and CheiRank; 2DRank combines the properties of CheiRank and PageRank, as discussed by Zhirov.[8] It gives as top Wikipedia articles 1. India, 2. Singapore, 3. Pakistan. The 2D ranking highlights the properties of Wikipedia articles in a new rich and fruitful manner. According to PageRank, the top 100 personalities described in Wikipedia articles fall into five main categories of activity as 58 (politics), 10 (religion), 17 (arts), 15 (science), 0 (sport), so that the importance of politicians is strongly overestimated. The CheiRank gives respectively 15, 1, 52, 16, 16, while for 2DRank one finds 24, 5, 62, 7, 2. This type of 2D ranking can find useful applications for various complex directed networks, including the WWW. CheiRank and PageRank naturally appear for the world trade network, or international trade, where they are linked with the export and import flows of a given country respectively.[12] Possibilities for the development of two-dimensional search engines based on PageRank and CheiRank have been considered.[13] Directed networks can be characterized by the correlator between the PageRank and CheiRank vectors: in certain networks this correlator is close to zero (e.g. the Linux kernel network), while other networks have large correlator values (e.g. Wikipedia or university networks).[7][13] A simple example of the construction of the Google matrices G and G*, used for the determination of the related PageRank and CheiRank vectors, is given below. The directed network example with 7 nodes is shown in Fig. 4. The matrix S, built with the rules described in the article Google matrix, is shown in Fig. 5; the related Google matrix is G = αS + (1−α)eeᵀ/N and the PageRank vector is the right eigenvector of G with unit eigenvalue (GP = P). In a similar way, to determine the CheiRank eigenvector, all directions of links in Fig. 4 are inverted, and the matrix S* is then built according to the same rules applied to the network with inverted link directions, as shown in Fig. 6. The related Google matrix is G* = αS* + (1−α)eeᵀ/N and the CheiRank vector is the right eigenvector of G* with unit eigenvalue (G*P* = P*). Here α ≈ 0.85 is the damping factor taken at its usual value.
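The construction above can be sketched numerically. The following is a minimal Python sketch that builds G and G* and extracts PageRank and CheiRank by power iteration; it uses a hypothetical 4-node directed graph rather than the 7-node network of Fig. 4, whose link structure is only given in the figure.

```python
# Sketch: PageRank and CheiRank by power iteration. The adjacency below
# is a hypothetical 4-node example, not the network of Fig. 4.

ALPHA = 0.85  # damping factor at its usual value

def google_matrix(links, n, alpha=ALPHA):
    """Column-stochastic Google matrix G = alpha*S + (1 - alpha)/n."""
    cols = [[0.0] * n for _ in range(n)]  # cols[j][i] = S_ij
    for j in range(n):
        targets = links.get(j, [])
        if targets:                       # spread weight over out-links
            for i in targets:
                cols[j][i] += 1.0 / len(targets)
        else:                             # dangling node: uniform column
            for i in range(n):
                cols[j][i] = 1.0 / n
    return [[alpha * cols[j][i] + (1 - alpha) / n for j in range(n)]
            for i in range(n)]            # G[i][j]

def rank_vector(G, iters=200):
    """Leading eigenvector of G (eigenvalue 1) by power iteration."""
    n = len(G)
    p = [1.0 / n] * n
    for _ in range(iters):
        p = [sum(G[i][j] * p[j] for j in range(n)) for i in range(n)]
    return p

links = {0: [1, 2], 1: [2], 2: [0], 3: [2]}   # hypothetical example
inverted = {}
for j, ts in links.items():
    for i in ts:
        inverted.setdefault(i, []).append(j)  # invert link directions

n = 4
pagerank = rank_vector(google_matrix(links, n))      # P: favors in-links
cheirank = rank_vector(google_matrix(inverted, n))   # P*: favors out-links
print(pagerank, cheirank)
```

In this example node 2, with the most ingoing links, receives the top PageRank, illustrating how the two vectors rank the same nodes by different criteria.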
https://en.wikipedia.org/wiki/CheiRank
In computer science, coinduction is a technique for defining and proving properties of systems of concurrent interacting objects. Coinduction is the mathematical dual to structural induction.[citation needed] Coinductively defined data types are known as codata and are typically infinite data structures, such as streams. As a definition or specification, coinduction describes how an object may be "observed", "broken down" or "destructed" into simpler objects. As a proof technique, it may be used to show that an equation is satisfied by all possible implementations of such a specification. To generate and manipulate codata, one typically uses corecursive functions, in conjunction with lazy evaluation. Informally, rather than defining a function by pattern-matching on each of the inductive constructors, one defines each of the "destructors" or "observers" over the function result. In programming, co-logic programming (co-LP for brevity) "is a natural generalization of logic programming and coinductive logic programming, which in turn generalizes other extensions of logic programming, such as infinite trees, lazy predicates, and concurrent communicating predicates. Co-LP has applications to rational trees, verifying infinitary properties, lazy evaluation, concurrent logic programming, model checking, bisimilarity proofs, etc."[1] Experimental implementations of co-LP are available from the University of Texas at Dallas[2] and in the languages Logtalk (for examples see [3]) and SWI-Prolog. In his book Types and Programming Languages,[4] Benjamin C. Pierce gives a concise statement of both the principle of induction and the principle of coinduction. While this article is not primarily concerned with induction, it is useful to consider their somewhat generalized forms at once. In order to state the principles, a few preliminaries are required.
Let U be a set and F : 2^U → 2^U be a monotone function, that is, X ⊆ Y ⇒ F(X) ⊆ F(Y). Unless otherwise stated, F will be assumed to be monotone. A set X is called F-closed if F(X) ⊆ X, and F-consistent if X ⊆ F(X). These terms can be intuitively understood in the following way. Suppose that X is a set of assertions, and F(X) is the operation that yields the consequences of X. Then X is F-closed when one cannot conclude any more than has already been asserted, while X is F-consistent when all of the assertions are supported by other assertions (i.e. there are no "non-F-logical assumptions"). The Knaster–Tarski theorem tells us that the least fixed point of F (denoted μF) is given by the intersection of all F-closed sets, while the greatest fixed point (denoted νF) is given by the union of all F-consistent sets. We can now state the principles of induction and coinduction: if X is F-closed, then μF ⊆ X (induction); if X is F-consistent, then X ⊆ νF (coinduction). The principles, as stated, are somewhat opaque, but can be usefully thought of in the following way. Suppose you wish to prove a property of μF. By the principle of induction, it suffices to exhibit an F-closed set X for which the property holds. Dually, suppose you wish to show that x ∈ νF. Then it suffices to exhibit an F-consistent set that x is known to be a member of. Consider the following grammar of datatypes: T = ⊥ | ⊤ | T × T. That is, the set of types includes the "bottom type" ⊥, the "top type" ⊤, and (non-homogeneous) lists. These types can be identified with strings over the alphabet Σ = {⊥, ⊤, ×}. Let Σ^{≤ω} denote the set of all (possibly infinite) strings over Σ.
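On a finite universe the least and greatest fixed points can be computed directly by iteration, which makes the Knaster–Tarski picture concrete. A minimal Python sketch, with a hypothetical monotone operator F chosen for illustration:

```python
# Sketch: least and greatest fixed points of a monotone F : 2^U -> 2^U
# on a small finite universe. On a finite lattice, iterating F from the
# empty set reaches mu F, and iterating from U reaches nu F (a finite
# special case of the Knaster-Tarski theorem).

U = frozenset(range(6))

def F(X):
    # Hypothetical monotone operator: 0 is always derivable, successors
    # up to 4 of derivable elements are derivable, and 5 supports itself.
    return frozenset({0}
                     | {x + 1 for x in X if x + 1 <= 4}
                     | ({5} if 5 in X else set()))

def lfp(F):
    X = frozenset()          # start below every fixed point
    while F(X) != X:
        X = F(X)
    return X

def gfp(F, U):
    X = U                    # start above every fixed point
    while F(X) != X:
        X = F(X)
    return X

mu, nu = lfp(F), gfp(F, U)
print(sorted(mu), sorted(nu))
```

Here μF = {0, 1, 2, 3, 4} contains only the elements derivable "from the ground up", while νF additionally keeps the self-supporting element 5: it is F-consistent without being reachable, mirroring the induction/coinduction contrast.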
Consider the function F : 2^{Σ^{≤ω}} → 2^{Σ^{≤ω}} given by F(X) = {⊥, ⊤} ∪ {x×y : x, y ∈ X}. In this context, x×y means "the concatenation of string x, the symbol ×, and string y." We should now define our set of datatypes as a fixpoint of F, but it matters whether we take the least or the greatest fixpoint. Suppose we take μF as our set of datatypes. Using the principle of induction, we can prove the following claim: every element of μF is a finite string. To arrive at this conclusion, consider the set of all finite strings over Σ. Clearly F cannot produce an infinite string, so this set is F-closed and the conclusion follows. Now suppose that we take νF as our set of datatypes. We would like to use the principle of coinduction to prove the following claim: ⊥×⊥×⋯ ∈ νF. Here ⊥×⊥×⋯ denotes the infinite list consisting of all ⊥. To use the principle of coinduction, consider the set {⊥×⊥×⋯}. This set turns out to be F-consistent, and therefore ⊥×⊥×⋯ ∈ νF. This depends on the suspicious statement that ⊥×⊥×⋯ = (⊥×⊥×⋯)×(⊥×⊥×⋯). The formal justification of this is technical and depends on interpreting strings as sequences, i.e. functions ℕ → Σ. Intuitively, the argument is similar to the argument that 0.0̄1 = 0 (see Repeating decimal).
Consider the following definition of a stream:[5] a stream over A consists of a head in A together with a tail which is again a stream over A. This would seem to be a definition that is not well-founded, but it is nonetheless useful in programming and can be reasoned about. In any case, a stream is an infinite list of elements from which you may observe the first element, or in front of which you may place an element to get another stream. Consider the endofunctor F in the category of sets given by F(x) = A × x on objects and F(f) = id_A × f on morphisms. The final F-coalgebra νF has an associated morphism out : νF → F(νF) = A × νF. This induces another coalgebra F(νF) with associated morphism F(out). Because νF is final, there is a unique morphism F(out)¯ : F(νF) → νF such that out ∘ F(out)¯ = F(F(out)¯) ∘ F(out) = F(F(out)¯ ∘ out). The composition F(out)¯ ∘ out induces another F-coalgebra homomorphism νF → νF. Since νF is final, this homomorphism is unique and is therefore id_νF.
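The head/tail view of streams can be imitated with explicit thunks in an eager language. Below is a Python sketch of corecursively defined streams under (simulated) lazy evaluation; the names Stream, cons, from_n and take are illustrative, not from the article:

```python
# Sketch of codata in Python: an infinite stream over A as a head plus
# a delayed tail, manipulated corecursively. The tail thunk is only
# forced on demand, simulating lazy evaluation.

class Stream:
    """An infinite stream: observe hd, or cons an element in front."""
    def __init__(self, hd, tail_thunk):
        self.hd = hd
        self._tail_thunk = tail_thunk    # delayed computation of the tail

    @property
    def tl(self):
        return self._tail_thunk()        # forced only when observed

def cons(x, s):
    """Place an element in front of a stream, yielding another stream."""
    return Stream(x, lambda: s)

def from_n(n):
    # Corecursive definition: specified by its observations hd and tl,
    # not by recursion over inductive constructors.
    return Stream(n, lambda: from_n(n + 1))

def take(s, k):
    """Observe the first k elements of a stream."""
    out = []
    for _ in range(k):
        out.append(s.hd)
        s = s.tl
    return out

nats = from_n(0)
print(take(nats, 5))             # [0, 1, 2, 3, 4]
print(take(cons(-1, nats), 3))   # [-1, 0, 1]
```

Note that from_n would loop forever under eager evaluation; wrapping the tail in a thunk is exactly what makes this non-well-founded definition usable.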
Altogether we have: F(out)¯ ∘ out = id_νF and out ∘ F(out)¯ = F(F(out)¯ ∘ out) = id_F(νF). This witnesses the isomorphism νF ≃ F(νF), which in categorical terms indicates that νF is a fixed point of F and justifies the notation.[6][verification needed] We will show that Stream A is the final coalgebra of the functor F(x) = A × x. Consider the following implementations: these are easily seen to be mutually inverse, witnessing the isomorphism. See the reference for more details. We will demonstrate how the principle of induction subsumes mathematical induction. Let P be some property of natural numbers. We will take the following definition of mathematical induction: 0 ∈ P ∧ (n ∈ P ⇒ n + 1 ∈ P) ⇒ P = ℕ. Now consider the function F : 2^ℕ → 2^ℕ given by F(X) = {0} ∪ {x + 1 : x ∈ X}. It should not be difficult to see that μF = ℕ. Therefore, by the principle of induction, if we wish to prove some property P of ℕ, it suffices to show that P is F-closed. In detail, we require F(P) ⊆ P, that is, {0} ∪ {x + 1 : x ∈ P} ⊆ P. This is precisely mathematical induction as stated.
https://en.wikipedia.org/wiki/Codata_(computer_science)
A web application (or web app) is application software that is created with web technologies and runs via a web browser.[1][2] Web applications emerged during the late 1990s and allowed the server to dynamically build a response to a request, in contrast to static web pages.[3] Web applications are commonly distributed via a web server. There are several different tier systems that web applications use to communicate between the web browser, the client interface, and the server data. Each system has its own uses, as they function in different ways. However, there are many security risks that developers must be aware of during development; proper measures to protect user data are vital. Web applications are often constructed with the use of a web application framework. Single-page applications (SPAs) and progressive web apps (PWAs) are two architectural approaches to creating web applications that provide a user experience similar to native apps, including features such as smooth navigation, offline support, and faster interactions. The concept of a "web application" was first introduced in the Java language in the Servlet Specification version 2.2, which was released in 1999. At that time, both JavaScript and XML had already been developed, but the XMLHttpRequest object had only recently been introduced, on Internet Explorer 5 as an ActiveX object.[citation needed] Beginning around the early 2000s, applications such as "Myspace (2003), Gmail (2004), Digg (2004), [and] Google Maps (2005)" started to make their client sides more and more interactive. A web page script is able to contact the server for storing and retrieving data without downloading an entire web page. The practice became known as Ajax in 2005. In earlier computing models like client-server, the processing load for the application was shared between code on the server and code installed on each client locally.
In other words, an application had its own pre-compiled client program which served as its user interface and had to be separately installed on each user's personal computer. An upgrade to the server-side code of the application would typically also require an upgrade to the client-side code installed on each user workstation, adding to the support cost and decreasing productivity. Additionally, both the client and server components of the application were bound tightly to a particular computer architecture and operating system, which made porting them to other systems prohibitively expensive for all but the largest applications. Later, in 1995, Netscape introduced the client-side scripting language called JavaScript, which allowed programmers to add dynamic elements to the user interface that ran on the client side. Essentially, instead of sending data to the server in order to generate an entire web page, the embedded scripts of the downloaded page can perform various tasks such as input validation or showing/hiding parts of the page. "Progressive web apps", a term coined by designer Frances Berriman and Google Chrome engineer Alex Russell in 2015, refers to apps that take advantage of new features supported by modern browsers, which initially run inside a web browser tab but can later run completely offline and be launched without entering the app URL in the browser. Traditional PC applications are typically single-tiered, residing solely on the client machine. In contrast, web applications inherently facilitate a multi-tiered architecture. Though many variations are possible, the most common structure is the three-tiered application. In its most common form, the three tiers are called presentation, application and storage. The first tier, presentation, refers to the web browser itself. The second tier refers to any engine using dynamic web content technology (such as ASP, CGI, ColdFusion, Dart, JSP/Java, Node.js, PHP, Python or Ruby on Rails).
The third tier refers to a database that stores data and determines the structure of a user interface. Essentially, when using the three-tiered system, the web browser sends requests to the engine, which services them by making queries and updates against the database and generating a user interface. The 3-tier solution may fall short when dealing with more complex applications and may need to be replaced with the n-tiered approach, the greatest benefit of which is that the business logic (which resides on the application tier) is broken down into a more fine-grained model.[4] Another benefit is the addition of an integration tier, which separates the data tier from the rest and provides an easy-to-use interface to access the data.[4] For example, client data would be accessed by calling a "list_clients()" function instead of making an SQL query directly against the client table in the database. This allows the underlying database to be replaced without making any change to the other tiers.[4] There are some who view a web application as a two-tier architecture. This can be a "smart" client that performs all the work and queries a "dumb" server, or a "dumb" client that relies on a "smart" server.[4] The client would handle the presentation tier, the server would have the database (storage tier), and the business logic (application tier) would be on one of them or on both.[4] While this increases the scalability of the applications and separates the display and the database, it still does not allow for true specialization of layers, so most applications will outgrow this model.[4] Security breaches in these kinds of applications are a major concern because they can involve both enterprise information and private customer data.
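The list_clients() idea can be sketched as a thin integration tier. Below is a minimal Python example using the stdlib sqlite3 module; the table and column names are hypothetical:

```python
# Sketch of an integration tier: the application tier calls
# list_clients() instead of issuing SQL directly, so the storage
# backend can be swapped without touching the other tiers.
# Table and column names here are hypothetical.

import sqlite3

class ClientRepository:
    """Integration tier: the only layer that knows SQL."""
    def __init__(self, conn):
        self.conn = conn

    def list_clients(self):
        rows = self.conn.execute("SELECT id, name FROM client ORDER BY id")
        return [{"id": r[0], "name": r[1]} for r in rows]

# In-memory database standing in for the storage tier.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE client (id INTEGER PRIMARY KEY, name TEXT)")
conn.executemany("INSERT INTO client VALUES (?, ?)",
                 [(1, "Acme"), (2, "Globex")])

repo = ClientRepository(conn)
print(repo.list_clients())
```

Replacing sqlite3 with another backend only requires a new repository class with the same list_clients() signature; the tiers above it are unchanged, which is exactly the decoupling described in the text.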
Protecting these assets is an important part of any web application, and there are some key operational areas that must be included in the development process.[5] This includes processes for authentication, authorization, asset handling, input, and logging and auditing. Building security into applications from the beginning is sometimes more effective and less disruptive in the long run. Writing web applications is simplified with the use of web application frameworks. These frameworks facilitate rapid application development by allowing a development team to focus on the parts of their application which are unique to their goals without having to resolve common development issues such as user management.[6] In addition, there is potential for the development of applications on Internet operating systems, although currently there are not many viable platforms that fit this model.[citation needed]
https://en.wikipedia.org/wiki/Web_application
In numerical optimization, the Broyden–Fletcher–Goldfarb–Shanno (BFGS) algorithm is an iterative method for solving unconstrained nonlinear optimization problems.[1] Like the related Davidon–Fletcher–Powell method, BFGS determines the descent direction by preconditioning the gradient with curvature information. It does so by gradually improving an approximation to the Hessian matrix of the loss function, obtained only from gradient evaluations (or approximate gradient evaluations) via a generalized secant method.[2] Since the updates of the BFGS curvature matrix do not require matrix inversion, its computational complexity is only O(n²), compared to O(n³) in Newton's method. Also in common use is L-BFGS, a limited-memory version of BFGS that is particularly suited to problems with very large numbers of variables (e.g., >1000). The BFGS-B variant handles simple box constraints.[3] The BFGS matrix also admits a compact representation, which makes it better suited for large constrained problems. The algorithm is named after Charles George Broyden, Roger Fletcher, Donald Goldfarb and David Shanno.[4][5][6][7] The optimization problem is to minimize f(x), where x is a vector in ℝⁿ and f is a differentiable scalar function. There are no constraints on the values that x can take. The algorithm begins at an initial estimate x₀ for the optimal value and proceeds iteratively to obtain a better estimate at each stage. The search direction p_k at stage k is given by the solution of the analogue of the Newton equation B_k p_k = −∇f(x_k), where B_k is an approximation to the Hessian matrix at x_k, which is updated iteratively at each stage, and ∇f(x_k) is the gradient of the function evaluated at x_k.
A line search in the direction p_k is then used to find the next point x_{k+1} by minimizing f(x_k + γp_k) over the scalar γ > 0. The quasi-Newton condition imposed on the update of B_k is B_{k+1}(x_{k+1} − x_k) = ∇f(x_{k+1}) − ∇f(x_k). Let y_k = ∇f(x_{k+1}) − ∇f(x_k) and s_k = x_{k+1} − x_k; then B_{k+1} satisfies B_{k+1} s_k = y_k, which is the secant equation. The curvature condition s_kᵀy_k > 0 should be satisfied for B_{k+1} to be positive definite, which can be verified by pre-multiplying the secant equation with s_kᵀ. If the function is not strongly convex, then the condition has to be enforced explicitly, e.g. by finding a point x_{k+1} satisfying the Wolfe conditions, which entail the curvature condition, using line search. Instead of requiring the full Hessian matrix at the point x_{k+1} to be computed as B_{k+1}, the approximate Hessian at stage k is updated by the addition of two matrices: B_{k+1} = B_k + U_k + V_k. Both U_k and V_k are symmetric rank-one matrices, but their sum is a rank-two update matrix. The BFGS and DFP updating matrices both differ from their predecessor by a rank-two matrix. Another, simpler rank-one method is known as the symmetric rank-one method, which does not guarantee positive definiteness. In order to maintain the symmetry and positive definiteness of B_{k+1}, the update form can be chosen as B_{k+1} = B_k + αuuᵀ + βvvᵀ. Imposing the secant condition then gives B_{k+1} s_k = y_k.
Choosing u = y_k and v = B_k s_k, we obtain:[8] α = 1/(y_kᵀs_k) and β = −1/(s_kᵀB_k s_k). Finally, we substitute α and β into B_{k+1} = B_k + αuuᵀ + βvvᵀ and get the update equation of B_{k+1}: B_{k+1} = B_k + (y_k y_kᵀ)/(y_kᵀs_k) − (B_k s_k s_kᵀB_k)/(s_kᵀB_k s_k). Consider the unconstrained optimization problem of minimizing f(x) over x ∈ ℝⁿ, where f : ℝⁿ → ℝ is a nonlinear objective function. From an initial guess x₀ ∈ ℝⁿ and an initial guess of the Hessian matrix B₀ ∈ ℝⁿˣⁿ the following steps are repeated as x_k converges to the solution: 1. obtain a direction p_k by solving B_k p_k = −∇f(x_k); 2. perform a one-dimensional line search to find an acceptable step size γ_k in that direction; 3. set s_k = γ_k p_k and update x_{k+1} = x_k + s_k; 4. set y_k = ∇f(x_{k+1}) − ∇f(x_k); 5. update B_{k+1} = B_k + (y_k y_kᵀ)/(y_kᵀs_k) − (B_k s_k s_kᵀB_k)/(s_kᵀB_k s_k). Convergence can be determined by observing the norm of the gradient; given some ϵ > 0, one may stop the algorithm when ||∇f(x_k)|| ≤ ϵ. If B₀ is initialized with B₀ = I, the first step will be equivalent to a gradient descent, but further steps are more and more refined by B_k, the approximation to the Hessian.
The first step of the algorithm is carried out using the inverse of the matrix B_k, which can be obtained efficiently by applying the Sherman–Morrison formula to step 5 of the algorithm, giving B_{k+1}^{-1} = (I − s_k y_kᵀ/(y_kᵀs_k)) B_k^{-1} (I − y_k s_kᵀ/(y_kᵀs_k)) + (s_k s_kᵀ)/(y_kᵀs_k). This can be computed efficiently without temporary matrices by recognizing that B_k^{-1} is symmetric and that y_kᵀB_k^{-1}y_k and s_kᵀy_k are scalars, using an expansion such as B_{k+1}^{-1} = B_k^{-1} + (s_kᵀy_k + y_kᵀB_k^{-1}y_k)(s_k s_kᵀ)/(s_kᵀy_k)² − (B_k^{-1}y_k s_kᵀ + s_k y_kᵀB_k^{-1})/(s_kᵀy_k). Therefore, in order to avoid any matrix inversion, the inverse of the Hessian can be approximated instead of the Hessian itself: H_k := B_k^{-1}.[9] From an initial guess x₀ and an approximate inverted Hessian matrix H₀ the same steps are repeated as x_k converges to the solution, with the direction obtained directly as p_k = −H_k ∇f(x_k) and the update applied to H_k. In statistical estimation problems (such as maximum likelihood or Bayesian inference), credible intervals or confidence intervals for the solution can be estimated from the inverse of the final Hessian matrix[citation needed]. However, these quantities are technically defined by the true Hessian matrix, and the BFGS approximation may not converge to the true Hessian matrix.[10] The BFGS update formula relies heavily on the curvature s_kᵀy_k being strictly positive and bounded away from zero. This condition is satisfied when we perform a line search with the Wolfe conditions on a convex target. However, some real-life applications (like sequential quadratic programming methods) routinely produce negative or nearly-zero curvatures. This can occur when optimizing a nonconvex target or when employing a trust-region approach instead of a line search. It is also possible to produce spurious values due to noise in the target.
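The inverse-Hessian form can be sketched compactly. Below is a minimal Python example on a simple quadratic; the exact line search uses one extra gradient evaluation and is only valid because the objective is quadratic, so it stands in for the Wolfe-condition line search a general implementation would need:

```python
# Sketch: BFGS in the inverse-Hessian form H_k, minimizing the
# quadratic f(x, y) = (x - 1)^2 + 2*(y + 3)^2 with minimizer (1, -3).

def grad(x):
    return [2 * (x[0] - 1), 4 * (x[1] + 3)]

def dot(a, b):
    return sum(ai * bi for ai, bi in zip(a, b))

def bfgs(x, iters=10, tol=1e-10):
    n = len(x)
    H = [[float(i == j) for j in range(n)] for i in range(n)]  # H0 = I
    g = grad(x)
    for _ in range(iters):
        if max(abs(gi) for gi in g) < tol:   # gradient small: converged
            break
        p = [-sum(H[i][j] * g[j] for j in range(n)) for i in range(n)]
        # Exact step for a quadratic: gamma = -g.p / p.Bp, where B p is
        # recovered from a gradient difference (valid only here).
        Bp = [g2 - g1 for g2, g1 in
              zip(grad([xi + pi for xi, pi in zip(x, p)]), g)]
        gamma = -dot(g, p) / dot(p, Bp)
        s = [gamma * pi for pi in p]
        x_new = [xi + si for xi, si in zip(x, s)]
        g_new = grad(x_new)
        y = [gn - gi for gn, gi in zip(g_new, g)]
        sy = dot(s, y)                       # curvature s.y > 0 here
        # H_{k+1} = (I - s y^T/sy) H_k (I - y s^T/sy) + s s^T/sy
        A = [[(i == j) - s[i] * y[j] / sy for j in range(n)]
             for i in range(n)]
        HA = [[sum(H[i][k] * A[j][k] for k in range(n))   # H A^T
               for j in range(n)] for i in range(n)]
        H = [[sum(A[i][k] * HA[k][j] for k in range(n)) + s[i] * s[j] / sy
              for j in range(n)] for i in range(n)]
        x, g = x_new, g_new
    return x

print(bfgs([0.0, 0.0]))
```

On a quadratic with exact line search, BFGS terminates in at most n iterations, so the loop exits via the gradient test almost immediately here.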
In such cases, one of the so-called damped BFGS updates can be used (see [11]), which modify s_k and/or y_k in order to obtain a more robust update. Notable open-source implementations are: Notable proprietary implementations include:
https://en.wikipedia.org/wiki/BFGS_method
Closeness is a basic concept in topology and related areas in mathematics. Intuitively, we say two sets are close if they are arbitrarily near to each other. The concept can be defined naturally in a metric space, where a notion of distance between elements of the space is defined, but it can be generalized to topological spaces where we have no concrete way to measure distances. The closure operator closes a given set by mapping it to a closed set which contains the original set and all points close to it. The concept of closeness is related to that of limit point. Given a metric space (X, d), a point p is called close or near to a set A if d(p, A) = 0, where the distance between a point and a set is defined as d(p, A) = inf{d(p, a) : a ∈ A}, with inf standing for infimum. Similarly, a set B is called close to a set A if d(B, A) = 0, where d(B, A) = inf{d(a, b) : a ∈ A, b ∈ B}. Let V be some set. A relation between the points of V and the subsets of V is a closeness relation if it satisfies the following conditions. Let A and B be two subsets of V and p a point in V.[1] Topological spaces have a closeness relationship built into them: defining a point p to be close to a subset A if and only if p is in the closure of A satisfies the above conditions. Likewise, given a set with a closeness relation, defining a point p to be in the closure of a subset A if and only if p is close to A satisfies the Kuratowski closure axioms. Thus, defining a closeness relation on a set is exactly equivalent to defining a topology on that set. Let A, B and C be sets. The closeness relation between a set and a point can be generalized to any topological space.
Given a topological space and a point p, p is called close to a set A if p ∈ cl(A) = A̅. To define a closeness relation between two sets the topological structure is too weak, and we have to use a uniform structure. Given a uniform space, sets A and B are called close to each other if they intersect all entourages; that is, for any entourage U, (A×B) ∩ U is non-empty.
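In a concrete metric space the point-set distance can be approximated on a finite sample of the set. A small Python sketch for (ℝ, |·|), with a hypothetical sampled subset of the real line:

```python
# Sketch: closeness in the metric space (R, |.|). The infimum over the
# set A is approximated by a minimum over a finite sample, so a small
# tolerance stands in for "distance equals zero".

def dist_point_set(p, A):
    """d(p, A) = inf{d(p, a) : a in A}, for a finite sample A."""
    return min(abs(p - a) for a in A)

def is_close(p, A, tol=1e-3):
    return dist_point_set(p, A) < tol

# Finite sample of the open interval (0, 1), a hypothetical example.
A = [k / 10**4 for k in range(1, 10**4)]

print(is_close(0.0, A))    # True: 0 lies in the closure of (0, 1)
print(is_close(-0.5, A))   # False: d(-0.5, A) is about 0.5
```

The point 0 is not in (0, 1) yet is close to it, which is exactly the distinction between a set and its closure.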
https://en.wikipedia.org/wiki/Closeness_(mathematics)
In probability theory and information theory, the mutual information (MI) of two random variables is a measure of the mutual dependence between the two variables. More specifically, it quantifies the "amount of information" (in units such as shannons (bits), nats or hartleys) obtained about one random variable by observing the other random variable. The concept of mutual information is intimately linked to that of the entropy of a random variable, a fundamental notion in information theory that quantifies the expected "amount of information" held in a random variable. Not limited to real-valued random variables and linear dependence like the correlation coefficient, MI is more general and determines how different the joint distribution of the pair (X, Y) is from the product of the marginal distributions of X and Y. MI is the expected value of the pointwise mutual information (PMI). The quantity was defined and analyzed by Claude Shannon in his landmark paper "A Mathematical Theory of Communication", although he did not call it "mutual information". This term was coined later by Robert Fano.[2] Mutual information is also known as information gain. Let (X, Y) be a pair of random variables with values over the space 𝒳 × 𝒴. If their joint distribution is P_(X,Y) and the marginal distributions are P_X and P_Y, the mutual information is defined as I(X; Y) = D_KL(P_(X,Y) ‖ P_X ⊗ P_Y), where D_KL is the Kullback–Leibler divergence, and P_X ⊗ P_Y is the outer product distribution which assigns probability P_X(x)·P_Y(y) to each (x, y).
Expressed in terms of the entropy H(·) and the conditional entropy H(·|·) of the random variables X and Y, one also has I(X; Y) = H(X) − H(X|Y) = H(Y) − H(Y|X) (see the relation to conditional and joint entropy below). Notice, per the properties of the Kullback–Leibler divergence, that I(X; Y) is equal to zero precisely when the joint distribution coincides with the product of the marginals, i.e. when X and Y are independent (and hence observing Y tells you nothing about X). I(X; Y) is non-negative; it is a measure of the price of encoding (X, Y) as a pair of independent random variables when in reality they are not. If the natural logarithm is used, the unit of mutual information is the nat. If log base 2 is used, the unit is the shannon, also known as the bit. If log base 10 is used, the unit is the hartley, also known as the ban or the dit. The mutual information of two jointly discrete random variables X and Y is calculated as a double sum:[3]: 20 I(X; Y) = Σ_{y∈𝒴} Σ_{x∈𝒳} P_(X,Y)(x, y) log[P_(X,Y)(x, y)/(P_X(x) P_Y(y))], where P_(X,Y) is the joint probability mass function of X and Y, and P_X and P_Y are the marginal probability mass functions of X and Y respectively. In the case of jointly continuous random variables, the double sum is replaced by a double integral:[3]: 251 I(X; Y) = ∫_𝒴 ∫_𝒳 P_(X,Y)(x, y) log[P_(X,Y)(x, y)/(P_X(x) P_Y(y))] dx dy, where P_(X,Y) is now the joint probability density function of X and Y, and P_X and P_Y are the marginal probability density functions of X and Y respectively.
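The discrete double sum translates directly into code. A minimal Python sketch for a small hypothetical joint distribution, computing MI in bits:

```python
# Sketch: mutual information of two discrete random variables via the
# double sum, using log base 2 so the result is in shannons (bits).

from math import log2

def mutual_information(joint):
    """joint[(x, y)] = P(X=x, Y=y); returns I(X;Y) in bits."""
    px, py = {}, {}
    for (x, y), p in joint.items():      # accumulate the marginals
        px[x] = px.get(x, 0.0) + p
        py[y] = py.get(y, 0.0) + p
    return sum(p * log2(p / (px[x] * py[y]))
               for (x, y), p in joint.items() if p > 0)

# Perfectly dependent pair: Y = X, each value with probability 1/2.
dependent = {(0, 0): 0.5, (1, 1): 0.5}
# Independent pair: all four combinations equally likely.
independent = {(x, y): 0.25 for x in (0, 1) for y in (0, 1)}

print(mutual_information(dependent))     # 1.0 bit (= H(X))
print(mutual_information(independent))   # 0.0
```

The two hypothetical distributions illustrate the extremes discussed in the text: full dependence gives I(X; Y) = H(X) = 1 bit, while independence gives zero.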
Intuitively, mutual information measures the information that X and Y share: it measures how much knowing one of these variables reduces uncertainty about the other. For example, if X and Y are independent, then knowing X does not give any information about Y and vice versa, so their mutual information is zero. At the other extreme, if X is a deterministic function of Y and Y is a deterministic function of X, then all information conveyed by X is shared with Y: knowing X determines the value of Y and vice versa. As a result, the mutual information is the same as the uncertainty contained in Y (or X) alone, namely the entropy of Y (or X). A very special case of this is when X and Y are the same random variable. Mutual information is a measure of the inherent dependence expressed in the joint distribution of X and Y relative to the product of the marginal distributions of X and Y (which represents the assumption of independence). Mutual information therefore measures dependence in the following sense: I(X; Y) = 0 if and only if X and Y are independent random variables. This is easy to see in one direction: if X and Y are independent, then p_(X,Y)(x, y) = p_X(x)·p_Y(y), and therefore every term of the sum vanishes:

log( p_(X,Y)(x, y) / (p_X(x) p_Y(y)) ) = log 1 = 0.

Moreover, mutual information is nonnegative (i.e. I(X; Y) ≥ 0; see below) and symmetric (i.e. I(X; Y) = I(Y; X); see below).
Using Jensen's inequality on the definition of mutual information we can show that I(X; Y) is non-negative, i.e. I(X; Y) ≥ 0.[3]: 28 The proof is given considering the relationship with entropy, as shown below. If C is independent of (A, B), then I(A; B | C) = I(A; B). Mutual information can be equivalently expressed as:

I(X; Y) = H(X) − H(X | Y)
        = H(Y) − H(Y | X)
        = H(X) + H(Y) − H(X, Y)
        = H(X, Y) − H(X | Y) − H(Y | X),

where H(X) and H(Y) are the marginal entropies, H(X | Y) and H(Y | X) are the conditional entropies, and H(X, Y) is the joint entropy of X and Y. Notice the analogy to the union, difference, and intersection of two sets: in this respect, all the formulas given above are apparent from the Venn diagram reported at the beginning of the article. In terms of a communication channel in which the output Y is a noisy version of the input X, these relations are summarised in the figure. Because I(X; Y) is non-negative, it follows that H(X) ≥ H(X | Y). Here we give the detailed deduction of I(X; Y) = H(Y) − H(Y | X) for the case of jointly discrete random variables:

I(X; Y) = Σ_{x,y} p(x, y) log( p(x, y) / (p(x) p(y)) )
        = Σ_{x,y} p(x, y) log( p(y | x) / p(y) )
        = − Σ_{x,y} p(x, y) log p(y) + Σ_{x,y} p(x, y) log p(y | x)
        = H(Y) − H(Y | X).

The proofs of the other identities above are similar. The proof of the general case (not just discrete) is similar, with integrals replacing sums. Intuitively, if entropy H(Y) is regarded as a measure of uncertainty about a random variable, then H(Y | X) is a measure of what X does not say about Y.
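The entropy identities can be checked numerically on a toy distribution. A minimal sketch, assuming a particular correlated joint pmf over two bits (the distribution is chosen here for illustration):

```python
import math

def entropy(pmf):
    """Shannon entropy in bits of a pmf given as a dict of probabilities."""
    return -sum(p * math.log2(p) for p in pmf.values() if p > 0)

# A correlated joint distribution over {0,1} x {0,1}
joint = {(0, 0): 0.4, (0, 1): 0.1, (1, 0): 0.1, (1, 1): 0.4}
px = {0: 0.5, 1: 0.5}   # marginal of X
py = {0: 0.5, 1: 0.5}   # marginal of Y

# I(X;Y) = H(X) + H(Y) - H(X,Y)
mi = entropy(px) + entropy(py) - entropy(joint)

# Same value via I(X;Y) = H(Y) - H(Y|X), using H(Y|X) = H(X,Y) - H(X)
h_y_given_x = entropy(joint) - entropy(px)
assert abs(mi - (entropy(py) - h_y_given_x)) < 1e-12
print(round(mi, 4))   # about 0.278 bits for this joint distribution
```

The two expressions agree because both reduce to the double-sum definition, as in the deduction above.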
This is "the amount of uncertainty remaining about Y after X is known", and thus the right side of the second of these equalities can be read as "the amount of uncertainty in Y, minus the amount of uncertainty in Y which remains after X is known", which is equivalent to "the amount of uncertainty in Y which is removed by knowing X". This corroborates the intuitive meaning of mutual information as the amount of information (that is, reduction in uncertainty) that knowing either variable provides about the other. Note that in the discrete case H(Y | Y) = 0 and therefore H(Y) = I(Y; Y). Thus I(Y; Y) ≥ I(X; Y), and one can formulate the basic principle that a variable contains at least as much information about itself as any other variable can provide. For jointly discrete or jointly continuous pairs (X, Y), mutual information is the Kullback–Leibler divergence from the product of the marginal distributions, p_X · p_Y, of the joint distribution p_(X,Y), that is,

I(X; Y) = D_KL( p_(X,Y) ‖ p_X · p_Y ).

Furthermore, let p_(X,Y)(x, y) = p_{X|Y=y}(x) · p_Y(y) be the conditional mass or density function. Then we have the identity

I(X; Y) = E_Y[ D_KL( p_{X|Y} ‖ p_X ) ].

The proof for jointly discrete random variables is as follows:

I(X; Y) = Σ_y p_Y(y) Σ_x p_{X|Y=y}(x) log( p_{X|Y=y}(x) / p_X(x) )
        = Σ_y p_Y(y) · D_KL( p_{X|Y=y} ‖ p_X )
        = E_Y[ D_KL( p_{X|Y} ‖ p_X ) ].

Similarly this identity can be established for jointly continuous random variables. Note that here the Kullback–Leibler divergence involves integration over the values of the random variable X only, and the expression D_KL( p_{X|Y} ‖ p_X ) still denotes a random variable because Y is random.
Thus mutual information can also be understood as the expectation of the Kullback–Leibler divergence of the univariate distribution p_X of X from the conditional distribution p_{X|Y} of X given Y: the more different the distributions p_{X|Y} and p_X are on average, the greater the information gain. If samples from a joint distribution are available, a Bayesian approach can be used to estimate the mutual information of that distribution. The first work to do this, which also showed how to do Bayesian estimation of many other information-theoretic properties besides mutual information, was [5]. Subsequent researchers have rederived[6] and extended[7] this analysis. See [8] for a recent paper based on a prior specifically tailored to the estimation of mutual information per se. More recently, an estimation method accounting for continuous and multivariate outputs Y was proposed in [9]. The Kullback–Leibler divergence formulation of the mutual information is predicated on the assumption that one is interested in comparing p(x, y) to the fully factorized outer product p(x)·p(y). In many problems, such as non-negative matrix factorization, one is interested in less extreme factorizations; specifically, one wishes to compare p(x, y) to a low-rank matrix approximation in some unknown variable w; that is, to ask to what degree one might have

p(x, y) ≈ Σ_w p′(x, w) p″(w, y).

Alternatively, one might be interested in knowing how much more information p(x, y) carries over its factorization.
In such a case, the excess information that the full distribution p(x, y) carries over the matrix factorization is given by the Kullback–Leibler divergence between p(x, y) and its low-rank approximation. The conventional definition of the mutual information is recovered in the extreme case that the process W has only one value for w. Several variations on mutual information have been proposed to suit various needs. Among these are normalized variants and generalizations to more than two variables. Many applications require a metric, that is, a distance measure between pairs of points. The quantity

d(X, Y) = H(X, Y) − I(X; Y) = H(X | Y) + H(Y | X)

satisfies the properties of a metric (triangle inequality, non-negativity, indiscernibility and symmetry), where equality X = Y is understood to mean that X can be completely determined from Y.[10] This distance metric is also known as the variation of information. If X, Y are discrete random variables then all the entropy terms are non-negative, so 0 ≤ d(X, Y) ≤ H(X, Y) and one can define a normalized distance

D(X, Y) = d(X, Y) / H(X, Y).

Plugging in the definitions shows that

D(X, Y) = 1 − I(X; Y) / H(X, Y).

This is known as the Rajski distance.[11] In a set-theoretic interpretation of information (see the figure for conditional entropy), this is effectively the Jaccard distance between X and Y. Finally,

D′(X, Y) = 1 − I(X; Y) / max{ H(X), H(Y) }

is also a metric. Sometimes it is useful to express the mutual information of two random variables conditioned on a third:

I(X; Y | Z) = E_Z[ D_KL( P_{(X,Y)|Z} ‖ P_{X|Z} ⊗ P_{Y|Z} ) ].

For jointly discrete random variables this expectation takes the form of a sum over the values of Z (with inner sums over x and y); for jointly continuous random variables the sums are replaced by integrals. Conditioning on a third random variable may either increase or decrease the mutual information, but it is always true that

I(X; Y | Z) ≥ 0

for discrete, jointly distributed random variables X, Y, Z. This result has been used as a basic building block for proving other inequalities in information theory.
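The variation-of-information distance and its normalized form can be computed from the same entropy quantities. A minimal sketch, reusing an illustrative two-bit joint distribution (not from the article):

```python
import math

def entropy(pmf):
    """Shannon entropy in bits of a pmf given as a dict of probabilities."""
    return -sum(p * math.log2(p) for p in pmf.values() if p > 0)

joint = {(0, 0): 0.4, (0, 1): 0.1, (1, 0): 0.1, (1, 1): 0.4}
hx = entropy({0: 0.5, 1: 0.5})
hy = entropy({0: 0.5, 1: 0.5})
hxy = entropy(joint)
mi = hx + hy - hxy

d = hxy - mi          # variation of information: H(X,Y) - I(X;Y)
D = d / hxy           # normalized (Rajski) distance, lies in [0, 1]

# Sanity checks against the bounds stated above
assert 0 <= d <= hxy
assert 0 <= D <= 1
assert abs(D - (1 - mi / hxy)) < 1e-12   # the two forms agree
```

For identical variables d would be 0; for independent variables d equals H(X,Y) and D equals 1.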
Several generalizations of mutual information to more than two random variables have been proposed, such as total correlation (or multi-information) and dual total correlation. The expression and study of multivariate higher-degree mutual information was achieved in two seemingly independent works: McGill (1954),[12] who called these functions "interaction information", and Hu Kuo Ting (1962).[13] Interaction information is defined for one variable as I(X_1) = H(X_1), and for n > 1 by the recursion

I(X_1; …; X_n) = I(X_1; …; X_{n−1}) − I(X_1; …; X_{n−1} | X_n).

Some authors reverse the order of the terms on the right-hand side of the preceding equation, which changes the sign when the number of random variables is odd (and in this case, the single-variable expression becomes the negative of the entropy). The multivariate mutual information functions generalize the pairwise independence case, which states that X_1, X_2 are independent if and only if I(X_1; X_2) = 0, to arbitrarily many variables: n variables are mutually independent if and only if the 2^n − n − 1 mutual information functions vanish, I(X_1; …; X_k) = 0 with n ≥ k ≥ 2 (theorem 2 of [14]). In this sense, the conditions I(X_1; …; X_k) = 0 can be used as a refined statistical independence criterion. For 3 variables, Brenner et al. applied multivariate mutual information to neural coding and called its negativity "synergy",[15] and Watkinson et al. applied it to genetic expression.[16] For arbitrary k variables, Tapia et al. applied multivariate mutual information to gene expression.[17][14] It can be zero, positive, or negative.[13] Positivity corresponds to relations generalizing the pairwise correlations, nullity corresponds to a refined notion of independence, and negativity detects high-dimensional "emergent" relations and clusterized datapoints.[17]
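The classic example of negative three-way interaction information ("synergy") is the XOR triple. The sketch below computes it via the inclusion–exclusion expansion over entropies; note that sign conventions differ between authors, so only the magnitude and the fact of negativity under this convention are the point here:

```python
import math

def H(pmf):
    """Shannon entropy in bits of a pmf given as a dict of probabilities."""
    return -sum(p * math.log2(p) for p in pmf.values() if p > 0)

# X, Y fair independent bits, Z = X XOR Y: pairwise independent, jointly not.
triples = {(x, y, x ^ y): 0.25 for x in (0, 1) for y in (0, 1)}

def marg(keep):
    """Marginal pmf over the coordinates listed in `keep`."""
    out = {}
    for t, p in triples.items():
        key = tuple(t[i] for i in keep)
        out[key] = out.get(key, 0.0) + p
    return out

# Interaction information via inclusion-exclusion over entropies
ii = (H(marg((0,))) + H(marg((1,))) + H(marg((2,)))
      - H(marg((0, 1))) - H(marg((0, 2))) - H(marg((1, 2)))
      + H(triples))
assert abs(ii + 1.0) < 1e-12   # -1 bit: every pairwise MI is 0, yet Z = X XOR Y
```

All three pairwise mutual informations vanish, so the negative value captures a purely three-way ("emergent") relation, matching the synergy interpretation above.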
One high-dimensional generalization scheme which maximizes the mutual information between the joint distribution and other target variables has been found to be useful in feature selection.[18] Mutual information is also used in the area of signal processing as a measure of similarity between two signals. For example, the FMI metric[19] is an image-fusion performance measure that uses mutual information to measure the amount of information that the fused image contains about the source images. The Matlab code for this metric can be found at [20]. A Python package for computing all multivariate mutual informations, conditional mutual information, joint entropies, total correlations, and information distance in a dataset of n variables is available.[21] Directed information, I(X^n → Y^n), measures the amount of information that flows from the process X^n to Y^n, where X^n denotes the vector X_1, X_2, …, X_n and Y^n denotes Y_1, Y_2, …, Y_n. The term directed information was coined by James Massey and is defined as

I(X^n → Y^n) = Σ_{i=1}^{n} I(X^i; Y_i | Y^{i−1}).

Note that if n = 1, the directed information becomes the mutual information. Directed information has many applications in problems where causality plays an important role, such as the capacity of channels with feedback.[22][23] Normalized variants of the mutual information are provided by the coefficients of constraint,[24] uncertainty coefficient[25] or proficiency:[26]

C_XY = I(X; Y) / H(Y)   and   C_YX = I(X; Y) / H(X).

The two coefficients have values ranging in [0, 1], but are not necessarily equal. This measure is not symmetric. If one desires a symmetric measure, one can consider the following redundancy measure:

R = I(X; Y) / (H(X) + H(Y)),

which attains a minimum of zero when the variables are independent and a maximum value of

R_max = min{ H(X), H(Y) } / (H(X) + H(Y))

when one variable becomes completely redundant with the knowledge of the other.
See also Redundancy (information theory). Another symmetrical measure is the symmetric uncertainty (Witten & Frank 2005), given by

U(X, Y) = 2R = 2 I(X; Y) / (H(X) + H(Y)),

which represents the harmonic mean of the two uncertainty coefficients C_XY and C_YX.[25] If we consider mutual information as a special case of the total correlation or dual total correlation, the normalized versions are, respectively,

I(X; Y) / min{ H(X), H(Y) }   and   I(X; Y) / H(X, Y).

The latter, also known as the Information Quality Ratio (IQR), quantifies the amount of information of a variable based on another variable against total uncertainty.[27] There is a normalization[28] which derives from first thinking of mutual information as an analogue to covariance (thus Shannon entropy is analogous to variance). Then the normalized mutual information is calculated akin to the Pearson correlation coefficient:

I(X; Y) / sqrt( H(X) · H(Y) ).

In the traditional formulation of the mutual information, each event or object specified by (x, y) is weighted by the corresponding probability p(x, y). This assumes that all objects or events are equivalent apart from their probability of occurrence. However, in some applications it may be the case that certain objects or events are more significant than others, or that certain patterns of association are more semantically important than others. For example, the deterministic mapping {(1,1), (2,2), (3,3)} may be viewed as stronger than the deterministic mapping {(1,3), (2,1), (3,2)}, although these relationships would yield the same mutual information. This is because the mutual information is not sensitive at all to any inherent ordering in the variable values (Cronbach 1954, Coombs, Dawes & Tversky 1970, Lockhead 1970), and is therefore not sensitive at all to the form of the relational mapping between the associated variables.
If it is desired that the former relation (showing agreement on all variable values) be judged stronger than the latter relation, then it is possible to use the following weighted mutual information (Guiasu 1977):

I_w(X; Y) = Σ_{x,y} w(x, y) p(x, y) log( p(x, y) / (p(x) p(y)) ),

which places a weight w(x, y) on the probability of each variable-value co-occurrence, p(x, y). This allows that certain probabilities may carry more or less significance than others, thereby allowing the quantification of relevant holistic or Prägnanz factors. In the above example, using larger relative weights for w(1,1), w(2,2), and w(3,3) would have the effect of assessing greater informativeness for the relation {(1,1), (2,2), (3,3)} than for the relation {(1,3), (2,1), (3,2)}, which may be desirable in some cases of pattern recognition and the like. This weighted mutual information is a form of weighted KL-divergence, which is known to take negative values for some inputs,[29] and there are examples where the weighted mutual information also takes negative values.[30] A probability distribution can be viewed as a partition of a set. One may then ask: if a set were partitioned randomly, what would the distribution of probabilities be? What would the expectation value of the mutual information be? The adjusted mutual information (AMI) subtracts the expectation value of the MI, so that the AMI is zero when two different distributions are random, and one when two distributions are identical. The AMI is defined in analogy to the adjusted Rand index of two different partitions of a set.
Using the ideas of Kolmogorov complexity, one can consider the mutual information of two sequences independent of any probability distribution:

I_K(X; Y) = K(X) − K(X | Y).

To establish that this quantity is symmetric up to a logarithmic factor (I_K(X; Y) ≈ I_K(Y; X)), one requires the chain rule for Kolmogorov complexity (Li & Vitányi 1997). Approximations of this quantity via compression can be used to define a distance measure to perform a hierarchical clustering of sequences without having any domain knowledge of the sequences (Cilibrasi & Vitányi 2005). Unlike correlation coefficients, such as the product-moment correlation coefficient, mutual information contains information about all dependence, linear and nonlinear, and not just the linear dependence that the correlation coefficient measures. However, in the narrow case that the joint distribution for X and Y is a bivariate normal distribution (implying in particular that both marginal distributions are normally distributed), there is an exact relationship between I and the correlation coefficient ρ (Gel'fand & Yaglom 1957):

I(X; Y) = −(1/2) log(1 − ρ²).

The equation above can be derived as follows for a bivariate Gaussian. The marginal and joint entropies are

H(X_i) = (1/2) log( 2πe σ_i² )   for i = X, Y,
H(X, Y) = (1/2) log( (2πe)² |Σ| ),   with   |Σ| = σ_X² σ_Y² (1 − ρ²).

Therefore,

I(X; Y) = H(X) + H(Y) − H(X, Y) = −(1/2) log(1 − ρ²).

When X and Y are limited to a discrete number of states, observation data are summarized in a contingency table, with row variable X (or i) and column variable Y (or j). Mutual information is one of the measures of association or correlation between the row and column variables. Other measures of association include Pearson's chi-squared test statistics, G-test statistics, etc. In fact, with the same log base, mutual information will be equal to the G-test log-likelihood statistic divided by 2N, where N is the sample size.
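The closed form for the bivariate Gaussian case is easy to turn into a helper. A minimal sketch (the function name is illustrative):

```python
import math

def gaussian_mi(rho):
    """Mutual information, in nats, of a bivariate normal distribution
    with correlation coefficient rho: I = -(1/2) ln(1 - rho^2)."""
    return -0.5 * math.log(1.0 - rho ** 2)

assert gaussian_mi(0.0) == 0.0                  # independent Gaussians share nothing
assert gaussian_mi(0.9) > gaussian_mi(0.5)      # stronger correlation, more information
assert gaussian_mi(-0.5) == gaussian_mi(0.5)    # only |rho| matters
```

Note that I grows without bound as |ρ| → 1, consistent with the entropy derivation above (the joint entropy collapses as the distribution concentrates on a line).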
In many applications, one wants to maximize mutual information (thus increasing dependencies), which is often equivalent to minimizing conditional entropy. Examples include:
https://en.wikipedia.org/wiki/Mutual_information#Metric
In coding theory, an erasure code is a forward error correction (FEC) code under the assumption of bit erasures (rather than bit errors), which transforms a message of k symbols into a longer message (code word) with n symbols such that the original message can be recovered from a subset of the n symbols. The fraction r = k/n is called the code rate. The fraction k′/k, where k′ denotes the number of symbols required for recovery, is called the reception efficiency. The recovery algorithm expects that it is known which of the n symbols are lost. Erasure coding was invented by Irving Reed and Gustave Solomon in 1960.[1] There are many different erasure coding schemes. The most popular erasure codes are Reed–Solomon coding, low-density parity-check codes (LDPC codes), and turbo codes.[1] As of 2023, modern data storage systems can be designed to tolerate the complete failure of a few disks without data loss, using one of three approaches:[2][3][4] While technically RAID can be seen as a kind of erasure code,[5] "RAID" is generally applied to an array attached to a single host computer (which is a single point of failure), while "erasure coding" generally implies multiple hosts,[3] sometimes called a Redundant Array of Inexpensive Servers (RAIS). The erasure code allows operations to continue when any one of those hosts stops.[4][6] Compared to block-level RAID systems, object-storage erasure coding has some significant differences that make it more resilient.[7][8][9][10][11] Optimal erasure codes have the property that any k out of the n code word symbols are sufficient to recover the original message (i.e., they have optimal reception efficiency). Optimal erasure codes are maximum distance separable codes (MDS codes). Parity check is the special case where n = k + 1. From a set of k values {v_i} (1 ≤ i ≤ k), a checksum is computed and appended to the k source values:

v_{k+1} = − Σ_{i=1}^{k} v_i.

The set of k + 1 values {v_i} (1 ≤ i ≤ k+1) is now consistent with regard to the checksum: all k + 1 values sum to zero.
If one of these values, v_e, is erased, it can be easily recovered by summing the remaining variables:

v_e = − Σ_{i=1, i≠e}^{k+1} v_i.

RAID 5 is a widely used application of the parity check erasure code.[1] In the simple case where k = 2, redundancy symbols may be created by sampling different points along the line between the two original symbols. This is pictured with a simple example, called err-mail: Alice wants to send her telephone number (555629) to Bob using err-mail. Err-mail works just like e-mail, except that messages may be lost in transit, each message can carry at most five characters, and sending a message is expensive. Instead of asking Bob to acknowledge the messages she sends, Alice devises the following scheme: she breaks her telephone number into two parts a = 555 and b = 629, and sends the values f(1) = 555, f(2) = 629, f(3) = 703, f(4) = 777 and f(5) = 851 in five separate messages labelled A through E. Bob knows that the form of f(i) is f(i) = a + (b − a)(i − 1), where a and b are the two parts of the telephone number. Now suppose Bob receives "D = 777" and "E = 851". Bob can reconstruct Alice's phone number by computing the values of a and b from the values (f(4) and f(5)) he has received. Bob can perform this procedure using any two err-mails, so the erasure code in this example has a rate of 40%. Note that Alice cannot encode her telephone number in just one err-mail, because it contains six characters, and the maximum length of one err-mail message is five characters. If she sent her phone number in pieces, asking Bob to acknowledge receipt of each piece, at least four messages would have to be sent anyway (two from Alice, and two acknowledgments from Bob). So the erasure code in this example, which requires five messages, is quite economical. This example is a little bit contrived. For truly generic erasure codes that work over any data set, we would need something other than the f(i) given. The linear construction above can be generalized to polynomial interpolation. Additionally, points are now computed over a finite field. First we choose a finite field F with order of at least n, but usually a power of 2. The sender numbers the data symbols from 0 to k − 1 and sends them.
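The parity-check recovery can be sketched in a few lines. This variant appends the plain sum as the checksum (the article's version negates the sum so that everything sums to zero; the two are equivalent), and the function names are illustrative:

```python
def add_parity(values):
    """Append a plain-sum checksum to k source values, producing a
    codeword of n = k + 1 symbols that survives any single erasure."""
    return values + [sum(values)]

def recover(codeword, erased_index):
    """Recover the single erased symbol at erased_index,
    given that every other symbol of the codeword is intact."""
    k = len(codeword) - 1
    if erased_index == k:
        return sum(codeword[:k])               # the checksum itself was erased
    others = sum(codeword[i] for i in range(k) if i != erased_index)
    return codeword[k] - others                # missing source = checksum - rest

codeword = add_parity([3, 1, 4])               # [3, 1, 4, 8]
assert recover(codeword, 1) == 1
assert recover(codeword, 3) == 8
```

This is exactly the n = k + 1 special case: any k of the k + 1 symbols determine the missing one, which is the idea RAID 5 applies per stripe (with XOR over bytes rather than integer sums).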
He then constructs a (Lagrange) polynomial p(x) of order k such that p(i) is equal to data symbol i. He then sends p(k), …, p(n − 1). The receiver can now also use polynomial interpolation to recover the lost packets, provided he receives k symbols successfully. If the order of F is less than 2^b, where b is the number of bits in a symbol, then multiple polynomials can be used. The sender can construct symbols k to n − 1 "on the fly", i.e., distribute the workload evenly between transmission of the symbols. If the receiver wants to do his calculations "on the fly", he can construct a new polynomial q, such that q(i) = p(i) if symbol i < k was received successfully and q(i) = 0 when symbol i < k was not received. Now let r(i) = p(i) − q(i). Firstly, we know that r(i) = 0 if symbol i < k has been received successfully. Secondly, if symbol i ≥ k has been received successfully, then r(i) = p(i) − q(i) can be calculated. So we have enough data points to construct r and evaluate it to find the lost packets. So both the sender and the receiver require O(n(n − k)) operations and only O(n − k) space for operating "on the fly". This process is implemented by Reed–Solomon codes, with code words constructed over a finite field using a Vandermonde matrix. Most practical erasure codes are systematic codes: each one of the original k symbols can be found copied, unencoded, as one of the n message symbols.[12] (Erasure codes that support secret sharing never use a systematic code.) Near-optimal erasure codes require (1 + ε)k symbols to recover the message (where ε > 0). Reducing ε can be done at the cost of CPU time. Near-optimal erasure codes trade correction capabilities for computational complexity: practical algorithms can encode and decode with linear time complexity. Fountain codes (also known as rateless erasure codes) are notable examples of near-optimal erasure codes.
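The polynomial construction can be sketched end to end. This sketch works over the prime field GF(257) for simplicity, whereas practical Reed–Solomon codes use GF(2^b) as the text notes; the function names and the toy message are illustrative:

```python
P = 257  # a prime larger than n, so arithmetic mod P forms a field

def interpolate(points, x):
    """Evaluate at x the unique Lagrange polynomial through `points`
    (a list of (xi, yi) pairs with distinct xi), over GF(P)."""
    total = 0
    for i, (xi, yi) in enumerate(points):
        num, den = 1, 1
        for j, (xj, _) in enumerate(points):
            if i != j:
                num = num * (x - xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, -1, P)) % P
    return total

def encode(data, n):
    """Evaluate the polynomial with p(i) = data[i] at points 0..n-1.
    The first k = len(data) symbols come out unchanged (systematic)."""
    return [interpolate(list(enumerate(data)), x) for x in range(n)]

def decode(received, k):
    """received: any k of the n (position, symbol) pairs."""
    return [interpolate(received[:k], x) for x in range(k)]

msg = [55, 56, 29]                 # k = 3 data symbols
cw = encode(msg, 5)                # n = 5 codeword symbols
# Symbols 0 and 2 are lost in transit; any k = 3 survivors suffice:
survivors = [(1, cw[1]), (3, cw[3]), (4, cw[4])]
assert decode(survivors, 3) == msg
```

Because a degree-(k−1) polynomial is determined by any k of its points, any k received symbols reconstruct the message, which is the MDS property described above.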
They can transform a k-symbol message into a practically infinite encoded form, i.e., they can generate an arbitrary amount of redundancy symbols that can all be used for error correction. Receivers can start decoding after they have received slightly more than k encoded symbols. Regenerating codes address the issue of rebuilding (also called repairing) lost encoded fragments from existing encoded fragments. This issue occurs in distributed storage systems where communication to maintain encoded redundancy is a problem.[12] Erasure coding is now standard practice for reliable data storage.[13][14][15] In particular, various implementations of Reed–Solomon erasure coding are used by Apache Hadoop, the RAID-6 built into Linux, Microsoft Azure, Facebook cold storage, and Backblaze Vaults.[15][12] The classical way to recover from failures in storage systems was to use replication. However, replication incurs significant overhead in terms of wasted bytes. Therefore, increasingly large storage systems, such as those used in data centers, use erasure-coded storage. The most common form of erasure coding used in storage systems is the Reed–Solomon (RS) code, an advanced mathematical technique that enables regeneration of missing data from pieces of known data, called parity blocks. In a (k, m) RS code, a given set of k data blocks, called "chunks", is encoded into (k + m) chunks. The total set of chunks comprises a stripe. The coding is done such that as long as at least k out of (k + m) chunks are available, one can recover the entire data. This means a (k, m) RS-encoded storage can tolerate up to m failures. Example: In the RS(10, 4) code, which is used in Facebook for their HDFS,[16] 10 MB of user data is divided into ten 1 MB blocks. Then, four additional 1 MB parity blocks are created to provide redundancy. This can tolerate up to 4 concurrent failures. The storage overhead here is 14/10 = 1.4×.
In the case of a fully replicated system, the 10 MB of user data would have to be replicated 4 times to tolerate up to 4 concurrent failures. The storage overhead in that case would be 50/10 = 5 times. This gives an idea of the lower storage overhead of erasure-coded storage compared to full replication, and thus its attraction in today's storage systems. Initially, erasure codes were used to reduce the cost of storing "cold" (rarely accessed) data efficiently; but erasure codes can also be used to improve performance serving "hot" (more frequently accessed) data.[12] RAID N+M divides data blocks across N+M drives, and can recover all the data when any M drives fail.[1] In particular, RAID 7.3 refers to triple-parity RAID, and can recover all the data when any 3 drives fail.[17] Here are some examples of implementations of the various codes:
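The overhead comparison above is simple arithmetic and can be captured in a sketch (the function names are illustrative):

```python
def rs_overhead(k, m):
    """Storage overhead of a (k, m) Reed-Solomon stripe: (k + m) / k
    bytes stored per byte of user data, tolerating m failures."""
    return (k + m) / k

def replication_overhead(failures):
    """Full replication must keep failures + 1 complete copies
    to survive that many concurrent failures."""
    return float(failures + 1)

# The RS(10, 4) vs. 5-copy comparison from the text:
assert rs_overhead(10, 4) == 1.4
assert replication_overhead(4) == 5.0
```

Both schemes survive 4 concurrent failures, but RS(10, 4) stores 1.4 bytes per user byte versus replication's 5, which is the cost argument driving erasure-coded storage.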
https://en.wikipedia.org/wiki/Erasure_code
Linguistic discrimination (also called glottophobia, linguicism and languagism) is the unfair treatment of people based upon their use of language and the characteristics of their speech, such as their first language, their accent, the perceived size of their vocabulary (whether or not the speaker uses complex and varied words), their modality, and their syntax.[1] For example, an Occitan speaker in France will probably be treated differently from a French speaker.[2] Based on a difference in use of language, a person may automatically form judgments about another person's wealth, education, social status, character or other traits, which may lead to discrimination. This has led to public debate surrounding localisation theories, likewise with overall diversity prevalence in numerous nations across the West. Linguistic discrimination was at first considered an act of racism. In the mid-1980s, linguist Tove Skutnabb-Kangas captured the idea of language-based discrimination as linguicism, which was defined as "ideologies and structures used to legitimize, effectuate, and reproduce unequal divisions of power and resources (both material and non-material) between groups which are defined on the basis of language".[3] Although different names have been given to this form of discrimination, they all hold the same definition. Linguistic discrimination is culturally and socially determined due to preference for one use of language over others.
Scholars have analyzed the role of linguistic imperialism in linguicism, with some asserting that speakers of dominant languages gravitate toward discrimination against speakers of other, less dominant languages, while disadvantaging themselves linguistically by remaining monolingual.[4] According to Carolyn McKinley, this phenomenon is most present in Africa, where the majority of the population speaks European languages introduced during the colonial era; African states are also noted as instituting European languages as the main medium of instruction, instead of indigenous languages.[4] UNESCO reports have noted that this has historically benefitted only the African upper class, conversely disadvantaging the majority of Africa's population, who hold varying levels of fluency in the European languages spoken across the continent.[4] Scholars have also noted the influence of the linguistic dominance of English on academic disciplines; Anna Wierzbicka, professor of linguistics at the Australian National University, has described disciplines such as the social sciences and humanities as being "locked in a conceptual framework grounded in English", preventing academia as a whole from reaching a "more universal, culture-independent perspective".[5] Speakers with certain accents may experience prejudice. For example, some accents hold more prestige than others depending on the cultural context. However, with so many dialects, it can be difficult to determine which is the most preferable. The best answer linguists can give, such as the authors of Do You Speak American?, is that it depends on the location and the speaker. Research has determined, however, that some sounds in languages may naturally be judged to sound less pleasant.[6] Also, certain accents tend to carry more prestige in some societies than other accents. For example, in the United States, speaking General American (a variety associated with the white middle class) is widely preferred in many contexts, such as television journalism.
Also, in the United Kingdom, Received Pronunciation is associated with being of higher class and thus more likable.[7] In addition to prestige, research has shown that certain accents may also be associated with less intelligence and poorer social skills.[8] An example can be seen in the difference between Southerners and Northerners in the United States, where people from the North are typically perceived as being less likable in character, and Southerners are perceived as being less intelligent. As sociolinguist Lippi-Green argues, "It has been widely observed that when histories are written, they focus on the dominant class... Generally studies of the development of language over time are very narrowly focused on the smallest portion of speakers: those with power and resources to control the distribution of information."[9] Linguistic discrimination appeared before the term was established. During the 1980s, scholars explored the connection between racism and languages; linguistic discrimination was treated as a part of racism when it was first studied. One early case that helped establish the term was in New Zealand, where white colonizers judged the native Māori population by their language. Linguistic discrimination may originate from fixed institutions and stereotypes of the elite class. Elites reveal strong racism through writing, speaking, and other communication methods, providing a basis for discrimination. Their way of speaking the language is considered that of the higher class, emphasizing the idea that how one speaks a language is related to social, economic, and political status.[10] As sociolinguistics evolved, scholars began to recognize the need for a more nuanced framework to analyze the complex interactions between language and social identity. This led to the introduction of linguistic ideology, a critical concept that specifically addresses the nuances of linguistic discrimination without conflating it with broader issues of racism.
Linguistic ideology can be defined as the beliefs, attitudes, and assumptions that a society holds about language, including the idea that the way an individual speaks can serve as a powerful indicator of their social status and identity within a community. This perspective enables researchers to unpack how certain linguistic features, such as accents, dialects, and speech patterns, are often laden with social meanings that can perpetuate stereotypes about different groups. These ideologies shape our perceptions and evaluations of speakers, leading to discriminatory practices based on linguistic characteristics. Consequently, linguistic discrimination can be understood as a phenomenon deeply rooted in societal beliefs and cognitive biases, highlighting the intersection of language, identity, and power dynamics within various populations. By focusing on linguistic ideology, sociolinguistics provides a more targeted lens through which to examine the social consequences of language use and the systemic inequalities that arise from such perceptions. This approach not only enriches our understanding of language as a social tool but also emphasizes the importance of critically examining the underlying ideologies that inform judgments about speech and speakers.

It is natural for human beings to want to identify with others. One way we do this is by categorizing individuals into specific social groups. While some groups are often assumed to be readily noticeable (such as those defined by ethnicity or gender), other groups are less salient. The linguist Carmen Fought explains how an individual's use of language may allow another person to categorize them into a specific social group that may otherwise be less apparent.[11] For example, in the United States it is common to perceive Southerners as less intelligent.
Belonging to a social group such as the South may be less salient than membership in groups defined by ethnicity or gender. Language provides a bridge for prejudice to occur against these less salient social groups.[12]

Linguistic discrimination is a form of racism. Its impact ranges from physical violence to mental trauma to the extinction of a language. Victims of linguistic discrimination may experience physical bullying in school and lower earnings at work. In countries where a variety of languages exist, it is hard for people to obtain basic social services such as education and health care[13] when they do not understand the language. Mentally, they may feel ashamed or guilty about speaking their home language.[14] People who speak a language other than the mainstream language often do not feel socially accepted. Research shows that countries with assimilation policies see higher stress among minority-language speakers,[15] who are forced to accept the mainstream language and a foreign culture.[16] According to statistics, an endangered language goes extinct every two weeks. This is because, at the country level, linguistically marginalized populations must learn the common language to obtain resources; their opportunities are very limited when they cannot communicate in a way everyone else understands.[17]

Because English is spoken in most countries of the world, speakers of English from different linguistic backgrounds frequently experience linguistic discrimination when they meet. Regional differences and native languages may affect how people speak the language. For example, many non-native speakers cannot pronounce the "th" sound and replace it with the "s" sound, which is more common in other languages: "thank" becomes "sank," and "mother" becomes "mozer." In Russian-English pronunciation, "Hi, where were you" may be pronounced like "Hi, veir ver you," since it is closer to Russian.
This may be considered an inappropriate way to speak the language and be ridiculed by native speakers. Research has shown that such linguistic discrimination may, in the worst cases, lead to bullying and violence. However, reactions to mixed pronunciations of different languages are not always negative: some native speakers find these mixes distinctive and appealing, while others are unfriendly toward such speakers. Nonetheless, all of these reactions rest on stereotypes of certain languages and may lead to cognitive bias. Former president Donald Trump's wife, Melania Trump, was harshly mocked and insulted on the internet for her Slovenian-accented English.[18] In fact, in many countries where English is the lingua franca, accent is a part of identity.[19]

The impacts of colonization on linguistic traditions vary based on the form of colonization experienced: trader, settler, or exploitation.[20] The Congolese-American linguist Salikoko Mufwene describes trader colonization as one of the earliest forms of European colonization.
In regions such as the western coast of Africa as well as the Americas, trade relations between European colonizers and indigenous peoples led to the development of pidgin languages.[20] Some of these languages, such as Delaware Pidgin and Mobilian Jargon, were based on Native American languages, while others, such as Nigerian Pidgin and Cameroonian Pidgin, were based on European ones.[21] As trader colonization proceeded mainly via these hybrid languages, rather than the languages of the colonizers, scholars like Mufwene contend that it posed little threat to indigenous languages.[21]

Trader colonization was often followed by settler colonization, in which European colonizers settled in these colonies to build new homes.[20] Hamel, a Mexican linguist, argues that "segregation" and "integration" were the two primary ways through which settler colonists engaged with aboriginal cultures.[22] In countries such as Uruguay, Brazil, Argentina, and those in the Caribbean, segregation and genocide decimated indigenous societies.[22] Widespread death due to war and illness caused many indigenous populations to lose their indigenous languages.[20] In contrast, in countries that pursued policies of "integration", such as Mexico, Guatemala and the Andean states, indigenous cultures were lost as aboriginal tribes mixed with colonists.[22] In these countries, the establishment of new European orders led to the adoption of colonial languages in governance and industry.[20] In addition, European colonists also viewed the dissolution of indigenous societies and traditions as necessary for the development of a unified nation state.[22] This led to efforts to destroy tribal languages and cultures: in Canada and the United States, for example, Native children were sent to boarding schools such as Col.
Richard Pratt's Carlisle Indian Industrial School.[20][23] Today, in countries such as the United States, Canada and Australia, which were once settler colonies, indigenous languages are spoken by only a small minority of the populace.

Several postcolonial literary theorists have drawn a link between linguistic discrimination and the oppression of indigenous cultures. The prominent Kenyan author Ngugi wa Thiong'o, for example, argues in his book Decolonizing the Mind that language is both a medium of communication and a carrier of culture.[25] As a result, linguistic discrimination resulting from colonization has facilitated the erasure of pre-colonial histories and identities.[25] For example, African slaves were taught English and forbidden to use their indigenous languages. This severed the slaves' linguistic, and thus cultural, connection to Africa.[25]

In contrast to settler colonies, in exploitation colonies education in colonial tongues was accessible only to a small indigenous elite.[26] Both the British Macaulay Doctrine and the French and Portuguese systems of assimilation, for example, sought to create an "elite class of colonial auxiliaries" who could serve as intermediaries between the colonial government and the local populace.[26] As a result, fluency in colonial languages became a signifier of class in colonized lands.[citation needed]

In postcolonial states, linguistic discrimination continues to reinforce notions of class. In Haiti, for example, working-class Haitians predominantly speak Haitian Creole, while members of the local bourgeoisie are able to speak both French and Creole.[27] Members of this local elite frequently conduct business and politics in French, thereby excluding many of the working class from such activities.[27] In addition, D. L.
Sheath, an advocate for the use of indigenous languages in India, writes that the Indian elite associates nationalism with a unitary identity and, in this context, "uses English as a means of exclusion and an instrument of cultural hegemony".[28]

Class disparities in postcolonial nations are often reproduced through education. In countries such as Haiti, schools attended by the bourgeoisie are usually of higher quality and use colonial languages as their medium of instruction. On the other hand, schools attended by the rest of the population are often taught in Haitian Creole.[27] Scholars such as Hebblethwaite argue that Creole-based education would improve learning, literacy and socioeconomic mobility in a country where 95% of the population is monolingual in Creole.[29] However, resultant disparities in colonial-language fluency and educational quality can impede social mobility.[27]

On the other hand, areas such as French Guiana have chosen to teach colonial languages in all schools, often to the exclusion of local indigenous languages.[30] As colonial languages were viewed by many as the "civilized" tongues, being "educated" often meant being able to speak and write in these colonial tongues.[30] Indigenous-language education was often seen as an impediment to achieving fluency in the colonial languages, and was thus deliberately suppressed.[30]

Certain Commonwealth nations such as Uganda and Kenya have historically had a policy of teaching in indigenous languages and only introducing English in the upper grades.[31] This policy was a legacy of the "dual mandate" as conceived by Lord Lugard, a British colonial administrator in Nigeria.[31] By the post-war period, however, English was increasingly viewed as a necessary skill for accessing professional employment and better economic opportunities.[31][32] As a result, there was increasing popular support for English-based education, which Kenya's Ministry of Education adopted after independence, and Uganda following
their civil war. Later on, members of the Ominde Commission in Kenya expressed the need for Kiswahili in promoting a national and pan-African identity. Kenya therefore began to offer Kiswahili as a compulsory, non-examinable subject in primary school, but it remained secondary to English as a medium of instruction.[31]

While the mastery of colonial languages may provide better economic opportunities, the Convention against Discrimination in Education[33] and the UN Convention on the Rights of the Child also state that minority children have the right to "use [their] own [languages]". The suppression of indigenous languages within the education system appears to contravene this treaty.[34][35] In addition, children who speak indigenous languages can also be disadvantaged when educated in foreign languages, and often have high illiteracy rates. For example, when the French arrived to "civilize" Algeria, which included imposing French on local Algerians, the literacy rate in Algeria was over 40%, higher than that in France at the time. However, when the French left in 1962, the literacy rate in Algiers was at best 10–15%.[36]

As colonial languages are used as the languages of governance and commerce in many colonial and postcolonial states,[37] locals who speak only indigenous languages can be disenfranchised. By forcing locals to speak the colonizers' language, colonizers assimilated indigenous peoples and held their colonies longer. For example, when representative institutions were introduced to the Algoma region in what is now modern-day Canada, the local returning officer accepted only the votes of individuals who were enfranchised, which required indigenous peoples to "read and write fluently... [their] own and another language, either English or French".[38] This caused political parties to increasingly identify with settler perspectives rather than indigenous ones.[38]

Setting language restrictions was a common approach for colonizers.
In 1910, the Japanese government enacted decrees in colonial Korea aimed at eliminating existing Korean culture and language; all schools were required to teach Japanese and Hanja. By doing so, the Japanese government made Korea more dependent on Japan and prolonged its colonization. Even today, many postcolonial states continue to use colonial languages in their public institutions, even though these languages are not spoken by the majority of their residents.[39] For example, the South African justice system still relies primarily on English and Afrikaans as its primary languages, even though most South Africans, particularly Black South Africans, speak indigenous languages.[40] In these situations, the use of colonial languages can present barriers to participation in public institutions.

Linguistic discrimination is often defined in terms of prejudice about language. It is important to note that although there is a relationship between prejudice and discrimination, they are not always directly related.[41] Prejudice can be defined as negative attitudes towards a person based on their membership of a social group, whereas discrimination can be seen as acts towards them. The difference between the two should be recognized because prejudice may be held against someone without being acted on.[42] The following are examples of linguistic prejudice which may result in discrimination.

While, theoretically, any speaker may be the victim of linguicism regardless of social and ethnic status, oppressed and marginalized social minorities are often its most consistent targets, because the speech varieties associated with such groups tend to be stigmatized.

Canada was first colonized by French settlers. Later, the British took control of Canada, while the influence of French culture and language remained enormous.
Historically, the Canadian government and English Canadians have discriminated against Canada's French-speaking population. During some periods in the history of Canada, they treated its members as second-class citizens and favored the members of the more powerful English-speaking population. This discrimination has resulted in or contributed to many developments in Canadian history, including the rise of the Quebec sovereignty movement, Quebecois nationalism, the Lower Canada Rebellion, the Red River Rebellion, a proposed Acadia province, extreme poverty and low socio-economic status among the French Canadian population, low francophone graduation rates resulting from the outlawing of francophone schools across Canada, differences in average earnings between francophones and anglophones in the same positions, and fewer chances of being hired or promoted for francophones, among other effects.

The Charter of the French Language, first established in 1977 and amended several times since, has been accused of being discriminatory by English speakers.[citation needed] The law makes French the official language of Quebec and mandates its use (with exceptions) in government offices and communiqués, in schools, and in commercial public relations. The law is a way of preventing linguistic discrimination against the majority francophone population of Quebec, who were for a very long time controlled by the English-speaking minority of the province. The law also seeks to protect French against the growing social and economic dominance of English.
Though the English-speaking population had been shrinking since the 1960s, the law hastened the decline, and the 2006 census showed a net loss of 180,000 native English speakers.[43] Despite this, speaking English at work continues to be strongly correlated with higher earnings, with French-only speakers earning significantly less.[44] The law is credited with successfully raising the status of French in a predominantly English-speaking economy, and it has been influential in countries facing similar circumstances.[43] However, amendments made under societal pressure have weakened it, leaving it less effective than it was in the past.[45]

The linguistic disenfranchisement rate in the EU varies significantly across countries. For residents of two EU countries who are either native speakers of English or proficient in English as a foreign language, the disenfranchisement rate is zero. In his study "Multilingual communication for whom? Language policy and fairness in the European Union", Michele Gazzola concludes that the current multilingual policy of the EU is not in absolute terms the most effective way of informing Europeans about the EU; in certain countries, additional languages may be useful to minimize linguistic exclusion.[46]

In the 24 countries examined, an English-only language policy would exclude 51% to 90% of adult residents. A language regime based on English, French and German would disenfranchise 30% to 56% of residents, whereas a regime based on six languages would bring the share of excluded population down to 9–22%. After Brexit, the rates of linguistic exclusion associated with a monolingual policy and with trilingual and hexalingual regimes are likely to increase.[46]

Here and elsewhere the terms 'standard' and 'non-standard' make analysis of linguicism difficult.
These terms are used widely by linguists and non-linguists alike when discussing varieties of American English that engender strong opinions, a false dichotomy which is rarely challenged or questioned. Linguists Nicolas Coupland, Rosina Lippi-Green, and Robin Queen (among others) have interpreted this as a discipline-internal lack of consistency which undermines progress; if linguists themselves cannot move beyond the ideological underpinnings of 'right' and 'wrong' in language, there is little hope of advancing a more nuanced understanding in the general population.[64][65]

Because some black Americans speak a particular non-standard variety of English which is often seen as substandard, they are often targets of linguicism.[66] AAVE is often perceived by members of mainstream American society as indicative of low intelligence or limited education, and, as with many other non-standard dialects and especially creoles, it is commonly called "lazy" or "bad" English. According to research, AAVE was initially a language that black people in America used to express the life of oppression clearly.[67] Listeners report that it is usually more difficult to understand and respond to an AAVE speaker.[68]

AAVE often contains words and phrases whose meanings differ from those in standard English. Pronunciation also differs from standard English, and some phrases require sufficient cultural background to understand.
Grammatically, AAVE has complex structures that allow speakers to express a wider range of meanings with more specificity.[69]

The linguist John McWhorter has described this particular form of linguicism as particularly problematic in the United States, where non-standard linguistic structures are often deemed "incorrect" by teachers and potential employers, in contrast to countries such as Morocco, Finland and Italy, where diglossia (the ability to switch between two or more dialects or languages) is an accepted norm and non-standard usage in conversation is seen as a mark of regional origin, not of intellectual capacity or achievement.

In the 1977 Ann Arbor court case, AAVE was compared against standard English to determine how much of an educational barrier existed for children who had been primarily raised with AAVE. The assigned linguists determined that the differences, stemming from a history of racial segregation, were significant enough for the children to receive supplementary teaching to better understand standard English.[70]

For example, a black American who uses a typical AAVE sentence such as "He be comin' in every day and sayin' he ain't done nothing" may be judged as having a deficient command of grammar, whereas, in fact, such a sentence is constructed on a complex grammar that differs from that of standard English; it is not a degenerate form of it.[71] A listener may misjudge the user of such a sentence as unintellectual or uneducated. The speaker may be intellectually capable, educated, and proficient in standard English, but may choose to say the sentence in AAVE for social and sociolinguistic reasons, such as the intended audience of the sentence, a phenomenon known as code switching. Some argue that AAVE is now distinctive and systematic enough to be considered a language in its own right, one that derives from English but has become a language of its own.
It shares many characteristics with standard English but has its own complexity, rooted in African American culture and history. Nonetheless, AAVE is generally used in informal situations; it is not uncommon for AAVE speakers to use formal, standard English in formal settings. Reports have shown that black workers who sound more "black" earn on average 12% less than their peers (2009 data).[72] In education, students who speak AAVE are told by their teachers that AAVE is not proper or not correct. According to a survey, when a person speaks in AAVE, listeners tend to believe that the speaker is an African American from North America and to associate the speaker with adjectives such as poor, uneducated, and unintelligent.[73] Merely by sounding black, a person may be assumed to fit a certain image. Furthermore, the legal system in the United States has been found to produce worse outcomes for speakers of AAVE: court reporters are less accurate at transcribing black speakers,[74] and judges can misinterpret the meaning of black speech in cases.[75]

Another form of linguicism is evidenced by the following: in some parts of the United States, a person who has a strong Spanish accent and uses only simple English words may be thought of as poor, poorly educated, and possibly an undocumented immigrant. However, if the same person has a diluted accent or no noticeable accent at all and can use a myriad of words in complex sentences, they are likely to be perceived as more successful, better educated, and a "legitimate citizen".

Accent has two parts: the speaker and the listener. Some people may perceive an accent as strong because they are not used to hearing it and the emphasis falls on an unexpected syllable; others may find the same accent soft and imperceptible. The bias and discrimination that ensue are tied to the difficulty the listener has in understanding that accent.
The fact that the person uses a very broad vocabulary creates even more cognitive dissonance on the part of the listener, who may immediately think of the speaker as undocumented, poor, uneducated, or even insulting to their intelligence.

Linguistic discrimination against Asians remains understudied. One scholar recounted the experience of an Asian reporter who was asked whether she could speak English every time she met a stranger; everyone assumed that she might not understand English because she had an Asian appearance.[78] A Pew Research study conducted in 2022 found that around 59% of Asian immigrants could speak fluent English,[79] with the proportion much lower among new immigrants. This low English literacy level, combined with a lack of translation, discourages many Asian immigrants from obtaining access to social services such as health care. Asian immigrants, especially younger students, experience a language barrier, as they are forced to learn a new language.[80]

Chinglish, the mixture of Chinese phrases or grammar with English that characterizes the way many Chinese immigrants speak, often accompanied by a Chinese accent, is a common point of attack. An example would be "Open the light," since "open" and "turn on" are the same word ("开") in Chinese. Another example would be "Yes, I have."[81] These are literal translations from Chinese to English, and such habits are hard for Chinese speakers to unlearn quickly. Speaking Chinglish may result in racial discrimination, even though it reflects only the differences between Chinese and English grammar.

Users of American Sign Language (ASL) have faced linguistic discrimination based on perceptions of the legitimacy of signed languages compared to spoken languages.
This attitude was explicitly expressed in the Milan Conference of 1880, which set a precedent for public opinion of manual forms of communication, including ASL, creating lasting consequences for members of the Deaf community.[82] The conference almost unanimously (save a handful of allies such as Thomas Hopkins Gallaudet) reaffirmed the use of oralism, instruction conducted exclusively in spoken language, as the preferred education method for Deaf individuals.[83] These ideas were outlined in eight resolutions which ultimately resulted in the removal of Deaf individuals from their own educational institutions, leaving generations of Deaf persons to be educated solely by hearing individuals.[84]

Due to misconceptions about ASL, it was not recognized as its own, fully functioning language until recently. In the 1960s, the linguist William Stokoe proved ASL to be its own language based on its unique structure and grammar, separate from that of English. Before this, ASL was thought to be merely a collection of gestures used to represent English. Because of its use of visual space, it was mistakenly believed that its users were of a lesser mental capacity. The misconception that ASL users are incapable of complex thought was prevalent, although it has decreased as further studies supporting its recognition as a language have taken place. ASL users have faced overwhelming discrimination for the supposedly "lesser" language that they use and have been met with condescension, especially when using their language in public.[85] Another way discrimination against ASL is evident is that, despite research conducted by linguists such as Stokoe or Clayton Valli and Cecil Lucas of Gallaudet University, ASL is not always recognized as a language.[86] Its recognition is crucial both for those learning ASL as an additional language and for prelingually deaf children who learn ASL as their first language.
The linguist Sherman Wilcox concludes that, given that ASL has a body of literature and international scope, singling it out as unsuitable for a foreign-language curriculum is inaccurate. Russel S. Rosen also writes about government and academic resistance to acknowledging ASL as a foreign language at the high school or college level, which Rosen believes often resulted from a lack of understanding about the language. Rosen's and Wilcox's conclusions both point to the discrimination ASL users face regarding its status as a language, which, although decreasing over time, is still present.[87]

In the medical community, there is immense bias against deafness and ASL. This stems from the belief that spoken languages are superior to sign languages.[88] Because 90% of deaf babies are born to hearing parents, who are usually unaware of the existence of the Deaf community, they often turn to the medical community for guidance.[89] Medical and audiological professionals, who are typically biased against sign languages, encourage parents to get a cochlear implant for their deaf child so that the child can use spoken language.[88] Research shows, however, that deaf children without cochlear implants acquire ASL with much greater ease than deaf children with cochlear implants acquire spoken English. In addition, medical professionals discourage parents from teaching ASL to their deaf child to avoid compromising their English,[90] although research shows that learning ASL does not interfere with a child's ability to learn English. In fact, the early acquisition of ASL proves useful to the child in learning English later on. When making a decision about cochlear implantation, parents are not properly educated about the benefits of ASL or the Deaf community.[89] This is seen by many members of the Deaf community as cultural and linguistic genocide.[90]

Linguicism applies to written, spoken, or signed languages. The quality of a book or article may be judged by the language in which it is written.
In the scientific community, for example, those who evaluated a text in two language versions, English and the national Scandinavian language, rated the English-language version as being of higher scientific content.[116]

The Internet operates largely through written language. Readers of a web page, Usenet group, forum post, or chat session may be more inclined to take the author seriously if the text is written in accordance with the standard language.

In contrast to the previous examples of linguistic prejudice, linguistic discrimination involves the actual treatment of individuals based on their use of language. Examples may be clearly seen in the workplace, in marketing, and in education systems. For example, some workplaces enforce an English-only policy, part of an American political movement that pushes for English to be accepted as the official language. In the United States, federal law (Titles VI and VII of the Civil Rights Act of 1964) protects non-native speakers from discrimination in the workplace based on their national origin or use of dialect. There are also state laws that address the protection of non-native speakers, such as the California Fair Employment and Housing Act. However, employers often argue in response that clear, understandable English is needed in specific work settings in the U.S.[2]
https://en.wikipedia.org/wiki/Glottophobia
Computer science (also called computing science) is the study of the theoretical foundations of information and computation and their implementation and application in computer systems. One well-known subject classification system for computer science is the ACM Computing Classification System devised by the Association for Computing Machinery.

Computer science can be described as all of the following:

Related outlines:
Outline of artificial intelligence
Outline of databases
Outline of software engineering
https://en.wikipedia.org/wiki/Outline_of_computer_science
Frequency bands for 5G New Radio (5G NR), the air interface or radio access technology of 5G mobile networks, are separated into two different frequency ranges. The first is Frequency Range 1 (FR1),[1] which includes sub-7 GHz frequency bands, some of which are traditionally used by previous standards, but which has been extended to cover potential new spectrum offerings from 410 MHz to 7125 MHz. The other is Frequency Range 2 (FR2),[2] which includes frequency bands from 24.25 GHz to 71.0 GHz. In November and December 2023, a third range, Frequency Range 3 (FR3),[3] covering frequencies from 7.125 GHz to 24.25 GHz, was proposed by the World Radio Conference; as of September 2024, this range has not been added to the official standard. Frequency bands are also available for non-terrestrial networks (NTN)[4] in both the sub-7 GHz and the 17.3 GHz to 30 GHz ranges.

From the latest published version (Rel. 18) of the respective 3GPP technical standard (TS 38.101),[5] the following tables list the specified frequency bands and the channel bandwidths of the 5G NR standard. Note that NR bands are defined with the prefix "n". When an NR band overlaps with a 4G LTE band, they share the same band number.
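The frequency-range boundaries above can be summarized in a short sketch. This is an illustration, not part of the 3GPP specification: the function name and return strings are invented here, and only the range limits quoted in this article (410 MHz–7125 MHz for FR1, 24.25–71.0 GHz for FR2, and the proposed 7.125–24.25 GHz for FR3) are used.

```python
def classify_frequency_range(freq_mhz: float) -> str:
    """Return the 5G NR frequency range containing freq_mhz (in MHz).

    Boundary values follow the ranges described above; FR3 is the
    proposed range, not part of the official standard as of Rel. 18.
    """
    if 410 <= freq_mhz <= 7125:        # FR1: 410 MHz to 7125 MHz (sub-7 GHz)
        return "FR1"
    if 7125 < freq_mhz < 24250:        # FR3 (proposed): 7.125 GHz to 24.25 GHz
        return "FR3 (proposed)"
    if 24250 <= freq_mhz <= 71000:     # FR2: 24.25 GHz to 71.0 GHz
        return "FR2"
    return "outside defined 5G NR ranges"
```

For example, a 3.5 GHz carrier (3500 MHz) falls in FR1, while a 28 GHz millimeter-wave carrier (28000 MHz) falls in FR2.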
https://en.wikipedia.org/wiki/5G_NR_frequency_bands
Finitism is a philosophy of mathematics that accepts the existence only of finite mathematical objects. It is best understood in comparison with the mainstream philosophy of mathematics, in which infinite mathematical objects (e.g., infinite sets) are accepted as existing.

The main idea of finitistic mathematics is to reject the existence of infinite objects such as infinite sets. While all natural numbers are accepted as existing, the set of all natural numbers is not considered to exist as a mathematical object. Therefore, quantification over infinite domains is not considered meaningful. The mathematical theory often associated with finitism is Thoralf Skolem's primitive recursive arithmetic.

The introduction of infinite mathematical objects occurred a few centuries ago, when the use of infinite objects was already a controversial topic among mathematicians. The issue entered a new phase when Georg Cantor in 1874 introduced what is now called naive set theory and used it as a base for his work on transfinite numbers. When paradoxes such as Russell's paradox, Berry's paradox and the Burali-Forti paradox were discovered in Cantor's naive set theory, the issue became a heated topic among mathematicians.

Mathematicians took various positions. All agreed about finite mathematical objects such as natural numbers, but there were disagreements regarding infinite mathematical objects. One position was the intuitionistic mathematics advocated by L. E. J. Brouwer, which rejected the existence of infinite objects until they are constructed. Another position was endorsed by David Hilbert: finite mathematical objects are concrete objects, infinite mathematical objects are ideal objects, and accepting ideal mathematical objects does not cause a problem regarding finite mathematical objects.
More formally, Hilbert believed that it is possible to show that any theorem about finite mathematical objects that can be obtained using ideal infinite objects can also be obtained without them. Allowing infinite mathematical objects would therefore not cause a problem regarding finite objects. This led to Hilbert's program of proving both consistency and completeness of set theory using finitistic means, as this would imply that adding ideal mathematical objects is conservative over the finitistic part. Hilbert's views are also associated with the formalist philosophy of mathematics. Hilbert's goal of proving the consistency and completeness of set theory, or even arithmetic, through finitistic means turned out to be an impossible task due to Kurt Gödel's incompleteness theorems. However, Harvey Friedman's grand conjecture would imply that most mathematical results are provable using finitistic means.

Hilbert did not give a rigorous explanation of what he considered finitistic and referred to as elementary. However, based on his work with Paul Bernays, some experts such as Tait (1981) have argued that primitive recursive arithmetic can be considered an upper bound on what Hilbert considered finitistic mathematics.[1]

After Gödel's theorems made it clear that there is no hope of proving both the consistency and completeness of mathematics, and with the development of seemingly consistent axiomatic set theories such as Zermelo–Fraenkel set theory, most modern mathematicians do not focus on this topic. In her book The Philosophy of Set Theory, Mary Tiles characterized those who allow potentially infinite objects as classical finitists, and those who do not as strict finitists: for example, a classical finitist would allow statements such as "every natural number has a successor" and would accept the meaningfulness of infinite series in the sense of limits of finite partial sums, while a strict finitist would not.
The written history of mathematics was thus classically finitist until Cantor created the hierarchy of transfinite cardinals at the end of the 19th century. Leopold Kronecker remained a strident opponent of Cantor's set theory:[2]

Die ganzen Zahlen hat der liebe Gott gemacht, alles andere ist Menschenwerk. (God created the integers; all else is the work of man.)

Reuben Goodstein was another proponent of finitism. Some of his work involved building up to analysis from finitist foundations. Although he denied it, much of Ludwig Wittgenstein's writing on mathematics has a strong affinity with finitism.[4]

If finitists are contrasted with transfinitists (proponents of, e.g., Georg Cantor's hierarchy of infinities), then Aristotle too may be characterized as a finitist. Aristotle especially promoted the potential infinity as a middle option between strict finitism and actual infinity (the latter being an actualization of something never-ending in nature, in contrast with the Cantorist actual infinity consisting of the transfinite cardinal and ordinal numbers, which have nothing to do with the things in nature):

But on the other hand to suppose that the infinite does not exist in any way leads obviously to many impossible consequences: there will be a beginning and end of time, a magnitude will not be divisible into magnitudes, number will not be infinite. If, then, in view of the above considerations, neither alternative seems possible, an arbiter must be called in.

Ultrafinitism (also known as ultraintuitionism) takes an even more conservative attitude towards mathematical objects than finitism, and objects to the existence of finite mathematical objects when they are too large.

Towards the end of the 20th century, John Penn Mayberry developed a system of finitary mathematics which he called "Euclidean Arithmetic".
The most striking tenet of his system is a complete and rigorous rejection of the special foundational status normally accorded to iterative processes, including in particular the construction of the natural numbers by the iteration "+1". Consequently, Mayberry is in sharp dissent from those who would seek to equate finitary mathematics with Peano arithmetic or any of its fragments such as primitive recursive arithmetic.
https://en.wikipedia.org/wiki/Strict_finitism
The age of majority is the threshold of legal adulthood as recognized or declared in law.[1] It is the moment when a person ceases to be considered a minor and assumes legal control over their person, actions, and decisions, thus terminating the control and legal responsibilities of their parents or guardian over them. Most countries set the age of majority at 18, but some jurisdictions have a higher age and others lower. The word majority here refers to having greater years and being of full age, as opposed to minority, the state of being a minor. The law in a given jurisdiction may not actually use the term "age of majority"; the term refers to a collection of laws bestowing the status of adulthood.

The term age of majority can be confused with the similar concept of the age of license.[2] As a legal term, "license" means "permission", referring to a legally enforceable right or privilege. Thus, an age of license is an age at which one has legal permission from a given government to participate in certain activities or rituals. The age of majority, on the other hand, is a legal recognition that one has become an adult. Many ages of license coincide with the age of majority to recognize the transition to legal adulthood, but they are nonetheless legally distinct concepts. One need not have attained the age of majority to have permission to exercise certain rights and responsibilities; an age of license may be higher or lower than, or match, the age of majority. For example, to purchase alcoholic beverages, the age of license is 21 in all U.S. states. Another example is the voting age, which prior to 1971 was 21 in the US, as was the age of majority in all or most states. After the voting age was lowered from 21 to 18, the age of majority was lowered to 18 in most states.
In most US states, one may obtain a driver's license, consent to sexual activity, and gain full-time employment at age 16, even though the age of majority is 18 in most states.[3] In the Republic of Ireland the age of majority is 18, but one must be 21 or over to stand for election to the Houses of the Oireachtas.[4] In Portugal the age of majority is likewise 18, and citizens who have reached that age are eligible to run for Parliament,[5] but they must be 35 or over to run for President.[6]

A child who is legally emancipated by a court of competent jurisdiction automatically attains majority upon the signing of the court order. Only emancipation confers the status of majority before a person has actually reached the age of majority. In almost all places, minors who marry are automatically emancipated; some places also emancipate minors who are in the armed forces or who have a certain degree or diploma.[7] Minors who are emancipated may be able to choose where they live, sign contracts, control their financial and medical decisions, and generally make decisions free from parental control, but they are not exempt from age requirements set forth in law for other rights. For example, a minor can be emancipated at 16 in the US (or younger, depending on the state) but must still wait until 18 to vote or buy a firearm, and until 21 to buy alcohol or tobacco.

The Jewish Talmud says that every judgment Josiah, the sixteenth king of Judah (c. 640–609 BCE), issued from his coronation until the age of eighteen was reversed, and he returned the money to the parties whom he had judged liable, due to concern that in his youth he may not have judged the cases correctly.[8] Other Jewish commentators have discussed whether age 13 or 18 is the age to make decisions in a Jewish court.[9]

Roman law did not have an age of majority in the modern sense, as individuals remained under the authority of the pater familias until his death.
The age of adulthood was set at 12 for girls and 14 for boys, with boys gaining rights such as marriage, military service, and any legal capacity that depended on age only, including, until the introduction of the Lex Villia, eligibility for public office.[10]

The Lex Plaetoria allowed those under 25 to contest disadvantageous agreements in case of fraud, later extending to other circumstances, and the other party could escape repercussions only if a curator was involved. To enter a contract, individuals in this age group could request such a curator from the praetor, thus ensuring protection for both sides: this shielded the other contracting party from legal risk and allowed transactions to proceed, as no prudent person would engage without this safeguard. Unlike with a tutor, the requester retained full legal capacity to act, and the role of the curator was merely to prevent fraud. Later, under Marcus Aurelius, the appointment became mandatory: someone under 25 who wanted to enter a contract had to request a curator, and could propose a candidate, whom the praetor could reject. The curator's control over property became closer to that of a tutor, but it applied only to the properties that the praetor assigned to him, not to those acquired by the requester after his appointment.[10]

Over time there was a gradual evolution, initially focusing on property laws (while other legal matters, such as marriage and wills, continued to have separate age thresholds), eventually arriving at the modern concept of an age of majority, commonly set at 18.
Since 2015, some countries have lowered the voting age to 16.[11][12] Some, like England and Wales, are even considering lowering the age of majority to 16,[13] as is already the case in Cuba and Scotland.[14] The main argument for lowering it is that, on average, young people are much more educated than in the past, both because of better individual educational outcomes and because they are raised by more educated parents; the same argument was made in the 1970s, when most countries lowered the age of majority from 21 to 18, which remains the age used in most countries, including the United States.[15][16] Also related to newer generations being more educated and ready for life earlier: compared to the past, information is much more easily accessible as a result of the spread of the Internet, which can be accessed through both the personal computer and the smartphone.

A person reaches the age of majority at midnight at the beginning of the day of that person's relevant birthday; under English common law this was not always the case.[17][better source needed]

In many countries minors can be emancipated: depending on jurisdiction, this may happen through acts such as marriage, attaining economic self-sufficiency, obtaining an educational degree or diploma, or participating in a form of military service. In the United States, all states have some form of emancipation of minors.[18]

The age of majority in countries (or administrative divisions) is listed in order from lowest to highest. Religions have their own rules as to the age of maturity, when a child is regarded as an adult, at least for ritual purposes. In some countries, reaching the age of majority carries other rights and obligations, although in other countries these rights and obligations may be had before or after reaching that age.
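The rule that majority is attained at midnight at the beginning of the relevant birthday can be expressed as a simple date computation. This is a sketch only: the default age of 18 and the 29-February fallback are simplifying assumptions, and real jurisdictions (notably under the older English common-law rule) can differ.

```python
from datetime import date

def has_reached_majority(birth: date, on: date, majority_age: int = 18) -> bool:
    """True if a person born on `birth` has reached majority on date `on`.

    Follows the modern rule stated above: majority begins at midnight
    at the start of the relevant birthday, i.e. on the birthday itself.
    """
    try:
        majority_day = birth.replace(year=birth.year + majority_age)
    except ValueError:
        # Born on 29 February and the target year is not a leap year;
        # use 1 March as a simplifying assumption.
        majority_day = date(birth.year + majority_age, 3, 1)
    return on >= majority_day

print(has_reached_majority(date(2000, 5, 10), date(2018, 5, 10)))  # True
print(has_reached_majority(date(2000, 5, 10), date(2018, 5, 9)))   # False
```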
https://en.wikipedia.org/wiki/Age_of_majority
Linguistic relativity asserts that language influences worldview or cognition. One form of linguistic relativity, linguistic determinism, regards peoples' languages as determining and influencing the scope of cultural perceptions of their surrounding world.[1]

Several names refer to linguistic relativity: the Whorf hypothesis; the Sapir–Whorf hypothesis (/səˌpɪər ˈhwɔːrf/ sə-PEER WHORF); the Whorf–Sapir hypothesis; and Whorfianism.

The hypothesis is in dispute, with many different variations throughout its history.[2][3] The strong hypothesis of linguistic relativity, now referred to as linguistic determinism, is that language determines thought and that linguistic categories limit and restrict cognitive categories. This was a claim by some earlier linguists before World War II;[4] since then it has fallen out of acceptance among contemporary linguists.[5][need quotation to verify] Nevertheless, research has produced positive empirical evidence supporting a weaker version of linguistic relativity:[5][4] that a language's structures influence a speaker's perceptions, without strictly limiting or obstructing them.

Although common, the term Sapir–Whorf hypothesis is sometimes considered a misnomer for several reasons. Edward Sapir (1884–1939) and Benjamin Lee Whorf (1897–1941) never co-authored any works and never stated their ideas in terms of a hypothesis. The distinction between a weak and a strong version of the hypothesis is also a later development; Sapir and Whorf never used such a dichotomy, although their writings and opinions of this relativity principle often expressed it in stronger or weaker terms.[6][7]

The principle of linguistic relativity and the relationship between language and thought have also received attention in varying academic fields, including philosophy, psychology and anthropology. The idea has also influenced works of fiction and the invention of constructed languages.
The idea was first expressed explicitly by 19th-century thinkers such as Wilhelm von Humboldt and Johann Gottfried Herder, who saw language as the expression of the spirit of a nation. Members of the early 20th-century school of American anthropology, including Franz Boas and Edward Sapir, also embraced versions of the idea to a certain extent, including at a 1928 meeting of the Linguistic Society of America,[8] but Sapir in particular wrote more often against than in favor of anything like linguistic determinism. Sapir's student Benjamin Lee Whorf came to be seen as the primary proponent, as a result of his published observations of how he perceived linguistic differences to have consequences for human cognition and behavior. Harry Hoijer, another of Sapir's students, introduced the term "Sapir–Whorf hypothesis",[9] even though the two scholars never formally advanced any such hypothesis.[10] A strong version of relativist theory was developed from the late 1920s by the German linguist Leo Weisgerber. Whorf's principle of linguistic relativity was reformulated as a testable hypothesis by Roger Brown and Eric Lenneberg, who performed experiments designed to determine whether color perception varies between speakers of languages that classify colors differently.

As emphasis on the universal nature of human language and cognition developed during the 1960s, the idea of linguistic relativity fell out of favor among linguists. From the late 1980s, a new school of linguistic relativity scholars has examined the effects of differences in linguistic categorization on cognition, finding broad support for non-deterministic versions of the hypothesis in experimental contexts.[11][12] Some effects of linguistic relativity have been shown in several semantic domains, although they are generally weak.
Currently, most linguists hold a nuanced view of linguistic relativity: language influences certain kinds of cognitive processes in non-trivial ways, but other processes are better understood as arising from connectionist factors. Research emphasizes exploring the manners and extent to which language influences thought.[11]

The idea that language and thought are intertwined is ancient. In his dialogue Cratylus, Plato explores the idea that conceptions of reality, such as Heraclitean flux, are embedded in language. But Plato has also been read as arguing against sophist thinkers such as Gorgias of Leontini, who claimed that the physical world cannot be experienced except through language, which made the question of truth dependent on aesthetic preferences or functional consequences. Plato may have held instead that the world consisted of eternal ideas and that language should represent these ideas as accurately as possible.[13] Nevertheless, Plato's Seventh Letter claims that ultimate truth is inexpressible in words.

Following Plato, St. Augustine argued that language was merely a set of labels applied to already existing concepts. This opinion remained prevalent throughout the Middle Ages.[14] Roger Bacon held that language was but a veil covering eternal truths, hiding them from human experience. For Immanuel Kant, language was but one of several methods used by humans to experience the world.
During the late 18th and early 19th centuries, the idea of the existence of different national characters, or Volksgeister, of different ethnic groups was a major motivator for the German Romantic school and the nascent ideologies of ethnic nationalism.[15]

Johann Georg Hamann is often suggested to be the first among the actual German Romantics to discuss the concept of the "genius" of a language.[16][17] In his "Essay Concerning an Academic Question", Hamann suggests that a people's language affects their worldview:

The lineaments of their language will thus correspond to the direction of their mentality.[18]

In 1820, Wilhelm von Humboldt connected the study of language to the national romanticist program by proposing that language is the fabric of thought: thoughts are produced as a kind of internal dialogue using the same grammar as the thinker's native language.[19] This view was part of a larger picture in which the worldview of an ethnic nation, its "Weltanschauung", was seen as reflected in the grammar of its language. Von Humboldt argued that languages with an inflectional morphological type, such as German, English and the other Indo-European languages, were the most perfect languages, and that this accordingly explained the dominance of their speakers over the speakers of less perfect languages.
Wilhelm von Humboldt declared in 1820:

The diversity of languages is not a diversity of signs and sounds but a diversity of views of the world.[19]

In Humboldt's humanistic understanding of linguistics, each language creates the individual's worldview in its particular way through its lexical and grammatical categories, conceptual organization, and syntactic models.[20]

Herder worked alongside Hamann on the question of whether language had a human, rational origin or a divine one.[21] Herder added the emotional component of the hypothesis, and Humboldt then took this information and applied it to various languages to expand on the hypothesis.

The idea that some languages are superior to others and that lesser languages maintained their speakers in intellectual poverty was widespread during the early 20th century.[22] The American linguist William Dwight Whitney, for example, actively strove to eradicate Native American languages, arguing that their speakers were savages who would be better off learning English and adopting a "civilized" way of life.[23] The first anthropologist and linguist to challenge this opinion was Franz Boas.[24] While performing geographical research in northern Canada he became fascinated with the Inuit and decided to become an ethnographer. Boas stressed the equal worth of all cultures and languages, arguing that there was no such thing as a primitive language and that all languages were capable of expressing the same content, albeit by widely differing means.[25] Boas saw language as an inseparable part of culture, and he was among the first to require ethnographers to learn the native language of the culture under study and to document verbal culture such as myths and legends in the original language.[26][27]

Boas wrote: It does not seem likely [...]
that there is any direct relation between the culture of a tribe and the language they speak, except in so far as the form of the language will be moulded by the state of the culture, but not in so far as a certain state of the culture is conditioned by the morphological traits of the language."[28]

Boas' student Edward Sapir drew on the Humboldtian idea that languages were a major factor in understanding the cultural assumptions of peoples.[29] He espoused the opinion that, because of the differences in the grammatical systems of languages, no two languages were similar enough to allow for perfect cross-translation. Sapir also thought that, because language represented reality differently, it followed that the speakers of different languages would perceive reality differently.

Sapir wrote: No two languages are ever sufficiently similar to be considered as representing the same social reality. The worlds in which different societies live are distinct worlds, not merely the same world with different labels attached.[30]

However, Sapir explicitly rejected strong linguistic determinism, stating, "It would be naïve to imagine that any analysis of experience is dependent on pattern expressed in language."[31]

Sapir was explicit that the associations between language and culture were neither extensive nor particularly profound, if they existed at all:

It is easy to show that language and culture are not intrinsically associated. Totally unrelated languages share in one culture; closely related languages—even a single language—belong to distinct culture spheres. There are many excellent examples in Aboriginal America. The Athabaskan languages form as clearly unified, as structurally specialized, a group as any that I know of. The speakers of these languages belong to four distinct culture areas...
The cultural adaptability of the Athabaskan-speaking peoples is in the strangest contrast to the inaccessibility to foreign influences of the languages themselves.[32]

Sapir offered similar observations about speakers of so-called "world" or "modern" languages, noting, "possession of a common language is still and will continue to be a smoother of the way to a mutual understanding between England and America, but it is very clear that other factors, some of them rapidly cumulative, are working powerfully to counteract this leveling influence. A common language cannot indefinitely set the seal on a common culture when the geographical, physical, and economic determinants of the culture are no longer the same throughout the area."[33]

While Sapir never made a practice of studying directly how languages affected thought, some notion of (probably "weak") linguistic relativity underlay his basic understanding of language, and would be developed by Whorf.[34]

Drawing on influences such as Humboldt and Friedrich Nietzsche, some European thinkers developed ideas similar to those of Sapir and Whorf, generally working in isolation from each other. Prominent in Germany from the late 1920s through the 1960s were the strongly relativist theories of Leo Weisgerber and his concept of a "linguistic inter-world" mediating between external reality and the forms of a given language in ways peculiar to that language.[35] The Russian psychologist Lev Vygotsky read Sapir's work and experimentally studied the ways in which the development of concepts in children was influenced by structures given in language.
His 1934 work Thought and Language[36] has been compared to Whorf's and taken as mutually supportive evidence of language's influence on cognition.[37] Drawing on Nietzsche's ideas of perspectivism, Alfred Korzybski developed the theory of general semantics, which has been compared to Whorf's notions of linguistic relativity.[38] Though influential in its own right, this work has not been influential in the debate on linguistic relativity, which has tended to center on the American paradigm exemplified by Sapir and Whorf.

More than any other linguist, Benjamin Lee Whorf has become associated with what he termed the "linguistic relativity principle".[39] Studying Native American languages, he attempted to account for the ways in which grammatical systems and differences in language use affected perception. Whorf's opinions regarding the nature of the relation between language and thought remain under contention. However, a version of the theory holds some "merit"; for example, "different words mean different things in different languages; not every word in every language has a one-to-one exact translation in a different language".[40] Critics such as Lenneberg,[41] Black, and Pinker[42] attribute to Whorf a strong linguistic determinism, while Lucy, Silverstein and Levinson point to Whorf's explicit rejections of determinism and to passages where he contends that translation and commensuration are possible. Detractors such as Lenneberg,[41] Chomsky and Pinker[43] criticized him for insufficient clarity in his description of how language influences thought, and for not proving his conjectures. Most of his arguments were in the form of anecdotes and speculations that served as attempts to show how "exotic" grammatical traits were associated with what were apparently equally exotic worlds of thought.

In Whorf's words: We dissect nature along lines laid down by our native language.
The categories and types that we isolate from the world of phenomena we do not find there because they stare every observer in the face; on the contrary, the world is presented in a kaleidoscopic flux of impressions which has to be organized by our minds—and this means largely by the linguistic systems of our minds. We cut nature up, organize it into concepts, and ascribe significances as we do, largely because we are parties to an agreement to organize it in this way—an agreement that holds throughout our speech community and is codified in the patterns of our language [...] all observers are not led by the same physical evidence to the same picture of the universe, unless their linguistic backgrounds are similar, or can in some way be calibrated.[44]

Among Whorf's best-known examples of linguistic relativity are instances where a non-European language has several terms for a concept that is described with only one word in European languages. (Whorf used the abbreviation SAE, "Standard Average European", to allude to the rather similar grammatical structures of the well-studied European languages, in contrast to the greater diversity of less-studied languages.)
One of Whorf's examples was the supposedly large number of words for 'snow' in the Inuit languages, an example that was later contested as a misrepresentation.[45] Another is the Hopi language's words for water: one indicating drinking water in a container, another indicating a natural body of water.[46] These examples of polysemy served the double purpose of showing that non-European languages sometimes made more specific semantic distinctions than European languages and that direct translation between two languages, even of seemingly basic concepts such as snow or water, is not always possible.[47]

Another example comes from Whorf's experience as a chemical engineer working for an insurance company as a fire inspector.[45] While inspecting a chemical plant, he observed that the plant had two storage rooms for gasoline barrels, one for the full barrels and one for the empty ones. He further noticed that while no employees smoked cigarettes in the room for full barrels, no one minded smoking in the room with empty barrels, although this was potentially much more dangerous because of the flammable vapors still in the barrels. He concluded that the use of the word empty in association with the barrels had led the workers to unconsciously regard them as harmless, although consciously they were probably aware of the risk of explosion. This example was later criticized by Lenneberg[41] as not actually demonstrating causality between the use of the word empty and the action of smoking, but instead being an example of circular reasoning.
Pinker, in The Language Instinct, ridiculed this example, claiming that this was a failing of human insight rather than of language.[43]

Whorf's most elaborate argument for linguistic relativity regarded what he believed to be a fundamental difference in the understanding of time as a conceptual category among the Hopi.[48] He argued that, in contrast to English and other SAE languages, Hopi does not treat the flow of time as a sequence of distinct, countable instances, like "three days" or "five years", but rather as a single process, and that consequently it has no nouns referring to units of time as SAE speakers understand them. He proposed that this view of time was fundamental to Hopi culture and explained certain Hopi behavioral patterns. Ekkehart Malotki later claimed that he had found no evidence of Whorf's claims among 1980s-era Hopi speakers, nor in historical documents dating back to the arrival of Europeans. Malotki used evidence from archaeological data, calendars, historical documents, and modern speech; he concluded that there was no evidence that the Hopi conceptualize time in the way Whorf suggested. Many universalist scholars such as Pinker consider Malotki's study a final refutation of Whorf's claim about Hopi, whereas relativist scholars such as John A. Lucy and Penny Lee criticized Malotki's study for mischaracterizing Whorf's claims and for forcing Hopi grammar into a model of analysis that does not fit the data.[49]

Whorf's argument about Hopi speakers' conceptualization of time is an example of the structure-centered method of research into linguistic relativity, which Lucy identified as one of three main types of research on the topic.[50] The "structure-centered" method starts with a language's structural peculiarity and examines its possible ramifications for thought and behavior. The defining example is Whorf's observation of discrepancies between the grammar of time expressions in Hopi and English.
More recent research in this vein is Lucy's work describing how usage of the categories of grammatical number and of numeral classifiers in the Mayan language Yucatec results in Mayan speakers classifying objects according to material rather than shape, as preferred by English speakers.[51] However, philosophers including Donald Davidson and Jason Josephson Storm have argued that Whorf's Hopi examples are self-refuting, as Whorf had to translate Hopi terms into English in order to explain why they are untranslatable.[52]

Whorf died in 1941 at age 44, leaving multiple unpublished papers. His ideas were continued by linguists and anthropologists such as Hoijer and Lee, who both continued investigating the effect of language on habitual thought, and Trager, who prepared a number of Whorf's papers for posthumous publication. The most important event for the dissemination of Whorf's ideas to a larger public was the publication in 1956 of his major writings on the topic of linguistic relativity in a single volume titled Language, Thought and Reality.

In 1953, Eric Lenneberg criticized Whorf's examples from an objectivist philosophy of language, claiming that languages are principally meant to represent events in the real world and that, even though languages express these ideas in various ways, the meanings of such expressions, and therefore the thoughts of the speaker, are equivalent. He argued that Whorf's English descriptions of a Hopi speaker's idea of time were in fact translations of the Hopi concept into English, thereby disproving linguistic relativity. However, Whorf was concerned with how the habitual use of language influences habitual behavior, rather than with translatability. Whorf's point was that while English speakers may be able to understand how a Hopi speaker thinks, they do not think in that way.[53]

Lenneberg's main criticism of Whorf's works was that he never showed the necessary connection between a linguistic phenomenon and a mental phenomenon.
With Brown, Lenneberg proposed that proving such a connection required directly matching linguistic phenomena with behavior. They assessed linguistic relativity experimentally and published their findings in 1954. Since neither Sapir nor Whorf had ever stated a formal hypothesis, Brown and Lenneberg formulated their own. Their two tenets were (i) "the world is differently experienced and conceived in different linguistic communities" and (ii) "language causes a particular cognitive structure".[54] Brown later developed these into the so-called "weak" and "strong" formulations. Brown's formulations became widely known and were retrospectively attributed to Whorf and Sapir, although the second formulation, verging on linguistic determinism, was never advanced by either of them.

Joshua Fishman argued that Whorf's true position was largely overlooked. In 1978, he suggested that Whorf was a "neo-Herderian champion"[56] and in 1982, he proposed "Whorfianism of the third kind" in an attempt to reemphasize what he claimed was Whorf's real interest, namely the intrinsic value of "little peoples" and "little languages".[57] Whorf had criticized Ogden's Basic English thus:

But to restrict thinking to the patterns merely of English [...] is to lose a power of thought which, once lost, can never be regained. It is the 'plainest' English which contains the greatest number of unconscious assumptions about nature. [...] We handle even our plain English with much greater effect if we direct it from the vantage point of a multilingual awareness.[58]

Where Brown's weak version of the linguistic relativity hypothesis proposes that language influences thought, and the strong version that language determines thought, Fishman's "Whorfianism of the third kind" proposes that language is a key to culture.

The Leiden school is a linguistic theory that models languages as parasites.
Notable proponent Frederik Kortlandt, in a 1985 paper outlining Leiden school theory, advocates for a form of linguistic relativity: "The observation that in all Yuman languages the word for 'work' is a loan from Spanish should be a major blow to any current economic theory." In the next paragraph, he quotes directly from Sapir: "Even in the most primitive cultures the strategic word is likely to be more powerful than the direct blow."[59] The publication of the 1996 anthology Rethinking Linguistic Relativity, edited by Gumperz and Levinson, began a new period of linguistic relativity studies that emphasized cognitive and social aspects. The book included studies on linguistic relativity and universalist traditions. Levinson documented significant linguistic relativity effects in the different linguistic conceptualization of spatial categories in different languages. For example, men speaking the Guugu Yimithirr language in Queensland gave accurate navigation instructions using a compass-like system of north, south, east and west, along with a hand gesture pointing to the starting direction.[60] Lucy defines this method as "domain-centered" because researchers select a semantic domain and compare it across linguistic and cultural groups.[50] Space is another semantic domain that has proven fruitful for linguistic relativity studies.[61] Spatial categories vary greatly across languages. Speakers rely on the linguistic conceptualization of space in performing many ordinary tasks. Levinson and others reported three basic spatial categorizations. While many languages use combinations of them, some languages exhibit only one type and related behaviors. For example, Guugu Yimithirr only uses absolute directions when describing spatial relations—the position of everything is described by using the cardinal directions.
Speakers define a location as "north of the house", while an English speaker may use relative positions, saying "in front of the house" or "to the left of the house".[62] Separate studies by Bowerman and Slobin analyzed the role of language in cognitive processes. Bowerman showed that certain cognitive processes did not use language to any significant extent and therefore could not be subject to linguistic relativity.[63] Slobin described another kind of cognitive process that he named "thinking for speaking": the kind of process in which perceptional data and other kinds of prelinguistic cognition are translated into linguistic terms for communication. These, Slobin argues, are the kinds of cognitive process that are the basis of linguistic relativity.[64] Since Brown and Lenneberg believed that the objective reality denoted by language was the same for speakers of all languages, they decided to test how different languages codified the same message differently and whether differences in codification could be proven to affect behavior. Brown and Lenneberg designed experiments involving the codification of colors. In their first experiment, they investigated whether it was easier for speakers of English to remember color shades for which they had a specific name than to remember colors that were not as easily definable by words. This allowed them to compare the linguistic categorization directly to a non-linguistic task. In a later experiment, speakers of two languages that categorize colors differently (English and Zuni) were asked to recognize colors. In this manner, it could be determined whether the differing color categories of the two speakers would determine their ability to recognize nuances within color categories.
Brown and Lenneberg found that Zuni speakers, who classify green and blue together as a single color, did have trouble recognizing and remembering nuances within the green/blue category.[65] This method, which Lucy later classified as domain-centered,[50] is acknowledged to be sub-optimal, because color perception, unlike other semantic domains, is hardwired into the neural system and as such is subject to more universal restrictions than other semantic domains. The German ophthalmologist Hugo Magnus conducted a similar study during the 1870s, circulating a questionnaire to missionaries and traders with ten standardized color samples and instructions for using them. These instructions contained an explicit warning that failure of a language to distinguish lexically between two colors did not necessarily imply that speakers of that language did not distinguish the two colors perceptually. Magnus received completed questionnaires on twenty-five African, fifteen Asian, three Australian, and two European languages. He concluded in part, "As regards the range of the color sense of the primitive peoples tested with our questionnaire, it appears in general to remain within the same bounds as the color sense of the civilized nations. At least, we could not establish a complete lack of the perception of the so-called main colors as a special racial characteristic of any one of the tribes investigated for us. We consider red, yellow, green, and blue as the main representatives of the colors of long and short wavelength; among the tribes we tested not a one lacks the knowledge of any of these four colors" (Magnus 1880, p. 6, as trans. in Berlin and Kay 1969, p. 141). Magnus did find widespread lexical neutralization of green and blue, that is, a single word covering both these colors, as have all subsequent comparative studies of color lexicons.[66] Brown and Lenneberg's study began a tradition of investigation of linguistic relativity through color terminology.
The studies showed a correlation between color term numbers and ease of recall in both Zuni and English speakers. Researchers attributed this to focal colors having greater codability than less focal colors, and not to linguistic relativity effects. Berlin and Kay found universal typological color principles that are determined by biological rather than linguistic factors.[67] This study sparked studies into typological universals of color terminology. Researchers such as Lucy,[50] Saunders[68] and Levinson[69] argued that Berlin and Kay's study does not refute linguistic relativity in color naming, because of unsupported assumptions in their study (such as whether all cultures in fact have a clearly defined category of "color") and because of related data problems. Researchers such as MacLaury continued investigation into color naming. Like Berlin and Kay, MacLaury concluded that the domain is governed mostly by physical-biological universals.[70][71] Studies by Berlin and Kay continued Lenneberg's color research. They studied color terminology formation and showed clear universal trends in color naming. For example, they found that even though languages have different color terminologies, they generally recognize certain hues as more focal than others.
They showed that in languages with few color terms, it is predictable from the number of terms which hues are chosen as focal colors: for example, languages with only three color terms always have the focal colors black, white, and red.[67] The fact that what had been believed to be random differences between color naming in different languages could be shown to follow universal patterns was seen as a powerful argument against linguistic relativity.[72] Berlin and Kay's research has since been criticized by relativists such as Lucy, who argued that Berlin and Kay's conclusions were skewed by their insistence that color terms encode only color information.[51] This, Lucy argues, made them unaware of the instances in which color terms provided other information that might be considered examples of linguistic relativity. Universalist scholars began a period of dissent from ideas about linguistic relativity. Lenneberg was one of the first cognitive scientists to begin development of the universalist theory of language that was formulated by Chomsky as universal grammar, effectively arguing that all languages share the same underlying structure. The Chomskyan school also includes the belief that linguistic structures are largely innate and that what are perceived as differences between specific languages are surface phenomena that do not affect the brain's universal cognitive processes. This theory became the dominant paradigm of American linguistics from the 1960s through the 1980s, while linguistic relativity became the object of ridicule.[73] Other universalist researchers dedicated themselves to dispelling other aspects of linguistic relativity, often attacking Whorf's specific examples. For example, Malotki's monumental study of time expressions in Hopi presented many examples that challenged Whorf's "timeless" interpretation of Hopi language and culture,[74] but seemingly failed to address the linguistic relativist argument actually posed by Whorf (i.e.
that the understanding of time by native Hopi speakers differed from that of speakers of European languages due to the differences in the organization and construction of their respective languages; Whorf never claimed that Hopi speakers lacked any concept of time).[75] Malotki himself acknowledges that the conceptualizations are different, but because he ignores Whorf's use of quotes around the word "time" and the qualifier "what we call", takes Whorf to be arguing that the Hopi have no concept of time at all.[76][77][78] Currently, many believers of the universalist school of thought still oppose linguistic relativity. For example, Pinker argues in The Language Instinct that thought is independent of language, that language is itself meaningless in any fundamental way to human thought, and that human beings do not even think in "natural" language, i.e. any language that we actually communicate in; rather, we think in a meta-language, preceding any natural language, termed "mentalese". Pinker attacks what he terms "Whorf's radical position", declaring, "the more you examine Whorf's arguments, the less sense they make".[43] Pinker and other universalists have been accused by relativists of misrepresenting Whorf's ideas and committing the straw man fallacy.[79][80][53] During the late 1980s and early 1990s, advances in cognitive psychology and cognitive linguistics renewed interest in the Sapir–Whorf hypothesis.[81] One of those who adopted a more Whorfian philosophy was George Lakoff. He argued that language is often used metaphorically and that languages use different cultural metaphors that reveal something about how speakers of that language think. For example, English employs conceptual metaphors likening time to money, so that time can be saved and spent and invested, whereas other languages do not talk about time in that manner.
Other such metaphors are common to many languages because they are based on general human experience, for example, metaphors associating up with good and bad with down. Lakoff also argued that metaphor plays an important part in political debates such as the "right to life" or the "right to choose"; or "illegal aliens" or "undocumented workers".[82] An unpublished study by Boroditsky et al. in 2003 reported finding empirical evidence favoring the hypothesis and demonstrating that differences in languages' systems of grammatical gender can affect the way speakers of those languages think about objects. Speakers of Spanish and German (which have different gender systems) were asked to use adjectives to describe various objects designated by words that were either masculine or feminine in their respective languages. Speakers tended to describe objects in ways that were consistent with the gender of the noun in their language, indicating that the gender system of a language can influence speakers' perceptions of objects. Despite numerous citations, the experiment was criticised after the reported effects could not be replicated by independent trials.[83][84] Additionally, a large-scale data analysis using word embeddings of language models found no correlation between adjectives and inanimate noun genders,[85] while another study using large text corpora found a slight correlation between the gender of animate and inanimate nouns and their adjectives as well as verbs by measuring their mutual information.[86] Colin Murray Turbayne also argued that the pervasive use of ancient "dead metaphors" by researchers within different linguistic traditions has contributed to needless confusion in the development of modern empirical theories over time.[87] He points to several examples within the Romance and Germanic languages of the subtle manner in which mankind has become unknowingly victimized by such "unmasked metaphors".
Cases include the incorporation of mechanistic metaphors first introduced by René Descartes and Isaac Newton during the 17th century into scientific theories which were subsequently developed by George Berkeley, David Hume and Immanuel Kant during the 18th century;[88][89][90] and the influence exerted by Platonic metaphors in the dialogue Timaeus upon the development of contemporary theories of language in modern times.[91][92] In his 1987 book Women, Fire, and Dangerous Things: What Categories Reveal About the Mind,[53] Lakoff reappraised linguistic relativity and especially Whorf's ideas about how linguistic categorization represents and/or influences mental categories. He concluded that the debate had been confused. He identified four parameters on which researchers differed in their opinions about what constitutes linguistic relativity, and concluded that many of Whorf's critics had criticized him using novel definitions of linguistic relativity, rendering their criticisms moot. Researchers such as Boroditsky, Choi, Majid, Lucy and Levinson believe that language influences thought in more limited ways than the broadest early claims. Researchers examine the interface between thought (or cognition), language and culture and describe the relevant influences. They use experimental data to back up their conclusions.[93][94] Kay ultimately concluded that "[the] Whorf hypothesis is supported in the right visual field but not the left".[95] His findings show that accounting for brain lateralization offers another perspective. Recent studies have also used a "behavior-based" method, which starts by comparing behavior across linguistic groups and then searches for causes for that behavior in the linguistic system.[50] In an early example of this method, Whorf attributed the occurrence of fires at a chemical plant to the workers' use of the word 'empty' to describe barrels containing only explosive vapors.
More recently, Bloom noticed that speakers of Chinese had unexpected difficulties answering counterfactual questions posed to them in a questionnaire. He concluded that this was related to the way in which counter-factuality is marked grammatically in Chinese. Other researchers attributed this result to Bloom's flawed translations.[96] Strømnes examined why Finnish factories had a greater occurrence of work-related accidents than similar Swedish ones. He concluded that cognitive differences between the grammatical usage of Swedish prepositions and Finnish cases could have caused Swedish factories to pay more attention to the work process while Finnish factory organizers paid more attention to the individual worker.[97] Everett's work on the Pirahã language of the Brazilian Amazon[98] found several peculiarities that he interpreted as corresponding to linguistically rare features, such as a lack of numbers and color terms in the way those are otherwise defined and the absence of certain types of clauses. Everett's conclusions were met with skepticism from universalists[99] who claimed that the linguistic deficit is explained by the lack of need for such concepts.[100] Recent research with non-linguistic experiments in languages with different grammatical properties (e.g., languages with and without numeral classifiers or with different gender grammar systems) showed that language differences in human categorization are due to such differences.[101] Experimental research suggests that this linguistic influence on thought diminishes over time, as when speakers of one language are exposed to another.[102] Research on time-space congruency suggests that temporal perception is shaped by spatial metaphors embedded in language. Casasanto and Boroditsky (2008) found that people often use spatial metaphors to conceptualize time, linking longer distances with longer durations.[103] Research has shown that linguistic differences can influence the perception of time.
Swedish, like English, tends to describe time in terms of spatial distance (e.g., "a long meeting"), whereas Spanish often uses quantity-based metaphors (e.g., "a big meeting"). These linguistic patterns correlate with differences in how speakers estimate temporal durations: Swedish speakers are more influenced by spatial length, while Spanish speakers are more sensitive to volume.[104] Expanding on this, research on time-space congruency suggests that temporal perception is shaped by spatial metaphors embedded in language. In many languages, time is conceptualized along a horizontal axis (e.g., "looking forward to the future" in English). However, Mandarin speakers also employ vertical metaphors for time, referring to earlier events as "up" and later events as "down".[105] Experiments have shown that Mandarin speakers are quicker to recognize temporal sequences when they are presented vertically, whereas English speakers exhibit no such bias. Kashima and Kashima observed a correlation between the perceived individualism or collectivism in the social norms of a given country and the tendency to omit pronouns in the country's language. They argued that explicit reference to "you" and "I" reinforces a distinction between the self and the other in the speaker.[106] Research also suggests that this structural difference influences how speakers attribute intentionality in events. Fausey and Boroditsky (2010) conducted experiments comparing how English and Spanish speakers describe accidental versus intentional actions. Their results showed that English speakers, who are accustomed to using explicit pronouns, were more likely to specify the agent responsible for an accidental event (e.g., "John broke the vase").
In contrast, Spanish speakers, who frequently omit pronouns, were more likely to use agent-neutral descriptions for accidental events (e.g., "The vase broke").[107] A 2013 study found that those who speak "futureless" languages with no grammatical marking of the future tense save more, retire with more wealth, smoke less, practice safer sex, and are less obese than those who do not.[108] This effect has come to be termed the linguistic-savings hypothesis and has been replicated in several cross-cultural and cross-country studies. However, a study of Chinese, which can be spoken both with and without the grammatical future marking "will", found that subjects do not behave more impatiently when "will" is used repetitively. This laboratory-based finding of elective variation within a single language does not refute the linguistic-savings hypothesis, but some have suggested that it shows the effect may be due to culture or other non-linguistic factors.[109] Psycholinguistic studies explored motion perception, emotion perception, object representation and memory.[110][111][112][113] The gold standard of psycholinguistic studies on linguistic relativity is now finding non-linguistic cognitive differences in speakers of different languages (thus rendering inapplicable Pinker's criticism that linguistic relativity is "circular"). Recent work with bilingual speakers attempts to distinguish the effects of language from those of culture on bilingual cognition, including perceptions of time, space, motion, colors and emotion.[114] Researchers described differences between bilinguals and monolinguals in perception of color,[115] representations of time[116][117][118] and other elements of cognition.[119] Linguistic relativity inspired others to consider whether thought and emotion could be influenced by manipulating language.
The question bears on philosophical, psychological, linguistic and anthropological questions. A major question is whether human psychological faculties are mostly innate or whether they are mostly a result of learning, and hence subject to cultural and social processes such as language. The innate opinion is that humans share the same set of basic faculties, that variability due to cultural differences is less important, and that the human mind is a mostly biological construction, so all humans who share the same neurological configuration can be expected to have similar cognitive patterns. Multiple alternatives have advocates. The contrary constructivist position holds that human faculties and concepts are largely influenced by socially constructed and learned categories, without many biological restrictions. Another variant is idealist, which holds that human mental capacities are generally unrestricted by biological-material structures. Another is essentialist, which holds that essential differences may influence the ways individuals or groups experience and conceptualize the world. Yet another is relativist (cultural relativism), which sees different cultural groups as employing different conceptual schemes that are not necessarily compatible or commensurable, nor more or less in accord with external reality.[120] Another debate considers whether thought is a type of internal speech or is independent of and prior to language.[121] In the philosophy of language, the question addresses the relations between language, knowledge and the external world, and the concept of truth. Philosophers such as Putnam, Fodor, Davidson, and Dennett see language as directly representing entities from the objective world, and categorization as reflecting that world. Other philosophers (e.g. Quine, Searle, and Foucault) argue that categorization and conceptualization are subjective and arbitrary.
Another view, represented by Jason Storm, seeks a third way by emphasizing how language changes and imperfectly represents reality without being completely divorced from ontology.[122] Another question is whether language is a tool for representing and referring to objects in the world, or whether it is a system used to construct mental representations that can be communicated. Sapir and Whorf's contemporary Alfred Korzybski was independently developing his theory of general semantics, which was intended to use language's influence on thinking to maximize human cognitive abilities. Korzybski's thinking was influenced by logical philosophy such as Russell and Whitehead's Principia Mathematica and Wittgenstein's Tractatus Logico-Philosophicus.[123] Although Korzybski was not aware of Sapir and Whorf's writings, the philosophy was adopted by Whorf-admirer Stuart Chase, who fused Whorf's interest in cultural-linguistic variation with Korzybski's programme in his popular work "The Tyranny of Words". S. I. Hayakawa was a follower and popularizer of Korzybski's work, writing Language in Thought and Action. The general semantics philosophy influenced the development of neuro-linguistic programming (NLP), another therapeutic technique that seeks to use awareness of language use to influence cognitive patterns.[124] Korzybski independently described a "strong" version of the hypothesis of linguistic relativity:[125] We do not realize what tremendous power the structure of an habitual language has. It is not an exaggeration to say that it enslaves us through the mechanism of s[emantic] r[eactions] and that the structure which a language exhibits, and impresses upon us unconsciously, is automatically projected upon the world around us. In their fiction, authors such as Ayn Rand and George Orwell explored how linguistic relativity might be exploited for political purposes.
In Rand's Anthem, a fictive communist society removed the possibility of individualism by removing the word "I" from the language.[127] In Orwell's 1984, the authoritarian state created the language Newspeak to make it impossible for people to think critically about the government, or even to contemplate that they might be impoverished or oppressed, by reducing the language's vocabulary and thereby the range of thoughts its speakers could express.[128] Others have been fascinated by the possibilities of creating new languages that could enable new, and perhaps better, ways of thinking. Examples of such languages designed to explore the human mind include Loglan, explicitly designed by James Cooke Brown to test the linguistic relativity hypothesis by experimenting whether it would make its speakers think more logically. Suzette Haden Elgin, who was involved with the early development of neuro-linguistic programming, invented the language Láadan to explore linguistic relativity by making it easier to express what Elgin considered the female worldview, as opposed to Standard Average European languages, which she considered to convey a "male-centered" worldview.[129] John Quijada's language Ithkuil was designed to explore the limits of the number of cognitive categories a language can keep its speakers aware of at once.[130] Similarly, Sonja Lang's Toki Pona was developed according to a Taoist philosophy to explore how (or if) such a language would direct human thought.[131] APL programming language originator Kenneth E. Iverson believed that the Sapir–Whorf hypothesis applied to computer languages (without actually mentioning it by name). His Turing Award lecture, "Notation as a Tool of Thought", was devoted to this theme, arguing that more powerful notations aided thinking about computer algorithms.[132][133] The essays of Paul Graham explore similar themes, such as a conceptual hierarchy of computer languages, with more expressive and succinct languages at the top.
Thus, the so-called Blub paradox (after a hypothetical programming language of average complexity called Blub) says that anyone preferentially using some particular programming language will know that it is more powerful than some, but not that it is less powerful than others. The reason is that writing in some language means thinking in that language. Hence the paradox, because typically programmers are "satisfied with whatever language they happen to use, because it dictates the way they think about programs".[134] In a 2003 presentation at an open source convention, Yukihiro Matsumoto, creator of the programming language Ruby, said that one of his inspirations for developing the language was the science fiction novel Babel-17, based on the Whorf hypothesis.[135] Numerous examples of linguistic relativity have appeared in science fiction. Sociolinguistics affects some variables within language, including the manner in which words are pronounced, word selection in certain dialogue, context, and tone. It has been suggested that these effects[138] may have implications for linguistic relativity.
https://en.wikipedia.org/wiki/Linguistic_relativity
Application virtualization software refers to both application virtual machines and the software responsible for implementing them. Application virtual machines are typically used to allow application bytecode to run portably on many different computer architectures and operating systems. The application is usually run on the computer using an interpreter or just-in-time compilation (JIT). There are often several implementations of a given virtual machine, each covering a different set of functions. The table here summarizes the elements for which the virtual machine designs are intended to be efficient, not the list of abilities present in any implementation. Virtual machine instructions process data in local variables using a main model of computation, typically that of a stack machine, register machine, or random access machine, often called the memory machine. Use of these three methods is motivated by different tradeoffs in virtual machines vs. physical machines, such as ease of interpreting, compiling, and verifying for security. Memory management in these portable virtual machines is addressed at a higher level of abstraction than in physical machines. Some virtual machines, such as the popular Java virtual machine (JVM), are involved with addresses in such a way as to require safe automatic memory management, by allowing the virtual machine to trace pointer references and disallowing machine instructions from manually constructing pointers to memory. Other virtual machines, such as LLVM, are more like traditional physical machines, allowing direct use and manipulation of pointers. Common Intermediate Language (CIL) offers a hybrid in between, allowing both controlled use of memory (like the JVM, which allows safe automatic memory management) and an 'unsafe' mode that allows direct pointer manipulation in ways that can violate type boundaries and permissions.
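As a rough illustration of the stack-machine model mentioned above, a toy bytecode interpreter might look like the following sketch. The opcode names (`PUSH`, `ADD`, `MUL`) and the `(opcode, operand)` encoding are invented here for illustration and do not correspond to any real VM's instruction set.

```python
def run(bytecode):
    """Interpret a list of (opcode, operand) pairs on an operand stack."""
    stack = []
    for op, arg in bytecode:
        if op == "PUSH":       # push a constant onto the stack
            stack.append(arg)
        elif op == "ADD":      # pop two operands, push their sum
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "MUL":      # pop two operands, push their product
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
    return stack.pop()         # the result is left on top of the stack

# (2 + 3) * 4, expressed in postfix order as stack-machine code
program = [("PUSH", 2), ("PUSH", 3), ("ADD", None), ("PUSH", 4), ("MUL", None)]
print(run(program))  # 20
```

Note how operands are implicit (always the top of the stack), which keeps the instruction encoding compact and easy to verify; a register-machine design would instead name its operands explicitly in each instruction.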
Code security generally refers to the ability of the portable virtual machine to run code while offering it only a prescribed set of abilities. For example, the virtual machine might only allow the code access to a certain set of functions or data. The same controls over pointers that make automatic memory management possible and allow the virtual machine to ensure type-safe data access are used to assure that a code fragment is only allowed access to certain elements of memory and cannot bypass the virtual machine itself. Other security mechanisms are then layered on top, such as code verifiers, stack verifiers, and other methods. An interpreter allows programs made of virtual instructions to be loaded and run immediately, without a potentially costly compile into native machine instructions. Any virtual machine that can be run can be interpreted, so the column designation here refers to whether the design includes provisions for efficient interpreting (for common usage). Just-in-time compilation (JIT) refers to a method of compiling to native instructions at the latest possible time, usually immediately before or during the running of the program. The challenge of JIT is more one of implementation than of virtual machine design; however, modern designs have begun to make considerations to help efficiency. The simplest JIT methods simply compile to a code fragment similar to an offline compiler. However, more complex methods are often employed, which specialize compiled code fragments to parameters known only at runtime (see Adaptive optimization). Ahead-of-time compilation (AOT) refers to the more classic method of using a precompiler to generate a set of native instructions which do not change during the runtime of the program. Because aggressive compiling and optimizing can take time, a precompiled program may launch faster than one which relies on JIT alone for execution.
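The distinction between interpreting and compiling before execution can be sketched with a toy analogy. This is only an illustration: a real JIT emits native machine code, whereas this sketch merely translates an expression tree into a closure once, so that repeated calls skip the per-call tree walk; all names here are invented.

```python
def interpret(expr, x):
    """Walk the expression tree on every call (interpretation)."""
    if expr[0] == "const":
        return expr[1]
    if expr[0] == "var":
        return x
    if expr[0] == "add":
        return interpret(expr[1], x) + interpret(expr[2], x)
    if expr[0] == "mul":
        return interpret(expr[1], x) * interpret(expr[2], x)

def compile_expr(expr):
    """Translate the tree once into a closure that runs without re-walking it."""
    if expr[0] == "const":
        c = expr[1]
        return lambda x: c
    if expr[0] == "var":
        return lambda x: x
    lhs, rhs = compile_expr(expr[1]), compile_expr(expr[2])
    if expr[0] == "add":
        return lambda x: lhs(x) + rhs(x)
    if expr[0] == "mul":
        return lambda x: lhs(x) * rhs(x)

# 3*x + 1 as an expression tree
tree = ("add", ("mul", ("const", 3), ("var",)), ("const", 1))
fast = compile_expr(tree)
print(interpret(tree, 5), fast(5))  # both evaluate to 16
```

The one-time translation cost of `compile_expr` pays off only if the expression is evaluated many times, which mirrors the interpretation-vs-JIT tradeoff described above.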
JVM implementations have mitigated this startup cost by initially interpreting to speed launch times, until native code fragments can be generated by JIT. Shared libraries are a facility to reuse segments of native code across multiple running programs. In modern operating systems, this generally means using virtual memory to share the memory pages containing a shared library across different processes, which are protected from each other via memory protection. Notably, aggressive JIT methods such as adaptive optimization often produce code fragments unsuitable for sharing across processes or successive runs of the program, requiring a tradeoff between the efficiencies of precompiled and shared code and the advantages of adaptively specialized code. For example, several design provisions of CIL are present to allow for efficient shared libraries, possibly at the cost of more specialized JIT code. The JVM implementation on OS X uses a Java Shared Archive[3] to provide some of the benefits of shared libraries. In addition to the portable virtual machines described above, virtual machines are often used as an execution model for individual scripting languages, usually by an interpreter. This table lists specific virtual machine implementations, both of the above portable virtual machines, and of scripting language virtual machines.
https://en.wikipedia.org/wiki/Comparison_of_application_virtual_machines
Rule 30 is an elementary cellular automaton introduced by Stephen Wolfram in 1983.[2] Using Wolfram's classification scheme, Rule 30 is a Class III rule, displaying aperiodic, chaotic behaviour. This rule is of particular interest because it produces complex, seemingly random patterns from simple, well-defined rules. Because of this, Wolfram believes that Rule 30, and cellular automata in general, are the key to understanding how simple rules produce complex structures and behaviour in nature. For instance, a pattern resembling Rule 30 appears on the shell of the widespread cone snail species Conus textile. Rule 30 has also been used as a random number generator in Mathematica,[3] and has also been proposed as a possible stream cipher for use in cryptography.[4][5] Rule 30 is so named because 30 is the smallest Wolfram code which describes its rule set (as described below). The mirror image, complement, and mirror complement of Rule 30 have Wolfram codes 86, 135, and 149, respectively. In all of Wolfram's elementary cellular automata, an infinite one-dimensional array of cellular automaton cells with only two states is considered, with each cell in some initial state. At discrete time intervals, every cell simultaneously changes state based on its current state and the state of its two neighbors. For Rule 30, the rule set which governs the next state of the automaton is: 111 → 0, 110 → 0, 101 → 0, 100 → 1, 011 → 1, 010 → 1, 001 → 1, 000 → 0. If the left, center, and right cells are denoted (p, q, r), then the corresponding formula for the next state of the center cell can be expressed as p xor (q or r). It is called Rule 30 because, in binary, 00011110₂ = 30. The following diagram shows the pattern created, with cells colored based on the previous state of their neighborhood. Darker colors represent "1" and lighter colors represent "0". Time increases down the vertical axis. The following pattern emerges from an initial state in which a single cell with state 1 (shown as black) is surrounded by cells with state 0 (white).
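Because the rule table is determined entirely by the Wolfram code, it can be derived mechanically; a minimal sketch in Python:

```python
# Derive Rule 30's lookup table from its Wolfram code (30 = 0b00011110)
# and check it against the closed form p XOR (q OR r).
WOLFRAM_CODE = 30
table = {}
for i in range(8):
    p, q, r = (i >> 2) & 1, (i >> 1) & 1, i & 1   # neighborhood (left, center, right)
    table[(p, q, r)] = (WOLFRAM_CODE >> i) & 1    # bit i of 30 gives the next state

for (p, q, r), nxt in sorted(table.items(), reverse=True):
    assert nxt == p ^ (q | r)                     # the closed-form expression
    print(f"{p}{q}{r} -> {nxt}")
```

The same construction works for any of the 256 elementary rules: the Wolfram code is simply the rule table read off as an 8-bit number.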
Here, the vertical axis represents time and any horizontal cross-section of the image represents the state of all the cells in the array at a specific point in the pattern's evolution. Several motifs are present in this structure, such as the frequent appearance of white triangles and a well-defined striped pattern on the left side; however, the structure as a whole has no discernible pattern. The number of black cells at generation n is given by the sequence and is approximately n.[citation needed] Rule 30 meets rigorous definitions of chaos proposed by Devaney and Knudson. In particular, according to Devaney's criteria, Rule 30 displays sensitive dependence on initial conditions (two initial configurations that differ only in a small number of cells rapidly diverge), its periodic configurations are dense in the space of all configurations, according to the Cantor topology on the space of configurations (there is a periodic configuration with any finite pattern of cells), and it is mixing (for any two finite patterns of cells, there is a configuration containing one pattern that eventually leads to a configuration containing the other pattern). According to Knudson's criteria, it displays sensitive dependence and there is a dense orbit (an initial configuration that eventually displays any finite pattern of cells). Both of these characterizations of the rule's chaotic behavior follow from a simpler and easy-to-verify property of Rule 30: it is left permutative, meaning that if two configurations C and D differ in the state of a single cell at position i, then after a single step the new configurations will differ at cell i + 1.[6] As is apparent from the image above, Rule 30 generates seeming randomness despite the lack of anything that could reasonably be considered random input.
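Left permutativity lends itself to a direct empirical check; a sketch (the window size, seed, and flip position are arbitrary choices):

```python
# Flip one cell of a random configuration and verify that, after one Rule 30
# step, the two images differ at the cell one position to the right of the flip.
import random

def step(cells):
    # One synchronous Rule 30 update on a finite window; cells outside the
    # window are treated as 0, so comparisons stay in the interior.
    n = len(cells)
    get = lambda j: cells[j] if 0 <= j < n else 0
    return [get(j - 1) ^ (get(j) | get(j + 1)) for j in range(n)]

random.seed(0)
C = [random.randint(0, 1) for _ in range(50)]
i = 20
D = C.copy()
D[i] ^= 1                                  # C and D differ only at position i
assert step(C)[i + 1] != step(D)[i + 1]    # they must differ at i + 1 afterwards
print("configurations differ at position", i + 1, "after one step")
```

The check succeeds for any configuration and any flip position, because the next state at cell i + 1 is C[i] XOR (C[i+1] OR C[i+2]): flipping C[i] always flips it.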
Stephen Wolfram proposed using its center column as a pseudorandom number generator (PRNG); it passes many standard tests for randomness, and Wolfram previously used this rule in the Mathematica product for creating random integers.[7] Sipper and Tomassini have shown that, as a random number generator, Rule 30 exhibits poor behavior on a chi-squared test when applied to all the rule columns, as compared to other cellular automaton-based generators.[8] The authors also expressed their concern that "The relatively low results obtained by the rule 30 CA may be due to the fact that we considered N random sequences generated in parallel, rather than the single one considered by Wolfram."[9] The Cambridge North railway station is decorated with architectural panels displaying the evolution of Rule 30 (or equivalently, under black-white reversal, Rule 135).[10] The design was described by its architect as inspired by Conway's Game of Life, a different cellular automaton studied by Cambridge mathematician John Horton Conway, but is not actually based on Life.[11][12] The state update can be done quickly by bitwise operations, if the cell values are represented by the bits within one (or more) computer words.
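The word-level update can be sketched as follows; Python integers stand in for fixed-width machine words, and the width and starting pattern are arbitrary choices for illustration:

```python
# Bitwise Rule 30 update: all cells in a word advance in parallel.
# Cells beyond the word boundary are treated as 0 in this sketch.
WIDTH = 64
MASK = (1 << WIDTH) - 1

def rule30_step(row):
    left = row >> 1               # bit b takes the value of bit b+1 (printed-left neighbor)
    right = (row << 1) & MASK     # ... and bit b-1 (printed-right neighbor)
    return left ^ (row | right)   # p XOR (q OR r) for every cell at once

row = 1 << (WIDTH // 2)           # a single live cell near the middle
for _ in range(5):
    print(format(row, f"0{WIDTH}b").replace("0", ".").replace("1", "#"))
    row = rule30_step(row)
```

Each step costs a handful of shifts and logical operations regardless of how many cells the word holds, which is why this representation is fast.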
https://en.wikipedia.org/wiki/Rule_30
An open service interface definition (OSID) is a programmatic interface specification describing a service. These interfaces are specified by the Open Knowledge Initiative (OKI) to implement a service-oriented architecture (SOA) to achieve interoperability among applications across a varied base of underlying and changing technologies. To preserve the investment in software engineering, program logic is separated from underlying technologies through the use of software interfaces, each of which defines a contract between a service consumer and a service provider. This separation is the basis of any valid SOA. While some methods define the service interface boundary at a protocol or server level, OSIDs place the boundary at the application level to effectively insulate the consumer from protocols, server identities, and utility libraries that are in the domain of a service provider, resulting in software which is easier to develop, longer-lasting, and usable across a wider array of computing environments. OSIDs assist in software design and development by breaking up the problem space across service interface boundaries. Because network communication issues are addressed within a service provider and below the interface, there is no assumption that every service provider implements a remote communications protocol (though many do). OSIDs are also used for communication and coordination among the various components of complex software, providing a means of organizing design and development activities for simplified project management. OSID providers (implementations) are often reused across a varied set of applications. Once software is made to understand the interface contract for a service, other compliant implementations may be used in its place. This achieves reusability at a high level (a service level) and also serves to easily scale software written for smaller, more dedicated purposes.
An OSID provider implementation may be composed of an arbitrary number of other OSID providers. This layering technique is an obvious means of abstraction. When all the OSID providers implement the same service, this is called an adapter pattern. Adapter patterns are powerful techniques to federate, multiplex, or bridge different services contracting from the same interface, without modification to the application.
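The federation idea can be sketched as follows. This is a hypothetical illustration, not actual OKI code: all interface and class names here are invented. Two providers implement the same service contract, and a federating adapter exposes that same contract while delegating to both, so consumer code never changes.

```python
# Hypothetical sketch of the adapter/federation pattern over a shared contract.
from abc import ABC, abstractmethod

class RepositoryService(ABC):          # the service contract both sides agree on
    @abstractmethod
    def find(self, query: str) -> list: ...

class LocalRepository(RepositoryService):
    def __init__(self, items):
        self.items = items
    def find(self, query):
        return [i for i in self.items if query in i]

class FederatingAdapter(RepositoryService):
    """Implements RepositoryService by multiplexing other providers."""
    def __init__(self, providers):
        self.providers = providers
    def find(self, query):
        results = []
        for p in self.providers:       # the consumer is unaware of this fan-out
            results.extend(p.find(query))
        return results

repo = FederatingAdapter([LocalRepository(["alpha", "beta"]),
                          LocalRepository(["beta-2", "gamma"])])
print(repo.find("beta"))  # ['beta', 'beta-2']
```

Because `FederatingAdapter` itself satisfies the contract, it can be handed to any consumer, or nested inside yet another adapter, without the application knowing.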
https://en.wikipedia.org/wiki/Open_Service_Interface_Definitions
In economics, the Jevons paradox (/ˈdʒɛvənz/; sometimes Jevons effect) occurs when technological advancements make a resource more efficient to use (thereby reducing the amount needed for a single application); however, as the cost of using the resource drops, if demand is sufficiently price-elastic, overall demand increases, causing total resource consumption to rise.[1][2][3][4] Governments have typically expected efficiency gains to lower resource consumption, rather than anticipating possible increases due to the Jevons paradox.[5] In 1865, the English economist William Stanley Jevons observed that technological improvements that increased the efficiency of coal use led to the increased consumption of coal in a wide range of industries. He argued that, contrary to common intuition, technological progress could not be relied upon to reduce fuel consumption.[6][7] The issue has been re-examined by modern economists studying consumption rebound effects from improved energy efficiency. In addition to reducing the amount needed for a given use, improved efficiency also lowers the relative cost of using a resource, which increases the quantity demanded. This may counteract (to some extent) the reduction in use from improved efficiency. Additionally, improved efficiency increases real incomes and accelerates economic growth, further increasing the demand for resources. The Jevons paradox occurs when the effect from increased demand predominates, and the improved efficiency results in a faster rate of resource utilization.[7] Considerable debate exists about the size of the rebound in energy efficiency and the relevance of the Jevons paradox to energy conservation.
Some dismiss the effect, while others worry that it may be self-defeating to pursue sustainability by increasing energy efficiency.[5] Some environmental economists have proposed that efficiency gains be coupled with conservation policies that keep the cost of use the same (or higher) to avoid the Jevons paradox.[8] Conservation policies that increase the cost of use (such as cap and trade or green taxes) can be used to control the rebound effect.[9] The Jevons paradox was first described by the English economist William Stanley Jevons in his 1865 book The Coal Question. Jevons observed that England's consumption of coal soared after James Watt introduced the Watt steam engine, which greatly improved the efficiency of the coal-fired steam engine from Thomas Newcomen's earlier design. Watt's innovations made coal a more cost-effective power source, leading to the increased use of the steam engine in a wide range of industries. This in turn increased total coal consumption, even as the amount of coal required for any particular application fell. Jevons argued that improvements in fuel efficiency tend to increase (rather than decrease) fuel use, writing: "It is a confusion of ideas to suppose that the economical use of fuel is equivalent to diminished consumption. The very contrary is the truth."[6] At that time, many in Britain worried that coal reserves were rapidly dwindling, but some experts opined that improving technology would reduce coal consumption. Jevons argued that this view was incorrect, as further increases in efficiency would tend to increase the use of coal.
Hence, improving technology would tend to increase the rate at which England's coal deposits were being depleted, and could not be relied upon to solve the problem.[6][7] Although Jevons originally focused on coal, the concept has since been extended to other resources, e.g., water usage.[10] The Jevons paradox is also found in socio-hydrology, in the safe development paradox called the reservoir effect, where construction of a reservoir to reduce the risk of water shortage can instead exacerbate that risk, as increased water availability leads to more development and hence more water consumption.[11] Economists have observed that consumers tend to travel more when their cars are more fuel efficient, causing a 'rebound' in the demand for fuel.[12] An increase in the efficiency with which a resource (e.g., fuel) is used causes a decrease in the cost of using that resource when measured in terms of what it can achieve (e.g., travel). Generally speaking, a decrease in the cost (or price) of a good or service will increase the quantity demanded (the law of demand). With a lower cost for travel, consumers will travel more, increasing the demand for fuel. This increase in demand is known as the rebound effect, and it may or may not be large enough to offset the original drop in fuel use from the increased efficiency. The Jevons paradox occurs when the rebound effect is greater than 100%, exceeding the original efficiency gains.[7] The size of the direct rebound effect is dependent on the price elasticity of demand for the good.[13] In a perfectly competitive market where fuel is the sole input used, if the price of fuel remains constant but efficiency is doubled, the effective price of travel would be halved (twice as much travel can be purchased). If in response, the amount of travel purchased more than doubles (i.e., demand is price elastic), then fuel consumption would increase, and the Jevons paradox would occur.
If demand is price inelastic, the amount of travel purchased would less than double, and fuel consumption would decrease. However, goods and services generally use more than one type of input (e.g. fuel, labour, machinery), and other factors besides input cost may also affect price. These factors tend to reduce the rebound effect, making the Jevons paradox less likely to occur.[7] As an example of where the paradox did not occur, large improvements in farming productivity (including the Third Agricultural Revolution) led to lower food prices but did not result in increased demand for food. (Demand for food is inelastic.) This instead led to lower employment in the farming sector, which declined from 40% of Americans in 1900 to less than 2% in 2024.[14] The following conditions are necessary for a Jevons paradox to occur:[14] In the 1980s, economists Daniel Khazzoom and Leonard Brookes revisited the Jevons paradox for the case of society's energy use. Brookes, then chief economist at the UK Atomic Energy Authority, argued that attempts to reduce energy consumption by increasing energy efficiency would simply raise demand for energy in the economy as a whole. Khazzoom focused on the narrower point that the potential for rebound was ignored in mandatory performance standards for domestic appliances being set by the California Energy Commission.[15][16] In 1992, the economist Harry Saunders dubbed the hypothesis that improvements in energy efficiency work to increase (rather than decrease) energy consumption the Khazzoom–Brookes postulate, and argued that the hypothesis is broadly supported by neoclassical growth theory (the mainstream economic theory of capital accumulation, technological progress and long-run economic growth). Saunders showed that the Khazzoom–Brookes postulate occurs in the neoclassical growth model under a wide range of assumptions.[15][17] According to Saunders, increased energy efficiency tends to increase energy consumption by two means.
First, increased energy efficiency makes the use of energy relatively cheaper, thus encouraging increased use (the direct rebound effect). Second, increased energy efficiency increases real incomes and leads to increased economic growth, which pulls up energy use for the whole economy. At the microeconomic level (looking at an individual market), even with the rebound effect, improvements in energy efficiency usually result in reduced energy consumption.[18] That is, the rebound effect is usually less than 100%. However, at the macroeconomic level, more efficient (and hence comparatively cheaper) energy leads to faster economic growth, which increases energy use throughout the economy. Saunders argued that taking into account both microeconomic and macroeconomic effects, the technological progress that improves energy efficiency will tend to increase overall energy use.[15] Jevons warned that fuel efficiency gains tend to increase fuel use. However, this does not imply that improved fuel efficiency is worthless if the Jevons paradox occurs; higher fuel efficiency enables greater production and a higher material quality of life.[19] For example, a more efficient steam engine allowed the cheaper transport of goods and people that contributed to the Industrial Revolution. Nonetheless, if the Khazzoom–Brookes postulate is correct, increased fuel efficiency, by itself, will not reduce the rate of depletion of fossil fuels.[15] There is considerable debate about whether the Khazzoom–Brookes postulate is correct, and of the relevance of the Jevons paradox to energy conservation policy. Most governments, environmentalists and NGOs pursue policies that improve efficiency, holding that these policies will lower resource consumption and reduce environmental problems. Others, including many environmental economists, doubt this 'efficiency strategy' towards sustainability, and worry that efficiency gains may in fact lead to higher production and consumption.
They hold that for resource use to fall, efficiency gains should be coupled with other policies that limit resource use.[5][17][20] However, other environmental economists argue that, while the Jevons paradox may occur in some situations, the empirical evidence for its widespread applicability is limited.[21] The Jevons paradox is sometimes used to argue that energy conservation efforts are futile, for example, that more efficient use of oil will lead to increased demand, and will not slow the arrival or the effects of peak oil. This argument is usually presented as a reason not to enact environmental policies or pursue fuel efficiency (e.g., if cars are more efficient, it will simply lead to more driving).[22][23] Several points have been raised against this argument. First, in the context of a mature market such as for oil in developed countries, the direct rebound effect is usually small, and so increased fuel efficiency usually reduces resource use, other conditions remaining constant.[12][18][24] Second, even if increased efficiency does not reduce the total amount of fuel used, there remain other benefits associated with improved efficiency.
For example, increased fuel efficiency may mitigate the price increases, shortages and disruptions in the global economy associated with crude oil depletion.[25] Third, environmental economists have pointed out that fuel use will unambiguously decrease if increased efficiency is coupled with an intervention (e.g., a fuel tax) that keeps the cost of fuel use the same or higher.[8] The Jevons paradox indicates that increased efficiency by itself may not reduce fuel use, and that sustainable energy policy must rely on other types of government interventions as well.[9] As the imposition of conservation standards or other government interventions that increase cost-of-use do not display the Jevons paradox, they can be used to control the rebound effect.[9] To ensure that efficiency-enhancing technological improvements reduce fuel use, efficiency gains can be paired with government intervention that reduces demand (e.g., green taxes, cap and trade, or higher emissions standards). The ecological economists Mathis Wackernagel and William Rees have suggested that any cost savings from efficiency gains be "taxed away or otherwise removed from further economic circulation. Preferably they should be captured for reinvestment in natural capital rehabilitation."[8] By mitigating the economic effects of government interventions designed to promote ecologically sustainable activities, efficiency-improving technological progress may make the imposition of these interventions more palatable, and more likely to be implemented.[26][27][28] Increasing the yield of a crop, such as wheat, for a given area will reduce the area required to achieve the same total yield.
However, increasing efficiency may make it more profitable to grow wheat and lead farmers to convert land to the production of wheat, thereby increasing land use instead.[29] Microsoft CEO Satya Nadella has referenced the Jevons paradox when describing artificial intelligence.[30] Erik Brynjolfsson stated that he believes there will be some occupations for which the three conditions for the paradox will be met, thereby causing increased employment in those fields, such as radiologists, translators, and coders.[14]
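The elasticity condition described earlier (the paradox occurs when the rebound exceeds 100%) can be illustrated numerically. The constant-elasticity demand curve and all parameter values below are assumptions for illustration only:

```python
# Travel demand Q = k * (cost per unit of travel) ** (-elasticity);
# fuel used is Q / efficiency. With constant fuel price, doubling efficiency
# halves the effective price of travel.
def fuel_use(efficiency, elasticity, fuel_price=1.0, k=100.0):
    cost_per_travel = fuel_price / efficiency
    travel = k * cost_per_travel ** (-elasticity)
    return travel / efficiency

for eps in (0.5, 1.5):                       # inelastic vs. elastic demand
    before = fuel_use(1.0, eps)
    after = fuel_use(2.0, eps)               # efficiency doubles
    print(f"elasticity {eps}: fuel {before:.1f} -> {after:.1f}")
```

Under this demand curve, fuel use works out to be proportional to efficiency raised to (elasticity − 1), so total fuel consumption rises with efficiency exactly when the price elasticity of demand exceeds 1, matching the condition stated in the text.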
https://en.wikipedia.org/wiki/Jevons_paradox
In psychology and sociology, a trust metric is a measurement or metric of the degree to which one social actor (an individual or a group) trusts another social actor. Trust metrics may be abstracted in a manner that can be implemented on computers, making them of interest for the study and engineering of virtual communities, such as Friendster and LiveJournal. Trust escapes simple measurement because its meaning is too subjective for universally reliable metrics, and because it is a mental process, unavailable to instruments. There is a strong argument[1] against the use of simplistic metrics to measure trust, due to the complexity of the process and the 'embeddedness' of trust that makes it impossible to isolate trust from related factors. There is no generally agreed set of properties that make a particular trust metric better than others, as each metric is designed to serve different purposes; e.g. [2] provides a certain classification scheme for trust metrics. Two groups of trust metrics can be identified: empirical ones and formal ones. Trust metrics enable trust modelling[3] and reasoning about trust. They are closely related to reputation systems. Simple forms of binary trust metrics can be found e.g. in PGP.[4] The first commercial forms of trust metrics in computer software were in applications like eBay's Feedback Rating. Slashdot introduced its notion of karma, earned for activities perceived to promote group effectiveness, an approach that has been very influential in later virtual communities.[citation needed] Empirical metrics capture the value of trust by exploring the behavior or introspection of people, to determine the perceived or expressed level of trust. Those methods combine theoretical background (determining what it is that they measure) with a defined set of questions and statistical processing of results. The willingness to cooperate, as well as actual cooperation, are commonly used to both demonstrate and measure trust.
The actual value (level of trust and/or trustworthiness) is assessed from the difference between observed and hypothetical behaviors, i.e. those that would have been anticipated in the absence of cooperation. Surveys capture the level of trust by means of observation or introspection, but without engaging in any experiments. Respondents usually provide answers to a set of questions or statements, and responses are e.g. structured according to a Likert scale. Differentiating factors are the underlying theoretical background and contextual relevance. Among the earliest surveys are McCroskey's scales,[5] which have been used to determine authoritativeness (competence) and character (trustworthiness) of speakers. Rempel's trust scale[6] and Rotter's scale[7] are quite popular in determining the level of interpersonal trust in different settings. The Organizational Trust Inventory (OTI)[8] is an example of an exhaustive, theory-driven survey that can be used to determine the level of trust within an organisation. For a particular research area a more specific survey can be developed. For example, the interdisciplinary model of trust[9] has been verified using a survey, while [10] uses a survey to establish the relationship between design elements of a web site and its perceived trustworthiness. Another empirical method to measure trust is to engage participants in experiments, treating the outcome of such experiments as estimates of trust. Several games and game-like scenarios have been tried, some of which estimate trust or confidence in monetary terms (see [11] for an interesting overview). Games of trust are designed so that their Nash equilibrium differs from the Pareto optimum: no player can maximize their own utility by altering their selfish strategy alone, while cooperating partners can benefit. Trust can therefore be estimated on the basis of monetary gain attributable to cooperation.
The original 'game of trust' was described in [12] as an abstracted investment game between an investor and his broker. The game can be played once or several times, between randomly chosen players or in pairs that know each other, yielding different results. Several variants of the game exist, focusing on different aspects of trust as the observable behaviour. For example, the rules of the game can be reversed into what can be called a game of distrust,[13] a declaratory phase can be introduced,[14] or the rules can be presented in a variety of ways, altering the perception of participants. Other interesting games include binary-choice trust games,[15] the gift-exchange game,[16] cooperative trust games,[citation needed] and various other forms of social games. Specifically, the Prisoner's Dilemma[17] is popularly used to link trust with economic utility and demonstrate the rationality behind reciprocity. For multi-player games, different forms of close market simulations exist.[18] Formal metrics focus on facilitating trust modelling, specifically for large-scale models that represent trust as an abstract system (e.g. a social network or web of trust). Consequently, they may provide weaker insight into the psychology of trust, or into the particulars of empirical data collection. Formal metrics tend to have strong foundations in algebra, probability or logic. There is no widely recognised way to attribute value to the level of trust, with each representation of a 'trust value' claiming certain advantages and disadvantages. There are systems that assume only binary values,[19] that use a fixed scale,[20] where confidence ranges from −100 to +100 (while excluding zero),[21] from 0 to 1,[22][23] or from −1 to +1;[24] where confidence is discrete or continuous, one-dimensional or with many dimensions.[25] Some metrics use an ordered set of values without attempting to convert them to any particular numerical range (e.g. [26]; see [27] for a detailed overview).
There is also disagreement about the semantics of some values. The disagreement regarding the attribution of values to levels of trust is specifically visible when it comes to the meaning of zero and of negative values. For example, zero may indicate either the lack of trust (but not distrust), a lack of information, or a deep distrust. Negative values, if allowed, usually indicate distrust, but there is doubt[28] whether distrust is simply trust with a negative sign, or a phenomenon of its own. Subjective probability[29] focuses on a trustor's self-assessment of his trust in the trustee. Such an assessment can be framed as an anticipation regarding the future behaviour of the trustee, and expressed in terms of probability. Such a probability is subjective, as it is specific to the given trustor, their assessment of the situation, the information available to them, etc. In the same situation, other trustors may have a different level of subjective probability. Subjective probability creates a valuable link between formalisation and empirical experimentation. Formally, subjective probability can benefit from the available tools of probability and statistics. Empirically, subjective probability can be measured through one-sided bets. Assuming that the potential gain is fixed, the amount that a person bets can be used to estimate their subjective probability of a transaction. The logic for uncertain probabilities (subjective logic) was introduced by Jøsang,[30][31] where uncertain probabilities are called subjective opinions. This concept combines a probability distribution with uncertainty, so that each opinion about trust can be viewed as a distribution of probability distributions where each distribution is qualified by associated uncertainty.
The foundation of the trust representation is that an opinion (an evidence or a confidence) about trust can be represented as a four-tuple (trust, distrust, uncertainty, base rate), where trust, distrust and uncertainty must add up to one, and hence are dependent through additivity. Subjective logic is an example of computational trust where uncertainty is inherently embedded in the calculation process and is visible at the output. It is not the only one; it is, e.g., possible to use a similar quadruplet (trust, distrust, unknown, ignorance) to express the value of confidence,[32] as long as the appropriate operations are defined. Despite the sophistication of the subjective opinion representation, the particular value of a four-tuple related to trust can be easily derived from a series of binary opinions about a particular actor or event, thus providing a strong link between this formal metric and empirically observable behaviour. Finally, there are CertainTrust[33] and CertainLogic.[34] Both share a common representation, which is equivalent to subjective opinions, but based on three independent parameters named 'average rating', 'certainty', and 'initial expectation'. Hence, there is a bijective mapping between the CertainTrust triplet and the four-tuple of subjective opinions. Fuzzy systems,[35] as trust metrics, can link natural-language expressions with a meaningful numerical analysis. The application of fuzzy logic to trust has been studied in the context of peer-to-peer networks[36] to improve peer rating. Also, for grid computing[37] it has been demonstrated that fuzzy logic allows security issues to be solved in a reliable and efficient manner. The set of properties that should be satisfied by a trust metric varies, depending on the application area. The following is a list of typical properties. Transitivity is a highly desired property of a trust metric.[38] In situations where A trusts B and B trusts C, transitivity concerns the extent to which A trusts C.
Without transitivity, trust metrics are unlikely to be usable for reasoning about trust in more complex relationships. The intuition behind transitivity follows the everyday experience of 'friends of a friend' (FOAF), the foundation of social networks. However, the attempt to attribute exact formal semantics to transitivity reveals problems, related to the notion of a trust scope or context. For example, [39] defines conditions for the limited transitivity of trust, distinguishing between direct trust and referral trust. Similarly, [40] shows that simple trust transitivity does not always hold, based on information from the Advogato model, and consequently proposes new trust metrics. The simple, holistic approach to transitivity is characteristic of social networks (FOAF, Advogato). It follows everyday intuition and assumes that trust and trustworthiness apply to the whole person, regardless of the particular trust scope or context. If one can be trusted as a friend, one can also be trusted to recommend or endorse another friend. Therefore, transitivity is semantically valid without any constraints, and is a natural consequence of this approach. The more thorough approach distinguishes between different scopes/contexts of trust, and does not allow for transitivity between contexts that are semantically incompatible or inappropriate. A contextual approach may, for instance, distinguish between trust in a particular competence, trust in honesty, trust in the ability to formulate a valid opinion, or trust in the ability to provide reliable advice about other sources of information. A contextual approach is often used in trust-based service composition.[41] The understanding that trust is contextual (has a scope) is a foundation of collaborative filtering. For a formal trust metric to be useful, it should define a set of operations over values of trust in such a way that the result of those operations produces values of trust.
Usually at least two elementary operators are considered: one that propagates trust along a chain of relationships (often called discounting), and one that aggregates trust from several sources (often called fusion). The exact semantics of both operators are specific to the metric. Even within one representation, there is still the possibility of a variety of semantic interpretations. For example, for the representation as the logic for uncertain probabilities, trust fusion operations can be interpreted by applying different rules (cumulative fusion, averaging fusion, constraint fusion (Dempster's rule), Yager's modified Dempster's rule, Inagaki's unified combination rule, Zhang's centre combination rule, Dubois and Prade's disjunctive consensus rule, etc.). Each interpretation leads to different results, depending on the assumptions for trust fusion in the particular situation to be modelled. See [42][43] for detailed discussions. The growing size of networks of trust makes scalability another desired property, meaning that it is computationally feasible to calculate the metric for large networks. Scalability usually puts two requirements on the metric: Attack resistance is an important non-functional property of trust metrics which reflects their ability not to be overly influenced by agents who try to manipulate the trust metric and who participate in bad faith (i.e. who aim to abuse the presumption of trust). The free software developer resource Advogato is based on a novel approach to attack-resistant trust metrics by Raph Levien. Levien observed that Google's PageRank algorithm can be understood to be an attack-resistant trust metric rather similar to that behind Advogato.
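As an illustration of such operators, the following sketch follows Jøsang's subjective logic on (belief, disbelief, uncertainty) triples summing to one; the base rate is omitted for brevity, the formulas are the commonly stated uncertainty-favouring discounting and cumulative fusion, and all numeric values are invented:

```python
# Opinions are (belief, disbelief, uncertainty) triples with b + d + u = 1.
def discount(ab, bx):
    """Transitivity: A's derived opinion about X, from A->B trust `ab`
    and B's opinion about X `bx` (uncertainty-favouring discounting)."""
    (b1, d1, u1), (b2, d2, u2) = ab, bx
    return (b1 * b2, b1 * d2, d1 + u1 + b1 * u2)

def fuse(o1, o2):
    """Cumulative fusion of two independent opinions about the same target."""
    (b1, d1, u1), (b2, d2, u2) = o1, o2
    k = u1 + u2 - u1 * u2              # assumes the opinions are not both certain
    return ((b1 * u2 + b2 * u1) / k, (d1 * u2 + d2 * u1) / k, (u1 * u2) / k)

ab = (0.8, 0.1, 0.1)                   # A's trust in B (invented values)
bx = (0.7, 0.2, 0.1)                   # B's opinion about X
ax = discount(ab, bx)                  # A's indirect opinion about X
print(ax)
print(fuse(ax, (0.5, 0.3, 0.2)))       # combine with A's own evidence about X
```

Note two properties visible in the sketch: both operators return valid opinions (components still sum to one), and discounting can never yield more belief in X than B itself holds, which is the formal counterpart of the intuition that trust weakens along a chain.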
https://en.wikipedia.org/wiki/Trust_metric
A subset $S$ of a topological space $X$ is called a regular open set if it is equal to the interior of its closure; expressed symbolically, if $\operatorname{Int}(\overline{S}) = S$ or, equivalently, if $\partial(\overline{S}) = \partial S$, where $\operatorname{Int} S$, $\overline{S}$ and $\partial S$ denote, respectively, the interior, closure and boundary of $S$.[1] A subset $S$ of $X$ is called a regular closed set if it is equal to the closure of its interior; expressed symbolically, if $\overline{\operatorname{Int} S} = S$ or, equivalently, if $\partial(\operatorname{Int} S) = \partial S$.[1] If $\mathbb{R}$ has its usual Euclidean topology then the open set $S = (0,1) \cup (1,2)$ is not a regular open set, since $\operatorname{Int}(\overline{S}) = (0,2) \neq S$. Every open interval in $\mathbb{R}$ is a regular open set and every non-degenerate closed interval (that is, a closed interval containing at least two distinct points) is a regular closed set. A singleton $\{x\}$ is a closed subset of $\mathbb{R}$ but not a regular closed set because its interior is the empty set $\varnothing$, so that $\overline{\operatorname{Int}\{x\}} = \overline{\varnothing} = \varnothing \neq \{x\}$. A subset of $X$ is a regular open set if and only if its complement in $X$ is a regular closed set.[2] Every regular open set is an open set and every regular closed set is a closed set. Each clopen subset of $X$ (which includes $\varnothing$ and $X$ itself) is simultaneously a regular open subset and a regular closed subset.
The interior of a closed subset of $X$ is a regular open subset of $X$ and, likewise, the closure of an open subset of $X$ is a regular closed subset of $X$.[2] The intersection (but not necessarily the union) of two regular open sets is a regular open set. Similarly, the union (but not necessarily the intersection) of two regular closed sets is a regular closed set.[2] The collection of all regular open sets in $X$ forms a complete Boolean algebra; the join operation is given by $U \vee V = \operatorname{Int}(\overline{U \cup V})$, the meet is $U \wedge V = U \cap V$ and the complement is $\neg U = \operatorname{Int}(X \setminus U)$.
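These examples can be checked mechanically: a sketch using SymPy's set operations (the `closure` and `interior` properties of SymPy sets), added here only as a sanity check on the examples above.

```python
# Verifying the regular-open-set examples with SymPy's
# closure/interior properties on subsets of the real line.
from sympy import Interval, Union, FiniteSet

# A = (0,1) U (1,2) is open but not regular open: Int(cl(A)) = (0,2) != A.
A = Union(Interval.open(0, 1), Interval.open(1, 2))
assert A.closure.interior == Interval.open(0, 2)
assert A.closure.interior != A

# Every open interval is a regular open set.
T = Interval.open(0, 1)
assert T.closure.interior == T

# A singleton is closed but not regular closed: its interior is empty.
assert FiniteSet(1).interior.is_empty
```

Note that the Boolean-algebra join $U \vee V = \operatorname{Int}(\overline{U \cup V})$ applied to $(0,1)$ and $(1,2)$ "heals" the missing point and yields $(0,2)$, which is exactly why the plain union fails to be regular open.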
https://en.wikipedia.org/wiki/Regular_closed_set
Human–computer interaction (HCI) is the process through which people operate and engage with computer systems. Research in HCI covers the design and use of computer technology, focusing on the interfaces between people (users) and computers. HCI researchers observe the ways humans interact with computers and design technologies that allow humans to interact with computers in novel ways. These include visual, auditory, and tactile (haptic) feedback systems, which serve as channels for interaction in both traditional interfaces and mobile computing contexts.[1] A device that allows interaction between a human being and a computer is known as a "human–computer interface". As a field of research, human–computer interaction is situated at the intersection of computer science, behavioral sciences, design, media studies, and several other fields of study. The term was popularized by Stuart K. Card, Allen Newell, and Thomas P. Moran in their 1983 book, The Psychology of Human–Computer Interaction. The first known use was in 1975 by Carlisle.[2] The term is intended to convey that, unlike other tools with specific and limited uses, computers have many uses which often involve an open-ended dialogue between the user and the computer. The notion of dialogue likens human–computer interaction to human-to-human interaction: an analogy that is crucial to theoretical considerations in the field.[3][4] Humans interact with computers in many ways, and the interface between the two is crucial to facilitating this interaction. HCI is also sometimes termed human–machine interaction (HMI), man–machine interaction (MMI) or computer–human interaction (CHI).
Desktop applications, web browsers, handheld computers, and computer kiosks make use of today's prevalent graphical user interfaces (GUIs).[5] Voice user interfaces (VUIs) are used for speech recognition and synthesis systems, and emerging multi-modal and graphical user interfaces allow humans to engage with embodied character agents in a way that cannot be achieved with other interface paradigms. The Association for Computing Machinery (ACM) defines human–computer interaction as "a discipline that is concerned with the design, evaluation, and implementation of interactive computing systems for human use and with the study of major phenomena surrounding them".[5] A key aspect of HCI is user satisfaction, also referred to as end-user computing satisfaction. The definition goes on to say: "Because human–computer interaction studies a human and a machine in communication, it draws from supporting knowledge on both the machine and the human side. On the machine side, techniques in computer graphics, operating systems, programming languages, and development environments are relevant. On the human side, communication theory, graphic and industrial design disciplines, linguistics, social sciences, cognitive psychology, social psychology, and human factors such as computer user satisfaction are relevant. And, of course, engineering and design methods are relevant."[5] HCI helps ensure that humans can safely and efficiently interact with complex technologies in fields like aviation and healthcare.[6] Due to the multidisciplinary nature of HCI, people with different backgrounds contribute to its success. Poorly designed human–machine interfaces can lead to many unexpected problems.
A classic example is the Three Mile Island accident, a nuclear meltdown, where investigations concluded that the design of the human–machine interface was at least partly responsible for the disaster.[7][8][9] Similarly, some accidents in aviation have resulted from manufacturers' decisions to use non-standard flight instruments or throttle quadrant layouts: even though the new designs were proposed to be superior in basic human–machine interaction, pilots had already internalized the "standard" layout, and thus the conceptually good idea had unintended results.[10] A human–computer interface can be described as the point of communication between a human user and a computer. The flow of information between the human and computer is defined as the loop of interaction. The loop of interaction has several aspects to it, including: Human–computer interaction involves the ways in which humans make, or do not make, use of computational artifacts, systems, and infrastructures. Much of the research in this field seeks to improve human–computer interaction by improving the usability of computer interfaces.[11] How usability is to be precisely understood, how it relates to other social and cultural values, and when it is, and when it may not be, a desirable property of computer interfaces is increasingly debated.[12][13] Much of the research in the field of human–computer interaction takes an interest in: Visions of what researchers in the field seek to achieve vary. When pursuing a cognitivist perspective, researchers of HCI may seek to align computer interfaces with the mental model that humans have of their activities. When pursuing a post-cognitivist perspective, researchers of HCI may seek to align computer interfaces with existing social practices or existing sociocultural values.
Researchers in HCI are interested in developing design methodologies, experimenting with devices, prototyping software and hardware systems, exploring interaction paradigms, and developing models and theories of interaction. The following experimental design principles are considered when evaluating a current user interface or designing a new one: The iterative design process is repeated until a sensible, user-friendly interface is created.[16] Various strategies for human–computer interaction design have developed since the field's conception in the 1980s. Most design philosophies stem from a model of how users, designers, and technical systems interact. Early methods treated users' cognitive processes as predictable and quantifiable, and urged design practitioners to consult cognitive science in areas such as memory and attention when structuring user interfaces. Modern models tend to center on constant feedback and dialogue between users, designers, and engineers, and push for technical systems to be built around the kinds of experiences users want to have, rather than wrapping user experience around a finished system. Topics in human–computer interaction include the following: Human–AI interaction explores how users engage with artificial intelligence systems, focusing in particular on usability, trust, and interpretability. The research mainly aims to design AI-driven interfaces that are transparent, explainable, and ethically responsible.[20] Studies highlight the importance of explainable AI (XAI) and human-in-the-loop decision-making, ensuring that AI outputs are understandable and trustworthy.[21] Researchers also develop design guidelines for human–AI interaction, improving collaboration between users and AI systems.[22] Augmented reality (AR) integrates digital content with the real world. It enhances human perception and interaction with physical environments.
AR research mainly focuses on adaptive user interfaces, multimodal input techniques, and real-world object interaction.[23] Advances in wearable AR technology improve usability, enabling more natural interaction with AR applications.[24] Virtual reality (VR) creates a fully immersive digital environment, allowing users to interact with computer-generated worlds through sensory input devices. Research focuses on user presence, interaction techniques, and the cognitive effects of immersion.[25] A key area of study is the impact of VR on cognitive load and user adaptability, influencing how users process information in virtual spaces.[26] Mixed reality (MR) blends elements of both augmented reality (AR) and virtual reality (VR). It enables real-time interaction with both physical and digital objects. HCI research in MR concentrates on spatial computing, real-world object interaction, and context-aware adaptive interfaces.[27] MR technologies are increasingly applied in education, training simulations, and healthcare, enhancing learning outcomes and user engagement.[28] Extended reality (XR) is an umbrella term encompassing AR, VR, and MR, offering a continuum between real and virtual environments. Research investigates user adaptability, interaction paradigms, and the ethical implications of immersive technologies.[29] Recent studies highlight how AI-driven personalization and adaptive interfaces improve the usability of XR applications.[30] Accessibility in human–computer interaction (HCI) focuses on designing inclusive digital experiences, ensuring usability for people with diverse abilities. Research in this area relates to assistive technologies, adaptive interfaces, and universal design principles.[31] Studies indicate that accessible design benefits not only people with disabilities but also enhances usability for all users.[32] Social computing concerns interactive and collaborative behavior between technology and people.
In recent years, there has been an explosion of social science research focusing on interactions as the unit of analysis, as there are many social computing technologies, including blogs, email, social networking, instant messaging, and various others. Much of this research draws from psychology, social psychology, and sociology. For example, one study found that people expected a computer with a man's name to cost more than a machine with a woman's name.[33] Other research finds that individuals perceive their interactions with computers more negatively than their interactions with humans, despite behaving the same way towards these machines.[34] In human–computer interactions, a semantic gap usually exists between the human's and the computer's understanding of each other's behavior. Ontology, as a formal representation of domain-specific knowledge, can be used to address this problem by resolving the semantic ambiguities between the two parties.[35] In the interaction of humans and computers, research has studied how computers can detect, process, and react to human emotions in order to develop emotionally intelligent information systems. Researchers have suggested several 'affect-detection channels'. The potential for detecting human emotions in an automated and digital fashion lies in improvements to the effectiveness of human–computer interaction. The influence of emotions in human–computer interaction has been studied in fields such as financial decision-making using ECG, and organizational knowledge sharing using eye-tracking and face readers, as affect-detection channels. In these fields, it has been shown that affect-detection channels have the potential to detect human emotions and that information systems can incorporate the data obtained from affect-detection channels to improve decision models. A brain–computer interface (BCI) is a direct communication pathway between an enhanced or wired brain and an external device.
BCI differs from neuromodulation in that it allows for bidirectional information flow. BCIs are often directed at researching, mapping, assisting, augmenting, or repairing human cognitive or sensory-motor functions.[36] Security interactions (HCISec) are the study of interaction between humans and computers specifically as it pertains to information security. The aim, in plain terms, is to improve the usability of security features in end-user applications. Unlike HCI, which has roots in the early days of Xerox PARC during the 1970s, HCISec is a nascent field of study by comparison. Interest in this topic tracks with that of Internet security, which has become an area of broad public concern only in recent years. When security features exhibit poor usability, the following are common reasons: Traditionally, computer use was modeled as a human–computer dyad in which the two were connected by a narrow explicit communication channel, such as text-based terminals. Much work has been done to make the interaction between a computing system and a human more reflective of the multidimensional nature of everyday communication. Because of potential issues, human–computer interaction shifted focus beyond the interface to respond to observations as articulated by Douglas Engelbart: "If ease of use were the only valid criterion, people would stick to tricycles and never try bicycles."[37] How humans interact with computers continues to evolve rapidly. Human–computer interaction is affected by developments in computing. These forces include: As of 2010[update] the future of HCI is expected[38] to include the following characteristics: One of the main conferences for new research in human–computer interaction is the annually held Association for Computing Machinery (ACM) Conference on Human Factors in Computing Systems, usually referred to by its short name, CHI (pronounced "kai"). CHI is organized by the ACM Special Interest Group on Computer–Human Interaction (SIGCHI).
CHI is a large conference, with thousands of attendees, and is quite broad in scope. It is attended by academics, practitioners, and industry people, with company sponsors such as Google, Microsoft, and PayPal. Dozens of other smaller, regional, or specialized HCI-related conferences are also held around the world each year, including:[39]
https://en.wikipedia.org/wiki/Human_Computer_Interaction
In the mathematical theory of probability, the expectiles of a probability distribution are related to the expected value of the distribution in a way analogous to that in which the quantiles of the distribution are related to the median. For $\tau \in (0,1)$, the $\tau$-expectile of a probability distribution with cumulative distribution function $F$ (and finite mean) can be characterized as the unique solution $t$ of the balance condition $\tau \int_t^\infty (x - t)\, dF(x) = (1 - \tau) \int_{-\infty}^t (t - x)\, dF(x)$.[1][2][3] Quantile regression minimizes an asymmetric $L_1$ loss (see least absolute deviations). Analogously, expectile regression minimizes an asymmetric $L_2$ loss (see ordinary least squares): $\sum_i \left|\tau - H(t - y_i)\right| (y_i - t)^2$ over candidate values $t$, given observations $y_i$, where $H$ is the Heaviside step function.
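The balance condition above implies that the sample $\tau$-expectile is a fixed point of a weighted mean, with weight $\tau$ on observations above $t$ and $1-\tau$ on those below. A minimal sketch (the function name `expectile` and the plain fixed-point iteration are illustrative choices, not a reference implementation):

```python
# Sample tau-expectile via the weighted-mean fixed point:
# t is the tau-expectile iff t equals the mean of the data with
# weights tau for y_i > t and (1 - tau) otherwise.

def expectile(ys, tau, iters=200):
    t = sum(ys) / len(ys)  # start from the sample mean
    for _ in range(iters):
        w = [tau if y > t else 1 - tau for y in ys]
        t = sum(wi * y for wi, y in zip(w, ys)) / sum(w)
    return t

data = [1.0, 2.0, 3.0, 10.0]
print(expectile(data, 0.5))  # 4.0 -- the 0.5-expectile is the mean
print(expectile(data, 0.9))  # 8.0 -- pulled toward the upper tail
```

For $\tau = 0.9$ one can verify the balance condition directly: $0.9 \cdot (10 - 8) = 1.8$ equals $0.1 \cdot ((8-1) + (8-2) + (8-3)) = 1.8$.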
https://en.wikipedia.org/wiki/Expectile
A pangram or holoalphabetic sentence is a sentence using every letter of a given alphabet at least once. Pangrams have been used to display typefaces, test equipment, and develop skills in handwriting, calligraphy, and typing. The best-known English pangram is "The quick brown fox jumps over the lazy dog".[1] It has been used since at least the late 19th century[1] and was used by Western Union to test Telex/TWX data communication equipment for accuracy and reliability.[2] Pangrams like this are now used by a number of computer programs to display computer typefaces. Short pangrams in English are more difficult to devise and tend to use uncommon words and unnatural sentences. Longer pangrams afford more opportunity for humor, cleverness, or thoughtfulness. The following are examples of pangrams that are shorter than "The quick brown fox jumps over the lazy dog" (which has 35 letters) and use standard written English without abbreviations or proper nouns: A perfect pangram contains every letter of the alphabet exactly once and can be considered an anagram of the alphabet. The only known perfect pangrams of the English alphabet use abbreviations or other non-dictionary words, such as "Blowzy night-frumps vex'd Jack Q." or "Mr. Jock, TV quiz PhD, bags few lynx.",[3] or they include words so obscure that the phrase is challenging to understand, such as "Cwm fjord-bank glyphs vext quiz",[3] in which cwm is a loan word from the Welsh language meaning an amphitheatre-like glaciated depression, vext is an uncommon way to spell vexed, and quiz is used in an archaic sense to mean a puzzling or eccentric person. The sentence means that symbols in the bowl-like depression on the edge of a long steep sea inlet confused an eccentric person. Other writing systems may present more options: the Iroha is a well-known perfect pangram of the Japanese syllabary, while the Hanacaraka is a perfect pangram for the Javanese script and is commonly used to order its letters in sequence.
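The definitions above reduce to simple set and multiset comparisons, which is why pangram checks are a common programming exercise; a short sketch (function names are illustrative):

```python
# A sentence is a pangram of an alphabet iff its letters cover
# every character of that alphabet at least once.
import string

def is_pangram(sentence: str, alphabet: str = string.ascii_lowercase) -> bool:
    return set(alphabet) <= set(sentence.lower())

def is_perfect_pangram(sentence: str) -> bool:
    """A perfect pangram uses each letter of the alphabet exactly once."""
    letters = [c for c in sentence.lower() if c in string.ascii_lowercase]
    return sorted(letters) == list(string.ascii_lowercase)

print(is_pangram("The quick brown fox jumps over the lazy dog"))    # True
print(is_pangram("Hello world"))                                    # False
print(is_perfect_pangram("Mr. Jock, TV quiz PhD, bags few lynx."))  # True
```

Passing a different `alphabet` string handles languages whose Latin-based alphabets include extra letters such as ç, ä, or š.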
Whereas the English language uses all 26 letters of the Latin alphabet in native and naturalized words, many other languages using the same alphabet do not. Pangram writers in these languages are forced to choose between using only those letters found in native words or incorporating exotic loanwords into their pangrams. Some words, such as the Gaelic-derived whisk(e)y, which has been borrowed by many languages and uses the letters k, w and y, are a frequent fixture of many foreign pangrams. There are also languages that use Latin characters that do not appear among the traditional 26 letters of the Latin alphabet; pangrams in such languages differ further from English pangrams, with letters such as ç, ä, and š. Non-Latin alphabetic or phonetic scripts such as Greek, Armenian, and others can also have pangrams.[12] In some writing systems, exactly what counts as a distinct symbol can be debated. For example, many languages have accents or other diacritics, but one might count "é" and "e" as the same for pangrams. A similar problem arises for older English orthography that includes the long s ("ſ"). One Thai pangram (in translation: "Mr. Sangkhaphant Hengpithakfang - an elderly man who earns a living by selling bottles - was arrested for prosecution by police because he stole Lady Chatchada Chansamat's watch.") contains all the letters in the Thai alphabet, both obsolete and non-obsolete. Logographic scripts, or writing systems such as Chinese that do not use an alphabet but are composed principally of logograms, cannot produce pangrams in a literal sense (or at least, not pangrams of reasonable size). The total number of signs is large and imprecisely defined, so producing a text with every possible sign is practically impossible. However, various analogies to pangrams are feasible, including traditional pangrams in a romanization. In Japanese, although typical orthography uses kanji (logograms), pangrams can be made using every kana, or syllabic character. The Iroha is a classic example of a perfect pangram in non-Latin script.
In Chinese, the Thousand Character Classic is a 1,000-character poem in which each character is used exactly once; however, it does not include all Chinese characters. The single character 永 (permanence) incorporates all the basic strokes used to write Chinese characters, using each stroke exactly once, as described in the Eight Principles of Yong. Among abugida scripts, an example of a perfect pangram is the Hanacaraka (hana caraka; data sawala; padha jayanya; maga bathanga) of the Javanese script, which is used to write the Javanese language in Indonesia. A self-enumerating pangram is a pangrammatic autogram, or a sentence that inventories its own letters, each of which occurs at least once. The first example was produced by Rudy Kousbroek, a Dutch journalist and essayist, who publicly challenged Lee Sallows, a British recreational mathematician resident in the Netherlands, to produce an English translation of his Dutch pangram. Sallows subsequently built an electronic "pangram machine" that performed a systematic search among millions of candidate solutions, and the machine succeeded in identifying a 'magic' translation.[13][14][15] Chris Patuzzo was able to reduce the problem of finding a self-enumerating pangram to the Boolean satisfiability problem. He did this by using a made-to-order hardware description language as a stepping stone and then applied the Tseytin transformation to the resulting chip.[16][17] The pangram "The quick brown fox jumps over the lazy dog", and the search for a shorter pangram, are the cornerstone of the plot of the novel Ella Minnow Pea by Mark Dunn.[18] The search successfully comes to an end when the phrase "Pack my box with five dozen liquor jugs" is discovered (which has only six duplicated vowels). The scientific paper Cneoridium dumosum (Nuttall) Hooker F.
Collected March 26, 1960, at an Elevation of about 1450 Meters on Cerro Quemazón, 15 Miles South of Bahía de Los Angeles, Baja California, México, Apparently for a Southeastward Range Extension of Some 140 Mileshas a pangrammatic title, seemingly by pure chance.
https://en.wikipedia.org/wiki/Pangram
Eliminative materialism (also called eliminativism) is a materialist position in the philosophy of mind that expresses the idea that the majority of mental states in folk psychology do not exist.[1] Some supporters of eliminativism argue that no coherent neural basis will be found for many everyday psychological concepts such as belief or desire, since they are poorly defined. The argument is that psychological concepts of behavior and experience should be judged by how well they reduce to the biological level.[2] Other versions entail the nonexistence of conscious mental states such as pain and visual perceptions.[3] Eliminativism about a class of entities is the view that the class of entities does not exist.[4] For example, materialism tends to be eliminativist about the soul; modern chemists are eliminativist about phlogiston; modern biologists are eliminativist about élan vital; and modern physicists are eliminativist about the luminiferous ether. Eliminative materialism is the relatively new (1960s–70s) idea that certain classes of mental entities that common sense takes for granted, such as beliefs, desires, and the subjective sensation of pain, do not exist.[5][6] The most common versions are eliminativism about propositional attitudes, as expressed by Paul and Patricia Churchland,[7] and eliminativism about qualia (subjective interpretations of particular instances of subjective experience), as expressed by Daniel Dennett, Georges Rey,[3] and Jacy Reese Anthis.[8] In the context of materialist understandings of psychology, eliminativism is the opposite of reductive materialism, which argues that mental states as conventionally understood do exist and directly correspond to the physical state of the nervous system.[9] An intermediate position, revisionary materialism, often argues that the mental state in question will prove to be somewhat reducible to physical phenomena, with some changes needed to the commonsense concept.[1][10] Since eliminative materialism arguably claims that future research will fail to find a
neuronal basis for various mental phenomena, it may need to wait for science to progress further. One might question the position on these grounds, but philosophers like Churchland argue that eliminativism is often necessary in order to open the minds of thinkers to new evidence and better explanations.[9] Views closely related to eliminativism include illusionism and quietism. Various arguments have been made for and against eliminative materialism over the last 50 years. The view's history can be traced to David Hume, who rejected the idea of the "self" on the grounds that it was not based on any impression.[11] Most arguments for the view are based on the assumption that people's commonsense view of the mind is actually an implicit theory, to be compared and contrasted with other scientific theories in its explanatory success, accuracy, and ability to predict the future. Eliminativists argue that commonsense "folk" psychology has failed and will eventually need to be replaced by explanations derived from neuroscience. These philosophers therefore tend to emphasize the importance of neuroscientific research as well as developments in artificial intelligence. Philosophers who argue against eliminativism may take several approaches. Simulation theorists, like Robert Gordon[12] and Alvin Goldman,[13] argue that folk psychology is not a theory but depends on internal simulation of others, and is therefore not subject to falsification in the same way that theories are. Jerry Fodor, among others,[14] argues that folk psychology is, in fact, a successful (even indispensable) theory.
Another view is that eliminativism assumes the existence of the beliefs and other entities it seeks to "eliminate" and is thus self-refuting.[15] Eliminativism maintains that the commonsense understanding of the mind is mistaken, and that neuroscience will one day reveal that the mental states talked about in everyday discourse, using words such as "intend", "believe", "desire", and "love", do not refer to anything real. Because of the inadequacy of natural languages, people mistakenly think that they have such beliefs and desires.[2] Some eliminativists, such as Frank Jackson, claim that consciousness does not exist except as an epiphenomenon of brain function; others, such as Georges Rey, claim that the concept will eventually be eliminated as neuroscience progresses.[3][16] Consciousness and folk psychology are separate issues, and it is possible to take an eliminative stance on one but not the other.[4] The roots of eliminativism go back to the writings of Wilfrid Sellars, W. V. O. Quine, Paul Feyerabend, and Richard Rorty.[5][6][17] The term "eliminative materialism" was first introduced by James Cornman in 1968 while describing a version of physicalism endorsed by Rorty. The later Ludwig Wittgenstein was also an important inspiration for eliminativism, particularly with his attack on "private objects" as "grammatical fictions".[4] Early eliminativists such as Rorty and Feyerabend often conflated two different notions of the sort of elimination that the term "eliminative materialism" implied.
On the one hand, they claimed, the cognitive sciences that will ultimately give people a correct account of the mind's workings will not employ terms that refer to commonsense mental states like beliefs and desires; these states will not be part of the ontology of a mature cognitive science.[5][6] But critics immediately countered that this view was indistinguishable from the identity theory of mind.[2][18] Quine himself wondered what exactly was so eliminative about eliminative materialism: Is physicalism a repudiation of mental objects after all, or a theory of them? Does it repudiate the mental state of pain or anger in favor of its physical concomitant, or does it identify the mental state with a state of the physical organism (and so a state of the physical organism with the mental state)?[19] On the other hand, the same philosophers claimed that commonsense mental states simply do not exist. But critics pointed out that eliminativists could not have it both ways: either mental states exist and will ultimately be explained in terms of lower-level neurophysiological processes, or they do not.[2][18] Modern eliminativists have much more clearly expressed the view that mental phenomena simply do not exist and will eventually be eliminated from people's thinking about the brain in the same way that demons have been eliminated from people's thinking about mental illness and psychopathology.[4] While it was a minority view in the 1960s, eliminative materialism gained prominence and acceptance during the 1980s.[20] Proponents of this view, such as B. F. Skinner, often drew parallels to previously superseded scientific theories (such as that of the four humours, the phlogiston theory of combustion, and the vital force theory of life) that have all been successfully eliminated, in attempting to establish their thesis about the nature of the mental.
In these cases, science has not produced more detailed versions or reductions of these theories, but rejected them altogether as obsolete. Radical behaviorists, such as Skinner, argued that folk psychology is already obsolete and should be replaced by descriptions of histories of reinforcement and punishment.[21] Such views were eventually abandoned. Patricia and Paul Churchland argued that folk psychology will be gradually replaced as neuroscience matures.[20] Eliminativism is not only motivated by philosophical considerations but is also a prediction about what form future scientific theories will take. Eliminativist philosophers therefore tend to be concerned with data from the relevant brain and cognitive sciences.[22] In addition, because eliminativism is essentially predictive in nature, different theorists can and often do predict which aspects of folk psychology will be eliminated from folk-psychological vocabulary. None of these philosophers are eliminativists tout court.[23][24][25] Today, the eliminativist view is most closely associated with the Churchlands, who deny the existence of propositional attitudes (a subclass of intentional states), and with Daniel Dennett, who is generally considered an eliminativist about qualia and the phenomenal aspects of consciousness.
One way to summarize the difference between the Churchlands' view and Dennett's is that the Churchlands are eliminativists about propositional attitudes but reductionists about qualia, while Dennett is an anti-reductionist about propositional attitudes and an eliminativist about qualia.[4][25][26][27] More recently, Brian Tomasik and Jacy Reese Anthis have made various arguments for eliminativism.[28][29] Elizabeth Irvine has argued that both science and folk psychology do not treat mental states as having phenomenal properties, so the hard problem "may not be a genuine problem for non-philosophers (despite its overwhelming obviousness to philosophers), and questions about consciousness may well 'shatter' into more specific questions about particular capacities."[30] In 2022, Anthis published Consciousness Semanticism: A Precise Eliminativist Theory of Consciousness, which asserts that "formal argumentation from precise semantics" dissolves the hard problem because of the contradiction between the precision implied by philosophical theory and the vagueness of its definitions, implying there is no fact of the matter about phenomenological consciousness.[8] Eliminativists such as Paul and Patricia Churchland argue that folk psychology is a fully developed but non-formalized theory of human behavior. It is used to explain and make predictions about human mental states and behavior. This view is often referred to as the theory of mind or simply theory-theory, for it theorizes the existence of an unacknowledged theory. As a theory in the scientific sense, eliminativists maintain, folk psychology must be evaluated on the basis of its predictive power and explanatory success as a research program for the investigation of the mind/brain.[31][32] Such eliminativists have developed different arguments to show that folk psychology is a seriously mistaken theory and should be abolished.
They argue that folk psychology excludes from its purview, or has traditionally been mistaken about, many important mental phenomena that can be and are being examined and explained by modern neuroscience. Some examples are dreaming, consciousness, mental disorders, learning processes, and memory abilities. Furthermore, they argue, folk psychology has developed little over the last 2,500 years and is therefore stagnant: the ancient Greeks already had a folk psychology comparable to modern views. In contrast to this lack of development, neuroscience is rapidly progressing and, in their view, can explain many cognitive processes that folk psychology cannot.[22][33] Folk psychology retains characteristics of now obsolete theories or legends from the past. Ancient societies tried to explain the physical mysteries of nature by ascribing mental conditions to them in such statements as "the sea is angry". Gradually, these everyday folk psychological explanations were replaced by more efficient scientific descriptions. Today, eliminativists argue, there is no reason not to accept an equally effective scientific account of cognition. If such an explanation existed, then there would be no need for folk-psychological explanations of behavior, and the latter would be eliminated the same way as the mythological explanations the ancients used.[34] Another line of argument is a meta-induction based on what eliminativists view as the disastrous historical record of folk theories in general. Ancient pre-scientific "theories" of folk biology, folk physics, and folk cosmology have all proven radically wrong. Eliminativists argue the same in the case of folk psychology.
There seems no logical basis, to the eliminativist, to make an exception just because folk psychology has lasted longer and is more intuitive or instinctively plausible than other folk theories.[33] Indeed, the eliminativists warn, considerations of intuitive plausibility may be precisely the result of the deeply entrenched position of folk psychology in society. It may be that people's beliefs and other such states are as theory-laden as external perceptions, and hence that intuitions will tend to be biased in their favor.[23] Much of folk psychology involves the attribution of intentional states (or, more specifically, the subclass of propositional attitudes). Eliminativists point out that these states are generally ascribed syntactic and semantic properties. An example of this is the language of thought hypothesis, which attributes a discrete, combinatorial syntax and other linguistic properties to these mental phenomena. Eliminativists argue that such discrete, combinatorial characteristics have no place in neuroscience, which speaks of action potentials, spiking frequencies, and other continuous and distributed effects. Hence, the syntactic structures assumed by folk psychology have no place in a structure like the brain.[22] To this there have been two responses. On the one hand, some philosophers deny that mental states are linguistic and see this as a straw man argument.[35][36] The other view is represented by those who subscribe to a "language of thought". They assert that mental states can be multiply realized and that functional characterizations are just higher-level characterizations of what happens at the physical level.[37][38] It has also been argued against folk psychology that the intentionality of mental states like belief implies that they have semantic qualities. Specifically, their meaning is determined by the things they are about in the external world.
This makes it difficult to explain how they can play the causal roles they are supposed to in cognitive processes.[39] In recent years, this latter argument has been fortified by the theory of connectionism. Many connectionist models of the brain have been developed in which the processes of language learning and other forms of representation are highly distributed and parallel. This tends to indicate that such discrete and semantically endowed entities as beliefs and desires are unnecessary.[40] The problem of intentionality poses a significant challenge to materialist accounts of cognition. If thoughts are neural processes, we must explain how specific neural networks can be "about" external objects or concepts. We can think about Paris, for instance, but there is no clear mechanism by which neurons can represent a city.[41] Traditional analogies fail to explain this phenomenon. Unlike a photograph, neurons do not physically resemble Paris. Nor can we appeal to conventional symbolism, as we might with a stop sign representing the action of stopping. Such symbols derive their meaning from social agreement and interpretation, which are not applicable to a brain's workings. Attempts to posit a separate neural process that assigns meaning to the "Paris neurons" merely shift the problem without resolving it, as we then need to explain how this secondary process can assign meaning, initiating an infinite regress.[42] The only way to break this regress is to postulate matter with intrinsic meaning, independent of external interpretation. But our current understanding of physics precludes the existence of such matter. The fundamental particles and forces physics describes have no inherent semantic properties that could ground intentionality. This physical limitation presents a formidable obstacle to materialist theories of mind that rely on neural representations.
It suggests that intentionality, as commonly understood, may be incompatible with a purely physicalist worldview, and hence that our folk psychological concepts of intentional states will be eliminated in light of scientific understanding.[41] Another argument for eliminative materialism stems from evolutionary theory. This argument suggests that natural selection, the process shaping our neural architecture, cannot solve the "disjunction problem", which challenges the idea that neural states can store specific, determinate propositional content. Natural selection, as Darwin described it, is primarily a process of selection against rather than selection for traits. It passively filters out traits below a certain fitness threshold rather than actively choosing beneficial ones. This lack of foresight or purpose in evolution becomes problematic when considering how neural states could represent unique propositions.[43][44] The disjunction problem arises from the fact that natural selection cannot discriminate between coextensive properties. For example, consider two genes close together on a chromosome. One gene might code for a beneficial trait, while the other codes for a neutral or even harmful trait. Due to their proximity, these genes are often inherited together, a phenomenon known as genetic linkage. Natural selection cannot distinguish between these linked traits; it can only act on their combined effect on the organism's fitness. Only random processes like genetic crossover—where chromosomes exchange genetic material during reproduction—can break these linkages. Until such a break occurs, natural selection remains "blind" to the linked genes' individual effects.[44][45] Eliminativists argue that if natural selection—the process responsible for shaping our neural architecture—cannot solve the disjunction problem, then our brains cannot store unique, non-disjunctive propositions, as required by folk psychology.
Instead, they suggest that neural states contain inherently disjunctive or indeterminate content. This argument leads eliminativists to reject the notion that neural states have specific, determinate informational content corresponding to the discrete, non-disjunctive propositions of folk psychology. This evolutionary argument adds to the eliminativist case that our commonsense understanding of beliefs, desires, and other propositional attitudes is flawed and should be replaced by a neuroscientific account that acknowledges the indeterminate nature of neural representations.[46][47] Some eliminativists reject intentionality while accepting the existence of qualia. Other eliminativists reject qualia while accepting intentionality. Many philosophers argue that intentionality cannot exist without consciousness and vice versa, and so any philosopher who accepts one while rejecting the other is being inconsistent. They argue that, to be consistent, one must accept both qualia and intentionality or reject them both. Philosophers who argue for such a position include Philip Goff, Terence Horgan, Uriah Kriegal, and John Tienson.[48][49] The philosopher Keith Frankish accepts the existence of intentionality but holds to illusionism about consciousness because he rejects qualia. Goff notes that beliefs are a kind of propositional thought. The thesis of eliminativism seems so obviously wrong to many critics, who find it undeniable that people know immediately and indubitably that they have minds, that argumentation seems unnecessary. This sort of intuition-pumping is illustrated by asking what happens when one asks oneself honestly if one has mental states.[50] Eliminativists object to such a rebuttal of their position by claiming that intuitions often are mistaken. Analogies from the history of science are frequently invoked to buttress this observation: it may appear obvious that the sun travels around the earth, for example, but this was nevertheless proved wrong.
Similarly, it may appear obvious that apart from neural events there are also mental conditions, but that could be false.[23] But even if one accepts the susceptibility to error of people's intuitions, the objection can be reformulated: if the existence of mental conditions seems perfectly obvious and is central to our conception of the world, then enormously strong arguments are needed to deny their existence. Furthermore, these arguments, to be consistent, must be formulated in a way that does not presuppose the existence of entities like "mental states", "logical arguments", and "ideas", lest they be self-contradictory.[51] Those who accept this objection say that the arguments for eliminativism are far too weak to establish such a radical claim and that there is thus no reason to accept eliminativism.[50] Some philosophers, such as Paul Boghossian, have attempted to show that eliminativism is in some sense self-refuting, since the theory presupposes the existence of mental phenomena. If eliminativism is true, then eliminativists must accept an intentional property like truth, supposing that in order to assert something one must believe it. Hence, for eliminativism to be asserted as a thesis, the eliminativist must believe that it is true; if so, there are beliefs, and eliminativism is false.[15][52] Georges Rey and Michael Devitt reply to this objection by invoking deflationary semantic theories that avoid analyzing predicates like "x is true" as expressing a real property. They are instead construed as logical devices, so that asserting that a sentence is true is just a quoted way of asserting the sentence itself. To say "'God exists' is true" is just to say "God exists".
This way, Rey and Devitt argue, insofar as dispositional replacements of "claims" and deflationary accounts of "true" are coherent, eliminativism is not self-refuting.[53] Several philosophers, such as the Churchlands and Alex Rosenberg,[43][54] have developed a theory of structural resemblance or physical isomorphism that could explain how neural states can instantiate truth within the correspondence theory of truth. Neuroscientists use the word "representation" to identify the neural circuits' encoding of inputs from the peripheral nervous system in, for example, the visual cortex. But they use the word without according it any commitment to intentional content. In fact, there is an explicit commitment to describing neural representations in terms of structures of neural axonal discharges that are physically isomorphic to the inputs that cause them. Suppose that this way of understanding representation in the brain is preserved in the long-term course of research providing an understanding of how the brain processes and stores information. Then there will be considerable evidence that the brain is a neural network whose physical structure is isomorphic to the aspects of its environment it tracks and whose representations of these features consist in this physical isomorphism.[44] Experiments in the 1980s with macaques isolated the structural resemblance between input vibrations the finger feels, measured in cycles per second, and representations of them in neural circuits, measured in action-potential spikes per second. This resemblance between two easily measured variables makes it unsurprising that they would be among the first such structural resemblances to be discovered. Macaques and humans have the same peripheral nervous system sensitivities and can make the same tactile discriminations.
Subsequent research into neural processing has increasingly vindicated a structural resemblance or physical isomorphism approach to how information enters the brain and is stored and deployed.[43][55] This isomorphism between brain and world is not a matter of some relationship between reality and a map of reality stored in the brain. Maps require interpretation if they are to be about what they map, and eliminativism and neuroscience share a commitment to explaining the appearance of aboutness by purely physical relationships between informational states in the brain and what they "represent". The brain-to-world relationship must be a matter of physical isomorphism—sameness of form, outline, structure—that does not require interpretation.[44] This machinery can be applied to make "sense" of eliminativism in terms of the sentences eliminativists say or write. When we say that eliminativism is true, that the brain does not store information in the form of unique sentences, statements, expressing propositions or anything like them, there is a set of neural circuits that has no trouble coherently carrying this information. There is a possible translation manual that will guide us back from the vocalization or inscription eliminativists express to these circuits. These neural structures will differ from the neural circuits of those who explicitly reject eliminativism in ways that our translation manual will presumably shed some light on, giving us a neurological handle on disagreement and on the structural differences in neural circuitry, if any, between asserting p and asserting not-p when p expresses the eliminativist thesis.[43] The physical isomorphism approach faces indeterminacy problems. Any given structure in the brain will be causally related to, and isomorphic in various respects to, many different structures in external reality. But we cannot discriminate the one it is intended to represent or that it is supposed to be true "of". 
These locutions are heavy with just the intentionality that eliminativism denies. Here is a problem of underdetermination or holism that eliminativism shares with intentionality-dependent theories of mind. Here, we can only invoke pragmatic criteria for discriminating successful structural representations—the substitution of true ones for unsuccessful ones—the ones we used to call false.[43] Dennett notes that it is possible that such indeterminacy problems remain only hypothetical, not occurring in reality. He constructs a 4x4 "Quinian crossword puzzle" with words that must satisfy both the across and down definitions. Because the puzzle imposes multiple simultaneous constraints, it has only one solution. Thus we can think of the brain and its relation to the external world as a very large crossword puzzle that must satisfy exceedingly many constraints, to which there is only one possible solution. Therefore, in reality we may end up with only one physical isomorphism between the brain and the external world.[47] When indeterminacy problems arose because the brain is physically isomorphic to multiple structures of the external world, it was urged that a pragmatic approach be used to resolve the problem. Another approach argues that the pragmatic theory of truth should be used from the start to decide whether certain neural circuits store true information about the external world. Pragmatism was founded by Charles Sanders Peirce and William James, and later refined by developments in the philosophy of science. According to pragmatism, to say that general relativity is true is to say that it makes more accurate predictions than other theories (Newtonian mechanics, Aristotle's physics, etc.). If computer circuits lack intentionality and do not store information using propositions, then in what sense can computer A have true information about the world while computer B lacks it?
If the computers were instantiated in autonomous cars, we could test whether A or B successfully completes a cross-country road trip. If A succeeds while B fails, the pragmatist can say that A holds true information about the world, because A's information allows it to make more accurate predictions (relative to B) about the world and to move around its environment more successfully. Similarly, if brain A has information that enables the biological organism to make more accurate predictions about the world and helps the organism successfully move around in the environment, then A has true information about the world. Although not advocates of eliminativism, John Shook and Tibor Solymosi argue that pragmatism is a promising program for understanding advancements in neuroscience and integrating them into a philosophical picture of the world.[56] The reason naturalism cannot be pragmatic in its epistemology starts with its metaphysics. Science tells us that we are components of the natural realm, indeed latecomers in the 13.8-billion-year-old universe. The universe was not organized around our needs and abilities, and what works for us is just a set of contingent facts that could have been otherwise. Once we have begun discovering things about the universe that work for us, science sets out to explain why they do. It is clear that one explanation for why things work for us that we must rule out as unilluminating, indeed question-begging, is that they work for us because they work for us. If something works for us, enables us to meet our needs and wants, there must be an explanation reflecting facts about us and the world that produce the needs and the means to satisfy them.[46] The explanation of why scientific methods work for us must be a causal explanation. It must show what facts about reality make the methods we employ to acquire knowledge suitable for doing so.
The explanation must show that our methods work — for example, have reliable technological application — not by coincidence, still less by miracle or accident. That means there must be some facts, events, or processes that operate in reality and brought about our pragmatic success. The demand that success be explained is a consequence of science's epistemology. If the truth of such explanations consists in the fact that they work for us (as pragmatism requires), then the explanation of why our scientific methods work is that they work. That is not a satisfying explanation.[46] Some philosophers argue that folk psychology is quite successful.[14][57][58] Simulation theorists doubt that people's understanding of the mental can be explained in terms of a theory at all. Rather, they argue that people's understanding of others is based on internal simulations of how they would act and respond in similar situations.[12][13] Jerry Fodor believes in folk psychology's success as a theory, because it makes for an effective way of communicating in everyday life that can be accomplished with few words. Such effectiveness could not be achieved with complex neuroscientific terminology.[14] Another problem for the eliminativist is the consideration that human beings undergo subjective experiences and hence that their conscious mental states have qualia. Since qualia are generally regarded as characteristics of mental states, their existence does not seem compatible with eliminativism.[59] Eliminativists such as Dennett and Rey respond by rejecting qualia.[60][61] Opponents of eliminativism see this response as problematic, since many claim that the existence of qualia is perfectly obvious. Many philosophers consider the "elimination" of qualia implausible, if not incomprehensible.
They assert that, for instance, the existence of pain is simply beyond denial.[59] Admitting that the existence of qualia seems obvious, Dennett nevertheless holds that "qualia" is a theoretical term from an outdated metaphysics stemming from Cartesian intuitions. He argues that a precise analysis shows that the term is in the long run empty and full of contradictions. Eliminativism's claim about qualia is that there is no unbiased evidence for such experiences when regarded as something more than propositional attitudes.[25][62] In other words, it does not deny that pain exists, but holds that it exists independently of its effect on behavior. Influenced by Wittgenstein's Philosophical Investigations, Dennett and Rey have defended eliminativism about qualia even when other aspects of the mental are accepted. Dennett offers philosophical thought experiments to argue that qualia do not exist.[63] He first lists five properties commonly ascribed to qualia. The first thought experiment Dennett uses to show that qualia lack these supposedly necessary properties involves inverted qualia: consider two people who have different qualia but the same external physical behavior. But now the qualia supporter can present an "intrapersonal" variation. Suppose a neurosurgeon operates on your brain and you discover that grass now looks red. Would this not be a case where we could confirm the reality of qualia—by noticing how the qualia have changed while every other aspect of our conscious experience remains the same? Not quite, Dennett replies via the next "intuition pump" (his term for an intuition-based thought experiment), "alternative neurosurgery". There are two different ways the neurosurgeon might have accomplished the inversion. First, they might have tinkered with something "early on", so that signals from the eye when you look at grass contain the information "red" rather than "green". This would result in genuine qualia inversion. But they might instead have tinkered with your memory.
Here your qualia would remain the same, but your memory would be altered so that your current green experience would contradict your earlier memories of grass. You would still feel that the color of grass had changed, but here the qualia have not changed, but your memories have. Would you be able to tell which of these scenarios is correct? No: your perceptual experience tells you that something has changed but not whether your qualia have changed. Dennett concludes, since (by hypothesis) the two surgical procedures can yield exactly the same introspective effects while only one inverts the qualia, nothing in the subject's experience can favor one hypothesis over the other. So unless he seeks outside help, the state of his own qualia must be as unknowable to him as the state of anyone else's. It is questionable, in short, that we have direct, infallible access to our conscious experience.[63] Dennett's second thought experiment involves beer. Many people think of beer as an acquired taste: one's first sip is often unpleasant, but one gradually comes to enjoy it. But wait, Dennett asks—what is the "it" here? Compare the flavor of that first taste with the flavor now. Does the beer taste exactly the same both then and now, only now you like that taste whereas before you disliked it? Or is it that the way beer tastes gradually shifts—so that the taste you did not like at the beginning is not the same taste you now like? In fact most people simply cannot tell which is the correct analysis. But that is to give up again on the idea that we have special and infallible access to our qualia. Further, when forced to choose, many people feel that the second analysis is more plausible. But then if one's reactions to an experience are in any way constitutive of it, the experience is not so "intrinsic" after all—and another qualia property falls.[63] Dennett's third thought experiment involves inverted goggles. 
Scientists have devised special eyeglasses that invert up and down for the wearer. When you put them on, everything looks upside down. When subjects first put them on, they can barely walk around without stumbling. But after subjects wear them for a while, something surprising occurs. They adapt and become able to walk around as easily as before. When you ask them whether they adapted by re-inverting their visual field or simply got used to walking around in an upside-down world, they cannot say. So as in our beer-drinking case, either we simply do not have the special, infallible access to our qualia that would allow us to distinguish the two cases or the way the world looks to us is actually a function of how we respond to the world—in which case qualia are not "intrinsic" properties of experience.[63] Edward Feser objects to Dennett's position as follows. That you need to appeal to third-person neurological evidence to determine whether your memory of your qualia has been tampered with does not seem to show that your qualia themselves—past or present—can be known only by appealing to that evidence. You might still be directly aware of your qualia from the first-person, subjective point of view even if you do not know whether they are the same as the qualia you had yesterday—just as you might really be aware of the article in front of you even if you do not know whether it is the same as the article you saw yesterday. Questions about memory do not necessarily bear on the nature of your awareness of objects present here and now (even if they bear on what you can justifiably claim to know about such objects), whatever those objects happen to be. Dennett's assertion that scientific objectivity requires appealing exclusively to third-person evidence appears mistaken. 
What scientific objectivity requires is not denial of the first-person subjective point of view but rather a means of communicating inter-subjectively about what one can grasp only from that point of view. Given the relational structure first-person phenomena like qualia appear to exhibit—a structure that Carnap devoted great effort to elucidating—such a means seems available: we can communicate what we know about qualia in terms of their structural relations to one another. Dennett fails to see that qualia can be essentially subjective and still relational or non-intrinsic, and thus communicable. This communicability ensures that claims about qualia are epistemologically objective; that is, they can in principle be grasped and evaluated by all competent observers, even though they are claims about phenomena that are arguably not metaphysically objective, i.e., about entities that exist only as grasped by a subject of experience. It is only the former sort of objectivity that science requires. It does not require the latter, and cannot plausibly require it if the first-person realm of qualia is what we know better than anything else.[64] Illusionism is an active program within eliminative materialism that seeks to explain phenomenal consciousness as an illusion. It is promoted by the philosophers Daniel Dennett, Keith Frankish, and Jay Garfield, and the neuroscientist Michael Graziano.[65][66] Graziano has advanced the attention schema theory of consciousness and postulates that consciousness is an illusion.[67][68] According to David Chalmers, proponents argue that once we can explain consciousness as an illusion without the need for a realist view of consciousness, we can construct a debunking argument against realist views of consciousness.[69] This line of argument draws from other debunking arguments, like the evolutionary debunking argument in the field of metaethics.
Such arguments note that morality is explained by evolution without positing moral realism, so there is a sufficient basis to debunk moral realism.[70] Illusionists generally hold that once it is explained why people believe and say they are conscious, the hard problem of consciousness will dissolve. Chalmers agrees that a mechanism for these beliefs and reports can and should be identified using the standard methods of physical science, but disagrees that this would support illusionism, saying that the datum illusionism fails to account for is not reports of consciousness but rather first-person consciousness itself.[71] He separates consciousness from beliefs and reports about consciousness, but holds that a fully satisfactory theory of consciousness should explain how the two are "inextricably intertwined", so that their alignment does not require an inexplicable coincidence.[71] Illusionism has also been criticized by the philosopher Jesse Prinz.[72]
https://en.wikipedia.org/wiki/Eliminative_materialism
Cumulative distribution function: Γ(⌊k+1⌋, λ)/⌊k⌋!, or equivalently e^{−λ} Σ_{j=0}^{⌊k⌋} λ^j/j!, or Q(⌊k+1⌋, λ), where Q is the regularized upper incomplete gamma function. Entropy: λ[1 − log(λ)] + e^{−λ} Σ_{k=0}^{∞} λ^k log(k!)/k!, with simpler asymptotic expressions available for large λ.

In probability theory and statistics, the Poisson distribution (/ˈpwɑːsɒn/) is a discrete probability distribution that expresses the probability of a given number of events occurring in a fixed interval of time if these events occur with a known constant mean rate and independently of the time since the last event.[1] It can also be used for the number of events in other types of intervals than time, and in dimensions greater than 1 (e.g., the number of events in a given area or volume). The Poisson distribution is named after French mathematician Siméon Denis Poisson. It plays an important role for discrete-stable distributions. Under a Poisson distribution with the expectation of λ events in a given interval, the probability of k events in the same interval is:[2]: 60

λ^k e^{−λ} / k!.

For instance, consider a call center which receives an average of λ = 3 calls per minute at all times of day. If the calls are independent, receiving one does not change the probability of when the next one will arrive. Under these assumptions, the number k of calls received during any minute has a Poisson probability distribution. Receiving k = 1 to 4 calls then has a probability of about 0.77, while receiving 0 or at least 5 calls has a probability of about 0.23.
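The call-center figures above can be checked directly from the mass function λ^k e^{−λ}/k!; a minimal sketch in Python (the helper name `poisson_pmf` is my own):

```python
import math

def poisson_pmf(k: int, lam: float) -> float:
    """Probability of exactly k events when the mean number of events is lam."""
    return lam**k * math.exp(-lam) / math.factorial(k)

lam = 3.0  # average of 3 calls per minute
p_1_to_4 = sum(poisson_pmf(k, lam) for k in range(1, 5))
p_0_or_5plus = 1.0 - p_1_to_4

print(f"P(1 <= k <= 4)     = {p_1_to_4:.2f}")      # about 0.77
print(f"P(k = 0 or k >= 5) = {p_0_or_5plus:.2f}")  # about 0.23
```

The complementary pair sums to 1 because the two events partition all possible call counts.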
A classic example used to motivate the Poisson distribution is the number of radioactive decay events during a fixed observation period.[3] The distribution was first introduced by Siméon Denis Poisson (1781–1840) and published together with his probability theory in his work Recherches sur la probabilité des jugements en matière criminelle et en matière civile (1837).[4]: 205–207 The work theorized about the number of wrongful convictions in a given country by focusing on certain random variables N that count, among other things, the number of discrete occurrences (sometimes called "events" or "arrivals") that take place during a time-interval of given length. The result had already been given in 1711 by Abraham de Moivre in De Mensura Sortis seu; de Probabilitate Eventuum in Ludis a Casu Fortuito Pendentibus.[5]: 219 [6]: 14–15 [7]: 193 [8]: 157 This makes it an example of Stigler's law and has prompted some authors to argue that the Poisson distribution should bear the name of de Moivre.[9][10] In 1860, Simon Newcomb fitted the Poisson distribution to the number of stars found in a unit of space.[11] A further practical application was made by Ladislaus Bortkiewicz in 1898. Bortkiewicz showed that the frequency with which soldiers in the Prussian army were accidentally killed by horse kicks could be well modeled by a Poisson distribution.[12]: 23–25 A discrete random variable X is said to have a Poisson distribution with parameter λ > 0 if it has a probability mass function given by:[2]: 60

f(k; λ) = Pr(X = k) = λ^k e^{−λ} / k!,

where k is the number of occurrences (k = 0, 1, 2, …), e is Euler's number, and k! is the factorial of k. The positive real number λ is equal to the expected value of X and also to its variance.[13] The Poisson distribution can be applied to systems with a large number of possible events, each of which is rare. The number of such events that occur during a fixed time interval is, under the right circumstances, a random number with a Poisson distribution.
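The claim that λ is both the mean and the variance can be verified numerically from the mass function. A sketch (λ = 4.2 and the truncation at k = 60 are arbitrary choices; the tail beyond the truncation is negligible at this λ):

```python
import math

def poisson_pmf(k: int, lam: float) -> float:
    # lambda^k * e^(-lambda) / k!
    return lam**k * math.exp(-lam) / math.factorial(k)

lam = 4.2
ks = range(60)  # P(k >= 60) is vanishingly small for lam = 4.2

total = sum(poisson_pmf(k, lam) for k in ks)               # should be ~1
mean = sum(k * poisson_pmf(k, lam) for k in ks)            # should be ~lam
var = sum((k - mean) ** 2 * poisson_pmf(k, lam) for k in ks)  # should be ~lam

print(total, mean, var)
```

Truncating the infinite sum is safe here because the terms decay factorially once k exceeds λ.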
The equation can be adapted if, instead of the average number of events λ, we are given the average rate r at which events occur. Then λ = rt, and:[14]

P(k events in interval t) = (rt)^k e^{−rt} / k!

The Poisson distribution may be useful to model many kinds of count events in time and space. Examples of the occurrence of random points in space are: the locations of asteroid impacts with earth (2-dimensional), the locations of imperfections in a material (3-dimensional), and the locations of trees in a forest (2-dimensional).[15]

The Poisson distribution is an appropriate model if the following assumptions are true: k is the number of times an event occurs in an interval and can take the values 0, 1, 2, ...; events occur independently; the average rate at which events occur is constant; and two events cannot occur at exactly the same instant. If these conditions are true, then k is a Poisson random variable; the distribution of k is a Poisson distribution. The Poisson distribution is also the limit of a binomial distribution, for which the probability of success for each trial equals λ divided by the number of trials, as the number of trials approaches infinity (see Related distributions).

On a particular river, overflow floods occur once every 100 years on average. Calculate the probability of k = 0, 1, 2, 3, 4, 5, or 6 overflow floods in a 100-year interval, assuming the Poisson model is appropriate. Because the average event rate is one overflow flood per 100 years, λ = 1. The probability for 0 to 6 overflow floods in a 100-year period follows directly from the mass function.

As another example, the average number of goals in a World Cup soccer match is reported to be approximately 2.5, and the Poisson model is appropriate.[16] Because the average event rate is 2.5 goals per match, λ = 2.5. The probability for 0 to 7 goals in a match follows in the same way.

Suppose that astronomers estimate that large meteorites (above a certain size) hit the earth on average once every 100 years (λ = 1 event per 100 years), and that the number of meteorite hits follows a Poisson distribution. What is the probability of k = 0 meteorite hits in the next 100 years? Under these assumptions, the probability that no large meteorites hit the earth in the next 100 years is roughly P(0) = e^{−1} ≈ 0.37.
The remaining 1 − 0.37 = 0.63 is the probability of 1, 2, 3, or more large meteorite hits in the next 100 years. In an example above, an overflow flood occurred once every 100 years (λ = 1). The probability of no overflow floods in 100 years was roughly 0.37, by the same calculation. In general, if an event occurs on average once per interval (λ = 1), and the events follow a Poisson distribution, then P(0 events in next interval) = 0.37. In addition, P(exactly one event in next interval) = 0.37, as shown in the table for overflow floods.

The number of students who arrive at the student union per minute will likely not follow a Poisson distribution, because the rate is not constant (low rate during class time, high rate between class times) and the arrivals of individual students are not independent (students tend to come in groups). The non-constant arrival rate may be modeled as a mixed Poisson distribution, and the arrival of groups rather than individual students as a compound Poisson process. The number of magnitude 5 earthquakes per year in a country may not follow a Poisson distribution if one large earthquake increases the probability of aftershocks of similar magnitude.

Examples in which at least one event is guaranteed are not Poisson distributed, but may be modeled using a zero-truncated Poisson distribution. Count distributions in which the number of intervals with zero events is higher than predicted by a Poisson model may be modeled using a zero-inflated model.
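The flood and meteorite figures above (λ = 1, so P(0) = P(1) = e⁻¹ ≈ 0.37) can be tabulated in a few lines of Python:

```python
import math

lam = 1.0  # one event per interval on average
probs = {k: math.exp(-lam) * lam**k / math.factorial(k) for k in range(7)}
for k, p in probs.items():
    print(f"P({k} events) = {p:.4f}")
# P(0) and P(1) are both e**-1, roughly 0.3679, matching the text.
```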
Bounds for the median (ν{\displaystyle \nu }) of the distribution are known and aresharp:[18]λ−ln⁡2≤ν<λ+13.{\displaystyle \lambda -\ln 2\leq \nu <\lambda +{\frac {1}{3}}.} The higher non-centeredmomentsmkof the Poisson distribution areTouchard polynomialsinλ:mk=∑i=0kλi{ki},{\displaystyle m_{k}=\sum _{i=0}^{k}\lambda ^{i}{\begin{Bmatrix}k\\i\end{Bmatrix}},}where the braces { } denoteStirling numbers of the second kind.[19][1]: 6In other words,E[X]=λ,E[X(X−1)]=λ2,E[X(X−1)(X−2)]=λ3,⋯{\displaystyle E[X]=\lambda ,\quad E[X(X-1)]=\lambda ^{2},\quad E[X(X-1)(X-2)]=\lambda ^{3},\cdots }When the expected value is set toλ =1,Dobinski's formulaimplies that then‑th moment is equal to the number ofpartitions of a setof sizen. A simple upper bound is:[20]mk=E[Xk]≤(klog⁡(k/λ+1))k≤λkexp⁡(k22λ).{\displaystyle m_{k}=E[X^{k}]\leq \left({\frac {k}{\log(k/\lambda +1)}}\right)^{k}\leq \lambda ^{k}\exp \left({\frac {k^{2}}{2\lambda }}\right).} IfXi∼Pois⁡(λi){\displaystyle X_{i}\sim \operatorname {Pois} (\lambda _{i})}fori=1,…,n{\displaystyle i=1,\dotsc ,n}areindependent, then∑i=1nXi∼Pois⁡(∑i=1nλi).{\textstyle \sum _{i=1}^{n}X_{i}\sim \operatorname {Pois} \left(\sum _{i=1}^{n}\lambda _{i}\right).}[21]: 65A converse isRaikov's theorem, which says that if the sum of two independent random variables is Poisson-distributed, then so are each of those two independent random variables.[22][23] It is amaximum-entropy distributionamong the set of generalized binomial distributionsBn(λ){\displaystyle B_{n}(\lambda )}with meanλ{\displaystyle \lambda }andn→∞{\displaystyle n\rightarrow \infty },[24]where a generalized binomial distribution is defined as a distribution of the sum of N independent but not identically distributed Bernoulli variables. 
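The additivity property stated above (a sum of independent Poisson variables is Poisson with the summed rate) can be verified numerically by convolving two mass functions; a small sketch with arbitrary rates:

```python
import math

def pmf(k, lam):
    return math.exp(-lam) * lam**k / math.factorial(k)

lam1, lam2 = 2.0, 3.5
# P(X1 + X2 = k) by direct convolution, compared with Pois(lam1 + lam2):
for k in range(15):
    conv = sum(pmf(j, lam1) * pmf(k - j, lam2) for j in range(k + 1))
    assert abs(conv - pmf(k, lam1 + lam2)) < 1e-12
print("convolution of Pois(2.0) and Pois(3.5) matches Pois(5.5)")
```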
DKL⁡(P∥P0)=λ0−λ+λlog⁡λλ0.{\displaystyle \operatorname {D} _{\text{KL}}(P\parallel P_{0})=\lambda _{0}-\lambda +\lambda \log {\frac {\lambda }{\lambda _{0}}}.} P(X≥x)≤e−DKL⁡(Q∥P)max(2,4πDKL⁡(Q∥P)),forx>λ,{\displaystyle P(X\geq x)\leq {\frac {e^{-\operatorname {D} _{\text{KL}}(Q\parallel P)}}{\max {(2,{\sqrt {4\pi \operatorname {D} _{\text{KL}}(Q\parallel P)}}})}},{\text{ for }}x>\lambda ,}whereDKL⁡(Q∥P){\displaystyle \operatorname {D} _{\text{KL}}(Q\parallel P)}is the Kullback–Leibler divergence ofQ=Pois⁡(x){\displaystyle Q=\operatorname {Pois} (x)}fromP=Pois⁡(λ){\displaystyle P=\operatorname {Pois} (\lambda )}. Φ(sign⁡(k−λ)2DKL⁡(Q−∥P))<P(X≤k)<Φ(sign⁡(k+1−λ)2DKL⁡(Q+∥P)),fork>0,{\displaystyle \Phi \left(\operatorname {sign} (k-\lambda ){\sqrt {2\operatorname {D} _{\text{KL}}(Q_{-}\parallel P)}}\right)<P(X\leq k)<\Phi \left(\operatorname {sign} (k+1-\lambda ){\sqrt {2\operatorname {D} _{\text{KL}}(Q_{+}\parallel P)}}\right),{\text{ for }}k>0,}whereDKL⁡(Q−∥P){\displaystyle \operatorname {D} _{\text{KL}}(Q_{-}\parallel P)}is the Kullback–Leibler divergence ofQ−=Pois⁡(k){\displaystyle Q_{-}=\operatorname {Pois} (k)}fromP=Pois⁡(λ){\displaystyle P=\operatorname {Pois} (\lambda )}andDKL⁡(Q+∥P){\displaystyle \operatorname {D} _{\text{KL}}(Q_{+}\parallel P)}is the Kullback–Leibler divergence ofQ+=Pois⁡(k+1){\displaystyle Q_{+}=\operatorname {Pois} (k+1)}fromP{\displaystyle P}. LetX∼Pois⁡(λ){\displaystyle X\sim \operatorname {Pois} (\lambda )}andY∼Pois⁡(μ){\displaystyle Y\sim \operatorname {Pois} (\mu )}be independent random variables, withλ<μ,{\displaystyle \lambda <\mu ,}then we have thate−(μ−λ)2(λ+μ)2−e−(λ+μ)2λμ−e−(λ+μ)4λμ≤P(X−Y≥0)≤e−(μ−λ)2{\displaystyle {\frac {e^{-({\sqrt {\mu }}-{\sqrt {\lambda }})^{2}}}{(\lambda +\mu )^{2}}}-{\frac {e^{-(\lambda +\mu )}}{2{\sqrt {\lambda \mu }}}}-{\frac {e^{-(\lambda +\mu )}}{4\lambda \mu }}\leq P(X-Y\geq 0)\leq e^{-({\sqrt {\mu }}-{\sqrt {\lambda }})^{2}}} The upper bound is proved using a standard Chernoff bound. 
The lower bound can be proved by noting thatP(X−Y≥0∣X+Y=i){\displaystyle P(X-Y\geq 0\mid X+Y=i)}is the probability thatZ≥i2,{\textstyle Z\geq {\frac {i}{2}},}whereZ∼Bin⁡(i,λλ+μ),{\textstyle Z\sim \operatorname {Bin} \left(i,{\frac {\lambda }{\lambda +\mu }}\right),}which is bounded below by1(i+1)2e−iD(0.5‖λλ+μ),{\textstyle {\frac {1}{(i+1)^{2}}}e^{-iD\left(0.5\|{\frac {\lambda }{\lambda +\mu }}\right)},}whereD{\displaystyle D}isrelative entropy(See the entry onbounds on tails of binomial distributionsfor details). Further noting thatX+Y∼Pois⁡(λ+μ),{\displaystyle X+Y\sim \operatorname {Pois} (\lambda +\mu ),}and computing a lower bound on the unconditional probability gives the result. More details can be found in the appendix of Kamath et al.[30] The Poisson distribution can be derived as a limiting case to thebinomial distributionas the number of trials goes to infinity and theexpectednumber of successes remains fixed — seelaw of rare eventsbelow. Therefore, it can be used as an approximation of the binomial distribution ifnis sufficiently large andpis sufficiently small. The Poisson distribution is a good approximation of the binomial distribution ifnis at least 20 andpis smaller than or equal to 0.05, and an excellent approximation ifn≥ 100 andn p≤ 10.[31]LettingFB{\displaystyle F_{\mathrm {B} }}andFP{\displaystyle F_{\mathrm {P} }}be the respectivecumulative density functionsof the binomial and Poisson distributions, one has:FB(k;n,p)≈FP(k;λ=np).{\displaystyle F_{\mathrm {B} }(k;n,p)\ \approx \ F_{\mathrm {P} }(k;\lambda =np).} One derivation of this usesprobability-generating functions.[32]Consider aBernoulli trial(coin-flip) whose probability of one success (or expected number of successes) isλ≤1{\displaystyle \lambda \leq 1}within a given interval. Split the interval intonparts, and perform a trial in each subinterval with probabilityλn{\displaystyle {\tfrac {\lambda }{n}}}. 
The probability ofksuccesses out ofntrials over the entire interval is then given by the binomial distribution pk(n)=(nk)(λn)k(1−λn)n−k,{\displaystyle p_{k}^{(n)}={\binom {n}{k}}\left({\frac {\lambda }{n}}\right)^{\!k}\left(1{-}{\frac {\lambda }{n}}\right)^{\!n-k},} whose generating function is: P(n)(x)=∑k=0npk(n)xk=(1−λn+λnx)n.{\displaystyle P^{(n)}(x)=\sum _{k=0}^{n}p_{k}^{(n)}x^{k}=\left(1-{\frac {\lambda }{n}}+{\frac {\lambda }{n}}x\right)^{n}.} Taking the limit asnincreases to infinity (withxfixed) and applying the product limit definition of theexponential function, this reduces to the generating function of the Poisson distribution: limn→∞P(n)(x)=limn→∞(1+λ(x−1)n)n=eλ(x−1)=∑k=0∞e−λλkk!xk.{\displaystyle \lim _{n\to \infty }P^{(n)}(x)=\lim _{n\to \infty }\left(1{+}{\tfrac {\lambda (x-1)}{n}}\right)^{n}=e^{\lambda (x-1)}=\sum _{k=0}^{\infty }e^{-\lambda }{\frac {\lambda ^{k}}{k!}}x^{k}.} AssumeX1∼Pois⁡(λ1),X2∼Pois⁡(λ2),…,Xn∼Pois⁡(λn){\displaystyle X_{1}\sim \operatorname {Pois} (\lambda _{1}),X_{2}\sim \operatorname {Pois} (\lambda _{2}),\dots ,X_{n}\sim \operatorname {Pois} (\lambda _{n})}whereλ1+λ2+⋯+λn=1,{\displaystyle \lambda _{1}+\lambda _{2}+\dots +\lambda _{n}=1,}then[38](X1,X2,…,Xn){\displaystyle (X_{1},X_{2},\dots ,X_{n})}ismultinomially distributed(X1,X2,…,Xn)∼Mult⁡(N,λ1,λ2,…,λn){\displaystyle (X_{1},X_{2},\dots ,X_{n})\sim \operatorname {Mult} (N,\lambda _{1},\lambda _{2},\dots ,\lambda _{n})}conditioned onN=X1+X2+…Xn.{\displaystyle N=X_{1}+X_{2}+\dots X_{n}.} This means[27]: 101-102, among other things, that for any nonnegative functionf(x1,x2,…,xn),{\displaystyle f(x_{1},x_{2},\dots ,x_{n}),}if(Y1,Y2,…,Yn)∼Mult⁡(m,p){\displaystyle (Y_{1},Y_{2},\dots ,Y_{n})\sim \operatorname {Mult} (m,\mathbf {p} )}is multinomially distributed, thenE⁡[f(Y1,Y2,…,Yn)]≤emE⁡[f(X1,X2,…,Xn)]{\displaystyle \operatorname {E} [f(Y_{1},Y_{2},\dots ,Y_{n})]\leq e{\sqrt {m}}\operatorname {E} [f(X_{1},X_{2},\dots ,X_{n})]}where(X1,X2,…,Xn)∼Pois⁡(p).{\displaystyle 
(X_{1},X_{2},\dots ,X_{n})\sim \operatorname {Pois} (\mathbf {p} ).} The factor ofem{\displaystyle e{\sqrt {m}}}can be replaced by 2 iff{\displaystyle f}is further assumed to be monotonically increasing or decreasing. This distribution has been extended to thebivariatecase.[39]Thegenerating functionfor this distribution isg(u,v)=exp⁡[(θ1−θ12)(u−1)+(θ2−θ12)(v−1)+θ12(uv−1)]{\displaystyle g(u,v)=\exp[(\theta _{1}-\theta _{12})(u-1)+(\theta _{2}-\theta _{12})(v-1)+\theta _{12}(uv-1)]} withθ1,θ2>θ12>0{\displaystyle \theta _{1},\theta _{2}>\theta _{12}>0} The marginal distributions are Poisson(θ1) and Poisson(θ2) and the correlation coefficient is limited to the range0≤ρ≤min{θ1θ2,θ2θ1}{\displaystyle 0\leq \rho \leq \min \left\{{\sqrt {\frac {\theta _{1}}{\theta _{2}}}},{\sqrt {\frac {\theta _{2}}{\theta _{1}}}}\right\}} A simple way to generate a bivariate Poisson distributionX1,X2{\displaystyle X_{1},X_{2}}is to take three independent Poisson distributionsY1,Y2,Y3{\displaystyle Y_{1},Y_{2},Y_{3}}with meansλ1,λ2,λ3{\displaystyle \lambda _{1},\lambda _{2},\lambda _{3}}and then setX1=Y1+Y3,X2=Y2+Y3.{\displaystyle X_{1}=Y_{1}+Y_{3},X_{2}=Y_{2}+Y_{3}.}The probability function of the bivariate Poisson distribution isPr(X1=k1,X2=k2)=exp⁡(−λ1−λ2−λ3)λ1k1k1!λ2k2k2!∑k=0min(k1,k2)(k1k)(k2k)k!(λ3λ1λ2)k{\displaystyle \Pr(X_{1}=k_{1},X_{2}=k_{2})=\exp \left(-\lambda _{1}-\lambda _{2}-\lambda _{3}\right){\frac {\lambda _{1}^{k_{1}}}{k_{1}!}}{\frac {\lambda _{2}^{k_{2}}}{k_{2}!}}\sum _{k=0}^{\min(k_{1},k_{2})}{\binom {k_{1}}{k}}{\binom {k_{2}}{k}}k!\left({\frac {\lambda _{3}}{\lambda _{1}\lambda _{2}}}\right)^{k}} The free Poisson distribution[40]with jump sizeα{\displaystyle \alpha }and rateλ{\displaystyle \lambda }arises infree probabilitytheory as the limit of repeatedfree convolution((1−λN)δ0+λNδα)⊞N{\displaystyle \left(\left(1-{\frac {\lambda }{N}}\right)\delta _{0}+{\frac {\lambda }{N}}\delta _{\alpha }\right)^{\boxplus N}}asN→ ∞. 
In other words, letXN{\displaystyle X_{N}}be random variables so thatXN{\displaystyle X_{N}}has valueα{\displaystyle \alpha }with probabilityλN{\textstyle {\frac {\lambda }{N}}}and value 0 with the remaining probability. Assume also that the familyX1,X2,…{\displaystyle X_{1},X_{2},\ldots }arefreely independent. Then the limit asN→∞{\displaystyle N\to \infty }of the law ofX1+⋯+XN{\displaystyle X_{1}+\cdots +X_{N}}is given by the Free Poisson law with parametersλ,α.{\displaystyle \lambda ,\alpha .} This definition is analogous to one of the ways in which the classical Poisson distribution is obtained from a (classical) Poisson process. The measure associated to the free Poisson law is given by[41]μ={(1−λ)δ0+ν,if0≤λ≤1ν,ifλ>1,{\displaystyle \mu ={\begin{cases}(1-\lambda )\delta _{0}+\nu ,&{\text{if }}0\leq \lambda \leq 1\\\nu ,&{\text{if }}\lambda >1,\end{cases}}}whereν=12παt4λα2−(t−α(1+λ))2dt{\displaystyle \nu ={\frac {1}{2\pi \alpha t}}{\sqrt {4\lambda \alpha ^{2}-(t-\alpha (1+\lambda ))^{2}}}\,dt}and has support[α(1−λ)2,α(1+λ)2].{\displaystyle [\alpha (1-{\sqrt {\lambda }})^{2},\alpha (1+{\sqrt {\lambda }})^{2}].} This law also arises inrandom matrixtheory as theMarchenko–Pastur law. Itsfree cumulantsare equal toκn=λαn.{\displaystyle \kappa _{n}=\lambda \alpha ^{n}.} We give values of some important transforms of the free Poisson law; the computation can be found in e.g. in the bookLectures on the Combinatorics of Free Probabilityby A. Nica and R. 
Speicher.[42]

The R-transform of the free Poisson law is given by

R(z) = λα / (1 − αz).

The Cauchy transform (which is the negative of the Stieltjes transformation) is given by

G(z) = [z + α − λα − √((z − α(1 + λ))² − 4λα²)] / (2αz).

The S-transform is given by

S(z) = 1 / (z + λ)

in the case that α = 1.

Poisson's probability mass function f(k; λ) can be expressed in a form similar to the product distribution of a Weibull distribution and a variant form of the stable count distribution. The variable (k + 1) can be regarded as the inverse of Lévy's stability parameter in the stable count distribution:

f(k; λ) = ∫₀^∞ (1/u) W_{k+1}(λ/u) [(k + 1) u^k 𝔑_{1/(k+1)}(u^{k+1})] du,

where 𝔑_α(ν) is a standard stable count distribution of shape α = 1/(k + 1), and W_{k+1}(x) is a standard Weibull distribution of shape k + 1.

Given a sample of n measured values kᵢ ∈ {0, 1, ...}, for i = 1, ..., n, we wish to estimate the value of the parameter λ of the Poisson population from which the sample was drawn. The maximum likelihood estimate is[43]

λ̂_MLE = (1/n) Σᵢ₌₁ⁿ kᵢ.

Since each observation has expectation λ, so does the sample mean. Therefore, the maximum likelihood estimate is an unbiased estimator of λ. It is also an efficient estimator since its variance achieves the Cramér–Rao lower bound (CRLB).[44] Hence it is minimum-variance unbiased. Also it can be proven that the sum (and hence the sample mean, as it is a one-to-one function of the sum) is a complete and sufficient statistic for λ.
To prove sufficiency we may use thefactorization theorem. Consider partitioning the probability mass function of the joint Poisson distribution for the sample into two parts: one that depends solely on the samplex{\displaystyle \mathbf {x} }, calledh(x){\displaystyle h(\mathbf {x} )}, and one that depends on the parameterλ{\displaystyle \lambda }and the samplex{\displaystyle \mathbf {x} }only through the functionT(x).{\displaystyle T(\mathbf {x} ).}ThenT(x){\displaystyle T(\mathbf {x} )}is a sufficient statistic forλ.{\displaystyle \lambda .} The first termh(x){\displaystyle h(\mathbf {x} )}depends only onx{\displaystyle \mathbf {x} }. The second termg(T(x)|λ){\displaystyle g(T(\mathbf {x} )|\lambda )}depends on the sample only throughT(x)=∑i=1nxi.{\textstyle T(\mathbf {x} )=\sum _{i=1}^{n}x_{i}.}Thus,T(x){\displaystyle T(\mathbf {x} )}is sufficient. To find the parameterλthat maximizes the probability function for the Poisson population, we can use the logarithm of the likelihood function: We take the derivative ofℓ{\displaystyle \ell }with respect toλand compare it to zero: Solving forλgives a stationary point. Soλis the average of thekivalues. Obtaining the sign of the second derivative ofLat the stationary point will determine what kind of extreme valueλis. Evaluating the second derivativeat the stationary pointgives: which is the negative ofntimes the reciprocal of the average of the ki. This expression is negative when the average is positive. If this is satisfied, then the stationary point maximizes the probability function. 
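The derivative argument above can be checked numerically; a short sketch with hypothetical count data (the sample values are ours):

```python
import math

sample = [2, 0, 3, 1, 2, 4, 1, 0, 2, 3]   # hypothetical counts
lam_hat = sum(sample) / len(sample)        # the sample mean, here 1.8

def log_likelihood(lam):
    return sum(k * math.log(lam) - lam - math.log(math.factorial(k))
               for k in sample)

# The sample mean is the stationary point, and it is a maximum:
for nearby in (lam_hat - 0.1, lam_hat + 0.1):
    assert log_likelihood(nearby) < log_likelihood(lam_hat)
print(f"lambda_hat = {lam_hat}")  # 1.8
```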
Forcompleteness, a family of distributions is said to be complete if and only ifE(g(T))=0{\displaystyle E(g(T))=0}implies thatPλ(g(T)=0)=1{\displaystyle P_{\lambda }(g(T)=0)=1}for allλ.{\displaystyle \lambda .}If the individualXi{\displaystyle X_{i}}are iidPo(λ),{\displaystyle \mathrm {Po} (\lambda ),}thenT(x)=∑i=1nXi∼Po(nλ).{\textstyle T(\mathbf {x} )=\sum _{i=1}^{n}X_{i}\sim \mathrm {Po} (n\lambda ).}Knowing the distribution we want to investigate, it is easy to see that the statistic is complete. For this equality to hold,g(t){\displaystyle g(t)}must be 0. This follows from the fact that none of the other terms will be 0 for allt{\displaystyle t}in the sum and for all possible values ofλ.{\displaystyle \lambda .}Hence,E(g(T))=0{\displaystyle E(g(T))=0}for allλ{\displaystyle \lambda }implies thatPλ(g(T)=0)=1,{\displaystyle P_{\lambda }(g(T)=0)=1,}and the statistic has been shown to be complete. Theconfidence intervalfor the mean of a Poisson distribution can be expressed using the relationship between the cumulative distribution functions of the Poisson andchi-squared distributions. The chi-squared distribution is itself closely related to thegamma distribution, and this leads to an alternative expression. Given an observationkfrom a Poisson distribution with meanμ, a confidence interval forμwith confidence level1 –αis or equivalently, whereχ2(p;n){\displaystyle \chi ^{2}(p;n)}is thequantile function(corresponding to a lower tail areap) of the chi-squared distribution withndegrees of freedom andF−1(p;n,1){\displaystyle F^{-1}(p;n,1)}is the quantile function of agamma distributionwith shape parameter n and scale parameter 1.[8]: 176-178[45]This interval is 'exact' in the sense that itscoverage probabilityis never less than the nominal1 –α. 
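The interval estimate for the mean given a single observed count k can be sketched with only the standard library by using the Wilson–Hilferty normal approximation to the chi-squared quantiles (the function name and the k = 10 example are ours; an exact interval would use chi-squared quantiles directly):

```python
from statistics import NormalDist

def poisson_ci(k: int, alpha: float = 0.05):
    """Approximate 1 - alpha confidence interval for the Poisson mean,
    via the Wilson-Hilferty approximation to chi-squared quantiles."""
    z = NormalDist().inv_cdf(1 - alpha / 2)
    lower = k * (1 - 1 / (9 * k) - z / (3 * k**0.5)) ** 3 if k > 0 else 0.0
    upper = (k + 1) * (1 - 1 / (9 * (k + 1)) + z / (3 * (k + 1) ** 0.5)) ** 3
    return lower, upper

lo, hi = poisson_ci(10)
print(f"approximate 95% CI for mu given k = 10: ({lo:.2f}, {hi:.2f})")
```

For k = 10 this gives roughly (4.8, 18.4), close to the exact chi-squared interval.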
When quantiles of the gamma distribution are not available, an accurate approximation to this exact interval has been proposed (based on theWilson–Hilferty transformation):[46] wherezα/2{\displaystyle z_{\alpha /2}}denotes thestandard normal deviatewith upper tail areaα / 2. For application of these formulae in the same context as above (given a sample ofnmeasured valueskieach drawn from a Poisson distribution with meanλ), one would set calculate an interval forμ=n λ,and then derive the interval forλ. InBayesian inference, theconjugate priorfor the rate parameterλof the Poisson distribution is thegamma distribution.[47]Let denote thatλis distributed according to the gammadensitygparameterized in terms of ashape parameterαand an inversescale parameterβ: Then, given the same sample ofnmeasured valueskias before, and a prior of Gamma(α,β), the posterior distribution is Note that the posterior mean is linear and is given by It can be shown that gamma distribution is the only prior that induces linearity of the conditional mean. Moreover, a converse result exists which states that if the conditional mean is close to a linear function in theL2{\displaystyle L_{2}}distance than the prior distribution ofλmust be close to gamma distribution inLevy distance.[48] The posterior mean E[λ] approaches the maximum likelihood estimateλ^MLE{\displaystyle {\widehat {\lambda }}_{\mathrm {MLE} }}in the limit asα→0,β→0,{\displaystyle \alpha \to 0,\beta \to 0,}which follows immediately from the general expression of the mean of thegamma distribution. Theposterior predictive distributionfor a single additional observation is anegative binomial distribution,[49]: 53sometimes called a gamma–Poisson distribution. 
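The conjugate gamma update is simple enough to carry out by hand; a sketch with hypothetical hyperparameters and data (shape α, rate β):

```python
# Prior: lambda ~ Gamma(alpha, beta) (shape, rate).
# Data: n iid Poisson counts k_1..k_n.
# Posterior: Gamma(alpha + sum(k_i), beta + n).
alpha, beta = 2.0, 1.0                    # hypothetical prior
sample = [2, 0, 3, 1, 2, 4, 1, 0, 2, 3]  # hypothetical counts
alpha_post = alpha + sum(sample)          # 2 + 18 = 20
beta_post = beta + len(sample)            # 1 + 10 = 11
posterior_mean = alpha_post / beta_post   # between the MLE 1.8 and
print(f"posterior Gamma({alpha_post:g}, {beta_post:g}), "
      f"mean {posterior_mean:.3f}")       # the prior mean 2.0
```

Shrinking the prior (α → 0, β → 0) pushes the posterior mean toward the maximum likelihood estimate sum(kᵢ)/n, as stated above.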
SupposeX1,X2,…,Xp{\displaystyle X_{1},X_{2},\dots ,X_{p}}is a set of independent random variables from a set ofp{\displaystyle p}Poisson distributions, each with a parameterλi,{\displaystyle \lambda _{i},}i=1,…,p,{\displaystyle i=1,\dots ,p,}and we would like to estimate these parameters. Then, Clevenson and Zidek show that under the normalized squared error lossL(λ,λ^)=∑i=1pλi−1(λ^i−λi)2,{\textstyle L(\lambda ,{\hat {\lambda }})=\sum _{i=1}^{p}\lambda _{i}^{-1}({\hat {\lambda }}_{i}-\lambda _{i})^{2},}whenp>1,{\displaystyle p>1,}then, similar as inStein's examplefor the Normal means, the MLE estimatorλ^i=Xi{\displaystyle {\hat {\lambda }}_{i}=X_{i}}isinadmissible.[50] In this case, a family ofminimax estimatorsis given for any0<c≤2(p−1){\displaystyle 0<c\leq 2(p-1)}andb≥(p−2+p−1){\displaystyle b\geq (p-2+p^{-1})}as[51] Some applications of the Poisson distribution tocount data(number of events):[52] More examples of counting events that may be modelled as Poisson processes include: Inprobabilistic number theory,Gallaghershowed in 1976 that, if a certain version of the unprovedprime r-tuple conjectureholds,[61]then the counts ofprime numbersin short intervals would obey a Poisson distribution.[62] The rate of an event is related to the probability of an event occurring in some small subinterval (of time, space or otherwise). In the case of the Poisson distribution, one assumes that there exists a small enough subinterval for which the probability of an event occurring twice is "negligible". With this assumption one can derive the Poisson distribution from the binomial one, given only the information of expected number of total events in the whole interval. 
Let the total number of events in the whole interval be denoted byλ.{\displaystyle \lambda .}Divide the whole interval inton{\displaystyle n}subintervalsI1,…,In{\displaystyle I_{1},\dots ,I_{n}}of equal size, such thatn>λ{\displaystyle n>\lambda }(since we are interested in only very small portions of the interval this assumption is meaningful). This means that the expected number of events in each of thensubintervals is equal toλ/n.{\displaystyle \lambda /n.} Now we assume that the occurrence of an event in the whole interval can be seen as a sequence ofnBernoulli trials, where thei{\displaystyle i}-thBernoulli trialcorresponds to looking whether an event happens at the subintervalIi{\displaystyle I_{i}}with probabilityλ/n.{\displaystyle \lambda /n.}The expected number of total events inn{\displaystyle n}such trials would beλ,{\displaystyle \lambda ,}the expected number of total events in the whole interval. Hence for each subdivision of the interval we have approximated the occurrence of the event as a Bernoulli process of the formB(n,λ/n).{\displaystyle {\textrm {B}}(n,\lambda /n).}As we have noted before we want to consider only very small subintervals. Therefore, we take the limit asn{\displaystyle n}goes to infinity. In this case thebinomial distributionconverges to what is known as the Poisson distribution by thePoisson limit theorem. In several of the above examples — such as the number of mutations in a given sequence of DNA — the events being counted are actually the outcomes of discrete trials, and would more precisely be modelled using thebinomial distribution, that isX∼B(n,p).{\displaystyle X\sim {\textrm {B}}(n,p).} In such casesnis very large andpis very small (and so the expectationn pis of intermediate magnitude). 
Then the distribution may be approximated by the less cumbersome Poisson distribution X ~ Pois(np).

This approximation is sometimes known as the law of rare events,[63]: 5 since each of the n individual Bernoulli events rarely occurs. The name "law of rare events" may be misleading because the total count of success events in a Poisson process need not be rare if the parameter np is not small. For example, the number of telephone calls to a busy switchboard in one hour follows a Poisson distribution with the events appearing frequent to the operator, but they are rare from the point of view of the average member of the population, who is very unlikely to make a call to that switchboard in that hour. The variance of the binomial distribution is 1 − p times that of the Poisson distribution, so the two are almost equal when p is very small.

The word law is sometimes used as a synonym of probability distribution, and convergence in law means convergence in distribution. Accordingly, the Poisson distribution is sometimes called the "law of small numbers" because it is the probability distribution of the number of occurrences of an event that happens rarely but has very many opportunities to happen. The Law of Small Numbers is a book by Ladislaus Bortkiewicz about the Poisson distribution, published in 1898.[12][64]

The Poisson distribution arises as the number of points of a Poisson point process located in some finite region. More specifically, if D is some region space, for example Euclidean space R^d, for which |D|, the area, volume or, more generally, the Lebesgue measure of the region is finite, and if N(D) denotes the number of points in D, then

P(N(D) = k) = (λ|D|)^k e^{−λ|D|} / k!.

Poisson regression and negative binomial regression are useful for analyses where the dependent (response) variable is the count (0, 1, 2, ...) of the number of events or occurrences in an interval. The Luria–Delbrück experiment tested against the hypothesis of Lamarckian evolution, which should result in a Poisson distribution.
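The quality of the rare-events approximation is easy to inspect numerically; a sketch comparing the two cumulative distribution functions in the n ≥ 100, np ≤ 10 regime cited earlier:

```python
import math

def binom_cdf(k, n, p):
    return sum(math.comb(n, j) * p**j * (1 - p)**(n - j) for j in range(k + 1))

def poisson_cdf(k, lam):
    return math.exp(-lam) * sum(lam**j / math.factorial(j) for j in range(k + 1))

n, p = 100, 0.05  # n*p = 5, the "excellent approximation" regime
worst = max(abs(binom_cdf(k, n, p) - poisson_cdf(k, n * p)) for k in range(21))
print(f"max CDF difference for n={n}, p={p}: {worst:.4f}")
```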
Katz and Miledi measured themembrane potentialwith and without the presence ofacetylcholine(ACh).[65]When ACh is present,ion channelson the membrane would be open randomly at a small fraction of the time. As there are a large number of ion channels each open for a small fraction of the time, the total number of ion channels open at any moment is Poisson distributed. When ACh is not present, effectively no ion channels are open. The membrane potential isV=NopenVion+V0+Vnoise{\displaystyle V=N_{\text{open}}V_{\text{ion}}+V_{0}+V_{\text{noise}}}. Subtracting the effect of noise, Katz and Miledi found the mean and variance of membrane potential to be8.5×10−3V,(29.2×10−6V)2{\displaystyle 8.5\times 10^{-3}\;\mathrm {V} ,(29.2\times 10^{-6}\;\mathrm {V} )^{2}}, givingVion=10−7V{\displaystyle V_{\text{ion}}=10^{-7}\;\mathrm {V} }. (pp. 94-95[66]) During each cellular replication event, the number of mutations is roughly Poisson distributed.[67]For example, the HIV virus has 10,000 base pairs, and has a mutation rate of about 1 per 30,000 base pairs, meaning the number of mutations per replication event is distributed asPois(1/3){\displaystyle \mathrm {Pois} (1/3)}. (p. 64[66]) In a Poisson process, the number of observed occurrences fluctuates about its meanλwith astandard deviationσk=λ.{\displaystyle \sigma _{k}={\sqrt {\lambda }}.}These fluctuations are denoted asPoisson noiseor (particularly in electronics) asshot noise. The correlation of the mean and standard deviation in counting independent discrete occurrences is useful scientifically. By monitoring how the fluctuations vary with the mean signal, one can estimate the contribution of a single occurrence,even if that contribution is too small to be detected directly. For example, the chargeeon an electron can be estimated by correlating the magnitude of anelectric currentwith itsshot noise. 
If N electrons pass a point in a given time t on the average, the mean current is I = eN/t; since the current fluctuations should be of the order σ_I = e√N/t (i.e., the standard deviation of the Poisson process), the charge e can be estimated from the ratio tσ_I²/I.[citation needed]

An everyday example is the graininess that appears as photographs are enlarged; the graininess is due to Poisson fluctuations in the number of reduced silver grains, not to the individual grains themselves. By correlating the graininess with the degree of enlargement, one can estimate the contribution of an individual grain (which is otherwise too small to be seen unaided).[citation needed]

In causal set theory the discrete elements of spacetime follow a Poisson distribution in the volume. The Poisson distribution also appears in quantum mechanics, especially quantum optics. Namely, for a quantum harmonic oscillator system in a coherent state, the probability of measuring a particular energy level has a Poisson distribution.

The Poisson distribution poses two different tasks for dedicated software libraries: evaluating the distribution P(k; λ), and drawing random numbers according to that distribution. Computing P(k; λ) for given k and λ is a trivial task that can be accomplished by using the standard definition of P(k; λ) in terms of exponential, power, and factorial functions. However, the conventional definition of the Poisson distribution contains two terms that can easily overflow on computers: λ^k and k!. The fraction of λ^k to k! can also produce a rounding error that is very large compared to e^{−λ}, and therefore give an erroneous result. For numerical stability the Poisson probability mass function should therefore be evaluated as

f(k; λ) = exp(k ln λ − λ − ln Γ(k + 1)),

which is mathematically equivalent but numerically stable.
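A sketch of this log-domain evaluation strategy, so that λ^k and k! never appear as intermediates:

```python
import math

def poisson_pmf_naive(k, lam):
    return lam**k * math.exp(-lam) / math.factorial(k)  # overflows for large inputs

def poisson_pmf_stable(k, lam):
    # exp(k*ln(lam) - lam - lgamma(k + 1)): every intermediate stays moderate.
    return math.exp(k * math.log(lam) - lam - math.lgamma(k + 1))

# Both forms agree where the naive one is representable...
assert abs(poisson_pmf_stable(5, 3.0) - poisson_pmf_naive(5, 3.0)) < 1e-12
# ...but only the stable form survives large k and lam, where lam**k
# would overflow a double:
print(poisson_pmf_stable(1000, 1000.0))  # about 0.0126
```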
The natural logarithm of the Gamma function can be obtained using the lgamma function in the C standard library (C99 version) or R, the gammaln function in MATLAB or SciPy, or the log_gamma function in Fortran 2008 and later. Some computing languages also provide built-in functions to evaluate the Poisson distribution.

The less trivial task is to draw an integer random variate from the Poisson distribution with given λ. A simple algorithm to generate random Poisson-distributed numbers (pseudo-random number sampling) has been given by Knuth.[70]: 137–138 The complexity is linear in the returned value k, which is λ on average. There are many other algorithms to improve this. Some are given in Ahrens & Dieter, see § References below.

For large values of λ, the value of L = e^{−λ} may be so small that it is hard to represent. This can be solved by a change to the algorithm which uses an additional parameter STEP such that e^{−STEP} does not underflow.[citation needed] The choice of STEP depends on the threshold of overflow. For double precision floating point format the threshold is near e^{700}, so 500 should be a safe STEP. Other solutions for large values of λ include rejection sampling and using Gaussian approximation.

Inverse transform sampling is simple and efficient for small values of λ, and requires only one uniform random number u per sample. Cumulative probabilities are examined in turn until one exceeds u.
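Knuth's method, as described above, can be sketched in a few lines; both the linear-in-λ running time and the e⁻λ underflow for large λ are visible in the code:

```python
import math
import random

def knuth_poisson(lam: float, rng: random.Random) -> int:
    """Multiply uniforms until the product drops below e**-lam;
    the number of factors consumed before that point is the variate."""
    threshold = math.exp(-lam)  # underflows to 0.0 for lam beyond ~745
    k, product = 0, 1.0
    while True:
        product *= rng.random()
        if product <= threshold:
            return k
        k += 1

rng = random.Random(12345)
draws = [knuth_poisson(4.2, rng) for _ in range(50_000)]
mean = sum(draws) / len(draws)
print(f"sample mean = {mean:.2f}  (expected 4.2)")
```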
https://en.wikipedia.org/wiki/Poisson_distribution
In various interpretations of quantum mechanics, wave function collapse, also called reduction of the state vector,[1] occurs when a wave function, initially in a superposition of several eigenstates, reduces to a single eigenstate due to interaction with the external world. This interaction is called an observation and is the essence of a measurement in quantum mechanics, which connects the wave function with classical observables such as position and momentum. Collapse is one of the two processes by which quantum systems evolve in time; the other is the continuous evolution governed by the Schrödinger equation.[2] In the Copenhagen interpretation, wave function collapse connects quantum to classical models, with a special role for the observer. By contrast, objective-collapse theories propose an origin in physical processes. In the many-worlds interpretation, collapse does not exist; all wave function outcomes occur, while quantum decoherence accounts for the appearance of collapse. Historically, Werner Heisenberg was the first to use the idea of wave function reduction to explain quantum measurement.[3][4] In quantum mechanics each measurable physical quantity of a quantum system is called an observable which, for example, could be the position r and the momentum p, but also energy E, z components of spin (s_z), and so on. The observable acts as a linear function on the states of the system; its eigenvectors correspond to the quantum states (i.e. eigenstates) and the eigenvalues to the possible values of the observable. The collection of eigenstate/eigenvalue pairs represents all possible values of the observable.
Writing φ_i for an eigenstate and c_i for the corresponding coefficient, any arbitrary state of the quantum system can be expressed as a vector using bra–ket notation: |ψ⟩ = Σ_i c_i |φ_i⟩. The kets {|φ_i⟩} specify the different available quantum "alternatives", i.e., particular quantum states. The wave function is a specific representation of a quantum state. Wave functions can therefore always be expressed as eigenstates of an observable, though the converse is not necessarily true. To account for the experimental result that repeated measurements of a quantum system give the same results, the theory postulates a "collapse" or "reduction of the state vector" upon observation,[5]: 566 abruptly converting an arbitrary state into a single-component eigenstate of the observable: |ψ⟩ = Σ_i c_i |φ_i⟩ → |φ_i⟩, where the arrow represents a measurement of the observable corresponding to the φ basis.[6] For any single event, only one eigenvalue is measured, chosen randomly from among the possible values. The complex coefficients {c_i} in the expansion of a quantum state in terms of eigenstates {|φ_i⟩}, |ψ⟩ = Σ_i c_i |φ_i⟩, can be written as a (complex) overlap of the corresponding eigenstate and the quantum state: c_i = ⟨φ_i|ψ⟩. They are called the probability amplitudes. The square modulus |c_i|² is the probability that a measurement of the observable yields the eigenstate |φ_i⟩.
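As an illustrative sketch (the two-state amplitudes below are hypothetical, chosen only so that |c₀|² + |c₁|² = 1), the Born rule and a simulated collapse can be written as:

```python
import random

random.seed(0)  # seeded only so repeated runs give the same counts

# Hypothetical two-state superposition |psi> = c0|phi0> + c1|phi1>.
c = [complex(3, 0) / 5, complex(0, 4) / 5]

# Born rule: the probability of collapsing onto |phi_i> is |c_i|^2.
probs = [abs(ci) ** 2 for ci in c]   # [0.36, 0.64]; sums to 1 for a normalized state

def measure() -> int:
    """Simulate one measurement: the state 'collapses' onto one random eigenstate."""
    r, acc = random.random(), 0.0
    for i, p in enumerate(probs):
        acc += p
        if r < acc:
            return i
    return len(probs) - 1

counts = [0, 0]
for _ in range(10000):
    counts[measure()] += 1
# Relative frequencies approach |c0|^2 and |c1|^2 over many repetitions,
# mirroring the statistical character of quantum measurement.
```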
The sum of the probabilities over all possible outcomes must be one:[7] Σ_i |c_i|² = 1. As examples, individual counts in a double-slit experiment with electrons appear at random locations on the detector; after many counts are summed the distribution shows a wave interference pattern.[8] In a Stern–Gerlach experiment with silver atoms, each particle appears in one of two areas unpredictably, but the final conclusion has equal numbers of events in each area. This statistical aspect of quantum measurements differs fundamentally from classical mechanics. In quantum mechanics the only information we have about a system is its wave function, and measurements of its wave function can only give statistical information.[5]: 17 The two terms "reduction of the state vector" (or "state reduction" for short) and "wave function collapse" are used to describe the same concept. A quantum state is a mathematical description of a quantum system; a quantum state vector uses Hilbert space vectors for the description.[9]: 159 Reduction of the state vector replaces the full state vector with a single eigenstate of the observable. The term "wave function" is typically used for a different mathematical representation of the quantum state, one that uses spatial coordinates, also called the "position representation".[9]: 324 When the wave function representation is used, the "reduction" is called "wave function collapse". The Schrödinger equation describes quantum systems but does not describe their measurement. Solutions to the equation include all possible observable values for measurements, but measurements only result in one definite outcome. This difference is called the measurement problem of quantum mechanics.
To predict measurement outcomes from quantum solutions, the orthodox interpretation of quantum theory postulates wave function collapse and uses the Born rule to compute the probable outcomes.[10] Despite the widespread quantitative success of these postulates, scientists remain dissatisfied and have sought more detailed physical models. Rather than suspending the Schrödinger equation during the process of measurement, the measurement apparatus should be included and governed by the laws of quantum mechanics.[11]: 127 Quantum theory offers no dynamical description of the "collapse" of the wave function. Viewed as a statistical theory, no description is expected. As Fuchs and Peres put it, "collapse is something that happens in our description of the system, not to the system itself".[12] Various interpretations of quantum mechanics attempt to provide a physical model for collapse.[13]: 816 Three treatments of collapse can be found among the common interpretations. The first group includes hidden-variable theories like de Broglie–Bohm theory; here random outcomes result only from unknown values of hidden variables. Results from tests of Bell's theorem show that these variables would need to be non-local. The second group models measurement as quantum entanglement between the quantum state and the measurement apparatus. This results in a simulation of classical statistics called quantum decoherence. This group includes the many-worlds interpretation and consistent histories models. The third group postulates an additional, but as yet undetected, physical basis for the randomness; this group includes, for example, the objective-collapse interpretations.
While models in all groups have contributed to better understanding of quantum theory, no alternative explanation for individual events has emerged as more useful than collapse followed by statistical prediction with the Born rule.[13]: 819 The significance ascribed to the wave function varies from interpretation to interpretation, and even within an interpretation (such as the Copenhagen interpretation). If the wave function merely encodes an observer's knowledge of the universe, then the wave function collapse corresponds to the receipt of new information. This is somewhat analogous to the situation in classical physics, except that the classical "wave function" does not necessarily obey a wave equation. If the wave function is physically real, in some sense and to some extent, then the collapse of the wave function is also seen as a real process, to the same extent.[citation needed] Quantum decoherence explains why a system interacting with an environment transitions from being a pure state, exhibiting superpositions, to a mixed state, an incoherent combination of classical alternatives.[14] This transition is fundamentally reversible, as the combined state of system and environment is still pure, but for all practical purposes irreversible in the same sense as in the second law of thermodynamics: the environment is a very large and complex quantum system, and it is not feasible to reverse their interaction. Decoherence is thus very important for explaining the classical limit of quantum mechanics, but cannot explain wave function collapse, as all classical alternatives are still present in the mixed state, and wave function collapse selects only one of them.[15][16][14] The form of decoherence known as environment-induced superselection proposes that when a quantum system interacts with the environment, the superpositions apparently reduce to mixtures of classical alternatives.
The combined wave function of the system and environment continues to obey the Schrödinger equation throughout this apparent collapse.[17] More importantly, this is not enough to explain actual wave function collapse, as decoherence does not reduce it to a single eigenstate.[15][14] The concept of wavefunction collapse was introduced by Werner Heisenberg in his 1927 paper on the uncertainty principle, "Über den anschaulichen Inhalt der quantentheoretischen Kinematik und Mechanik", and incorporated into the mathematical formulation of quantum mechanics by John von Neumann in his 1932 treatise Mathematische Grundlagen der Quantenmechanik.[4] Heisenberg did not try to specify exactly what the collapse of the wavefunction meant. However, he emphasized that it should not be understood as a physical process.[18] Niels Bohr never mentions wave function collapse in his published work, but he repeatedly cautioned that we must give up a "pictorial representation". Despite the differences between Bohr and Heisenberg, their views are often grouped together as the "Copenhagen interpretation", of which wave function collapse is regarded as a key feature.[19] John von Neumann's influential 1932 work Mathematical Foundations of Quantum Mechanics took a more formal approach, developing an "ideal" measurement scheme[20][21]: 1270 that postulated that there were two processes of wave function change. In 1957 Hugh Everett III proposed a model of quantum mechanics that dropped von Neumann's first postulate. Everett observed that the measurement apparatus was also a quantum system and its quantum interaction with the system under observation should determine the results. He proposed that the discontinuous change is instead a splitting of a wave function representing the universe.[21]: 1288 While Everett's approach rekindled interest in foundational quantum mechanics, it left core issues unresolved.
Two key issues relate to the origin of the observed classical results: what causes quantum systems to appear classical, and what causes them to resolve with the observed probabilities of the Born rule.[21]: 1290[20]: 5 Beginning in 1970 H. Dieter Zeh sought a detailed quantum decoherence model for the discontinuous change without postulating collapse. Further work by Wojciech H. Zurek in 1980 led eventually to a large number of papers on many aspects of the concept.[22] Decoherence assumes that every quantum system interacts quantum mechanically with its environment and such interaction is not separable from the system, a concept called an "open system".[21]: 1273 Decoherence has been shown to work very quickly and within a minimal environment, but as yet it has not succeeded in providing a detailed model replacing the collapse postulate of orthodox quantum mechanics.[21]: 1302 By explicitly dealing with the interaction of object and measuring instrument, von Neumann[2] described a quantum mechanical measurement scheme consistent with wave function collapse. However, he did not prove the necessity of such a collapse. Von Neumann's projection postulate was conceived based on experimental evidence available during the 1930s, in particular Compton scattering. Later work refined the notion of measurements into the more easily discussed first kind, which will give the same value when immediately repeated, and the second kind, which give different values when repeated.[23][24][25]
https://en.wikipedia.org/wiki/Wave_function_collapse
Self-evaluation maintenance (SEM) concerns discrepancies between two people in a relationship. The theory posits that an individual will maintain as well as enhance their self-esteem via social comparison to another individual.[1] Self-evaluation refers to one's self-perceived social ranking. It is the continuous process of determining personal growth and progress, which can be raised or lowered by the behavior of others. Abraham Tesser created the self-evaluation maintenance theory in 1988. The self-evaluation maintenance model assumes two things: that a person will try to maintain or increase their own self-evaluation, and that self-evaluation is influenced by relationships with others.[1] A person's self-evaluation (which is similar to self-esteem) may be raised when a close other performs well.[1] For example, a sibling scores the winning goal in an important game. Self-evaluation will increase because that person is sharing in his/her success. The closer the psychological relationship and the greater the success, the more a person will share in the success.[1] This is considered the reflection process. When closeness and performance are high, self-evaluation is raised in the reflection process. If someone who is psychologically close performs well on a task that is irrelevant to a person's self-definition, that person is able to benefit by sharing in the success of the achievement. At the same time, the success of a close other can decrease someone's self-evaluation in the comparison process. This is because the success of a close other invites comparison of one's own capabilities, thereby directly affecting one's own self-evaluation.[1] This is also strengthened by the closeness of the psychological relationship with the successful other. Using a similar example: a sibling scores the winning goal in an important game; but you are also on the same team, and through comparison your self-evaluation is lowered.
When closeness (sibling) and performance (scored the winning goal) are high, self-evaluation is decreased in the comparison process. This is further expressed when the comparison is related to something you value in your personal identity. If you are aspiring to become a professional soccer player, but your sibling scores the winning goal and you do not, the comparison aspect of SEM will decrease your self-evaluation. In both the reflection and comparison processes, closeness and performance level are significant factors. If the closeness of another decreases, then a person is less likely to share the success and/or compare him/herself, which lessens the likelihood of decreasing self-evaluation. A person is more likely to compare him/herself to someone close to him/her, like a sibling or a best friend, than a stranger. There are different factors in which a person can assume closeness: family, friends, people with similar characteristics, etc. If an individual is not close to a particular person, then it makes sense that he/she will not share in their success or be threatened by their success. At the same time, if the person's performance is low, there is no reason to share the success and increase self-evaluation; there is also no reason to compare him/herself to the other person. Because their performance is low, there is no reason it should raise or lower his/her self-evaluation. According to Tesser's (1988) theory, if a sibling did not do well in his/her game, then there is no reason the individual's self-evaluation will be affected. Closeness and performance can either raise self-evaluation through reflection or lower self-evaluation through comparison. Relevance to self-identity determines whether reflection or comparison will occur. There are many different dimensions that can be important to an individual's self-definition. A self-defining factor is any factor that is personally relevant to your identity. 
For example, skills in music may be important to one's self-definition, but at the same time, being good at math may not be as important, even if you are skilled at it. Relating to your self-definition, you may consider yourself a musician but not a mathematician, even if you are skilled in both. Relevance assumes that a particular factor that is important to an individual is also important to another person. Relevance can be as simple as a shared dimension which one considers important to who they are. If relevance is high, then one will engage in comparison, but if relevance is low, one will engage in reflection.[1] For example, if athletics is important to a person and that person considers athletics to be an important dimension of his/her self-definition, then when a sibling does well in athletics, the comparison process will take place and his/her self-evaluation will decrease. On the other hand, if athletics is not a dimension he/she uses for self-definition, the reflection process will take place and he/she will celebrate the sibling's success with the sibling; his/her self-evaluation will increase along with the sibling's because he/she is not threatened or challenged by the sibling's athletic capability. Tesser (1988) suggests that people may do things to reduce the decrease in self-evaluation from comparison. One can spend less time with that particular individual, thereby reducing closeness, or one can change their important self-definition and take up a new hobby or focus on a different self-defining activity, which reduces relevance (e.g., a sibling's success in your favorite sport may lead you to stop playing). The third way of avoiding a decrease in self-evaluation through the comparison process is to affect the other's performance (e.g., by hiding a sibling's favorite shoes or believing that his/her performance was based on luck), or one can improve their own skills by practicing more.
The conditions that predict whether an individual will interfere with another's performance for the sake of their own self-evaluation include the closeness of the individuals and the relevance of the activity. When the relevance is high, the comparison process is more important than the reflection process. When the relevance is high and the activity is high in self-defining importance, the other person poses a larger threat than when the relevance is low. Mazar et al. (2008) investigated how self-concept maintenance applies to moral behavior. They found that participants engaged in dishonest behaviors to achieve external benefits up to a point. However, their need to maintain a positive view of themselves, as being honest, limited the extent of their dishonest behavior.[2] Tesser & Smith (1980) experimented with this theory. Men were recruited and asked to bring a friend with them. They were then put into groups of four: Man A and Man A's friend along with Man B and Man B's friend. Half the subjects were told that the study's purpose was measuring important verbal skills and leadership. This was the high-relevance group. The other two subjects were told that the task had nothing to do with verbal skills, leadership, or anything important. This was considered the low-relevance group. The activity was based on the game Password, where players have to guess a word based on clues. Each man was given an opportunity to guess the word while the other three gave clues from a list. The other three could give clues that were easy or difficult based on their own judgment and whether or not they would like to help the other person guess the word. The clues given to the person were necessary to guess the word. The first pair of partners performed poorly (as instructed in the experimental design). The experiment was interested in the behavior of the second group of men. The next pairing was designed to partner a stranger with a friend.
Researchers were trying to see when a friend was helped more than a stranger and when a stranger was helped more than a friend. The results supported their hypothesis. In 10 out of 13 sessions, when relevance was high (subjects were told that the activity measured important verbal and leadership skills), the stranger was helped more than the friend. Also, in 10 out of 13 sessions, when relevance was low (subjects were told that the activity determined nothing of importance), the friend was helped more than the stranger.[1] The prediction of the self-evaluation maintenance theory was strongly supported. Having previously discovered that the most positive evaluations occurred in participants when they had low relevance with high closeness to another individual, Tesser (1989)[3] sought to test whether emotional arousal mediated this relation. In the above sibling sport examples, it is evident that the self-evaluation process is an emotionally stimulating one. Tesser was interested in whether the emotional effect was a side-effect of the self-evaluation process, or whether it was a mediating effect (i.e., whether it was a partial factor influencing the evaluation). Tesser believed that if emotion was a mediating factor, then engaging and misattributing emotional arousal would activate the self-evaluation process with all other factors controlled. To test this, subjects arrived in pairs that knew one another beforehand. Both conditions were given vitamin C pills: in the control condition subjects were truthfully told the pills would have no effect, while in the misattribution condition they were told the pills would cause arousal, activating a placebo effect. Subjects then completed both relevant and non-relevant tasks, both with other subjects close and not close to them, and then ratings of the other participants were measured. The results found that subjects in the misattribution condition had much more extreme ratings of other participants.
When the task was high in relevance, subjects rated the other participant much worse than in the control condition. The findings show that while emotional activation is not the only factor determining evaluations, it is a mediating factor with some effect. Zuckerman & Jost (2001) compare the self-evaluation maintenance theory to the work of Feld (1991). While the self-evaluation maintenance theory would lead one to judge a stranger higher than one's friends (based on popularity) in order to prevent a drop in self-evaluation, Feld's (1991) research demonstrated that people must have fewer friends than their friends do in order to remain popular. This is based on a mathematical equation that explains why popular people are involved in more social circles than unpopular people. These are not the only two research examples; for more examples see the references. This graph illustrates the basic principles of Tesser's (1988) self-evaluation maintenance model of behavior. Relevance determines whether reflection or comparison will occur. When relevance is low (the factor does not affect self-definition), as the other's performance increases, so does self-evaluation, allowing that person to share in the celebration of the other person (reflection). When relevance is high (the factor is important to self-definition also), as the other's performance increases, self-evaluation decreases because that person is being compared to the other person (comparison). If relevance is high, then one will engage in comparison, but if relevance is low, one will engage in reflection.[1]
https://en.wikipedia.org/wiki/Self-evaluation_maintenance_theory
In engineering, a factor of safety (FoS) or safety factor (SF) expresses how much stronger a system is than it needs to be for its specified maximum load. Safety factors are often calculated using detailed analysis because comprehensive testing is impractical on many projects, such as bridges and buildings, but the structure's ability to carry a load must be determined to a reasonable accuracy. Many systems are intentionally built much stronger than needed for normal usage to allow for emergency situations, unexpected loads, misuse, or degradation (reliability). Margin of safety (MoS or MS) is a related measure, expressed as a relative change. There are two definitions for the factor of safety (FoS), and the realized factor of safety must be greater than the required design factor of safety. However, between various industries and engineering groups usage is inconsistent and confusing; there are several definitions in use. The cause of much confusion is that various reference books and standards agencies use the factor of safety definitions and terms differently. Building codes and structural and mechanical engineering textbooks often refer to the "factor of safety" as the fraction of total structural capability over what is needed. Those are realized factors of safety[1][2][3] (first use). Many undergraduate strength of materials books use "factor of safety" as a constant value intended as a minimum target for design[4][5][6] (second use). There are several ways to compare the factor of safety for structures. All the different calculations fundamentally measure the same thing: how much extra load beyond what is intended a structure will actually take (or be required to withstand). The difference between the methods is the way in which the values are calculated and compared. Safety factor values can be thought of as a standardized way of comparing strength and reliability between systems. The use of a factor of safety does not imply that an item, structure, or design is "safe".
Many quality assurance, engineering design, manufacturing, installation, and end-use factors may influence whether or not something is safe in any particular situation. The difference between the safety factor and design factor (design safety factor) is as follows: the safety factor, or yield stress, is how much the designed part actually will be able to withstand (first usage from above); the design factor, or working stress, is what the item is required to be able to withstand (second usage). The design factor is defined for an application (generally provided in advance and often set by regulatory building codes or policy) and is not an actual calculation; the safety factor is a ratio of maximum strength to intended load for the actual item that was designed. By this definition, a structure with an FoS of exactly 1 will support only the design load and no more. Any additional load will cause the structure to fail. A structure with an FoS of 2 will fail at twice the design load. Many government agencies and industries (such as aerospace) require the use of a margin of safety (MoS or MS) to describe the ratio of the strength of the structure to the requirements. There are two separate definitions for the margin of safety, so care is needed to determine which is being used for a given application. One usage of MS is as a measure of capability like FoS. The other usage of MS is as a measure of satisfying design requirements (requirement verification). Margin of safety can be conceptualized (along with the reserve factor explained below) as representing how much of the structure's total capability is held "in reserve" during loading. MS as a measure of structural capability: this definition of margin of safety, commonly seen in textbooks,[7][8] describes what additional load beyond the design load a part can withstand before failing. In effect, this is a measure of excess capability.
If the margin is 0, the part will not take any additional load before it fails; if it is negative, the part will fail before reaching its design load in service. If the margin is 1, it can withstand one additional load of equal force to the maximum load it was designed to support (i.e. twice the design load). MS as a measure of requirement verification: many agencies and organizations such as NASA[9] and AIAA[10] define the margin of safety including the design factor; in other words, the margin of safety is calculated after applying the design factor. In the case of a margin of 0, the part is at exactly the required strength (the safety factor would equal the design factor). If there is a part with a required design factor of 3 and a margin of 1, the part would have a safety factor of 6 (capable of supporting two loads equal to its design factor of 3, supporting six times the design load before failure). A margin of 0 would mean the part would pass with a safety factor of 3. If the margin is less than 0 in this definition, although the part will not necessarily fail, the design requirement has not been met. A convenience of this usage is that for all applications, a margin of 0 or higher is passing; one does not need to know application details or compare against requirements, since just glancing at the margin calculation tells whether the design passes or not. This is helpful for oversight and review on projects with various integrated components, as different components may have various design factors involved and the margin calculation helps prevent confusion. For a successful design, the realized safety factor must always equal or exceed the design safety factor so that the margin of safety is greater than or equal to zero. The margin of safety is sometimes, but infrequently, used as a percentage, i.e., a 0.50 MS is equivalent to a 50% MS. When a design satisfies this test it is said to have a "positive margin", and, conversely, a "negative margin" when it does not.
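The two margin-of-safety usages can be sketched numerically. The loads below are hypothetical, chosen to reproduce the design-factor-3, margin-1, safety-factor-6 example above; the formulas simply follow the definitions FoS = failure load / design load and MS = FoS / (design factor) − 1:

```python
def factor_of_safety(failure_load: float, design_load: float) -> float:
    """Realized FoS: the load the part actually withstands over its intended load."""
    return failure_load / design_load

def margin_of_safety(fos: float, design_factor: float = 1.0) -> float:
    """Textbook (capability) usage: MS = FoS - 1, with design_factor = 1.
    Requirement-verification usage (NASA/AIAA style): MS = FoS/design_factor - 1,
    i.e. computed after applying the required design factor; MS >= 0 passes."""
    return fos / design_factor - 1.0

# Hypothetical part: fails at 1200 N, intended service load 200 N,
# required design factor of 3.
fos = factor_of_safety(1200.0, 200.0)        # realized FoS = 6.0
ms_capability = margin_of_safety(fos)        # 5.0: five extra design loads in reserve
ms_requirement = margin_of_safety(fos, 3.0)  # 1.0: requirement met with margin
```

Note how the same part reports very different margins under the two conventions, which is exactly why the definition in use must be stated.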
In the field of nuclear safety (as implemented at US government-owned facilities) the margin of safety has been defined as a quantity that may not be reduced without review by the controlling government office. The US Department of Energy publishes DOE G 424.1-1, "Implementation Guide for Use in Addressing Unreviewed Safety Question Requirements", as a guide for determining how to identify and determine whether a margin of safety will be reduced by a proposed change. The guide develops and applies the concept of a qualitative margin of safety that may not be explicit or quantifiable, yet can be evaluated conceptually to determine whether an increase or decrease will occur with a proposed change. This approach becomes important when examining designs with large or undefined (historical) margins and those that depend on "soft" controls such as programmatic limits or requirements. The commercial US nuclear industry utilized a similar concept in evaluating planned changes until 2001, when 10 CFR 50.59 was revised to capture and apply the information available in facility-specific risk analyses and other quantitative risk management tools. A measure of strength frequently used in Europe is the reserve factor (RF). With the strength and applied loads expressed in the same units, the reserve factor is defined in one of two ways, depending on the industry. The applied loads have many factors, including factors of safety applied. For ductile materials (e.g. most metals), it is often required that the factor of safety be checked against both yield and ultimate strengths. The yield calculation will determine the safety factor until the part starts to deform plastically. The ultimate calculation will determine the safety factor until failure. In brittle materials the yield and ultimate strengths are often so close as to be indistinguishable, so it is usually acceptable to only calculate the ultimate safety factor.
Appropriate design factors are based on several considerations, such as the accuracy of predictions of the imposed loads, strength, wear estimates, and the environmental effects to which the product will be exposed in service; the consequences of engineering failure; and the cost of over-engineering the component to achieve that factor of safety[citation needed]. For example, components whose failure could result in substantial financial loss, serious injury, or death may use a safety factor of four or higher (often ten). Non-critical components generally might have a design factor of two. Risk analysis, failure mode and effects analysis, and other tools are commonly used. Design factors for specific applications are often mandated by law, policy, or industry standards. Buildings commonly use a factor of safety of 2.0 for each structural member. The value for buildings is relatively low because the loads are well understood and most structures are redundant. Pressure vessels use 3.5 to 4.0, automobiles use 3.0, and aircraft and spacecraft use 1.2 to 4.0 depending on the application and materials. Ductile, metallic materials tend to use the lower value while brittle materials use the higher values. The field of aerospace engineering generally uses lower design factors because the costs associated with structural weight are high (i.e. an aircraft with an overall safety factor of 5 would probably be too heavy to get off the ground). This low design factor is why aerospace parts and materials are subject to very stringent quality control and strict preventative maintenance schedules to help ensure reliability. A commonly applied safety factor is 1.5, but for pressurized fuselages it is 2.0, and for main landing gear structures it is often 1.25.[11] In some cases it is impractical or impossible for a part to meet the "standard" design factor.
The penalties (mass or otherwise) for meeting the requirement would prevent the system from being viable (such as in the case of aircraft or spacecraft). In these cases, it is sometimes determined to allow a component to meet a lower than normal safety factor, often referred to as "waiving" the requirement. Doing this often brings with it extra detailed analysis or quality control verifications to assure the part will perform as desired, as it will be loaded closer to its limits. For loading that is cyclical, repetitive, or fluctuating, it is important to consider the possibility of metal fatigue when choosing the factor of safety. A cyclic load well below a material's yield strength can cause failure if it is repeated through enough cycles. According to Elishakoff,[12][13] the notion of a factor of safety in the engineering context was apparently first introduced in 1729 by Bernard Forest de Bélidor (1698-1761),[14] a French engineer working in hydraulics, mathematics, civil, and military engineering. The philosophical aspects of factors of safety were pursued by Doorn and Hansson.[15]
https://en.wikipedia.org/wiki/Factor_of_safety
Probabilistic logic (also probability logic and probabilistic reasoning) involves the use of probability and logic to deal with uncertain situations. Probabilistic logic extends traditional logic truth tables with probabilistic expressions. A difficulty of probabilistic logics is their tendency to multiply the computational complexities of their probabilistic and logical components. Other difficulties include the possibility of counter-intuitive results, such as in the case of belief fusion in Dempster–Shafer theory. Source trust and epistemic uncertainty about the probabilities they provide, such as defined in subjective logic, are additional elements to consider. The need to deal with a broad variety of contexts and issues has led to many different proposals for probabilistic logics. Very roughly, they can be categorized into two different classes: those logics that attempt to make a probabilistic extension to logical entailment, such as Markov logic networks, and those that attempt to address the problems of uncertainty and lack of evidence (evidentiary logics). That the concept of probability can have different meanings may be understood by noting that, despite the mathematization of probability in the Enlightenment, mathematical probability theory remains, to this very day, entirely unused in criminal courtrooms when evaluating the "probability" of the guilt of a suspected criminal.[1] More precisely, in evidentiary logic, there is a need to distinguish the objective truth of a statement from our decision about the truth of that statement, which in turn must be distinguished from our confidence in its truth: thus, a suspect's real guilt is not necessarily the same as the judge's decision on guilt, which in turn is not the same as assigning a numerical probability to the commission of the crime and deciding whether it is above a numerical threshold of guilt.
The verdict on a single suspect may be guilty or not guilty with some uncertainty, just as the flipping of a coin may be predicted as heads or tails with some uncertainty. Given a large collection of suspects, a certain percentage may be guilty, just as the probability of flipping "heads" is one-half. However, it is incorrect to take this law of averages with regard to a single criminal (or single coin-flip): the criminal is no more "a little bit guilty" than predicting a single coin flip to be "a little bit heads and a little bit tails": we are merely uncertain as to which it is. Expressing uncertainty as a numerical probability may be acceptable when making scientific measurements of physical quantities, but it is merely a mathematical model of the uncertainty we perceive in the context of "common sense" reasoning and logic. Just as in courtroom reasoning, the goal of employing uncertain inference is to gather evidence to strengthen the confidence of a proposition, as opposed to performing some sort of probabilistic entailment. Historically, attempts to quantify probabilistic reasoning date back to antiquity. There was a particularly strong interest starting in the 12th century, with the work of the Scholastics, with the invention of the half-proof (so that two half-proofs are sufficient to prove guilt), the elucidation of moral certainty (sufficient certainty to act upon, but short of absolute certainty), the development of Catholic probabilism (the idea that it is always safe to follow the established rules of doctrine or the opinion of experts, even when they are less probable), the case-based reasoning of casuistry, and the scandal of Laxism (whereby probabilism was used to give support to almost any statement at all, it being possible to find an expert opinion in support of almost any proposition).[1] Below is a list of proposals for probabilistic and evidentiary extensions to classical and predicate logic.
https://en.wikipedia.org/wiki/Probabilistic_logic
A roller container is a container type that can be carried by trucks and pushed to ground level with the help of a hook and lever arm, with the container possibly sliding on steel roller wheels. Its original usage was in the collection of bulk waste, which led to the creation of the DIN standards initiated by city cleaning companies. An additional part defines a transport frame mounted on specialized rail cars that allows easy intermodal transport for these container types. Another important area is the containerization of firefighting equipment used as swap body containers of fire trucks. The term "roller container" was introduced in the English summary of the DIN standards and refers to the prominent feature of steel wheels; such wide wheels are commonly known in English as rollers. It also refers to the verb "to roll", which has the same meaning in German: the particle "ab-" in the German "Abrollcontainer" designates downward/pushback operations, so that the German Abrollcontainer is sometimes translated to English as "roll-off container". The DIN standard uses the German term Abrollbehälter, where the generic Germanic "Behälter" has replaced the Romanic "Container"; the latter is more associated with transport containers in the German language, so that the ACTS designation has picked up Abrollcontainer instead of the synonymous Abrollbehälter. With Abrollcontainer associated with transport systems, the various Abrollbehälter types usually denote firefighting equipment. In British English the firefighting containers are generically called "demountable pod" or just "pod", for example "foam pod", and while being a generic term these are universally roller containers as well. There is an additional term "hooklift container" that is related to the common designation of the hoist gear on trucks used for roller containers, which is called "hook lift".
This has influenced languages like Dutch, where the truck is called haakarmvoertuig (hook arm vehicle) and the container a haakarmbak (hook arm pod). These terms refer to the lever arm that matches the grip hook on the container to lift it from the ground. The term hooklift container may refer to any container type with an additional hook bar, which does not necessarily include roller wheels; this includes the NATO standard STANAG 2413 variations of 20' ISO containers having an additional hook bar.[1] Solutions for intermodal transport containers in road and rail transport already appeared in the 1930s. One of the systems was used from 1934 in the Netherlands to carry waste and consumer goods. These "laadkisten" had a permissible gross mass of 3,000 kg (6,600 lb) and dimensions of 2.5 m × 2 m × 2 m (8 ft 2+3⁄8 in × 6 ft 6+3⁄4 in × 6 ft 6+3⁄4 in). Reloading was done by dragging with a rope winch on the tow car.[3] After World War II that system was used for transports between Switzerland and the Netherlands. From 14 to 23 April 1951, demonstrations of container systems were held in Zurich Tiefenbrunnen under the auspices of the Swiss Museum of Transport and the Bureau International des Containers (BIC), with the aim of selecting the best solution for Western Europe. Representatives of Belgium, France, the Netherlands, Germany, Switzerland, Sweden, Great Britain, Italy and the USA attended. The result of this meeting was the first post-World War II European standard, UIC 590, also known as "Pa-Behälter" (porteur-aménagé-Behälter).
This system was implemented in Denmark, Belgium, the Netherlands, Luxembourg, West Germany, Switzerland and Sweden.[3] In Germany it was widely marketed as the "Haus zu Haus" (house to house) transport system, which included a variety of pod types.[4] Along with the gradual popularization of the large ISO container type (first seen in Europe in 1966), the "Pa-Behälter" system fell out of use and was subsequently withdrawn by the railways (no new containers were produced after 1975; the last were scrapped in the 2000s). The transport of waste containers was moved completely to road transportation in the 1970s. Previously the open-top middle-sized container Eoskrt of the "Haus zu Haus" series was widely used for waste transport by rail. It could be moved on four small wheels onto the flat car, fitting into comparably narrow guide rails mounted on them. These early roll container types had standard steel wheels 75 mm wide with a diameter of 200 mm. The axle track was 1400 mm and the wheelbase 1950 mm. Lashing eyelets are attached to every wheel case, allowing the container to be moved and locked in place.[5] Roller containers have been standardized in DIN 30722 by the Municipal Services Standards Committee (German Normenausschuss Kommunale Technik / NKT). The first parts are subdivisions of different weight classes (part 1 up to 26 tonnes [28.7 short tons; 25.6 long tons], part 2 up to 32 tonnes [35.3 short tons; 31.5 long tons], part 3 up to 16 tonnes [17.6 short tons; 15.7 long tons]), first published in April 1993, with the latest revision published in February 2007.[6] Part 4 of the standard series covers the intermodal transport between rail and road, with the issue of July 1994 still current.[7] The DIN roller containers have a hook that is directed 45° upwards, with the handle bar positioned at a height of 1,570 mm (61.81 in).
The roller wheels have an inner distance of 1,560 mm (61.42 in) and an outer distance of 2,160 mm (85.04 in). The width of the containers mostly follows intermodal shipping containers, and there are undercarriage frames available for twenty-foot containers to be handled as roller containers. The length of DIN roller containers is standardized in steps of 250 mm (9.84 in) from an overall length of 4,000 to 7,000 mm (13 ft 1 in to 23 ft 0 in). The height has not been standardized, and roller containers are commonly not used for stacking.[8] The NATO standard STANAG 2413 "demountable load carrying platforms (dlcp/flatracks)" references DIN 30722 for the definition of the "hookbar".[9] The LHS rollers and ISO container twistlock pockets are optional in STANAG 2413; the LHS designation references the Load Handling System (German Hakenladesystem / hook load system) derived from the DIN roller containers in use for firefighting equipment. Roller containers come in a large variety for specific usages. For bulk waste, a common type has additional impermeable doors on one side. There are low-height containers that allow easy dumping of green care waste. There are squeeze containers that compress the garbage. Roller containers for construction waste need additional stability.
The DIN standard does not define the height nor most of the other sizes; it concentrates on the hook for lifting the container and the wheels that allow sliding on the ground.[10] According to Marrel, the company invented hooklift trucks and has marketed them in the USA since 1969.[11] There have been various heights and sizes of the hook, with the ACTS roller container system standardizing on 1,570 mm (61.81 in) (rounded to 61.75 inches or 1,568 millimetres for the US market at Stellar Industries).[12] Hook heights of 54" and 36" are also in common use.[13] The ACTS (from German Abrollcontainer-Transportsystem / roller container transport system) offers a loading principle from a roller container truck directly onto a rail car. No additional installation is required for the process, as the lever arm of the truck can push the container onto a transport frame mounted on the rail car. The transport frame consists of two U-profile rail bars and a central pivot; this allows the frame to swing out for loading and to swing back parallel with the rail car for distance travel by rail. The ACTS first found wider usage in Switzerland, where rail transport to remote villages is often easier than running large trucks through narrow streets. Rail transport of roller containers is now prevalent in German-speaking countries and neighbouring countries like the Netherlands and the Czech Republic. The roller container standards have become the basis of containerized firefighting equipment throughout Europe. Permanent mounting of equipment creates a large number of specialized fire trucks, while containerization allows the use of only one transport truck with a lever arm; in Germany it is called a WLF (German Wechselladerfahrzeug / swap loader vehicle). In practical usage there are lighter specialized fire trucks for everyday use, while larger fires and catastrophic situations can be handled by using a WLF in a shuttle operation, bringing as much equipment to the scene as needed.
The containers come in a great variety, with national regulations controlling common types, for example in Germany. For the most part, the replacement of older fire trucks in Germany takes the form of new AB units. The AB units may be built from standard containers with equipment installed from different suppliers as required by the fire department. The WLF loader vehicle can be purchased independently, with a wide variety of trucks available on the market (not originally designed for firefighting); the trucks are sent to specialized workshops that convert them to WLF fire trucks by adding a hook lift, siren and communications. The AB units may be used far longer than the WLF trucks, as the latter can be exchanged independently; this makes maintenance cheaper, especially for special equipment that is only rarely needed. Additionally, some firefighting equipment, like the decontamination pod, has advantages for military conversion, being dispatched by standard NATO container transport. In the US the Heavy Expanded Mobility Tactical Truck (HEMTT) was produced in a version with a hooklift hoist gear named Load Handling System (LHS). The M1120 HEMTT LHS is the basis of the Palletized Load System, using a flatbed platform mounted under ISO containers as a Container Handling Unit (CHU). This allows containers to be unloaded without the help of a forklift. Current NATO agreements require PLS to maintain interoperability with comparable British, German and French systems through the use of a common flatrack. The British Army has developed the Demountable Rack Offload and Pickup System (DROPS), using the Medium Mobility Load Carrier (MMLC) as an all-terrain truck with a hook loader system. As in the Palletized Load System, a flatrack can be used to transport ISO containers. After an evolutionary step with the Improved Medium Mobility Load Carrier (IMMLC), the British Army is now transitioning to the Enhanced Pallet Load System (EPLS).
In the EPLS there is a different Container Handling Unit that is not put under the container; it uses an H-frame that fits into the corner locks of an ISO container on the back side.[14] Although roller containers are very easy to load and unload, they do have some problems. The steel rollers that contact the ground can damage asphalt pavement; a concrete surface is more suitable.[15] The angle at which the container is tilted can cause loads to move if they are not secured. A swap body may be used where the container needs to be kept level.
https://en.wikipedia.org/wiki/Roller_container
NoSQL (originally meaning "not only SQL" or "non-relational")[1] refers to a type of database design that stores and retrieves data differently from the traditional table-based structure of relational databases. Unlike relational databases, which organize data into rows and columns like a spreadsheet, NoSQL databases use a single data structure, such as key–value pairs, wide columns, graphs, or documents, to hold information. Since this non-relational design does not require a fixed schema, it scales easily to manage large, often unstructured datasets.[2] NoSQL systems are sometimes called "not only SQL" because they can support SQL-like query languages or work alongside SQL databases in polyglot-persistent setups, where multiple database types are combined.[3][4] Non-relational databases date back to the late 1960s, but the term "NoSQL" emerged in the early 2000s, spurred by the needs of Web 2.0 companies like social media platforms.[5][6] NoSQL databases are popular in big data and real-time web applications due to their simple design, ability to scale across clusters of machines (called horizontal scaling), and precise control over data availability.[7][8] These structures can speed up certain tasks and are often considered more adaptable than fixed database tables.[9] However, many NoSQL systems prioritize speed and availability over strict consistency (per the CAP theorem), using eventual consistency, where updates reach all nodes eventually, typically within milliseconds, but may cause brief delays in accessing the latest data, known as stale reads.[10] While most lack full ACID transaction support, some, like MongoDB, include it as a key feature.[11] Barriers to wider NoSQL adoption include their use of low-level query languages instead of SQL, inability to perform ad hoc joins across tables, lack of standardized interfaces, and significant investments already made in relational databases.[12] Some NoSQL systems risk losing data through lost writes or other forms, though features like write-ahead logging, a method to record changes before they are applied, can help prevent this.[13][14] For distributed transaction processing across multiple databases, keeping data consistent is a challenge for both NoSQL and relational systems, as relational databases cannot enforce rules linking separate databases, and few systems support both ACID transactions and X/Open XA standards for managing distributed updates.[15][16] Limitations within the interface environment are overcome using semantic virtualization protocols, such that NoSQL services are accessible to most operating systems.[17] The term NoSQL was used by Carlo Strozzi in 1998 to name his lightweight Strozzi NoSQL open-source relational database that did not expose the standard Structured Query Language (SQL) interface but was still relational.[18] His NoSQL RDBMS is distinct from the around-2009 general concept of NoSQL databases. Strozzi suggests that, because the current NoSQL movement "departs from the relational model altogether, it should therefore have been called more appropriately 'NoREL'",[19] referring to "not relational". Johan Oskarsson, then a developer at Last.fm, reintroduced the term NoSQL in early 2009 when he organized an event to discuss "open-source distributed, non-relational databases".[20] The name attempted to label the emergence of an increasing number of non-relational, distributed data stores, including open-source clones of Google's Bigtable/MapReduce and Amazon's DynamoDB. There are various ways to classify NoSQL databases, with different categories and subcategories, some of which overlap. What follows is a non-exhaustive classification by data model, with examples:[21] Key–value (KV) stores use the associative array (also called a map or dictionary) as their fundamental data model.
In this model, data is represented as a collection of key–value pairs, such that each possible key appears at most once in the collection.[24][25] The key–value model is one of the simplest non-trivial data models, and richer data models are often implemented as an extension of it. The key–value model can be extended to a discretely ordered model that maintains keys in lexicographic order. This extension is computationally powerful, in that it can efficiently retrieve selective key ranges.[26] Key–value stores can use consistency models ranging from eventual consistency to serializability. Some databases support ordering of keys. There are various hardware implementations: some users store data in memory (RAM), while others use solid-state drives (SSD) or rotating disks (aka hard disk drives, HDD). The central concept of a document store is that of a "document". While the details of this definition differ among document-oriented databases, they all assume that documents encapsulate and encode data (or information) in some standard formats or encodings. Encodings in use include XML, YAML, and JSON, as well as binary forms like BSON. Documents are addressed in the database via a unique key that represents that document. Another defining characteristic of a document-oriented database is an API or query language to retrieve documents based on their contents. Different implementations offer different ways of organizing and/or grouping documents. Compared to relational databases, collections could be considered analogous to tables and documents analogous to records. But they are different: every record in a table has the same sequence of fields, while documents in a collection may have fields that are completely different. Graph databases are designed for data whose relations are well represented as a graph consisting of elements connected by a finite number of relations. Examples of such data include social relations, public transport links, road maps, network topologies, etc.
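The ordered key–value extension described above can be sketched with a plain sorted structure. This is a toy illustration of the idea, not any particular database's API:

```python
import bisect

class OrderedKVStore:
    """Toy key-value store keeping keys in lexicographic order,
    so selective key ranges can be retrieved efficiently."""

    def __init__(self):
        self._keys = []   # sorted list of keys
        self._data = {}   # key -> value

    def put(self, key, value):
        if key not in self._data:
            bisect.insort(self._keys, key)  # keep keys sorted on insert
        self._data[key] = value

    def get(self, key):
        return self._data.get(key)

    def range(self, lo, hi):
        """Return (key, value) pairs with lo <= key < hi, in key order."""
        i = bisect.bisect_left(self._keys, lo)
        j = bisect.bisect_left(self._keys, hi)
        return [(k, self._data[k]) for k in self._keys[i:j]]

store = OrderedKVStore()
store.put("user:alice", {"age": 30})
store.put("user:bob", {"age": 25})
store.put("session:9", "token")
# Range scan over the "user:" prefix (';' is the next ASCII character
# after ':', so "user;" upper-bounds every "user:..." key):
print(store.range("user:", "user;"))
```

A real ordered store (e.g. one built on a log-structured merge tree or B-tree) provides the same `range` capability at scale; a plain hash-based key–value store cannot answer such prefix queries without scanning every key.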
The performance of NoSQL databases is usually evaluated using the metric of throughput, which is measured as operations per second. Performance evaluation must pay attention to the right benchmarks, such as production configurations, parameters of the databases, anticipated data volume, and concurrent user workloads. Ben Scofield rated different categories of NoSQL databases as follows:[28] Performance and scalability comparisons are most commonly done using the YCSB benchmark. Since most NoSQL databases lack the ability to perform joins in queries, the database schema generally needs to be designed differently. There are three main techniques for handling relational data in a NoSQL database. (See table join and ACID support for NoSQL databases that support joins.) Instead of retrieving all the data with one query, it is common to do several queries to get the desired data. NoSQL queries are often faster than traditional SQL queries, so the cost of additional queries may be acceptable. If an excessive number of queries would be necessary, one of the other two approaches is more appropriate. Instead of only storing foreign keys, it is common to store actual foreign values along with the model's data. For example, each blog comment might include the username in addition to a user id, thus providing easy access to the username without requiring another lookup. When a username changes, however, this will now need to be changed in many places in the database. Thus this approach works better when reads are much more common than writes.[29] With document databases like MongoDB it is common to put more data in a smaller number of collections. For example, in a blogging application, one might choose to store comments within the blog post document, so that a single retrieval gets all the comments. Thus in this approach a single document contains all the data needed for a specific task.
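The three techniques above can be contrasted with toy document shapes, using plain dictionaries to stand in for stored documents (the field names are illustrative, not a schema from any database):

```python
# 1. Multiple queries: store only the user id and look the user up separately
#    with a second query when the username is needed.
comment_ref = {"post_id": 7, "user_id": 42, "text": "Nice post"}

# 2. Caching / denormalization: copy the username into the comment so reads
#    need no second lookup; a username change must then be rewritten in
#    every comment, so this favors read-heavy workloads.
comment_denorm = {"post_id": 7, "user_id": 42, "username": "alice",
                  "text": "Nice post"}

# 3. Nesting / embedding: store the comments inside the blog post document
#    so one retrieval returns everything needed to render the page.
post_embedded = {
    "_id": 7,
    "title": "Hello",
    "comments": [
        {"user_id": 42, "username": "alice", "text": "Nice post"},
        {"user_id": 43, "username": "bob", "text": "Agreed"},
    ],
}

# Rendering the embedded post needs no further queries:
for c in post_embedded["comments"]:
    print(c["username"], c["text"])
```

The trade-off runs in one direction: each step from (1) to (3) reduces the number of reads per task at the cost of more duplicated data to keep consistent on writes.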
A database is marked as supporting ACID properties (atomicity, consistency, isolation, durability) or join operations if the documentation for the database makes that claim. However, this doesn't necessarily mean that the capability is fully supported in a manner similar to most SQL databases. Different NoSQL databases, such as DynamoDB, MongoDB, Cassandra, Couchbase, HBase, and Redis, exhibit varying behaviors when querying non-indexed fields. Many perform full-table or collection scans for such queries, applying filtering operations after retrieving data. However, modern NoSQL databases often incorporate advanced features to optimize query performance. For example, MongoDB supports compound indexes and query-optimization strategies, Cassandra offers secondary indexes and materialized views, and Redis employs custom indexing mechanisms tailored to specific use cases. Systems like Elasticsearch use inverted indexes for efficient text-based searches, but they can still require full scans for non-indexed fields. This behavior reflects the design focus of many NoSQL systems on scalability and efficient key-based operations rather than optimized querying for arbitrary fields. Consequently, while these databases excel at basic CRUD operations and key-based lookups, their suitability for complex queries involving joins or non-indexed filtering varies depending on the database type (document, key–value, wide-column, or graph) and the specific implementation.[33]
https://en.wikipedia.org/wiki/Structured_storage
Vulnerability management is the "cyclical practice of identifying, classifying, prioritizing, remediating, and mitigating" software vulnerabilities.[1] Vulnerability management is integral to computer security and network security, and must not be confused with vulnerability assessment.[2] Vulnerabilities can be discovered with a vulnerability scanner, which analyzes a computer system in search of known vulnerabilities,[3] such as open ports, insecure software configurations, and susceptibility to malware infections. They may also be identified by consulting public sources, such as the NVD or vendor-specific security updates, or by subscribing to a commercial vulnerability alerting service. Unknown vulnerabilities, such as a zero-day,[3] may be found with fuzz testing. Fuzzing is a cornerstone technique in which random or semi-random input data is fed to programs to detect unexpected behavior. Tools such as AFL (American Fuzzy Lop) and libFuzzer automate this process, making it faster and more efficient. Fuzz testing can identify certain kinds of vulnerabilities, such as a buffer overflow, with relevant test cases. Similarly, static analysis tools analyze source code or binaries to identify potential vulnerabilities without executing the program. Symbolic execution, an advanced technique combining static and dynamic analysis, further aids in pinpointing vulnerabilities.[4] Such analysis can be facilitated by test automation. In addition, antivirus software capable of heuristic analysis may discover undocumented malware if it finds software behaving suspiciously (such as attempting to overwrite a system file). Correcting vulnerabilities may variously involve the installation of a patch, a change in network security policy, reconfiguration of software, or educating users about social engineering.
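The random-input fuzzing idea can be sketched in a few lines. The function under test here is a deliberately buggy stand-in, not a real parser, and this is only the naive random strategy; tools like AFL and libFuzzer add coverage guidance on top of it:

```python
import random
import string

def fragile_parse(data: str) -> int:
    """Toy function under test: mishandles empty input (the planted bug)."""
    return ord(data[0])  # raises IndexError when data == ""

def fuzz(target, trials=1000, seed=0):
    """Feed random strings to `target` and collect inputs that crash it."""
    rng = random.Random(seed)
    crashes = []
    for _ in range(trials):
        size = rng.randint(0, 8)
        data = "".join(rng.choice(string.printable) for _ in range(size))
        try:
            target(data)
        except Exception as exc:  # any unexpected exception is recorded
            crashes.append((data, type(exc).__name__))
    return crashes

# Random inputs quickly hit the empty-string case and expose the bug:
found = fuzz(fragile_parse)
print(len(found), "crashing inputs found")
```

Each crashing input can then be minimized and turned into a regression test case, which is how fuzzers feed back into the remediation step of the cycle.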
Project vulnerability is the project's susceptibility to being subject to negative events, the analysis of their impact, and the project's capability to cope with negative events.[5] Based on systems thinking, project systemic vulnerability management takes a holistic view and proposes a process for identifying negative events and coping with them. Redundancy is a specific method to increase resistance and resilience in vulnerability management.[6] Antifragility is a concept introduced by Nassim Nicholas Taleb to describe the capacity of systems to not only resist or recover from adverse events, but also to improve because of them. Antifragility is similar to the concept of positive complexity proposed by Stefan Morcov.
https://en.wikipedia.org/wiki/Vulnerability_management
In logic and mathematics, a truth value, sometimes called a logical value, is a value indicating the relation of a proposition to truth, which in classical logic has only two possible values (true or false).[1][2] Truth values are used in computing as well as various types of logic. In some programming languages, any expression can be evaluated in a context that expects a Boolean data type. Typically (though this varies by programming language) expressions like the number zero, the empty string, empty lists, and null are treated as false, and strings with content (like "abc"), other numbers, and objects evaluate to true. Sometimes these classes of expressions are called falsy and truthy. For example, in Lisp, nil, the empty list, is treated as false, and all other values are treated as true. In C, the number 0 or 0.0 is false, and all other values are treated as true. In JavaScript, the empty string (""), null, undefined, NaN, +0, −0 and false[3] are sometimes called falsy (of which the complement is truthy) to distinguish between strictly type-checked and coerced Booleans (see also: JavaScript syntax § Type conversion).[4] As opposed to Python, empty containers (arrays, maps, sets) are considered truthy. Languages such as PHP also use this approach. In classical logic, with its intended semantics, the truth values are true (denoted by 1 or the verum ⊤) and untrue or false (denoted by 0 or the falsum ⊥); that is, classical logic is a two-valued logic. This set of two values is also called the Boolean domain. Corresponding semantics of logical connectives are truth functions, whose values are expressed in the form of truth tables. Logical biconditional becomes the equality binary relation, and negation becomes a bijection which permutes true and false. Conjunction and disjunction are dual with respect to negation, which is expressed by De Morgan's laws: ¬(p ∧ q) ⇔ ¬p ∨ ¬q and ¬(p ∨ q) ⇔ ¬p ∧ ¬q. Propositional variables become variables in the Boolean domain. Assigning values for propositional variables is referred to as valuation.
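Python's own truthiness rules, mentioned above, can be demonstrated directly with `bool()`, which applies the coercion that `if` statements use:

```python
# bool() applies Python's truthiness coercion, as described above.
falsy_values = [0, 0.0, "", [], {}, set(), None]
truthy_values = ["abc", 42, -1, [0], {"k": "v"}, object()]

assert not any(bool(v) for v in falsy_values)
assert all(bool(v) for v in truthy_values)

# Unlike JavaScript or PHP, Python treats empty containers as falsy,
# so a plain `if items:` test distinguishes empty from non-empty:
items = []
print("empty" if not items else "non-empty")  # prints "empty"
```

Note the contrast with the JavaScript behavior described above, where an empty array is truthy and only values like `""`, `null`, `undefined`, `NaN`, `+0`, `−0` and `false` are falsy.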
Whereas in classical logic truth values form a Boolean algebra, in intuitionistic logic, and more generally constructive mathematics, the truth values form a Heyting algebra. Such truth values may express various aspects of validity, including locality, temporality, or computational content. For example, one may use the open sets of a topological space as intuitionistic truth values, in which case the truth value of a formula expresses where the formula holds, not whether it holds. In realizability, truth values are sets of programs, which can be understood as computational evidence of validity of a formula. For example, the truth value of the statement "for every number there is a prime larger than it" is the set of all programs that take as input a number n and output a prime larger than n. In category theory, truth values appear as the elements of the subobject classifier. In particular, in a topos every formula of higher-order logic may be assigned a truth value in the subobject classifier. Even though a Heyting algebra may have many elements, this should not be understood as there being truth values that are neither true nor false, because intuitionistic logic proves ¬(p ≠ ⊤ ∧ p ≠ ⊥) ("it is not the case that p is neither true nor false").[5] In intuitionistic type theory, the Curry–Howard correspondence exhibits an equivalence of propositions and types, according to which validity is equivalent to inhabitation of a type. For other notions of intuitionistic truth values, see the Brouwer–Heyting–Kolmogorov interpretation and Intuitionistic logic § Semantics. Multi-valued logics (such as fuzzy logic and relevance logic) allow for more than two truth values, possibly containing some internal structure. For example, on the unit interval [0,1] such structure is a total order; this may be expressed as the existence of various degrees of truth.
Not all logical systems are truth-valuational in the sense that logical connectives may be interpreted as truth functions. For example, intuitionistic logic lacks a complete set of truth values because its semantics, the Brouwer–Heyting–Kolmogorov interpretation, is specified in terms of provability conditions, and not directly in terms of the necessary truth of formulae. But even non-truth-valuational logics can associate values with logical formulae, as is done in algebraic semantics. The algebraic semantics of intuitionistic logic is given in terms of Heyting algebras, compared to the Boolean algebra semantics of classical propositional calculus.
https://en.wikipedia.org/wiki/Truth_value
Signals are standardized messages sent to a running program to trigger specific behavior, such as quitting or error handling. They are a limited form of inter-process communication (IPC), typically used in Unix, Unix-like, and other POSIX-compliant operating systems. A signal is an asynchronous notification sent to a process or to a specific thread within the same process to notify it of an event. Common uses of signals are to interrupt, suspend, terminate or kill a process. Signals originated in 1970s Bell Labs Unix and were later specified in the POSIX standard. When a signal is sent, the operating system interrupts the target process's normal flow of execution to deliver the signal. Execution can be interrupted during any non-atomic instruction. If the process has previously registered a signal handler, that routine is executed. Otherwise, the default signal handler is executed. Embedded programs may find signals useful for inter-process communications, as signals are notable for their algorithmic efficiency. Signals are similar to interrupts, the difference being that interrupts are mediated by the CPU and handled by the kernel while signals are mediated by the kernel (possibly via system calls) and handled by individual processes.[citation needed] The kernel may pass an interrupt as a signal to the process that caused it (typical examples are SIGSEGV, SIGBUS, SIGILL and SIGFPE). The kill(2) system call sends a specified signal to a specified process, if permissions allow. Similarly, the kill(1) command allows a user to send signals to processes. The raise(3) library function sends the specified signal to the current process. Exceptions such as division by zero, segmentation violation (SIGSEGV), and floating point exception (SIGFPE) will cause a core dump and terminate the program. The kernel can generate signals to notify processes of events.
For example, SIGPIPE will be generated when a process writes to a pipe which has been closed by the reader; by default, this causes the process to terminate, which is convenient when constructing shell pipelines. Typing certain key combinations at the controlling terminal of a running process causes the system to send it certain signals.[3] These default key combinations can be changed on modern operating systems with the stty command. Signal handlers can be installed with the signal(2) or sigaction(2) system call. If a signal handler is not installed for a particular signal, the default handler is used. Otherwise the signal is intercepted and the signal handler is invoked. The process can also specify two default behaviors, without creating a handler: ignore the signal (SIG_IGN) and use the default signal handler (SIG_DFL). There are two signals which cannot be intercepted and handled: SIGKILL and SIGSTOP. Signal handling is vulnerable to race conditions. As signals are asynchronous, another signal (even of the same type) can be delivered to the process during execution of the signal handling routine. The sigprocmask(2) call can be used to block and unblock delivery of signals. Blocked signals are not delivered to the process until unblocked. Signals that cannot be ignored (SIGKILL and SIGSTOP) cannot be blocked. Signals can cause the interruption of a system call in progress, leaving it to the application to manage a non-transparent restart. Signal handlers should be written in a way that does not result in any unwanted side-effects, e.g. errno alteration, signal mask alteration, signal disposition change, and other global process attribute changes. Use of non-reentrant functions, e.g., malloc or printf, inside signal handlers is also unsafe.
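The handler/disposition mechanics above can be sketched in Python, whose signal module wraps signal(2)-style dispositions (POSIX-only; SIGUSR1 is chosen arbitrarily for the demonstration):

```python
import os
import signal

got = []

def handler(signum, frame):
    got.append(signum)

# Install a handler; the previous disposition is returned so it can be restored
old = signal.signal(signal.SIGUSR1, handler)
os.kill(os.getpid(), signal.SIGUSR1)        # kill(2) aimed at our own PID
# CPython runs Python-level handlers between bytecodes, so it has run by now
assert got == [signal.SIGUSR1]

# SIG_IGN: the signal is now ignored, the handler no longer fires
signal.signal(signal.SIGUSR1, signal.SIG_IGN)
os.kill(os.getpid(), signal.SIGUSR1)
assert got == [signal.SIGUSR1]

signal.signal(signal.SIGUSR1, old)          # restore the previous disposition
```

Note that SIGKILL and SIGSTOP cannot be given handlers this way; `signal.signal(signal.SIGKILL, handler)` raises OSError.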
In particular, the POSIX specification and the Linux man page signal(7) require that all system functions directly or indirectly called from a signal function are async-signal safe.[6][7] The signal-safety(7) man page gives a list of such async-signal-safe system functions (practically the system calls); calling anything else from a handler is undefined behavior.[8] It is suggested to simply set some volatile sig_atomic_t variable in a signal handler, and to test it elsewhere.[9] Signal handlers can instead put the signal into a queue and immediately return. The main thread will then continue "uninterrupted" until signals are taken from the queue, such as in an event loop. "Uninterrupted" here means that operations that block may return prematurely and must be resumed, as mentioned above. Signals should be processed from the queue on the main thread and not by worker pools, as that reintroduces the problem of asynchronicity. However, managing a queue is not possible in an async-signal-safe way with only sig_atomic_t, as only single reads and writes to such variables are guaranteed to be atomic, not increments or (fetch-and-)decrements, as would be required for a queue. Thus, effectively, only one signal per handler can be queued safely with sig_atomic_t until it has been processed. A process's execution may result in the generation of a hardware exception, for instance, if the process attempts to divide by zero or incurs a page fault. In Unix-like operating systems, this event automatically changes the processor context to start executing a kernel exception handler. In case of some exceptions, such as a page fault, the kernel has sufficient information to fully handle the event itself and resume the process's execution. Other exceptions, however, the kernel cannot process intelligently and it must instead defer the exception handling operation to the faulting process. This deferral is achieved via the signal mechanism, wherein the kernel sends to the process a signal corresponding to the current exception.
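The set-a-flag-and-test-elsewhere discipline above can be sketched as follows (POSIX-only; Python has no sig_atomic_t, so a plain variable stands in for it, which is safe here because the handler performs only a single assignment):

```python
import os
import signal

pending = False          # stands in for a volatile sig_atomic_t flag

def handler(signum, frame):
    global pending
    pending = True       # a single write, in the spirit of sig_atomic_t

signal.signal(signal.SIGUSR1, handler)
os.kill(os.getpid(), signal.SIGUSR1)   # the signal arrives "asynchronously"

# ... later, in the main/event loop, outside the handler:
handled = 0
if pending:
    pending = False
    handled += 1         # the safe place for non-reentrant work (malloc, printf, ...)
```

All real work happens in the loop; the handler itself touches nothing but the flag, so it has no unsafe side effects even if it interrupts a non-reentrant function.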
For example, if a process attempted integer division by zero on an x86 CPU, a divide error exception would be generated and cause the kernel to send the SIGFPE signal to the process. Similarly, if the process attempted to access a memory address outside of its virtual address space, the kernel would notify the process of this violation via a SIGSEGV (segmentation violation) signal. The exact mapping between signal names and exceptions is dependent upon the CPU, since exception types differ between architectures. The list below documents the signals specified in the Single Unix Specification Version 5. All signals are defined as macro constants in the <signal.h> header file. The name of each macro constant consists of a "SIG" prefix followed by a mnemonic name for the signal. A process can define how to handle incoming POSIX signals. If a process does not define a behaviour for a signal, then the default handler for that signal is used. The table below lists some default actions for POSIX-compliant UNIX systems, such as FreeBSD, OpenBSD and Linux. The following signals are not specified in the POSIX specification. They are, however, sometimes used on various systems.
https://en.wikipedia.org/wiki/Signal_(computing)
In computing, performance per watt is a measure of the energy efficiency of a particular computer architecture or computer hardware. Literally, it measures the rate of computation that can be delivered by a computer for every watt of power consumed. This rate is typically measured by performance on the LINPACK benchmark when trying to compare between computing systems: an example using this is the Green500 list of supercomputers. Performance per watt has been suggested to be a more sustainable measure of computing than Moore's Law.[1] System designers building parallel computers, such as Google's hardware, pick CPUs based on their performance per watt of power, because the cost of powering the CPU outweighs the cost of the CPU itself.[2] Spaceflight computers have hard limits on the maximum power available and also have hard requirements on minimum real-time performance. A ratio of processing speed to required electrical power is more useful than raw processing speed.[3] The performance and power consumption metrics used depend on the definition; reasonable measures of performance are FLOPS, MIPS, or the score for any performance benchmark. Several measures of power usage may be employed, depending on the purposes of the metric; for example, a metric might only consider the electrical power delivered to a machine directly, while another might include all power necessary to run a computer, such as cooling and monitoring systems. The power measurement is often the average power used while running the benchmark, but other measures of power usage may be employed (e.g. peak power, idle power). For example, the early UNIVAC I computer performed approximately 0.015 operations per watt-second (performing 1,905 operations per second (OPS) while consuming 125 kW).
The Fujitsu FR-V VLIW/vector processor system on a chip, in the variant with four FR550 cores released in 2005, performs 51 Giga-OPS with 3 watts of power consumption, resulting in 17 billion operations per watt-second.[4][5] This is an improvement by over a trillion times in 54 years. Most of the power a computer uses is converted into heat, so a system that takes fewer watts to do a job will require less cooling to maintain a given operating temperature. Reduced cooling demands make it easier to quiet a computer. Lower energy consumption can also make it less costly to run, and reduce the environmental impact of powering the computer (see green computing). If installed where there is limited climate control, a lower power computer will operate at a lower temperature, which may make it more reliable. In a climate-controlled environment, reductions in direct power use may also create savings in climate control energy. Computing energy consumption is sometimes also measured by reporting the energy required to run a particular benchmark, for instance EEMBC EnergyBench. Energy consumption figures for a standard workload may make it easier to judge the effect of an improvement in energy efficiency. When performance is defined as operations/second, then performance per watt can be written as operations/watt-second. Since a watt is one joule/second, performance per watt can also be written as operations/joule. FLOPS per watt is a common measure. Like the FLOPS (floating-point operations per second) metric it is based on, the metric is usually applied to scientific computing and simulations involving many floating point calculations.
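The two worked figures above, and the "over a trillion times" comparison, check out arithmetically:

```python
# UNIVAC I (1951): ~1,905 operations per second at 125 kW
univac = 1905 / 125_000       # ≈ 0.015 operations per watt-second

# Fujitsu FR-V FR550 SoC (2005): 51 GOPS at 3 W
frv = 51e9 / 3                # = 17e9 operations per watt-second

improvement = frv / univac    # ≈ 1.1e12 — "over a trillion times" in 54 years
print(improvement)
```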
As of June 2016, the Green500 list rates the two most efficient supercomputers highest – both based on the same manycore accelerator technology, the Japanese PEZY-SCnp, in addition to Intel Xeon processors, and both at RIKEN – the top one at 6673.8 MFLOPS/watt; the third-ranked is the Chinese-technology Sunway TaihuLight (a much bigger machine, ranked 2nd on TOP500; the others are not on that list) at 6051.3 MFLOPS/watt.[6] In June 2012, the Green500 list rated BlueGene/Q, Power BQC 16C as the most efficient supercomputer on the TOP500 in terms of FLOPS per watt, running at 2,100.88 MFLOPS/watt.[7] In November 2010, an IBM machine, Blue Gene/Q, achieved 1,684 MFLOPS/watt.[8][9] On 9 June 2008, CNN reported that IBM's Roadrunner supercomputer achieves 376 MFLOPS/watt.[10][11] As part of the Intel Tera-Scale research project, the team produced an 80-core CPU that can achieve over 16,000 MFLOPS/watt.[12][13] The future of that CPU is not certain. Microwulf, a low-cost desktop Beowulf cluster of four dual-core Athlon 64 X2 3800+ computers, runs at 58 MFLOPS/watt.[14] Kalray has developed a 256-core VLIW CPU that achieves 25,000 MFLOPS/watt; the next generation is expected to achieve 75,000 MFLOPS/watt.[15] However, in 2019 their latest embedded chip is 80-core and claims up to 4 TFLOPS at 20 W.[16] Adapteva announced the Epiphany V, a 1024-core 64-bit RISC processor intended to achieve 75 GFLOPS/watt,[17][18] though it later announced that the Epiphany V was "unlikely" to become available as a commercial product. US Patent 10,020,436 (July 2018) claims three intervals of 100, 300, and 600 GFLOPS/watt. Graphics processing units (GPUs) have continued to increase in energy usage, while CPU designers have recently focused on improving performance per watt. High-performance GPUs may draw large amounts of power, so intelligent techniques are required to manage GPU power consumption.
Measures like the 3DMark2006 score per watt can help identify more efficient GPUs.[19] However, that may not adequately incorporate efficiency in typical use, where much time is spent doing less demanding tasks.[20] With modern GPUs, energy usage is an important constraint on the maximum computational capabilities that can be achieved. GPU designs are usually highly scalable, allowing the manufacturer to put multiple chips on the same video card, or to use multiple video cards that work in parallel. Peak performance of any system is essentially limited by the amount of power it can draw and the amount of heat it can dissipate. Consequently, performance per watt of a GPU design translates directly into the peak performance of a system that uses that design. Since GPUs may also be used for some general-purpose computation, sometimes their performance is measured in terms also applied to CPUs, such as FLOPS per watt. While performance per watt is useful, absolute power requirements are also important. Claims of improved performance per watt may be used to mask increasing power demands. For instance, though newer-generation GPU architectures may provide better performance per watt, continued performance increases can negate the gains in efficiency, and the GPUs continue to consume large amounts of power.[22] Benchmarks that measure power under heavy load may not adequately reflect typical efficiency. For instance, 3DMark stresses the 3D performance of a GPU, but many computers spend most of their time doing less intense display tasks (idle, 2D tasks, displaying video). So the 2D or idle efficiency of the graphics system may be at least as significant for overall energy efficiency. Likewise, systems that spend much of their time in standby or soft off are not adequately characterized by just efficiency under load.
To help address this, some benchmarks, like SPECpower, include measurements at a series of load levels.[23] The efficiency of some electrical components, such as voltage regulators, decreases with increasing temperature, so the power used may increase with temperature. Power supplies, motherboards, and some video cards are some of the subsystems affected by this. So their power draw may depend on temperature, and the temperature or temperature dependence should be noted when measuring.[24][25] Performance per watt also typically does not include full life-cycle costs. Since computer manufacturing is energy intensive, and computers often have a relatively short lifespan, energy and materials involved in production, distribution, disposal and recycling often make up significant portions of their cost, energy use, and environmental impact.[26][27] Energy required for climate control of the computer's surroundings is often not counted in the wattage calculation, but it can be significant.[28] SWaP (space, wattage and performance) is a Sun Microsystems metric for data centers, incorporating power and space: SWaP = performance / (space × power consumption), where performance is measured by any appropriate benchmark, and space is the size of the computer.[29] Reduction of power, mass, and volume is also important for spaceflight computers.[3]
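The SWaP ratio divides a benchmark result by the product of the machine's footprint and its power draw; a tiny helper makes the normalization explicit (the example numbers and unit choices are ours, not Sun's):

```python
def swap(performance, space, watts):
    """SWaP = performance / (space * power).
    'performance' is any appropriate benchmark score; 'space' is the size of
    the computer (rack units are a common choice); 'watts' is power draw."""
    return performance / (space * watts)

# Illustrative: a 2U server scoring 1000 on some benchmark while drawing 500 W
print(swap(1000, 2, 500))   # 1.0
```

A system that delivers the same score in half the space, or at half the power, doubles its SWaP.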
https://en.wikipedia.org/wiki/Performance_per_watt
Applied mechanics is the branch of science concerned with the motion of any substance that can be experienced or perceived by humans without the help of instruments.[1] In short, when mechanics concepts surpass being theoretical and are applied and executed, general mechanics becomes applied mechanics. It is this stark difference that makes applied mechanics an essential understanding for practical everyday life.[2] It has numerous applications in a wide variety of fields and disciplines, including but not limited to structural engineering, astronomy, oceanography, meteorology, hydraulics, mechanical engineering, aerospace engineering, nanotechnology, structural design, earthquake engineering, fluid dynamics, planetary sciences, and other life sciences.[3][4] Connecting research between numerous disciplines, applied mechanics plays an important role in both science and engineering.[1] Pure mechanics describes the response of bodies (solids and fluids) or systems of bodies, in either a beginning state of rest or of motion, subjected to the action of external forces. Applied mechanics bridges the gap between physical theory and its application to technology.
Applied mechanics comprises two main categories: classical mechanics, the study of the mechanics of macroscopic solids, and fluid mechanics, the study of the mechanics of macroscopic fluids.[4] Each branch of applied mechanics contains subcategories of its own.[4] Classical mechanics is divided into statics and dynamics, which are further subdivided: statics into the study of rigid bodies and rigid structures, and dynamics into kinematics and kinetics.[4] Like classical mechanics, fluid mechanics is also divided into two sections: statics and dynamics.[4] Within the practical sciences, applied mechanics is useful in formulating new ideas and theories, discovering and interpreting phenomena, and developing experimental and computational tools.[5] In the application of the natural sciences, mechanics was said to be complemented by thermodynamics, the study of heat and more generally energy, and electromechanics, the study of electricity and magnetism.
Engineering problems are generally tackled with applied mechanics through the application of theories of classical mechanics and fluid mechanics.[4] Because applied mechanics can be applied in engineering disciplines like civil engineering, mechanical engineering, aerospace engineering, materials engineering, and biomedical engineering, it is sometimes referred to as engineering mechanics.[4] Science and engineering are interconnected with respect to applied mechanics, as research in science is linked to research processes in the civil, mechanical, aerospace, materials and biomedical engineering disciplines.[1] In civil engineering, applied mechanics' concepts can be applied to structural design and a variety of engineering sub-topics like structural, coastal, geotechnical, construction, and earthquake engineering.[4] In mechanical engineering, it can be applied in mechatronics and robotics, design and drafting, nanotechnology, machine elements, structural analysis, friction stir welding, and acoustical engineering.[4] In aerospace engineering, applied mechanics is used in aerodynamics, aerospace structural mechanics and propulsion, aircraft design and flight mechanics.[4] In materials engineering, applied mechanics' concepts are used in thermoelasticity, elasticity theory, fracture and failure mechanisms, structural design optimisation, fracture and fatigue, active materials and composites, and computational mechanics.[6] Research in applied mechanics can be directly linked to biomedical engineering areas of interest like orthopaedics; biomechanics; human body motion analysis; soft tissue modelling of muscles, tendons, ligaments, and cartilage; biofluid mechanics; and dynamic systems, performance enhancement, and optimal control.[7] The first science with a theoretical foundation based in mathematics was mechanics; the underlying principles of mechanics were first delineated by Isaac Newton in his 1687 book Philosophiæ Naturalis Principia Mathematica.[3] One of the earliest works to define
applied mechanics as its own discipline was the three-volume Handbuch der Mechanik written by German physicist and engineer Franz Josef Gerstner.[8] The first seminal work on applied mechanics to be published in English was A Manual of Applied Mechanics in 1858 by English mechanical engineer William Rankine.[8][9] August Föppl, a German mechanical engineer and professor, published Vorlesungen über technische Mechanik in 1898, in which he introduced calculus to the study of applied mechanics.[8] Applied mechanics was established as a discipline separate from classical mechanics in the early 1920s with the publication of the Journal of Applied Mathematics and Mechanics, the creation of the Society of Applied Mathematics and Mechanics, and the first meeting of the International Congress of Applied Mechanics.[1] In 1921 Austrian scientist Richard von Mises started the Journal of Applied Mathematics and Mechanics (Zeitschrift für Angewandte Mathematik und Mechanik) and in 1922, with German scientist Ludwig Prandtl, founded the Society of Applied Mathematics and Mechanics (Gesellschaft für Angewandte Mathematik und Mechanik).[1] During a 1922 conference on hydrodynamics and aerodynamics in Innsbruck, Austria, Theodore von Kármán, a Hungarian engineer, and Tullio Levi-Civita, an Italian mathematician, met and decided to organize a conference on applied mechanics.[1] In 1924 the first meeting of the International Congress of Applied Mechanics was held in Delft, the Netherlands, attended by more than 200 scientists from around the world.[1][3] Since this first meeting the congress has been held every four years, except during World War II; the name of the meeting was changed to International Congress of Theoretical and Applied Mechanics in 1960.[1] Due to the unpredictable political landscape in Europe after the First World War and the upheaval of World War II, many European scientists and engineers emigrated to the United States.[1] Ukrainian engineer Stephen Timoshenko fled the Bolshevik Red Army in 1918 and eventually
emigrated to the U.S. in 1922; over the next twenty-two years he taught applied mechanics at the University of Michigan and Stanford University.[10] Timoshenko authored thirteen textbooks in applied mechanics, many considered the gold standard in their fields; he also founded the Applied Mechanics Division of the American Society of Mechanical Engineers in 1927 and is considered "America's Father of Engineering Mechanics."[10] In 1930 Theodore von Kármán left Germany and became the first director of the Aeronautical Laboratory at the California Institute of Technology; von Kármán would later co-found the Jet Propulsion Laboratory in 1944.[1] With the leadership of Timoshenko and von Kármán, the influx of talent from Europe, and the rapid growth of the aeronautical and defense industries, applied mechanics became a mature discipline in the U.S. by 1950.[1] Dynamics, the study of the motion and movement of various objects, can be further divided into two branches, kinematics and kinetics.[4] For classical mechanics, kinematics is the analysis of moving bodies using time, velocities, displacement, and acceleration.[4] Kinetics is the study of moving bodies through the lens of the effects of forces and masses.[4] In the context of fluid mechanics, fluid dynamics pertains to the flow and description of the motion of various fluids.[4] The study of statics is the study and description of bodies at rest.[4] Static analysis in classical mechanics can be broken down into two categories, non-deformable bodies and deformable bodies.[4] When studying non-deformable bodies, considerations relating to the forces acting on the rigid structures are analyzed.
When studying deformable bodies, the structure and material strength of the body are examined.[4] In the context of fluid mechanics, the resting state of the fluid, unaffected by pressure differences, is taken into account.[4] Applied mechanics is the result of the practical application of various engineering/mechanical disciplines, as illustrated in the table below.[4] (Table not reproduced.) Being one of the first sciences for which a systematic theoretical framework was developed, mechanics was spearheaded by Sir Isaac Newton's Principia (published in 1687).[3] It is the "divide and rule" strategy developed by Newton that helped to govern motion and split it into dynamics or statics.[3] The type of force, the type of matter, and the external forces acting on that matter dictate the "divide and rule" strategy within dynamic and static studies.[3] Archimedes' principle contains many defining propositions pertaining to fluid mechanics. As stated in proposition 7, a solid that is heavier than the fluid it is placed in will descend to the bottom of the fluid; if the solid is weighed within the fluid, it will be measured as lighter than its true weight by the weight of the fluid displaced by the solid.[11] By proposition 5, a solid lighter than the fluid it is placed in will sink only so far that the weight of the displaced fluid equals the weight of the solid; to be fully covered by the liquid it would have to be forcibly immersed.[11] This section is based on the "AMR Subject Classification Scheme" from the journal Applied Mechanics Reviews.[12]
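Proposition 7 can be put into numbers: the apparent weight of a submerged solid is its true weight minus the weight of the fluid it displaces (material values below are illustrative, not from the source).

```python
# A 1-litre steel block weighed under water (illustrative densities, SI units)
rho_solid = 7800.0      # steel, kg/m^3
rho_fluid = 1000.0      # water, kg/m^3
volume = 0.001          # m^3 (1 litre)
g = 9.81                # m/s^2

weight = rho_solid * volume * g      # true weight: 76.518 N
buoyancy = rho_fluid * volume * g    # weight of displaced water: 9.81 N
apparent = weight - buoyancy         # weight measured in the fluid
print(apparent)
```

Since rho_solid > rho_fluid, the apparent weight stays positive and the block sinks, as proposition 7 states.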
https://en.wikipedia.org/wiki/Engineering_mechanics
The compact representation for quasi-Newton methods is a matrix decomposition, which is typically used in gradient-based optimization algorithms or for solving nonlinear systems. The decomposition uses a low-rank representation for the direct and/or inverse Hessian or the Jacobian of a nonlinear system. Because of this, the compact representation is often used for large problems and constrained optimization. The compact representation of a quasi-Newton matrix for the inverse Hessian Hk{\displaystyle H_{k}} or direct Hessian Bk{\displaystyle B_{k}} of a nonlinear objective function f(x):Rn→R{\displaystyle f(x):\mathbb {R} ^{n}\to \mathbb {R} } expresses a sequence of recursive rank-1 or rank-2 matrix updates as one rank-k{\displaystyle k} or rank-2k{\displaystyle 2k} update of an initial matrix.[1][2] Because it is derived from quasi-Newton updates, it uses differences of iterates and gradients ∇f(xk)=gk{\displaystyle \nabla f(x_{k})=g_{k}} in its definition {si−1=xi−xi−1,yi−1=gi−gi−1}i=1k{\displaystyle \{s_{i-1}=x_{i}-x_{i-1},y_{i-1}=g_{i}-g_{i-1}\}_{i=1}^{k}}. In particular, for r=k{\displaystyle r=k} or r=2k{\displaystyle r=2k} the rectangular n×r{\displaystyle n\times r} matrices Uk,Jk{\displaystyle U_{k},J_{k}} and the r×r{\displaystyle r\times r} square symmetric systems Mk,Nk{\displaystyle M_{k},N_{k}} depend on the si,yi{\displaystyle s_{i},y_{i}}'s and define the quasi-Newton representations. Because of the special matrix decomposition, the compact representation is implemented in state-of-the-art optimization software.[3][4][5][6] When combined with limited-memory techniques it is a popular technique for constrained optimization with gradients.[7] Linear algebra operations can be done efficiently, like matrix-vector products, solves or eigendecompositions. It can be combined with line-search and trust-region techniques, and the representation has been developed for many quasi-Newton updates.
For instance, the matrix-vector product with the direct quasi-Newton Hessian and an arbitrary vector g∈Rn{\displaystyle g\in \mathbb {R} ^{n}} is Bkg=B0g+Jk(Nk−1(JkTg)){\displaystyle B_{k}g=B_{0}g+J_{k}{\big (}N_{k}^{-1}(J_{k}^{T}g){\big )}}, evaluated from the inside out so that only products with the rectangular factors and one small solve are required. In the context of the GMRES method, Walker[8] showed that a product of Householder transformations (an identity plus rank-1) can be expressed as a compact matrix formula. This led to the derivation of an explicit matrix expression for the product of k{\displaystyle k} identity-plus-rank-1 matrices.[7] Specifically, for Sk=[s0s1…sk−1],{\textstyle S_{k}={\begin{bmatrix}s_{0}&s_{1}&\ldots s_{k-1}\end{bmatrix}},} Yk=[y0y1…yk−1],{\displaystyle ~Y_{k}={\begin{bmatrix}y_{0}&y_{1}&\ldots y_{k-1}\end{bmatrix}},} (Rk)ij=si−1Tyj−1,{\displaystyle ~(R_{k})_{ij}=s_{i-1}^{T}y_{j-1},} ρi−1=1/si−1Tyi−1{\displaystyle ~\rho _{i-1}=1/s_{i-1}^{T}y_{i-1}} and Vi=I−ρi−1yi−1si−1T{\textstyle ~V_{i}=I-\rho _{i-1}y_{i-1}s_{i-1}^{T}} when 1≤i≤j≤k{\displaystyle 1\leq i\leq j\leq k}, the product of k{\displaystyle k} rank-1 updates to the identity is

∏i=1kVi−1=(I−ρ0y0s0T)⋯(I−ρk−1yk−1sk−1T)=I−YkRk−1SkT{\displaystyle \prod _{i=1}^{k}V_{i-1}=\left(I-\rho _{0}y_{0}s_{0}^{T}\right)\cdots \left(I-\rho _{k-1}y_{k-1}s_{k-1}^{T}\right)=I-Y_{k}R_{k}^{-1}S_{k}^{T}}

The BFGS update can be expressed in terms of products of the Vi{\displaystyle V_{i}}'s, which have a compact matrix formula.
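The product identity above can be checked numerically; a small NumPy sketch (dimensions and data are arbitrary) compares the explicit product of rank-1 factors against the single compact correction:

```python
import numpy as np

rng = np.random.default_rng(1)
n, k = 5, 3
S = rng.standard_normal((n, k))            # columns s_0, ..., s_{k-1}
Y = S + 0.1 * rng.standard_normal((n, k))  # makes s_i^T y_i > 0 for this data

# Left side: (I - rho_0 y_0 s_0^T) ... (I - rho_{k-1} y_{k-1} s_{k-1}^T)
prod = np.eye(n)
for i in range(k):
    s, y = S[:, i:i + 1], Y[:, i:i + 1]
    prod = prod @ (np.eye(n) - (y @ s.T) / float(s.T @ y))

# Right side: one compact correction, R upper triangular, R_ij = s_{i-1}^T y_{j-1}
R = np.triu(S.T @ Y)
compact = np.eye(n) - Y @ np.linalg.inv(R) @ S.T

print(np.allclose(prod, compact))
```

The k triangular entries of R absorb all the cross terms that appear when the rank-1 factors are multiplied out.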
Therefore, the BFGS recursion can exploit these block matrix representations.

A parametric family of quasi-Newton updates includes many of the best-known formulas.[9] For arbitrary vectors vk{\displaystyle v_{k}} and ck{\displaystyle c_{k}} such that vkTyk≠0{\displaystyle v_{k}^{T}y_{k}\neq 0} and ckTsk≠0{\displaystyle c_{k}^{T}s_{k}\neq 0}, general recursive update formulas for the inverse and direct Hessian estimates are obtained. By making specific choices for the parameter vectors vk{\displaystyle v_{k}} and ck{\displaystyle c_{k}}, well-known methods are recovered. Collecting the updating vectors of the recursive formulas into matrices, define

Sk=[s0s1…sk−1],{\displaystyle S_{k}={\begin{bmatrix}s_{0}&s_{1}&\ldots &s_{k-1}\end{bmatrix}},} Yk=[y0y1…yk−1],{\displaystyle Y_{k}={\begin{bmatrix}y_{0}&y_{1}&\ldots &y_{k-1}\end{bmatrix}},} Vk=[v0v1…vk−1],{\displaystyle V_{k}={\begin{bmatrix}v_{0}&v_{1}&\ldots &v_{k-1}\end{bmatrix}},} Ck=[c0c1…ck−1],{\displaystyle C_{k}={\begin{bmatrix}c_{0}&c_{1}&\ldots &c_{k-1}\end{bmatrix}},}

upper triangular

(Rk)ij:=(RkSY)ij=si−1Tyj−1,(RkVY)ij=vi−1Tyj−1,(RkCS)ij=ci−1Tsj−1, for 1≤i≤j≤k{\displaystyle {\big (}R_{k}{\big )}_{ij}:={\big (}R_{k}^{\text{SY}}{\big )}_{ij}=s_{i-1}^{T}y_{j-1},\quad {\big (}R_{k}^{\text{VY}}{\big )}_{ij}=v_{i-1}^{T}y_{j-1},\quad {\big (}R_{k}^{\text{CS}}{\big )}_{ij}=c_{i-1}^{T}s_{j-1},\quad \quad {\text{ for }}1\leq i\leq j\leq k}

lower triangular

(Lk)ij:=(LkSY)ij=si−1Tyj−1,(LkVY)ij=vi−1Tyj−1,(LkCS)ij=ci−1Tsj−1, for 1≤j<i≤k{\displaystyle {\big (}L_{k}{\big )}_{ij}:={\big (}L_{k}^{\text{SY}}{\big )}_{ij}=s_{i-1}^{T}y_{j-1},\quad {\big (}L_{k}^{\text{VY}}{\big )}_{ij}=v_{i-1}^{T}y_{j-1},\quad {\big (}L_{k}^{\text{CS}}{\big )}_{ij}=c_{i-1}^{T}s_{j-1},\quad \quad {\text{ for }}1\leq j<i\leq k}

and diagonal

(Dk)ij:=(DkSY)ij=si−1Tyj−1, for 1≤i=j≤k{\displaystyle (D_{k})_{ij}:={\big (}D_{k}^{\text{SY}}{\big )}_{ij}=s_{i-1}^{T}y_{j-1},\quad \quad {\text{ for }}1\leq i=j\leq k}

With these definitions the compact representations of general rank-2 updates
in (2) and (3) (including the well-known quasi-Newton updates in Table 1) have been developed in Brust:[11]

Hk=H0+UkMk−1UkT,{\displaystyle H_{k}=H_{0}+U_{k}M_{k}^{-1}U_{k}^{T},} Uk=[VkSk−H0Yk]{\displaystyle U_{k}={\begin{bmatrix}V_{k}&S_{k}-H_{0}Y_{k}\end{bmatrix}}} Mk=[0k×kRkVY(RkVY)TRk+RkT−(Dk+YkTH0Yk)]{\displaystyle M_{k}={\begin{bmatrix}0_{k\times k}&R_{k}^{\text{VY}}\\{\big (}R_{k}^{\text{VY}}{\big )}^{T}&R_{k}+R_{k}^{T}-(D_{k}+Y_{k}^{T}H_{0}Y_{k})\end{bmatrix}}}

and the formula for the direct Hessian is

Bk=B0+JkNk−1JkT,{\displaystyle B_{k}=B_{0}+J_{k}N_{k}^{-1}J_{k}^{T},} Jk=[CkYk−B0Sk]{\displaystyle J_{k}={\begin{bmatrix}C_{k}&Y_{k}-B_{0}S_{k}\end{bmatrix}}} Nk=[0k×kRkCS(RkCS)TRk+RkT−(Dk+SkTB0Sk)]{\displaystyle N_{k}={\begin{bmatrix}0_{k\times k}&R_{k}^{\text{CS}}\\{\big (}R_{k}^{\text{CS}}{\big )}^{T}&R_{k}+R_{k}^{T}-(D_{k}+S_{k}^{T}B_{0}S_{k})\end{bmatrix}}}

For instance, when Vk=Sk{\displaystyle V_{k}=S_{k}} the representation in (4) is the compact formula for the BFGS recursion in (1). Prior to the development of the compact representations of (2) and (3), equivalent representations had been discovered for most known updates (see Table 1).
Along with the SR1 representation, the BFGS (Broyden–Fletcher–Goldfarb–Shanno) compact representation was the first compact formula known.[7] In particular, the inverse representation is given by

Hk=H0+UkMk−1UkT,Uk=[SkH0Yk],Mk−1=[Rk−T(Dk+YkTH0Yk)Rk−1−Rk−T−Rk−10]{\displaystyle H_{k}=H_{0}+U_{k}M_{k}^{-1}U_{k}^{T},\quad U_{k}={\begin{bmatrix}S_{k}&H_{0}Y_{k}\end{bmatrix}},\quad M_{k}^{-1}=\left[{\begin{smallmatrix}R_{k}^{-T}(D_{k}+Y_{k}^{T}H_{0}Y_{k})R_{k}^{-1}&-R_{k}^{-T}\\-R_{k}^{-1}&0\end{smallmatrix}}\right]}

The direct Hessian approximation can be found by applying the Sherman–Morrison–Woodbury identity to the inverse Hessian:

Bk=B0+JkNk−1JkT,Jk=[B0SkYk],Nk=[SkTB0SkLkLkT−Dk]{\displaystyle B_{k}=B_{0}+J_{k}N_{k}^{-1}J_{k}^{T},\quad J_{k}={\begin{bmatrix}B_{0}S_{k}&Y_{k}\end{bmatrix}},\quad N_{k}=\left[{\begin{smallmatrix}S_{k}^{T}B_{0}S_{k}&L_{k}\\L_{k}^{T}&-D_{k}\end{smallmatrix}}\right]}

The SR1 (symmetric rank-1) compact representation was first proposed in [7]. Using the definitions of Dk,Lk{\displaystyle D_{k},L_{k}} and Rk{\displaystyle R_{k}} from above, the inverse Hessian formula is given by

Hk=H0+UkMk−1UkT,Uk=Sk−H0Yk,Mk=Rk+RkT−Dk−YkTH0Yk{\displaystyle H_{k}=H_{0}+U_{k}M_{k}^{-1}U_{k}^{T},\quad U_{k}=S_{k}-H_{0}Y_{k},\quad M_{k}=R_{k}+R_{k}^{T}-D_{k}-Y_{k}^{T}H_{0}Y_{k}}

The direct Hessian is obtained by the Sherman–Morrison–Woodbury identity and has the form

Bk=B0+JkNk−1JkT,Jk=Yk−B0Sk,Nk=Dk+Lk+LkT−SkTB0Sk{\displaystyle B_{k}=B_{0}+J_{k}N_{k}^{-1}J_{k}^{T},\quad J_{k}=Y_{k}-B_{0}S_{k},\quad N_{k}=D_{k}+L_{k}+L_{k}^{T}-S_{k}^{T}B_{0}S_{k}}

The multipoint symmetric secant (MSS) method is a method that aims to satisfy multiple secant equations.
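The inverse-BFGS compact formula above can be verified against the classical recursive update with a small NumPy sketch (sizes and data are arbitrary; H0 is the identity):

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 6, 3
H0 = np.eye(n)

S = rng.standard_normal((n, k))            # steps s_0, ..., s_{k-1}
Y = S + 0.1 * rng.standard_normal((n, k))  # makes curvature s_i^T y_i > 0 here

# k steps of the classical recursive inverse-BFGS update:
# H <- (I - rho s y^T) H (I - rho y s^T) + rho s s^T
H = H0.copy()
for i in range(k):
    s, y = S[:, i:i + 1], Y[:, i:i + 1]
    rho = 1.0 / float(s.T @ y)
    V = np.eye(n) - rho * (y @ s.T)
    H = V.T @ H @ V + rho * (s @ s.T)

# One-shot compact form: H_k = H_0 + U M^{-1} U^T with U = [S, H0 Y]
SY = S.T @ Y
R = np.triu(SY)                            # R_ij = s_{i-1}^T y_{j-1}
D = np.diag(np.diag(SY))
Rinv = np.linalg.inv(R)
U = np.hstack([S, H0 @ Y])
Minv = np.block([[Rinv.T @ (D + Y.T @ H0 @ Y) @ Rinv, -Rinv.T],
                 [-Rinv, np.zeros((k, k))]])
H_compact = H0 + U @ Minv @ U.T

print(np.allclose(H, H_compact))
```

The compact form never builds the n×n intermediate products of the recursion; only the n×2k factor U and the 2k×2k middle matrix are needed, which is the point of the limited-memory variants discussed below.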
The recursive update formula was originally developed by Burdakov.[12] The compact representation for the direct Hessian was derived in [13]:

B_k = B_0 + J_k N_k^{-1} J_k^T, \quad J_k = \begin{bmatrix} S_k & Y_k - B_0 S_k \end{bmatrix}, \quad N_k = \begin{bmatrix} W_k \big(S_k^T B_0 S_k - (R_k - D_k + R_k^T)\big) W_k & W_k \\ W_k & 0 \end{bmatrix}^{-1}, \quad W_k = (S_k^T S_k)^{-1}.

Another equivalent compact representation for the MSS matrix is derived by rewriting J_k as J_k = \begin{bmatrix} S_k & B_0 Y_k \end{bmatrix}.[14] The inverse representation can be obtained by application of the Sherman–Morrison–Woodbury identity.

Since the DFP (Davidon–Fletcher–Powell) update is the dual of the BFGS formula (i.e., swapping H_k ↔ B_k, H_0 ↔ B_0 and y_k ↔ s_k in the BFGS update), the compact representation for DFP can be obtained immediately from the one for BFGS.[15]

The PSB (Powell–Symmetric–Broyden) compact representation was developed for the direct Hessian approximation.[16] It is equivalent to substituting C_k = S_k in (5):

B_k = B_0 + J_k N_k^{-1} J_k^T, \quad J_k = \begin{bmatrix} S_k & Y_k - B_0 S_k \end{bmatrix}, \quad N_k = \begin{bmatrix} 0 & R_k^{\text{SS}} \\ \big(R_k^{\text{SS}}\big)^T & R_k + R_k^T - (D_k + S_k^T B_0 S_k) \end{bmatrix}.

For structured optimization problems in which the objective function can be decomposed into two parts, f(x) = \hat{k}(x) + \hat{u}(x), where the gradients and Hessian of \hat{k}(x) are known but only the gradient of \hat{u}(x) is known, structured BFGS formulas exist. The compact representation of these methods has the general form of (5), with specific J_k and N_k.[17]

The reduced compact representation (RCR) of BFGS is for linear equality constrained optimization, minimize f(x) subject to Ax = b, where A is underdetermined. In addition to the matrices S_k, Y_k, the RCR also stores the projections of the y_i's onto the nullspace of A:

Z_k = \begin{bmatrix} z_0 & z_1 & \cdots & z_{k-1} \end{bmatrix}, \quad z_i = P y_i, \quad P = I - A^T (A A^T)^{-1} A, \quad 0 \le i \le k-1.

For B_k the compact representation of the BFGS matrix (with a multiple of the identity as B_0), the (1,1) block of the inverse of the KKT matrix

K_k = \begin{bmatrix} B_k & A^T \\ A & 0 \end{bmatrix}, \quad B_0 = \frac{1}{\gamma_k} I, \quad H_0 = \gamma_k I, \quad \gamma_k > 0,

has the compact representation[18]

\big(K_k^{-1}\big)_{11} = H_0 + U_k M_k^{-1} U_k^T, \quad U_k = \begin{bmatrix} A^T & S_k & Z_k \end{bmatrix}, \quad M_k = \begin{bmatrix} -A A^T/\gamma_k & 0 \\ 0 & G_k \end{bmatrix}, \quad G_k = \begin{bmatrix} R_k^{-T}(D_k + Y_k^T H_0 Y_k) R_k^{-1} & -H_0 R_k^{-T} \\ -H_0 R_k^{-1} & 0 \end{bmatrix}^{-1}.

The most common use of the compact representations is for the limited-memory setting, where m ≪ n denotes the memory parameter, with typical values around m ∈ [5, 12] (see, e.g., [18][7]).
Then, instead of storing the history of all vectors, one limits this to the m most recent vectors \{(s_i, y_i)\}_{i=k-m}^{k-1} and possibly \{v_i\}_{i=k-m}^{k-1} or \{c_i\}_{i=k-m}^{k-1}. Further, the initialization is typically chosen as an adaptive multiple of the identity, H_k^{(0)} = \gamma_k I with \gamma_k = y_{k-1}^T s_{k-1} / y_{k-1}^T y_{k-1}, and B_k^{(0)} = \frac{1}{\gamma_k} I. Limited-memory methods are frequently used for large-scale problems with many variables (i.e., n can be large), in which the limited-memory matrices S_k ∈ R^{n×m} and Y_k ∈ R^{n×m} (and possibly V_k, C_k) are tall and very skinny: S_k = \begin{bmatrix} s_{k-m} & \ldots & s_{k-1} \end{bmatrix} and Y_k = \begin{bmatrix} y_{k-m} & \ldots & y_{k-1} \end{bmatrix}.
Open source implementations include:
Non open source implementations include:
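A minimal sketch of the limited-memory bookkeeping described above (the helper names and the synthetic pairs are illustrative, not from any particular library): only the m most recent (s_i, y_i) pairs are retained, and the adaptive scaling γ_k is recomputed from the newest pair.

```python
from collections import deque
import numpy as np

m = 5                       # memory parameter
pairs = deque(maxlen=m)     # automatically discards pairs older than the last m

def push(s, y):
    """Record one curvature pair (s_i, y_i)."""
    pairs.append((s, y))

def init_scaling():
    """Adaptive initialization gamma_k = y^T s / y^T y from the newest pair."""
    s, y = pairs[-1]
    return float(y @ s) / float(y @ y)

rng = np.random.default_rng(1)
n = 100
for _ in range(8):                         # after 8 updates only 5 pairs remain
    s = rng.standard_normal(n)
    y = s + 0.1 * rng.standard_normal(n)   # keeps y^T s > 0 (curvature condition)
    push(s, y)

# Tall-and-skinny limited-memory matrices for the compact representation.
S = np.column_stack([s for s, _ in pairs])   # n x m
Y = np.column_stack([y for _, y in pairs])   # n x m
gamma = init_scaling()                       # H_k^(0) = gamma * I
```

With S, Y and γ_k in hand, the compact formulas from the previous sections can be evaluated with only O(nm) storage instead of the O(n²) needed for a dense Hessian approximation.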
https://en.wikipedia.org/wiki/Compact_quasi-Newton_representation
In linear algebra, orthogonalization is the process of finding a set of orthogonal vectors that span a particular subspace. Formally, starting with a linearly independent set of vectors {v1, ..., vk} in an inner product space (most commonly the Euclidean space R^n), orthogonalization results in a set of orthogonal vectors {u1, ..., uk} that generate the same subspace as the vectors v1, ..., vk. Every vector in the new set is orthogonal to every other vector in the new set, and the new set and the old set have the same linear span. In addition, if we want the resulting vectors to all be unit vectors, then we normalize each vector and the procedure is called orthonormalization. Orthogonalization is also possible with respect to any symmetric bilinear form (not necessarily an inner product, not necessarily over the real numbers), but standard algorithms may encounter division by zero in this more general setting. Methods for performing orthogonalization include the Gram–Schmidt process, the Householder transformation, the Givens rotation, and symmetric (Löwdin) orthogonalization. When performing orthogonalization on a computer, the Householder transformation is usually preferred over the Gram–Schmidt process since it is more numerically stable, i.e. rounding errors tend to have less serious effects. On the other hand, the Gram–Schmidt process produces the jth orthogonalized vector after the jth iteration, while orthogonalization using Householder reflections produces all the vectors only at the end. This makes only the Gram–Schmidt process applicable for iterative methods like the Arnoldi iteration. The Givens rotation is more easily parallelized than Householder transformations. Symmetric orthogonalization was formulated by Per-Olov Löwdin.[1] To compensate for the loss of useful signal in traditional noise attenuation approaches, because of incorrect parameter selection or inadequacy of denoising assumptions, a weighting operator can be applied on the initially denoised section for the retrieval of useful signal from the initial noise section.
The new denoising process is referred to as the local orthogonalization of signal and noise.[2] It has a wide range of applications in many signal processing and seismic exploration fields.
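As a minimal illustration of one of the methods named above, the following sketch (assuming NumPy) implements the modified Gram–Schmidt process and checks the two defining properties of the result: pairwise orthonormality and preservation of the linear span.

```python
import numpy as np

def gram_schmidt(vectors):
    """Modified Gram-Schmidt: orthonormalize a list of linearly
    independent vectors, returning vectors with the same span."""
    basis = []
    for v in vectors:
        w = np.array(v, dtype=float)
        for u in basis:              # subtract components along earlier vectors
            w -= (u @ w) * u
        basis.append(w / np.linalg.norm(w))
    return basis

rng = np.random.default_rng(0)
vs = [rng.standard_normal(5) for _ in range(3)]
Q = np.column_stack(gram_schmidt(vs))

assert np.allclose(Q.T @ Q, np.eye(3))   # orthonormal: Q^T Q = I
for v in vs:                             # same span: projecting v onto the
    assert np.allclose(Q @ (Q.T @ v), v) # new basis reproduces v exactly
```

The "modified" variant (subtracting projections one at a time from the running vector) is preferred over the textbook classical variant because it accumulates less rounding error, though, as noted above, Householder reflections are more stable still.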
https://en.wikipedia.org/wiki/Orthogonalization
Bluetooth beacons are hardware transmitters, a class of Bluetooth Low Energy (LE) devices that broadcast their identifier to nearby portable electronic devices. The technology enables smartphones, tablets and other devices to perform actions when in close proximity to a beacon. Bluetooth beacons use Bluetooth Low Energy proximity sensing to transmit a universally unique identifier[1] picked up by a compatible app or operating system. The identifier and several bytes sent with it can be used to determine the device's physical location,[2] track customers, or trigger a location-based action on the device such as a check-in on social media or a push notification. One application is distributing messages at a specific point of interest, for example a store, a bus stop, a room or a more specific location like a piece of furniture or a vending machine. This is similar to previously used geopush technology based on GPS, but with a much reduced impact on battery life and much extended precision. Another application is an indoor positioning system,[3][4][5] which helps smartphones determine their approximate location or context. With the help of a Bluetooth beacon, a smartphone's software can approximately find its relative location to a Bluetooth beacon in a store. Brick and mortar retail stores use the beacons for mobile commerce, offering customers special deals through mobile marketing,[6] and can enable mobile payments through point of sale systems. Bluetooth beacons differ from some other location-based technologies in that the broadcasting device (beacon) is only a one-way transmitter to the receiving smartphone or receiving device, and necessitates a specific app installed on the device to interact with the beacons. Thus only the installed app, and not the Bluetooth beacon transmitter, can track users.
Bluetooth beacon transmitters come in a variety of form factors, including small coin cell devices, USB sticks, and generic Bluetooth 4.0 capable USB dongles.[7] The development of the "short-link" radio technology, later named Bluetooth, was initiated in 1989 by Dr. Nils Rydbeck, CTO at Ericsson Mobile in Lund, and Dr. Johan Ullman. The purpose was to develop wireless headsets, according to two inventions by Johan Ullman, SE 8902098–6, issued 1989-06-12, and SE 9202239, issued 1992-07-24. Since its creation the Bluetooth standard has gone through many generations, each adding different features. Bluetooth 1.2 allowed for faster speeds up to ≈700 kbit/s. Bluetooth 2.0 improved on this with speeds up to 3 Mbit/s. Bluetooth 2.1 improved device pairing speed and security. Bluetooth 3.0 again improved transfer speed, up to 24 Mbit/s. In 2010 Bluetooth 4.0 (Low Energy) was released with its main focus being reduced power consumption. Before Bluetooth 4.0, the majority of Bluetooth connections were two-way: both devices listen and talk to each other. Although this two-way communication is still possible with Bluetooth 4.0, one-way communication is also possible. One-way communication allows a Bluetooth device to transmit information but not listen for it. These one-way "beacons" do not require a paired connection like previous Bluetooth devices, so they have new useful applications. Bluetooth beacons operate using the Bluetooth 4.0 Low Energy standard, so battery-powered devices are possible. Battery life of devices varies depending on manufacturer. The Bluetooth LE protocol is significantly more power efficient than Bluetooth Classic. Several chipset makers, including Texas Instruments[8] and Nordic Semiconductor, now supply chipsets optimized for iBeacon use. Power consumption depends on the iBeacon configuration parameters of advertising interval and transmit power. Battery life can range between 1–48 months.
Apple's recommended setting of a 100 ms advertising interval with a coin cell battery provides for 1–3 months of life, which increases to 2–3 years as the advertising interval is increased to 900 ms.[9] Battery consumption of the phones is a factor that must be taken into account when deploying beacon-enabled apps. A recent report has shown that older phones tend to draw more battery power in the vicinity of iBeacons, while newer phones can be more efficient in the same environment.[10] In addition to the time spent by the phone scanning, the number of scans and the number of beacons in the vicinity are also significant factors for battery drain. An energy-efficient iBeacon application needs to consider these aspects in order to strike a good balance between app responsiveness and battery consumption. Bluetooth beacons can also come in the form of USB dongles. These small USB beacons can be powered by a standard USB port, which makes them ideal for long-term permanent installations. Bluetooth beacons can be used to send a packet of information that contains a universally unique identifier (UUID). This UUID is used to trigger events specific to that beacon. In the case of Apple's iBeacon, the UUID will be recognized by an app on the user's device that will trigger an event. This event is fully customizable by the app developer, but in the case of advertising the event might be a push notification with an ad. However, with a UID-based system the user's device must connect to an online server which is capable of understanding the beacon's UUID. Once the UUID is sent to the server, the appropriate message action is sent to the user's device. Other methods of advertising are also possible with beacons: URIBeacon and Google's Eddystone allow for a URI transmission mode that, unlike iBeacon's UID, doesn't require an outside server for recognition.
The URI beacons transmit a URI, which could be a link to a webpage, and the user will see that URI directly on their phone.[11] Beacons can be associated with the art pieces in a museum to encourage further interaction. For example, a notification can be sent to the user's mobile device when the user is in proximity to a particular art piece. The notification alerts the user to the nearby art piece, and if the user indicates further interest, a specific app can be installed to interact with it.[12] In general, a native app is needed for a mobile device to interact with the beacon if the beacon uses the iBeacon protocol; whereas if Eddystone is employed, the user can interact with the art piece through a physical web URL broadcast by the Eddystone. Indoor positioning with beacons falls into three categories: implementations with many beacons per room, implementations with one beacon per room, and implementations with a few beacons per building. Indoor navigation with Bluetooth is still in its infancy, but attempts have been made to find a working solution. With multiple beacons per room, trilateration can be used to estimate a user's position to within about 2 meters.[13] Bluetooth beacons are capable of transmitting their Received Signal Strength Indicator (RSSI) value in addition to other data. This RSSI value is calibrated by the manufacturer of the beacon to be the signal strength of the beacon at a known distance, typically one meter. Using the known output signal strength of the beacon and the signal strength observed by the receiving device, an approximation can be made of the distance between the beacon and the device. However, this approximation is not very reliable, so for more accurate position tracking other methods are preferred. Since its release in 2010, many studies have been conducted using Bluetooth beacons for tracking. A few methods have been tested to find the best way of combining RSSI values for tracking.
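The distance approximation described above is commonly written as a log-distance path-loss model; the following sketch shows the idea. The path-loss exponent of 2 assumes free-space propagation and is illustrative, as is the −59 dBm calibrated power; real indoor environments need a fitted exponent and still give only rough estimates.

```python
def estimate_distance(rssi, measured_power, path_loss_exponent=2.0):
    """Estimate beacon distance (meters) from an observed RSSI using the
    log-distance path-loss model.  `measured_power` is the calibrated RSSI
    at 1 m that the beacon advertises; both values are in dBm."""
    return 10 ** ((measured_power - rssi) / (10 * path_loss_exponent))

# At the calibration distance the observed RSSI equals the measured power:
assert estimate_distance(-59, -59) == 1.0
# With exponent 2, every 20 dB drop multiplies the estimated distance by 10:
assert estimate_distance(-79, -59) == 10.0
```

Because RSSI fluctuates with multipath fading, obstructions and device orientation, apps typically smooth many readings (or use the learning-based methods mentioned below) rather than trust a single estimate.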
Neural networks have been proposed as a good way of reducing the error in estimation.[13] A stigmergic approach has also been tested; this method uses an intensity map to estimate a user's location.[14] Bluetooth LE specification 5.1 added further, more precise methods for position determination using multiple beacons. With only one beacon per room, a user can use their known room position in conjunction with a virtual map of all the rooms in a building to navigate it. A building with many separate rooms may need a different beacon configuration for navigation. With one beacon in each room, a user can use an app to know the room they are in, and a simple shortest-path algorithm can be used to give them the best route to the room they are looking for. This configuration requires a digital map of the building, but attempts have been made to make this map creation easier.[15] Beacons can be used in conjunction with pedestrian dead reckoning (PDR) techniques to add checkpoints to a large open space.[16] PDR uses a known last location in conjunction with direction and speed information provided by the user to estimate a person's location. This technique can be used to estimate a person's location as they walk through a building. Using Bluetooth beacons as checkpoints, the user's location can be recalculated to reduce error. In this way a few Bluetooth beacons can be used to cover a large area like a mall. Using the device-tracking capabilities of Bluetooth beacons, in-home patient monitoring is possible. Using Bluetooth beacons, a person's movements and activities can be tracked in their home.[17] Bluetooth beacons are a good alternative to in-house cameras due to their increased level of privacy. Additionally, Bluetooth beacons can be used in hospitals or other workplaces to ensure workers meet certain standards. For example, a beacon may be placed at a hand sanitizer dispenser in a hospital; the beacons can help ensure employees are using the station regularly.
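The one-beacon-per-room navigation described above reduces to a shortest-path search over a room-adjacency graph: the beacon identifies the current room, and the digital map supplies the edges. A minimal sketch with a hypothetical floor plan (room names and layout are invented for illustration):

```python
from collections import deque

def shortest_route(floor_plan, start, goal):
    """Breadth-first search over a room-adjacency graph.  With one beacon
    per room the app knows `start`; BFS returns the fewest-rooms route
    to `goal`, or None if no route exists."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in floor_plan[path[-1]]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

# Hypothetical floor plan: rooms as nodes, doorways as edges.
plan = {"lobby": ["hall"], "hall": ["lobby", "101", "102"],
        "101": ["hall"], "102": ["hall", "103"], "103": ["102"]}
assert shortest_route(plan, "lobby", "103") == ["lobby", "hall", "102", "103"]
```

Breadth-first search suffices when every doorway counts equally; if corridors have different lengths, Dijkstra's algorithm with weighted edges is the natural replacement.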
One use of beacons is as a "key finder": a beacon is attached to, for example, a keyring, and a smartphone app can be used to track the last time the device came in range. Another similar use is to track pets, objects (e.g. baggage) or people. The precision and range of BLE don't match GPS, but beacons are significantly less expensive. Several commercial and free solutions exist, which are based on proximity detection, not precise positioning. For example, Nivea launched the "kid-tracker" campaign in Brazil back in 2014.[18] In mid-2013, Apple introduced iBeacons, and experts wrote about how it is designed to help the retail industry by simplifying payments and enabling on-site offers. On December 6, 2013, Apple activated iBeacons across its 254 US retail stores.[19] McDonald's has used the devices to give special offers to consumers in its fast-food stores.[6] As of May 2014, different hardware iBeacons could be purchased for as little as $5 per device to more than $30 per device.[20] These different iBeacons have varying default settings for transmit power and iBeacon advertisement frequency. Some hardware iBeacons advertise at as low as 1 Hz while others can be as fast as 10 Hz.[21] AltBeacon is an open source alternative to iBeacon created by Radius Networks.[22] URIBeacons differ from iBeacons and AltBeacons in that, rather than broadcasting an identifier, they send a URL which can be understood immediately.[22] Eddystone is Google's standard for Bluetooth beacons. It supports three types of packets: Eddystone-UID, Eddystone-URL, and Eddystone-TLM.[11] Eddystone-UID functions in a very similar way to Apple's iBeacon; however, it supports additional telemetry data with Eddystone-TLM. The telemetry information is sent along with the UID data.
The beacon information available includes battery voltage, beacon temperature, number of packets sent since last startup, and beacon uptime.[11] Using the Eddystone protocol, Google had built the now-discontinued[23] Google Nearby, which allowed Android users to receive beacon notifications without an app. Although the near-field communication (NFC) environment is very different and has many non-overlapping applications, it is still compared with iBeacons.
https://en.wikipedia.org/wiki/Bluetooth_low_energy_beacon
Hierarchical modulation, also called layered modulation, is one of the signal processing techniques for multiplexing and modulating multiple data streams into one single symbol stream, where base-layer symbols and enhancement-layer symbols are synchronously overlaid before transmission. Hierarchical modulation is particularly used to mitigate the cliff effect in digital television broadcast, particularly mobile TV, by providing a (lower quality) fallback signal in case of weak signals, allowing graceful degradation instead of complete signal loss. It has been widely proven and included in various standards, such as DVB-T, MediaFLO and UMB (Ultra Mobile Broadband, a 3.5th-generation mobile network standard developed by 3GPP2), and is under study for DVB-H. Hierarchical modulation is also taken as one of the practical implementations of superposition precoding, which can help achieve the maximum sum rate of broadcast channels. When hierarchically modulated signals are transmitted, users with good reception and advanced receivers can demodulate multiple layers. A user with a conventional receiver or poor reception may only demodulate the data stream embedded in the base layer. With hierarchical modulation, a network operator can target users of different types with different services or QoS. However, traditional hierarchical modulation suffers from serious inter-layer interference (ILI), with impact on the achievable symbol rate. For example, the figure depicts a layering scheme with a QPSK base layer and a 64QAM enhancement layer. The first layer is 2 bits (represented by the green circles). The signal detector only needs to establish which quadrant the signal is in to recover the value (which is '10', the green circle in the lower right corner). In better signal conditions, the detector can establish the phase and amplitude more precisely, to recover four more bits of data ('1101'). Thus, the base layer carries '10', and the enhancement layer carries '1101'.
For a hierarchically modulated symbol with a QPSK base layer and a 16QAM enhancement layer, the base-layer throughput loss is up to about 1.5 bits/symbol with the total receive signal-to-noise ratio (SNR) at about 23 dB, about the minimum needed for the comparable non-hierarchical modulation, 64QAM. But unlayered 16QAM with the same SNR would approach full throughput. This means, due to ILI, about a 1.5/4 = 37.5% loss of the base-layer achievable throughput. Furthermore, due to ILI and the imperfect demodulation of base-layer symbols, the demodulation error rate of higher-layer symbols increases too.
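The quadrant-based layering described above can be sketched as a toy constellation mapper. The amplitudes and bit mappings here are illustrative, not taken from any broadcast standard: two base bits pick the quadrant with a large offset, and four enhancement bits add a smaller 16QAM offset that never flips the quadrant, so a weak receiver can still decode the base layer from the quadrant alone.

```python
def modulate(base_bits, enh_bits, d=1.0):
    """Map 2 base bits + 4 enhancement bits to one complex symbol.
    Each base bit picks the sign on one axis (QPSK, coarse offset 8d);
    two enhancement bits per axis pick one of four fine offsets,
    giving 64 distinct points in total."""
    lvl = [-3, -1, 1, 3]                        # fine 16QAM offsets per axis
    i = (1 - 2 * base_bits[0]) * 8 * d + lvl[2 * enh_bits[0] + enh_bits[1]] * d
    q = (1 - 2 * base_bits[1]) * 8 * d + lvl[2 * enh_bits[2] + enh_bits[3]] * d
    return complex(i, q)

def demod_base(sym):
    """A weak receiver only decides the quadrant, recovering the base bits."""
    return (0 if sym.real > 0 else 1, 0 if sym.imag > 0 else 1)

sym = modulate((1, 0), (1, 1, 0, 1))
assert demod_base(sym) == (1, 0)   # base layer survives the coarse decision
```

Because the fine offset (at most 3d) is smaller than the coarse offset (8d), enhancement symbols cluster inside their quadrant; shrinking that ratio increases enhancement-layer capacity but worsens the inter-layer interference discussed above.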
https://en.wikipedia.org/wiki/Hierarchical_modulation
A lock is a mechanical or electronic fastening device that is released by a physical object (such as a key, keycard, fingerprint, RFID card, security token or coin), by supplying secret information (such as a number or letter permutation or password), by a combination thereof, or it may only be able to be opened from one side, such as a door chain. A key is a device that is used to operate a lock (to lock or unlock it). A typical key is a small piece of metal consisting of two parts: the bit or blade, which slides into the keyway of the lock and distinguishes between different keys, and the bow, which is left protruding so that torque can be applied by the user. In its simplest implementation, a key operates one lock or set of locks that are keyed alike, a lock/key system where each similarly keyed lock requires the same, unique key. The key serves as a security token for access to the locked area; locks are meant to only allow persons having the correct key to open it and gain access. In more complex mechanical lock/key systems, two different keys, one of which is known as the master key, serve to open the lock. Common metals include brass, plated brass, nickel silver, and steel. The act of opening a lock without a key is called lock picking. Locks have been in use for over 6000 years, with one early example discovered in the ruins of Nineveh, the capital of ancient Assyria.[1] Locks such as this were developed into the Egyptian wooden pin lock, which consisted of a bolt, door fixture or attachment, and key. When the key was inserted, pins within the fixture were lifted out of drilled holes within the bolt, allowing it to move. When the key was removed, the pins fell part-way into the bolt, preventing movement.[2] The warded lock was also present from antiquity and remains the most recognizable lock and key design in the Western world.
The first all-metal locks appeared between the years 870 and 900, and are attributed to English craftsmen.[3] It is also said that the key was invented by Theodorus of Samos in the 6th century BC.[1] The Romans invented metal locks and keys and the system of security provided by wards.[4] Affluent Romans often kept their valuables in secure locked boxes within their households, and wore the keys as rings on their fingers. The practice had two benefits: it kept the key handy at all times, while signaling that the wearer was wealthy and important enough to have money and jewellery worth securing.[5] A special type of lock, dating back to the 17th–18th century, although potentially older as similar locks date back to the 14th century, can be found in the Beguinage of the Belgian city Lier.[6][7] These locks are most likely Gothic locks, which were decorated with foliage, often in a V-shape surrounding the keyhole.[8] They are often called drunk man's locks, as these locks were, according to certain sources, designed in such a way that a person can still find the keyhole in the dark, although this might not be the case as the ornaments might have been purely aesthetic.[6][7] In more recent times similar locks have been designed.[9][10] With the onset of the Industrial Revolution in the late 18th century and the concomitant development of precision engineering and component standardization, locks and keys were manufactured with increasing complexity and sophistication.[11] The lever tumbler lock, which uses a set of levers to prevent the bolt from moving in the lock, was invented by Robert Barron in 1778.[12] His double-acting lever lock required the lever to be lifted to a certain height by having a slot cut in the lever, so lifting the lever too far was as bad as not lifting the lever far enough.
This type of lock is still used today.[13] The lever tumbler lock was greatly improved by Jeremiah Chubb in 1818.[12] A burglary in Portsmouth Dockyard prompted the British Government to announce a competition to produce a lock that could be opened only with its own key.[5] Chubb developed the Chubb detector lock, which incorporated an integral security feature that could frustrate unauthorized access attempts and would indicate to the lock's owner if it had been interfered with. Chubb was awarded £100 after a trained lock-picker failed to break the lock after 3 months.[14] In 1820, Jeremiah joined his brother Charles in starting their own lock company, Chubb. Chubb made various improvements to his lock: his 1824 improved design did not require a special regulator key to reset the lock; by 1847 his keys used six levers rather than four; and he later introduced a disc that allowed the key to pass but narrowed the field of view, hiding the levers from anybody attempting to pick the lock.[15] The Chubb brothers also received a patent for the first burglar-resisting safe and began production in 1835. The designs of Barron and Chubb were based on the use of movable levers, but Joseph Bramah, a prolific inventor, developed an alternative method in 1784. His lock used a cylindrical key with precise notches along the surface; these moved the metal slides that impeded the turning of the bolt into an exact alignment, allowing the lock to open. The lock was at the limits of the precision manufacturing capabilities of the time and was said by its inventor to be unpickable. In the same year Bramah started the Bramah Locks company at 124 Piccadilly, and displayed the "Challenge Lock" in the window of his shop from 1790, challenging "...the artist who can make an instrument that will pick or open this lock" for the reward of £200.
The challenge stood for over 67 years until, at the Great Exhibition of 1851, the American locksmith Alfred Charles Hobbs was able to open the lock and, following some argument about the circumstances under which he had opened it, was awarded the prize. Hobbs' attempt required some 51 hours, spread over 16 days. The earliest patent for a double-acting pin tumbler lock was granted to American physician Abraham O. Stansbury in England in 1805,[16] but the modern version, still in use today, was invented by American Linus Yale Sr. in 1848.[17] This lock design used pins of varying lengths to prevent the lock from opening without the correct key. In 1861, Linus Yale Jr. was inspired by the original 1840s pin-tumbler lock designed by his father, thus inventing and patenting a smaller flat key with serrated edges as well as pins of varying lengths within the lock itself, the same design of the pin-tumbler lock which still remains in use today.[18] The modern Yale lock is essentially a more developed version of the Egyptian lock. Despite some improvement in key design since, the majority of locks today are still variants of the designs invented by Bramah, Chubb and Yale. A warded lock uses a set of obstructions, or wards, to prevent the lock from opening unless the correct key is inserted. The key has notches or slots that correspond to the obstructions in the lock, allowing it to rotate freely inside the lock. Warded locks are typically reserved for low-security applications, as a well-designed skeleton key can successfully open a wide variety of warded locks. The pin tumbler lock uses a set of pins to prevent the lock from opening unless the correct key is inserted. The key has a series of grooves on either side of the key's blade that limit the type of lock the key can slide into. As the key slides into the lock, the horizontal grooves on the blade align with the wards in the keyway, allowing or denying entry to the cylinder.
A series of pointed teeth and notches on the blade, called bittings, then allow pins to move up and down until they are in line with the shear line of the inner and outer cylinder, allowing the cylinder or cam to rotate freely and the lock to open. An additional pin called the master pin is present between the key and driver pins in locks that accept master keys, to allow the plug to rotate at multiple pin elevations. A wafer tumbler lock is similar to the pin tumbler lock and works on a similar principle. However, unlike the pin lock (where each pin consists of two or more pieces), each wafer is a single piece. The wafer tumbler lock is often incorrectly referred to as a disc tumbler lock, which uses an entirely different mechanism. The wafer lock is relatively inexpensive to produce and is often used in automobiles and cabinetry. The disc tumbler lock or Abloy lock is composed of slotted rotating detainer discs. The lever tumbler lock uses a set of levers to prevent the bolt from moving in the lock. In its simplest form, lifting the tumbler above a certain height will allow the bolt to slide past. Lever locks are commonly recessed inside wooden doors or used on some older forms of padlocks, including fire brigade padlocks. A magnetic keyed lock is a locking mechanism whereby the key utilizes magnets as part of the locking and unlocking mechanism. A magnetic key would use from one to many small magnets oriented so that the north and south poles would equate to a combination to push or pull the lock's internal tumblers, thus releasing the lock. An electronic lock works by means of an electric current and is usually connected to an access control system. In addition to the pin and tumbler used in standard locks, electronic locks connect the bolt or cylinder to a motor within the door using a part called an actuator. Types of electronic locks include the following: A keycard lock operates with a flat card of similar dimensions as a credit card.
In order to open the door, one needs to successfully match the signature within the keycard. The lock in a typical remote keyless system operates with a smart key radio transmitter. The lock typically accepts a particular valid code only once, and the smart key transmits a different rolling code every time the button is pressed. Generally the car door can be opened with either a valid code by radio transmission, or with a (non-electronic) pin tumbler key. The ignition switch may require a transponder car key to both open a pin tumbler lock and also transmit a valid code by radio transmission. A smart lock is an electromechanical lock that gets instructions to lock and unlock the door from an authorized device, using a cryptographic key and wireless protocol. Smart locks have begun to be used more commonly in residential areas, often controlled with smartphones.[19][20] Smart locks are used in coworking spaces and offices to enable keyless office entry.[21] In addition, electronic locks cannot be picked with conventional tools. Locksmithing is a traditional trade, and in most countries requires completion of an apprenticeship. The level of formal education required varies from country to country, from no qualifications required at all in the UK,[22] to a simple training certificate awarded by an employer, to a full diploma from an engineering college. Locksmiths may be commercial (working out of a storefront), mobile (working out of a vehicle), institutional, or investigational (forensic locksmiths). They may specialize in one aspect of the skill, such as an automotive lock specialist, a master key system specialist or a safe technician. Many also act as security consultants, but not all security consultants have the skills and knowledge of a locksmith.[citation needed] Historically, locksmiths constructed or repaired an entire lock, including its constituent parts.
The rise of cheap mass production has made this less common; the vast majority of locks are repaired through like-for-like replacements, high-security safes and strongboxes being the most common exception. Many locksmiths also work on any existing door hardware, including door closers, hinges, electric strikes, and frame repairs, or service electronic locks by making keys for transponder-equipped vehicles and implementing access control systems. Although the fitting and replacement of keys remains an important part of locksmithing, modern locksmiths are primarily involved in the installation of high-quality lock-sets and the design, implementation, and management of keying and key control systems. Locksmiths are frequently required to determine the level of risk to an individual or institution and then recommend and implement appropriate combinations of equipment and policies to create a "security layer" that exceeds the reasonable gain of an intruder.[citation needed] Traditional key cutting is the primary method of key duplication. It is a subtractive process named after the metalworking process of cutting, in which a flat blank key is ground down to form the same shape as the template (original) key. The process roughly follows a number of stages. Modern key cutting replaces the mechanical key-following aspect with a process in which the original key is scanned electronically, processed by software, stored, and then used to guide a cutting wheel when a key is produced. The capability to store electronic copies of the key's shape allows key shapes to be stored for cutting by any party that has access to the key image. Different key cutting machines are more or less automated, using different milling or grinding equipment, and follow the design of early 20th-century key duplicators. Key duplication is available in many retail hardware stores and as a service of the specialized locksmith, though the correct key blank may not be available. 
More recently, online services for duplicating keys have become available. A keyhole (or keyway) is a hole or aperture (as in a door or lock) for receiving a key.[23] Lock keyway shapes vary widely with lock manufacturer, and many manufacturers have a number of unique profiles requiring a specifically milled key blank to engage the lock's tumblers. Keys appear in various symbols and coats of arms, the best-known being that of the Holy See:[24] derived from the phrase in Matthew 16:19 which promises Saint Peter, in Roman Catholic tradition the first pope, the Keys of Heaven. But this is by no means the only case. Some works of art associate keys with the Greek goddess of witchcraft, known as Hecate.[25] The Palestinian key is the Palestinian collective symbol of their homes lost in the Nakba, when more than half of the population of Mandatory Palestine was expelled or fled violence in 1948 and was subsequently refused the right to return.[26][27][28] Since 2016, a Palestinian restaurant in Doha, Qatar, has held the Guinness World Record for the world's largest key: 2.7 tonnes and 7.8 × 3 meters.[29][30]
https://en.wikipedia.org/wiki/Key_(lock)#Keycard
Post-structuralism is a philosophical movement that questions the objectivity or stability of the various interpretive structures that are posited by structuralism and considers them to be constituted by broader systems of power.[1] Although different post-structuralists present different critiques of structuralism, common themes include the rejection of the self-sufficiency of structuralism, as well as an interrogation of the binary oppositions that constitute its structures. Accordingly, post-structuralism discards the idea of interpreting media (or the world) within pre-established, socially constructed structures.[2][3][4][5] Structuralism proposes that human culture can be understood by means of a structure that is modeled on language. As a result, there is concrete reality on the one hand, abstract ideas about reality on the other hand, and a "third order" that mediates between the two.[6] A post-structuralist response, then, might suggest that in order to build meaning out of such an interpretation, one must (falsely) assume that the definitions of these signs are both valid and fixed, and that the author employing structuralist theory is somehow above and apart from the structures they are describing so as to be able to wholly appreciate them. The rigidity and tendency to categorize intimations of universal truths found in structuralist thinking is a common target of post-structuralist thought, which also builds upon structuralist conceptions of reality as mediated by the interrelationship between signs.[7] Writers whose works are often characterised as post-structuralist include Roland Barthes, Jacques Derrida, Michel Foucault, Gilles Deleuze, and Jean Baudrillard, although many theorists who have been called "post-structuralist" have rejected the label.[8] Post-structuralism emerged in France during the 1960s as a movement critiquing structuralism. According to J. G. 
Merquior, a love–hate relationship with structuralism developed among many leading French thinkers in the 1960s.[4] The period was marked by the rebellion of students and workers against the state in May 1968. In a 1966 lecture titled "Structure, Sign, and Play in the Discourse of the Human Sciences", Jacques Derrida presented a thesis on an apparent rupture in intellectual life. Derrida interpreted this event as a "decentering" of the former intellectual cosmos. Instead of progress or divergence from an identified centre, Derrida described this "event" as a kind of "play." A year later, in 1967, Roland Barthes published "The Death of the Author", in which he announced a metaphorical event: the "death" of the author as an authentic source of meaning for a given text. Barthes argued that any literary text has multiple meanings and that the author was not the prime source of the work's semantic content. The "Death of the Author," Barthes maintained, was the "Birth of the Reader," as the source of the proliferation of meanings of the text.[9] In Elements of Semiology (1967), Barthes advances the concept of the metalanguage, a systematized way of talking about concepts like meaning and grammar beyond the constraints of a traditional (first-order) language; in a metalanguage, symbols replace words and phrases. Insofar as one metalanguage is required for one explanation of the first-order language, another may be required, so metalanguages may actually replace first-order languages. Barthes exposes how this structuralist system is regressive; each order of language relies upon a metalanguage by which it is explained, and therefore deconstruction itself is in danger of becoming a metalanguage, thus exposing all languages and discourse to scrutiny. Barthes' other works contributed deconstructive theories about texts. 
The occasional designation of post-structuralism as a movement can be tied to the fact that mounting criticism of structuralism became evident at approximately the same time that structuralism became a topic of interest in universities in the United States. This interest led to a colloquium at Johns Hopkins University in 1966 titled "The Languages of Criticism and the Sciences of Man", to which such French philosophers as Jacques Derrida, Roland Barthes, and Jacques Lacan were invited to speak. Derrida's lecture at that conference, "Structure, Sign, and Play in the Human Sciences", was one of the earliest to propose some theoretical limitations to structuralism, and to attempt to theorize on terms that were clearly no longer structuralist. The element of "play" in the title of Derrida's essay is often erroneously interpreted in a linguistic sense, based on a general tendency towards puns and humour, while social constructionism as developed in the later work of Michel Foucault is said to create play in the sense of strategic agency by laying bare the levers of historical change. Structuralism, as an intellectual movement in France in the 1950s and 1960s, studied underlying structures in cultural products (such as texts) and used analytical concepts from linguistics, psychology, anthropology, and other fields to interpret those structures. Structuralism posits the concept of binary opposition, in which frequently used pairs of opposite-but-related words (concepts) are often arranged in a hierarchy; for example: Enlightenment/Romantic, male/female, speech/writing, rational/emotional, signified/signifier, symbolic/imaginary, and east/west. 
Post-structuralism rejects the structuralist notion that the dominant word in a pair is dependent on its subservient counterpart, and instead argues that founding knowledge on either pure experience (phenomenology) or on systematic structures (structuralism) is impossible,[10] because history and culture actually condition the study of underlying structures, and these are subject to biases and misinterpretations. Gilles Deleuze and others saw this impossibility not as a failure or loss, but rather as a cause for "celebration and liberation."[11] A post-structuralist approach argues that to understand an object (a text, for example), one must study both the object itself and the systems of knowledge that produced the object.[12] The uncertain boundaries between structuralism and post-structuralism are further blurred by the fact that scholars rarely label themselves as post-structuralists. Some scholars associated with structuralism, such as Roland Barthes and Michel Foucault, also became noteworthy in post-structuralism.[13] The following are often said to be post-structuralists, or to have had a post-structuralist period: Some observers from outside the post-structuralist camp have questioned the rigour and legitimacy of the field. American philosopher John Searle suggested in 1990: "The spread of 'poststructuralist' literary theory is perhaps the best-known example of a silly but non-catastrophic phenomenon."[45][46] Similarly, physicist Alan Sokal in 1997 criticized "the postmodernist/poststructuralist gibberish that is now hegemonic in some sectors of the American academy."[47] Literature scholar Norman Holland in 1992 saw post-structuralism as flawed due to its reliance on Saussure's linguistic model, which was seriously challenged by the 1950s and was soon abandoned by linguists: "Saussure's views are not held, so far as I know, by modern linguists, only by literary critics and the occasional philosopher. 
[Strict adherence to Saussure] has elicited wrong film and literary theory on a grand scale. One can find dozens of books of literary theory bogged down in signifiers and signifieds, but only a handful that refers to Chomsky."[48]
https://en.wikipedia.org/wiki/Post-structuralism
In digital circuits and machine learning, a one-hot is a group of bits among which the legal combinations of values are only those with a single high (1) bit and all the others low (0).[1] A similar implementation in which all bits are '1' except one '0' is sometimes called one-cold.[2] In statistics, dummy variables represent a similar technique for representing categorical data. One-hot encoding is often used for indicating the state of a state machine. When using binary, a decoder is needed to determine the state. A one-hot state machine, however, does not need a decoder, as the state machine is in the nth state if, and only if, the nth bit is high. A ring counter with 15 sequentially ordered states is an example of a state machine. A 'one-hot' implementation would have 15 flip-flops chained in series with the Q output of each flip-flop connected to the D input of the next, and the D input of the first flip-flop connected to the Q output of the 15th flip-flop. The first flip-flop in the chain represents the first state, the second represents the second state, and so on to the 15th flip-flop, which represents the last state. Upon reset of the state machine all of the flip-flops are reset to '0' except the first in the chain, which is set to '1'. The next clock edge arriving at the flip-flops advances the one 'hot' bit to the second flip-flop. The 'hot' bit advances in this way until the 15th state, after which the state machine returns to the first state. An address decoder converts from binary to one-hot representation. A priority encoder converts from one-hot representation to binary. In natural language processing, a one-hot vector is a 1 × N matrix (vector) used to distinguish each word in a vocabulary from every other word in the vocabulary.[5] The vector consists of 0s in all cells with the exception of a single 1 in a cell used uniquely to identify the word. One-hot encoding ensures that machine learning does not assume that higher numbers are more important. 
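The 15-state ring counter described above can be sketched in a few lines of Python (an illustrative software model of the flip-flop chain, not a hardware description):

```python
# One-hot ring counter: exactly one bit is '1', and each clock edge
# shifts it to the next flip-flop, wrapping from the 15th back to the 1st.

N_STATES = 15

def reset():
    # all flip-flops cleared except the first in the chain
    return [1] + [0] * (N_STATES - 1)

def clock(state):
    # each Q output feeds the next D input; the last Q feeds the first D
    return [state[-1]] + state[:-1]

state = reset()
for _ in range(N_STATES):
    assert sum(state) == 1  # the one-hot invariant holds on every cycle
    state = clock(state)
print(state == reset())  # True: after 15 clocks the counter is back in state 1
```

Decoding is trivial, as the text notes: the machine is in the nth state exactly when `state[n - 1] == 1`, with no decoder logic needed.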
For example, the value '8' is bigger than the value '1', but that does not make '8' more important than '1'. The same is true for words: the value 'laughter' is not more important than 'laugh'. In machine learning, one-hot encoding is a frequently used method to deal with categorical data. Because many machine learning models need their input variables to be numeric, categorical variables need to be transformed in the pre-processing stage.[6] Categorical data can be either nominal or ordinal.[7] Ordinal data has a ranked order for its values and can therefore be converted to numerical data through ordinal encoding.[8] An example of ordinal data would be the ratings on a test ranging from A to F, which could be ranked using numbers from 6 to 1. Since there is no quantitative relationship between nominal variables' individual values, using ordinal encoding can potentially create a fictional ordinal relationship in the data.[9] Therefore, one-hot encoding is often applied to nominal variables, in order to improve the performance of the algorithm. For each unique value in the original categorical column, a new column is created in this method. These dummy variables are then filled with zeros and ones (1 meaning TRUE, 0 meaning FALSE).[citation needed] Because this process creates multiple new variables, it is prone to creating a 'big p' problem (too many predictors) if there are many unique values in the original column. Another downside of one-hot encoding is that it causes multicollinearity between the individual variables, which potentially reduces the model's accuracy.[citation needed] Also, if the categorical variable is an output variable, the values may need to be converted back into categorical form in order to present them in an application.[10] In practical usage, this transformation is often directly performed by a function that takes categorical data as an input and outputs the corresponding dummy variables. 
An example would be the dummyVars function of the Caret library in R.[11]
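The column-per-category transformation described above can be sketched in pure Python (a minimal stand-in for tools like R's dummyVars or pandas.get_dummies; the column values are invented for illustration):

```python
# One-hot encode a nominal column: one new 0/1 column per unique category.

def one_hot(column):
    """Map each unique category to a list of 0/1 indicator values,
    one entry per row of the original column."""
    categories = sorted(set(column))
    return {cat: [1 if value == cat else 0 for value in column]
            for cat in categories}

colors = ["red", "green", "blue", "green"]
encoded = one_hot(colors)
print(encoded["green"])  # [0, 1, 0, 1]
```

Note that each row has exactly one '1' across the new columns, which is also the source of the multicollinearity mentioned above: any one dummy column is fully determined by the others.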
https://en.wikipedia.org/wiki/One-hot_encoding
In computing, the Java Secure Socket Extension (JSSE) is a Java API and a provider implementation named SunJSSE that enable secure Internet communications in the Java Runtime Environment. It implements a Java technology version of the Secure Sockets Layer (SSL) and Transport Layer Security (TLS) protocols. It includes functionality for data encryption,[1] server authentication, message integrity, and optional client authentication. JSSE was originally developed as an optional package for Java versions 1.2 and 1.3, but was added as a standard API and implementation in JDK 1.4.
https://en.wikipedia.org/wiki/Java_Secure_Socket_Extension
In railroading, slack action is the amount of free movement of one car before it transmits its motion to an adjoining coupled car. This free movement results from the fact that in railroad practice, cars are loosely coupled, and the coupling is often combined with a shock-absorbing device, a "draft gear", which, under stress, substantially increases the free movement as the train is started or stopped. Loose coupling is necessary to enable the train to bend around curves and is an aid in starting heavy trains, since the application of the locomotive power to the train operates on each car in the train successively, and the power is thus utilized to start only one car at a time. The UK formerly used three-link couplings, which allowed a large amount of slack. These were soon replaced on passenger stock by buffers and chain couplers, where the couplings are held tight by buffers and shortened by a turnbuckle, while in most other parts of the world automatic couplings, such as the Janney coupler and the Scharfenberg coupler, were adopted from the late nineteenth century on. Three-link couplings are a rarity in modern use.
https://en.wikipedia.org/wiki/Slack_action
Graf is a German comital title, which is part of many compound titles. Graf may also refer to:
https://en.wikipedia.org/wiki/Graf_(disambiguation)
This article lists mathematical properties and laws of sets, involving the set-theoretic operations of union, intersection, and complementation and the relations of set equality and set inclusion. It also provides systematic procedures for evaluating expressions, and performing calculations, involving these operations and relations.

The binary operations of set union (∪) and intersection (∩) satisfy many identities. Several of these identities or "laws" have well-established names.

Throughout this article, capital letters (such as A, B, C, L, M, R, S, and X) will denote sets. On the left hand side of an identity, L typically denotes the leftmost set, M the middle set, and R the rightmost set. This is to facilitate applying identities to expressions that are complicated or use the same symbols as the identity.[note 1] For example, the identity

(L ∖ M) ∖ R = (L ∖ R) ∖ (M ∖ R)

may be read as:

(Left set ∖ Middle set) ∖ Right set = (Left set ∖ Right set) ∖ (Middle set ∖ Right set).

For sets L and R, define:

L ∪ R := {x : x ∈ L or x ∈ R}
L ∩ R := {x : x ∈ L and x ∈ R}
L ∖ R := {x : x ∈ L and x ∉ R}

and

L △ R := {x : x belongs to exactly one of L and R},

where the symmetric difference L △ R is sometimes denoted by L ⊖ R and equals:[1][2]

L △ R = (L ∖ R) ∪ (R ∖ L) = (L ∪ R) ∖ (L ∩ R).

One set L is said to intersect another set R if L ∩ R ≠ ∅. Sets that do not intersect are said to be disjoint.

The power set of X is the set of all subsets of X and will be denoted by ℘(X) := {L : L ⊆ X}.

Universe set and complement notation: the notation L∁ := X ∖ L may be used if L is a subset of some set X that is understood (say from context, or because it is clearly stated what the superset X is). It is emphasized that the definition of L∁ depends on context. 
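These definitions are easy to check with Python's built-in set operators, which correspond directly to the four operations above (`|` for ∪, `&` for ∩, `-` for ∖, `^` for △; the example sets are arbitrary):

```python
# The four set operations on small examples, including both
# characterizations of the symmetric difference given above.
L = {1, 2, 3, 4}
R = {3, 4, 5}

assert L | R == {1, 2, 3, 4, 5}    # union
assert L & R == {3, 4}             # intersection
assert L - R == {1, 2}             # set subtraction
assert L ^ R == (L - R) | (R - L)  # L △ R = (L ∖ R) ∪ (R ∖ L)
assert L ^ R == (L | R) - (L & R)  # L △ R = (L ∪ R) ∖ (L ∩ R)
print(L ^ R)  # {1, 2, 5}
```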
For instance, had L been declared as a subset of Y, with the sets Y and X not necessarily related to each other in any way, then L∁ would likely mean Y ∖ L instead of X ∖ L.

If it is needed, then unless indicated otherwise it should be assumed that X denotes the universe set, which means that all sets that are used in the formula are subsets of X. In particular, the complement of a set L will be denoted by L∁, where unless indicated otherwise it should be assumed that L∁ denotes the complement of L in (the universe) X.

Assume L ⊆ X.

Identity:[3]

Definition: e is called a left identity element of a binary operator ∗ if e ∗ R = R for all R, and it is called a right identity element of ∗ if L ∗ e = L for all L. A left identity element that is also a right identity element is called an identity element. 
The empty set ∅ is an identity element of binary union ∪ and symmetric difference △, and it is also a right identity element of set subtraction ∖:

L ∩ X = L = X ∩ L  where L ⊆ X
L ∪ ∅ = L = ∅ ∪ L
L △ ∅ = L = ∅ △ L
L ∖ ∅ = L

but ∅ is not a left identity element of ∖, since ∅ ∖ L = ∅, so ∅ ∖ L = L if and only if L = ∅.

Idempotence (L ∗ L = L) and nilpotence (L ∗ L = ∅):[3]

L ∪ L = L  (idempotence)
L ∩ L = L  (idempotence)
L △ L = ∅  (nilpotence of index 2)
L ∖ L = ∅  (nilpotence of index 2)

Domination[3]/absorbing element:

Definition: z is called a left absorbing element of a binary operator ∗ if z ∗ R = z for all R, and it is called a right absorbing element of ∗ if L ∗ z = z for all L. A left absorbing element that is also a right absorbing element is called an absorbing element. Absorbing elements are also sometimes called annihilating elements or zero elements. 
A universe set is an absorbing element of binary union ∪. The empty set ∅ is an absorbing element of binary intersection ∩ and binary Cartesian product ×, and it is also a left absorbing element of set subtraction ∖:

X ∪ L = X = L ∪ X  where L ⊆ X
∅ ∩ L = ∅ = L ∩ ∅
∅ × L = ∅ = L × ∅
∅ ∖ L = ∅

but ∅ is not a right absorbing element of set subtraction, since L ∖ ∅ = L, where L ∖ ∅ = ∅ if and only if L = ∅.

Double complement or involution law:

X ∖ (X ∖ L) = L    also written (L∁)∁ = L,  where L ⊆ X  (double complement/involution law)

L ∖ ∅ = L

∅ = L ∖ L = ∅ ∖ L = L ∖ X  where L ⊆ X[3]

L∁ = X ∖ L  (definition of notation)

L ∪ (X ∖ L) = X    also written L ∪ L∁ = X,  where L ⊆ X
L △ (X ∖ L) = X    also written L △ L∁ = X,  where L ⊆ X
L ∩ (X ∖ L) = ∅    also written L ∩ L∁ = ∅[3]

X ∖ ∅ = X    also written ∅∁ = X  (complement law for the empty set)
X ∖ X = ∅    also written X∁ = ∅  (complement law for the universe set)

In the left hand sides of the following identities, L is the leftmost set and R is the rightmost set. Assume both L and R are subsets of some universe set X. 
Whenever necessary, both L and R should be assumed to be subsets of some universe set X, so that L∁ := X ∖ L and R∁ := X ∖ R.

L ∩ R = L ∖ (L ∖ R)
      = R ∖ (R ∖ L)
      = L ∖ (L △ R)
      = L △ (L ∖ R)

L ∪ R = (L △ R) ∪ L
      = (L △ R) △ (L ∩ R)
      = (R ∖ L) ∪ L  (union is disjoint)

L △ R = R △ L
      = (L ∪ R) ∖ (L ∩ R)
      = (L ∖ R) ∪ (R ∖ L)  (union is disjoint)
      = (L △ M) △ (M △ R)  where M is an arbitrary set
      = L∁ △ R∁

L ∖ R = L ∖ (L ∩ R)
      = L ∩ (L △ R)
      = L △ (L ∩ R)
      = R △ (L ∪ R)

De Morgan's laws state that for L, R ⊆ X:

X ∖ (L ∩ R) = (X ∖ L) ∪ (X ∖ R)    also written (L ∩ R)∁ = L∁ ∪ R∁  (De Morgan's law)
X ∖ (L ∪ R) = (X ∖ L) ∩ (X ∖ R)    also written (L ∪ R)∁ = L∁ ∩ R∁  (De Morgan's law)

Union, intersection, and symmetric difference are commutative operations:[3]

L ∪ R = R ∪ L  (commutativity)
L ∩ R = R ∩ L  (commutativity)
L △ R = R △ L  (commutativity)

Set subtraction is not commutative. 
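De Morgan's laws can be verified exhaustively over a small universe with a short Python script (a brute-force check over every pair of subsets, which illustrates the laws but is of course not a proof for arbitrary sets):

```python
# Brute-force check of De Morgan's laws over all pairs of subsets of a
# small universe, enumerating the power set with itertools.
from itertools import chain, combinations

def power_set(s):
    s = list(s)
    return [set(c) for c in
            chain.from_iterable(combinations(s, k) for k in range(len(s) + 1))]

X = {0, 1, 2}
for L in power_set(X):
    for R in power_set(X):
        assert X - (L & R) == (X - L) | (X - R)  # (L ∩ R)∁ = L∁ ∪ R∁
        assert X - (L | R) == (X - L) & (X - R)  # (L ∪ R)∁ = L∁ ∩ R∁
print("De Morgan's laws hold for all", len(power_set(X)) ** 2, "pairs")
```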
However, the commutativity of set subtraction can be characterized: from (L ∖ R) ∩ (R ∖ L) = ∅ it follows that:

L ∖ R = R ∖ L  if and only if  L = R.

Said differently, if distinct symbols always represented distinct sets, then the only true formulas of the form · ∖ · = · ∖ · that could be written would be those involving a single symbol; that is, those of the form: S ∖ S = S ∖ S. But such formulas are necessarily true for every binary operation ∗ (because x ∗ x = x ∗ x must hold by definition of equality), and so in this sense, set subtraction is as diametrically opposite to being commutative as is possible for a binary operation. Set subtraction is also neither left alternative nor right alternative; instead, (L ∖ L) ∖ R = L ∖ (L ∖ R) if and only if L ∩ R = ∅, if and only if (R ∖ L) ∖ L = R ∖ (L ∖ L). Set subtraction is quasi-commutative and satisfies the Jordan identity. 
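The characterization above can likewise be confirmed by brute force over a small universe (again an exhaustive check for illustration, not a proof):

```python
# Check that L ∖ R = R ∖ L holds exactly when L = R, over all pairs of
# subsets of a three-element universe.
from itertools import chain, combinations

def power_set(s):
    s = list(s)
    return [set(c) for c in
            chain.from_iterable(combinations(s, k) for k in range(len(s) + 1))]

X = {0, 1, 2}
for L in power_set(X):
    for R in power_set(X):
        # commutativity of set subtraction is equivalent to equality
        assert (L - R == R - L) == (L == R)
print("set subtraction commutes only when L == R")
```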
Absorption laws:

L ∪ (L ∩ R) = L  (absorption)
L ∩ (L ∪ R) = L  (absorption)

Other properties:

L ∖ R = L ∩ (X ∖ R)          also written L ∖ R = L ∩ R∁,  where L, R ⊆ X
X ∖ (L ∖ R) = (X ∖ L) ∪ R    also written (L ∖ R)∁ = L∁ ∪ R,  where R ⊆ X
L ∖ R = (X ∖ R) ∖ (X ∖ L)    also written L ∖ R = R∁ ∖ L∁,  where L, R ⊆ X

Intervals:

(a, b) ∩ (c, d) = (max{a, c}, min{b, d})
[a, b) ∩ [c, d) = [max{a, c}, min{b, d})

The following statements are equivalent for any L, R ⊆ X:[3]

The following statements are equivalent for any L, R ⊆ X:

The following statements are equivalent:

A set L is empty if the sentence ∀x (x ∉ L) is true, where the notation x ∉ L is shorthand for ¬(x ∈ L).

If L is any set then the following are equivalent:

If L is any set then the following are equivalent:

Given any x, the following are equivalent:

Moreover, (L ∖ R) ∩ R = ∅ always holds.

Inclusion is a partial order: explicitly, this means that inclusion ⊆, which is a binary relation, has the following three properties:[3]

The following proposition says that for any set S, the power set of S, ordered by inclusion, is a bounded lattice, and hence together with the distributive and complement laws above, shows that it is a Boolean algebra.

Existence of a least element and a greatest element: ∅ ⊆ L ⊆ X

Joins/supremums exist:[3] L ⊆ L ∪ R

The union L ∪ R is the join/supremum of L and R with respect to ⊆. The intersection L ∩ R is the join/supremum of L and R with respect to ⊇.

Meets/infimums exist:[3] L ∩ R ⊆ L

The intersection L ∩ R is the meet/infimum of L and R with respect to ⊆. The union L ∪ R is the meet/infimum of L and R with respect to ⊇.

Other inclusion properties:

L ∖ R ⊆ L
(L ∖ R) ∩ L = L ∖ R

In the left hand sides of the following identities, L is the leftmost set, M the middle set, and R the rightmost set.

There is no universal agreement on the order of precedence of the basic set operators. Nevertheless, many authors use precedence rules for set operators, although these rules vary with the author. 
One common convention is to associate intersectionL∩R={x:(x∈L)∧(x∈R)}{\displaystyle L\cap R=\{x:(x\in L)\land (x\in R)\}}withlogical conjunction (and)L∧R{\displaystyle L\land R}and associate unionL∪R={x:(x∈L)∨(x∈R)}{\displaystyle L\cup R=\{x:(x\in L)\lor (x\in R)\}}withlogical disjunction (or)L∨R,{\displaystyle L\lor R,}and then transfer theprecedence of these logical operators(where∧{\displaystyle \,\land \,}has precedence over∨{\displaystyle \,\lor \,}) to these set operators, thereby giving∩{\displaystyle \,\cap \,}precedence over∪.{\displaystyle \,\cup .\,}So for example,L∪M∩R{\displaystyle L\cup M\cap R}would meanL∪(M∩R){\displaystyle L\cup (M\cap R)}since it would be associated with the logical statementL∨M∧R=L∨(M∧R){\displaystyle L\lor M\land R~=~L\lor (M\land R)}and similarly,L∪M∩R∪Z{\displaystyle L\cup M\cap R\cup Z}would meanL∪(M∩R)∪Z{\displaystyle L\cup (M\cap R)\cup Z}since it would be associated withL∨M∧R∨Z=L∨(M∧R)∨Z.{\displaystyle L\lor M\land R\lor Z~=~L\lor (M\land R)\lor Z.} Sometimes, set complement (subtraction)∖{\displaystyle \,\setminus \,}is also associated withlogical complement (not)¬,{\displaystyle \,\lnot ,\,}in which case it will have the highest precedence. 
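Python's built-in set operators happen to implement this convention: `&` (intersection) binds more tightly than `|` (union), and `-` (set difference) binds more tightly than either. A quick illustrative sketch with arbitrary example sets:

```python
L, M, R = {1, 2}, {2, 3}, {3, 4}

# In Python, & (intersection) binds more tightly than | (union),
# mirroring the convention that gives ∩ precedence over ∪:
assert L | M & R == L | (M & R)
assert L | M & R == {1, 2, 3}

# Explicit parentheses give a genuinely different set:
assert (L | M) & R == {3}

# Python also gives set difference (-) the highest precedence of the four,
# matching the convention that \ binds most tightly:
assert L - M & R == (L - M) & R
```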
More specifically,L∖R={x:(x∈L)∧¬(x∈R)}{\displaystyle L\setminus R=\{x:(x\in L)\land \lnot (x\in R)\}}is rewrittenL∧¬R{\displaystyle L\land \lnot R}so that for example,L∪M∖R{\displaystyle L\cup M\setminus R}would meanL∪(M∖R){\displaystyle L\cup (M\setminus R)}since it would be rewritten as the logical statementL∨M∧¬R{\displaystyle L\lor M\land \lnot R}which is equal toL∨(M∧¬R).{\displaystyle L\lor (M\land \lnot R).}For another example, becauseL∧¬M∧R{\displaystyle L\land \lnot M\land R}meansL∧(¬M)∧R,{\displaystyle L\land (\lnot M)\land R,}which is equal to both(L∧(¬M))∧R{\displaystyle (L\land (\lnot M))\land R}andL∧((¬M)∧R)=L∧(R∧(¬M)){\displaystyle L\land ((\lnot M)\land R)~=~L\land (R\land (\lnot M))}(where(¬M)∧R{\displaystyle (\lnot M)\land R}was rewritten asR∧(¬M){\displaystyle R\land (\lnot M)}), the formulaL∖M∩R{\displaystyle L\setminus M\cap R}would refer to the set(L∖M)∩R=L∩(R∖M);{\displaystyle (L\setminus M)\cap R=L\cap (R\setminus M);}moreover, sinceL∧(¬M)∧R=(L∧R)∧¬M,{\displaystyle L\land (\lnot M)\land R=(L\land R)\land \lnot M,}this set is also equal to(L∩R)∖M{\displaystyle (L\cap R)\setminus M}(other set identities can similarly be deduced frompropositional calculusidentitiesin this way). However, because set subtraction is not associative(L∖M)∖R≠L∖(M∖R),{\displaystyle (L\setminus M)\setminus R\neq L\setminus (M\setminus R),}a formula such asL∖M∖R{\displaystyle L\setminus M\setminus R}would be ambiguous; for this reason, among others, set subtraction is often not assigned any precedence at all. 
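The several descriptions above of the set denoted by L ∖ M ∩ R, and the ambiguity of L ∖ M ∖ R, can be spot-checked on concrete sets (an illustrative sketch; the particular sets are arbitrary):

```python
L, M, R = {1, 2, 3, 4}, {2, 4}, {3, 4, 5}

# Under the convention, L \ M ∩ R denotes (L \ M) ∩ R, and the text's
# three descriptions of that set agree:
assert (L - M) & R == (L & R) - M == L & (R - M) == {3}

# Set subtraction is not associative, so L \ M \ R alone is ambiguous:
assert (L - M) - R == {1}
assert L - (M - R) == {1, 3, 4}
```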
Symmetric differenceL△R={x:(x∈L)⊕(x∈R)}{\displaystyle L\triangle R=\{x:(x\in L)\oplus (x\in R)\}}is sometimes associated withexclusive or (xor)L⊕R{\displaystyle L\oplus R}(also sometimes denoted by⊻{\displaystyle \,\veebar }), in which case if the order of precedence from highest to lowest is¬,⊕,∧,∨{\displaystyle \,\lnot ,\,\oplus ,\,\land ,\,\lor \,}then the order of precedence (from highest to lowest) for the set operators would be∖,△,∩,∪.{\displaystyle \,\setminus ,\,\triangle ,\,\cap ,\,\cup .}There is no universal agreement on the precedence of exclusive disjunction⊕{\displaystyle \,\oplus \,}with respect to the other logical connectives, which is why symmetric difference△{\displaystyle \,\triangle \,}is not often assigned a precedence. Definition: Abinary operator∗{\displaystyle \,\ast \,}is calledassociativeif(L∗M)∗R=L∗(M∗R){\displaystyle (L\,\ast \,M)\,\ast \,R=L\,\ast \,(M\,\ast \,R)}always holds. The following set operators are associative:[3] (L∪M)∪R=L∪(M∪R)(L∩M)∩R=L∩(M∩R)(L△M)△R=L△(M△R){\displaystyle {\begin{alignedat}{5}(L\cup M)\cup R&\;=\;\;&&L\cup (M\cup R)\\[1.4ex](L\cap M)\cap R&\;=\;\;&&L\cap (M\cap R)\\[1.4ex](L\,\triangle M)\,\triangle R&\;=\;\;&&L\,\triangle (M\,\triangle R)\\[1.4ex]\end{alignedat}}} For set subtraction, instead of associativity, only the following is always guaranteed:(L∖M)∖R⊆L∖(M∖R){\displaystyle (L\,\setminus \,M)\,\setminus \,R\;~~{\color {red}{\subseteq }}~~\;L\,\setminus \,(M\,\setminus \,R)}where equality holds if and only ifL∩R=∅{\displaystyle L\cap R=\varnothing }(this condition does not depend onM{\displaystyle M}). Thus(L∖M)∖R=L∖(M∖R){\textstyle \;(L\setminus M)\setminus R=L\setminus (M\setminus R)\;}if and only if(R∖M)∖L=R∖(M∖L),{\displaystyle \;(R\setminus M)\setminus L=R\setminus (M\setminus L),\;}where the only difference between the left and right hand side set equalities is that the locations ofLandR{\displaystyle L{\text{ and }}R}have been swapped. 
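Both the associativity laws and the stated equality condition for set subtraction can be verified exhaustively over all subsets of a small universe (a brute-force sketch; the 4-element universe is an arbitrary choice):

```python
from itertools import combinations

universe = range(4)
subsets = [set(c) for r in range(5) for c in combinations(universe, r)]

for L in subsets:
    for M in subsets:
        for R in subsets:
            assert (L | M) | R == L | (M | R)   # union is associative
            assert (L & M) & R == L & (M & R)   # intersection is associative
            assert (L ^ M) ^ R == L ^ (M ^ R)   # symmetric difference is associative
            # For subtraction only the containment is guaranteed ...
            assert (L - M) - R <= L - (M - R)
            # ... with equality exactly when L and R are disjoint
            # (the condition does not involve M):
            assert ((L - M) - R == L - (M - R)) == (not (L & R))

print("checked all", len(subsets) ** 3, "triples")
```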
Definition: If∗and∙{\displaystyle \ast {\text{ and }}\bullet }arebinary operatorsthen∗{\displaystyle \,\ast \,}left distributesover∙{\displaystyle \,\bullet \,}ifL∗(M∙R)=(L∗M)∙(L∗R)for allL,M,R{\displaystyle L\,\ast \,(M\,\bullet \,R)~=~(L\,\ast \,M)\,\bullet \,(L\,\ast \,R)\qquad \qquad {\text{ for all }}L,M,R}while∗{\displaystyle \,\ast \,}right distributesover∙{\displaystyle \,\bullet \,}if(L∙M)∗R=(L∗R)∙(M∗R)for allL,M,R.{\displaystyle (L\,\bullet \,M)\,\ast \,R~=~(L\,\ast \,R)\,\bullet \,(M\,\ast \,R)\qquad \qquad {\text{ for all }}L,M,R.}The operator∗{\displaystyle \,\ast \,}distributesover∙{\displaystyle \,\bullet \,}if it both left distributes and right distributes over∙.{\displaystyle \,\bullet \,.\,}In the definitions above, to transform one side to the other, the innermost operator (the operator inside the parentheses) becomes the outermost operator and the outermost operator becomes the innermost operator. Right distributivity:[3] (L∩M)∪R=(L∪R)∩(M∪R)(Right-distributivity of∪over∩)(L∪M)∪R=(L∪R)∪(M∪R)(Right-distributivity of∪over∪)(L∪M)∩R=(L∩R)∪(M∩R)(Right-distributivity of∩over∪)(L∩M)∩R=(L∩R)∩(M∩R)(Right-distributivity of∩over∩)(L△M)∩R=(L∩R)△(M∩R)(Right-distributivity of∩over△)(L∩M)×R=(L×R)∩(M×R)(Right-distributivity of×over∩)(L∪M)×R=(L×R)∪(M×R)(Right-distributivity of×over∪)(L∖M)×R=(L×R)∖(M×R)(Right-distributivity of×over∖)(L△M)×R=(L×R)△(M×R)(Right-distributivity of×over△)(L∪M)∖R=(L∖R)∪(M∖R)(Right-distributivity of∖over∪)(L∩M)∖R=(L∖R)∩(M∖R)(Right-distributivity of∖over∩)(L△M)∖R=(L∖R)△(M∖R)(Right-distributivity of∖over△)(L∖M)∖R=(L∖R)∖(M∖R)(Right-distributivity of∖over∖)=L∖(M∪R){\displaystyle {\begin{alignedat}{9}(L\,\cap \,M)\,\cup \,R~&~~=~~&&(L\,\cup \,R)\,&&\cap \,&&(M\,\cup \,R)\qquad &&{\text{ (Right-distributivity of }}\,\cup \,{\text{ over }}\,\cap \,{\text{)}}\\[1.4ex](L\,\cup \,M)\,\cup \,R~&~~=~~&&(L\,\cup \,R)\,&&\cup \,&&(M\,\cup \,R)\qquad &&{\text{ (Right-distributivity of }}\,\cup \,{\text{ over }}\,\cup \,{\text{)}}\\[1.4ex](L\,\cup 
\,M)\,\cap \,R~&~~=~~&&(L\,\cap \,R)\,&&\cup \,&&(M\,\cap \,R)\qquad &&{\text{ (Right-distributivity of }}\,\cap \,{\text{ over }}\,\cup \,{\text{)}}\\[1.4ex](L\,\cap \,M)\,\cap \,R~&~~=~~&&(L\,\cap \,R)\,&&\cap \,&&(M\,\cap \,R)\qquad &&{\text{ (Right-distributivity of }}\,\cap \,{\text{ over }}\,\cap \,{\text{)}}\\[1.4ex](L\,\triangle \,M)\,\cap \,R~&~~=~~&&(L\,\cap \,R)\,&&\triangle \,&&(M\,\cap \,R)\qquad &&{\text{ (Right-distributivity of }}\,\cap \,{\text{ over }}\,\triangle \,{\text{)}}\\[1.4ex](L\,\cap \,M)\,\times \,R~&~~=~~&&(L\,\times \,R)\,&&\cap \,&&(M\,\times \,R)\qquad &&{\text{ (Right-distributivity of }}\,\times \,{\text{ over }}\,\cap \,{\text{)}}\\[1.4ex](L\,\cup \,M)\,\times \,R~&~~=~~&&(L\,\times \,R)\,&&\cup \,&&(M\,\times \,R)\qquad &&{\text{ (Right-distributivity of }}\,\times \,{\text{ over }}\,\cup \,{\text{)}}\\[1.4ex](L\,\setminus \,M)\,\times \,R~&~~=~~&&(L\,\times \,R)\,&&\setminus \,&&(M\,\times \,R)\qquad &&{\text{ (Right-distributivity of }}\,\times \,{\text{ over }}\,\setminus \,{\text{)}}\\[1.4ex](L\,\triangle \,M)\,\times \,R~&~~=~~&&(L\,\times \,R)\,&&\triangle \,&&(M\,\times \,R)\qquad &&{\text{ (Right-distributivity of }}\,\times \,{\text{ over }}\,\triangle \,{\text{)}}\\[1.4ex](L\,\cup \,M)\,\setminus \,R~&~~=~~&&(L\,\setminus \,R)\,&&\cup \,&&(M\,\setminus \,R)\qquad &&{\text{ (Right-distributivity of }}\,\setminus \,{\text{ over }}\,\cup \,{\text{)}}\\[1.4ex](L\,\cap \,M)\,\setminus \,R~&~~=~~&&(L\,\setminus \,R)\,&&\cap \,&&(M\,\setminus \,R)\qquad &&{\text{ (Right-distributivity of }}\,\setminus \,{\text{ over }}\,\cap \,{\text{)}}\\[1.4ex](L\,\triangle \,M)\,\setminus \,R~&~~=~~&&(L\,\setminus \,R)&&\,\triangle \,&&(M\,\setminus \,R)\qquad &&{\text{ (Right-distributivity of }}\,\setminus \,{\text{ over }}\,\triangle \,{\text{)}}\\[1.4ex](L\,\setminus \,M)\,\setminus \,R~&~~=~~&&(L\,\setminus \,R)&&\,\setminus \,&&(M\,\setminus \,R)\qquad &&{\text{ (Right-distributivity of }}\,\setminus \,{\text{ over }}\,\setminus 
\,{\text{)}}\\[1.4ex]~&~~=~~&&~~\;~~\;~~\;~L&&\,\setminus \,&&(M\cup R)\\[1.4ex]\end{alignedat}}} Left distributivity:[3] L∪(M∩R)=(L∪M)∩(L∪R)(Left-distributivity of∪over∩)L∪(M∪R)=(L∪M)∪(L∪R)(Left-distributivity of∪over∪)L∩(M∪R)=(L∩M)∪(L∩R)(Left-distributivity of∩over∪)L∩(M∩R)=(L∩M)∩(L∩R)(Left-distributivity of∩over∩)L∩(M△R)=(L∩M)△(L∩R)(Left-distributivity of∩over△)L×(M∩R)=(L×M)∩(L×R)(Left-distributivity of×over∩)L×(M∪R)=(L×M)∪(L×R)(Left-distributivity of×over∪)L×(M∖R)=(L×M)∖(L×R)(Left-distributivity of×over∖)L×(M△R)=(L×M)△(L×R)(Left-distributivity of×over△){\displaystyle {\begin{alignedat}{5}L\cup (M\cap R)&\;=\;\;&&(L\cup M)\cap (L\cup R)\qquad &&{\text{ (Left-distributivity of }}\,\cup \,{\text{ over }}\,\cap \,{\text{)}}\\[1.4ex]L\cup (M\cup R)&\;=\;\;&&(L\cup M)\cup (L\cup R)&&{\text{ (Left-distributivity of }}\,\cup \,{\text{ over }}\,\cup \,{\text{)}}\\[1.4ex]L\cap (M\cup R)&\;=\;\;&&(L\cap M)\cup (L\cap R)&&{\text{ (Left-distributivity of }}\,\cap \,{\text{ over }}\,\cup \,{\text{)}}\\[1.4ex]L\cap (M\cap R)&\;=\;\;&&(L\cap M)\cap (L\cap R)&&{\text{ (Left-distributivity of }}\,\cap \,{\text{ over }}\,\cap \,{\text{)}}\\[1.4ex]L\cap (M\,\triangle \,R)&\;=\;\;&&(L\cap M)\,\triangle \,(L\cap R)&&{\text{ (Left-distributivity of }}\,\cap \,{\text{ over }}\,\triangle \,{\text{)}}\\[1.4ex]L\times (M\cap R)&\;=\;\;&&(L\times M)\cap (L\times R)&&{\text{ (Left-distributivity of }}\,\times \,{\text{ over }}\,\cap \,{\text{)}}\\[1.4ex]L\times (M\cup R)&\;=\;\;&&(L\times M)\cup (L\times R)&&{\text{ (Left-distributivity of }}\,\times \,{\text{ over }}\,\cup \,{\text{)}}\\[1.4ex]L\times (M\,\setminus R)&\;=\;\;&&(L\times M)\,\setminus (L\times R)&&{\text{ (Left-distributivity of }}\,\times \,{\text{ over }}\,\setminus \,{\text{)}}\\[1.4ex]L\times (M\,\triangle R)&\;=\;\;&&(L\times M)\,\triangle (L\times R)&&{\text{ (Left-distributivity of }}\,\times \,{\text{ over }}\,\triangle \,{\text{)}}\\[1.4ex]\end{alignedat}}} Intersection distributes over symmetric 
difference:L∩(M△R)=(L∩M)△(L∩R){\displaystyle {\begin{alignedat}{5}L\,\cap \,(M\,\triangle \,R)~&~~=~~&&(L\,\cap \,M)\,\triangle \,(L\,\cap \,R)~&&~\\[1.4ex]\end{alignedat}}}(L△M)∩R=(L∩R)△(M∩R){\displaystyle {\begin{alignedat}{5}(L\,\triangle \,M)\,\cap \,R~&~~=~~&&(L\,\cap \,R)\,\triangle \,(M\,\cap \,R)~&&~\\[1.4ex]\end{alignedat}}} Union does not distribute over symmetric difference because only the following is guaranteed in general:L∪(M△R)⊇(L∪M)△(L∪R)=(M△R)∖L=(M∖L)△(R∖L){\displaystyle {\begin{alignedat}{5}L\cup (M\,\triangle \,R)~~{\color {red}{\supseteq }}~~\color {black}{\,}(L\cup M)\,\triangle \,(L\cup R)~&~=~&&(M\,\triangle \,R)\,\setminus \,L&~=~&&(M\,\setminus \,L)\,\triangle \,(R\,\setminus \,L)\\[1.4ex]\end{alignedat}}} Symmetric difference does not distribute over itself:L△(M△R)≠(L△M)△(L△R)=M△R{\displaystyle L\,\triangle \,(M\,\triangle \,R)~~{\color {red}{\neq }}~~\color {black}{\,}(L\,\triangle \,M)\,\triangle \,(L\,\triangle \,R)~=~M\,\triangle \,R}and in general, for any setsLandA{\displaystyle L{\text{ and }}A}(whereA{\displaystyle A}representsM△R{\displaystyle M\,\triangle \,R}),L△A{\displaystyle L\,\triangle \,A}might not be a subset, nor a superset, ofL{\displaystyle L}(and the same is true forA{\displaystyle A}). Failure of set subtraction to left distribute: Set subtraction isrightdistributive over itself. 
However, set subtraction isnotleft distributive over itself because only the following is guaranteed in general:L∖(M∖R)⊇(L∖M)∖(L∖R)=L∩R∖M{\displaystyle {\begin{alignedat}{5}L\,\setminus \,(M\,\setminus \,R)&~~{\color {red}{\supseteq }}~~&&\color {black}{\,}(L\,\setminus \,M)\,\setminus \,(L\,\setminus \,R)~~=~~L\cap R\,\setminus \,M\\[1.4ex]\end{alignedat}}}where equality holds if and only ifL∖M=L∩R,{\displaystyle L\,\setminus \,M=L\,\cap \,R,}which happens if and only ifL∩M∩R=∅andL∖M⊆R.{\displaystyle L\cap M\cap R=\varnothing {\text{ and }}L\setminus M\subseteq R.} For symmetric difference, the setsL∖(M△R){\displaystyle L\,\setminus \,(M\,\triangle \,R)}and(L∖M)△(L∖R)=L∩(M△R){\displaystyle (L\,\setminus \,M)\,\triangle \,(L\,\setminus \,R)=L\,\cap \,(M\,\triangle \,R)}are always disjoint. So these two sets are equal if and only if they are both equal to∅.{\displaystyle \varnothing .}Moreover,L∖(M△R)=∅{\displaystyle L\,\setminus \,(M\,\triangle \,R)=\varnothing }if and only ifL∩M∩R=∅andL⊆M∪R.{\displaystyle L\cap M\cap R=\varnothing {\text{ and }}L\subseteq M\cup R.} To investigate the left distributivity of set subtraction over unions or intersections, consider how the sets involved in (both of) De Morgan's laws are all related:(L∖M)∩(L∖R)=L∖(M∪R)⊆L∖(M∩R)=(L∖M)∪(L∖R){\displaystyle {\begin{alignedat}{5}(L\,\setminus \,M)\,\cap \,(L\,\setminus \,R)~~=~~L\,\setminus \,(M\,\cup \,R)~&~~{\color {red}{\subseteq }}~~&&\color {black}{\,}L\,\setminus \,(M\,\cap \,R)~~=~~(L\,\setminus \,M)\,\cup \,(L\,\setminus \,R)\\[1.4ex]\end{alignedat}}}always holds (the equalities on the left and right are De Morgan's laws) but equality is not guaranteed in general (that is, the containment⊆{\displaystyle {\color {red}{\subseteq }}}might be strict). 
Equality holds if and only ifL∖(M∩R)⊆L∖(M∪R),{\displaystyle L\,\setminus \,(M\,\cap \,R)\;\subseteq \;L\,\setminus \,(M\,\cup \,R),}which happens if and only ifL∩M=L∩R.{\displaystyle L\,\cap \,M=L\,\cap \,R.} This observation about De Morgan's laws shows that∖{\displaystyle \,\setminus \,}isnotleft distributive over∪{\displaystyle \,\cup \,}or∩{\displaystyle \,\cap \,}because only the following are guaranteed in general:L∖(M∪R)⊆(L∖M)∪(L∖R)=L∖(M∩R){\displaystyle {\begin{alignedat}{5}L\,\setminus \,(M\,\cup \,R)~&~~{\color {red}{\subseteq }}~~&&\color {black}{\,}(L\,\setminus \,M)\,\cup \,(L\,\setminus \,R)~~=~~L\,\setminus \,(M\,\cap \,R)\\[1.4ex]\end{alignedat}}}L∖(M∩R)⊇(L∖M)∩(L∖R)=L∖(M∪R){\displaystyle {\begin{alignedat}{5}L\,\setminus \,(M\,\cap \,R)~&~~{\color {red}{\supseteq }}~~&&\color {black}{\,}(L\,\setminus \,M)\,\cap \,(L\,\setminus \,R)~~=~~L\,\setminus \,(M\,\cup \,R)\\[1.4ex]\end{alignedat}}}where equality holds for one (or equivalently, for both) of the above two inclusion formulas if and only ifL∩M=L∩R.{\displaystyle L\,\cap \,M=L\,\cap \,R.} The following statements are equivalent: Quasi-commutativity:(L∖M)∖R=(L∖R)∖M(Quasi-commutative){\displaystyle (L\setminus M)\setminus R~=~(L\setminus R)\setminus M\qquad {\text{ (Quasi-commutative)}}}always holds but in general,L∖(M∖R)≠L∖(R∖M).{\displaystyle L\setminus (M\setminus R)~~{\color {red}{\neq }}~~L\setminus (R\setminus M).}However,L∖(M∖R)⊆L∖(R∖M){\displaystyle L\setminus (M\setminus R)~\subseteq ~L\setminus (R\setminus M)}if and only ifL∩R⊆M{\displaystyle L\cap R~\subseteq ~M}if and only ifL∖(R∖M)=L.{\displaystyle L\setminus (R\setminus M)~=~L.} Set subtraction complexity: To manage the many identities involving set subtraction, this section is divided based on where the set subtraction operation and parentheses are located on the left hand side of the identity. 
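The quasi-commutativity identity and the equality condition L ∩ M = L ∩ R for the De Morgan containment can likewise be confirmed exhaustively (a brute-force sketch over an arbitrary 4-element universe):

```python
from itertools import combinations

subsets = [set(c) for r in range(5) for c in combinations(range(4), r)]

for L in subsets:
    for M in subsets:
        for R in subsets:
            # Quasi-commutativity always holds:
            assert (L - M) - R == (L - R) - M
            # De Morgan's laws, with the containment between them:
            lhs = L - (M | R)
            rhs = L - (M & R)
            assert lhs == (L - M) & (L - R)
            assert rhs == (L - M) | (L - R)
            assert lhs <= rhs
            # Equality holds exactly when L ∩ M = L ∩ R:
            assert (lhs == rhs) == (L & M == L & R)
```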
The great variety and (relative) complexity of formulas involving set subtraction (compared to those without it) is in part due to the fact that unlike∪,∩,{\displaystyle \,\cup ,\,\cap ,}and△,{\displaystyle \triangle ,\,}set subtraction is neither associative nor commutative and it also is not left distributive over∪,∩,△,{\displaystyle \,\cup ,\,\cap ,\,\triangle ,}or even over itself. Set subtraction isnotassociative in general:(L∖M)∖R≠L∖(M∖R){\displaystyle (L\,\setminus \,M)\,\setminus \,R\;~~{\color {red}{\neq }}~~\;L\,\setminus \,(M\,\setminus \,R)}since only the following is always guaranteed:(L∖M)∖R⊆L∖(M∖R).{\displaystyle (L\,\setminus \,M)\,\setminus \,R\;~~{\color {red}{\subseteq }}~~\;L\,\setminus \,(M\,\setminus \,R).} (L∖M)∖R=L∖(M∪R)=(L∖R)∖M=(L∖M)∩(L∖R)=(L∖R)∖(M∖R){\displaystyle {\begin{alignedat}{4}(L\setminus M)\setminus R&=&&L\setminus (M\cup R)\\[0.6ex]&=(&&L\setminus R)\setminus M\\[0.6ex]&=(&&L\setminus M)\cap (L\setminus R)\\[0.6ex]&=(&&L\,\setminus \,R)\,\setminus \,(M\,\setminus \,R)\\[1.4ex]\end{alignedat}}} L∖(M∖R)=(L∖M)∪(L∩R){\displaystyle {\begin{alignedat}{4}L\setminus (M\setminus R)&=(L\setminus M)\cup (L\cap R)\\[1.4ex]\end{alignedat}}} Set subtraction on theleft, and parentheses on theleft (L∖M)∪R=(L∪R)∖(M∖R)=(L∖(M∪R))∪R(the outermost union is disjoint){\displaystyle {\begin{alignedat}{4}\left(L\setminus M\right)\cup R&=(L\cup R)\setminus (M\setminus R)\\&=(L\setminus (M\cup R))\cup R~~~~~{\text{ (the outermost union is disjoint) }}\\\end{alignedat}}} (L∖M)∩(L∖R)=L∖(M∪R)⊆L∖(M∩R)=(L∖M)∪(L∖R){\displaystyle {\begin{alignedat}{5}(L\,\setminus \,M)\,\cap \,(L\,\setminus \,R)~~=~~L\,\setminus \,(M\,\cup \,R)~&~~{\color {red}{\subseteq }}~~&&\color {black}{\,}L\,\setminus \,(M\,\cap \,R)~~=~~(L\,\setminus \,M)\,\cup \,(L\,\setminus \,R)\\[1.4ex]\end{alignedat}}}(L∖M)△R=(L∖(M∪R))∪(R∖L)∪(L∩M∩R)(the three outermost sets are pairwise disjoint){\displaystyle {\begin{alignedat}{4}(L\setminus M)~\triangle
~R&=(L\setminus (M\cup R))\cup (R\setminus L)\cup (L\cap M\cap R)~~~{\text{ (the three outermost sets are pairwise disjoint) }}\\\end{alignedat}}} (L∖M)×R=(L×R)∖(M×R)(Distributivity){\displaystyle (L\,\setminus M)\times R=(L\times R)\,\setminus (M\times R)~~~~~{\text{ (Distributivity)}}} Set subtraction on theleft, and parentheses on theright L∖(M∪R)=(L∖M)∩(L∖R)(De Morgan's law)=(L∖M)∖R=(L∖R)∖M{\displaystyle {\begin{alignedat}{3}L\setminus (M\cup R)&=(L\setminus M)&&\,\cap \,(&&L\setminus R)~~~~{\text{ (De Morgan's law) }}\\&=(L\setminus M)&&\,\,\setminus &&R\\&=(L\setminus R)&&\,\,\setminus &&M\\\end{alignedat}}} L∖(M∩R)=(L∖M)∪(L∖R)(De Morgan's law){\displaystyle {\begin{alignedat}{4}L\setminus (M\cap R)&=(L\setminus M)\cup (L\setminus R)~~~~{\text{ (De Morgan's law) }}\\\end{alignedat}}}where the above two sets that are the subjects ofDe Morgan's lawsalways satisfyL∖(M∪R)⊆L∖(M∩R).{\displaystyle L\,\setminus \,(M\,\cup \,R)~~{\color {red}{\subseteq }}~~\color {black}{\,}L\,\setminus \,(M\,\cap \,R).} L∖(M△R)=(L∖(M∪R))∪(L∩M∩R)(the outermost union is disjoint){\displaystyle {\begin{alignedat}{4}L\setminus (M~\triangle ~R)&=(L\setminus (M\cup R))\cup (L\cap M\cap R)~~~{\text{ (the outermost union is disjoint) }}\\\end{alignedat}}} Set subtraction on theright, and parentheses on theleft (L∪M)∖R=(L∖R)∪(M∖R){\displaystyle {\begin{alignedat}{4}(L\cup M)\setminus R&=(L\setminus R)\cup (M\setminus R)\\\end{alignedat}}} (L∩M)∖R=(L∖R)∩(M∖R)=L∩(M∖R)=M∩(L∖R){\displaystyle {\begin{alignedat}{4}(L\cap M)\setminus R&=(&&L\setminus R)&&\cap (M\setminus R)\\&=&&L&&\cap (M\setminus R)\\&=&&M&&\cap (L\setminus R)\\\end{alignedat}}} (L△M)∖R=(L∖R)△(M∖R)=(L∪R)△(M∪R){\displaystyle {\begin{alignedat}{4}(L\,\triangle \,M)\setminus R&=(L\setminus R)~&&\triangle ~(M\setminus R)\\&=(L\cup R)~&&\triangle ~(M\cup R)\\\end{alignedat}}} Set subtraction on theright, and parentheses on theright L∪(M∖R)=L∪(M∖(R∪L))(the outermost union is disjoint)=[(L∖M)∪(R∩L)]∪(M∖R)(the outermost union is 
disjoint)=(L∖(M∪R))∪(R∩L)∪(M∖R)(the three outermost sets are pairwise disjoint){\displaystyle {\begin{alignedat}{3}L\cup (M\setminus R)&=&&&&L&&\cup \;&&(M\setminus (R\cup L))&&~~~{\text{ (the outermost union is disjoint) }}\\&=[&&(&&L\setminus M)&&\cup \;&&(R\cap L)]\cup (M\setminus R)&&~~~{\text{ (the outermost union is disjoint) }}\\&=&&(&&L\setminus (M\cup R))\;&&\;\cup &&(R\cap L)\,\,\cup (M\setminus R)&&~~~{\text{ (the three outermost sets are pairwise disjoint) }}\\\end{alignedat}}} L×(M∖R)=(L×M)∖(L×R)(Distributivity){\displaystyle L\times (M\,\setminus R)=(L\times M)\,\setminus (L\times R)~~~~~{\text{ (Distributivity)}}} Operations of the form(L∙M)∗(M∙R){\displaystyle (L\bullet M)\ast (M\bullet R)}: (L∪M)∪(M∪R)=L∪M∪R(L∪M)∩(M∪R)=M∪(L∩R)(L∪M)∖(M∪R)=L∖(M∪R)(L∪M)△(M∪R)=(L∖(M∪R))∪(R∖(L∪M))=(L△R)∖M(L∩M)∪(M∩R)=M∩(L∪R)(L∩M)∩(M∩R)=L∩M∩R(L∩M)∖(M∩R)=(L∩M)∖R(L∩M)△(M∩R)=[(L∩M)∪(M∩R)]∖(L∩M∩R)(L∖M)∪(M∖R)=(L∪M)∖(M∩R)(L∖M)∩(M∖R)=∅(L∖M)∖(M∖R)=L∖M(L∖M)△(M∖R)=(L∖M)∪(M∖R)=(L∪M)∖(M∩R)(L△M)∪(M△R)=(L∪M∪R)∖(L∩M∩R)(L△M)∩(M△R)=((L∩R)∖M)∪(M∖(L∪R))(L△M)∖(M△R)=(L∖(M∪R))∪((M∩R)∖L)(L△M)△(M△R)=L△R{\displaystyle {\begin{alignedat}{9}(L\cup M)&\,\cup \,&&(&&M\cup R)&&&&\;=\;\;&&L\cup M\cup R\\[1.4ex](L\cup M)&\,\cap \,&&(&&M\cup R)&&&&\;=\;\;&&M\cup (L\cap R)\\[1.4ex](L\cup M)&\,\setminus \,&&(&&M\cup R)&&&&\;=\;\;&&L\,\setminus \,(M\cup R)\\[1.4ex](L\cup M)&\,\triangle \,&&(&&M\cup R)&&&&\;=\;\;&&(L\,\setminus \,(M\cup R))\,\cup \,(R\,\setminus \,(L\cup M))\\[1.4ex]&\,&&\,&&\,&&&&\;=\;\;&&(L\,\triangle \,R)\,\setminus \,M\\[1.4ex](L\cap M)&\,\cup \,&&(&&M\cap R)&&&&\;=\;\;&&M\cap (L\cup R)\\[1.4ex](L\cap M)&\,\cap \,&&(&&M\cap R)&&&&\;=\;\;&&L\cap M\cap R\\[1.4ex](L\cap M)&\,\setminus \,&&(&&M\cap R)&&&&\;=\;\;&&(L\cap M)\,\setminus \,R\\[1.4ex](L\cap M)&\,\triangle \,&&(&&M\cap R)&&&&\;=\;\;&&[(L\,\cap M)\cup (M\,\cap R)]\,\setminus \,(L\,\cap M\,\cap R)\\[1.4ex](L\,\setminus M)&\,\cup \,&&(&&M\,\setminus R)&&&&\;=\;\;&&(L\,\cup M)\,\setminus (M\,\cap \,R)\\[1.4ex](L\,\setminus M)&\,\cap 
\,&&(&&M\,\setminus R)&&&&\;=\;\;&&\varnothing \\[1.4ex](L\,\setminus M)&\,\setminus \,&&(&&M\,\setminus R)&&&&\;=\;\;&&L\,\setminus M\\[1.4ex](L\,\setminus M)&\,\triangle \,&&(&&M\,\setminus R)&&&&\;=\;\;&&(L\,\setminus M)\cup (M\,\setminus R)\\[1.4ex]&\,&&\,&&\,&&&&\;=\;\;&&(L\,\cup M)\setminus (M\,\cap R)\\[1.4ex](L\,\triangle \,M)&\,\cup \,&&(&&M\,\triangle \,R)&&&&\;=\;\;&&(L\,\cup \,M\,\cup \,R)\,\setminus \,(L\,\cap \,M\,\cap \,R)\\[1.4ex](L\,\triangle \,M)&\,\cap \,&&(&&M\,\triangle \,R)&&&&\;=\;\;&&((L\,\cap \,R)\,\setminus \,M)\,\cup \,(M\,\setminus \,(L\,\cup \,R))\\[1.4ex](L\,\triangle \,M)&\,\setminus \,&&(&&M\,\triangle \,R)&&&&\;=\;\;&&(L\,\setminus \,(M\,\cup \,R))\,\cup \,((M\,\cap \,R)\,\setminus \,L)\\[1.4ex](L\,\triangle \,M)&\,\triangle \,&&(&&M\,\triangle \,R)&&&&\;=\;\;&&L\,\triangle \,R\\[1.7ex]\end{alignedat}}} Operations of the form(L∙M)∗(R∖M){\displaystyle (L\bullet M)\ast (R\,\setminus \,M)}: (L∪M)∪(R∖M)=L∪M∪R(L∪M)∩(R∖M)=(L∩R)∖M(L∪M)∖(R∖M)=M∪(L∖R)(L∪M)△(R∖M)=M∪(L△R)(L∩M)∪(R∖M)=[L∩(M∪R)]∪[R∖(L∪M)](disjoint union)=(L∩M)△(R∖M)(L∩M)∩(R∖M)=∅(L∩M)∖(R∖M)=L∩M(L∩M)△(R∖M)=(L∩M)∪(R∖M)(disjoint union)(L∖M)∪(R∖M)=L∪R∖M(L∖M)∩(R∖M)=(L∩R)∖M(L∖M)∖(R∖M)=L∖(M∪R)(L∖M)△(R∖M)=(L△R)∖M(L△M)∪(R∖M)=(L∪M∪R)∖(L∩M)(L△M)∩(R∖M)=(L∩R)∖M(L△M)∖(R∖M)=[L∖(M∪R)]∪(M∖L)(disjoint union)=(L△M)∖(L∩R)(L△M)△(R∖M)=L△(M∪R){\displaystyle {\begin{alignedat}{9}(L\cup M)&\,\cup \,&&(&&R\,\setminus \,M)&&&&\;=\;\;&&L\cup M\cup R\\[1.4ex](L\cup M)&\,\cap \,&&(&&R\,\setminus \,M)&&&&\;=\;\;&&(L\cap R)\,\setminus \,M\\[1.4ex](L\cup M)&\,\setminus \,&&(&&R\,\setminus \,M)&&&&\;=\;\;&&M\cup (L\,\setminus \,R)\\[1.4ex](L\cup M)&\,\triangle \,&&(&&R\,\setminus \,M)&&&&\;=\;\;&&M\cup (L\,\triangle \,R)\\[1.4ex](L\cap M)&\,\cup \,&&(&&R\,\setminus \,M)&&&&\;=\;\;&&[L\cap (M\cup R)]\cup [R\,\setminus \,(L\cup M)]\qquad {\text{ (disjoint union)}}\\[1.4ex]&\,&&\,&&\,&&&&\;=\;\;&&(L\cap M)\,\triangle \,(R\,\setminus \,M)\\[1.4ex](L\cap M)&\,\cap \,&&(&&R\,\setminus \,M)&&&&\;=\;\;&&\varnothing 
\\[1.4ex](L\cap M)&\,\setminus \,&&(&&R\,\setminus \,M)&&&&\;=\;\;&&L\cap M\\[1.4ex](L\cap M)&\,\triangle \,&&(&&R\,\setminus \,M)&&&&\;=\;\;&&(L\cap M)\cup (R\,\setminus \,M)\qquad {\text{ (disjoint union)}}\\[1.4ex](L\,\setminus \,M)&\,\cup \,&&(&&R\,\setminus \,M)&&&&\;=\;\;&&L\cup R\,\setminus \,M\\[1.4ex](L\,\setminus \,M)&\,\cap \,&&(&&R\,\setminus \,M)&&&&\;=\;\;&&(L\cap R)\,\setminus \,M\\[1.4ex](L\,\setminus \,M)&\,\setminus \,&&(&&R\,\setminus \,M)&&&&\;=\;\;&&L\,\setminus \,(M\cup R)\\[1.4ex](L\,\setminus \,M)&\,\triangle \,&&(&&R\,\setminus \,M)&&&&\;=\;\;&&(L\,\triangle \,R)\,\setminus \,M\\[1.4ex](L\,\triangle \,M)&\,\cup \,&&(&&R\,\setminus \,M)&&&&\;=\;\;&&(L\cup M\cup R)\,\setminus \,(L\cap M)\\[1.4ex](L\,\triangle \,M)&\,\cap \,&&(&&R\,\setminus \,M)&&&&\;=\;\;&&(L\cap R)\,\setminus \,M\\[1.4ex](L\,\triangle \,M)&\,\setminus \,&&(&&R\,\setminus \,M)&&&&\;=\;\;&&[L\,\setminus \,(M\cup R)]\cup (M\,\setminus \,L)\qquad {\text{ (disjoint union)}}\\[1.4ex]&\,&&\,&&\,&&&&\;=\;\;&&(L\,\triangle \,M)\setminus (L\,\cap R)\\[1.4ex](L\,\triangle \,M)&\,\triangle \,&&(&&R\,\setminus \,M)&&&&\;=\;\;&&L\,\triangle \,(M\cup R)\\[1.7ex]\end{alignedat}}} Operations of the form(L∖M)∗(L∖R){\displaystyle (L\,\setminus \,M)\ast (L\,\setminus \,R)}: (L∖M)∪(L∖R)=L∖(M∩R)(L∖M)∩(L∖R)=L∖(M∪R)(L∖M)∖(L∖R)=(L∩R)∖M(L∖M)△(L∖R)=L∩(M△R)=(L∩M)△(L∩R){\displaystyle {\begin{alignedat}{9}(L\,\setminus M)&\,\cup \,&&(&&L\,\setminus R)&&\;=\;&&L\,\setminus \,(M\,\cap \,R)\\[1.4ex](L\,\setminus M)&\,\cap \,&&(&&L\,\setminus R)&&\;=\;&&L\,\setminus \,(M\,\cup \,R)\\[1.4ex](L\,\setminus M)&\,\setminus \,&&(&&L\,\setminus R)&&\;=\;&&(L\,\cap \,R)\,\setminus \,M\\[1.4ex](L\,\setminus M)&\,\triangle \,&&(&&L\,\setminus R)&&\;=\;&&L\,\cap \,(M\,\triangle \,R)\\[1.4ex]&\,&&\,&&\,&&\;=\;&&(L\cap M)\,\triangle \,(L\cap R)\\[1.4ex]\end{alignedat}}} Other properties: L∩M=RandL∩R=Mif and only ifM=R⊆L.{\displaystyle L\cap M=R\;{\text{ and }}\;L\cap R=M\qquad {\text{ if and only if }}\qquad 
M=R\subseteq L.} Given finitely many setsL1,…,Ln,{\displaystyle L_{1},\ldots ,L_{n},}something belongs to theirsymmetric differenceif and only if it belongs to an odd number of these sets. Explicitly, for anyx,{\displaystyle x,}x∈L1△⋯△Ln{\displaystyle x\in L_{1}\triangle \cdots \triangle L_{n}}if and only if the cardinality|{i:x∈Li}|{\displaystyle \left|\left\{i:x\in L_{i}\right\}\right|}is odd. (Recall that symmetric difference is associative so parentheses are not needed for the setL1△⋯△Ln{\displaystyle L_{1}\triangle \cdots \triangle L_{n}}). Consequently, the symmetric difference of three sets satisfies:L△M△R=(L∩M∩R)∪{x:xbelongs to exactly one of the setsL,M,R}(the union is disjoint)=[L∩M∩R]∪[L∖(M∪R)]∪[M∖(L∪R)]∪[R∖(L∪M)](all 4 sets enclosed by [ ] are pairwise disjoint){\displaystyle {\begin{alignedat}{4}L\,\triangle \,M\,\triangle \,R&=(L\cap M\cap R)\cup \{x:x{\text{ belongs to exactly one of the sets }}L,M,R\}~~~~~~{\text{ (the union is disjoint) }}\\&=[L\cap M\cap R]\cup [L\setminus (M\cup R)]\cup [M\setminus (L\cup R)]\cup [R\setminus (L\cup M)]~~~~~~~~~{\text{ (all 4 sets enclosed by [ ] are pairwise disjoint) }}\\\end{alignedat}}} The binaryCartesian product⨯distributes overunions, intersections, set subtraction, and symmetric difference: (L∩M)×R=(L×R)∩(M×R)(Right-distributivity of×over∩)(L∪M)×R=(L×R)∪(M×R)(Right-distributivity of×over∪)(L∖M)×R=(L×R)∖(M×R)(Right-distributivity of×over∖)(L△M)×R=(L×R)△(M×R)(Right-distributivity of×over△){\displaystyle {\begin{alignedat}{9}(L\,\cap \,M)\,\times \,R~&~~=~~&&(L\,\times \,R)\,&&\cap \,&&(M\,\times \,R)\qquad &&{\text{ (Right-distributivity of }}\,\times \,{\text{ over }}\,\cap \,{\text{)}}\\[1.4ex](L\,\cup \,M)\,\times \,R~&~~=~~&&(L\,\times \,R)\,&&\cup \,&&(M\,\times \,R)\qquad &&{\text{ (Right-distributivity of }}\,\times \,{\text{ over }}\,\cup \,{\text{)}}\\[1.4ex](L\,\setminus \,M)\,\times \,R~&~~=~~&&(L\,\times \,R)\,&&\setminus \,&&(M\,\times \,R)\qquad &&{\text{ (Right-distributivity of }}\,\times 
\,{\text{ over }}\,\setminus \,{\text{)}}\\[1.4ex](L\,\triangle \,M)\,\times \,R~&~~=~~&&(L\,\times \,R)\,&&\triangle \,&&(M\,\times \,R)\qquad &&{\text{ (Right-distributivity of }}\,\times \,{\text{ over }}\,\triangle \,{\text{)}}\\[1.4ex]\end{alignedat}}} L×(M∩R)=(L×M)∩(L×R)(Left-distributivity of×over∩)L×(M∪R)=(L×M)∪(L×R)(Left-distributivity of×over∪)L×(M∖R)=(L×M)∖(L×R)(Left-distributivity of×over∖)L×(M△R)=(L×M)△(L×R)(Left-distributivity of×over△){\displaystyle {\begin{alignedat}{5}L\times (M\cap R)&\;=\;\;&&(L\times M)\cap (L\times R)\qquad &&{\text{ (Left-distributivity of }}\,\times \,{\text{ over }}\,\cap \,{\text{)}}\\[1.4ex]L\times (M\cup R)&\;=\;\;&&(L\times M)\cup (L\times R)&&{\text{ (Left-distributivity of }}\,\times \,{\text{ over }}\,\cup \,{\text{)}}\\[1.4ex]L\times (M\setminus R)&\;=\;\;&&(L\times M)\setminus (L\times R)&&{\text{ (Left-distributivity of }}\,\times \,{\text{ over }}\,\setminus \,{\text{)}}\\[1.4ex]L\times (M\triangle R)&\;=\;\;&&(L\times M)\triangle (L\times R)&&{\text{ (Left-distributivity of }}\,\times \,{\text{ over }}\,\triangle \,{\text{)}}\\[1.4ex]\end{alignedat}}} But in general, ⨯ does not distribute over itself:L×(M×R)≠(L×M)×(L×R){\displaystyle L\times (M\times R)~\color {Red}{\neq }\color {Black}{}~(L\times M)\times (L\times R)}(L×M)×R≠(L×R)×(M×R).{\displaystyle (L\times M)\times R~\color {Red}{\neq }\color {Black}{}~(L\times R)\times (M\times R).} (L×R)∩(L2×R2)=(L∩L2)×(R∩R2){\displaystyle (L\times R)\cap \left(L_{2}\times R_{2}\right)~=~\left(L\cap L_{2}\right)\times \left(R\cap R_{2}\right)}(L×M×R)∩(L2×M2×R2)=(L∩L2)×(M∩M2)×(R∩R2){\displaystyle (L\times M\times R)\cap \left(L_{2}\times M_{2}\times R_{2}\right)~=~\left(L\cap L_{2}\right)\times \left(M\cap M_{2}\right)\times \left(R\cap R_{2}\right)} (L×R)∪(L2×R2)=[(L∖L2)×R]∪[(L2∖L)×R2]∪[(L∩L2)×(R∪R2)]=[L×(R∖R2)]∪[L2×(R2∖R)]∪[(L∪L2)×(R∩R2)]{\displaystyle {\begin{alignedat}{9}\left(L\times R\right)~\cup ~\left(L_{2}\times R_{2}\right)~&=~\left[\left(L\setminus 
L_{2}\right)\times R\right]~\cup ~\left[\left(L_{2}\setminus L\right)\times R_{2}\right]~\cup ~\left[\left(L\cap L_{2}\right)\times \left(R\cup R_{2}\right)\right]\\[0.5ex]~&=~\left[L\times \left(R\setminus R_{2}\right)\right]~\cup ~\left[L_{2}\times \left(R_{2}\setminus R\right)\right]~\cup ~\left[\left(L\cup L_{2}\right)\times \left(R\cap R_{2}\right)\right]\\\end{alignedat}}} (L×R)∖(L2×R2)=[(L∖L2)×R]∪[L×(R∖R2)]{\displaystyle {\begin{alignedat}{9}\left(L\times R\right)~\setminus ~\left(L_{2}\times R_{2}\right)~&=~\left[\left(L\,\setminus \,L_{2}\right)\times R\right]~\cup ~\left[L\times \left(R\,\setminus \,R_{2}\right)\right]\\\end{alignedat}}}and(L×M×R)∖(L2×M2×R2)=[(L∖L2)×M×R]∪[L×(M∖M2)×R]∪[L×M×(R∖R2)]{\displaystyle (L\times M\times R)~\setminus ~\left(L_{2}\times M_{2}\times R_{2}\right)~=~\left[\left(L\,\setminus \,L_{2}\right)\times M\times R\right]~\cup ~\left[L\times \left(M\,\setminus \,M_{2}\right)\times R\right]~\cup ~\left[L\times M\times \left(R\,\setminus \,R_{2}\right)\right]} (L∖L2)×(R∖R2)=(L×R)∖[(L2×R)∪(L×R2)]{\displaystyle \left(L\,\setminus \,L_{2}\right)\times \left(R\,\setminus \,R_{2}\right)~=~\left(L\times R\right)\,\setminus \,\left[\left(L_{2}\times R\right)\cup \left(L\times R_{2}\right)\right]} (L∖L2)×(M∖M2)×(R∖R2)=(L×M×R)∖[(L2×M×R)∪(L×M2×R)∪(L×M×R2)]{\displaystyle \left(L\,\setminus \,L_{2}\right)\times \left(M\,\setminus \,M_{2}\right)\times \left(R\,\setminus \,R_{2}\right)~=~\left(L\times M\times R\right)\,\setminus \,\left[\left(L_{2}\times M\times R\right)\cup \left(L\times M_{2}\times R\right)\cup \left(L\times M\times R_{2}\right)\right]} L×(R△R2)=[L×(R∖R2)]∪[L×(R2∖R)]{\displaystyle L\times \left(R\,\triangle \,R_{2}\right)~=~\left[L\times \left(R\,\setminus \,R_{2}\right)\right]\,\cup \,\left[L\times \left(R_{2}\,\setminus \,R\right)\right]}(L△L2)×R=[(L∖L2)×R]∪[(L2∖L)×R]{\displaystyle \left(L\,\triangle \,L_{2}\right)\times R~=~\left[\left(L\,\setminus \,L_{2}\right)\times R\right]\,\cup \,\left[\left(L_{2}\,\setminus 
\,L\right)\times R\right]} (L△L2)×(R△R2)=[(L∪L2)×(R∪R2)]∖[((L∩L2)×(R∪R2))∪((L∪L2)×(R∩R2))]=[(L∖L2)×(R2∖R)]∪[(L2∖L)×(R2∖R)]∪[(L∖L2)×(R∖R2)]∪[(L2∖L)×(R∖R2)]{\displaystyle {\begin{alignedat}{4}\left(L\,\triangle \,L_{2}\right)\times \left(R\,\triangle \,R_{2}\right)~&=~&&&&\,\left[\left(L\cup L_{2}\right)\times \left(R\cup R_{2}\right)\right]\;\setminus \;\left[\left(\left(L\cap L_{2}\right)\times \left(R\cup R_{2}\right)\right)\;\cup \;\left(\left(L\cup L_{2}\right)\times \left(R\cap R_{2}\right)\right)\right]\\[0.7ex]&=~&&&&\,\left[\left(L\,\setminus \,L_{2}\right)\times \left(R_{2}\,\setminus \,R\right)\right]\,\cup \,\left[\left(L_{2}\,\setminus \,L\right)\times \left(R_{2}\,\setminus \,R\right)\right]\,\cup \,\left[\left(L\,\setminus \,L_{2}\right)\times \left(R\,\setminus \,R_{2}\right)\right]\,\cup \,\left[\left(L_{2}\,\setminus \,L\right)\times \left(R\,\setminus \,R_{2}\right)\right]\\\end{alignedat}}} (L△L2)×(M△M2)×(R△R2)=[(L∪L2)×(M∪M2)×(R∪R2)]∖[((L∩L2)×(M∪M2)×(R∪R2))∪((L∪L2)×(M∩M2)×(R∪R2))∪((L∪L2)×(M∪M2)×(R∩R2))]{\displaystyle {\begin{alignedat}{4}\left(L\,\triangle \,L_{2}\right)\times \left(M\,\triangle \,M_{2}\right)\times \left(R\,\triangle \,R_{2}\right)~&=~\left[\left(L\cup L_{2}\right)\times \left(M\cup M_{2}\right)\times \left(R\cup R_{2}\right)\right]\;\setminus \;\left[\left(\left(L\cap L_{2}\right)\times \left(M\cup M_{2}\right)\times \left(R\cup R_{2}\right)\right)\;\cup \;\left(\left(L\cup L_{2}\right)\times \left(M\cap M_{2}\right)\times \left(R\cup R_{2}\right)\right)\;\cup \;\left(\left(L\cup L_{2}\right)\times \left(M\cup M_{2}\right)\times \left(R\cap R_{2}\right)\right)\right]\\\end{alignedat}}} In general,(L△L2)×(R△R2){\displaystyle \left(L\,\triangle \,L_{2}\right)\times \left(R\,\triangle \,R_{2}\right)}need not be a subset nor a superset of(L×R)△(L2×R2).{\displaystyle \left(L\times R\right)\,\triangle \,\left(L_{2}\times R_{2}\right).} (L×R)△(L2×R2)=[(L×R)∪(L2×R2)]∖[(L∩L2)×(R∩R2)]{\displaystyle {\begin{alignedat}{4}\left(L\times R\right)\,\triangle \,\left(L_{2}\times R_{2}\right)~&=~&&\left[\left(L\times R\right)\cup \left(L_{2}\times R_{2}\right)\right]\;\setminus \;\left[\left(L\cap L_{2}\right)\times \left(R\cap R_{2}\right)\right]\\[0.7ex]\end{alignedat}}}
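Modeling the binary Cartesian product as a set of ordered pairs, the distributivity laws and non-laws for × can be checked directly with `itertools.product` (an illustrative sketch; the helper `cart` and the particular sets are arbitrary choices):

```python
from itertools import product

def cart(a, b):
    """Binary Cartesian product as a set of ordered pairs."""
    return set(product(a, b))

L, M, R = {1, 2}, {2, 3}, {4, 5}

# × distributes over ∪, ∩, ∖, △ on the right ...
assert cart(L | M, R) == cart(L, R) | cart(M, R)
assert cart(L & M, R) == cart(L, R) & cart(M, R)
assert cart(L - M, R) == cart(L, R) - cart(M, R)
assert cart(L ^ M, R) == cart(L, R) ^ cart(M, R)
# ... and on the left
assert cart(L, M ^ R) == cart(L, M) ^ cart(L, R)

# but × does not distribute over itself:
assert cart(L, cart(M, R)) != cart(cart(L, M), cart(L, R))

# (L △ L2) × (R △ R2) splits into four pairwise products:
L2, R2 = {2}, {5, 6}
assert cart(L ^ L2, R ^ R2) == (cart(L - L2, R - R2) | cart(L - L2, R2 - R)
                                | cart(L2 - L, R - R2) | cart(L2 - L, R2 - R))
```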
(L×M×R)△(L2×M2×R2)=(L×M×R)∪(L2×M2×R2)∖[(L∩L2)×(M∩M2)×(R∩R2)]{\displaystyle {\begin{alignedat}{4}\left(L\times M\times R\right)\,\triangle \,\left(L_{2}\times M_{2}\times R_{2}\right)~&=~&&\left(L\times M\times R\right)\cup \left(L_{2}\times M_{2}\times R_{2}\right)\;\setminus \;\left[\left(L\cap L_{2}\right)\times \left(M\cap M_{2}\right)\times \left(R\cap R_{2}\right)\right]\\[0.7ex]\end{alignedat}}} Let(Li)i∈I,{\displaystyle \left(L_{i}\right)_{i\in I},}(Rj)j∈J,{\displaystyle \left(R_{j}\right)_{j\in J},}and(Si,j)(i,j)∈I×J{\displaystyle \left(S_{i,j}\right)_{(i,j)\in I\times J}}be indexedfamilies of sets. Whenever the assumption is needed, then allindexing sets, such asI{\displaystyle I}andJ,{\displaystyle J,}are assumed to be non-empty. Afamily of setsor (more briefly) afamilyrefers to a set whose elements are sets. Anindexed familyof setsis a function from some set, called itsindexing set, into some family of sets. An indexed family of sets will be denoted by(Li)i∈I,{\displaystyle \left(L_{i}\right)_{i\in I},}where this notation assigns the symbolI{\displaystyle I}for the indexing set and for every indexi∈I,{\displaystyle i\in I,}assigns the symbolLi{\displaystyle L_{i}}to the value of the function ati.{\displaystyle i.}The function itself may then be denoted by the symbolL∙,{\displaystyle L_{\bullet },}which is obtained from the notation(Li)i∈I{\displaystyle \left(L_{i}\right)_{i\in I}}by replacing the indexi{\displaystyle i}with a bullet symbol∙;{\displaystyle \bullet \,;}explicitly,L∙{\displaystyle L_{\bullet }}is the function:L∙:I→{Li:i∈I}i↦Li{\displaystyle {\begin{alignedat}{4}L_{\bullet }:\;&&I&&\;\to \;&\left\{L_{i}:i\in I\right\}\\[0.3ex]&&i&&\;\mapsto \;&L_{i}\\\end{alignedat}}}which may be summarized by writingL∙=(Li)i∈I.{\displaystyle L_{\bullet }=\left(L_{i}\right)_{i\in I}.} Any given indexed family of setsL∙=(Li)i∈I{\displaystyle L_{\bullet }=\left(L_{i}\right)_{i\in I}}(which is afunction) can be canonically associated with its 
image/rangeIm⁡L∙=def{Li:i∈I}{\displaystyle \operatorname {Im} L_{\bullet }~{\stackrel {\scriptscriptstyle {\text{def}}}{=}}~\left\{L_{i}:i\in I\right\}}(which is a family of sets). Conversely, any given family of setsB{\displaystyle {\mathcal {B}}}may be associated with theB{\displaystyle {\mathcal {B}}}-indexed family of sets(B)B∈B,{\displaystyle (B)_{B\in {\mathcal {B}}},}which is technically theidentity mapB→B.{\displaystyle {\mathcal {B}}\to {\mathcal {B}}.}However, this is not a bijective correspondence because an indexed family of setsL∙=(Li)i∈I{\displaystyle L_{\bullet }=\left(L_{i}\right)_{i\in I}}is not required to be injective (that is, there may exist distinct indicesi≠j{\displaystyle i\neq j}such thatLi=Lj{\displaystyle L_{i}=L_{j}}), which in particular means that it is possible for distinct indexed families of sets (which are functions) to be associated with the same family of sets (by having the same image/range). Arbitrary unions defined[3] IfI=∅{\displaystyle I=\varnothing }then⋃i∈∅Li={x:there existsi∈∅such thatx∈Li}=∅,{\displaystyle \bigcup _{i\in \varnothing }L_{i}=\{x~:~{\text{ there exists }}i\in \varnothing {\text{ such that }}x\in L_{i}\}=\varnothing ,}which is sometimes called thenullary union convention(despite being called a convention, this equality follows from the definition).
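The nullary union convention is easy to realize computationally. The sketch below (plain Python; the helper name `arbitrary_union` is our own) folds `set.union` over a family of sets starting from the empty set, so an empty family automatically yields ∅:

```python
from functools import reduce

def arbitrary_union(family):
    """Union of an arbitrary (possibly empty) family of sets.

    Folding from the empty set realizes the nullary union convention:
    the union over an empty family is the empty set.
    """
    return reduce(set.union, family, set())

# An empty family of sets has an empty union.
assert arbitrary_union([]) == set()
# A non-empty family behaves as expected.
assert arbitrary_union([{1, 2}, {2, 3}]) == {1, 2, 3}
```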
IfB{\displaystyle {\mathcal {B}}}is a family of sets then∪B{\displaystyle \cup {\mathcal {B}}}denotes the set:⋃B=def⋃B∈BB=def{x:there existsB∈Bsuch thatx∈B}.{\displaystyle \bigcup {\mathcal {B}}~~{\stackrel {\scriptscriptstyle {\text{def}}}{=}}~\bigcup _{B\in {\mathcal {B}}}B~~{\stackrel {\scriptscriptstyle {\text{def}}}{=}}~\{x~:~{\text{ there exists }}B\in {\mathcal {B}}{\text{ such that }}x\in B\}.} Arbitrary intersections defined IfI≠∅{\displaystyle I\neq \varnothing }then[3] IfB≠∅{\displaystyle {\mathcal {B}}\neq \varnothing }is anon-emptyfamily of sets then∩B{\displaystyle \cap {\mathcal {B}}}denotes the set:⋂B=def⋂B∈BB=def{x:x∈Bfor everyB∈B}={x:for allB,ifB∈Bthenx∈B}.{\displaystyle \bigcap {\mathcal {B}}~~{\stackrel {\scriptscriptstyle {\text{def}}}{=}}~\bigcap _{B\in {\mathcal {B}}}B~~{\stackrel {\scriptscriptstyle {\text{def}}}{=}}~\{x~:~x\in B{\text{ for every }}B\in {\mathcal {B}}\}~=~\{x~:~{\text{ for all }}B,{\text{ if }}B\in {\mathcal {B}}{\text{ then }}x\in B\}.} Nullary intersections IfI=∅{\displaystyle I=\varnothing }then⋂i∈∅Li={x:for alli,ifi∈∅thenx∈Li}{\displaystyle \bigcap _{i\in \varnothing }L_{i}=\{x~:~{\text{ for all }}i,{\text{ if }}i\in \varnothing {\text{ then }}x\in L_{i}\}}where every possible thingx{\displaystyle x}in the universe vacuously satisfies the condition: "ifi∈∅{\displaystyle i\in \varnothing }thenx∈Li{\displaystyle x\in L_{i}}". Consequently,⋂i∈∅Li={x:true}{\displaystyle {\textstyle \bigcap \limits _{i\in \varnothing }}L_{i}=\{x:{\text{ true }}\}}consists ofeverythingin the universe. So ifI=∅{\displaystyle I=\varnothing }and: A consequence of this is the following assumption/definition: Some authors adopt the so-called nullary intersection convention, which is the convention that an empty intersection of sets is equal to some canonical set.
In particular, if all sets are subsets of some setX{\displaystyle X}then some author may declare that the empty intersection of these sets be equal toX.{\displaystyle X.}However, the nullary intersection convention is not as commonly accepted as the nullary union convention and this article will not adopt it (this is due to the fact that unlike the empty union, the value of the empty intersection depends onX{\displaystyle X}so if there are multiple sets under consideration, which is commonly the case, then the value of the empty intersection risks becoming ambiguous). Multiple index sets⋃j∈Ji∈I,Si,j=def⋃(i,j)∈I×JSi,j{\displaystyle \bigcup _{\stackrel {i\in I,}{j\in J}}S_{i,j}~~{\stackrel {\scriptscriptstyle {\text{def}}}{=}}~\bigcup _{(i,j)\in I\times J}S_{i,j}}⋂j∈Ji∈I,Si,j=def⋂(i,j)∈I×JSi,j{\displaystyle \bigcap _{\stackrel {i\in I,}{j\in J}}S_{i,j}~~{\stackrel {\scriptscriptstyle {\text{def}}}{=}}~\bigcap _{(i,j)\in I\times J}S_{i,j}} and[4] and[4] Naively swapping⋃i∈I{\displaystyle \;{\textstyle \bigcup \limits _{i\in I}}\;}and⋂j∈J{\displaystyle \;{\textstyle \bigcap \limits _{j\in J}}\;}may produce a different set The following inclusion always holds: In general, equality need not hold and moreover, the right hand side depends on how for each fixedi∈I,{\displaystyle i\in I,}the sets(Si,j)j∈J{\displaystyle \left(S_{i,j}\right)_{j\in J}}are labelled; and analogously, the left hand side depends on how for each fixedj∈J,{\displaystyle j\in J,}the sets(Si,j)i∈I{\displaystyle \left(S_{i,j}\right)_{i\in I}}are labelled. An example demonstrating this is now given. 
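The asymmetry between the nullary union and nullary intersection conventions discussed above shows up naturally in code: an empty union can always return ∅, but an empty intersection only makes sense relative to an ambient set. A minimal Python sketch (the helper name `arbitrary_intersection` and the parameter name `universe` are our own):

```python
from functools import reduce

def arbitrary_intersection(family, universe=None):
    """Intersection of a family of sets.

    An empty family is only meaningful when an ambient set is supplied,
    mirroring why the nullary intersection convention is not universally
    adopted: the value of an empty intersection depends on the universe.
    """
    family = list(family)
    if not family:
        if universe is None:
            raise ValueError("empty intersection needs an ambient universe")
        return set(universe)
    return reduce(set.intersection, family)

assert arbitrary_intersection([{1, 2}, {2, 3}]) == {2}
assert arbitrary_intersection([], universe={1, 2, 3}) == {1, 2, 3}
```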
Equality inInclusion 1 ∪∩ is a subset of ∩∪can hold under certain circumstances, such as in7e, which is the special case where(Si,j)(i,j)∈I×J{\displaystyle \left(S_{i,j}\right)_{(i,j)\in I\times J}}is(Li∖Rj)(i,j)∈I×J{\displaystyle \left(L_{i}\setminus R_{j}\right)_{(i,j)\in I\times J}}(that is,Si,j:=Li∖Rj{\displaystyle S_{i,j}\colon =L_{i}\setminus R_{j}}with the same indexing setsI{\displaystyle I}andJ{\displaystyle J}), or such as in7f, which is the special case where(Si,j)(i,j)∈I×J{\displaystyle \left(S_{i,j}\right)_{(i,j)\in I\times J}}is(Li∖Rj)(j,i)∈J×I{\displaystyle \left(L_{i}\setminus R_{j}\right)_{(j,i)\in J\times I}}(that is,S^j,i:=Li∖Rj{\displaystyle {\hat {S}}_{j,i}\colon =L_{i}\setminus R_{j}}with the indexing setsI{\displaystyle I}andJ{\displaystyle J}swapped). For a correct formula that extends the distributive laws, an approach other than just switching∪{\displaystyle \cup }and∩{\displaystyle \cap }is needed. Suppose that for eachi∈I,{\displaystyle i\in I,}Ji{\displaystyle J_{i}}is a non-empty index set and for eachj∈Ji,{\displaystyle j\in J_{i},}letTi,j{\displaystyle T_{i,j}}be any set (for example, to apply this law to(Si,j)(i,j)∈I×J,{\displaystyle \left(S_{i,j}\right)_{(i,j)\in I\times J},}useJi:=J{\displaystyle J_{i}\colon =J}for alli∈I{\displaystyle i\in I}and useTi,j:=Si,j{\displaystyle T_{i,j}\colon =S_{i,j}}for alli∈I{\displaystyle i\in I}and allj∈Ji=J{\displaystyle j\in J_{i}=J}). 
Let∏J∙=def∏i∈IJi{\displaystyle {\textstyle \prod }J_{\bullet }~{\stackrel {\scriptscriptstyle {\text{def}}}{=}}~\prod _{i\in I}J_{i}}denote theCartesian product, which can be interpreted as the set of all functionsf:I→⋃i∈IJi{\displaystyle f~:~I~\to ~{\textstyle \bigcup \limits _{i\in I}}J_{i}}such thatf(i)∈Ji{\displaystyle f(i)\in J_{i}}for everyi∈I.{\displaystyle i\in I.}Such a function may also be denoted using the tuple notation(fi)i∈I{\displaystyle \left(f_{i}\right)_{i\in I}}wherefi=deff(i){\displaystyle f_{i}~{\stackrel {\scriptscriptstyle {\text{def}}}{=}}~f(i)}for everyi∈I{\displaystyle i\in I}and conversely, a tuple(fi)i∈I{\displaystyle \left(f_{i}\right)_{i\in I}}is just notation for the function with domainI{\displaystyle I}whose value ati∈I{\displaystyle i\in I}isfi;{\displaystyle f_{i};}both notations can be used to denote the elements of∏J∙.{\displaystyle {\textstyle \prod }J_{\bullet }.}Then where∏J∙=def∏i∈IJi.{\displaystyle {\textstyle \prod }J_{\bullet }~{\stackrel {\scriptscriptstyle {\text{def}}}{=}}~{\textstyle \prod \limits _{i\in I}}J_{i}.} Example application: In the particular case where allJi{\displaystyle J_{i}}are equal (that is,Ji=Ji2{\displaystyle J_{i}=J_{i_{2}}}for alli,i2∈I,{\displaystyle i,i_{2}\in I,}which is the case with the family(Si,j)(i,j)∈I×J,{\displaystyle \left(S_{i,j}\right)_{(i,j)\in I\times J},}for example), then lettingJ{\displaystyle J}denote this common set, the Cartesian product will be∏J∙=def∏i∈IJi=∏i∈IJ=JI,{\displaystyle {\textstyle \prod }J_{\bullet }~{\stackrel {\scriptscriptstyle {\text{def}}}{=}}~{\textstyle \prod \limits _{i\in I}}J_{i}={\textstyle \prod \limits _{i\in I}}J=J^{I},}which is theset of all functionsof the formf:I→J.{\displaystyle f~:~I~\to ~J.}The above set equalitiesEq. 5 ∩∪ to ∪∩andEq. 
6 ∪∩ to ∩∪, respectively become:[3]⋂i∈I⋃j∈JSi,j=⋃f∈JI⋂i∈ISi,f(i){\displaystyle \bigcap _{i\in I}\;\bigcup _{j\in J}S_{i,j}=\bigcup _{f\in J^{I}}\;\bigcap _{i\in I}S_{i,f(i)}}⋃i∈I⋂j∈JSi,j=⋂f∈JI⋃i∈ISi,f(i){\displaystyle \bigcup _{i\in I}\;\bigcap _{j\in J}S_{i,j}=\bigcap _{f\in J^{I}}\;\bigcup _{i\in I}S_{i,f(i)}} which when combined withInclusion 1 ∪∩ is a subset of ∩∪implies:⋃i∈I⋂j∈JSi,j=⋂f∈JI⋃i∈ISi,f(i)⊆⋃g∈IJ⋂j∈JSg(j),j=⋂j∈J⋃i∈ISi,j{\displaystyle \bigcup _{i\in I}\;\bigcap _{j\in J}S_{i,j}~=~\bigcap _{f\in J^{I}}\;\bigcup _{i\in I}S_{i,f(i)}~~\color {Red}{\subseteq }\color {Black}{}~~\bigcup _{g\in I^{J}}\;\bigcap _{j\in J}S_{g(j),j}~=~\bigcap _{j\in J}\;\bigcup _{i\in I}S_{i,j}}where Example application: To apply the general formula to the case of(Ck)k∈K{\displaystyle \left(C_{k}\right)_{k\in K}}and(Dl)l∈L,{\displaystyle \left(D_{l}\right)_{l\in L},}useI:={1,2},{\displaystyle I\colon =\{1,2\},}J1:=K,{\displaystyle J_{1}\colon =K,}J2:=L,{\displaystyle J_{2}\colon =L,}and letT1,k:=Ck{\displaystyle T_{1,k}\colon =C_{k}}for allk∈J1{\displaystyle k\in J_{1}}and letT2,l:=Dl{\displaystyle T_{2,l}\colon =D_{l}}for alll∈J2.{\displaystyle l\in J_{2}.}Every mapf∈∏J∙=def∏i∈IJi=J1×J2=K×L{\displaystyle f\in {\textstyle \prod }J_{\bullet }~{\stackrel {\scriptscriptstyle {\text{def}}}{=}}~{\textstyle \prod \limits _{i\in I}}J_{i}=J_{1}\times J_{2}=K\times L}can bebijectivelyidentified with the pair(f(1),f(2))∈K×L{\displaystyle \left(f(1),f(2)\right)\in K\times L}(the inverse sends(k,l)∈K×L{\displaystyle (k,l)\in K\times L}to the mapf(k,l)∈∏J∙{\displaystyle f_{(k,l)}\in {\textstyle \prod }J_{\bullet }}defined by1↦k{\displaystyle 1\mapsto k}and2↦l;{\displaystyle 2\mapsto l;}this is technically just a change of notation). Recall thatEq. 
5 ∩∪ to ∪∩was⋂i∈I⋃j∈JiTi,j=⋃f∈∏J∙⋂i∈ITi,f(i).{\displaystyle ~\bigcap _{i\in I}\;\bigcup _{j\in J_{i}}T_{i,j}=\bigcup _{f\in {\textstyle \prod }J_{\bullet }}\;\bigcap _{i\in I}T_{i,f(i)}.~}Expanding and simplifying the left hand side gives⋂i∈I⋃j∈JiTi,j=(⋃j∈J1T1,j)∩(⋃j∈J2T2,j)=(⋃k∈KT1,k)∩(⋃l∈LT2,l)=(⋃k∈KCk)∩(⋃l∈LDl){\displaystyle \bigcap _{i\in I}\;\bigcup _{j\in J_{i}}T_{i,j}=\left(\bigcup _{j\in J_{1}}T_{1,j}\right)\cap \left(\;\bigcup _{j\in J_{2}}T_{2,j}\right)=\left(\bigcup _{k\in K}T_{1,k}\right)\cap \left(\;\bigcup _{l\in L}T_{2,l}\right)=\left(\bigcup _{k\in K}C_{k}\right)\cap \left(\;\bigcup _{l\in L}D_{l}\right)}and doing the same to the right hand side gives:⋃f∈∏J∙⋂i∈ITi,f(i)=⋃f∈∏J∙(T1,f(1)∩T2,f(2))=⋃f∈∏J∙(Cf(1)∩Df(2))=⋃(k,l)∈K×L(Ck∩Dl)=⋃l∈Lk∈K,(Ck∩Dl).{\displaystyle \bigcup _{f\in \prod J_{\bullet }}\;\bigcap _{i\in I}T_{i,f(i)}=\bigcup _{f\in \prod J_{\bullet }}\left(T_{1,f(1)}\cap T_{2,f(2)}\right)=\bigcup _{f\in \prod J_{\bullet }}\left(C_{f(1)}\cap D_{f(2)}\right)=\bigcup _{(k,l)\in K\times L}\left(C_{k}\cap D_{l}\right)=\bigcup _{\stackrel {k\in K,}{l\in L}}\left(C_{k}\cap D_{l}\right).} Thus the general identityEq. 5 ∩∪ to ∪∩reduces down to the previously given set equalityEq. 3b:(⋃k∈KCk)∩⋃l∈LDl=⋃l∈Lk∈K,(Ck∩Dl).{\displaystyle \left(\bigcup _{k\in K}C_{k}\right)\cap \;\bigcup _{l\in L}D_{l}=\bigcup _{\stackrel {k\in K,}{l\in L}}\left(C_{k}\cap D_{l}\right).} The next identities are known asDe Morgan's laws.[4] The following four set equalities can be deduced from the equalities7a-7dabove. In general, naively swapping∪{\displaystyle \;\cup \;}and∩{\displaystyle \;\cap \;}may produce a different set (seethis notefor more details). 
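These identities are easy to spot-check numerically. The sketch below verifies Eq. 5 for one small, arbitrarily chosen family S_{i,j}, enumerating the choice functions f ∈ J^I as tuples via `itertools.product`:

```python
from itertools import product

I, J = (0, 1), (0, 1, 2)
# A small concrete family S[i, j], chosen only for illustration.
S = {(i, j): {i + j, 2 * i + j} for i in I for j in J}

# Left side: the intersection over i of the union over j.
lhs = set.intersection(*(set.union(*(S[i, j] for j in J)) for i in I))

# Right side: the union, over all choice functions f : I -> J,
# of the intersection over i of S[i, f(i)].
rhs = set()
for f in product(J, repeat=len(I)):  # tuple f represents the function i |-> f[i]
    rhs |= set.intersection(*(S[i, f[i]] for i in I))

assert lhs == rhs
```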
The equalities⋃i∈I⋂j∈J(Li∖Rj)=⋂j∈J⋃i∈I(Li∖Rj)and⋃j∈J⋂i∈I(Li∖Rj)=⋂i∈I⋃j∈J(Li∖Rj){\displaystyle \bigcup _{i\in I}\;\bigcap _{j\in J}\left(L_{i}\setminus R_{j}\right)~=~\bigcap _{j\in J}\;\bigcup _{i\in I}\left(L_{i}\setminus R_{j}\right)\quad {\text{ and }}\quad \bigcup _{j\in J}\;\bigcap _{i\in I}\left(L_{i}\setminus R_{j}\right)~=~\bigcap _{i\in I}\;\bigcup _{j\in J}\left(L_{i}\setminus R_{j}\right)}found inEq. 7eandEq. 7fare thus unusual in that they state exactly that swapping∪{\displaystyle \;\cup \;}and∩{\displaystyle \;\cap \;}willnotchange the resulting set. Commutativity:[3] ⋃j∈Ji∈I,Si,j=def⋃(i,j)∈I×JSi,j=⋃i∈I(⋃j∈JSi,j)=⋃j∈J(⋃i∈ISi,j){\displaystyle \bigcup _{\stackrel {i\in I,}{j\in J}}S_{i,j}~~{\stackrel {\scriptscriptstyle {\text{def}}}{=}}~\bigcup _{(i,j)\in I\times J}S_{i,j}~=~\bigcup _{i\in I}\left(\bigcup _{j\in J}S_{i,j}\right)~=~\bigcup _{j\in J}\left(\bigcup _{i\in I}S_{i,j}\right)} ⋂j∈Ji∈I,Si,j=def⋂(i,j)∈I×JSi,j=⋂i∈I(⋂j∈JSi,j)=⋂j∈J(⋂i∈ISi,j){\displaystyle \bigcap _{\stackrel {i\in I,}{j\in J}}S_{i,j}~~{\stackrel {\scriptscriptstyle {\text{def}}}{=}}~\bigcap _{(i,j)\in I\times J}S_{i,j}~=~\bigcap _{i\in I}\left(\bigcap _{j\in J}S_{i,j}\right)~=~\bigcap _{j\in J}\left(\bigcap _{i\in I}S_{i,j}\right)} Unions of unions and intersections of intersections:[3] (⋃i∈ILi)∪R=⋃i∈I(Li∪R){\displaystyle \left(\bigcup _{i\in I}L_{i}\right)\cup R~=~\bigcup _{i\in I}\left(L_{i}\cup R\right)}(⋂i∈ILi)∩R=⋂i∈I(Li∩R){\displaystyle \left(\bigcap _{i\in I}L_{i}\right)\cap R~=~\bigcap _{i\in I}\left(L_{i}\cap R\right)}and[3] and ifI=J{\displaystyle I=J}then also:[note 2][3] If(Si,j)(i,j)∈I×J{\displaystyle \left(S_{i,j}\right)_{(i,j)\in I\times J}}is a family of sets then In particular, if(Li)i∈I{\displaystyle \left(L_{i}\right)_{i\in I}}and(Ri)i∈I{\displaystyle \left(R_{i}\right)_{i\in I}}are two families indexed by the same set then(∏i∈ILi)∩∏i∈IRi=∏i∈I(Li∩Ri){\displaystyle \left(\prod _{i\in I}L_{i}\right)\cap \prod _{i\in I}R_{i}~=~\prod _{i\in I}\left(L_{i}\cap 
R_{i}\right)}So for instance,(L×R)∩(L2×R2)=(L∩L2)×(R∩R2){\displaystyle (L\times R)\cap \left(L_{2}\times R_{2}\right)~=~\left(L\cap L_{2}\right)\times \left(R\cap R_{2}\right)}(L×R)∩(L2×R2)∩(L3×R3)=(L∩L2∩L3)×(R∩R2∩R3){\displaystyle (L\times R)\cap \left(L_{2}\times R_{2}\right)\cap \left(L_{3}\times R_{3}\right)~=~\left(L\cap L_{2}\cap L_{3}\right)\times \left(R\cap R_{2}\cap R_{3}\right)}and(L×M×R)∩(L2×M2×R2)=(L∩L2)×(M∩M2)×(R∩R2){\displaystyle (L\times M\times R)\cap \left(L_{2}\times M_{2}\times R_{2}\right)~=~\left(L\cap L_{2}\right)\times \left(M\cap M_{2}\right)\times \left(R\cap R_{2}\right)} Intersections of products indexed by different sets Let(Li)i∈I{\displaystyle \left(L_{i}\right)_{i\in I}}and(Rj)j∈J{\displaystyle \left(R_{j}\right)_{j\in J}}be two families indexed by different sets. Technically,I≠J{\displaystyle I\neq J}implies(∏i∈ILi)∩∏j∈JRj=∅.{\displaystyle \left({\textstyle \prod \limits _{i\in I}}L_{i}\right)\cap {\textstyle \prod \limits _{j\in J}}R_{j}=\varnothing .}However, sometimes these products are somehow identified as the same set through somebijectionor one of these products is identified as a subset of the other via someinjective map, in which case (byabuse of notation) this intersection may be equal to some other (possibly non-empty) set. 
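For finite sets the binary and ternary cases of this product identity can be checked directly with `itertools.product`, which materializes a Cartesian product as a set of tuples:

```python
from itertools import product

L,  R  = {1, 2, 3}, {"a", "b"}
L2, R2 = {2, 3, 4}, {"b", "c"}

# (L × R) ∩ (L2 × R2) = (L ∩ L2) × (R ∩ R2)
assert set(product(L, R)) & set(product(L2, R2)) == set(product(L & L2, R & R2))

# Triple products work the same way.
M, M2 = {5, 6}, {6, 7}
assert (set(product(L, M, R)) & set(product(L2, M2, R2))
        == set(product(L & L2, M & M2, R & R2)))
```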
The binaryCartesian product⨯distributes overarbitrary intersections (when the indexing set is not empty) and over arbitrary unions: L×(⋃i∈IRi)=⋃i∈I(L×Ri)(Left-distributivity of×over∪)L×(⋂i∈IRi)=⋂i∈I(L×Ri)(Left-distributivity of×over⋂i∈IwhenI≠∅)(⋃i∈ILi)×R=⋃i∈I(Li×R)(Right-distributivity of×over∪)(⋂i∈ILi)×R=⋂i∈I(Li×R)(Right-distributivity of×over⋂i∈IwhenI≠∅){\displaystyle {\begin{alignedat}{5}L\times \left(\bigcup _{i\in I}R_{i}\right)&\;=\;\;&&\bigcup _{i\in I}(L\times R_{i})\qquad &&{\text{ (Left-distributivity of }}\,\times \,{\text{ over }}\,\cup \,{\text{)}}\\[1.4ex]L\times \left(\bigcap _{i\in I}R_{i}\right)&\;=\;\;&&\bigcap _{i\in I}(L\times R_{i})\qquad &&{\text{ (Left-distributivity of }}\,\times \,{\text{ over }}\,\bigcap _{i\in I}\,{\text{ when }}I\neq \varnothing \,{\text{)}}\\[1.4ex]\left(\bigcup _{i\in I}L_{i}\right)\times R&\;=\;\;&&\bigcup _{i\in I}(L_{i}\times R)\qquad &&{\text{ (Right-distributivity of }}\,\times \,{\text{ over }}\,\cup \,{\text{)}}\\[1.4ex]\left(\bigcap _{i\in I}L_{i}\right)\times R&\;=\;\;&&\bigcap _{i\in I}(L_{i}\times R)\qquad &&{\text{ (Right-distributivity of }}\,\times \,{\text{ over }}\,\bigcap _{i\in I}\,{\text{ when }}I\neq \varnothing \,{\text{)}}\\[1.4ex]\end{alignedat}}} Suppose that for eachi∈I,{\displaystyle i\in I,}Ji{\displaystyle J_{i}}is a non-empty index set and for eachj∈Ji,{\displaystyle j\in J_{i},}letTi,j{\displaystyle T_{i,j}}be any set (for example, to apply this law to(Si,j)(i,j)∈I×J,{\displaystyle \left(S_{i,j}\right)_{(i,j)\in I\times J},}useJi:=J{\displaystyle J_{i}\colon =J}for alli∈I{\displaystyle i\in I}and useTi,j:=Si,j{\displaystyle T_{i,j}\colon =S_{i,j}}for alli∈I{\displaystyle i\in I}and allj∈Ji=J{\displaystyle j\in J_{i}=J}). 
Let∏J∙=def∏i∈IJi{\displaystyle {\textstyle \prod }J_{\bullet }~{\stackrel {\scriptscriptstyle {\text{def}}}{=}}~\prod _{i\in I}J_{i}}denote theCartesian product, which (asmentioned above) can be interpreted as the set of all functionsf:I→⋃i∈IJi{\displaystyle f~:~I~\to ~{\textstyle \bigcup \limits _{i\in I}}J_{i}}such thatf(i)∈Ji{\displaystyle f(i)\in J_{i}}for everyi∈I{\displaystyle i\in I}. Then where∏J∙=def∏i∈IJi.{\displaystyle {\textstyle \prod }J_{\bullet }~{\stackrel {\scriptscriptstyle {\text{def}}}{=}}~{\textstyle \prod \limits _{i\in I}}J_{i}.} For unions, only the following is guaranteed in general:⋃j∈J∏i∈ISi,j⊆∏i∈I⋃j∈JSi,jand⋃i∈I∏j∈JSi,j⊆∏j∈J⋃i∈ISi,j{\displaystyle \bigcup _{j\in J}\;\prod _{i\in I}S_{i,j}~~\color {Red}{\subseteq }\color {Black}{}~~\prod _{i\in I}\;\bigcup _{j\in J}S_{i,j}\qquad {\text{ and }}\qquad \bigcup _{i\in I}\;\prod _{j\in J}S_{i,j}~~\color {Red}{\subseteq }\color {Black}{}~~\prod _{j\in J}\;\bigcup _{i\in I}S_{i,j}}where(Si,j)(i,j)∈I×J{\displaystyle \left(S_{i,j}\right)_{(i,j)\in I\times J}}is a family of sets. 
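The failure of equality for unions of products is easy to exhibit. The following checks both the guaranteed inclusion and its strictness on a tiny example:

```python
from itertools import product

L,  R  = {1}, {1}
L2, R2 = {2}, {2}

union_of_products = set(product(L, R)) | set(product(L2, R2))
product_of_unions = set(product(L | L2, R | R2))

# The inclusion (union of products) ⊆ (product of unions) always holds ...
assert union_of_products <= product_of_unions
# ... and here it is strict: (1, 2) and (2, 1) appear only on the right.
assert union_of_products < product_of_unions
```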
However,(L×R)∪(L2×R2)=[(L∖L2)×R]∪[(L2∖L)×R2]∪[(L∩L2)×(R∪R2)]=[L×(R∖R2)]∪[L2×(R2∖R)]∪[(L∪L2)×(R∩R2)]{\displaystyle {\begin{alignedat}{9}\left(L\times R\right)~\cup ~\left(L_{2}\times R_{2}\right)~&=~\left[\left(L\setminus L_{2}\right)\times R\right]~\cup ~\left[\left(L_{2}\setminus L\right)\times R_{2}\right]~\cup ~\left[\left(L\cap L_{2}\right)\times \left(R\cup R_{2}\right)\right]\\[0.5ex]~&=~\left[L\times \left(R\setminus R_{2}\right)\right]~\cup ~\left[L_{2}\times \left(R_{2}\setminus R\right)\right]~\cup ~\left[\left(L\cup L_{2}\right)\times \left(R\cap R_{2}\right)\right]\\\end{alignedat}}} If(Li)i∈I{\displaystyle \left(L_{i}\right)_{i\in I}}and(Ri)i∈I{\displaystyle \left(R_{i}\right)_{i\in I}}are two families of sets then:(∏i∈ILi)∖∏i∈IRi=⋃j∈I∏i∈I{Lj∖Rjifi=jLiifi≠j=⋃j∈I[(Lj∖Rj)×∏j≠ii∈I,Li]=⋃Lj⊈Rjj∈I,[(Lj∖Rj)×∏j≠ii∈I,Li]{\displaystyle {\begin{alignedat}{9}\left(\prod _{i\in I}L_{i}\right)~\setminus ~\prod _{i\in I}R_{i}~&=~\;~\bigcup _{j\in I}\;~\prod _{i\in I}{\begin{cases}L_{j}\,\setminus \,R_{j}&{\text{ if }}i=j\\L_{i}&{\text{ if }}i\neq j\\\end{cases}}\\[0.5ex]~&=~\;~\bigcup _{j\in I}\;~{\Big [}\left(L_{j}\,\setminus \,R_{j}\right)~\times ~\prod _{\stackrel {i\in I,}{j\neq i}}L_{i}{\Big ]}\\[0.5ex]~&=~\bigcup _{\stackrel {j\in I,}{L_{j}\not \subseteq R_{j}}}{\Big [}\left(L_{j}\,\setminus \,R_{j}\right)~\times ~\prod _{\stackrel {i\in I,}{j\neq i}}L_{i}{\Big ]}\\[0.3ex]\end{alignedat}}}so for instance,(L×R)∖(L2×R2)=[(L∖L2)×R]∪[L×(R∖R2)]{\displaystyle {\begin{alignedat}{9}\left(L\times R\right)~\setminus ~\left(L_{2}\times R_{2}\right)~&=~\left[\left(L\,\setminus \,L_{2}\right)\times R\right]~\cup ~\left[L\times \left(R\,\setminus \,R_{2}\right)\right]\\\end{alignedat}}}and(L×M×R)∖(L2×M2×R2)=[(L∖L2)×M×R]∪[L×(M∖M2)×R]∪[L×M×(R∖R2)]{\displaystyle (L\times M\times R)~\setminus ~\left(L_{2}\times M_{2}\times R_{2}\right)~=~\left[\left(L\,\setminus \,L_{2}\right)\times M\times R\right]~\cup ~\left[L\times \left(M\,\setminus \,M_{2}\right)\times R\right]~\cup 
~\left[L\times M\times \left(R\,\setminus \,R_{2}\right)\right]} (∏i∈ILi)△(∏i∈IRi)=(∏i∈ILi)∪(∏i∈IRi)∖∏i∈ILi∩Ri{\displaystyle {\begin{alignedat}{9}\left(\prod _{i\in I}L_{i}\right)~\triangle ~\left(\prod _{i\in I}R_{i}\right)~&=~\;~\left(\prod _{i\in I}L_{i}\right)~\cup ~\left(\prod _{i\in I}R_{i}\right)\;\setminus \;\prod _{i\in I}L_{i}\cap R_{i}\\[0.5ex]\end{alignedat}}} Letf:X→Y{\displaystyle f:X\to Y}be any function. LetLandR{\displaystyle L{\text{ and }}R}be completely arbitrary sets. AssumeA⊆XandC⊆Y.{\displaystyle A\subseteq X{\text{ and }}C\subseteq Y.} Letf:X→Y{\displaystyle f:X\to Y}be any function, where we denote itsdomainX{\displaystyle X}bydomain⁡f{\displaystyle \operatorname {domain} f}and denote itscodomainY{\displaystyle Y}bycodomain⁡f.{\displaystyle \operatorname {codomain} f.} Many of the identities below do not actually require that the sets be somehow related tof{\displaystyle f}'s domain or codomain (that is, toX{\displaystyle X}orY{\displaystyle Y}) so when some kind of relationship is necessary then it will be clearly indicated. Because of this, in this article, ifL{\displaystyle L}is declared to be "any set," and it is not indicated thatL{\displaystyle L}must be somehow related toX{\displaystyle X}orY{\displaystyle Y}(say for instance, that it be a subsetX{\displaystyle X}orY{\displaystyle Y}) then it is meant thatL{\displaystyle L}is truly arbitrary.[note 3]This generality is useful in situations wheref:X→Y{\displaystyle f:X\to Y}is a map between two subsetsX⊆U{\displaystyle X\subseteq U}andY⊆V{\displaystyle Y\subseteq V}of some larger setsU{\displaystyle U}andV,{\displaystyle V,}and where the setL{\displaystyle L}might not be entirely contained inX=domain⁡f{\displaystyle X=\operatorname {domain} f}and/orY=codomain⁡f{\displaystyle Y=\operatorname {codomain} f}(e.g. 
if all that is known aboutL{\displaystyle L}is thatL⊆U{\displaystyle L\subseteq U}); in such a situation it may be useful to know what can and cannot be said aboutf(L){\displaystyle f(L)}and/orf−1(L){\displaystyle f^{-1}(L)}without having to introduce a (potentially unnecessary) intersection such as:f(L∩X){\displaystyle f(L\cap X)}and/orf−1(L∩Y).{\displaystyle f^{-1}(L\cap Y).} Images and preimages of sets IfL{\displaystyle L}isanyset then theimageofL{\displaystyle L}underf{\displaystyle f}is defined to be the set:f(L)=def{f(l):l∈L∩domain⁡f}{\displaystyle f(L)~{\stackrel {\scriptscriptstyle {\text{def}}}{=}}~\{\,f(l)~:~l\in L\cap \operatorname {domain} f\,\}}while thepreimageofL{\displaystyle L}underf{\displaystyle f}is:f−1(L)=def{x∈domain⁡f:f(x)∈L}{\displaystyle f^{-1}(L)~{\stackrel {\scriptscriptstyle {\text{def}}}{=}}~\{\,x\in \operatorname {domain} f~:~f(x)\in L\,\}}where ifL={s}{\displaystyle L=\{s\}}is a singleton set then thefiberorpreimageofs{\displaystyle s}underf{\displaystyle f}isf−1(s)=deff−1({s})={x∈domain⁡f:f(x)=s}.{\displaystyle f^{-1}(s)~{\stackrel {\scriptscriptstyle {\text{def}}}{=}}~f^{-1}(\{s\})~=~\{\,x\in \operatorname {domain} f~:~f(x)=s\,\}.} Denote byIm⁡f{\displaystyle \operatorname {Im} f}orimage⁡f{\displaystyle \operatorname {image} f}theimageorrangeoff:X→Y,{\displaystyle f:X\to Y,}which is the set:Im⁡f=deff(X)=deff(domain⁡f)={f(x):x∈domain⁡f}.{\displaystyle \operatorname {Im} f~{\stackrel {\scriptscriptstyle {\text{def}}}{=}}~f(X)~{\stackrel {\scriptscriptstyle {\text{def}}}{=}}~f(\operatorname {domain} f)~=~\{f(x)~:~x\in \operatorname {domain} f\}.} Saturated sets A setA{\displaystyle A}is said to bef{\displaystyle f}-saturatedor asaturated setif any of the following equivalent conditions are satisfied:[3] For a setA{\displaystyle A}to bef{\displaystyle f}-saturated, it is necessary thatA⊆domain⁡f.{\displaystyle A\subseteq \operatorname {domain} f.} Compositions and restrictions of functions Iff{\displaystyle f}andg{\displaystyle g}are 
maps theng∘f{\displaystyle g\circ f}denotes thecompositionmapg∘f:{x∈domain⁡f:f(x)∈domain⁡g}→codomain⁡g{\displaystyle g\circ f~:~\{\,x\in \operatorname {domain} f~:~f(x)\in \operatorname {domain} g\,\}~\to ~\operatorname {codomain} g}with domain and codomaindomain⁡(g∘f)={x∈domain⁡f:f(x)∈domain⁡g}codomain⁡(g∘f)=codomain⁡g{\displaystyle {\begin{alignedat}{4}\operatorname {domain} (g\circ f)&=\{\,x\in \operatorname {domain} f~:~f(x)\in \operatorname {domain} g\,\}\\[0.4ex]\operatorname {codomain} (g\circ f)&=\operatorname {codomain} g\\[0.7ex]\end{alignedat}}}defined by(g∘f)(x)=defg(f(x)).{\displaystyle (g\circ f)(x)~{\stackrel {\scriptscriptstyle {\text{def}}}{=}}~g(f(x)).} Therestrictionoff:X→Y{\displaystyle f:X\to Y}toL,{\displaystyle L,}denoted byf|L,{\displaystyle f{\big \vert }_{L},}is the mapf|L:L∩domain⁡f→Y{\displaystyle f{\big \vert }_{L}~:~L\cap \operatorname {domain} f~\to ~Y}withdomain⁡f|L=L∩domain⁡f{\displaystyle \operatorname {domain} f{\big \vert }_{L}~=~L\cap \operatorname {domain} f}defined by sendingx∈L∩domain⁡f{\displaystyle x\in L\cap \operatorname {domain} f}tof(x);{\displaystyle f(x);}that is,f|L(x)=deff(x).{\displaystyle f{\big \vert }_{L}(x)~{\stackrel {\scriptscriptstyle {\text{def}}}{=}}~f(x).}Alternatively,f|L=f∘In⁡{\displaystyle ~f{\big \vert }_{L}~=~f\circ \operatorname {In} ~}whereIn⁡:L∩X→X{\displaystyle ~\operatorname {In} ~:~L\cap X\to X~}denotes theinclusion map, which is defined byIn⁡(s)=defs.{\displaystyle \operatorname {In} (s)~{\stackrel {\scriptscriptstyle {\text{def}}}{=}}~s.} If(Li)i∈I{\displaystyle \left(L_{i}\right)_{i\in I}}is a family of arbitrary sets indexed byI≠∅{\displaystyle I\neq \varnothing }then:[5]f(⋂i∈ILi)⊆⋂i∈If(Li)f(⋃i∈ILi)=⋃i∈If(Li)f−1(⋃i∈ILi)=⋃i∈If−1(Li)f−1(⋂i∈ILi)=⋂i∈If−1(Li){\displaystyle {\begin{alignedat}{4}f\left(\bigcap _{i\in I}L_{i}\right)\;&~\;\color {Red}{\subseteq }\color {Black}{}~\;\;\;\bigcap _{i\in I}f\left(L_{i}\right)\\f\left(\bigcup _{i\in I}L_{i}\right)\;&~=~\;\bigcup _{i\in 
I}f\left(L_{i}\right)\\f^{-1}\left(\bigcup _{i\in I}L_{i}\right)\;&~=~\;\bigcup _{i\in I}f^{-1}\left(L_{i}\right)\\f^{-1}\left(\bigcap _{i\in I}L_{i}\right)\;&~=~\;\bigcap _{i\in I}f^{-1}\left(L_{i}\right)\\\end{alignedat}}} So of these four identities, it is only images of intersections that are not always preserved. Preimages preserve all basic set operations. Unions are preserved by both images and preimages. If allLi{\displaystyle L_{i}}aref{\displaystyle f}-saturated then⋂i∈ILi{\displaystyle \bigcap _{i\in I}L_{i}}will bef{\displaystyle f}-saturated and equality will hold in the first relation above; explicitly, this means: If(Ai)i∈I{\displaystyle \left(A_{i}\right)_{i\in I}}is a family of arbitrary subsets ofX=domain⁡f,{\displaystyle X=\operatorname {domain} f,}which means thatAi⊆X{\displaystyle A_{i}\subseteq X}for alli,{\displaystyle i,}thenConditional Equality 10abecomes: Throughout, letL{\displaystyle L}andR{\displaystyle R}be any sets and letf:X→Y{\displaystyle f:X\to Y}be any function. Summary As the table below shows, set equality is not guaranteed only for images of: intersections, set subtractions, and symmetric differences. Preimages preserve set operations Preimages of sets are well-behaved with respect to all basic set operations: f−1(L∪R)=f−1(L)∪f−1(R)f−1(L∩R)=f−1(L)∩f−1(R)f−1(L∖R)=f−1(L)∖f−1(R)f−1(L△R)=f−1(L)△f−1(R){\displaystyle {\begin{alignedat}{4}f^{-1}(L\cup R)~&=~f^{-1}(L)\cup f^{-1}(R)\\f^{-1}(L\cap R)~&=~f^{-1}(L)\cap f^{-1}(R)\\f^{-1}(L\setminus \,R)~&=~f^{-1}(L)\setminus \,f^{-1}(R)\\f^{-1}(L\,\triangle \,R)~&=~f^{-1}(L)\,\triangle \,f^{-1}(R)\\\end{alignedat}}} In words, preimagesdistribute overunions, intersections, set subtraction, and symmetric difference.
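Since a preimage can be computed by filtering the domain, all four distributive laws can be spot-checked mechanically. A sketch with a hypothetical helper `preimage`; the map x ↦ x² is deliberately non-injective, and the laws hold anyway:

```python
def preimage(f, domain, S):
    """f^{-1}(S) = {x in domain : f(x) in S}; S may be any set at all."""
    return {x for x in domain if f(x) in S}

X = range(-3, 4)
f = lambda x: x * x          # an illustrative non-injective map
L, R = {0, 1, 4}, {4, 9}

# Preimages distribute over union, intersection, difference and
# symmetric difference (Python operators |, &, -, ^).
assert preimage(f, X, L | R) == preimage(f, X, L) | preimage(f, X, R)
assert preimage(f, X, L & R) == preimage(f, X, L) & preimage(f, X, R)
assert preimage(f, X, L - R) == preimage(f, X, L) - preimage(f, X, R)
assert preimage(f, X, L ^ R) == preimage(f, X, L) ^ preimage(f, X, R)
```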
Images only preserve unions Images of unions are well-behaved: f(L∪R)=f(L)∪f(R){\displaystyle {\begin{alignedat}{4}f(L\cup R)~&=~f(L)\cup f(R)\\\end{alignedat}}} but images of the other basic set operations are not, since only the following are guaranteed in general: f(L∩R)⊆f(L)∩f(R)f(L∖R)⊇f(L)∖f(R)f(L△R)⊇f(L)△f(R){\displaystyle {\begin{alignedat}{4}f(L\cap R)~&\subseteq ~f(L)\cap f(R)\\f(L\setminus R)~&\supseteq ~f(L)\setminus f(R)\\f(L\triangle R)~&\supseteq ~f(L)\,\triangle \,f(R)\\\end{alignedat}}} In words, imagesdistribute overunions but not necessarily over intersections, set subtraction, or symmetric difference. What these latter three operations have in common is set subtraction: they either are set subtractionL∖R{\displaystyle L\setminus R}or else they can naturally be defined as the set subtraction of two sets:L∩R=L∖(L∖R)andL△R=(L∪R)∖(L∩R).{\displaystyle L\cap R=L\setminus (L\setminus R)\quad {\text{ and }}\quad L\triangle R=(L\cup R)\setminus (L\cap R).} IfL=X{\displaystyle L=X}thenf(X∖R)⊇f(X)∖f(R){\displaystyle f(X\setminus R)\supseteq f(X)\setminus f(R)}where, as in the more general case, equality is not guaranteed.
Iff{\displaystyle f}is surjective thenf(X∖R)⊇Y∖f(R),{\displaystyle f(X\setminus R)~\supseteq ~Y\setminus f(R),}which can be rewritten as:f(R∁)⊇f(R)∁{\displaystyle f\left(R^{\complement }\right)~\supseteq ~f(R)^{\complement }}ifR∁=defX∖R{\displaystyle R^{\complement }~{\stackrel {\scriptscriptstyle {\text{def}}}{=}}~X\setminus R}andf(R)∁=defY∖f(R).{\displaystyle f(R)^{\complement }~{\stackrel {\scriptscriptstyle {\text{def}}}{=}}~Y\setminus f(R).} Iff:{1,2}→Y{\displaystyle f:\{1,2\}\to Y}is constant,L={1},{\displaystyle L=\{1\},}andR={2}{\displaystyle R=\{2\}}then all four of the set containmentsf(L∩R)⊊f(L)∩f(R)f(L∖R)⊋f(L)∖f(R)f(X∖R)⊋f(X)∖f(R)f(L△R)⊋f(L)△f(R){\displaystyle {\begin{alignedat}{4}f(L\cap R)~&\subsetneq ~f(L)\cap f(R)\\f(L\setminus R)~&\supsetneq ~f(L)\setminus f(R)\\f(X\setminus R)~&\supsetneq ~f(X)\setminus f(R)\\f(L\triangle R)~&\supsetneq ~f(L)\triangle f(R)\\\end{alignedat}}}arestrict/proper(that is, the sets are not equal) since one side is the empty set while the other is non-empty. Thus equality is not guaranteed for even the simplest of functions. The example above is now generalized to show that these four set equalities can fail for anyconstant functionwhose domain contains at least two (distinct) points. 
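The constant f : {1, 2} → Y counterexample just given is short enough to run. With a hypothetical helper `image` and the constant value 0 standing in for y:

```python
def image(f, S):
    """f(S) = {f(x) : x in S}."""
    return {f(x) for x in S}

f = lambda x: 0          # a constant function on the domain {1, 2}
L, R = {1}, {2}

assert image(f, L & R) == set()            # f(L ∩ R) = f(∅) = ∅ ...
assert image(f, L) & image(f, R) == {0}    # ... but f(L) ∩ f(R) = {0}

assert image(f, L - R) == {0}              # f(L \ R) = f(L) = {0} ...
assert image(f, L) - image(f, R) == set()  # ... but f(L) \ f(R) = ∅
```

In each pair one side is empty and the other is not, so every containment is proper.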
Example:Letf:X→Y{\displaystyle f:X\to Y}be any constant function with imagef(X)={y}{\displaystyle f(X)=\{y\}}and suppose thatL,R⊆X{\displaystyle L,R\subseteq X}are non-empty disjoint subsets; that is,L≠∅,R≠∅,{\displaystyle L\neq \varnothing ,R\neq \varnothing ,}andL∩R=∅,{\displaystyle L\cap R=\varnothing ,}which implies that all of the setsL△R=L∪R,{\displaystyle L~\triangle ~R=L\cup R,}L∖R=L,{\displaystyle \,L\setminus R=L,}andX∖R⊇L∖R{\displaystyle X\setminus R\supseteq L\setminus R}are not empty and so consequently, their images underf{\displaystyle f}are all equal to{y}.{\displaystyle \{y\}.} What the set operations in these four examples have in common is that they either are set subtraction∖{\displaystyle \setminus }(examples (1) and (2)) or else they can naturally be defined as the set subtraction of two sets (examples (3) and (4)). Mnemonic: In fact, for each of the above four set formulas for which equality is not guaranteed, the direction of the containment (that is, whether to use⊆or⊇{\displaystyle \,\subseteq {\text{ or }}\supseteq \,}) can always be deduced by imagining the functionf{\displaystyle f}as beingconstantand the two sets (L{\displaystyle L}andR{\displaystyle R}) as being non-empty disjoint subsets of its domain. This is becauseeveryequality fails for such a function and sets: one side will always be∅{\displaystyle \varnothing }and the other non-empty; from this fact, the correct choice of⊆or⊇{\displaystyle \,\subseteq {\text{ or }}\supseteq \,}can be deduced by answering: "which side is empty?"
For example, to decide whether the ? in f(L △ R) ∖ f(R) ? f((L △ R) ∖ R) should be ⊆ or ⊇, pretend[note 5] that f is constant and that L △ R and R are non-empty disjoint subsets of f's domain; then the left hand side would be empty (since f(L △ R) ∖ f(R) = {f's single value} ∖ {f's single value} = ∅), which indicates that ? should be ⊆ (the resulting statement is always guaranteed to be true), because this is the choice that makes ∅ = left hand side ? right hand side true. Alternatively, the correct direction of containment can also be deduced by consideration of any constant f : {1, 2} → Y with L = {1} and R = {2}.

Furthermore, this mnemonic can also be used to correctly deduce whether or not a set operation always distributes over images or preimages; for example, to determine whether or not f(L ∩ R) always equals f(L) ∩ f(R), or alternatively, whether or not f⁻¹(L ∩ R) always equals f⁻¹(L) ∩ f⁻¹(R) (although ∩ was used here, it can be replaced by ∪, ∖, or △). The answer to such a question can, as before, be deduced by consideration of this constant function: the answer for the general case (that is, for arbitrary f, L, and R) is always the same as the answer for this choice of (constant) function and disjoint non-empty sets.
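The mnemonic's prediction for this example — that f(L △ R) ∖ f(R) ⊆ f((L △ R) ∖ R) holds in general — can be stress-tested by brute force over a small domain. A sketch (the domain, codomain, and helper `image` are illustrative choices, not part of the article's notation):

```python
from itertools import combinations, product

def image(f, s):
    """Image of the set s under the function f (given as a dict)."""
    return {f[x] for x in s}

X, Y = (0, 1, 2), ("a", "b")
subsets = [set(c) for r in range(len(X) + 1) for c in combinations(X, r)]

# The mnemonic predicts f(L △ R) \ f(R) ⊆ f((L △ R) \ R) for arbitrary f, L, R.
# Verify it for every function f: X -> Y and every pair of subsets L, R of X.
for values in product(Y, repeat=len(X)):
    f = dict(zip(X, values))
    for L in subsets:
        for R in subsets:
            assert image(f, L ^ R) - image(f, R) <= image(f, (L ^ R) - R)
```

Exhaustive checking over 8 functions and 64 subset pairs is of course not a proof, but it agrees with the direction the constant-function probe predicts.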
Characterizations of when equality holds for all sets: For any function f : X → Y, the following statements are equivalent: In particular, if a map is not known to be injective then, barring additional information, there is no guarantee that any of the equalities in statements (b)–(e) hold. An example above can be used to help prove this characterization. Indeed, comparison of that example with such a proof suggests that the example is representative of the fundamental reason why one of these four equalities in statements (b)–(e) might not hold (that is, representative of "what goes wrong" when a set equality does not hold).

f(L ∩ R) ⊆ f(L) ∩ f(R)   always holds.

Characterizations of equality: The following statements are equivalent: Sufficient conditions for equality: Equality holds if any of the following are true: In addition, the following always hold:

f(f⁻¹(L) ∩ R) = L ∩ f(R)
f(f⁻¹(L) ∪ R) = (L ∩ Im f) ∪ f(R)

f(L ∖ R) ⊇ f(L) ∖ f(R)   always holds.

Characterizations of equality: The following statements are equivalent:[proof 1] Necessary conditions for equality (excluding characterizations): If equality holds then the following are necessarily true: Sufficient conditions for equality: Equality holds if any of the following are true:

f(X ∖ R) ⊇ f(X) ∖ f(R)   always holds, where f : X → Y.

Characterizations of equality: The following statements are equivalent:[proof 1] where if R ⊆ domain f then this list can be extended to include: Sufficient conditions for equality: Equality holds if any of the following are true:
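The two identities stated to always hold — f(f⁻¹(L) ∩ R) = L ∩ f(R) and f(f⁻¹(L) ∪ R) = (L ∩ Im f) ∪ f(R) — can be verified exhaustively on a small domain. A Python sketch (the helpers `image` and `preimage` are defined here for illustration):

```python
from itertools import combinations, product

def image(f, s):
    """Image of the set s under f (given as a dict)."""
    return {f[x] for x in s}

def preimage(f, s):
    """Preimage f^{-1}(s) of a set s of codomain elements."""
    return {x for x in f if f[x] in s}

X, Y = (0, 1, 2), ("a", "b")
subsets_X = [set(c) for r in range(len(X) + 1) for c in combinations(X, r)]
subsets_Y = [set(c) for r in range(len(Y) + 1) for c in combinations(Y, r)]

for values in product(Y, repeat=len(X)):
    f = dict(zip(X, values))
    img = image(f, X)  # Im f
    for L in subsets_Y:
        for R in subsets_X:
            # f(f^{-1}(L) ∩ R) = L ∩ f(R)
            assert image(f, preimage(f, L) & R) == L & image(f, R)
            # f(f^{-1}(L) ∪ R) = (L ∩ Im f) ∪ f(R)
            assert image(f, preimage(f, L) | R) == (L & img) | image(f, R)
```

Note that here L ranges over subsets of the codomain and R over subsets of the domain, matching the mixed roles the two identities require.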
f(L △ R) ⊇ f(L) △ f(R)   always holds.

Characterizations of equality: The following statements are equivalent: Necessary conditions for equality (excluding characterizations): If equality holds then the following are necessarily true: Sufficient conditions for equality: Equality holds if any of the following are true:

For any function f : X → Y and any sets L and R,[proof 2]

f(L ∖ R) = Y ∖ {y ∈ Y : L ∩ f⁻¹(y) ⊆ R}
         = f(L) ∖ {y ∈ f(L) : L ∩ f⁻¹(y) ⊆ R}
         = f(L) ∖ {y ∈ f(L ∩ R) : L ∩ f⁻¹(y) ⊆ R}
         = f(L) ∖ {y ∈ V : L ∩ f⁻¹(y) ⊆ R}   for any superset V ⊇ f(L ∩ R)
         = f(S) ∖ {y ∈ f(S) : L ∩ f⁻¹(y) ⊆ R}   for any superset S ⊇ L ∩ X.

Taking L := X = domain f in the above formulas gives:

f(X ∖ R) = Y ∖ {y ∈ Y : f⁻¹(y) ⊆ R}
         = f(X) ∖ {y ∈ f(X) : f⁻¹(y) ⊆ R}
         = f(X) ∖ {y ∈ f(R) : f⁻¹(y) ⊆ R}
         = f(X) ∖ {y ∈ W : f⁻¹(y) ⊆ R}   for any superset W ⊇ f(R)

where the set {y ∈ f(R) : f⁻¹(y) ⊆ R} is equal to the image under f of the largest f-saturated subset of R.

It follows from L △ R = (L ∪ R) ∖ (L ∩ R) and the above formulas for the image of a set subtraction that for any function f : X → Y and any sets L and R,

f(L △ R) = Y ∖ {y ∈ Y : L ∩ f⁻¹(y) = R ∩ f⁻¹(y)}
         = f(L ∪ R) ∖ {y ∈ f(L ∪ R) : L ∩ f⁻¹(y) = R ∩ f⁻¹(y)}
         = f(L ∪ R) ∖ {y ∈ f(L ∩ R) : L ∩ f⁻¹(y) = R ∩ f⁻¹(y)}
         = f(L ∪ R) ∖ {y ∈ V : L ∩ f⁻¹(y) = R ∩ f⁻¹(y)}   for any superset V ⊇ f(L ∩ R)
         = f(S) ∖ {y ∈ f(S) : L ∩ f⁻¹(y) = R ∩ f⁻¹(y)}   for any superset S ⊇ (L ∪ R) ∩ X.

It follows from the above formulas for the image of a set subtraction that for any function f : X → Y and any set L,

f(L) = Y ∖ {y ∈ Y : f⁻¹(y) ∩ L = ∅}
     = Im f ∖ {y ∈ Im f : f⁻¹(y) ∩ L = ∅}
     = W ∖ {y ∈ W : f⁻¹(y) ∩ L = ∅}   for any superset W ⊇ f(L)

This is more easily seen as being a consequence of the fact that for any y ∈ Y, f⁻¹(y) ∩ L = ∅ if and only if y ∉ f(L).

It follows from the above formulas for the image of a set that for any function f : X → Y and any sets L and R,

f(L ∩ R) = Y ∖ {y ∈ Y : L ∩ R ∩ f⁻¹(y) = ∅}
         = f(L) ∖ {y ∈ f(L) : L ∩ R ∩ f⁻¹(y) = ∅}
         = f(L) ∖ {y ∈ U : L ∩ R ∩ f⁻¹(y) = ∅}   for any superset U ⊇ f(L)
         = f(R) ∖ {y ∈ f(R) : L ∩ R ∩ f⁻¹(y) = ∅}
         = f(R) ∖ {y ∈ V : L ∩ R ∩ f⁻¹(y) = ∅}   for any superset V ⊇ f(R)
         = f(L) ∩ f(R) ∖ {y ∈ f(L) ∩ f(R) : L ∩ R ∩ f⁻¹(y) = ∅}

where moreover, for any y ∈ Y,

The sets U and V mentioned above could, in particular, be any of the sets f(L ∪ R), Im f, or Y, for example.
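These fiber formulas can be sanity-checked numerically. A Python sketch comparing f(L ∖ R) computed directly against the formula f(L) ∖ {y ∈ f(L) : L ∩ f⁻¹(y) ⊆ R} (the particular function and sets are arbitrary illustrations):

```python
def image(f, s):
    """Image of the set s under f (given as a dict)."""
    return {f[x] for x in s}

def fiber(f, y):
    """Preimage f^{-1}(y) of a single point."""
    return {x for x in f if f[x] == y}

f = {0: "a", 1: "a", 2: "b", 3: "c"}
L, R = {0, 2, 3}, {0, 1, 3}

# f(L \ R) computed directly ...
direct = image(f, L - R)
# ... and via the fiber formula  f(L) \ { y in f(L) : L ∩ f^{-1}(y) ⊆ R }
via_fibers = image(f, L) - {y for y in image(f, L) if (L & fiber(f, y)) <= R}
assert direct == via_fibers
```

Here "a" and "c" are excluded because every point of L in their fibers lies in R, while "b" survives because the fiber point 2 ∈ L is outside R — exactly the mechanism the formula encodes.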
Let L and R be arbitrary sets, f : X → Y be any map, and let A ⊆ X and C ⊆ Y.

Equality holds if any of the following are true:

(Pre)Images of operations on images

Since f(L) ∖ f(L ∖ R) = {y ∈ f(L ∩ R) : L ∩ f⁻¹(y) ⊆ R},

f⁻¹(f(L) ∖ f(L ∖ R)) = f⁻¹({y ∈ f(L ∩ R) : L ∩ f⁻¹(y) ⊆ R})
                     = {x ∈ f⁻¹(f(L ∩ R)) : L ∩ f⁻¹(f(x)) ⊆ R}

Since f(X) ∖ f(L ∖ R) = {y ∈ f(X) : L ∩ f⁻¹(y) ⊆ R},

f⁻¹(Y ∖ f(L ∖ R)) = f⁻¹(f(X) ∖ f(L ∖ R))
                  = f⁻¹({y ∈ f(X) : L ∩ f⁻¹(y) ⊆ R})
                  = {x ∈ X : L ∩ f⁻¹(f(x)) ⊆ R}
                  = X ∖ f⁻¹(f(L ∖ R))

Using L := X, this becomes f(X) ∖ f(X ∖ R) = {y ∈ f(R) : f⁻¹(y) ⊆ R} and

f⁻¹(Y ∖ f(X ∖ R)) = f⁻¹(f(X) ∖ f(X ∖ R))
                  = f⁻¹({y ∈ f(R) : f⁻¹(y) ⊆ R})
                  = {r ∈ R ∩ X : f⁻¹(f(r)) ⊆ R}
                  ⊆ R

and so

f⁻¹(Y ∖ f(L)) = f⁻¹(f(X) ∖ f(L))
              = f⁻¹({y ∈ f(X ∖ L) : f⁻¹(y) ∩ L = ∅})
              = {x ∈ X ∖ L : f(x) ∉ f(L)}
              = X ∖ f⁻¹(f(L))
              ⊆ X ∖ L

Let ∏ Y• := ∏_{j∈J} Y_j and for every k ∈ J, let π_k : ∏_{j∈J} Y_j → Y_k denote the canonical projection onto Y_k.

Definitions

Given a collection of maps F_j : X → Y_j indexed by j ∈ J, define the map

(F_j)_{j∈J} : X → ∏_{j∈J} Y_j,   x ↦ (F_j(x))_{j∈J},

which is also denoted by F• = (F_j)_{j∈J}. This is the unique map satisfying π_j ∘ F• = F_j for all j ∈ J.

Conversely, if given a map F : X → ∏_{j∈J} Y_j then F = (π_j ∘ F)_{j∈J}. Explicitly, what this means is that if F_k := π_k ∘ F : X → Y_k is defined for every k ∈ J, then F is the unique map satisfying π_j ∘ F = F_j for all j ∈ J; or said more briefly, F = (F_j)_{j∈J}.

The map F• = (F_j)_{j∈J} : X → ∏_{j∈J} Y_j should not be confused with the Cartesian product ∏_{j∈J} F_j of these maps, which is by definition the map

∏_{j∈J} F_j : ∏_{j∈J} X → ∏_{j∈J} Y_j,   (x_j)_{j∈J} ↦ (F_j(x_j))_{j∈J}

with domain ∏_{j∈J} X = X^J rather than X.

Preimage and images of a Cartesian product

Suppose F• = (F_j)_{j∈J} : X → ∏_{j∈J} Y_j.

If A ⊆ X then F•(A) ⊆ ∏_{j∈J} F_j(A).

If B ⊆ ∏_{j∈J} Y_j then F•⁻¹(B) ⊆ ⋂_{j∈J} F_j⁻¹(π_j(B)), where equality will hold if B = ∏_{j∈J} π_j(B), in which case F•⁻¹(B) = ⋂_{j∈J} F_j⁻¹(π_j(B)) and

For equality to hold, it suffices for there to exist a family (B_j)_{j∈J} of subsets B_j ⊆ Y_j such that B = ∏_{j∈J} B_j, in which case: and π_j(B) = B_j for all j ∈ J.

Equivalences and implications of images and preimages

If C ⊆ Im f then f⁻¹(C) ⊆ f⁻¹(R) if and only if C ⊆ R. The following are equivalent when A ⊆ X: Equality holds if and only if the following is true: Equality holds if any of the following are true: Equality holds if and only if the following is true: Equality holds if any of the following are true:

Intersection of a set and a (pre)image

The following statements are equivalent: Thus for any t,[5]
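The containment F•(A) ⊆ ∏_{j∈J} F_j(A) for the pairing map, and the fact that it can be strict, are easy to see concretely. A Python sketch with two illustrative factor maps (names are arbitrary):

```python
from itertools import product

# Two factor maps F1, F2 on a common domain {0, 1, 2} (illustrative data).
F1 = {0: "a", 1: "b", 2: "b"}
F2 = {0: 10, 1: 20, 2: 30}

def pairing(x):
    """F_bullet(x) = (F1(x), F2(x)) — the unique map with pi_j ∘ F_bullet = F_j."""
    return (F1[x], F2[x])

A = {0, 2}
FA = {pairing(x) for x in A}  # F_bullet(A): pairs actually attained on A
box = set(product({F1[x] for x in A}, {F2[x] for x in A}))  # ∏_j F_j(A)

assert FA <= box   # the containment always holds ...
assert FA != box   # ... and is strict here: the box mixes coordinates freely
```

The strictness reflects that the box ∏_j F_j(A) contains "mixed" pairs like (F1(0), F2(2)) that no single point of A produces.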
t ∉ f(L) if and only if L ∩ f⁻¹(t) = ∅.

A family of sets or simply a family is a set whose elements are sets. A family over X is a family of subsets of X. The power set of a set X is the set of all subsets of X:

℘(X) := {S : S ⊆ X}.

Notation for sequences of sets

Throughout, S and T will be arbitrary sets and S• will denote a net or a sequence of sets, where if it is a sequence then this will be indicated by either of the notations

S• = (S_i)_{i=1}^∞   or   S• = (S_i)_{i∈ℕ}

where ℕ denotes the natural numbers. A notation S• = (S_i)_{i∈I} indicates that S• is a net directed by (I, ≤), which (by definition) is a sequence if the set I, which is called the net's indexing set, is the natural numbers (that is, if I = ℕ) and ≤ is the natural order on ℕ.

Disjoint and monotone sequences of sets

If S_i ∩ S_j = ∅ for all distinct indices i ≠ j then S• is called pairwise disjoint or simply disjoint. A sequence or net S• of sets is called increasing or non-decreasing (resp. decreasing or non-increasing) if for all indices i ≤ j, S_i ⊆ S_j (resp. S_i ⊇ S_j). A sequence or net S• of sets is called strictly increasing (resp. strictly decreasing) if it is non-decreasing (resp. non-increasing) and also S_i ≠ S_j for all distinct indices i and j. It is called monotone if it is non-decreasing or non-increasing, and it is called strictly monotone if it is strictly increasing or strictly decreasing.

A sequence or net S• is said to increase to S, denoted by S• ↑ S[11] or S• ↗ S, if S• is increasing and the union of all S_i is S; that is, if

⋃_n S_n = S   and   S_i ⊆ S_j whenever i ≤ j.

It is said to decrease to S, denoted by S• ↓ S[11] or S• ↘ S, if S• is decreasing and the intersection of all S_i is S; that is, if

⋂_n S_n = S   and   S_i ⊇ S_j whenever i ≤ j.

Definitions of elementwise operations on families

If ℒ and ℛ are families of sets and if S is any set then define:[12]

ℒ (∪) ℛ := {L ∪ R : L ∈ ℒ and R ∈ ℛ}
ℒ (∩) ℛ := {L ∩ R : L ∈ ℒ and R ∈ ℛ}
ℒ (∖) ℛ := {L ∖ R : L ∈ ℒ and R ∈ ℛ}
ℒ (△) ℛ := {L △ R : L ∈ ℒ and R ∈ ℛ}
ℒ|_S := {L ∩ S : L ∈ ℒ} = ℒ (∩) {S}

which are respectively called elementwise union, elementwise intersection, elementwise (set) difference, elementwise symmetric difference, and the trace/restriction of ℒ to S. The regular union, intersection, and set difference are all defined as usual and are denoted with their usual notation: ℒ ∪ ℛ, ℒ ∩ ℛ, ℒ △ ℛ, and ℒ ∖ ℛ, respectively. These elementwise operations on families of sets play an important role in, among other subjects, the theory of filters and prefilters on sets.

The upward closure in X of a family ℒ ⊆ ℘(X) is the family

ℒ^↑X := ⋃_{L∈ℒ} {S : L ⊆ S ⊆ X} = {S ⊆ X : there exists L ∈ ℒ such that L ⊆ S}

and the downward closure of ℒ is the family

ℒ^↓ := ⋃_{L∈ℒ} ℘(L) = {S : there exists L ∈ ℒ such that S ⊆ L}.

The following table lists some well-known categories of families of sets having applications in general topology and measure theory.
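The elementwise operations are straightforward to implement; a Python sketch using `frozenset` so that the resulting families are themselves sets of sets (the family contents are illustrative):

```python
def elementwise(op, Lfam, Rfam):
    """Elementwise operation ℒ (op) ℛ = { op(L, R) : L ∈ ℒ, R ∈ ℛ }."""
    return {frozenset(op(L, R)) for L in Lfam for R in Rfam}

Lfam = {frozenset({1, 2}), frozenset({3})}
Rfam = {frozenset({2, 3})}

union_ew = elementwise(lambda a, b: a | b, Lfam, Rfam)   # ℒ (∪) ℛ
inter_ew = elementwise(lambda a, b: a & b, Lfam, Rfam)   # ℒ (∩) ℛ
# The trace ℒ|_S is elementwise intersection with the singleton family {S}:
trace = elementwise(lambda a, b: a & b, Lfam, {frozenset({1, 2})})

assert union_ew == {frozenset({1, 2, 3}), frozenset({2, 3})}
assert inter_ew == {frozenset({2}), frozenset({3})}
assert trace == {frozenset({1, 2}), frozenset()}
```

Note how the trace falls out as a special case of elementwise intersection, exactly as in the definition ℒ|_S = ℒ (∩) {S}.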
Additionally, a semiring is a π-system where every complement B ∖ A is equal to a finite disjoint union of sets in ℱ. A semialgebra is a semiring where every complement Ω ∖ A is equal to a finite disjoint union of sets in ℱ. Here A, B, A₁, A₂, … are arbitrary elements of ℱ and it is assumed that ℱ ≠ ∅.

A family ℒ is called isotone, ascending, or upward closed in X if ℒ ⊆ ℘(X) and ℒ = ℒ^↑X.[12] A family ℒ is called downward closed if ℒ = ℒ^↓.

A family ℒ is said to be: A family ℒ of sets is called a/an: Sequences of sets often arise in measure theory.

Algebra of sets

A family Φ of subsets of a set X is said to be an algebra of sets if ∅ ∈ Φ and for all L, R ∈ Φ, all three of the sets X ∖ R, L ∩ R, and L ∪ R are elements of Φ.[13] The article on this topic lists set identities and other relationships between these three operations. Every algebra of sets is also a ring of sets[13] and a π-system.
Algebra generated by a family of sets

Given any family 𝒮 of subsets of X, there is a unique smallest[note 7] algebra of sets in X containing 𝒮.[13] It is called the algebra generated by 𝒮 and it will be denoted by Φ_𝒮. This algebra can be constructed as follows:[13]

Let ℒ, ℳ, and ℛ be families of sets over X. On the left hand sides of the following identities, ℒ is the Left most family, ℳ is in the Middle, and ℛ is the Right most family.

Commutativity:[12]
ℒ (∪) ℛ = ℛ (∪) ℒ
ℒ (∩) ℛ = ℛ (∩) ℒ

Associativity:[12]
[ℒ (∪) ℳ] (∪) ℛ = ℒ (∪) [ℳ (∪) ℛ]
[ℒ (∩) ℳ] (∩) ℛ = ℒ (∩) [ℳ (∩) ℛ]

Identity:
ℒ (∪) {∅} = ℒ
ℒ (∩) {X} = ℒ
ℒ (∖) {∅} = ℒ

Domination:
ℒ (∪) {X} = {X}   if ℒ ≠ ∅
ℒ (∩) {∅} = {∅}   if ℒ ≠ ∅
ℒ (∪) ∅ = ∅
ℒ (∩) ∅ = ∅
ℒ (∖) ∅ = ∅
∅ (∖) ℛ = ∅

℘(L ∩ R) = ℘(L) ∩ ℘(R)
℘(L ∪ R) = ℘(L) (∪) ℘(R) ⊇ ℘(L) ∪ ℘(R).

If L and R are subsets of a vector space X and if s is a scalar then
℘(sL) = s℘(L)
℘(L + R) ⊇ ℘(L) + ℘(R).

Suppose that L is any set such that L ⊇ R_i for every index i. If R• decreases to R then L ∖ R• := (L ∖ R_i)_i increases to L ∖ R,[11] whereas if instead R• increases to R then L ∖ R• decreases to L ∖ R.

If L and R are arbitrary sets and if L• = (L_i)_i increases (resp. decreases) to L then (L_i ∖ R)_i increases (resp. decreases) to L ∖ R.

Suppose that S• = (S_i)_{i=1}^∞ is any sequence of sets, that S ⊆ ⋃_i S_i is any subset, and for every index i, let D_i = (S_i ∩ S) ∖ ⋃_{m=1}^{i−1} (S_m ∩ S). Then S = ⋃_i D_i and D• := (D_i)_{i=1}^∞ is a sequence of pairwise disjoint sets.[11]

Suppose that S• = (S_i)_{i=1}^∞ is non-decreasing, let S₀ = ∅, and let D_i = S_i ∖ S_{i−1} for every i = 1, 2, …. Then ⋃_i S_i = ⋃_i D_i and D• = (D_i)_{i=1}^∞ is a sequence of pairwise disjoint sets.[11]

Notes

Proofs
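The disjointification D_i = S_i ∖ S_{i−1} of a non-decreasing sequence can be sketched directly in Python (the concrete sequence is an arbitrary illustration):

```python
# Disjointify a non-decreasing sequence S_1 ⊆ S_2 ⊆ ... via D_i = S_i \ S_{i-1}.
S = [{1}, {1, 2}, {1, 2, 3}, {1, 2, 3, 4}]

D, prev = [], set()   # prev plays the role of S_0 = ∅
for Si in S:
    D.append(Si - prev)   # D_i = S_i \ S_{i-1}
    prev = Si

# Same union as the original sequence ...
assert set().union(*D) == set().union(*S)
# ... but the pieces are pairwise disjoint:
for i in range(len(D)):
    for j in range(i + 1, len(D)):
        assert D[i].isdisjoint(D[j])
```

This construction is the standard way to convert a monotone union into a disjoint union, e.g. when applying countable additivity in measure theory.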
https://en.wikipedia.org/wiki/List_of_set_identities_and_relations
In the computer science subfield of algorithmic information theory, a Chaitin constant (Chaitin omega number)[1] or halting probability is a real number that, informally speaking, represents the probability that a randomly constructed program will halt. These numbers are formed from a construction due to Gregory Chaitin.

Although there are infinitely many halting probabilities, one for each (universal, see below) method of encoding programs, it is common to use the letter Ω to refer to them as if there were only one. Because Ω depends on the program encoding used, it is sometimes called Chaitin's construction when not referring to any specific encoding.

Each halting probability is a normal and transcendental real number that is not computable, which means that there is no algorithm to compute its digits. Each halting probability is Martin-Löf random, meaning there is not even any algorithm which can reliably guess its digits.

The definition of a halting probability relies on the existence of a prefix-free universal computable function. Such a function, intuitively, represents a program in a programming language with the property that no valid program can be obtained as a proper extension of another valid program.

Suppose that F is a partial function that takes one argument, a finite binary string, and possibly returns a single binary string as output. The function F is called computable if there is a Turing machine that computes it, in the sense that for any finite binary strings x and y, F(x) = y if and only if the Turing machine halts with y on its tape when given the input x.

The function F is called universal if for every computable function f of a single variable there is a string w such that for all x, F(wx) = f(x); here wx represents the concatenation of the two strings w and x. This means that F can be used to simulate any computable function of one variable.
Informally, w represents a "script" for the computable function f, and F represents an "interpreter" that parses the script as a prefix of its input and then executes it on the remainder of input.

The domain of F is the set of all inputs p on which it is defined. For F that are universal, such a p can generally be seen both as the concatenation of a program part and a data part, and as a single program for the function F.

The function F is called prefix-free if there are no two elements p, p′ in its domain such that p′ is a proper extension of p. This can be rephrased as: the domain of F is a prefix-free code (instantaneous code) on the set of finite binary strings. A simple way to enforce prefix-free-ness is to use machines whose means of input is a binary stream from which bits can be read one at a time. There is no end-of-stream marker; the end of input is determined by when the universal machine decides to stop reading more bits, and the remaining bits are not considered part of the accepted string. Here, the difference between the two notions of program mentioned in the last paragraph becomes clear: one is easily recognized by some grammar, while the other requires arbitrary computation to recognize.

The domain of any universal computable function is a computably enumerable set but never a computable set. The domain is always Turing equivalent to the halting problem.

Let P_F be the domain of a prefix-free universal computable function F. The constant Ω_F is then defined as

Ω_F = ∑_{p ∈ P_F} 2^(−|p|),

where |p| denotes the length of a string p. This is an infinite sum which has one summand for every p in the domain of F. The requirement that the domain be prefix-free, together with Kraft's inequality, ensures that this sum converges to a real number between 0 and 1. If F is clear from context then Ω_F may be denoted simply Ω, although different prefix-free universal computable functions lead to different values of Ω.
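The role of prefix-freeness is exactly Kraft's inequality: the cylinder sets determined by a prefix-free set of strings are disjoint, so the weights 2^(−|p|) sum to at most 1. A toy check in Python (the code set here is illustrative; it is not the domain of any real universal machine):

```python
def is_prefix_free(codes):
    """True if no code word is a proper prefix of another."""
    return not any(
        p != q and q.startswith(p) for p in codes for q in codes
    )

# A small prefix-free set of binary "programs" (illustrative only).
domain = ["0", "10", "110", "1110"]
assert is_prefix_free(domain)

# The Omega-style sum over this finite "domain" lies in (0, 1],
# as Kraft's inequality guarantees for any prefix-free code:
omega = sum(2 ** -len(p) for p in domain)
assert 0 < omega <= 1
```

For a genuine Ω the domain is infinite and only computably enumerable, so the sum can be approximated from below but never computed exactly.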
Knowing the first N bits of Ω, one could calculate the halting problem for all programs of a size up to N. Let the program p for which the halting problem is to be solved be N bits long. In dovetailing fashion, all programs of all lengths are run, until enough have halted to jointly contribute enough probability to match these first N bits. If the program p has not halted yet, then it never will, since its contribution to the halting probability would affect the first N bits. Thus, the halting problem would be solved for p.

Because many outstanding problems in number theory, such as Goldbach's conjecture, are equivalent to solving the halting problem for special programs (which would basically search for counter-examples and halt if one is found), knowing enough bits of Chaitin's constant would also imply knowing the answer to these problems. But as the halting problem is not generally solvable, calculating any but the first few bits of Chaitin's constant is not possible for a universal language. This reduces hard problems to impossible ones, much like trying to build an oracle machine for the halting problem would be.

The Cantor space is the collection of all infinite sequences of 0s and 1s. A halting probability can be interpreted as the measure of a certain subset of Cantor space under the usual probability measure on Cantor space. It is from this interpretation that halting probabilities take their name.

The probability measure on Cantor space, sometimes called the fair-coin measure, is defined so that for any binary string x the set of sequences that begin with x has measure 2^(−|x|). This implies that for each natural number n, the set of sequences f in Cantor space such that f(n) = 1 has measure 1/2, and the set of sequences whose nth element is 0 also has measure 1/2.

Let F be a prefix-free universal computable function.
The domain P of F consists of an infinite set of binary strings

P = {p_1, p_2, …}.

Each of these strings p_i determines a subset S_i of Cantor space; the set S_i contains all sequences in Cantor space that begin with p_i. These sets are disjoint because P is a prefix-free set. The sum

∑_{p ∈ P} 2^{−|p|}

represents the measure of the set

⋃_{i ∈ ℕ} S_i.

In this way, Ω_F represents the probability that a randomly selected infinite sequence of 0s and 1s begins with a bit string (of some finite length) that is in the domain of F. It is for this reason that Ω_F is called a halting probability. Each Chaitin constant Ω has the following properties: Not every set that is Turing equivalent to the halting problem is a halting probability. A finer equivalence relation, Solovay equivalence, can be used to characterize the halting probabilities among the left-c.e. reals.[4] One can show that a real number in [0, 1] is a Chaitin constant (i.e. the halting probability of some prefix-free universal computable function) if and only if it is left-c.e. and algorithmically random.[4] Ω is among the few definable algorithmically random numbers and is the best-known algorithmically random number, but it is not at all typical of all algorithmically random numbers.[5] A real number is called computable if there is an algorithm which, given n, returns the first n digits of the number. This is equivalent to the existence of a program that enumerates the digits of the real number. No halting probability is computable. The proof of this fact relies on an algorithm which, given the first n digits of Ω, solves Turing's halting problem for programs of length up to n. Since the halting problem is undecidable, Ω cannot be computed. The algorithm proceeds as follows.
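The measure interpretation can be checked empirically: sampling fair-coin bit sequences and testing whether some prefix lies in the domain should reproduce the sum. The sketch below uses a small hypothetical prefix-free domain of our own choosing; since every domain string here has bounded length, only a finite prefix of each sampled sequence needs inspection.

```python
import random
from fractions import Fraction

# Small hypothetical prefix-free domain; its measure is 1/4+1/4+1/8+1/16 = 11/16.
DOMAIN = {"00", "01", "100", "1011"}
MAX_LEN = max(len(p) for p in DOMAIN)

def begins_with_domain_string(bits):
    """Does this bit string start with some program in the domain?"""
    return any(bits[: len(p)] == p for p in DOMAIN)

exact = sum(Fraction(1, 2 ** len(p)) for p in DOMAIN)

# Monte Carlo estimate under the fair-coin measure.
random.seed(0)
trials = 100_000
hits = sum(
    begins_with_domain_string("".join(random.choice("01") for _ in range(MAX_LEN)))
    for _ in range(trials)
)
estimate = hits / trials
print(exact, round(estimate, 3))  # the estimate is close to 11/16 = 0.6875
```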
Given the first n digits of Ω and a k ≤ n, the algorithm enumerates the domain of F until enough elements of the domain have been found so that the probability they represent is within 2^{−(k+1)} of Ω. After this point, no additional program of length k can be in the domain, because each of these would add 2^{−k} to the measure, which is impossible. Thus the set of strings of length k in the domain is exactly the set of such strings already enumerated. A real number is random if the binary sequence representing the real number is an algorithmically random sequence. Calude, Hertling, Khoussainov, and Wang showed[6] that a recursively enumerable real number is an algorithmically random sequence if and only if it is a Chaitin Ω number. For each specific consistent effectively represented axiomatic system for the natural numbers, such as Peano arithmetic, there exists a constant N such that no bit of Ω after the Nth can be proven to be 1 or 0 within that system. The constant N depends on how the formal system is effectively represented, and thus does not directly reflect the complexity of the axiomatic system. This incompleteness result is similar to Gödel's incompleteness theorem in that it shows that no consistent formal theory for arithmetic can be complete. The first n bits of Gregory Chaitin's constant Ω are random or incompressible in the sense that they cannot be computed by a halting algorithm with fewer than n − O(1) bits. However, consider the short but never-halting algorithm which systematically lists and runs all possible programs; whenever one of them halts, its probability gets added to the output (initialized to zero). After finite time, the first n bits of the output will never change any more (it does not matter that this time itself is not computable by a halting program). So there is a short non-halting algorithm whose output converges (after finite time) onto the first n bits of Ω.
In other words, the enumerable first n bits of Ω are highly compressible in the sense that they are limit-computable by a very short algorithm; they are not random with respect to the set of enumerating algorithms. Jürgen Schmidhuber constructed a limit-computable "Super Ω" which in a sense is much more random than the original limit-computable Ω, as one cannot significantly compress the Super Ω by any enumerating non-halting algorithm.[7]

For an alternative "Super Ω", the universality probability of a prefix-free universal Turing machine (UTM), namely, the probability that it remains universal even when every input of it (as a binary string) is prefixed by a random binary string, can be seen as the non-halting probability of a machine with oracle the third iteration of the halting problem (i.e., O^{(3)} using Turing jump notation).[8]
https://en.wikipedia.org/wiki/Omega_(computer_science)
Tamanna (Urdu: تمنا; transl. Desire) is a 2014 British-Pakistani film distributed by Royal Palm Group t/a Summit Entertainment (Pak) and Super Cinema and ARY Films, and produced by Concordia Productions. A drama in the neo-noir genre,[3] the film is directed by British director Steven Moore and produced by Pakistani producer Sarah Tareen. The film stars Omair Rana, Salman Shahid, Mehreen Raheel and Faryal Gohar.[4] Prior to release, the film won an award at the London Asian Film Festival[5] for the first released song, by Rahat Fateh Ali Khan. Other songs included in the film are sung by Ali Azmat and Amanat Ali. The original score for the film was written by Arthur Rathbone Pullen, son of Booker Prize nominee and best-selling English author Julian Rathbone. The film incorporates elements of dark humour, melodrama, crime, passion and revenge, and is based on Anthony Shaffer's play Sleuth. The film's hero is Rizwan Ahmed (Omair Rana), a struggling actor who meets Mian Tariq Ali (Salman Shahid), a relic of the once-thriving film industry. The struggling actor is there to convince Ali to divorce his wife. A contest of male dominance between the two men ensues, starting quite reasonably, playfully even, but eventually turning angry and violent.[6] Whilst some of the interactions between the two men are similar to the play Sleuth, the film has roles not just for the Wyke character's wife, but also for his second, younger wife, who is the protagonist's object of desire. The milieu is Pakistan's film industry, Lollywood, in its dying days. The outcome for the characters is dark, with more emphasis on being sacrificed than on self-sacrifice, and is used as an allegory of wider issues. The dialogue, in Urdu, and the scenario are adapted in numerous ways for Pakistani culture. Both Salman Shahid and Faryal Gohar were cast around three years before the film was eventually made, as early as 2009.
They appear together in the video for the song Koi Dil Mein not long after that, in 2010. Omair Rana was not part of the film initially; his role as Riz, the protagonist, was given to Hameed Sheikh, who is famous for his role of Sher Shah in Shoaib Mansoor's Khuda Kay Liye and as Omar Boloch in Kandahar Break. Mehreen Raheel was brought in very late, just before principal photography, which took place in October 2012. Amongst scenes that were cut in the edit was one of Omair Rana with Rasheed Naz in Wazir Khan Mosque in the Walled City of Lahore, though it appears briefly on a TV set in the background at one point during the film. Sahir Lodhi, a famous Pakistani TV presenter, also makes a cameo voice appearance as a TV interviewer. The film contains both an original score and individual songs. The score of the film is composed by British composer and musician Arthur Rathbone Pullen. The songs are sung by Pakistan's best-known playback singer Rahat Fateh Ali Khan and Ali Azmat, and composed by Sahir Ali Bagga.[7] Amanat Ali sang the title track of Tamanna, composed by Afzal Hussain. The video of the titular song was filmed at Lahore's historic Barood Khana Haveli.[2] The first-look trailer was released in June 2013, and Koi Dil Mein before that as a song with a video including some of the film's old-Lollywood recreation footage, a mise en abyme technique of a film within a film. The film premiered on 13 June 2014[8] in Lahore, Karachi and Islamabad, then ran subsequently in cinemas around Pakistan for two weeks.[2][9][10] The film is expected to screen at selected festivals in late 2014/2015 and was already screened at the Tricycle Theatre as part of the 16th London Asian Film Festival on 8 June 2014.[11] The film had its worldwide TV premiere on 10 May 2015 on ARY Digital and is scheduled for regular showing by the channel, which owns TV distribution rights.
https://en.wikipedia.org/wiki/Tamanna_(2014_film)
Model-dependent realism is a view of scientific inquiry that focuses on the role of scientific models of phenomena.[1] It claims reality should be interpreted based upon these models, and where several models overlap in describing a particular subject, multiple, equally valid, realities exist. It claims that it is meaningless to talk about the "true reality" of a model, as we can never be absolutely certain of anything. The only meaningful thing is the usefulness of the model.[2] The term "model-dependent realism" was coined by Stephen Hawking and Leonard Mlodinow in their 2010 book The Grand Design.[3] Model-dependent realism asserts that all we can know about "reality" consists of networks of world pictures that explain observations by connecting them by rules to concepts defined in models. Will an ultimate theory of everything be found? Hawking and Mlodinow suggest it is unclear:

In the history of science we have discovered a sequence of better and better theories or models, from Plato to the classical theory of Newton to modern quantum theories. It is natural to ask: Will this sequence eventually reach an end point, an ultimate theory of the universe, that will include all forces and predict every observation we can make, or will we continue forever finding better theories, but never one that cannot be improved upon? We do not yet have a definitive answer to this question...[4]

A world picture consists of the combination of a set of observations accompanied by a conceptual model and by rules connecting the model concepts to the observations. Different world pictures that describe particular data equally well all have equal claims to be valid. There is no requirement that a world picture be unique, or even that the data selected include all available observations. The universe of all observations at present is covered by a network of overlapping world pictures and, where overlap occurs, multiple, equally valid, world pictures exist.
At present, science requires multiple models to encompass existing observations:

Like the overlapping maps in a Mercator projection, where the ranges of different versions overlap, they predict the same phenomena. But just as there is no flat map that is a good representation of the earth's entire surface, there is no single theory that is a good representation of observations in all situations.[5]

Where several models are found for the same phenomena, no single model is preferable to the others within that domain of overlap. While not rejecting the idea of "reality-as-it-is-in-itself", model-dependent realism suggests that we cannot know "reality-as-it-is-in-itself", but only an approximation of it provided by the intermediary of models. The view of models in model-dependent realism is also related to the instrumentalist approach to modern science, in which a concept or theory should be evaluated by how effectively it explains and predicts phenomena, as opposed to how accurately it describes objective reality (a matter possibly impossible to establish). A model is a good model if it satisfies a list of criteria.[6] "If the modifications needed to accommodate new observations become too baroque, it signals the need for a new model."[7] Of course, an assessment like that is subjective, as are the other criteria.[8] According to Hawking and Mlodinow, even very successful models in use today do not satisfy all these criteria, which are aspirational in nature.[9]
https://en.wikipedia.org/wiki/Model-dependent_realism
AI washing is a deceptive marketing tactic that consists of promoting a product or a service by overstating the role of artificial intelligence (AI) integration in it.[1][2] It raises concerns regarding transparency, consumer trust in the AI industry, and compliance with security regulations, potentially hampering legitimate advancements in AI.[3] U.S. Securities and Exchange Commission (SEC) chairman Gary Gensler compared it to greenwashing.[4] AI washing ranges from the use of buzzwords attached to products, such as "smart" or "machine-learning", to more blatant cases of companies claiming to have used AI in their products or services without actually having done so. The term "AI washing" was first defined by the AI Now Institute, a research institute based at New York University, in 2019.[5] However, AI washing had been practised earlier in various campaigns trying to attract customers with "innovative" products or services. In September 2023, Coca-Cola released a new product called Coca-Cola Y3000 Zero Sugar. The company stated that the Y3000 flavor had been "co-created with human and artificial intelligence", yet gave no real explanation of how AI was involved in the process.[6] The company was accused of AI washing due to the lack of proof of AI involvement in the creation of the product. Critics believe that AI was used as a way to grab consumer attention more than it was used in the actual product creation.[7] Some companies have been accused of, or shut down for, trying to capitalize on this trend by exaggerating the role of AI in their offerings.
In March 2024, the SEC imposed the first civil penalties on two companies, Delphia Inc and Global Predictions Inc, for misleading statements about their use of AI.[8][9] And in July 2024, the SEC shut down Joonko, a supposed AI hiring startup, and charged its CEO and founder with fraud, alleging (amongst other serious charges) that he "engaged in an old school fraud using new school buzzwords like 'artificial intelligence' and 'automation'".[10]
https://en.wikipedia.org/wiki/AI_washing
In science and engineering, root cause analysis (RCA) is a method of problem solving used for identifying the root causes of faults or problems.[1] It is widely used in IT operations, manufacturing, telecommunications, industrial process control, accident analysis (e.g., in aviation,[2] rail transport, or nuclear plants), medical diagnosis, the healthcare industry (e.g., for epidemiology), etc. Root cause analysis is a form of inductive inference (first create a theory, or root, based on empirical evidence, or causes) and deductive inference (test the theory, i.e., the underlying causal mechanisms, with empirical data). RCA can be decomposed into four steps. RCA generally serves as input to a remediation process whereby corrective actions are taken to prevent the problem from recurring. The name of this process varies between application domains. According to ISO/IEC 31010, RCA may include these techniques: Five whys, Failure mode and effects analysis (FMEA), Fault tree analysis, Ishikawa diagrams, and Pareto analysis. There are essentially two ways of repairing faults and solving problems in science and engineering. Reactive management consists of reacting quickly after the problem occurs, by treating the symptoms. This type of management is implemented by reactive systems,[3][4] self-adaptive systems,[5] self-organized systems, and complex adaptive systems. The goal here is to react quickly and alleviate the effects of the problem as soon as possible. Proactive management, conversely, consists of preventing problems from occurring. Many techniques can be used for this purpose, ranging from good practices in design to analyzing in detail problems that have already occurred and taking actions to make sure they never recur. Speed is not as important here as the accuracy and precision of the diagnosis. The focus is on addressing the real cause of the problem rather than its effects.
Root cause analysis is often used in proactive management to identify the root cause of a problem, that is, the factor that was the leading cause. It is customary to refer to the "root cause" in singular form, but one or several factors may constitute the root cause(s) of the problem under study. A factor is considered the "root cause" of a problem if removing it prevents the problem from recurring. Conversely, a "causal factor" is a contributing action that affects an incident's or event's outcome but is not the root cause. Although removing a causal factor can benefit an outcome, it does not prevent its recurrence with certainty. One way to look at the proactive/reactive picture is to consider the Bowtie Risk Assessment model. In the center of the model is the event or accident. To the left are the anticipated hazards and the lines of defense put in place to prevent those hazards from causing events. The line of defense comprises the regulatory requirements, applicable procedures, physical barriers, and cyber barriers that are in place to manage operations and prevent events. Root cause analysis can be used proactively to evaluate the effectiveness of those defenses by comparing actual performance against applicable requirements, identifying performance gaps, and then closing the gaps to strengthen those defenses. If an event occurs, then we are on the right side of the model, the reactive side, where the emphasis is on identifying the root causes and mitigating the damage. Imagine an investigation into a machine that stopped because it was overloaded and the fuse blew.[6] Investigation shows that the machine was overloaded because it had a bearing that was not being sufficiently lubricated. The investigation proceeds further and finds that the automatic lubrication mechanism had a pump that was not pumping sufficiently, hence the lack of lubrication. Investigation of the pump shows that it has a worn shaft.
Investigation of why the shaft was worn discovers that there is not an adequate mechanism to prevent metal scrap getting into the pump. This enabled scrap to get into the pump and damage it. The apparent root cause of the problem is that metal scrap can contaminate the lubrication system. Fixing this problem ought to prevent the whole sequence of events from recurring. The real root cause could be a design issue if there is no filter to prevent the metal scrap getting into the system. Or, if there is a filter that was blocked due to a lack of routine inspection, then the real root cause is a maintenance issue. Compare this with an investigation that does not find the root cause: replacing the fuse, the bearing, or the lubrication pump will probably allow the machine to go back into operation for a while. However, there is a risk that the problem will simply recur until the root cause is dealt with. The above does not include cost/benefit analysis: does the cost of replacing one or more machines exceed the cost of downtime until the fuse is replaced? This situation is sometimes referred to as the cure being worse than the disease.[7][8] As an unrelated example of the conclusions that can be drawn in the absence of cost/benefit analysis, consider the tradeoff between some claimed benefits of population decline: in the short term there will be fewer payers into pension/retirement systems, whereas halting the population decline will require higher taxes to cover the cost of building more schools. This can help explain the problem of the cure being worse than the disease.[9] Costs to consider go beyond finances when considering the personnel who operate the machinery. Ultimately, the goal is to prevent downtime, but more importantly to prevent catastrophic injuries. Prevention begins with being proactive.
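The fuse example above is a chain of "why?" questions, each linking a symptom to its immediate cause. A minimal sketch of that chain as a data structure (the chain itself is from the text; the dict representation is ours, illustrative rather than a standard RCA tool):

```python
# Five-whys style causal chain from the fuse example: symptom -> immediate cause.
causes = {
    "machine stopped": "fuse blew from overload",
    "fuse blew from overload": "bearing insufficiently lubricated",
    "bearing insufficiently lubricated": "lubrication pump not pumping enough",
    "lubrication pump not pumping enough": "pump shaft worn",
    "pump shaft worn": "metal scrap can contaminate the lubrication system",
}

def root_cause(symptom, causes):
    """Follow 'why?' links until no deeper cause is recorded."""
    while symptom in causes:
        symptom = causes[symptom]
    return symptom

print(root_cause("machine stopped", causes))
# metal scrap can contaminate the lubrication system
```

A real investigation may of course find several contributing causes, in which case the chain becomes a causal graph rather than a single path.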
Despite the different approaches among the various schools of root cause analysis and the specifics of each application domain, RCA generally follows the same four steps. To be effective, root cause analysis must be performed systematically; a systematic process reduces the chance of missing important details. A team effort is typically required, and ideally all persons involved should arrive at the same conclusion. In aircraft accident analyses, for example, the conclusions of the investigation and the root causes that are identified must be backed up by documented evidence.[10] The goal of RCA is to identify the root cause of the problem with the intent to stop the problem from recurring or worsening. The next step is to trigger long-term corrective actions to address the root cause identified during RCA, and to make sure that the problem does not resurface. Correcting a problem is not formally part of RCA, however; these are different steps in a problem-solving process known as fault management in IT and telecommunications, repair in engineering, remediation in aviation, environmental remediation in ecology, therapy in medicine, etc. Root cause analysis is used in many application domains. RCA is specifically called out in the United States Code of Federal Regulations in many of the Titles. The example above illustrates how RCA can be used in manufacturing. RCA is also routinely used in industrial process control, e.g. to control the production of chemicals (quality control). RCA is also used for failure analysis in engineering and maintenance. Root cause analysis is frequently used in IT and telecommunications to detect the root causes of serious problems. For example, in the ITIL service management framework, the goal of incident management is to resume a faulty IT service as soon as possible (reactive management), whereas problem management deals with solving recurring problems for good by addressing their root causes (proactive management).
Another example is the computer security incident management process, where root-cause analysis is often used to investigate security breaches.[11] RCA is also used in conjunction with business activity monitoring and complex event processing to analyze faults in business processes. Its use in the IT industry cannot always be compared to its use in safety-critical industries, since in the IT industry the use of RCA is normally not supported by pre-existing fault trees or other design specifications. Instead, the analysis is normally supported by a mixture of debugging, event-based detection, and monitoring systems (where the services are individually modelled). Training and supporting tools like simulation or in-depth runbooks for all expected scenarios do not exist in advance; instead, they are created after the fact, based on issues seen as 'worthy'. As a result, the analysis is often limited to those things that have monitoring/observation interfaces, rather than the actual planned or observed function, with a focus on verification of inputs and outputs. Hence, the saying "there is no root cause" has become common in the IT industry. In the domains of health and safety, RCA is routinely used in medicine (diagnosis) and epidemiology (e.g., to identify the source of an infectious disease), where causal inference methods often require both clinical and statistical expertise to make sense of the complexities of the processes.[12] RCA is used in environmental science (e.g., to analyze environmental disasters), accident analysis (aviation and rail industry), and occupational safety and health.[13] In the manufacture of medical devices,[14] pharmaceuticals,[15] food,[16] and dietary supplements,[17] root cause analysis is a regulatory requirement. RCA is also used in change management, risk management, and systems analysis. Without delving into the idiosyncrasies of specific problems, several general conditions can make RCA more difficult than it may appear at first sight.
First, important information is often missing because it is generally not possible, in practice, to monitor everything and store all monitoring data for a long time. Second, gathering data and evidence, and classifying them along a timeline of events leading to the final problem, can be nontrivial. In telecommunications, for instance, distributed monitoring systems typically manage between a million and a billion events per day. Finding a few relevant events in such a mass of irrelevant events is akin to finding the proverbial needle in a haystack. Third, there may be more than one root cause for a given problem, and this multiplicity can make the causal graph very difficult to establish. Fourth, causal graphs often have many levels, and root-cause analysis terminates at a level that is "root" in the eyes of the investigator. Looking again at the example above in industrial process control, a deeper investigation could reveal that the maintenance procedures at the plant included periodic inspection of the lubrication subsystem every two years, while the current lubrication subsystem vendor's product specified a six-month period. Switching vendors may have been due to management's desire to save money, and a failure to consult with engineering staff on the implication of the change for maintenance procedures. Thus, while fixing the "root cause" shown above may have prevented the quoted recurrence, it would not have prevented other – perhaps more severe – failures affecting other machines.
https://en.wikipedia.org/wiki/Root_cause_analysis
In mathematics, the concept of an inverse element generalises the concepts of opposite (−x) and reciprocal (1/x) of numbers. Given an operation denoted here ∗, and an identity element denoted e, if x ∗ y = e, one says that x is a left inverse of y, and that y is a right inverse of x. (An identity element is an element such that x ∗ e = x and e ∗ y = y for all x and y for which the left-hand sides are defined.[1]) When the operation ∗ is associative, if an element x has both a left inverse and a right inverse, then these two inverses are equal and unique; they are called the inverse element or simply the inverse. Often an adjective is added for specifying the operation, as in additive inverse, multiplicative inverse, and functional inverse. In this case (associative operation), an invertible element is an element that has an inverse. In a ring, an invertible element, also called a unit, is an element that is invertible under multiplication (this is not ambiguous, as every element is invertible under addition). Inverses are commonly used in groups, where every element is invertible, and rings, where invertible elements are also called units. They are also commonly used for operations that are not defined for all possible operands, such as inverse matrices and inverse functions. This has been generalized to category theory, where, by definition, an isomorphism is an invertible morphism. The word 'inverse' is derived from Latin inversus, which means 'turned upside down' or 'overturned'. This may take its origin from the case of fractions, where the (multiplicative) inverse is obtained by exchanging the numerator and the denominator (the inverse of x/y is y/x). The concepts of inverse element and invertible element are commonly defined for binary operations that are everywhere defined (that is, the operation is defined for any two elements of its domain). However, these concepts are also commonly used with partial operations, that is, operations that are not defined everywhere.
Common examples are matrix multiplication, function composition, and composition of morphisms in a category. It follows that the common definitions of associativity and identity element must be extended to partial operations; this is the object of the first subsections. In this section, X is a set (possibly a proper class) on which a partial operation (possibly total) is defined, which is denoted with ∗. A partial operation is associative if

(x ∗ y) ∗ z = x ∗ (y ∗ z)

for every x, y, z in X for which one of the members of the equality is defined; the equality means that the other member of the equality must also be defined. Examples of non-total associative operations are multiplication of matrices of arbitrary size, and function composition. Let ∗ be a possibly partial associative operation on a set X. An identity element, or simply an identity, is an element e such that

x ∗ e = x and e ∗ y = y

for every x and y for which the left-hand sides of the equalities are defined. If e and f are two identity elements such that e ∗ f is defined, then e = f. (This results immediately from the definition, by e = e ∗ f = f.) It follows that a total operation has at most one identity element, and if e and f are different identities, then e ∗ f is not defined. For example, in the case of matrix multiplication, there is one n×n identity matrix for every positive integer n, and two identity matrices of different size cannot be multiplied together. Similarly, identity functions are identity elements for function composition, and the composition of the identity functions of two different sets is not defined. If x ∗ y = e, where e is an identity element, one says that x is a left inverse of y, and y is a right inverse of x. Left and right inverses do not always exist, even when the operation is total and associative. For example, addition is a total associative operation on nonnegative integers, which has 0 as additive identity, and 0 is the only element that has an additive inverse.
This lack of inverses is the main motivation for extending the natural numbers into the integers. An element can have several left inverses and several right inverses, even when the operation is total and associative. For example, consider the functions from the integers to the integers. The doubling function x ↦ 2x has infinitely many left inverses under function composition, namely the functions that halve the even numbers and give any value to odd numbers. Similarly, every function that maps n to either 2n or 2n + 1 is a right inverse of the function n ↦ ⌊n/2⌋, the floor function that maps n to n/2 or (n − 1)/2, depending on whether n is even or odd. More generally, a function has a left inverse for function composition if and only if it is injective, and it has a right inverse if and only if it is surjective. In category theory, right inverses are also called sections, and left inverses are called retractions. An element is invertible under an operation if it has a left inverse and a right inverse. In the common case where the operation is associative, the left and right inverse of an element are equal and unique. Indeed, if l and r are respectively a left inverse and a right inverse of x, then

l = l ∗ (x ∗ r) = (l ∗ x) ∗ r = r.

The inverse of an invertible element is its unique left or right inverse.
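The doubling/halving example can be checked directly. A minimal sketch (function names are ours): floor-halving is a left inverse of doubling under composition, but not a right inverse, since doubling is injective but not surjective.

```python
def double(n):
    """The injective doubling function n -> 2n."""
    return 2 * n

def halve(n):
    """Floor division: maps n to floor(n/2); surjective but not injective."""
    return n // 2

# halve . double is the identity, so halve is a left inverse of double
# (equivalently, double is a right inverse of halve).
print(all(halve(double(n)) == n for n in range(-50, 50)))  # True

# double . halve is not the identity on odd numbers, so double is not a
# left inverse of halve.
print(double(halve(7)))  # 6, not 7
```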
If the operation is denoted as an addition, the inverse, or additive inverse, of an element x is denoted −x. Otherwise, the inverse of x is generally denoted x^{−1}, or, in the case of a commutative multiplication, 1/x. When there may be a confusion between several operations, the symbol of the operation may be added before the exponent, as in x^{∗−1}. The notation f^{∘−1} is not commonly used for function composition, since 1/f can be used for the multiplicative inverse. If x and y are invertible, and x ∗ y is defined, then x ∗ y is invertible, and its inverse is y^{−1} ∗ x^{−1}. An invertible homomorphism is called an isomorphism. In category theory, an invertible morphism is also called an isomorphism. A group is a set with an associative operation that has an identity element, and for which every element has an inverse. Thus, the inverse is a function from the group to itself that may also be considered as an operation of arity one. It is also an involution, since the inverse of the inverse of an element is the element itself. A group may act on a set as transformations of this set. In this case, the inverse g^{−1} of a group element g defines a transformation that is the inverse of the transformation defined by g, that is, the transformation that "undoes" the transformation defined by g. For example, the Rubik's cube group represents the finite sequences of elementary moves. The inverse of such a sequence is obtained by applying the inverse of each move in the reverse order. A monoid is a set with an associative operation that has an identity element. The invertible elements in a monoid form a group under the monoid operation. A ring is a monoid for ring multiplication. In this case, the invertible elements are also called units and form the group of units of the ring.
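The rule that the inverse of a composite reverses the order, (x ∗ y)^{−1} = y^{−1} ∗ x^{−1}, is exactly the Rubik's-cube observation above. A small sketch using permutations of {0, 1, 2, 3} encoded as tuples (the encoding is ours, for illustration):

```python
def compose(f, g):
    """Permutation f . g: first apply g, then f."""
    return tuple(f[g[i]] for i in range(len(g)))

def inverse(f):
    """Invert a permutation: if f sends i to j, the inverse sends j to i."""
    inv = [0] * len(f)
    for i, j in enumerate(f):
        inv[j] = i
    return tuple(inv)

identity = (0, 1, 2, 3)
f = (1, 0, 3, 2)
g = (2, 0, 1, 3)
fg = compose(f, g)

# g^(-1) . f^(-1) undoes f . g ...
print(compose(fg, compose(inverse(g), inverse(f))) == identity)  # True
# ... but f^(-1) . g^(-1), in the wrong order, does not.
print(compose(fg, compose(inverse(f), inverse(g))) == identity)  # False
```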
If a monoid is not commutative, there may exist non-invertible elements that have a left inverse or a right inverse (not both, as otherwise the element would be invertible). For example, the set of functions from a set to itself is a monoid under function composition. In this monoid, the invertible elements are the bijective functions; the elements that have left inverses are the injective functions, and those that have right inverses are the surjective functions.

Given a monoid, one may want to extend it by adding inverses to some elements. This is generally impossible for non-commutative monoids, but in a commutative monoid it is possible to add inverses to the elements that have the cancellation property (an element x has the cancellation property if xy = xz implies y = z, and yx = zx implies y = z). This extension of a monoid is provided by the Grothendieck group construction. This is the method that is commonly used for constructing the integers from the natural numbers, the rational numbers from the integers and, more generally, the field of fractions of an integral domain, and localizations of commutative rings.

A ring is an algebraic structure with two operations, addition and multiplication, which are denoted as the usual operations on numbers. Under addition, a ring is an abelian group, which means that addition is commutative and associative; it has an identity, called the additive identity and denoted 0; and every element x has an inverse, called its additive inverse and denoted −x. Because of commutativity, the concepts of left and right inverses are meaningless, since they do not differ from inverses. Under multiplication, a ring is a monoid; this means that multiplication is associative and has an identity, called the multiplicative identity and denoted 1. An invertible element for multiplication is called a unit.
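As a concrete illustration of the Grothendieck group construction, the sketch below (our own example, not from the article) represents an integer as a pair (a, b) of natural numbers standing for the formal difference a − b, with (a, b) equivalent to (c, d) when a + d = b + c. Every class then has an additive inverse, obtained by swapping the components:

```python
def normalize(pair):
    """Canonical representative of the class of (a, b): subtract min(a, b)."""
    a, b = pair
    m = min(a, b)
    return (a - m, b - m)

def add(p, q):
    # addition of formal differences: (a - b) + (c - d) = (a + c) - (b + d)
    return normalize((p[0] + q[0], p[1] + q[1]))

def neg(p):
    # the adjoined inverse: -(a - b) = b - a
    return normalize((p[1], p[0]))

ZERO = (0, 0)
three = (3, 0)                         # represents the integer 3
assert add(three, neg(three)) == ZERO  # 3 + (-3) = 0
assert neg((0, 5)) == (5, 0)           # the inverse of -5 is 5
```

The natural numbers embed via n ↦ (n, 0); cancellation in the naturals is exactly what makes the equivalence well behaved.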
The inverse, or multiplicative inverse (for avoiding confusion with additive inverses), of a unit x is denoted x⁻¹ or, when the multiplication is commutative, 1/x.

The additive identity 0 is never a unit, except when the ring is the zero ring, which has 0 as its unique element. If 0 is the only non-unit, the ring is a field if the multiplication is commutative, or a division ring otherwise. In a noncommutative ring (that is, a ring whose multiplication is not commutative), a non-invertible element may have one or several left or right inverses. This is, for example, the case of the linear functions from an infinite-dimensional vector space to itself.

A commutative ring (that is, a ring whose multiplication is commutative) may be extended by adding inverses to elements that are not zero divisors (that is, elements whose product with a nonzero element cannot be 0). This is the process of localization, which produces, in particular, the field of rational numbers from the ring of integers and, more generally, the field of fractions of an integral domain. Localization is also used with zero divisors, but in this case the original ring is not a subring of the localization; instead, it is mapped non-injectively to the localization.

Matrix multiplication is commonly defined for matrices over a field, and straightforwardly extended to matrices over rings, rngs and semirings. However, in this section only matrices over a commutative ring are considered, because of the use of the concepts of rank and determinant. If A is an m×n matrix (that is, a matrix with m rows and n columns) and B is a p×q matrix, the product AB is defined if n = p, and only in this case. An identity matrix, that is, an identity element for matrix multiplication, is a square matrix (same number of rows and columns) whose entries on the main diagonal are all equal to 1, and all other entries are 0. An invertible matrix is an invertible element under matrix multiplication.
A matrix over a commutative ring R is invertible if and only if its determinant is a unit in R (that is, invertible in R). In this case, its inverse matrix can be computed with Cramer's rule. If R is a field, the determinant is invertible if and only if it is not zero. As the case of fields is more common, invertible matrices are often defined as matrices with a nonzero determinant, but this is incorrect over rings.

In the case of integer matrices (that is, matrices with integer entries), an invertible matrix is a matrix that has an inverse that is also an integer matrix. Such a matrix is called a unimodular matrix, to distinguish it from matrices that are invertible over the real numbers. A square integer matrix is unimodular if and only if its determinant is 1 or −1, since these two numbers are the only units in the ring of integers.

A matrix has a left inverse if and only if its rank equals its number of columns. This left inverse is not unique, except for square matrices, where the left inverse equals the inverse matrix. Similarly, a right inverse exists if and only if the rank equals the number of rows; it is not unique in the case of a rectangular matrix, and equals the inverse matrix in the case of a square matrix.

Composition is a partial operation that generalizes to homomorphisms of algebraic structures and morphisms of categories into operations that are also called composition, and that share many properties with function composition. In all cases, composition is associative. If f: X → Y and g: Y′ → Z, the composition g ∘ f is defined if and only if Y′ = Y or, in the function and homomorphism cases, Y ⊂ Y′. In the function and homomorphism cases, this means that the codomain of f equals or is included in the domain of g. In the morphism case, this means that the codomain of f equals the domain of g.
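For 2×2 integer matrices the unimodularity criterion can be checked directly. The helper below (a sketch of our own) computes the determinant and, when it is ±1, returns the inverse via the adjugate, which is again an integer matrix:

```python
def det2(M):
    (a, b), (c, d) = M
    return a * d - b * c

def integer_inverse(M):
    """Inverse of a 2x2 integer matrix, or None if it is not unimodular."""
    det = det2(M)
    if det not in (1, -1):        # 1 and -1 are the only units in the integers
        return None
    (a, b), (c, d) = M
    # adjugate divided by the determinant; here that division is just * (+/-1)
    return ((d * det, -b * det), (-c * det, a * det))

M = ((2, 1), (1, 1))              # det = 1, hence unimodular
assert integer_inverse(M) == ((1, -1), (-1, 2))
assert integer_inverse(((2, 0), (0, 2))) is None   # det = 4 is not a unit in Z
```

The second matrix is invertible over the rationals (its determinant is nonzero) but not over the integers, which is exactly the distinction the paragraph above draws.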
There is an identity id_X: X → X for every object X (set, algebraic structure or object), which is also called an identity function in the function case. A function is invertible if and only if it is a bijection. An invertible homomorphism or morphism is called an isomorphism. A homomorphism of algebraic structures is an isomorphism if and only if it is a bijection. The inverse of a bijection is called an inverse function. In the other cases, one talks of inverse isomorphisms.

A function has a left inverse or a right inverse if and only if it is injective or surjective, respectively. A homomorphism of algebraic structures that has a left inverse or a right inverse is respectively injective or surjective, but the converse is not true in some algebraic structures. For example, the converse is true for vector spaces but not for modules over a ring: a homomorphism of modules that has a left inverse or a right inverse is called respectively a split epimorphism or a split monomorphism. This terminology is also used for morphisms in any category.

Let S be a unital magma, that is, a set with a binary operation ∗ and an identity element e ∈ S. If, for a, b ∈ S, we have a ∗ b = e, then a is called a left inverse of b and b is called a right inverse of a. If an element x is both a left inverse and a right inverse of y, then x is called a two-sided inverse, or simply an inverse, of y. An element with a two-sided inverse in S is called invertible in S. An element with an inverse element only on one side is left invertible or right invertible. Elements of a unital magma (S, ∗) may have multiple left, right or two-sided inverses.
For example, in the magma given by the Cayley table, the elements 2 and 3 each have two two-sided inverses. A unital magma in which all elements are invertible need not be a loop. For example, in the magma (S, ∗) given by the Cayley table, every element has a unique two-sided inverse (namely itself), but (S, ∗) is not a loop, because the Cayley table is not a Latin square. Similarly, a loop need not have two-sided inverses. For example, in the loop given by the Cayley table, the only element with a two-sided inverse is the identity element 1.

If the operation ∗ is associative, then if an element has both a left inverse and a right inverse, they are equal. In other words, in a monoid (an associative unital magma) every element has at most one inverse (as defined in this section). In a monoid, the set of invertible elements is a group, called the group of units of S and denoted by U(S) or H_1.

The definition in the previous section generalizes the notion of inverse in a group relative to the notion of identity. It is also possible, albeit less obvious, to generalize the notion of an inverse by dropping the identity element but keeping associativity; that is, in a semigroup. In a semigroup S, an element x is called (von Neumann) regular if there exists some element z in S such that xzx = x; z is sometimes called a pseudoinverse. An element y is called (simply) an inverse of x if xyx = x and y = yxy. Every regular element has at least one inverse: if x = xzx, then it is easy to verify that y = zxz is an inverse of x as defined in this section. Another fact that is easy to prove: if y is an inverse of x, then e = xy and f = yx are idempotents, that is, ee = e and ff = f. Thus, every pair of (mutually) inverse elements gives rise to two idempotents, with ex = xf = x and ye = fy = y; e acts as a left identity on x, while f acts as a right identity, and the left/right roles are reversed for y.
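These identities can be verified concretely in the semigroup of maps on a small set under composition (the example below is ours, not from the article). Given a regular element x with pseudoinverse z, the element y = zxz satisfies xyx = x and yxy = y, and e = xy, f = yx are idempotent:

```python
# maps on {0, 1, 2} represented as tuples: t[i] is the image of i
def compose(s, t):            # (s ∘ t)(i) = s(t(i)); the semigroup product
    return tuple(s[t[i]] for i in range(len(t)))

x = (0, 0, 1)                 # not injective, hence not invertible
z = (0, 2, 2)                 # a pseudoinverse: x z x = x (found by brute force)
assert compose(compose(x, z), x) == x

y = compose(compose(z, x), z) # y = z x z is an inverse of x in the semigroup sense
assert compose(compose(x, y), x) == x and compose(compose(y, x), y) == y

e, f = compose(x, y), compose(y, x)
assert compose(e, e) == e and compose(f, f) == f      # both are idempotent
assert compose(e, x) == x and compose(x, f) == x      # e, f act as local identities
assert compose(y, e) == y and compose(f, y) == y      # roles reversed for y
```

Here x has no two-sided inverse in the monoid sense (it is not a bijection), yet it is regular and has an inverse in the semigroup sense, illustrating how much weaker this notion is.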
This simple observation can be generalized using Green's relations: every idempotent e in an arbitrary semigroup is a left identity for R_e and a right identity for L_e.[2] An intuitive description of this fact is that every pair of mutually inverse elements produces a local left identity and, respectively, a local right identity.

In a monoid, the notion of inverse as defined in the previous section is strictly narrower than the definition given in this section. Only elements in the Green class H_1 have an inverse from the unital magma perspective, whereas for any idempotent e, the elements of H_e have an inverse as defined in this section. Under this more general definition, inverses need not be unique (or exist) in an arbitrary semigroup or monoid. If all elements are regular, then the semigroup (or monoid) is called regular, and every element has at least one inverse. If every element has exactly one inverse as defined in this section, then the semigroup is called an inverse semigroup. Finally, an inverse semigroup with only one idempotent is a group. An inverse semigroup may have an absorbing element 0, because 000 = 0, whereas a group may not.

Outside semigroup theory, a unique inverse as defined in this section is sometimes called a quasi-inverse. This is generally justified because in most applications (for example, all examples in this article) associativity holds, which makes this notion a generalization of the left/right inverse relative to an identity (see Generalized inverse).

A natural generalization of the inverse semigroup is to define an (arbitrary) unary operation ° such that (a°)° = a for all a in S; this endows S with a type ⟨2,1⟩ algebra. A semigroup endowed with such an operation is called a U-semigroup. Although it may seem that a° will be the inverse of a, this is not necessarily the case. In order to obtain interesting notion(s), the unary operation must somehow interact with the semigroup operation.
Two classes of U-semigroups have been studied:[3] Clearly a group is both an I-semigroup and a *-semigroup. A class of semigroups important in semigroup theory are the completely regular semigroups; these are I-semigroups in which one additionally has aa° = a°a; in other words, every element has a commuting pseudoinverse a°. There are few concrete examples of such semigroups, however; most are completely simple semigroups. In contrast, a subclass of *-semigroups, the *-regular semigroups (in the sense of Drazin), yield one of the best known examples of a (unique) pseudoinverse, the Moore–Penrose inverse. In this case, however, the involution a* is not the pseudoinverse. Rather, the pseudoinverse of x is the unique element y such that xyx = x, yxy = y, (xy)* = xy, (yx)* = yx. Since *-regular semigroups generalize inverse semigroups, the unique element defined this way in a *-regular semigroup is called the generalized inverse or Moore–Penrose inverse.

All examples in this section involve associative operators. The lower and upper adjoints in a (monotone) Galois connection, L and G, are quasi-inverses of each other; that is, LGL = L and GLG = G, and one uniquely determines the other. They are not left or right inverses of each other, however.

A square matrix M with entries in a field K is invertible (in the set of all square matrices of the same size, under matrix multiplication) if and only if its determinant is different from zero. If the determinant of M is zero, it is impossible for it to have a one-sided inverse; therefore a left inverse or right inverse implies the existence of the other one. See invertible matrix for more. More generally, a square matrix over a commutative ring R is invertible if and only if its determinant is invertible in R.
Non-square matrices of full rank have several one-sided inverses:[4] The left inverse can be used to determine the least-norm solution of Ax = b, which is also the least squares formula for regression and is given by x = (AᵀA)⁻¹Aᵀb.

No rank-deficient matrix has any (even one-sided) inverse. However, the Moore–Penrose inverse exists for all matrices, and coincides with the left or right (or true) inverse when it exists.

As an example of matrix inverses, consider: So, as m < n, we have a right inverse, A_right⁻¹ = Aᵀ(AAᵀ)⁻¹. By components it is computed as The left inverse does not exist, because which is a singular matrix, and cannot be inverted.
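The right-inverse formula can be demonstrated on a small full-rank example of our own, using exact rational arithmetic: for an m×n matrix A with full row rank and m < n, A_right⁻¹ = Aᵀ(AAᵀ)⁻¹ satisfies A · A_right⁻¹ = I.

```python
from fractions import Fraction

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def transpose(A):
    return [list(r) for r in zip(*A)]

def inv2(M):                      # exact inverse of a 2x2 matrix over the rationals
    (a, b), (c, d) = M
    det = Fraction(a * d - b * c)
    return [[d / det, -b / det], [-c / det, a / det]]

A = [[1, 0, 0], [0, 1, 1]]        # full row rank, m = 2 < n = 3
A_right = matmul(transpose(A), inv2(matmul(A, transpose(A))))
assert matmul(A, A_right) == [[1, 0], [0, 1]]   # a genuine right inverse
```

The symmetric computation with (AᵀA)⁻¹Aᵀ fails here: AᵀA is a 3×3 matrix of rank 2, hence singular, matching the statement that a left inverse requires rank equal to the number of columns.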
https://en.wikipedia.org/wiki/Invertible_element#In_the_integers_mod_n
Extendible hashing is a type of hash system which treats a hash as a bit string and uses a trie for bucket lookup.[1] Because of the hierarchical nature of the system, re-hashing is an incremental operation (done one bucket at a time, as needed). This means that time-sensitive applications are less affected by table growth than by standard full-table rehashes. Extendible hashing was described by Ronald Fagin in 1979. Practically all modern filesystems use either extendible hashing or B-trees. In particular, the Global File System, ZFS, and the SpadFS filesystem use extendible hashing.[2]

Assume that the hash function h(k) returns a string of bits. The first i bits of each string will be used as indices to figure out where they will go in the "directory" (hash table), where i is the smallest number such that the index of every item in the table is unique.

Keys to be used: Let's assume that for this particular example, the bucket size is 1. The first two keys to be inserted, k1 and k2, can be distinguished by the most significant bit, and would be inserted into the table as follows:

Now, if k3 were to be hashed to the table, it wouldn't be enough to distinguish all three keys by one bit (because both k3 and k1 have 1 as their leftmost bit). Also, because the bucket size is one, the table would overflow. Because comparing the first two most significant bits would give each key a unique location, the directory size is doubled as follows: And so now k1 and k3 have a unique location, being distinguished by the first two leftmost bits. Because k2 is in the top half of the table, both 00 and 01 point to it, because there is no other key to compare to that begins with a 0. The above example is from Fagin et al. (1979).

Now, k4 needs to be inserted, and it has the first two bits as 01..(1110), and using a 2-bit depth in the directory, this maps from 01 to Bucket A.
Bucket A is full (max size 1), so it must be split; because there is more than one pointer to Bucket A, there is no need to increase the directory size. What is needed is information about: In order to distinguish the two action cases: Examining the initial case of an extendible hash structure, if each directory entry points to one bucket, then the local depth should be equal to the global depth. The number of directory entries is equal to 2^(global depth), and the initial number of buckets is equal to 2^(local depth). Thus, if global depth = local depth = 0, then 2⁰ = 1, so the initial directory is one pointer to one bucket.

Back to the two action cases; if the bucket is full: Key 01 points to Bucket A, and Bucket A's local depth of 1 is less than the directory's global depth of 2, which means keys hashed to Bucket A have only used a 1-bit prefix (i.e. 0), and the bucket needs to have its contents split using keys 1 + 1 = 2 bits in length; in general, for any local depth d where d is less than D, the global depth, d must be incremented after a bucket split, and the new d used as the number of bits of each entry's key to redistribute the entries of the former bucket into the new buckets.

Now, h(k4) = 011110 is tried again, with 2 bits 01.., and now key 01 points to a new bucket, but k2 is still in it (h(k2) = 010110, which also begins with 01). If k2 had been 000110, with key 00, there would have been no problem, because k2 would have remained in the new bucket A' and bucket D would have been empty. (This would have been the most likely case by far when buckets are of greater size than 1, and the newly split buckets would be exceedingly unlikely to overflow, unless all the entries were rehashed to one bucket again. But just to emphasize the role of the depth information, the example will be pursued logically to the end.)
So Bucket D needs to be split, but a check of its local depth, which is 2, shows that it equals the global depth, which is also 2, so the directory must be doubled again, in order to hold keys of sufficient detail, e.g. 3 bits.

Now, h(k2) = 010110 is in D, and h(k4) = 011110 is tried again, with 3 bits 011.., and it points to bucket D, which already contains k2 and so is full. D's local depth is 2, but now the global depth is 3 after the directory doubling, so D can be split into buckets D' and E. The contents of D, namely k2, has h(k2) retried with a new global depth bitmask of 3, and k2 ends up in D'. Then the new entry k4 is retried, with h(k4) bitmasked using the new global depth bit count of 3; this gives 011, which now points to the new bucket E, which is empty. So k4 goes in Bucket E.

Below is the extendible hashing algorithm in Python, with the disc block / memory page association, caching and consistency issues removed. Note that a problem exists if the depth exceeds the bit size of an integer, because then doubling of the directory or splitting of a bucket won't allow entries to be rehashed to different buckets. The code uses the least significant bits, which makes it more efficient to expand the table, as the entire directory can be copied as one block (Ramakrishnan & Gehrke (2003)).
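A minimal version of such an implementation (class and method names are ours, and the bucket size is an assumed parameter), indexing buckets by the least significant bits of the hash as described:

```python
class Page:
    """A bucket holding up to SIZE key/value pairs, with its own local depth."""
    SIZE = 4                              # assumed bucket capacity for this sketch

    def __init__(self, local_depth=0):
        self.map = {}
        self.local_depth = local_depth

    def full(self):
        return len(self.map) >= Page.SIZE

class ExtendibleHash:
    def __init__(self):
        self.global_depth = 0
        self.directory = [Page()]         # 2^global_depth entries

    def _index(self, key):
        # least significant global_depth bits select the directory slot
        return hash(key) & ((1 << self.global_depth) - 1)

    def get(self, key):
        return self.directory[self._index(key)].map.get(key)

    def put(self, key, value):
        page = self.directory[self._index(key)]
        if key in page.map or not page.full():
            page.map[key] = value
            return
        if page.local_depth == self.global_depth:
            # directory doubling: copy the whole directory as one block
            self.directory += self.directory
            self.global_depth += 1
        self._split(page)
        self.put(key, value)              # retry with the refined directory

    def _split(self, page):
        page.local_depth += 1
        bit = 1 << (page.local_depth - 1) # the newly significant bit
        p0, p1 = Page(page.local_depth), Page(page.local_depth)
        for k, v in page.map.items():     # redistribute the old bucket's entries
            (p1 if hash(k) & bit else p0).map[k] = v
        for i, p in enumerate(self.directory):
            if p is page:                 # repoint every alias of the old bucket
                self.directory[i] = p1 if i & bit else p0
```

A quick exercise of the structure: inserting 100 integer keys forces several directory doublings and bucket splits, after which every key is still retrievable.

```python
h = ExtendibleHash()
for i in range(100):
    h.put(i, i * i)
assert all(h.get(i) == i * i for i in range(100))
assert h.get(1000) is None
```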
https://en.wikipedia.org/wiki/Extendible_hashing
Semantic folding theory describes a procedure for encoding the semantics of natural language text in a semantically grounded binary representation. This approach provides a framework for modelling how language data is processed by the neocortex.[1]

Semantic folding theory draws inspiration from Douglas R. Hofstadter's Analogy as the Core of Cognition, which suggests that the brain makes sense of the world by identifying and applying analogies.[2] The theory hypothesises that semantic data must therefore be introduced to the neocortex in such a form as to allow the application of a similarity measure, and offers, as a solution, the sparse binary vector employing a two-dimensional topographic semantic space as a distributional reference frame. The theory builds on the computational theory of the human cortex known as hierarchical temporal memory (HTM), and positions itself as a complementary theory for the representation of language semantics. A particular strength claimed by this approach is that the resulting binary representation enables complex semantic operations to be performed simply and efficiently at the most basic computational level.

Analogous to the structure of the neocortex, semantic folding theory posits the implementation of a semantic space as a two-dimensional grid. This grid is populated by context-vectors[note 1] in such a way as to place similar context-vectors closer to each other, for instance by using competitive learning principles. This vector space model is presented in the theory as an equivalence to the well-known word space model[3] described in the information retrieval literature.

Given a semantic space (implemented as described above), a word-vector[note 2] can be obtained for any given word Y by employing the following algorithm: The result of this process will be a word-vector containing all the contexts in which the word Y appears, and will therefore be representative of the semantics of that word in the semantic space.
It can be seen that the resulting word-vector is also in a sparse distributed representation (SDR) format [Schütze, 1993] & [Sahlgreen, 2006].[3][4] Some properties of word-SDRs that are of particular interest with respect to computational semantics are:[5]

Semantic spaces[note 3][6] in the natural language domain aim to create representations of natural language that are capable of capturing meaning. The original motivation for semantic spaces stems from two core challenges of natural language: vocabulary mismatch (the fact that the same meaning can be expressed in many ways) and ambiguity of natural language (the fact that the same term can have several meanings).

The application of semantic spaces in natural language processing (NLP) aims at overcoming limitations of rule-based or model-based approaches operating on the keyword level. The main drawback of these approaches is their brittleness, and the large manual effort required to create either rule-based NLP systems or training corpora for model learning.[7][8] Rule-based and machine learning-based models are fixed on the keyword level and break down if the vocabulary differs from that defined in the rules or from the training material used for the statistical models.

Research in semantic spaces dates back more than 20 years. In 1996, two papers were published that raised a lot of attention around the general idea of creating semantic spaces: latent semantic analysis[9] from Microsoft and Hyperspace Analogue to Language[10] from the University of California. However, their adoption was limited by the large computational effort required to construct and use those semantic spaces. A breakthrough with regard to the accuracy of modelling associative relations between words (e.g. "spider-web", "lighter-cigarette", as opposed to synonymous relations such as "whale-dolphin", "astronaut-driver") was achieved by explicit semantic analysis (ESA)[11] in 2007.
ESA was a novel (non-machine-learning) approach that represented words in the form of vectors with 100,000 dimensions (where each dimension represents an article in Wikipedia). However, practical applications of the approach are limited by the large number of dimensions required in the vectors. More recently, advances in neural network techniques, in combination with other new approaches (tensors), led to a host of new developments: Word2vec[12] from Google and GloVe[13] from Stanford University.

Semantic folding represents a novel, biologically inspired approach to semantic spaces, where each word is represented as a sparse binary vector with 16,000 dimensions (a semantic fingerprint) in a 2D semantic map (the semantic universe). Sparse binary representations are advantageous in terms of computational efficiency, and allow for the storage of very large numbers of possible patterns.[5]

The topological distribution over a two-dimensional grid (outlined above) lends itself to a bitmap-type visualization of the semantics of any word or text, where each active semantic feature can be displayed as, for example, a pixel. As can be seen in the images shown here, this representation allows for a direct visual comparison of the semantics of two (or more) linguistic items. Image 1 clearly demonstrates that the two disparate terms "dog" and "car" have, as expected, very obviously different semantics. Image 2 shows that only one of the meaning contexts of "jaguar", that of "Jaguar" the car, overlaps with the meaning of Porsche (indicating partial similarity). Other meaning contexts of "jaguar", e.g. "jaguar" the animal, clearly have different, non-overlapping contexts.

The visualization of semantic similarity using semantic folding bears a strong resemblance to the fMRI images produced in a research study conducted by A. G. Huth et al.,[14][15] in which it is claimed that words are grouped in the brain by meaning. Voxels, little volume segments of the brain, were found to follow a pattern in which semantic information is represented along the boundary of the visual cortex, with visual and linguistic categories represented on the posterior and anterior sides, respectively.[16][17][18]
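On binary fingerprints, the kind of comparison shown in these images reduces to set operations on the active bit positions. A sketch, with vocabulary and bit positions invented purely for illustration:

```python
def overlap(fp_a, fp_b):
    """Number of shared active bits of two semantic fingerprints (sets of indices)."""
    return len(fp_a & fp_b)

def similarity(fp_a, fp_b):
    """Normalized overlap (Jaccard index) between two sparse binary fingerprints."""
    return len(fp_a & fp_b) / len(fp_a | fp_b)

# toy fingerprints over a 16,000-bit space; the indices are made up
dog    = {12, 77, 403, 1500, 9001}
car    = {5, 88, 400, 1999, 15000}
jaguar = {5, 88, 3002, 6100, 7250}     # shares bits with "car" but not "dog"

assert overlap(dog, car) == 0          # disjoint meanings: no shared bits
assert overlap(jaguar, car) > 0        # partial overlap via the car sense
assert 0.0 <= similarity(jaguar, car) <= 1.0
```

Because the vectors are sparse, these comparisons touch only the few active bits rather than all 16,000 positions, which is the computational advantage claimed for the representation.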
https://en.wikipedia.org/wiki/Semantic_folding
In computing, the star schema or star model is the simplest style of data mart schema and is the approach most widely used to develop data warehouses and dimensional data marts.[1] The star schema consists of one or more fact tables referencing any number of dimension tables. The star schema is an important special case of the snowflake schema, and is more effective for handling simpler queries.[2]

The star schema gets its name from the physical model's[3] resemblance to a star shape, with a fact table at its center and the dimension tables surrounding it representing the star's points.

The star schema separates business process data into facts, which hold the measurable, quantitative data about a business, and dimensions, which are descriptive attributes related to fact data. Examples of fact data include sales price, sale quantity, and time, distance, speed and weight measurements. Related dimension attribute examples include product models, product colors, product sizes, geographic locations, and salesperson names. A star schema that has many dimensions is sometimes called a centipede schema.[4] Having dimensions of only a few attributes, while simpler to maintain, results in queries with many table joins and makes the star schema less easy to use.

Fact tables record measurements or metrics for a specific event. Fact tables generally consist of numeric values, and foreign keys to dimensional data where descriptive information is kept.[4] Fact tables are designed to a low level of uniform detail (referred to as "granularity" or "grain"), meaning facts can record events at a very atomic level. This can result in the accumulation of a large number of records in a fact table over time. Fact tables are defined as one of three types: Fact tables are generally assigned a surrogate key to ensure each row can be uniquely identified. This key is a simple primary key.
Dimension tables usually have a relatively small number of records compared to fact tables, but each record may have a very large number of attributes to describe the fact data. Dimensions can define a wide variety of characteristics, but some of the most common attributes defined by dimension tables include: Dimension tables are generally assigned a surrogate primary key, usually a single-column integer data type, mapped to the combination of dimension attributes that form the natural key.

Star schemas are denormalized, meaning the typical rules of normalization applied to transactional relational databases are relaxed during star-schema design and implementation. The benefits of star-schema denormalization are:

Consider a database of sales, perhaps from a store chain, classified by date, store and product. The image of the schema to the right is a star schema version of the sample schema provided in the snowflake schema article. Fact_Sales is the fact table, and there are three dimension tables: Dim_Date, Dim_Store and Dim_Product. Each dimension table has a primary key on its Id column, relating to one of the columns (viewed as rows in the example schema) of the Fact_Sales table's three-column (compound) primary key (Date_Id, Store_Id, Product_Id). The non-primary-key Units_Sold column of the fact table in this example represents a measure or metric that can be used in calculations and analysis. The non-primary-key columns of the dimension tables represent additional attributes of the dimensions (such as the Year of the Dim_Date dimension). For example, the following query answers how many TV sets have been sold, for each brand and country, in 1997:
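The query text itself is not included in this extract. A hedged reconstruction, using the table and key names above but with invented attribute columns (Brand, Country, Product_Category, Year) and invented sample rows, can be run against an in-memory SQLite database:

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
# Star schema: one fact table, three dimension tables. The attribute columns
# are illustrative; only the table names and keys come from the text above.
cur.executescript("""
CREATE TABLE Dim_Date (Id INTEGER PRIMARY KEY, Year INTEGER);
CREATE TABLE Dim_Store (Id INTEGER PRIMARY KEY, Country TEXT);
CREATE TABLE Dim_Product (Id INTEGER PRIMARY KEY, Brand TEXT, Product_Category TEXT);
CREATE TABLE Fact_Sales (Date_Id INTEGER, Store_Id INTEGER, Product_Id INTEGER,
                         Units_Sold INTEGER,
                         PRIMARY KEY (Date_Id, Store_Id, Product_Id));
INSERT INTO Dim_Date VALUES (1, 1997), (2, 1998);
INSERT INTO Dim_Store VALUES (1, 'Germany'), (2, 'France');
INSERT INTO Dim_Product VALUES (1, 'Acme', 'tv'), (2, 'Acme', 'radio');
INSERT INTO Fact_Sales VALUES (1, 1, 1, 10), (1, 2, 1, 5), (2, 1, 1, 7), (1, 1, 2, 99);
""")
rows = cur.execute("""
SELECT P.Brand, S.Country, SUM(F.Units_Sold)
FROM Fact_Sales F
JOIN Dim_Date D ON F.Date_Id = D.Id
JOIN Dim_Store S ON F.Store_Id = S.Id
JOIN Dim_Product P ON F.Product_Id = P.Id
WHERE D.Year = 1997 AND P.Product_Category = 'tv'
GROUP BY P.Brand, S.Country
ORDER BY S.Country
""").fetchall()
# the 1998 sale and the non-TV product are filtered out via dimension attributes
assert rows == [('Acme', 'France', 5), ('Acme', 'Germany', 10)]
```

Note the characteristic star-schema shape of the query: the fact table joined once to each dimension, filters expressed on dimension attributes, and aggregation over the fact table's measure.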
https://en.wikipedia.org/wiki/Star_schema
The WDR paper computer or Know-how Computer is an educational model of a computer consisting, in the simplest case, only of a pen, a sheet of paper, and individual matches.[1] This allows anyone interested to learn how to program without having an electronic computer at their disposal. The paper computer was created in the early 1980s, when computer access was not yet widespread in Germany, to allow people to familiarize themselves with basic computer operation and assembly-like programming languages. It was distributed in over 400,000 copies and, at its time, was among the computers with the widest circulation.

The Know-how Computer was developed by Wolfgang Back and Ulrich Rohde and was first presented in the television program WDR Computerclub (broadcast by Westdeutscher Rundfunk) in 1983. It was also published in the German computer magazines mc and PC Magazin.[2]

The original printed version of the paper computer has up to 21 lines of code on the left and eight registers on the right, which are represented as boxes that contain as many matches as the value of the corresponding register.[3] A pen is used to indicate the line of code which is about to be executed. The user steps through the program, adding and subtracting matches from the appropriate registers and following program flow until the stop instruction is encountered. The instruction set of five commands is small but Turing complete, and therefore enough to represent all computable functions: In the original newspaper article about this computer, it was written slightly differently (translation):[4]

An emulator for Windows is available on Wolfgang Back's website,[5] but a JavaScript emulator also exists.[6] Emulators place fewer restrictions on line count or the number of registers, allowing longer and more complex programs. The paper computer's method of operation is nominally based on a register machine by Elmar Cohors-Fresenborg,[2][7] but follows more closely the approach of John Cedric Shepherdson and Howard E. Sturgis in their Shepherdson–Sturgis register machine model.[8]

A derived version of the paper computer is used as a "Know-How Computer" in Namibian school education.[9]
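The five-command machine can be emulated in a few lines. The sketch below assumes a common rendering of the commands (inc, dec, jmp, stp, and isz meaning "skip the next line if the register is zero"); the exact original wording differs, so treat the semantics here as an assumption, and consult the cited sources for the authoritative version:

```python
def run(program, registers):
    """Execute a Know-how-Computer-style program.

    program: list of (op, arg) tuples, 0-indexed here (the printed version
    numbers its lines from 1); registers: list of non-negative integers
    (the matchboxes). Assumed semantics: isz skips one line when the
    register is zero.
    """
    pc = 0
    while True:
        op, arg = program[pc]
        if op == "stp":                       # stop: the program is finished
            return registers
        elif op == "inc":                     # add a match to register arg
            registers[arg] += 1
        elif op == "dec":                     # remove a match from register arg
            registers[arg] -= 1
        elif op == "jmp":                     # move the pen to line arg
            pc = arg
            continue
        elif op == "isz" and registers[arg] == 0:
            pc += 2                           # register empty: skip the next line
            continue
        pc += 1

# r1 := r1 + r0, the classic introductory exercise for this machine
addition = [
    ("isz", 0),     # if r0 is zero, skip the jmp and hit the stop
    ("jmp", 3),
    ("stp", None),
    ("dec", 0),     # move one match from r0 ...
    ("inc", 1),     # ... to r1
    ("jmp", 0),
]
assert run(addition, [3, 4]) == [0, 7]
```

The loop transfers matches one at a time from register 0 to register 1, checking for emptiness before each transfer, exactly the style of program the printed sheet was designed for.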
https://en.wikipedia.org/wiki/WDR_paper_computer
An RF connector (radio frequency connector) is an electrical connector designed to work at radio frequencies in the multi-megahertz range. RF connectors are typically used with coaxial cables and are designed to maintain the shielding that the coaxial design offers. Better models also minimize the change in transmission line impedance at the connection, in order to reduce signal reflection and power loss.[1] As the frequency increases, transmission line effects become more important, with small impedance variations from connectors causing the signal to reflect rather than pass through. An RF connector must not allow external signals into the circuit through electromagnetic interference and capacitive pickup.

Mechanically, RF connectors may provide a fastening mechanism (thread, bayonet, braces, blind mate) and springs for a low-ohmic electric contact while sparing the gold surface, thus allowing very high mating cycles and reducing the insertion force. Research activity in the area of radio-frequency circuit design surged in the 2000s, in direct response to the enormous market demand for inexpensive, high-data-rate wireless transceivers.[2]

Common types of RF connectors are used for television receivers, two-way radio, Wi-Fi PCIe cards with removable antennas, and industrial or scientific measurement instruments using radio frequencies.
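The effect of an impedance step at a connector can be quantified with the standard voltage reflection coefficient Γ = (Z₂ − Z₁)/(Z₂ + Z₁); the fraction of incident power reflected is |Γ|². A small sketch (the impedance values are chosen purely for illustration):

```python
def reflection_coefficient(z1, z2):
    """Voltage reflection coefficient at a step from impedance z1 to z2 (ohms)."""
    return (z2 - z1) / (z2 + z1)

def reflected_power_fraction(z1, z2):
    """Fraction of incident power reflected at the impedance step."""
    return reflection_coefficient(z1, z2) ** 2

# a 52-ohm connector on a 50-ohm line: a small but nonzero mismatch
gamma = reflection_coefficient(50.0, 52.0)
assert abs(gamma - 2 / 102) < 1e-12
assert reflected_power_fraction(50.0, 52.0) < 0.001   # well under 0.1% of power
```

A perfectly matched connector (Z₂ = Z₁) gives Γ = 0; the formula makes concrete why even small impedance variations matter more as frequency, and hence sensitivity to transmission line effects, increases.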
https://en.wikipedia.org/wiki/RF_connector
A latent space, also known as a latent feature space or embedding space, is an embedding of a set of items within a manifold in which items resembling each other are positioned closer to one another. Position within the latent space can be viewed as being defined by a set of latent variables that emerge from the resemblances between the objects. In most cases, the dimensionality of the latent space is chosen to be lower than the dimensionality of the feature space from which the data points are drawn, making the construction of a latent space an example of dimensionality reduction, which can also be viewed as a form of data compression.[1] Latent spaces are usually fit via machine learning, and they can then be used as feature spaces in machine learning models, including classifiers and other supervised predictors.

The interpretation of the latent spaces of machine learning models is an active field of study, but latent space interpretation is difficult to achieve. Due to the black-box nature of machine learning models, the latent space may be completely unintuitive. Additionally, the latent space may be high-dimensional, complex, and nonlinear, which may add to the difficulty of interpretation.[2] Some visualization techniques have been developed to connect the latent space to the visual world, but there is often not a direct connection between the latent space interpretation and the model itself. Such techniques include t-distributed stochastic neighbor embedding (t-SNE), where the latent space is mapped to two dimensions for visualization. Latent space distances lack physical units, so the interpretation of these distances may depend on the application.[3]

Several embedding models have been developed to perform this transformation to create latent space embeddings given a set of data items and a similarity function. These models learn the embeddings by leveraging statistical techniques and machine learning algorithms.
Here are some commonly used embedding models:

Multimodality refers to the integration and analysis of multiple modes or types of data within a single model or framework. Embedding multimodal data involves capturing relationships and interactions between different data types, such as images, text, audio, and structured data.

Multimodal embedding models aim to learn joint representations that fuse information from multiple modalities, allowing for cross-modal analysis and tasks. These models enable applications like image captioning, visual question answering, and multimodal sentiment analysis. To embed multimodal data, specialized architectures such as deep multimodal networks or multimodal transformers are employed. These architectures combine different types of neural network modules to process and integrate information from various modalities. The resulting embeddings capture the complex relationships between different data types, facilitating multimodal analysis and understanding.

Embedding latent space and multimodal embedding models have found numerous applications across various domains:
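As a minimal, concrete illustration of constructing a latent space by dimensionality reduction, the sketch below uses PCA computed via NumPy's SVD, one simple linear embedding technique; the toy data, dimensions, and noise level are assumptions chosen for illustration only:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 100 points in a 10-D feature space that actually lie
# near a 2-D subspace, plus a little noise.
latent_true = rng.normal(size=(100, 2))
mixing = rng.normal(size=(2, 10))
X = latent_true @ mixing + 0.01 * rng.normal(size=(100, 10))

# PCA via SVD: project onto the top-2 principal directions to get
# a 2-D latent space (a linear embedding / dimensionality reduction).
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
Z = Xc @ Vt[:2].T          # latent coordinates, shape (100, 2)

# Items that are similar in feature space remain close in the latent
# space; distances from the first item can be used for retrieval.
d = np.linalg.norm(Z - Z[0], axis=1)
```

Nonlinear embedding models (autoencoders, t-SNE, multimodal transformers) replace the linear projection here but serve the same purpose: a lower-dimensional space in which proximity encodes resemblance.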
https://en.wikipedia.org/wiki/Latent_space
Thing theory is a branch of critical theory that focuses on human–object interactions in literature and culture. It borrows from Heidegger's distinction between objects and things, which posits that an object becomes a thing when it can no longer serve its common function.[1] The Thing in thing theory is conceptually like Jacques Lacan's Real; Felluga states that it is influenced by actor–network theory and the work of Bruno Latour.[2]

For University of Chicago professor Bill Brown, objects are items for which subjects have a known and clear sense of place, use and role.[3] Things, on the other hand, manifest themselves once they interact with our bodies unexpectedly, break down, malfunction, shed their encoded social values, or elude our understanding.[3] When one encounters an object which breaks outside of its expected, recognizable use, it causes a moment of judgement, which in turn causes a historical or narrative reconfiguration between the subject and the object, which Brown refers to as thingness.[3] The theory was largely created by Brown, who edited a special issue of Critical Inquiry on it in 2001[4] and published a monograph on the subject entitled A Sense of Things.[5]

As Brown writes in his essay "Thing Theory":

We begin to confront the thingness of objects when they stop working for us: when the drill breaks, when the car stalls, when the window gets filthy, when their flow within the circuits of production and distribution, consumption and exhibition, has been arrested, however momentarily. The story of objects asserting themselves as things, then, is the story of a changed relationship to the human subject and thus the story of how the thing really names less an object than a particular subject-object relation.[5] As they circulate through our lives, we look through objects (to see what they disclose about history, society, nature, or culture - above all, what they disclose about us), but we only catch a glimpse of things.
Thingness can also extend to close interactions with the subject's body. Brown points to encounters like "cut[ting] your finger on a sheet of paper" or "trip[ping] over some toy" to argue that we are "caught up in things" and the "body is a thing among things."[3]

Thing theory is particularly well suited to the study of modernism, due to the materialist preoccupations of modernist poets such as William Carlos Williams, who declared that there should be "No ideas but in things", or T. S. Eliot's idea of the objective correlative.[6] Thing theory has also found a home in the study of contemporary maker culture, which applies Brown's aesthetic theories to material practices of misuse.[7] Recent critics have also applied thing theory to hoarding practices.[8]

Thing theory also has potential applications in the field of anthropology. Brown refers to Cornelius Castoriadis, who notes how perceptions of objects vary in cross-cultural communication. Castoriadis states that the "perception of things" for an individual from one society, for instance, will be the perception of things "inhabited" and "animated", whereas an individual from another society may view things as "inert instruments, objects of possession".[9] Brown remarks that thingness can result when an object from a previous historical epoch is viewed in the present. He states that "however materially stable objects may seem, they are, let us say, different things in different scenes." He cites Nicholas Thomas, who writes: "As socially and culturally salient entities, objects change in defiance of their material stability. The category to which a thing belongs, the emotion and judgment it prompts, and narrative it recalls, are all historically refigured."[3][10]

Brown remarks how thing theory can be applied to understand perceptions of technological changes. He uses the example of a confused museum goer seeing Claes Oldenburg's Typewriter Eraser, Scale X and asking "How did that form ever function?"
In this sense, Oldenburg's deliberate attempt to turn an object into a thing "expresses the power of this particular work to dramatize a generational divide and to stage (to melodramatize, even) the question of obsolescence."[3]

Critics including Severin Fowles of Columbia University and architect Thom Moran at the University of Michigan have begun to organize classes on "Thing Theory" in relation to literature and culture.[11] Fowles describes a blind spot in thing theory, which he attributes to a post-human, post-colonialist attention to physical presence: it fails to address the influence of "non-things, negative spaces, lost or forsaken objects, voids or gaps - absences, in other words, that also stand before us as entity-like presences with which we must contend."[12] For example, Fowles explains how a human subject is required to understand the difference between a set of keys and a missing set of keys, yet this anthropocentric awareness is absent from thing theory.
https://en.wikipedia.org/wiki/Thing_theory
In statistics, identifiability is a property which a model must satisfy for precise inference to be possible. A model is identifiable if it is theoretically possible to learn the true values of this model's underlying parameters after obtaining an infinite number of observations from it. Mathematically, this is equivalent to saying that different values of the parameters must generate different probability distributions of the observable variables. Usually the model is identifiable only under certain technical restrictions, in which case the set of these requirements is called the identification conditions.

A model that fails to be identifiable is said to be non-identifiable or unidentifiable: two or more parametrizations are observationally equivalent. In some cases, even though a model is non-identifiable, it is still possible to learn the true values of a certain subset of the model parameters. In this case we say that the model is partially identifiable. In other cases it may be possible to learn the location of the true parameter up to a certain finite region of the parameter space, in which case the model is set identifiable.

Aside from strictly theoretical exploration of the model properties, identifiability can be referred to in a wider scope when a model is tested with experimental data sets, using identifiability analysis.[1]

Let 𝒫 = {P_θ : θ ∈ Θ} be a statistical model with parameter space Θ.
We say that 𝒫 is identifiable if the mapping θ ↦ P_θ is one-to-one:[2]

P_θ1 = P_θ2  ⇒  θ1 = θ2  for all θ1, θ2 ∈ Θ.

This definition means that distinct values of θ should correspond to distinct probability distributions: if θ1 ≠ θ2, then also P_θ1 ≠ P_θ2.[3] If the distributions are defined in terms of the probability density functions (pdfs), then two pdfs should be considered distinct only if they differ on a set of non-zero measure (for example, the two functions f1(x) = 1{0 ≤ x < 1} and f2(x) = 1{0 ≤ x ≤ 1} differ only at the single point x = 1, a set of measure zero, and thus cannot be considered distinct pdfs).

Identifiability of the model in the sense of invertibility of the map θ ↦ P_θ is equivalent to being able to learn the model's true parameter if the model can be observed indefinitely long. Indeed, if {X_t} ⊆ S is the sequence of observations from the model, then by the strong law of large numbers,

(1/n) Σ_{t=1}^{n} 1{X_t ∈ A} → P0(A)  almost surely as n → ∞, for every measurable set A ⊆ S

(here 1{...} is the indicator function). Thus, with an infinite number of observations we will be able to find the true probability distribution P0 in the model, and since the identifiability condition above requires that the map θ ↦ P_θ be invertible, we will also be able to find the true value of the parameter which generated the distribution P0.

Let 𝒫 be the normal location-scale family:

𝒫 = { f_θ(x) = (1/(√(2π) σ)) e^{−(x−μ)²/(2σ²)} : θ = (μ, σ), μ ∈ ℝ, σ > 0 }.

Equating f_θ1 and f_θ2 and taking logarithms gives a quadratic polynomial in x; this expression is equal to zero for almost all x only when all its coefficients are equal to zero, which is only possible when |σ1| = |σ2| and μ1 = μ2. Since the scale parameter σ is restricted to be greater than zero, we conclude that the model is identifiable: f_θ1 = f_θ2 ⇔ θ1 = θ2.

Let 𝒫 be the standard linear regression model:

y = β′x + ε,  E[ε | x] = 0

(where ′ denotes matrix transpose). Then the parameter β is identifiable if and only if the matrix E[xx′] is invertible. Thus, this is the identification condition in the model.
Suppose 𝒫 is the classical errors-in-variables linear model:

y = βx* + ε,  x = x* + η,

where (ε, η, x*) are jointly normal independent random variables with zero expected value and unknown variances, and only the variables (x, y) are observed. Then this model is not identifiable:[4] only the product βσ²_* is (where σ²_* is the variance of the latent regressor x*). This is also an example of a set identifiable model: although the exact value of β cannot be learned, we can guarantee that it must lie somewhere in the interval (β_yx, 1/β_xy), where β_yx is the coefficient in the OLS regression of y on x, and β_xy is the coefficient in the OLS regression of x on y.[5]

If we abandon the normality assumption and require that x* were not normally distributed, retaining only the independence condition ε ⊥ η ⊥ x*, then the model becomes identifiable.[4]
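The set-identification bound above can be checked numerically. This is an illustrative simulation (the variances, sample size, and true β below are assumptions chosen for the demonstration): with measurement error in x, the OLS slope of y on x underestimates β, while the reciprocal of the OLS slope of x on y overestimates it, bracketing the true value.

```python
import numpy as np

rng = np.random.default_rng(42)
n, beta = 100_000, 2.0

x_star = rng.normal(0.0, 1.0, n)       # latent regressor x*
x = x_star + rng.normal(0.0, 0.5, n)   # observed regressor, with error eta
y = beta * x_star + rng.normal(0.0, 0.5, n)

b_yx = np.cov(y, x)[0, 1] / np.var(x)  # OLS slope of y on x (attenuated)
b_xy = np.cov(x, y)[0, 1] / np.var(y)  # OLS slope of x on y

# beta is only set-identified: it lies in the interval (b_yx, 1 / b_xy).
print(b_yx, beta, 1.0 / b_xy)
```

With these variances the theoretical endpoints are β·σ²_*/(σ²_* + σ²_η) = 1.6 and (β²σ²_* + σ²_ε)/(βσ²_*) = 2.125, so the true β = 2 indeed falls strictly inside the interval.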
https://en.wikipedia.org/wiki/Model_identification
Joe or JOE may refer to:
https://en.wikipedia.org/wiki/Joe_(disambiguation)
In probability theory, a Markov kernel (also known as a stochastic kernel or probability kernel) is a map that, in the general theory of Markov processes, plays the role that the transition matrix does in the theory of Markov processes with a finite state space.[1]

Let (X, 𝒜) and (Y, ℬ) be measurable spaces. A Markov kernel with source (X, 𝒜) and target (Y, ℬ), sometimes written as κ : (X, 𝒜) → (Y, ℬ), is a function κ : ℬ × X → [0, 1] with the following properties:

In other words it associates to each point x ∈ X a probability measure κ(dy|x) : B ↦ κ(B, x) on (Y, ℬ) such that, for every measurable set B ∈ ℬ, the map x ↦ κ(B, x) is measurable with respect to the σ-algebra 𝒜.[2]

Take X = Y = ℤ and 𝒜 = ℬ = 𝒫(ℤ) (the power set of ℤ). Then a Markov kernel is fully determined by the probability it assigns to singletons {m}, m ∈ Y = ℤ, for each n ∈ X = ℤ. Now the random walk κ that goes to the right with probability p and to the left with probability 1 − p is defined by

κ({m} | n) = p δ_{m, n+1} + (1 − p) δ_{m, n−1},

where δ is the Kronecker delta. The transition probabilities P(m|n) = κ({m}|n) for the random walk are equivalent to the Markov kernel.

More generally, take X and Y both countable and 𝒜 = 𝒫(X), ℬ = 𝒫(Y).
Again a Markov kernel is defined by the probability it assigns to singleton sets for each i ∈ X. We define a Markov process by defining a transition probability P(j|i) = K_{ji}, where the numbers K_{ji} define a (countable) stochastic matrix (K_{ji}), i.e.

K_{ji} ≥ 0  and  Σ_j K_{ji} = 1 for all i ∈ X.

We then define

κ({j} | i) = K_{ji}.

Again the transition probability, the stochastic matrix and the Markov kernel are equivalent reformulations.

Let ν be a measure on (Y, ℬ), and k : Y × X → [0, ∞] a measurable function with respect to the product σ-algebra 𝒜 ⊗ ℬ such that

∫_Y k(y, x) ν(dy) = 1 for all x ∈ X;

then κ(dy|x) = k(y, x) ν(dy), i.e. the mapping

κ(B | x) = ∫_B k(y, x) ν(dy), B ∈ ℬ,

defines a Markov kernel.[3] This example generalises the countable Markov process example where ν was the counting measure. Moreover it encompasses other important examples such as the convolution kernels, in particular the Markov kernels defined by the heat equation. The latter example includes the Gaussian kernel on X = Y = ℝ with ν(dx) = dx the standard Lebesgue measure and

κ_t(dy | x) = (1/√(2πt)) e^{−(y−x)²/(2t)} dy, t > 0.

Take (X, 𝒜) and (Y, ℬ) arbitrary measurable spaces, and let f : X → Y be a measurable function. Now define κ(dy|x) = δ_{f(x)}(dy), i.e.

κ(B | x) = 1_{f⁻¹(B)}(x) = 1_B(f(x)) for all B ∈ ℬ.

Note that the indicator function 1_{f⁻¹(B)} is 𝒜-measurable for all B ∈ ℬ iff f is measurable. This example allows us to think of a Markov kernel as a generalised function with a (in general) random rather than certain value. That is, it is a multivalued function where the values are not equally weighted.
As a less obvious example, take X = ℕ, 𝒜 = 𝒫(ℕ), and (Y, ℬ) the real numbers ℝ with the standard σ-algebra of Borel sets. Then a Markov kernel can be defined using partial sums of random variables, where x is the number of elements at state n, the ξ_i are i.i.d. random variables (usually with mean 0), and 1_B is the indicator function. For the simple case of coin flips this models the different levels of a Galton board.

Given measurable spaces (X, 𝒜) and (Y, ℬ), we consider a Markov kernel κ : ℬ × X → [0, 1] as a morphism κ : X → Y. Intuitively, rather than assigning to each x ∈ X a sharply defined point y ∈ Y, the kernel assigns a "fuzzy" point in Y which is only known with some level of uncertainty, much like actual physical measurements. If we have a third measurable space (Z, 𝒞), and probability kernels κ : X → Y and λ : Y → Z, we can define a composition λ ∘ κ : X → Z by the Chapman–Kolmogorov equation

(λ ∘ κ)(dz | x) = ∫_Y λ(dz | y) κ(dy | x).

The composition is associative by the monotone convergence theorem, and the identity function considered as a Markov kernel (i.e. the delta measure κ_1(dx′|x) = δ_x(dx′)) is the unit for this composition. This composition defines the structure of a category on the measurable spaces with Markov kernels as morphisms, first defined by Lawvere,[4] the category of Markov kernels.
A composition of a probability space (X, 𝒜, P_X) and a probability kernel κ : (X, 𝒜) → (Y, ℬ) defines a probability space (Y, ℬ, P_Y = κ ∘ P_X), where the probability measure is given by

P_Y(B) = ∫_X κ(B | x) P_X(dx), B ∈ ℬ.

Let (X, 𝒜, P) be a probability space and κ a Markov kernel from (X, 𝒜) to some (Y, ℬ). Then there exists a unique measure Q on (X × Y, 𝒜 ⊗ ℬ) such that

Q(A × B) = ∫_A κ(B | x) P(dx) for all A ∈ 𝒜 and B ∈ ℬ.

Let (S, Y) be a Borel space, X a (S, Y)-valued random variable on the measure space (Ω, ℱ, P), and 𝒢 ⊆ ℱ a sub-σ-algebra. Then there exists a Markov kernel κ from (Ω, 𝒢) to (S, Y) such that κ(·, B) is a version of the conditional expectation E[1_{X∈B} ∣ 𝒢] for every B ∈ Y. It is called the regular conditional distribution of X given 𝒢 and is not uniquely defined.

Transition kernels generalize Markov kernels in the sense that for all x ∈ X, the map

B ↦ κ(B | x)

can be any type of (non-negative) measure, not necessarily a probability measure.
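On finite state spaces the definitions above reduce to matrix algebra, which makes a small sketch possible: a Markov kernel is a row-stochastic matrix, Chapman–Kolmogorov composition is matrix multiplication, and pushing a distribution through a kernel is a vector–matrix product. The particular matrices below are arbitrary illustrative examples:

```python
import numpy as np

# A Markov kernel on a finite space is a stochastic matrix:
# kappa[x, y] = kappa({y} | x), each row summing to 1.
kappa = np.array([[0.9, 0.1, 0.0],
                  [0.2, 0.6, 0.2],
                  [0.0, 0.3, 0.7]])   # kernel X -> Y (3 -> 3 states)
lam = np.array([[1.0, 0.0],
                [0.5, 0.5],
                [0.0, 1.0]])          # kernel Y -> Z (3 -> 2 states)

# Chapman-Kolmogorov composition (lam ∘ kappa): integrating over Y
# becomes matrix multiplication for finite spaces.
composed = kappa @ lam                # kernel X -> Z

# Pushing a distribution P_X through kappa gives P_Y = kappa ∘ P_X:
p_x = np.array([0.5, 0.5, 0.0])
p_y = p_x @ kappa
```

Associativity of the composition and the identity kernel (the identity matrix, playing the role of the delta measure) follow directly from the corresponding matrix facts.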
https://en.wikipedia.org/wiki/Markov_kernel
In physical systems, damping is the loss of energy of an oscillating system by dissipation.[1][2] Damping is an influence within or upon an oscillatory system that has the effect of reducing or preventing its oscillation.[3] Examples of damping include viscous damping in a fluid (see viscous drag), surface friction, radiation,[1] resistance in electronic oscillators, and absorption and scattering of light in optical oscillators. Damping not based on energy loss can be important in other oscillating systems such as those that occur in biological systems and bikes[4] (e.g. suspension (mechanics)). Damping is not to be confused with friction, which is a type of dissipative force acting on a system. Friction can cause or be a factor of damping.

Many systems exhibit oscillatory behavior when they are disturbed from their position of static equilibrium. A mass suspended from a spring, for example, might, if pulled and released, bounce up and down. On each bounce, the system tends to return to its equilibrium position, but overshoots it. Sometimes losses (e.g. frictional) damp the system and can cause the oscillations to gradually decay in amplitude towards zero, or attenuate. The damping ratio is a dimensionless measure, amongst other measures, that characterises how damped a system is. It is denoted by ζ ("zeta") and varies from undamped (ζ = 0) and underdamped (ζ < 1) through critically damped (ζ = 1) to overdamped (ζ > 1).

The behaviour of oscillating systems is often of interest in a diverse range of disciplines that include control engineering, chemical engineering, mechanical engineering, structural engineering, and electrical engineering. The physical quantity that is oscillating varies greatly, and could be the swaying of a tall building in the wind, or the speed of an electric motor, but a normalised, or non-dimensionalised, approach can be convenient in describing common aspects of behavior. Depending on the amount of damping present, a system exhibits different oscillatory behaviors and speeds.
A damped sine wave or damped sinusoid is a sinusoidal function whose amplitude approaches zero as time increases. It corresponds to the underdamped case of damped second-order systems, or underdamped second-order differential equations.[6] Damped sine waves are commonly seen in science and engineering, wherever a harmonic oscillator is losing energy faster than it is being supplied.

A true sine wave starting at time = 0 begins at the origin (amplitude = 0). A cosine wave begins at its maximum value due to its phase difference from the sine wave. A given sinusoidal waveform may be of intermediate phase, having both sine and cosine components. The term "damped sine wave" describes all such damped waveforms, whatever their initial phase.

The most common form of damping, which is usually assumed, is the form found in linear systems. This form is exponential damping, in which the outer envelope of the successive peaks is an exponential decay curve. That is, when you connect the maximum point of each successive curve, the result resembles an exponential decay function. The general equation for an exponentially damped sinusoid may be represented as

y(t) = A e^{−λt} cos(ωt − φ),

where A is the initial amplitude, λ the decay constant, ω the angular frequency, and φ the phase angle. Other important parameters include:

The damping ratio is a dimensionless parameter, usually denoted by ζ (Greek letter zeta),[7] that characterizes the extent of damping in a second-order ordinary differential equation. It is particularly important in the study of control theory. It is also important in the harmonic oscillator. The greater the damping ratio, the more damped a system is. The damping ratio expresses the level of damping in a system relative to critical damping and can be defined using the damping coefficient:

ζ = c / c_c.

The damping ratio is dimensionless, being the ratio of two coefficients of identical units.
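The damped-sinusoid formula can be evaluated directly; the parameter defaults below are arbitrary illustrative choices, not values from the text:

```python
import math

def damped_sine(t, amplitude=1.0, lam=0.5, omega=2.0 * math.pi, phi=0.0):
    """y(t) = A * exp(-lambda * t) * cos(omega * t - phi)."""
    return amplitude * math.exp(-lam * t) * math.cos(omega * t - phi)

# Every sample lies inside the exponential envelope A * exp(-lambda * t),
# which is what "connecting the successive peaks" traces out:
times = [t / 10.0 for t in range(50)]
assert all(abs(damped_sine(t)) <= math.exp(-0.5 * t) + 1e-12 for t in times)
```

With phi = 0 the waveform starts at its maximum (a damped cosine); a nonzero phase gives the intermediate sine/cosine mixtures described above.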
Taking the simple example of a mass-spring-damper model with mass m, damping coefficient c, and spring constant k, where x represents the degree of freedom, the system's equation of motion is given by

m ẍ + c ẋ + k x = 0.

The corresponding critical damping coefficient is

c_c = 2√(km),

and the natural frequency of the system is

ω_n = √(k/m).

Using these definitions, the equation of motion can then be expressed as

ẍ + 2ζω_n ẋ + ω_n² x = 0.

This equation is more general than just the mass-spring-damper system and applies to electrical circuits and to other domains. It can be solved with the approach

x(t) = C e^{st},

where C and s are both complex constants, with s satisfying

s² + 2ζω_n s + ω_n² = 0.

Two such solutions, for the two values of s satisfying the equation, can be combined to make the general real solutions, with oscillatory and decaying properties in several regimes.

The Q factor, damping ratio ζ, and exponential decay rate α are related such that[9]

ζ = 1/(2Q) = α/ω_n.

When a second-order system has ζ < 1 (that is, when the system is underdamped), it has two complex conjugate poles that each have a real part of −α; that is, the decay rate parameter α represents the rate of exponential decay of the oscillations. A lower damping ratio implies a lower decay rate, and so very underdamped systems oscillate for long times.[10] For example, a high quality tuning fork, which has a very low damping ratio, has an oscillation that lasts a long time, decaying very slowly after being struck by a hammer.

For underdamped vibrations, the damping ratio is also related to the logarithmic decrement δ. The damping ratio can be found for any two peaks, even if they are not adjacent.[11] For adjacent peaks[12]

δ = ln(x0/x1),  with  ζ = δ / √(4π² + δ²),

where x0 and x1 are amplitudes of any two successive peaks.
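The mass-spring-damper definitions above (critical damping coefficient, damping ratio, natural frequency) can be collected into a small helper; the numeric values in the usage line are arbitrary examples:

```python
import math

def damping_summary(m, c, k):
    """Damping ratio, natural frequency and regime of a mass-spring-damper.

    m: mass, c: damping coefficient, k: spring constant (SI units).
    """
    c_crit = 2.0 * math.sqrt(k * m)   # critical damping coefficient c_c
    zeta = c / c_crit                 # damping ratio
    omega_n = math.sqrt(k / m)        # natural frequency (rad/s)
    if zeta < 1.0:
        regime = "underdamped"
    elif zeta == 1.0:
        regime = "critically damped"
    else:
        regime = "overdamped"
    return zeta, omega_n, regime

# m = 1 kg, c = 4 N·s/m, k = 100 N/m  ->  c_c = 20 N·s/m
print(damping_summary(1.0, 4.0, 100.0))   # (0.2, 10.0, 'underdamped')
```

The regime classification mirrors the ζ thresholds quoted earlier: below 1 the poles are complex conjugates and the response oscillates; at or above 1 it decays without oscillation.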
As shown in the figure on the right, the decrement can also be measured between positive and negative peaks, where x1, x3 are amplitudes of two successive positive peaks and x2, x4 are amplitudes of two successive negative peaks.

In control theory, overshoot refers to an output exceeding its final, steady-state value.[13] For a step input, the percentage overshoot (PO) is the maximum value minus the step value, divided by the step value. In the case of the unit step, the overshoot is just the maximum value of the step response minus one. The percentage overshoot (PO) is related to the damping ratio (ζ) by

PO = 100 exp(−ζπ / √(1 − ζ²)).

Conversely, the damping ratio (ζ) that yields a given percentage overshoot is given by

ζ = −ln(PO/100) / √(π² + ln²(PO/100)).

When an object is falling through the air, the only force opposing its freefall is air resistance. An object falling through water or oil would slow down at a greater rate, until eventually reaching a steady-state velocity as the drag force comes into equilibrium with the force from gravity. This is the concept of viscous drag, which for example is applied in automatic doors or anti-slam doors.[14]

Electrical systems that operate with alternating current (AC) use resistors to damp LC resonant circuits.[14]

Kinetic energy that causes oscillations is dissipated as heat by electric eddy currents, which are induced by passing through a magnet's poles, either by a coil or an aluminum plate. Eddy currents are a key component of electromagnetic induction, where they set up a magnetic flux directly opposing the oscillating movement, creating a resistive force.[15] In other words, the resistance caused by magnetic forces slows a system down. An example of this concept being applied is the brakes on roller coasters.[16]

Magnetorheological dampers (MR dampers) use magnetorheological fluid, which changes viscosity when subjected to a magnetic field.
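The overshoot relations for an underdamped second-order step response are straightforward to implement and to invert; this sketch uses the two standard formulas, valid for 0 < ζ < 1:

```python
import math

def percent_overshoot(zeta):
    """PO (%) of an underdamped (0 < zeta < 1) second-order step response."""
    return 100.0 * math.exp(-zeta * math.pi / math.sqrt(1.0 - zeta ** 2))

def damping_ratio_from_po(po):
    """Damping ratio that yields a given percentage overshoot."""
    ln_ratio = math.log(po / 100.0)
    return -ln_ratio / math.sqrt(math.pi ** 2 + ln_ratio ** 2)

po = percent_overshoot(0.5)        # about 16.3 %
zeta = damping_ratio_from_po(po)   # recovers 0.5
```

The two functions are exact inverses of each other on (0, 1), which is a convenient self-check when tuning a controller to a target overshoot.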
In this case, magnetorheological damping may be considered an interdisciplinary form of damping with both viscous and magnetic damping mechanisms.[17][18]

Materials have varying degrees of internal damping properties due to microstructural mechanisms within them. This property is sometimes known as damping capacity. In metals, this arises due to movements of dislocations.[19] Metals, as well as ceramics and glass, are known for having very light material damping. By contrast, polymers have a much higher material damping that arises from the energy loss required to continually break and reform the Van der Waals forces between polymer chains. The cross-linking in thermoset plastics causes less movement of the polymer chains, and so the damping is less.

Material damping is best characterized by the loss factor η, expressed in terms of the damping ratio with the frequency ratio included as an additional factor for the case of very light damping, such as in metals or ceramics. This is because many microstructural processes that contribute to material damping are not well modelled by viscous damping, and so the damping ratio varies with frequency. Adding the frequency ratio as a factor typically makes the loss factor constant over a wide frequency range.
https://en.wikipedia.org/wiki/Damped_sine_wave
The conservation movement, also known as nature conservation, is a political, environmental, and social movement that seeks to manage and protect natural resources, including animal, fungus, and plant species, as well as their habitat, for the future. Conservationists are concerned with leaving the environment in a better state than the condition in which they found it.[1] Evidence-based conservation seeks to use high-quality scientific evidence to make conservation efforts more effective.

The early conservation movement evolved out of necessity to maintain natural resources such as fisheries, wildlife management, water, and soil, as well as conservation and sustainable forestry. The contemporary conservation movement has broadened from the early movement's emphasis on use of sustainable yield of natural resources and preservation of wilderness areas to include preservation of biodiversity. Some say the conservation movement is part of the broader and more far-reaching environmental movement, while others argue that they differ both in ideology and practice. Conservation is seen as differing from environmentalism, and it is generally a conservative school of thought which aims to preserve natural resources expressly for their continued sustainable use by humans.[2]

The conservation movement can be traced back to John Evelyn's work Sylva, which was presented as a paper to the Royal Society in 1662. Published as a book two years later, it was one of the most highly influential texts on forestry ever published.[3] Timber resources in England were becoming dangerously depleted at the time, and Evelyn advocated the importance of conserving the forests by managing the rate of depletion and ensuring that cut-down trees were replenished.

Khejarli massacre: The Bishnoi narrate the story of Amrita Devi, a member of the sect who inspired as many as 363 other Bishnois to go to their deaths in protest of the cutting down of Khejri trees on 12 September 1730.
The Maharaja of Jodhpur, Abhay Singh, requiring wood for the construction of a new palace, sent soldiers to cut trees in the village of Khejarli, which was called Jehnad at that time. Noticing their actions, Amrita Devi hugged a tree in an attempt to stop them. Her family then adopted the same strategy, as did other local people when the news spread. She told the soldiers that she considered their actions to be an insult to her faith and that she was prepared to die to save the trees. The soldiers did indeed kill her and others until Abhay Singh was informed of what was going on and intervened to stop the massacre. Some of the 363 Bishnois who were killed protecting the trees were buried in Khejarli, where a simple grave with four pillars was erected. Every year in September, i.e. Shukla Dashmi of Bhadrapad (Hindi month), the Bishnois assemble there to commemorate the sacrifice made by their people to preserve the trees.

The field developed during the 18th century, especially in Prussia and France, where scientific forestry methods were developed. These methods were first applied rigorously in British India from the early 19th century. The government was interested in the use of forest produce and began managing the forests with measures to reduce the risk of wildfire in order to protect the "household" of nature, as it was then termed. This early ecological idea was in order to preserve the growth of delicate teak trees, which were an important resource for the Royal Navy. Concerns over teak depletion were raised as early as 1799 and 1805, when the Navy was undergoing a massive expansion during the Napoleonic Wars; this pressure led to the first formal conservation act, which prohibited the felling of small teak trees.
The first forestry officer was appointed in 1806 to regulate and preserve the trees necessary for shipbuilding.[4] This promising start received a setback in the 1820s and 1830s, when laissez-faire economics and complaints from private landowners brought these early conservation attempts to an end.

In 1837, American poet George Pope Morris published "Woodman, Spare that Tree!", a Romantic poem urging a lumberjack to spare an oak tree of sentimental value. The poem was set to music later that year by Henry Russell, and lines from the song have been quoted by environmentalists.[5]

Conservation was revived in the mid-19th century, with the first practical application of scientific conservation principles to the forests of India. The conservation ethic that began to evolve included three core principles: that human activity damaged the environment, that there was a civic duty to maintain the environment for future generations, and that scientific, empirically based methods should be applied to ensure this duty was carried out. Sir James Ranald Martin was prominent in promoting this ideology, publishing many medico-topographical reports that demonstrated the scale of damage wrought through large-scale deforestation and desiccation, and lobbying extensively for the institutionalization of forest conservation activities in British India through the establishment of Forest Departments.[6] Edward Percy Stebbing warned of the desertification of India. The Madras Board of Revenue started local conservation efforts in 1842, headed by Alexander Gibson, a professional botanist who systematically adopted a forest conservation program based on scientific principles. This was the first case of state management of forests in the world.[7] These local attempts gradually received more attention from the British government as the unregulated felling of trees continued unabated.
In 1850, the British Association in Edinburgh formed a committee to study forest destruction at the behest of Hugh Cleghorn, a pioneer in the nascent conservation movement. He had become interested in forest conservation in Mysore in 1847 and gave several lectures at the Association on the failure of agriculture in India. These lectures influenced the government under Governor-General Lord Dalhousie to introduce the first permanent and large-scale forest conservation program in the world in 1855, a model that soon spread to other colonies, as well as to the United States. In the same year, Cleghorn organised the Madras Forest Department, and in 1860 the department banned the use of shifting cultivation.[8] Cleghorn's 1861 manual, The Forests and Gardens of South India, became the definitive work on the subject and was widely used by forest assistants in the subcontinent.[9] In 1861, the Forest Department extended its remit into the Punjab.[10]

Sir Dietrich Brandis, a German forester, joined the British service in 1856 as superintendent of the teak forests of the Pegu division in eastern Burma. During that time Burma's teak forests were controlled by militant Karen tribals. He introduced the "taungya" system,[11] in which Karen villagers provided labor for clearing, planting, and weeding teak plantations. After seven years in Burma, Brandis was appointed Inspector General of Forests in India, a position he served in for 20 years. He formulated new forest legislation and helped establish research and training institutions; the Imperial Forest School at Dehradun was founded by him.[12][13]

Germans were prominent in the forestry administration of British India. As well as Brandis, Berthold Ribbentrop and Sir William P. D. Schlich brought new methods to Indian conservation, the latter becoming the Inspector-General in 1883 after Brandis stepped down.
Schlich helped to establish the journal Indian Forester in 1874, and became the founding director of the first forestry school in England, at Cooper's Hill, in 1885.[14] He authored the five-volume Manual of Forestry (1889–96) on silviculture, forest management, forest protection, and forest utilization, which became the standard and enduring textbook for forestry students.

The American movement drew its inspiration from 19th-century works that exalted the inherent value of nature, quite apart from human usage. Author Henry David Thoreau (1817–1862) made key philosophical contributions that exalted nature. Thoreau was interested in people's relationship with nature and studied this by living close to nature in a simple life. He published his experiences in the book Walden, which argued that people should become intimately close with nature.[15] The ideas of Sir Dietrich Brandis, Sir William P. D. Schlich, and Carl A. Schenck were also very influential: Gifford Pinchot, the first chief of the USDA Forest Service, relied heavily upon Brandis' advice for introducing professional forest management in the U.S. and on how to structure the Forest Service.[16][17] In 1864, Abraham Lincoln established federal protection for Yosemite, before the first national park (Yellowstone National Park) was created.

Both conservationists and preservationists appeared in political debates during the Progressive Era (the 1890s–early 1920s). There were three main positions: laissez-faire development, efficient use of natural resources under expert management (conservation), and preservation of nature for its own sake. The debate between conservation and preservation reached its peak in the public controversy over the construction of California's Hetch Hetchy dam in Yosemite National Park, which supplies San Francisco's water. Muir, leading the Sierra Club, declared that the valley must be preserved for the sake of its beauty: "No holier temple has ever been consecrated by the heart of man."
President Roosevelt put conservationist issues high on the national agenda.[21] He worked with all the major figures of the movement, especially his chief advisor on the matter, Gifford Pinchot, and was deeply committed to conserving natural resources. He encouraged the Newlands Reclamation Act of 1902 to promote federal construction of dams to irrigate small farms and placed 230 million acres (360,000 sq mi; 930,000 km²) under federal protection. Roosevelt set aside more federal land for national parks and nature preserves than all of his predecessors combined.[22]

Roosevelt established the United States Forest Service, signed into law the creation of five national parks, and signed the 1906 Antiquities Act, under which he proclaimed 18 new national monuments. He also established the first 51 bird reserves, four game preserves, and 150 national forests, including Shoshone National Forest, the nation's first. The area of the United States that he placed under public protection totals approximately 230,000,000 acres (930,000 km²).

Gifford Pinchot had been appointed by McKinley as chief of the Division of Forestry in the Department of Agriculture. In 1905, his department gained control of the national forest reserves. Pinchot promoted private use (for a fee) under federal supervision. In 1907, Roosevelt designated 16 million acres (65,000 km²) of new national forests just minutes before a deadline.[23]

In May 1908, Roosevelt sponsored the Conference of Governors held in the White House, with a focus on natural resources and their most efficient use. Roosevelt delivered the opening address, "Conservation as a National Duty". In 1903, Roosevelt toured the Yosemite Valley with John Muir, who had a very different view of conservation and tried to minimize commercial use of water resources and forests.
Working through the Sierra Club he founded, Muir succeeded in 1905 in having Congress transfer the Mariposa Grove and Yosemite Valley to the federal government.[24] While Muir wanted nature preserved for its own sake, Roosevelt subscribed to Pinchot's formulation, "to make the forest produce the largest amount of whatever crop or service will be most useful, and keep on producing it for generation after generation of men and trees."[25]

Theodore Roosevelt's view of conservation remained dominant for decades; Franklin D. Roosevelt authorised the building of many large-scale dams and water projects, as well as the expansion of the National Forest System to buy out sub-marginal farms. In 1937, the Pittman–Robertson Federal Aid in Wildlife Restoration Act was signed into law, providing funding for state agencies to carry out their conservation efforts.

Environmentalism reemerged on the national agenda in 1970, with Republican Richard Nixon playing a major role, especially through his creation of the Environmental Protection Agency. The debates over public lands and environmental politics played a supporting role in the decline of liberalism and the rise of modern environmentalism. Although Americans consistently rank environmental issues as "important", polling data indicates that in the voting booth voters rank environmental issues low relative to other political concerns. The growth of the Republican Party's political power in the inland West (apart from the Pacific coast) was facilitated by the rise of popular opposition to public lands reform. Successful Democrats in the inland West and Alaska typically take more conservative positions on environmental issues than Democrats from the coastal states.
Conservatives drew on new organizational networks of think tanks, industry groups, and citizen-oriented organizations, and they began to deploy new strategies that affirmed the rights of individuals to their property, to extraction, to hunting and recreation, and to the pursuit of happiness unencumbered by the federal government, at the expense of resource conservation.[26]

In 2019, Bram Büscher and Robert Fletcher proposed the idea of convivial conservation. Convivial conservation draws on social movements and concepts like environmental justice and structural change to create a post-capitalist approach to conservation.[27] It rejects both human–nature dichotomies and capitalistic political economies. Built on a politics of equity, structural change, and environmental justice, convivial conservation is considered a radical theory, as it focuses on the structural political economy of modern nation states and the need to create structural change.[28] Convivial conservation offers a more integrated approach which reconfigures the human–nature relationship so that humans are recognized as a part of nature. The emphasis on nature as being for and by humans creates a human responsibility to care for the environment as a way of caring for themselves. It also redefines nature as not only pristine and untouched, but also cultivated by humans in everyday settings. The theory envisions a long-term process of structural change away from capitalist valuation, in favor of a system emphasizing everyday and local living.[28] Convivial conservation thus frames nature as including humans rather than excluding them from the necessity of conservation. While other conservation theories integrate some of the elements of convivial conservation, none move away from both dichotomies and capitalist valuation principles.
The early years of the environmental and conservation movements were rooted in the safeguarding of game to support the recreation activities of elite white men, such as sport hunting.[29] This led to an economy to support and perpetuate these activities, as well as continued wilderness conservation to serve the corporate interests supplying the hunters with the equipment needed for their sport.[29] Game parks in England and the United States allowed wealthy hunters and fishermen to deplete wildlife, while hunting by Indigenous groups, laborers, the working class, and poor citizens, especially for subsistence, was vigorously monitored.[29] Scholars have shown that the establishment of the U.S. national parks, while setting aside land for preservation, was also a continuation of preserving the land for the recreation and enjoyment of elite white hunters and nature enthusiasts.[29]

While Theodore Roosevelt was one of the leading activists for the conservation movement in the United States, he also believed that threats to the natural world were equally threats to white Americans.
Roosevelt and his contemporaries held the belief that the cities, industries, and factories that were overtaking the wilderness and threatening the native plants and animals were also consuming and threatening the racial vigor that they believed made white Americans superior.[30] Roosevelt believed that white male virility depended on wildlife for its vigor and that, consequently, depleting wildlife would result in a racially weaker nation.[30] This led Roosevelt to support the passing of many immigration restrictions, eugenics laws, and wildlife preservation laws.[30] For instance, Roosevelt established the first national parks through the Antiquities Act of 1906 while also endorsing the removal of Indigenous Americans from their tribal lands within the parks.[31] This move was promoted and endorsed by other leaders of the conservation movement, including Frederick Law Olmsted, a leading landscape architect, conservationist, and supporter of the national park system, and Gifford Pinchot, a leading eugenicist and conservationist.[31] Furthering the economic exploitation of the environment and national parks for wealthy whites was the beginning of ecotourism in the parks, which included allowing some Indigenous Americans to remain so that tourists could get what was considered the full "wilderness experience".[32]

Another long-term supporter, partner, and inspiration to Roosevelt, Madison Grant, was a well-known American eugenicist and conservationist.[30] Grant worked alongside Roosevelt in the American conservation movement and was even secretary and president of the Boone and Crockett Club.[33] In 1916, Grant published the book The Passing of the Great Race, or The Racial Basis of European History, which based its premise on eugenics and outlined a hierarchy of races, with white, "Nordic" men at the top and all other races below.[33] The German translation of this book was used by Nazi Germany as the source for many of their
beliefs[33] and was even proclaimed by Hitler to be his "Bible".[31]

One of the first established conservation agencies in the United States is the National Audubon Society. Founded in 1905, its priority was to protect and conserve various waterbird species.[34] However, the first state-level Audubon group was created in 1896 by Harriet Hemenway and Minna B. Hall to convince women to refrain from buying hats made with bird feathers, a common practice at the time.[34] The organization is named after John Audubon, a naturalist and legendary bird painter.[35] Audubon was also a slaveholder who included many racist tales in his books.[35] Despite his views on racial inequality, Audubon did find Black and Indigenous people to be scientifically useful, often using their local knowledge in his books and relying on them to collect specimens for him.[35]

The ideology of the conservation movement in Germany paralleled that of the U.S. and England.[36] Early German naturalists of the 20th century turned to the wilderness to escape the industrialization of cities. However, many of these early conservationists became part of and influenced the Nazi Party. Like elite and influential Americans of the early 20th century, they embraced eugenics and racism and promoted the idea that Nordic people are superior.[36]

Although the conservation movement developed in Europe in the 18th century, Costa Rica has been heralded as its champion in recent times.[37] Costa Rica hosts an astonishing number of species given its size, having more animal and plant species than the US and Canada combined[38] and hosting over 500,000 species of plants and animals. Despite this, Costa Rica is only 250 miles long and 150 miles wide.
A widely accepted theory for the origin of this unusual density of species is the free mixing of species from both North and South America occurring on this "inter-oceanic" and "inter-continental" landscape.[38] Preserving the natural environment of this fragile landscape has therefore drawn the attention of many international scholars and scientists.

MINAE (Ministry of Environment, Energy and Telecommunications) is responsible for many conservation efforts in Costa Rica, which it carries out through its many agencies, including SINAC (National System of Conservation Areas), FONAFIFO (the national forest fund), and CONAGEBIO (National Commission for Biodiversity Management). Costa Rica has made conservation a national priority and has been at the forefront of preserving its natural environment, with 28% of its land protected in the form of national parks, reserves, and wildlife refuges under the administrative control of SINAC.[39] SINAC has subdivided the country into various zones depending on the ecological diversity of each region, as seen in figure 1. The country has used this ecological diversity to its economic advantage in the form of a thriving ecotourism industry, putting its commitment to nature on display to visitors from across the globe. The tourism market in Costa Rica is estimated to grow by USD 1.34 billion from 2023 to 2028, at a CAGR of 5.76%.

"You know, when we first set up WWF, our objective was to save endangered species from extinction. But we have failed completely; we haven't managed to save a single one. If only we had put all that money into condoms, we might have done some good."
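As a worked check of the tourism growth figures quoted above (USD 1.34 billion of growth from 2023 to 2028 at a 5.76% CAGR), the compound growth formula can be inverted to estimate the implied 2023 market size. The base size below is a derived illustration, not a figure reported in the source:

```python
# Sketch: derive the implied 2023 market size from the reported figures.
# Only the CAGR, the time span, and the absolute growth come from the
# text; the base size is an inference for illustration.

cagr = 0.0576          # compound annual growth rate (5.76%)
years = 5              # 2023 -> 2028
growth_usd_bn = 1.34   # projected absolute growth, USD billions

# Total fractional growth over the period: (1 + r)^n - 1
growth_factor = (1 + cagr) ** years - 1

implied_base = growth_usd_bn / growth_factor
print(f"Implied 2023 market size: ~USD {implied_base:.2f} bn")
```

Running this gives a base of roughly USD 4.1 billion, which is consistent with an absolute gain of USD 1.34 billion compounding at 5.76% per year over five years.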
The World Wide Fund for Nature (WWF) is an international non-governmental organization founded in 1961, working in the field of wilderness preservation and the reduction of human impact on the environment.[42] It was formerly named the "World Wildlife Fund", which remains its official name in Canada and the United States.[42]

WWF is the world's largest conservation organization, with over five million supporters worldwide, working in more than 100 countries and supporting around 1,300 conservation and environmental projects.[43] It has invested over $1 billion in more than 12,000 conservation initiatives since 1995.[44] WWF is a foundation that in 2014 derived 55% of its funding from individuals and bequests, 19% from government sources (such as the World Bank, DFID, and USAID), and 8% from corporations.[45][46]

WWF aims to "stop the degradation of the planet's natural environment and to build a future in which humans live in harmony with nature."[47] The Living Planet Report has been published every two years by WWF since 1998; it is based on a Living Planet Index and an ecological footprint calculation.[42] In addition, WWF has launched several notable worldwide campaigns, including Earth Hour and Debt-for-Nature Swap, and its current work is organized around six areas: food, climate, freshwater, wildlife, forests, and oceans.[42][44]

Institutions such as the WWF have historically been a cause of the displacement of Indigenous populations and their separation from the lands they inhabit, owing to the organization's historically colonial, paternalistic, and neoliberal approaches to conservation. Claus, in her article "Drawing the Sea Near: Satoumi and Coral Reef Conservation in Okinawa", expands on this approach, called "conservation-far", in which access to lands is open to external foreign entities, such as researchers or tourists, but prohibited to local populations. The conservation initiatives are therefore taking place "far" away.
Such an external entity is largely unaware of the customs and values held by the people living around these areas and of their role within them.[48]

In Japan, the town of Shiraho had traditional ways of tending to nature that were lost due to colonization and militarization by the United States. The return to traditional sustainability practices constituted a "conservation-near" approach, which engages those near in proximity to the lands in the conservation efforts and holds them accountable for their direct effects on preservation. While conservation-far treats visuals and sight as the main medium of interaction between people and the environment, conservation-near permits a hands-on, full-sensory experience.[48] An emphasis on observation alone stems from its association with intellect, whereas the alternative, a more bodily or "primitive" consciousness, has been associated with lower intelligence and with people of color.

A new, integrated approach to conservation has been investigated in recent years by institutions such as WWF.[48] It centers socionatural relationships on interactions based in reciprocity and empathy, making conservation efforts accountable to the local community and its ways of life, and responsive to the values, ideals, and beliefs of the locals. Japanese seascapes are often integral to the identity of residents and include historical memories and spiritual engagements which need to be recognized and considered.[48] The involvement of communities gives residents a stake in the issue, leading to long-term solutions that emphasize sustainable resource usage and the empowerment of communities. Conservation efforts are then able to take cultural values into consideration rather than the ideals often imposed by foreign activists.
Evidence-based conservation is the application of evidence in conservation biology and environmental management actions and policy making. It is defined as systematically assessing scientific information from published, peer-reviewed publications and texts, practitioners' experiences, independent expert assessment, and local and indigenous knowledge on a specific conservation topic. This includes assessing the current effectiveness of different management interventions, threats and emerging problems, and economic factors.[49]

Evidence-based conservation was organized in response to the observation that decision making in conservation was based on intuition and/or practitioner experience, often disregarding other forms of evidence of successes and failures (e.g., scientific information). This has led to costly and poor outcomes.[50] Evidence-based conservation provides access to information that supports decision making through an evidence-based framework of "what works" in conservation.[51]

Deforestation and overpopulation are issues affecting all regions of the world. The consequent destruction of wildlife habitat has prompted the creation of conservation groups in other countries, some founded by local hunters who have witnessed declining wildlife populations first hand. It was also highly important for the conservation movement to address living conditions in the cities and the overpopulation of such places.

The idea of incentive conservation is a modern one, but its practice has clearly defended some of the sub-Arctic wildernesses and the wildlife in those regions for thousands of years, especially by indigenous peoples such as the Evenk, Yakut, Sami, Inuit, and Cree. The fur trade and hunting by these peoples have preserved these regions for thousands of years. Ironically, the pressure now upon them comes from non-renewable resources such as oil, sometimes extracted to make synthetic clothing that is advocated as a humane substitute for fur.
(See Raccoon dog for a case study of the conservation of an animal through the fur trade.) Similarly, in the case of the beaver, hunting and the fur trade were thought to have brought about the animal's demise, when in fact they were an integral part of its conservation. For many years children's books stated, and some still do, that the decline in the beaver population was due to the fur trade. In reality, however, the decline in beaver numbers was caused by habitat destruction and deforestation, as well as the animal's continued persecution as a pest (because it causes flooding). In Cree lands, however, where the population valued the animal for meat and fur, it continued to thrive. The Inuit defend their relationship with the seal in response to outside critics.[52]

The Izoceño-Guaraní of Santa Cruz Department, Bolivia, is a tribe of hunters who were influential in establishing the Capitania del Alto y Bajo Isoso (CABI). CABI promotes economic growth and the survival of the Izoceño people while discouraging the rapid destruction of habitat within Bolivia's Gran Chaco. They are responsible for the creation of the 34,000-square-kilometre Kaa-Iya del Gran Chaco National Park and Integrated Management Area (KINP). The KINP protects the most biodiverse portion of the Gran Chaco, an ecoregion shared with Argentina, Paraguay, and Brazil. In 1996, the Wildlife Conservation Society joined forces with CABI to institute wildlife and hunting monitoring programs in 23 Izoceño communities. The partnership combines traditional beliefs and local knowledge with the political and administrative tools needed to effectively manage habitats. The programs rely solely on voluntary participation by local hunters, who perform self-monitoring techniques and keep records of their hunts. The information obtained by the hunters participating in the program has provided CABI with important data required to make educated decisions about the use of the land.
Hunters have been willing participants in this program because of pride in their traditional activities, encouragement by their communities, and expectations of benefits to the area.

In order to discourage illegal South African hunting parties and ensure future local use and sustainability, indigenous hunters in Botswana began lobbying for and implementing conservation practices in the 1960s. The Fauna Preservation Society of Ngamiland (FPS) was formed in 1962 by the husband-and-wife team of Robert and June Kay, environmentalists working in conjunction with the Batawana tribes to preserve wildlife habitat. The FPS promotes habitat conservation and provides local education for the preservation of wildlife. Conservation initiatives were met with strong opposition from the Botswana government because of the money tied to big-game hunting. In 1963, BaTawana chiefs and tribal hunter/adventurers in conjunction with the FPS founded Moremi National Park and Wildlife Refuge, the first area to be set aside by tribal people rather than governmental forces. Moremi National Park is home to a variety of wildlife, including lions, giraffes, elephants, buffalo, zebras, cheetahs, and antelope, and covers an area of 3,000 square kilometers. Most of the groups involved with establishing this protected land were involved with hunting and were motivated by their personal observations of declining wildlife and habitat.
https://en.wikipedia.org/wiki/Conservation_movement
Technical support, commonly shortened to tech support, is a customer service provided to customers to resolve issues, commonly with consumer electronics. It is commonly provided via call centers, online chat, and email.[1] Many companies provide discussion boards for users to support other users, decreasing the load and cost on these companies.[2]

With the increasing use of technology in modern times, there is a growing requirement to provide technical support. Many organizations locate their technical support departments or call centers in countries or regions with lower costs. Dell was amongst the first companies to outsource its technical support and customer service departments to India, in 2001.[3] There has also been growth in companies specializing in providing technical support to other organizations; these are often referred to as managed service providers (MSPs).[4]

For businesses needing to provide technical support, outsourcing allows them to maintain high availability of service. Such a need may result from peaks in call volumes during the day, periods of high activity due to the introduction of new products or maintenance service packs, or the requirement to provide customers with a high level of service at a low cost to the business. For businesses needing technical support assets, outsourcing enables their core employees to focus on their work in order to maintain productivity.[5] It also enables them to utilize specialized personnel whose technical knowledge base and experience may exceed the scope of the business, thus providing a higher level of technical support to their employees.

Technical support is often subdivided into tiers, or levels, in order to better serve a business or customer base. The number of levels a business uses to organize its technical support group depends on the business's needs regarding its ability to sufficiently serve its customers or users.
The reason for providing a multi-tiered support system instead of one general support group is to provide the best possible service in the most efficient possible manner. Success of the organizational structure depends on the technicians' understanding of their level of responsibility and commitments, their customer response time commitments, and when to appropriately escalate an issue and to which level.[6] A common support structure revolves around a three-tiered technical support system. Remote computer repair is a method for troubleshooting software-related problems via remote desktop connections.[7]

Tier I (or Level 1, abbreviated as T1 or L1) is the first technical support level. The first job of a Tier I specialist is to gather the customer's information and to determine the customer's issue by analyzing the symptoms and figuring out the underlying problem.[6] When analyzing the symptoms, it is important for the technician to identify what the customer is trying to accomplish so that time is not wasted "attempting to solve a symptom instead of a problem."[6]

Once the underlying problem is identified, the specialist can begin sorting through the possible solutions available. Technical support specialists in this group typically handle straightforward and simple problems while "possibly using some kind of knowledge management tool."[8] This includes troubleshooting methods such as verifying physical layer issues, resolving username and password problems, uninstalling/reinstalling basic software applications, verifying proper hardware and software setup, and assisting with navigating application menus.
Personnel at this level have a basic to general understanding of the product or service and may not always have the competency required for solving complex issues.[9] Nevertheless, the goal for this group is to handle 70–80% of user problems before finding it necessary to escalate the issue to a higher level.[9]

Tier II (or Level 2, abbreviated as T2 or L2) is a more in-depth technical support level than Tier I and therefore costs more, as the technicians are more experienced and knowledgeable about a particular product or service. It is synonymous with level 2 support, support line 2, administrative level support, and various other headings denoting advanced technical troubleshooting and analysis methods. Technicians in this realm of knowledge are responsible for assisting Tier I personnel in solving basic technical problems and for investigating elevated issues by confirming the validity of the problem and seeking known solutions to these more complex issues.[9] However, prior to the troubleshooting process, it is important that the technician review the work order to see what has already been accomplished by the Tier I technician and how long the technician has been working with the particular customer. This is a key element in meeting both customer and business needs, as it allows the technician to prioritize the troubleshooting process and properly manage their time.[6]

If a problem is new and/or personnel from this group cannot determine a solution, they are responsible for elevating the issue to the Tier III technical support group. In addition, many companies may specify that certain troubleshooting solutions be performed by this group to help ensure that the intricacies of a challenging issue are handled by experienced and knowledgeable technicians.
This may include, but is not limited to, onsite installation or replacement of various hardware components, software repair, diagnostic testing, or the utilization of remote control tools to take over the user's machine for the sole purpose of troubleshooting and finding a solution to the problem.[6][10] Tier III (or Level 3, abbreviated as T3 or L3) is the highest level of support in a three-tiered technical support model, responsible for handling the most difficult or advanced problems. It is synonymous with level 3 support, 3rd line support, back-end support, support line 3, high-end support, and various other headings denoting expert-level troubleshooting and analysis methods. These individuals are experts in their fields and are responsible not only for assisting both Tier I and Tier II personnel, but also for the research and development of solutions to new or unknown issues. Note that Tier III technicians have the same responsibility as Tier II technicians in reviewing the work order and assessing the time already spent with the customer so that the work is prioritized and time is managed well.[6] If at all possible, the technician will work to solve the problem with the customer, as it may become apparent that the Tier I and/or Tier II technicians simply failed to discover the proper solution. Upon encountering new problems, however, Tier III personnel must first determine whether or not to solve the problem and may require the customer's contact information so that the technician has adequate time to troubleshoot the issue and find a solution.[9] It is typical for a developer or someone who knows the code or back end of the product to be the Tier 3 support person. In some instances, an issue may be so severe that the product cannot be salvaged and must be replaced. Such extreme problems are also sent to the original developers for in-depth analysis.
If it is determined that a problem can be solved, this group is responsible for designing and developing one or more courses of action, evaluating each of these courses in a test-case environment, and implementing the best solution to the problem.[9] While not universally used, a fourth level often represents an escalation point beyond the organization. L4 support is generally a hardware or software vendor.[11] A common scam typically involves a cold caller claiming to be from a technical support department of a company like Microsoft. Such cold calls are often made from call centers based in India to users in English-speaking countries, although increasingly these scams operate within the same country. The scammer instructs the user to download a remote desktop program and, once connected, uses social engineering techniques that typically involve Windows components to persuade the victim that they need to pay for the computer to be fixed; the scammer then proceeds to steal money from the victim's credit card.[12]
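As a toy illustration of the tiered escalation described above, the four levels can be modeled as an ordered enumeration. The type, the numeric severity scale, and the function names below are invented for this sketch; they are not part of any standard support framework.

```haskell
-- Hypothetical model of the four support tiers described above.
data Tier = Tier1 | Tier2 | Tier3 | Tier4
  deriving (Eq, Ord, Show, Enum, Bounded)

-- Assumed severity scale 1-4: a tier resolves issues up to its own level.
canResolve :: Tier -> Int -> Bool
canResolve tier severity = severity <= fromEnum tier + 1

-- First tier able to resolve the issue; anything beyond the organization's
-- competence ends at the vendor (Tier4), the L4 escalation point above.
escalate :: Int -> Tier
escalate severity =
  case [t | t <- [minBound .. maxBound], canResolve t severity] of
    (t:_) -> t
    []    -> Tier4
```

The `Ord`/`Enum` derivation makes the escalation order explicit in the type itself, mirroring the "handle what you can, pass the rest upward" structure of the prose.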
https://en.wikipedia.org/wiki/Technical_support
High-performance teams (HPTs) is a concept within organization development referring to teams, organizations, or virtual groups that are highly focused on their goals and that achieve superior business results. High-performance teams outperform all other similar teams and they outperform expectations given their composition.[1] A high-performance team can be defined as a group of people with specific roles and complementary talents and skills, aligned with and committed to a common purpose, who consistently show high levels of collaboration and innovation, produce superior results, and extinguish radical or extreme opinions that could be damaging. The high-performance team is regarded as tight-knit, focused on its goal, and supported by processes that enable any team member to surmount any barriers to achieving the team's goals.[2] Within the high-performance team, people are highly skilled and are able to interchange their roles[citation needed]. Also, leadership within the team is not vested in a single individual. Instead, the leadership role is taken up by various team members according to the need at that moment in time. High-performance teams have robust methods of resolving conflict efficiently, so that conflict does not become a roadblock to achieving the team's goals. There is a sense of clear focus and intense energy within a high-performance team. Collectively, the team has its own consciousness, indicating shared norms and values within the team. The team feels a strong sense of accountability for achieving its goals. Team members display high levels of mutual trust towards each other.[2] To support team effectiveness within high-performance teams, understanding of individual working styles is important. This can be done by applying Belbin High Performing Teams, the DISC assessment, the Myers-Briggs Type Indicator and the Herrmann Brain Dominance Instrument to understand the behavior, personalities and thinking styles of team members.
Using Tuckman's stages of group development as a basis, an HPT moves through the stages of forming, storming, norming and performing, as with other teams. However, the HPT uses the storming and norming phases effectively to define who they are, what their overall goal is, and how to interact together and resolve conflicts. Therefore, when the HPT reaches the performing phase, they have highly effective behaviours that allow them to overachieve in comparison to regular teams. Later, leadership strategies (coordinating, coaching, empowering, and supporting) were connected to each stage to help facilitate teams to high performance.[3] Different characteristics have been used to describe high-performance teams. Despite varying approaches to describing high-performance teams, there is a set of common characteristics that are recognised to lead to success.[4] There are many types of teams in organizations as well. The most traditional type of team is the manager-led team. Within this team, a manager fills the role of the team leader and is responsible for defining the team's goals, methods, and functions. The remaining team members are responsible for carrying out their assigned work under the monitoring of the manager. Self-managing or self-regulating teams operate when the "manager" position determines the overall purpose or goal for the team and the remainder of the team is at liberty to manage the methods needed to achieve the intended goal. Self-directing or self-designing teams determine their own team goals and the different methods needed in order to achieve the end goal. This offers opportunities for innovation and enhances goal commitment and motivation. Finally, self-governing teams are designed with high control and responsibility to execute a task or manage processes.
A board of directors is a prime example of a self-governing team.[5] Given the importance of team-based work in today's economy, much focus has been brought in recent years to using evidence-based organizational research to pinpoint more accurately the defining attributes of high-performance teams. The team at MIT's Human Dynamics Laboratory investigated explicitly observable communication patterns and found energy, engagement, and exploration to be surprisingly powerful predictive indicators of a team's ability to perform.[6] Other researchers focus on what supports group intelligence and allows a team to be smarter than its smartest individuals. A group at MIT's Center for Collective Intelligence, for example, found that teams with more women and teams where team members share "airtime" equally showed higher group intelligence scores.[7] The Fundamental Interpersonal Relations Orientation – Behavior (FIRO-B) questionnaire is a resource that can help an individual identify their personal orientation: in other words, the behavioral tendency of a person in different environments, with different people. The theory of personal orientation was initially put forward by Schutz (1958), who claimed that personal orientation consists of three fundamental human needs: the need for inclusion, the need for control, and the need for affection. The FIRO-B test helps an individual identify their interpersonal compatibilities with respect to these needs, which can be directly correlated to their performance in a high-performance team.[8] First described in detail by the Tavistock Institute, UK, in the 1950s, HPTs gained popular acceptance in the US by the 1980s, with adoption by organizations such as General Electric, Boeing, Digital Equipment Corporation (now HP), and others. In each of these cases, major change was created through the shifting of organizational culture, merging the business goals of the organization with the social needs of the individuals.
Often in less than a year, HPTs achieved a quantum leap in business results in all key success dimensions, including customer, employee, shareholder and operational value-added dimensions.[9] Due to its initial success, many organizations attempted to copy HPTs. However, without understanding the underlying dynamics that created them, and without adequate time and resources to develop them, most of these attempts failed. With this failure, HPTs fell out of general favor by 1995, and the term high-performance began to be used in a promotional context, rather than a performance-based one.[9] Recently, some private sector and government sector organizations have placed new focus on HPTs, as new studies and understandings have identified the key processes and team dynamics necessary to create all-around quantum performance improvements.[10] With these new tools, organizations such as Kraft Foods, General Electric, Exelon, and the US government have focused new attention on high-performance teams. In Great Britain, high-performance workplaces are defined as those organizations where workers are actively communicated with and involved in the decisions directly affecting them. By regulation of the UK Department of Trade and Industry, these workplaces will be required in most organizations by 2008.[11]
https://en.wikipedia.org/wiki/High-performance_teams
Uniform distribution may refer to:
https://en.wikipedia.org/wiki/Uniform_distribution
AArch64 or ARM64 is the 64-bit Execution state of the ARM architecture family. It was first introduced with the Armv8-A architecture, and has had many extension updates.[2] An Execution state, in ARMv8-A, ARMv8-R, and ARMv9-A, defines the number of bits in the primary processor registers, the available instruction sets, and other aspects of the processor's execution environment. In those versions of the Arm architecture, there are two Execution states: the 64-bit AArch64 Execution state and the 32-bit AArch32 Execution state.[3] Extension: Data gathering hint (ARMv8.0-DGH). AArch64 was introduced in ARMv8-A and is included in subsequent versions of ARMv8-A, and in all versions of ARMv9-A. It was also introduced in ARMv8-R as an option, after its introduction in ARMv8-A; it is not included in ARMv8-M. The main opcode for selecting which group an A64 instruction belongs to is at bits 25–28. Announced in October 2011,[5] ARMv8-A represents a fundamental change to the ARM architecture. It adds an optional 64-bit Execution state, named "AArch64", and the associated new "A64" instruction set, in addition to a 32-bit Execution state, "AArch32", supporting the 32-bit "A32" (original 32-bit Arm) and "T32" (Thumb/Thumb-2) instruction sets. The latter instruction sets provide user-space compatibility with the existing 32-bit ARMv7-A architecture.
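The top-level decode field mentioned above (bits 25–28 of a 32-bit A64 instruction word) can be extracted with a shift and mask. This is an illustrative sketch; the function name is an assumption for this example, not a name taken from the Arm manual.

```haskell
import Data.Bits (shiftR, (.&.))
import Data.Word (Word32)

-- Extract the 4-bit instruction-group field at bits 25-28 of an A64
-- instruction word (field position per the text above).
instructionGroup :: Word32 -> Word32
instructionGroup insn = (insn `shiftR` 25) .&. 0xF
```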
ARMv8-A allows 32-bit applications to be executed in a 64-bit OS, and a 32-bit OS to be under the control of a 64-bit hypervisor.[1] ARM announced their Cortex-A53 and Cortex-A57 cores on 30 October 2012.[6] Apple was the first to release an ARMv8-A compatible core (Cyclone) in a consumer product (iPhone 5S). AppliedMicro, using an FPGA, was the first to demo ARMv8-A.[7] The first ARMv8-A SoC from Samsung is the Exynos 5433 used in the Galaxy Note 4, which features two clusters of four Cortex-A57 and Cortex-A53 cores in a big.LITTLE configuration; but it will run only in AArch32 mode.[8] ARMv8-A includes the VFPv3/v4 and Advanced SIMD (Neon) as standard features in both AArch32 and AArch64. It also adds cryptography instructions supporting AES, SHA-1/SHA-256 and finite field arithmetic.[9] An ARMv8-A processor can support one or both of AArch32 and AArch64; it may support AArch32 and AArch64 at lower Exception levels and only AArch64 at higher Exception levels.[10] For example, the ARM Cortex-A32 supports only AArch32,[11] the ARM Cortex-A34 supports only AArch64,[12] and the ARM Cortex-A72 supports both AArch64 and AArch32.[13] An ARMv9-A processor must support AArch64 at all Exception levels, and may support AArch32 at EL0.[10] In December 2014, ARMv8.1-A,[14] an update with "incremental benefits over v8.0", was announced. The enhancements fell into two categories: changes to the instruction set, and changes to the exception model and memory translation. Instruction set enhancements included the following: Enhancements for the exception model and memory translation system included the following: ARMv8.2-A was announced in January 2016.[17] Its enhancements fall into four categories: The Scalable Vector Extension (SVE) is "an optional extension to the ARMv8.2-A architecture and newer" developed specifically for vectorization of high-performance computing scientific workloads.[18][19] The specification allows for variable vector lengths to be implemented from 128 to 2048 bits.
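Because the SVE specification lets an implementation choose its vector length in multiples of 128 bits up to the 2048-bit maximum mentioned above (the multiple-of-128 granularity is from the SVE spec, not from this text), the full set of permitted lengths is small enough to enumerate in a quick sketch:

```haskell
-- Permitted SVE vector lengths in bits: multiples of 128 up to 2048.
sveVectorLengths :: [Int]
sveVectorLengths = [128, 256 .. 2048]

-- How many 64-bit doublewords a given vector length holds.
doublewordsPerVector :: Int -> Int
doublewordsPerVector bits = bits `div` 64
```

This length-agnostic model is the point of SVE: code is written once and runs at whatever width the hardware implements (512 bits on the Fujitsu A64FX, for instance).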
The extension is complementary to, and does not replace, the NEON extensions. A 512-bit SVE variant has already been implemented on the Fugaku supercomputer using the Fujitsu A64FX ARM processor; this computer[20] was the fastest supercomputer in the world for two years, from June 2020[21] to May 2022.[22] A more flexible version, 2x256 SVE, was implemented by the AWS Graviton3 ARM processor. SVE is supported by GCC, with GCC 8 supporting automatic vectorization[19] and GCC 10 supporting C intrinsics. As of July 2020[update], LLVM and clang support C and IR intrinsics. ARM's own fork of LLVM supports auto-vectorization.[23] In October 2016, ARMv8.3-A was announced. Its enhancements fell into six categories:[24] The ARMv8.3-A architecture is now supported by (at least) the GCC 7 compiler.[29] In November 2017, ARMv8.4-A was announced. Its enhancements fell into these categories:[30][31][32] In September 2018, ARMv8.5-A was announced. Its enhancements fell into these categories:[34][35][36] On 2 August 2019, Google announced Android would adopt the Memory Tagging Extension (MTE).[38] In March 2021, ARMv9-A was announced. ARMv9-A's baseline is all the features from ARMv8.5.[39][40][41] ARMv9-A also adds: In September 2019, ARMv8.6-A was announced. Its enhancements fell into these categories:[34][47] For example, fine-grained traps, Wait-for-Event (WFE) instructions, Enhanced PAC2 and FPAC. The bfloat16 extensions for SVE and Neon are mainly for deep learning use.[49] In September 2020, ARMv8.7-A was announced. Its enhancements fell into these categories:[34][50] In September 2021, ARMv8.8-A and ARMv9.3-A were announced.
Their enhancements fell into these categories:[34][52] LLVM 15 supports ARMv8.8-A and ARMv9.3-A.[53] In September 2022, ARMv8.9-A and ARMv9.4-A were announced, including:[54] In October 2023, ARMv9.5-A was announced, including:[55] In October 2024, ARMv9.6-A was announced, including:[56] The ARM-R architecture, specifically the Armv8-R profile, is designed to address the needs of real-time applications, where predictable and deterministic behavior is essential. This profile focuses on delivering high performance, reliability, and efficiency in embedded systems where real-time constraints are critical. With the introduction of optional AArch64 support in the Armv8-R profile, real-time capabilities have been further enhanced. The Cortex-R82[57] is the first processor to implement this extended support, bringing several new features and improvements to the real-time domain.[58]
https://en.wikipedia.org/wiki/AArch64#Scalable_Vector_Extension_(SVE)
In computer programming, especially functional programming and type theory, an algebraic data type (ADT) is a kind of composite data type, i.e., a data type formed by combining other types. Two common classes of algebraic types are product types (i.e., tuples and records) and sum types (i.e., tagged or disjoint unions, coproduct types or variant types).[1] The values of a product type typically contain several values, called fields. All values of that type have the same combination of field types. The set of all possible values of a product type is the set-theoretic product, i.e., the Cartesian product, of the sets of all possible values of its field types. The values of a sum type are typically grouped into several classes, called variants. A value of a variant type is usually created with a quasi-functional entity called a constructor. Each variant has its own constructor, which takes a specified number of arguments with specified types. The set of all possible values of a sum type is the set-theoretic sum, i.e., the disjoint union, of the sets of all possible values of its variants. Enumerated types are a special case of sum types in which the constructors take no arguments, as exactly one value is defined for each constructor. Values of algebraic types are analyzed with pattern matching, which identifies a value by its constructor or field names and extracts the data it contains. Algebraic data types were introduced in Hope, a small functional programming language developed in the 1970s at the University of Edinburgh.[2] One of the most common examples of an algebraic data type is the singly linked list. A list type is a sum type with two variants, Nil for an empty list and Cons x xs for the combination of a new element x with a list xs to create a new list. Cons is an abbreviation of construct. Many languages have special syntax for lists defined in this way.
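In Haskell, the list type just described is conventionally declared as follows. This is a reconstruction of the declaration the surrounding text refers to; the article's original code block did not survive extraction.

```haskell
-- A singly linked list: a sum type with variants Nil and Cons, as above.
data List a = Nil              -- the empty list
            | Cons a (List a)  -- an element consed onto an existing list
```

An equivalent form uses GADT syntax, spelling out each constructor's type: Nil :: List a and Cons :: a -> List a -> List a.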
For example, Haskell and ML use [] for Nil, : or :: for Cons, respectively, and square brackets for entire lists. So Cons 1 (Cons 2 (Cons 3 Nil)) would normally be written as 1:2:3:[] or [1,2,3] in Haskell, or as 1::2::3::[] or [1,2,3] in ML. For a slightly more complex example, binary trees may be implemented in Haskell as a type with three constructors: Empty represents an empty tree, Leaf represents a leaf node, and Node organizes the data into branches. In most languages that support algebraic data types, it is possible to define parametric types. Examples are given later in this article. Somewhat similar to a function, a data constructor is applied to arguments of an appropriate type, yielding an instance of the data type to which the type constructor belongs. For example, the data constructor Leaf is logically a function Int -> Tree, meaning that giving an integer as an argument to Leaf produces a value of the type Tree. As Node takes two arguments of the type Tree itself, the datatype is recursive. Operations on algebraic data types can be defined by using pattern matching to retrieve the arguments. For example, consider a function depth that finds the depth of a Tree. A Tree given to depth can be constructed using any of Empty, Leaf, or Node, and must be matched against each of them to deal with all cases. In the case of Node, the pattern extracts the subtrees l and r for further processing. Algebraic data types are highly suited to implementing abstract syntax. For example, an algebraic data type can describe a simple language representing numerical expressions, in which an element would have a form such as Mult (Add (Number 4) (Minus (Number 0) (Number 1))) (Number 2). Writing an evaluation function for this language is a simple exercise; however, more complex transformations also become feasible. For example, an optimization pass in a compiler might be written as a function taking an abstract expression as input and returning an optimized form.
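The Tree, depth, and Expression definitions discussed above can be reconstructed along these lines. This is a sketch consistent with the surrounding prose (in which Node carries one Int value and two Tree values), not the article's lost original code.

```haskell
-- A binary tree: no data at Empty, an Int at each Leaf, and an Int plus
-- two subtrees at each Node.
data Tree = Empty
          | Leaf Int
          | Node Int Tree Tree

-- Depth of a Tree, by pattern matching on the three constructors.
depth :: Tree -> Int
depth Empty        = 0
depth (Leaf n)     = 1
depth (Node _ l r) = 1 + max (depth l) (depth r)

-- A small expression language for the abstract-syntax example.
data Expression = Number Int
                | Add Expression Expression
                | Minus Expression Expression
                | Mult Expression Expression
```

With these definitions, the expression cited above, Mult (Add (Number 4) (Minus (Number 0) (Number 1))) (Number 2), is an ordinary value of type Expression.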
Algebraic data types are used to represent values that can be one of several types of things. Each type of thing is associated with an identifier called a constructor, which can be considered a tag for that kind of data. Each constructor can carry with it a different type of data. For example, considering the binary Tree example shown above, a constructor could carry no data (e.g., Empty), or one piece of data (e.g., Leaf has one Int value), or multiple pieces of data (e.g., Node has one Int value and two Tree values). To do something with a value of this Tree algebraic data type, it is deconstructed using a process called pattern matching. This involves matching the data against a series of patterns. The example function depth above pattern-matches its argument with three patterns. When the function is called, it finds the first pattern that matches its argument, performs any variable bindings that are found in the pattern, and evaluates the expression corresponding to the pattern. Each pattern has a form that resembles the structure of some possible value of this datatype. The first pattern simply matches values of the constructor Empty. The second pattern matches values of the constructor Leaf. Patterns are recursive, so the data that is associated with that constructor is then matched with the pattern "n". In this case, a lowercase identifier represents a pattern that matches any value, which is then bound to a variable of that name — in this case, a variable "n" is bound to the integer value stored in the data type — to be used in the expression to evaluate. The recursion in the patterns of this example is trivial, but a possible more complex recursive pattern would be something like: Node i (Node j (Leaf 4) x) (Node k y (Node Empty z)). Recursive patterns several layers deep are used, for example, in balancing red–black trees, which involves cases that require looking at colors several layers deep.
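For contrast with pattern matching, the same depth computation can be written against a single flat record with a tag field, in the style of the tagged-record pseudocode the comparison below describes. Every name in this sketch is a hypothetical illustration.

```haskell
-- Hypothetical tag-and-record encoding of the tree, with no ADT safety:
-- nothing stops code from reading a field that is meaningless for the tag.
data Tag = EmptyTag | LeafTag | NodeTag deriving (Eq)

data RawTree = RawTree
  { tag   :: Tag
  , value :: Int      -- meaningful only for LeafTag and NodeTag
  , left  :: RawTree  -- meaningful only for NodeTag
  , right :: RawTree  -- meaningful only for NodeTag
  }

rawEmpty :: RawTree
rawEmpty = RawTree EmptyTag (error "no value") (error "no left") (error "no right")

-- The programmer, not the compiler, must ensure left/right are read
-- only when the tag is NodeTag.
rawDepth :: RawTree -> Int
rawDepth t
  | tag t == EmptyTag = 0
  | tag t == LeafTag  = 1
  | otherwise         = 1 + max (rawDepth (left t)) (rawDepth (right t))
```

Here an accidental read of left on a leaf would fail only at run time, which is exactly the diligence burden the next paragraph contrasts with pattern matching.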
The depth example above is operationally equivalent to pseudocode that stores a tag alongside a flat record of fields and branches on the tag with if-else logic. The advantages of algebraic data types can be highlighted by comparing such pseudocode with its pattern-matching equivalent. Firstly, there is type safety. In the tagged-record style, programmer diligence is required to not access field2 when the constructor is a Leaf. The type system would have difficulty assigning a static type in a safe way to traditional record data structures. However, in pattern matching such problems do not arise. The type of each extracted value is based on the types declared by the relevant constructor, and the number of values that can be extracted is known from the constructor. Secondly, in pattern matching, the compiler performs exhaustiveness checking to ensure all cases are handled. If one of the cases of the depth function above were missing, the compiler would issue a warning. Exhaustiveness checking may seem easy for simple patterns, but with many complex recursive patterns, the task soon becomes difficult for the average human (or compiler, if it must check arbitrary nested if-else constructs). Similarly, there may be patterns that never match (i.e., are already covered by prior patterns). The compiler can also check and issue warnings for these, as they may indicate an error in reasoning. Algebraic data type pattern matching should not be confused with regular expression string pattern matching. The purpose of both is similar (to extract parts from a piece of data matching certain constraints); however, the implementation is very different. Pattern matching on algebraic data types matches on the structural properties of an object rather than on the character sequence of strings. A general algebraic data type is a possibly recursive sum type of product types. Each constructor tags a product type to separate it from others, or if there is only one constructor, the data type is a product type.
Further, the parameter types of a constructor are the factors of the product type. A parameterless constructor corresponds to the empty product. If a datatype is recursive, the entire sum of products is wrapped in a recursive type, and each constructor also rolls the datatype into the recursive type. For example, the Haskell List datatype (data List a = Nil | Cons a (List a)) is represented in type theory as λα. μβ. 1 + α × β, with constructors nil_α = roll (inl ⟨⟩) and cons_α x l = roll (inr ⟨x, l⟩). The Haskell List datatype can also be represented in type theory in a slightly different form, thus: μϕ. λα. 1 + α × ϕ α. (Note how the μ and λ constructs are reversed relative to the original.) The original formation specified a type function whose body was a recursive type. The revised version specifies a recursive function on types. (The type variable ϕ is used to suggest a function rather than a base type like β, since ϕ is like a Greek f.) The function ϕ must also now be applied to its argument type α in the body of the type. For the purposes of the List example, these two formulations are not significantly different; but the second form allows expressing so-called nested data types, i.e., those where the recursive type differs parametrically from the original. (For more information on nested data types, see the works of Richard Bird, Lambert Meertens, and Ross Paterson.)
In set theory, the equivalent of a sum type is a disjoint union: a set whose elements are pairs consisting of a tag (equivalent to a constructor) and an object of a type corresponding to the tag (equivalent to the constructor arguments).[3] Many programming languages incorporate algebraic data types as a first-class notion, including:
https://en.wikipedia.org/wiki/Algebraic_data_type
Guidance, navigation and control (abbreviated GNC, GN&C, or G&C) is a branch of engineering dealing with the design of systems to control the movement of vehicles, especially automobiles, ships, aircraft, and spacecraft. In many cases these functions can be performed by trained humans. However, because of the speed of, for example, a rocket's dynamics, human reaction time is too slow to control this movement. Therefore, systems—now almost exclusively digital electronic—are used for such control. Even in cases where humans can perform these functions, it is often the case that GNC systems provide benefits such as alleviating operator workload, smoothing turbulence, fuel savings, etc. In addition, sophisticated applications of GNC enable automatic or remote control. Guidance, navigation, and control systems consist of three essential parts: navigation, which tracks current location; guidance, which leverages navigation data and target information to direct flight control "where to go"; and control, which accepts guidance commands to effect changes in aerodynamic and/or engine controls. GNC systems are found in essentially all autonomous or semi-autonomous systems. These include: Related examples are:
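The three-part pipeline described above can be sketched as function composition: navigation estimates state from sensing, guidance turns state plus a target into a command, and control turns the command into actuator output. All types, units, and the gain below are illustrative assumptions, not taken from any real GNC system.

```haskell
-- A minimal one-dimensional sketch of the navigation -> guidance -> control
-- pipeline described above.
type SensorReading = Double
type State         = Double  -- estimated position
type Command       = Double  -- commanded correction
type Actuation     = Double  -- actuator output

navigation :: SensorReading -> State   -- "where are we?"
navigation reading = reading           -- trivial estimator for the sketch

guidance :: State -> State -> Command  -- target, current -> "where to go?"
guidance target current = target - current

control :: Command -> Actuation        -- "how to get there?"
control cmd = 0.5 * cmd                -- assumed proportional gain

gncStep :: State -> SensorReading -> Actuation
gncStep target = control . guidance target . navigation
```

A real system would close this loop at a fixed rate, feeding actuation back into the vehicle dynamics and the next sensor reading, but the division of responsibility among the three stages is the same.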
https://en.wikipedia.org/wiki/Guidance,_navigation,_and_control