early years of the French Revolution, with players taking control of a new Assassin named Arno Dorian. After Unity, Ubisoft released Assassin's Creed Syndicate in 2015. === Period two === Following Syndicate's release, Ubisoft decided that the series needed a major reinvention across both gameplay and narrative. It was decided to make the next game, Assassin's Creed Origins, closer to a role-playing video game than a stealth-action game, which would also make it a much longer game than previous titles. Some long-standing features of the series were eliminated for this purpose, such as the social stealth mechanic. This changed how missions were presented: rather than being linearly directed through the Animus, the player character could meet various quest givers in the game's world to receive missions. On the narrative side, Ubisoft placed the game before the formation of the Assassin Brotherhood in Ancient Egypt to make the player character, Bayek of Siwa, a medjay whom people would respect and seek out for help. The modern-day storyline also shifted back to a single character, Layla Hassan. The developers limited the number of playable sequences for her character compared to previous games but gave them more meaning, such as allowing the player to explore Layla's laptop with background information on the game's universe. Origins was followed in 2018 by Assassin's Creed Odyssey, which shifted the setting to Classical Greece and followed a similar approach to its predecessor but with more emphasis on the role-playing elements. 2020's Assassin's Creed Valhalla, set in medieval England and Norway during the Viking Age, continued the same style as Origins and Odyssey. The developers responded to feedback on the previous two games and brought back the social stealth elements, as well as the concept of a customizable home base that was first introduced in
{"page_id": 23903477, "title": "Assassin's Creed"}
such as fertilizer) and chemical, into droplets, which can be large rain-type drops or tiny almost-invisible particles. This conversion is accomplished by forcing the spray mixture through a spray nozzle under pressure. The size of droplets can be altered through the use of different nozzle sizes, by altering the pressure under which the mixture is forced through, or by a combination of both. Large droplets have the advantage of being less susceptible to spray drift, but require more water per unit of land covered. Due to static electricity, small droplets are able to maximize contact with a target organism, but require very still wind conditions. === Spraying pre- and post-emergent crops === Traditional agricultural crop pesticides can be applied either pre-emergent or post-emergent, terms referring to the germination status of the plant. Pre-emergent pesticide application, in conventional agriculture, attempts to reduce competitive pressure on newly germinated plants by removing undesirable organisms and maximizing the amount of water, soil nutrients, and sunlight available for the crop. An example of pre-emergent pesticide application is atrazine application for corn. Similarly, glyphosate mixtures are often applied pre-emergent on agricultural fields to remove early-germinating weeds and prepare for subsequent crops. Pre-emergent application equipment often has large, wide tires designed to float on soft soil, minimizing both soil compaction and damage to planted (but not yet emerged) crops. A three-wheel application machine, such as the one pictured on the right, is designed so that tires do not follow the same path, minimizing the creation of ruts in the field and limiting sub-soil damage. Post-emergent pesticide application requires the use of specific chemicals chosen to minimize harm to the desired crop. An example is 2,4-Dichlorophenoxyacetic acid, which will injure broadleaf weeds (dicots) but leave behind grasses (monocots).
Such a chemical has been used extensively on wheat
{"page_id": 16334333, "title": "Pesticide application"}
an array directly behind the screen at equally spaced intervals. May additionally support: frame dimming: adjusts the brightness of the entire backlight based on the content displayed, as if local dimming were supported but with only a single zone; local dimming: multiple direct-lit LED clusters (rectangles) are individually controlled, commonly referred to as Full Array Local Dimming (FALD). Additionally, a special diffusion panel (light guide plate, LGP) is often used to spread the light evenly behind the screen. The local dimming method of backlighting allows the light intensity of specific dark areas of the screen to be controlled dynamically, resulting in much higher dynamic-contrast ratios, though at the cost of less detail in small, bright objects on a dark background, such as star fields or shadow details. A 2016 study from the University of California, Berkeley suggests that the subjectively perceived visual enhancement with common contrast source material levels off at about 60 LCD local dimming zones. == Technology == LED-backlit LCDs are not self-illuminating (unlike pure-LED systems). There are several methods of backlighting an LCD panel using LEDs, including the use of white or RGB (Red, Green, and Blue) LED arrays behind the panel and edge-LED lighting (which uses white LEDs around the inside frame of the TV and a light-diffusion panel to spread the light evenly behind the LCD panel). Variations in LED backlighting offer different benefits. The first commercial full-array LED-backlit LCD TV was the Sony Qualia 005 (introduced in 2004), which used RGB LED arrays to produce a color gamut about twice that of a conventional CCFL LCD television. This was possible because red, green and blue LEDs have sharp spectral peaks which (combined with the LCD panel filters) result in significantly less bleed-through to adjacent color channels. Unwanted bleed-through channels do not
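As a toy illustration of the frame- and zone-dimming ideas described above (this is a deliberately simplified sketch, not any manufacturer's actual FALD algorithm): split a grayscale frame into a grid of zones and drive each zone's LED cluster at the zone's peak pixel brightness, so fully dark zones are dimmed entirely.

```python
import numpy as np

def zone_backlight_levels(frame, rows, cols):
    """Toy local-dimming sketch: split a grayscale frame (values 0..1)
    into rows x cols zones and set each zone's LED level to the zone's
    peak pixel brightness. With rows = cols = 1 this degenerates to
    frame dimming (a single zone for the whole backlight)."""
    h, w = frame.shape
    levels = np.zeros((rows, cols))
    for i in range(rows):
        for j in range(cols):
            zone = frame[i * h // rows:(i + 1) * h // rows,
                         j * w // cols:(j + 1) * w // cols]
            levels[i, j] = zone.max()  # dark zones get fully dimmed LEDs
    return levels

# A mostly black frame with one bright patch lights only one zone
frame = np.zeros((60, 80))
frame[5:10, 5:10] = 1.0
levels = zone_backlight_levels(frame, rows=3, cols=4)
```

A peak-based rule like this preserves small bright objects at the cost of raising the black level of their whole zone, which is exactly the star-field trade-off mentioned above.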
{"page_id": 22141040, "title": "LED-backlit LCD"}
with the network to replace KTVI (channel 2) – which had been affiliated with ABC since it signed on as Belleville, Illinois–licensed WTVI on August 10, 1953 (when the station, then broadcasting on UHF channel 54, also maintained a primary CBS affiliation) – as its St. Louis affiliate. KTVI was among the thirteen "Big Three" network-affiliated television stations already owned or in the process of being acquired by New World Communications (and one of three out of the four stations that the group was acquiring from Argyle Television Holdings at the time) that were slated to switch to Fox under a long-term affiliation agreement announced between New World and then-Fox network parent News Corporation on May 23, 1994. Channel 11 station management would later turn the offer down; ABC instead reached an agreement with River City Broadcasting in August 1994 to shift the affiliation to outgoing Fox affiliate KDNL, which swapped network affiliations with KTVI on August 7, 1995. === WB affiliation === Upon that network's launch, on January 11, 1995, KPLR-TV became a charter affiliate of The WB (a venture between Time Warner and Chicago-based Tribune Broadcasting), marking the first time it maintained an affiliation with a broadcast television network. Koplar had reached a deal to affiliate with The WB in November 1993, more than a year before the network's launch. The WB offered prime time programs only on Wednesday evenings during its first half-season of operation, but would gradually evolve into offering a six-night-a-week schedule by September 1999; as such, for its first few years as a WB affiliate, KPLR continued to fill the 7–9 p.m. time slot with feature films and some first-run syndicated programs on nights when the network did not offer programming. During this period, alongside WB prime time programming and eventually animated series from
{"page_id": 850590, "title": "KPLR-TV"}
and for depriving her of money owed to her. By 1608, Erik Lange was living in Prague, and he died there in 1613 (Det Kongelige Bibliotek). Sophia was often ridiculed and avoided due to her personal life and marriage. Many alienated her because of her marriage to Erik Lange, which was opposed by everyone in her family except her brother Tycho. Sophia Brahe personally financed the restoration of the local church, Ivetofta Kyrka. She planned to be buried there, and the lid for her unused sarcophagus remains in the church's armory. By 1616, however, she had moved permanently to Zealand and settled in Helsingør (Elsinore), where she worked primarily on horticulture and healing plants. She spent her last years writing up the genealogy of Danish noble families, publishing the first major version in 1626 (there were later additions). Her work is still considered a major source for the early history of the Danish nobility (Det Kongelige Bibliotek). She died in Helsingør in 1643, and was buried in the Torrlösa old church in the village of Torrlösa, east of the town of Landskrona in what was then Denmark but is now southern Sweden. That church housed a burial chapel for the Thott family that remained for some time even after the church itself was pulled down in the mid-19th century (the new Torrlösa church was built nearby). Currently, a stone setting marks the outlines of the Thott chapel, while the tombstone for Sophia Brahe still stands on the site. == Career and research == Tycho wrote that he had trained Sophia in horticulture and chemistry, but initially discouraged her from studying astronomy. Undeterred, Sophia learned astronomy on her own, studying books in German, and having Latin books translated with her own money so
{"page_id": 4918930, "title": "Sophia Brahe"}
interconnected only by gravity. Mathematicians have begun to classify well-known construction sets using group theory to study the combinatoric possibilities of structures that can be built. == Plumbing == In plumbing fittings, the M or F usually comes at the beginning rather than the end of the abbreviated designation. For example: MIPT denotes male iron pipe thread; FIPT denotes female iron pipe thread. A short length of pipe having an MIP thread at both ends is sometimes called a nipple. A short pipe fitting having an FIP thread at both ends is sometimes called a coupling. Hermaphroditic connections, which may include both male and female elements in a single unit, are used for some specialized tubing fittings, such as Storz fire hose connectors. A picture of such fittings appears in § Genderless (hermaphroditic), below. Interchangeable garden hose fittings made by GEKA are also hermaphroditic, relying on a rubber gasket to make the final connection. == Downspout == Downspouts (also called downpipes, rain conductors, or leaders) are used to convey rainwater from roof gutters to the ground through hollow pipes or tubes. These tubes usually come in sections, joined by inserting the male end (often crimped with a special tool to slightly reduce its size) into the female end of the next section. These connections are usually not sealed or caulked, instead relying on gravity to move the rainwater from the male end and into the receiving female connection located directly below. == Ductwork == Sheet metal ductwork for conveying air in HVAC systems typically uses gendered connections. Typically, the airflow through a ductwork connection is from male to female. However, connections formed opposite to this convention can be seen in some systems, since all connections are typically sealed with duct sealing mastic or tape to prevent leakage anyway. The flow
{"page_id": 1025265, "title": "Gender of connectors and fasteners"}
the physical world". Finally, the revolution in robotics will really be the development of strong AI, defined as machines which have human-level intelligence or greater. This development will be the most important of the century, "comparable in importance to the development of biology itself". Kurzweil concedes that every technology carries with it the risk of misuse or abuse, from viruses and nanobots to out-of-control AI machines. He believes the only countermeasure is to invest in defensive technologies, for example by allowing new genetics and medical treatments, monitoring for dangerous pathogens, and creating limited moratoriums on certain technologies. As for artificial intelligence, Kurzweil feels the best defense is to increase the "values of liberty, tolerance, and respect for knowledge and diversity" in society, because "the nonbiological intelligence will be embedded in our society and will reflect our values". === The singularity === Kurzweil touches on the history of the singularity concept, tracing it back to John von Neumann in the 1950s and I. J. Good in the 1960s. He compares his singularity to that of a mathematical or astrophysical singularity. While his idea of a singularity is not actually infinite, he says it looks that way from any limited perspective. During the singularity, Kurzweil predicts that "human life will be irreversibly transformed" and that humans will transcend the "limitations of our biological bodies and brain". He looks beyond the singularity to say that "the intelligence that will emerge will continue to represent the human civilization." Further, he feels that "future machines will be human-like, even if they are not biological". Kurzweil claims that once nonbiological intelligence predominates, the nature of human life will be radically altered: there will be radical changes in how humans learn, work and play. Kurzweil envisions nanobots which allow people to eat whatever they want while remaining thin
{"page_id": 767123, "title": "The Singularity Is Near"}
options are available if heterogeneity is identified among a group of studies that would otherwise be considered suitable for a meta-analysis. **MECIR Box 10.10.c** Relevant expectations for conduct of intervention reviews. _C69: Considering statistical heterogeneity when interpreting the results_ (**Mandatory**): _Take into account any statistical heterogeneity when interpreting the results, particularly when there is variation in the direction of effect._ The presence of heterogeneity affects the extent to which generalizable conclusions can be formed. If a fixed-effect analysis is used, the confidence intervals ignore the extent of heterogeneity. If a random-effects analysis is used, the result pertains to the mean effect across studies. In both cases, the implications of notable heterogeneity should be addressed. It may be possible to understand the reasons for the heterogeneity if there are sufficient studies. 1. _Check again that the data are correct._ Severe apparent heterogeneity can indicate that data have been incorrectly extracted or entered into meta-analysis software. For example, if standard errors have mistakenly been entered as SDs for continuous outcomes, this could manifest itself in overly narrow confidence intervals with poor overlap and hence substantial heterogeneity. Unit-of-analysis errors may also be causes of heterogeneity (see Chapter 6, Section 6.2). 2. _Do not do a meta-analysis._ A systematic review need not contain any meta-analyses. If there is considerable variation in results, and particularly if there is inconsistency in the direction of effect, it may be misleading to quote an average value for the intervention effect. 3. _Explore heterogeneity._ It is clearly of interest to determine the causes of heterogeneity among results of studies. This process is problematic since there are often many characteristics that vary across studies from which one may choose. Heterogeneity may be explored by conducting subgroup analyses (see Section 10.11.3) or meta-regression (see Section 10.11.4). Reliable conclusions can only be
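The statistical heterogeneity discussed above is conventionally quantified with Cochran's Q and the derived I² statistic. A minimal sketch (not part of the Handbook's guidance; the study effects and standard errors below are invented for illustration):

```python
import numpy as np

def q_and_i2(effects, std_errs):
    """Cochran's Q and I^2 for a set of study results.

    effects  : per-study effect estimates (e.g. log odds ratios)
    std_errs : their standard errors
    """
    effects = np.asarray(effects, dtype=float)
    w = 1.0 / np.asarray(std_errs, dtype=float) ** 2   # inverse-variance weights
    pooled = np.sum(w * effects) / np.sum(w)           # fixed-effect pooled estimate
    q = np.sum(w * (effects - pooled) ** 2)            # Cochran's Q
    df = len(effects) - 1
    i2 = max(0.0, (q - df) / q) * 100.0 if q > 0 else 0.0  # percent inconsistency
    return q, i2

# Three hypothetical studies with inconsistent effects (one negative)
q, i2 = q_and_i2([0.2, 0.5, -0.1], [0.10, 0.15, 0.12])
```

I² expresses the share of variability across studies that is due to heterogeneity rather than chance, which is why variation in the *direction* of effect, as in the example, pushes it high.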
{"source": 2792, "title": "from dpo"}
turn a tribute to the Kravitzes. Hilard is survived by three sons from his first marriage, two of whom are physicians, and by two daughters whom he and Ellen adopted. _The Emeritimes, Winter 2007_ At the time, the bowling program was recognized as one of the top bowling programs in the country, and his teams won numerous state, regional, and national tournaments. Seven of his bowlers won All-America honors. In addition to his coaching duties, Bud continued to teach in the Department of Physical Education and Athletics and serve in numerous other roles. He was well-liked by students, both physical education majors and non-majors. He was an excellent teacher, always prepared and expert in imparting his knowledge to help many of them become effective teachers and coaches in the greater Los Angeles area. Upon his retirement,
{"source": 5205, "title": "from dpo"}
$((x,y)) \in \operatorname{Rep}(\mathbb{Z}^2, G)$ for its equivalence class under simultaneous conjugation. When $(x,y) \in T^2$, we will also write $((x,y)) \in T^2/W$ for its equivalence class under the diagonal action of $W$. Clearly, if $x \in A$, then the assignment $(y) \mapsto ((x,y))$ defines a homeomorphism $$Z_G(x)/\operatorname{Ad} Z_G(x) \cong \operatorname{pr}_1^{-1}(x). \tag{11}$$ On the other hand, if $x \in A(p)$, then $q^{-1}(x) = \operatorname{pr}_1^{-1}(x) \cap R_G(p)$ corresponds to a subspace of $Z_G(x)/\operatorname{Ad} Z_G(x)$. To describe it, we rewrite (11) using the finite covering $$\rho \colon \widetilde{DZ_G(x)} \times Z(Z_G(x))^0 \to Z_G(x), \quad (g,t) \mapsto u(g)\,t,$$ where $Z(Z_G(x))^0$ is the identity component of the center of $Z_G(x)$, and $u \colon \widetilde{DZ_G(x)} \to DZ_G(x)$ is the universal covering (see [15, IX §1.4 Corollary 1]). There is a finite subgroup $C \leq Z(\widetilde{DZ_G(x)}) \times Z(Z_G(x))^0$ such that $\rho$ descends to an isomorphism $\widetilde{DZ_G(x)} \times_C Z(Z_G($
{"source": 6389, "title": "from dpo"}
option 3). This way, positive shear stresses are plotted upward in the Mohr-circle space and the angle $2\theta$ has a positive rotation counterclockwise in the Mohr-circle space. This alternative sign convention produces a circle that is identical to sign convention #2 in Figure 5, because a positive shear stress $\tau_n$ is also a counterclockwise shear stress, and both are plotted downward; likewise, a negative shear stress $\tau_n$ is a clockwise shear stress, and both are plotted upward. This article follows the engineering mechanics sign convention for the physical space and the alternative sign convention for the Mohr-circle space (sign convention #3 in Figure 5). === Drawing Mohr's circle === Assuming we know the stress components $\sigma_x$, $\sigma_y$, and $\tau_{xy}$ at a point $P$ in the object under study, as shown in Figure 4, the following are the steps to construct the Mohr circle for the state of stresses at $P$: Draw the Cartesian coordinate system $(\sigma_n, \tau_n)$ with a horizontal $\sigma_n$-axis and a vertical $\tau_n$-axis. Plot two points $A(\sigma_y, \tau_{xy})$ and $B(\sigma_x, -\tau_{xy})$ in the $(\sigma_n, \tau_n)$ space corresponding to the known stress components on both perpendicular planes $A$ and $B$, respectively (Figure 4 and
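The construction steps above fix the circle completely: its center sits at the average normal stress on the horizontal axis, and its radius follows from the known components, which also gives the principal stresses. A short numeric sketch (the stress values are illustrative only):

```python
import math

def mohr_circle(sx, sy, txy):
    """Center, radius, and principal stresses of Mohr's circle
    for a plane-stress state (sigma_x, sigma_y, tau_xy)."""
    center = (sx + sy) / 2.0                   # average normal stress
    radius = math.hypot((sx - sy) / 2.0, txy)  # sqrt(((sx-sy)/2)^2 + txy^2)
    s1, s2 = center + radius, center - radius  # principal stresses
    return center, radius, s1, s2

# Example state: sigma_x = 80, sigma_y = 20, tau_xy = 40 (e.g. MPa)
c, r, s1, s2 = mohr_circle(sx=80.0, sy=20.0, txy=40.0)

# Sanity check: point A(sigma_y, tau_xy) lies on the circle
on_circle = abs(math.hypot(20.0 - c, 40.0) - r) < 1e-9
```

Both plotted points A and B lie on the circle by construction, since each is at distance R from the center.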
{"page_id": 1326660, "title": "Mohr's circle"}
A pollinator is an animal that moves pollen from the male anther of a flower to the female stigma of a flower. This helps to bring about fertilization of the ovules in the flower by the male gametes from the pollen grains. Insects are the major pollinators of most plants, and insect pollinators include all families of bees and most families of aculeate wasps; ants; many families of flies; many lepidopterans (both butterflies and moths); and many families of beetles. Vertebrates, mainly bats and birds, but also some non-bat mammals (monkeys, lemurs, possums, rodents) and some lizards, pollinate certain plants. Among the pollinating birds are hummingbirds, honeyeaters and sunbirds with long beaks; they pollinate a number of deep-throated flowers. Humans may also carry out artificial pollination. A pollinator is different from a pollenizer, a plant that is a source of pollen for the pollination process. == Background == Plants fall into pollination syndromes that reflect the type of pollinator being attracted. These are characteristics such as: overall flower size, the depth and width of the corolla, the color (including patterns called nectar guides that are visible only in ultraviolet light), the scent, amount of nectar, composition of nectar, etc. For example, birds visit red flowers with long, narrow tubes and much nectar, but are not as strongly attracted to wide flowers with little nectar and copious pollen, which are more attractive to beetles. When these characteristics are experimentally modified (altering colour, size, orientation), pollinator visitation may decline. Although non-bee pollinators have been observed to be less effective at depositing pollen than bee pollinators, one study showed that non-bees made more visits than bees, performing 38% of visits to crop flowers and thereby outweighing their lower per-visit effectiveness. It has recently been discovered that cycads, which are
{"page_id": 106084, "title": "Pollinator"}
$\theta$, $\hat{\mathbf{x}}$, $\hat{\mathbf{y}}$ defined by $\mathbf{r}, \dot{\mathbf{r}}$ will all vary with time, as opposed to the case of a Kepler orbit for which only the parameter $\theta$ will vary. The Kepler orbit computed in this way, having the same "state vector" as the solution to the "equation of motion" (59) at time $t$, is said to be "osculating" at this time. This concept is for example useful in the case $$\mathbf{F}(\mathbf{r}, \dot{\mathbf{r}}, t) = -\alpha \frac{\hat{\mathbf{r}}}{r^2} + \mathbf{f}(\mathbf{r}, \dot{\mathbf{r}}, t),$$ where $\mathbf{f}(\mathbf{r}, \dot{\mathbf{r}}, t)$ is a small "perturbing force", due for example to a faint gravitational pull from other celestial bodies. The parameters of the osculating Kepler orbit will then change only slowly, and the osculating Kepler orbit is a good approximation to the real orbit for a considerable time period before and after the time of osculation. This concept can also be useful for a rocket during powered flight, as it tells which Kepler orbit the rocket would continue in if the thrust were switched off. For a "close to circular" orbit the concept of the "eccentricity vector", defined as $\mathbf{e} = e\hat{\mathbf{x}}$, is useful. From (53), (54) and (56) it follows that $\mathbf{e}$ is a smooth differentiable function of the state vector $(\mathbf{r}, \mathbf{v})$
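For illustration, the osculating orbit's eccentricity vector can be computed directly from a state vector using the standard two-body relation $\mathbf{e} = \left((v^2 - \mu/r)\,\mathbf{r} - (\mathbf{r} \cdot \mathbf{v})\,\mathbf{v}\right)/\mu$; this is textbook celestial mechanics, not a reconstruction of the article's own equations (53)–(56):

```python
import numpy as np

def eccentricity_vector(r, v, mu):
    """Eccentricity vector of the osculating Kepler orbit for state
    vector (r, v) and gravitational parameter mu, via
    e = ((|v|^2 - mu/|r|) r - (r . v) v) / mu."""
    r = np.asarray(r, dtype=float)
    v = np.asarray(v, dtype=float)
    rn = np.linalg.norm(r)
    return ((np.dot(v, v) - mu / rn) * r - np.dot(r, v) * v) / mu

# Sanity check in normalized units (mu = 1): a circular orbit
# r = (1, 0, 0), v = (0, 1, 0) has zero eccentricity.
e = eccentricity_vector([1.0, 0.0, 0.0], [0.0, 1.0, 0.0], mu=1.0)
```

Because $\mathbf{e}$ is a smooth function of the state vector, it stays well-behaved for near-circular orbits where the classical elements $e$ and $\theta$ individually become ill-conditioned.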
{"page_id": 18352021, "title": "Kepler orbit"}
Naubolos is an impact crater on Tethys, one of Saturn's moons. Its diameter is 54 kilometers (34 mi). It is named after Naubolus, father of Euryalus in Homer's Odyssey. == References ==
{"page_id": 57700628, "title": "Naubolos (crater)"}
KHII-TV (channel 9) is a television station in Honolulu, Hawaii, United States, serving the Hawaiian Islands as an affiliate of MyNetworkTV. It is owned by Nexstar Media Group alongside dual Fox affiliate/CW owned-and-operated station KHON-TV (channel 2). The two stations share studios at the Hawaiki Tower in downtown Honolulu; KHII's main transmitter is located in Akupu, Hawaii. == History == === Early history === KHII signed on the air on February 7, 1988, as KFVE, broadcasting on channel 5, as the final VHF station in the market. Originally, KFVE focused on low-budget programming such as Hawaii Five-O repeats. Later, under the moniker "Hawaii is Watching us Grow", it focused on movies and syndicated fare, and by 1990 had acquired programs from KMGT (channel 26, now KAAH) before that station's switch to religious programming. As a small independent station, it had to rely on low-budget programming and Japanese television dramas (most of which were later carried on independent station KIKU, channel 20) through much of its early existence. That changed in 1993, when another local station, KHNL (then a Fox affiliate owned by the Providence Journal Company), took over management of KFVE through a local marketing agreement (LMA); KFVE then merged its operations into KHNL's facility. It was the first such local-market sharing arrangement in the country. After KHNL took over operation of the station in the mid-1990s, coverage of University of Hawaii athletics moved to KFVE, which rebranded as "The Home Team". === UPN and WB affiliations === On January 16, 1995, KFVE became a charter affiliate of UPN under the brand name "K-5 UPN Hawaii". On December 28, 1998, the station began carrying programming from The WB as a secondary affiliation. Previously, The WB had been carried throughout Hawaii on KWHE (channel 14). KFVE was acquired outright by
{"page_id": 845360, "title": "KHII-TV"}
additive manufacturing: STL file format, a de facto standard for transferring solid geometric models to SFF machines. To obtain the necessary motion control trajectories to drive the actual SFF, rapid prototyping, 3D printing or additive manufacturing mechanism, the prepared geometric model is typically sliced into layers, and the slices are scanned into lines (producing a "2D drawing" used to generate a trajectory, as in a CNC toolpath), mimicking in reverse the layer-to-layer physical building process. == Applications == Rapid prototyping is commonly applied in software engineering to try out new business models and application architectures, and across industries such as aerospace, automotive, financial services, product development, and healthcare. Aerospace design and industrial teams rely on prototyping to create new AM methodologies in the industry. Using SLA, they can quickly make multiple versions of their projects in a few days and begin testing sooner. Rapid prototyping allows designers and developers to get an accurate idea of how the finished product will turn out before putting too much time and money into a prototype. Using 3D printing for rapid prototyping enables industrial-scale output, from large-scale moulds to spare parts produced within a short period of time. == Types == Stereolithography (SLA) → a laser-cured photopolymer, for materials such as thermoplastic-like photopolymers. Selective laser sintering (SLS) → a laser-sintered powder, for materials such as nylon or TPU. Direct metal laser sintering (DMLS) → laser-sintered metal powder, for materials like stainless steel, titanium, chrome, and aluminum. Fused deposition modeling (FDM) → fused extrusions of filaments like ABS, PC, and PPCU. Multi-jet fusion (MJF) → an inkjet array that selectively fuses across a bed of nylon powder, for materials such as black Nylon 12. PolyJet (PJET) → a UV-cured jetted photopolymer, working with acrylic-based and elastomeric
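The slicing step described above — intersecting the model with a stack of horizontal planes to produce 2D layer contours — can be sketched for a single mesh triangle. This is a simplified illustration of plane–triangle intersection, not any particular slicer's implementation:

```python
def slice_triangle(tri, z):
    """Intersect one mesh triangle with the plane Z = z.
    tri is three (x, y, z) vertices; returns the crossing points
    (0 or 2 of them) forming one segment of the slice contour."""
    pts = []
    for i in range(3):
        a, b = tri[i], tri[(i + 1) % 3]
        if (a[2] - z) * (b[2] - z) < 0:     # edge strictly crosses the plane
            t = (z - a[2]) / (b[2] - a[2])  # interpolation factor along edge
            pts.append((a[0] + t * (b[0] - a[0]),
                        a[1] + t * (b[1] - a[1])))
    return pts

# One triangle spanning z = 0..2, sliced at layer height z = 1
seg = slice_triangle([(0, 0, 0), (2, 0, 0), (0, 0, 2)], 1.0)
```

A real slicer repeats this over every triangle at every layer height, stitches the resulting segments into closed contours, and handles the degenerate cases (vertices or whole edges lying exactly on the plane) that this sketch ignores.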
{"page_id": 10579736, "title": "Rapid prototyping"}
medical professionals suggest euthanasia to non-suicidal disabled patients. == Non-human animals == Some philosophers and those involved in animal welfare, ethology, the rights of animals, and related subjects, consider that certain or even all animals should also be considered to be persons and thus granted legal personhood. Commonly named species in this context include the apes, cetaceans, parrots, cephalopods, corvids, elephants, bears, pigs, leporids and rodents, because of their apparent intelligence, sentience, and intricate social rules. The idea of extending personhood to all animals has the support of legal scholars such as Alan Dershowitz and Laurence Tribe of Harvard Law School, and animal law courses are (as of 2008) taught in 92 out of 180 law schools in the United States. On May 9, 2008, Columbia University Press published Animals as Persons: Essays on the Abolition of Animal Exploitation by Professor Gary L. Francione of Rutgers University School of Law, a collection of writings that summarizes his work to date and makes the case for non-human animals as persons. Those who oppose personhood for non-human animals are known as human exceptionalists or human supremacists, and more pejoratively speciesists. Other theorists attempt to demarcate between degrees of personhood. For example, Peter Singer's two-tiered account distinguishes between basic sentience and the higher standard of self-consciousness which constitutes personhood. His approach has been criticized for accepting the personhood of some animals, but rejecting the personhood of people with disabilities such as dementia. It has also been given as an example of the limits of a capacities-based definition of personhood, in that they tend to be defined in ways that reinforce existing systems of power and privilege by preferring the capacities that are valued by those who write the definitions. A squirrel would value agility and balance in defining personhood; a tree might grant
{"page_id": 33690397, "title": "Personhood"}
areas of regulation and cannot be unlimited in scope. An example of this is the reporting requirement under the Epidemics Act, which cannot be fully codified due to the need for ongoing scientific progress. == References == === Notes === === Citations === == External links == English page about Epidemics Act at the Federal Office of Public Health
{"page_id": 74623488, "title": "Epidemics Act"}
and partner. Tamara Taylor as Prof. Angela Wheatley (seasons 1 & 2), a former math professor at Columbia University, ex-wife of Richard Wheatley, and a suspect in the hit ordered on Kathy Stabler. Ainsley Seiger as Detective 2nd Grade Jet Slootmaekers (seasons 1–5), a former independent hacker who is recruited to the OCCB task force on Stabler's recommendation. She was requalified as an NYPD officer to work with the OCCB task force. In season 3 episode 16 ("Chinatown"), she is promoted from Detective 3rd Grade to Detective 2nd Grade. In season 5, she leaves the OCCB task force to work with the FBI. Dylan McDermott as Richard Wheatley (né Sinatra) (seasons 1 & 2), son of notorious mobster Manfredi Sinatra, now a businessman and owner of an online pharmaceutical company who leads a second life as a crime boss, and was a suspect in the murder of Stabler's wife. He was presumed murdered by Angela after she discovered that Wheatley murdered their son Richie. Nona Parker Johnson as Detective 3rd Grade Carmen "Nova" Riley (season 2), an undercover narcotics detective working under Brewster's command to infiltrate the Marcy Killers. She retires from the NYPD and leaves New York after the murder of the gang's leader Preston Webb. Brent Antonello as Detective 2nd Grade Jamie Whelan (season 3), a detective with the OCCB. In the season 3 finale, he is shot and paralyzed by Kyle Wilkie and later dies in the hospital. Rick Gonzalez as Detective 2nd Grade Roberto "Bobby" Reyes (seasons 3–present), an undercover detective with the OCCB. Dean Norris as Randall Stabler (season 5; recurring season 4), Stabler's older brother. === Recurring === Jen Jacob as Bridget Donnelly (season 2), the wife of Frank Donnelly (Denis Leary). Ben Chase as Detective 1st Grade Freddie Washburn (season 1), a detective from the Narcotics unit
{"page_id": 64295042, "title": "Law & Order: Organized Crime"}
will not provide an external pusher-vehicle as they did to facilitate student team pod testing in both the January and August 2017 competitions. Ultra-small pods will not be allowed this time, with minimum pod length set at 1.5 m (5 ft). There will be an additional sub-competition, with up to three qualifying teams allowed to take part in a Levitation Sub-Competition that will require non-wheeled pod levitation, tested on an external (non-vacuum) test track. The pods will need to translate at least 75 ft (23 m) down the track, stop, reverse, and translate back to the original position, all while levitating for the entire duration. The fastest full cycle wins the levitation sub-competition. The 2018 competition took place on 22 July 2018. As of 2018, Steve Davis, who joined SpaceX as employee no. 9 in 2003 and was then project leader for The Boring Company, had been the operations manager for the Hyperloop Pod Competition since its inception. The fourth year of competition was announced for the northern hemisphere summer of 2019, and the event was run on 21 July 2019. The team from the Technical University of Munich ("Team TUM", formerly named "WARR Hyperloop") again achieved the highest speed on the track at 463 km/h (288 mph). Although only slightly faster than the previous year, two other teams were able to achieve high-speed runs for the first time. A total of 21 teams competed, with some 700 individuals involved across the teams. Four of the teams were able to qualify for track runs. Following the July 2019 competition, Musk announced that the 2020 competition would be run on a much longer track of 10 km (6.2 mi) that would include a curve, ten times as long as the 1 km straight track used in the first several years of the annual competition. By November
{"page_id": 49273370, "title": "Hyperloop pod competition"}
the algorithm execution time on the entire database. For SYNTH800 we obtain a speedup of more than 20 at small sample size and high support. For SYNTH250 we get a speedup of more than 10 in the same range. The performance at lower support is poor due to the large number of false large itemsets found. At higher sampling rates we get lower performance, since the reduction in database I/O is not as significant, and due to the introduction of more inaccuracies. For the smaller databases (ENROLL and TRBIB), at small sample size, we get no speedup, due to the large number of false large itemsets generated. We can observe that there is a trade-off between sampling size, minimum support, and performance. The performance gains are negated either by a large number of false large itemsets at very low support or by decreased gains in I/O vs. computation. We can conclude that in general sampling is a very effective technique in terms of performance, and we can expect it to work very well with large databases, as they have higher computation and I/O overhead. 4.4. Confidence: comparison with Chernoff bounds In this section we compare the Chernoff bound with experimentally observed results. We show that for the databases we have considered the Chernoff bound is very conservative. Consider equations 1 and 2. For different values of accuracy, and for a given sampling size, for each itemset I, we can obtain the theoretical confidence value by simply evaluating the right-hand side of the equations. For example, for the upper bound the confidence is C = 1 − e^(−ε²nτ/3). Recall that confidence provides information about an itemset's actual support in the sample being away from the expected support by a certain amount (nτε). We can also obtain
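Evaluating that upper-bound confidence is a one-line computation; a small sketch (variable names are ours: ε is the accuracy, n the sample size, τ the minimum support as a fraction):

```python
import math

def chernoff_confidence(eps, n, tau):
    """Confidence that an itemset's sample support stays within eps of its
    expected support, per the upper-bound form C = 1 - exp(-eps^2 * n * tau / 3)."""
    return 1.0 - math.exp(-eps**2 * n * tau / 3.0)

# Example: 1% minimum support, 10% accuracy, growing sample sizes.
for n in (10_000, 100_000, 1_000_000):
    print(n, round(chernoff_confidence(0.1, n, 0.01), 4))
```

The rapid growth of C with n illustrates why the bound only becomes tight at large sample sizes, consistent with the observation that it is very conservative for small samples.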
{"source": 1476, "title": "from dpo"}
employee receive), - The `ELV` column contains the `y` values (the employee ELV after one year)

![](excel_file_for_linear_models.png)

```
# Load data into a pandas dataframe
df2 = pd.read_excel("data/ELV_vs_hours.ods", sheet_name="Data")
# df2
df2.describe()

# plot ELV vs. hours data
sns.scatterplot(x='hours', y='ELV', data=df2)

# linear model plot (preview)
# sns.lmplot(x='hours', y='ELV', data=df2, ci=False)
```

#### Types of linear relationship between input and output

Different possible relationships between the number of hours of stats training and ELV gains:

![](figures/ELV_as_function_of_stats_hours.png)

## 4.2 Fitting linear models

- Main idea: use the `fit` method from `statsmodels.ols` and a formula (approach 1)
- Visual inspection
- Results of the linear model fit are:
  - `beta0` = $\beta_0$ = baseline ELV (y-intercept)
  - `beta1` = $\beta_1$ = increase in ELV for each additional hour of stats training (slope)
- Five more alternative fitting methods (bonus material):
  2. fit using statsmodels `OLS`
  3. solution using `linregress` from `scipy`
  4. solution using `optimize` from `scipy`
  5. linear algebra solution using `numpy`
  6. solution using the `LinearRegression` model from scikit-learn

### Using statsmodels formula API

The `statsmodels` Python library offers a convenient way to specify a statistical model as a "formula" that describes the relationship we're looking for. Mathematically, the linear model is written:

$\large \textrm{ELV} \ \ \sim \ \ \beta_0\cdot 1 \ + \ \beta_1\cdot\textrm{hours}$

and the formula is: `ELV ~ 1 + hours`

Note that the variables $\beta_0$ and $\beta_1$ are omitted, since the whole point of fitting a linear model is to find these coefficients. The parameters of the model are:

- Instead of $\beta_0$, the constant parameter will be called `Intercept`
- Instead of a new name $\beta_1$, we'll call it the `hours` coefficient (i.e.
the coefficient associated with the `hours` variable in the model)

```
import statsmodels.formula.api as smf

model = smf.ols('ELV ~ 1 + hours', data=df2)
result = model.fit()
```
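Alternative 5 above (the linear-algebra solution using `numpy`) can be sketched without the data file by generating synthetic hours/ELV data — the true coefficients below are made up for illustration:

```python
import numpy as np

# Synthetic stand-in for the hours/ELV data (true beta0=100, beta1=20, chosen for illustration)
rng = np.random.default_rng(42)
hours = rng.uniform(0, 80, size=200)
elv = 100 + 20 * hours + rng.normal(0, 5, size=200)

# Design matrix [1, hours] mirrors the formula ELV ~ 1 + hours
X = np.column_stack([np.ones_like(hours), hours])
beta0, beta1 = np.linalg.lstsq(X, elv, rcond=None)[0]
print(beta0, beta1)  # least-squares estimates of the intercept and slope
```

The recovered `beta0` and `beta1` should land close to the generating values, which is the same computation the formula API performs under the hood.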
{"source": 3869, "title": "from dpo"}
proactive approach to vulnerability management, enabling security teams to focus on the most pressing threats.

Best Practices for Using CVSS Base Scores in Exposure Management
----------------------------------------------------------------

To maximize the effectiveness of CVSS Base Scores in your exposure management strategy, follow these best practices:

1. **Integrate CVSS Scores with Exposure Management and Environmental Metrics:** Environmental metrics consider the unique aspects of your organization’s infrastructure and security posture. Adjusting the base score with these factors enables a more accurate and context-specific risk assessment.
3. **Implement Continuous Monitoring and Reassessment:** Vulnerabilities are dynamic; risk levels can change as new information emerges or your environment evolves. Regularly updating and reassessing CVSS scores is vital for accurately understanding your exposure. Continuous monitoring ensures you capture changes in the threat landscape, such as discovering new exploits or releasing patches, allowing you to adjust your risk priorities in real time.
4. **Prioritize Based on Business Impact:** Not all vulnerabilities pose the same level of threat to your organization. Prioritize vulnerabilities not only by their CVSS Base Scores but also by their potential impact on your business operations. High CVSS scores should be cross-referenced with the criticality
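A minimal sketch of the business-impact prioritization idea — weighting the CVSS base score by an asset-criticality factor. The 1–5 criticality scale, the field names, and the simple product are illustrative choices of ours, not part of the CVSS specification:

```python
# Rank vulnerabilities by CVSS base score weighted by asset criticality.
vulns = [
    {"id": "VULN-1", "cvss_base": 9.8, "asset_criticality": 2},
    {"id": "VULN-2", "cvss_base": 7.5, "asset_criticality": 5},
    {"id": "VULN-3", "cvss_base": 6.1, "asset_criticality": 1},
]

def risk(v):
    # Hypothetical composite: severity scaled by how critical the asset is
    return v["cvss_base"] * v["asset_criticality"]

ranked = sorted(vulns, key=risk, reverse=True)
for v in ranked:
    print(v["id"], round(risk(v), 1))
```

Note how a high-severity finding on a critical asset (VULN-2) can outrank a critical-severity finding on a low-value asset (VULN-1) under such a scheme.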
{"source": 5796, "title": "from dpo"}
(when the mean of real data ≤ the mean of shuffled data) or above real data (when the mean of real data ≥ the mean of shuffled data). ∗p ≤ 0.05. ∗∗p ≤ 0.01. ∗∗∗p ≤ 0.001. Error bars: mean ± SEM. Using this method, we examined the relationship between grid phase and physical distance in cell groups consisting of one reference cell and its neighbors (maximal distance between neighboring cells and the reference cell: 119.5 μm; Figure 4). We then asked whether the arrangement of cells in local brain neighborhoods matches the arrangement of their phases determined by spatial responses (spatial tuning phases). To test this, we performed a “folding triangle analysis” (Figures S3), in which small groups of cells were
{"source": 7370, "title": "from dpo"}
An ascocarp, or ascoma (pl.: ascomata), is the fruiting body (sporocarp) of an ascomycete phylum fungus. It consists of very tightly interwoven hyphae and millions of embedded asci, each of which typically contains four to eight ascospores. Ascocarps are most commonly bowl-shaped (apothecia) but may take on a spherical or flask-like form that has a pore opening to release spores (perithecia) or no opening (cleistothecia). == Classification == The ascocarp is classified according to its placement (in ways not fundamental to the basic taxonomy). It is called epigeous if it grows above ground, as with the morels, while underground ascocarps, such as truffles, are termed hypogeous. The structure enclosing the hymenium is divided into the types described below (apothecium, cleistothecium, etc.) and this character is important for the taxonomic classification of the fungus. Apothecia can be relatively large and fleshy, whereas the others are microscopic—about the size of flecks of ground pepper. == Apothecium == An apothecium (plural: apothecia) is a wide, open, saucer-shaped or cup-shaped fruit body. It is sessile and fleshy. The structure of the apothecium chiefly consists of three parts: hymenium (upper concave surface), hypothecium, and excipulum (the "foot"). The asci are present in the hymenium layer. The asci are freely exposed at maturity. An example are the members of Dictyomycetes. Here the fertile layer is free, so that many spores can be dispersed simultaneously. The morel, Morchella, an edible ascocarp, not a mushroom, favored by gourmets, is a mass of apothecia fused together in a single large structure or cap. The genera Helvella and Gyromitra are similar. == Cleistothecium == A cleistothecium (plural: cleistothecia) is a globose, completely closed fruit body with no special opening to the outside. The ascomatal wall is called peridium and typically consists of densely interwoven hyphae or pseudoparenchyma cells. It may
{"page_id": 220445, "title": "Ascocarp"}
The Prout is an obsolete unit of energy, whose value is 1 Prout = 2.9638 × 10⁻¹⁴ J. This is equal to one twelfth of the binding energy of the deuteron. == History == The "Prout" is a unit of nuclear binding energy, and is 1/12 the binding energy of the deuteron, or 185.5 keV. It is named after William Prout. "Proutons" was an early candidate for the name of what are now called protons. == See also == William Prout Prout's hypothesis Atomic number
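The two stated values can be cross-checked numerically; the physical constants below are standard approximations supplied by us, not taken from the article:

```python
# Cross-check: 2.9638e-14 J expressed in keV, versus 1/12 of the
# deuteron binding energy (~2224.57 keV).
PROUT_J = 2.9638e-14
EV_PER_J = 1 / 1.602177e-19          # electron-volts per joule
DEUTERON_BINDING_KEV = 2224.57       # deuteron binding energy in keV

prout_kev = PROUT_J * EV_PER_J / 1e3
twelfth = DEUTERON_BINDING_KEV / 12
print(round(prout_kev, 1), round(twelfth, 1))  # both come out near 185 keV
```

The two routes agree to well under 1 keV, consistent with the quoted 185.5 keV figure.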
{"page_id": 52150400, "title": "Prout (unit)"}
Consider a sequence of four notes, n1–n4. The transition n2–n3 may be heard as a group boundary if: (slur/rest) the interval of time from the end of n2 to the beginning of n3 is greater than that from the end of n1 to the beginning of n2 and that from the end of n3 to the beginning of n4, or if (attack-point) the interval of time between the attack points of n2 and n3 is greater than that between the attack points of n1 and n2 and that between the attack points of n3 and n4. (Change) Consider a sequence of four notes, n1–n4. The transition n2–n3 may be heard as a group boundary if marked by (Register) the transition n2–n3 involves a greater intervallic distance than both n1–n2 and n3–n4, or if (Dynamics) the transition n2–n3 involves a change in dynamics and n1–n2 and n3–n4 do not, or if (Articulation) the transition n2–n3 involves a change in articulation and n1–n2 and n3–n4 do not, or if (Length) n2 and n3 are of different length and both pairs n1,n2 and n3,n4 do not differ in length. (Intensification) A larger-level group may be placed where the effects picked out by GPRs 2 and 3 are more pronounced. (Symmetry) "Prefer grouping analyses that most closely approach the ideal subdivision of groups into two parts of equal length." (Parallelism) "Where two or more segments of music can be construed as parallel, they preferably form parallel parts of groups." (Time-span and prolongational stability) "Prefer a grouping structure that results in more stable time-span and/or prolongational reductions." ==== Transformational grouping rules ==== Grouping overlap (p. 60) Given a well-formed underlying grouping structure G as described by GWFRs 1–5, containing two adjacent groups g1 and g2 such that g1 ends with event e1, g2 begins with event e2, and e1 = e2, a well-formed surface grouping structure
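The attack-point rule is mechanical enough to express directly; a small sketch using our own encoding of the four notes as attack times:

```python
def attack_point_boundary(attacks):
    """Given attack times of four consecutive notes n1..n4, return True if
    the n2-n3 transition may be heard as a group boundary under the
    attack-point rule: the n2-n3 inter-onset interval exceeds both the
    n1-n2 and n3-n4 inter-onset intervals."""
    n1, n2, n3, n4 = attacks
    return (n3 - n2) > (n2 - n1) and (n3 - n2) > (n4 - n3)

print(attack_point_boundary([0.0, 1.0, 3.0, 4.0]))  # True: long gap before n3
print(attack_point_boundary([0.0, 1.0, 2.0, 3.0]))  # False: even spacing
```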
{"page_id": 31992732, "title": "Generative theory of tonal music"}
B)} and sup(A + B) = (sup A) + (sup B). Product of sets The multiplication of two sets A and B of real numbers is defined similarly to their Minkowski sum: A · B := {a · b : a ∈ A, b ∈ B}. If A and B are nonempty sets of positive real numbers then inf(A · B) = (inf A) · (inf B) and similarly for suprema, sup(A · B) = (sup A) · (sup B). Scalar product of a set The product of a real number r and a set B of real numbers is the set rB := {r · b : b ∈ B}. If r > 0 then inf(r · A) = r(inf A) and sup(r · A) = r(sup A), while if r < 0 then inf(r · A) = r(sup A) and sup(r · A) = r(inf A). In the case r = 0, one has, if A ≠ ∅, inf(0 · A) = 0 and sup(0 · A
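For finite sets, where inf and sup are simply min and max, these identities are easy to sanity-check; the concrete sets below are our own examples:

```python
from itertools import product

A = {1.5, 2.0, 4.0}
B = {0.5, 3.0}

sum_set = {a + b for a, b in product(A, B)}
prod_set = {a * b for a, b in product(A, B)}  # A and B are positive, so the product rule applies

# Minkowski sum: inf(A+B) = inf A + inf B, and likewise for sup
assert min(sum_set) == min(A) + min(B)
assert max(sum_set) == max(A) + max(B)

# Product of positive sets
assert min(prod_set) == min(A) * min(B)
assert max(prod_set) == max(A) * max(B)

# Scalar product with r < 0 swaps inf and sup
r = -2.0
rA = {r * a for a in A}
assert min(rA) == r * max(A)
assert max(rA) == r * min(A)
print("all identities hold on these examples")
```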
{"page_id": 39382, "title": "Infimum and supremum"}
In mathematics, Tucker's lemma is a combinatorial analog of the Borsuk–Ulam theorem, named after Albert W. Tucker. Let T be a triangulation of the closed n-dimensional ball Bₙ. Assume T is antipodally symmetric on the boundary sphere Sₙ₋₁. That means that the subset of simplices of T which are in Sₙ₋₁ provides a triangulation of Sₙ₋₁ where if σ is a simplex then so is −σ. Let L : V(T) → {+1, −1, +2, −2, ..., +n, −n} be a labeling of the vertices of T which is an odd function on Sₙ₋₁, i.e., L(−v) = −L(v) for every vertex v ∈ Sₙ₋₁. Then Tucker's lemma states that T contains a complementary edge: an edge (a 1-simplex) whose vertices are labelled by the same number but with opposite signs. == Proofs == The first proofs were non-constructive, by way of contradiction. Later, constructive proofs were found, which also supplied algorithms for finding the complementary edge. Basically, the algorithms are path-based: they start at a certain point or edge of the triangulation, then go from simplex to simplex according to prescribed rules, until it is no longer possible to proceed. It can be proved that the path must end in a simplex which contains a complementary edge. An easier proof of Tucker's lemma uses the more general Ky Fan lemma, which has a simple algorithmic proof. The following description illustrates the algorithm for n
{"page_id": 17633708, "title": "Tucker's lemma"}
be defined as the scientific study of ecosystems, systems ecology is more of a particular approach to the study of ecological systems and phenomena that interact with these systems. === Industrial ecology === Industrial ecology is the study of industrial processes, with the aim of shifting them from linear (open loop) systems, in which resource and capital investments move through the system to become waste, to closed loop systems in which wastes become inputs for new processes. == See also == == References == == Bibliography == == External links == Organisations Systems Ecology Department at the Stockholm University. Systems Ecology Department at the University of Amsterdam. Systems ecology Lab at SUNY-ESF. Systems Ecology program at the University of Florida Systems Ecology program at the University of Montana Terrestrial Systems Ecology of ETH Zürich.
{"page_id": 3253299, "title": "Systems ecology"}
the Groninger Bodem Beweging was founded in 2009 in response to the earthquake near Middelstum in 2006. According to a 2015 report by the Dutch Safety Board, until 2013, "the safety of the citizens of Groningen in relation to induced earthquakes had not influenced decision-making about the exploitation of the Groningen field." A tipping point was the Huizinge earthquake of 16 August 2012. With an estimated moment magnitude of 3.6, it was the heaviest earthquake measured above the Groningen gas field. After the earthquake, the Staatstoezicht op de Mijnen (SodM) concluded that the earthquakes in the area could become stronger in the future, between 4 and 5 on the Richter scale. SodM recommended limiting gas extraction quickly, and as much as possible. In January 2013, the Minister of Economic Affairs, Henk Kamp, decided not to limit gas extraction immediately, but requested fourteen investigations. The report also coincided with austerity measures, which made the government reluctant to reduce natural gas revenues. At the end of the year it was announced that about 10% more gas than in previous years, 54 billion cubic meters, had been extracted, according to the NAM because of the cold. === Phasing out gas production === In 2014, gas production from the Groningen gas field was limited due to earthquakes for the first time. In January 2014, Kamp announced that production would be limited to 42.5 billion m3 in 2014. At the beginning of 2015, the Dutch cabinet decided that a maximum of 39.4 billion m3 of gas could be produced from the gas field in that calendar year, which was further reduced to 30 billion m3 in June 2015. In November 2015, the Council of State lowered the limit to 27 billion m3. A month later, Kamp decided to maintain this maximum. In September 2016 it
{"page_id": 30295937, "title": "Groningen gas field"}
The Open Solar Outdoors Test Field (OSOTF) is a project organized under open-source principles, which is a fully grid-connected test system that continuously monitors the output of many solar photovoltaic modules and correlates their performance to a long list of highly accurate meteorological readings. == History == As the solar photovoltaic industry grows, there is an increased demand for high-quality research in solar systems design and optimization in realistic (and sometimes extreme) outdoor environments, such as in Canada. To answer this need, a partnership has formed the Open Solar Outdoors Test Field (OSOTF). The OSOTF was originally developed through a strong partnership between the Queen's Applied Sustainability Research Group, run by Joshua M. Pearce at Queen's University (now at Michigan Tech), and the Sustainable Energy Applied Research Centre (SEARC) at St. Lawrence College, headed by Adegboyega Babasola. This collaboration has grown rapidly to include multiple industry partners, and the OSOTF has been redesigned to provide critical data and research for the team. The OSOTF is a fully grid-connected test system which continuously monitors the output of over 100 photovoltaic modules and correlates their performance to a long list of highly accurate meteorological readings. The teamwork has resulted in one of the largest systems in the world for this detailed level of analysis, and can provide valuable information on the actual performance of photovoltaic modules in real-world conditions. Unlike many other projects, the OSOTF is organized under open source principles. All data and analysis, when completed, will be made freely available to the entire photovoltaic community and the general public. The first project for the OSOTF quantifies the losses due to snowfall of a solar photovoltaic system, generalizes these losses to any location with weather data, and recommends best practices for system design in snowy climates. This work was accomplished by creating
{"page_id": 36888097, "title": "The Open Solar Outdoors Test Field"}
was run on an IBM Blue Gene supercomputer by the University of Nevada's research team and IBM Almaden in 2007. Each second of simulated time took ten seconds of computer time. The researchers claimed to observe "biologically consistent" nerve impulses that flowed through the virtual cortex. However, the simulation lacked the structures seen in real mice brains, and they intended to improve the accuracy of the neuron and synapse models. Later the same year, IBM increased the number of neurons to 16 million, with 8,000 synapses per neuron; 5 seconds of this model's time was simulated in 265 s of real time. By 2009, the researchers were able to ramp up the numbers to 1.6 billion neurons and 9 trillion synapses, saturating the entire 144 TB of supercomputer RAM. In 2019, Idan Segev, one of the computational neuroscientists working on the Blue Brain Project, gave a talk titled "Brain in the computer: what did I learn from simulating the brain." In his talk, he mentioned that the whole cortex for the mouse brain was complete and virtual EEG experiments would begin soon. He also mentioned that the model had become too heavy on the supercomputers they were using at the time, and that they were consequently exploring methods in which every neuron could be represented as a neural network (see citation for details). In 2023, researchers from Duke University performed a particularly high-resolution scan of a mouse brain. ==== Blue Brain ==== Blue Brain is a project that was launched in May 2005 by IBM and the Swiss Federal Institute of Technology in Lausanne. The intention of the project was to create a computer simulation of a mammalian cortical column down to the molecular level. The project uses a supercomputer based on IBM's Blue Gene design to simulate the electrical behavior of
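A rough back-of-the-envelope check on those 2009 figures — the per-synapse memory estimate below is our own derivation (assuming decimal terabytes), not a figure reported by the researchers:

```python
# Memory implied by 9 trillion synapses filling 144 TB of RAM
neurons = 1.6e9
synapses = 9e12
ram_bytes = 144e12  # 144 TB, decimal

print(synapses / neurons)   # average synapses per neuron in the model
print(ram_bytes / synapses) # implied bytes of state per synapse
```

The arithmetic gives roughly 5,600 synapses per neuron and about 16 bytes per synapse, which conveys how compact the per-synapse state had to be at that scale.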
{"page_id": 21855574, "title": "Brain simulation"}
specified register bank. undefined

| MSR Address | Register Name | Description | Reset Value |
|---|---|---|---|
| 0402h + 4i | MCi_ADDR[5:0] | Reports the instruction memory-address or data memory-address responsible for the machine-check error for the specified register bank. | undefined |
| 0403h + 4i | MCi_MISC[5:0] | Reports miscellaneous information about the machine-check error for the specified register bank. | c00x_xxxx_xx00_0000 |
| C000_0408h | MC4_MISC1 | | c00x_xxxx_0000_0000 |
| C000_0409h | MC4_MISC2 | | |
| C000_040Ah | MC4_MISC3 | | |
| C000_0410h | MCA_INTR_CFG | MCA interrupt configuration. | 0000_0000_0000_0000h |
| C000_2000h + i*10h | MCA_CTL | Control for error reporting banks per implementation. | 0000_0000_0000_0000h |
| C000_2001h + i*10h | MCA_STATUS | Status registers for each error-reporting register bank, used to report machine-check error information for the specified register bank. | undefined |
| C000_2002h + i*10h | MCA_ADDR | Reports the instruction memory-address or data memory-address responsible for the machine-check error for the specified register bank. | undefined |
| C000_2003h + i*10h | MCA_MISC0 | Reports miscellaneous information about the machine-check error for the specified register bank. | c00x_xxxx_xx00_0000 |

[AMD Public Use] 723 MSR Cross-Reference AMD64 Technology 24593—Rev. 3.42—March 2024

# A.5 Software-Debug MSRs

Table A-5 lists the MSRs used in support of the software-debug architecture.

| MSR Address | Register Name | Description | Reset Value |
|---|---|---|---|
| C000_2004h + i*10h | MCA_CONFIG | Controls configuration information of each MCA bank. See the appropriate BIOS and Kernel Developer’s Guide or Processor Programming Reference Manual. | |
| C000_2005h + i*10h | MCA_IPID | Holds information to identify the MCA bank type. See the appropriate BIOS and Kernel Developer’s Guide or Processor Programming Reference Manual. | |
| C000_2006h + i*10h | MCA_SYND | Holds syndrome associated with the error. See the appropriate BIOS and Kernel Developer’s Guide or Processor Programming Reference Manual. | |
| C000_2008h + i*10h | MCA_DESTAT | Holds status information for deferred errors. See the appropriate BIOS and Kernel Developer’s Guide or Processor Programming Reference Manual. | |
| C000_2009h + i*10h | MCA_DEADDR | Provides the address associated with the deferred error. See the appropriate BIOS and Kernel Developer’s Guide or Processor Programming Reference Manual. | |
| (C000_200Ah:C000_200Dh) + i*10h | MCA_MISC[4:1] | Reports miscellaneous-error information. See the appropriate BIOS and Kernel Developer’s Guide or Processor Programming Reference Manual. | |

(C000_200Eh:
{"source": 76, "title": "from dpo"}
a fast computer. Cray continued his tradition of keeping computers simple in the CRAY-1. Commercial RISCs are built primarily on the work of three research projects: the Berkeley RISC processor, the IBM 801, and the Stanford MIPS processor. These architectures have attracted enormous industrial interest because of claims of a performance advantage of anywhere from two to five times over other computers using the same technology. Begun in 1975, the IBM project was the first to start but was the last to become public. The IBM computer was designed as a 24-bit ECL minicomputer, while the university projects were both MOS-based, 32-bit microprocessors. John Cocke is considered the father of the 801 design. He received both the Eckert–Mauchly and Turing awards in recognition of his contribution. Radin described the highlights of the 801 architecture. The 801 was an experimental project that was never designed to be a product. In fact, to keep down costs and complexity, the computer was built with only 24-bit registers. In 1980, Patterson and his colleagues at Berkeley began the project that was to give this architectural approach its name (see Patterson and Ditzel). They built two computers called RISC-I and RISC-II. Because the IBM project was not widely known or discussed, the role played by the Berkeley group in promoting the RISC approach was critical to acceptance of the technology. They also built one of the first instruction caches to support hybrid-format RISCs (see Patterson et al.). It supported 16-bit and 32-bit instructions in memory but 32 bits in the cache. The Berkeley group went on to build RISC computers targeted toward Smalltalk, described by Ungar et al., and LISP, described by Taylor et al. In 1981, Hennessy and his colleagues
{"source": 2300, "title": "from dpo"}
work described in this paper. E. M., D. K., and K. M. conceived of the basic idea for this work. K. M., D. K., S. Z., and J. Y. designed and carried out the experiments and analyzed the results. E. M. and D. K. supervised the research and the development of the manuscript. K. M. wrote the first draft of the manuscript; all authors subsequently took part in the revision process and approved the final copy of the manuscript. Marinna Madrid and Michael Mobius provided feedback on the manuscript throughout its development. Supplemental Material --------------------- Figures that show the distribution of metrics and additional information about the structure of the course from which the data used in this paper were collected. References (35) --------------- 1. C. C. Bonwell and J. A. Eison, Active learning: Creating excitement in the classroom, ASHE-ERIC Higher Educ. Rep. **808**, 19 (1991). 2. A. W. Chickering, Z. F. Gamson, and M. Sorcinelli, Research Findings on the Seven Principles, _Applying the Seven Principles for Good Practice in Undergraduate Education, New Directions in Teaching and Learning_ (Jossey-Bass/Wiley, San Francisco, 1987). 3. P. A. Rovai, Building sense of community at a distance, Int. Rev. Res. Open Dist. Learn. **3**, 1 (2002). 5. R. T. Berner, The benefits of bulletin board discussion in a literature of journalism course, The Technology Source Archives at the University of North Carolina; discussion_in_a_literature_of_journalism_course/. 6. M. Tallent-Runnels, J. Thomas, W. Lan, S. Cooper, T. Ahern, S. Shaw, and X. Liu, Teaching courses online: A review of the research, Rev. Educ. Res. **76**, 93 (2006). 7. A. L. Brown and J. C. Campione, _Psychological Theory and the Design of Innovative Learning Environments: On
{"source": 4922, "title": "from dpo"}
Levi's theorem: Theorem E.1. Let g be a Lie algebra with radical r. Then there is a subalgebra l of g such that g = r ⊕ l. PROOF. There are several simple reductions. First, we may assume there is no nonzero ideal of g that is properly contained in r. For if a were such an ideal, by induction on the dimension of g, g/a would have a subalgebra complementary to r/a, and this subalgebra has the form l/a, with l as required. In particular, we may assume r is abelian, since otherwise [r, r] is a proper ideal in r which is an ideal in g by Corollary C.23. We may also assume that [g, r] = r, for if [g, r] = 0 then the adjoint representation factors through g/r, and since g/r is semisimple, the submodule r ⊆ g has a complement, which is the required l. Now V = gl(g) is a g-module via the adjoint representation: for X ∈ g and φ ∈ V, X·φ = [ad(X), φ] = ad(X) ∘ φ − φ ∘ ad(X). In other words, for X, Y ∈ g and φ ∈ V, (X·φ)(Y) = [X, φ(Y)] − φ([X, Y]). (E.2) The trick is to consider subspaces C ⊇ B ⊇ A of V, with A = {ad(X) : X ∈ r}. These are easily checked to be g-submodules of V, included in each other as indicated. And C/B is a trivial g-module of rank 1, i.e. C/B ≅ ℂ, by taking φ in C to the scalar λ such that φ|r = λ·I. (Note that C/B ≠ 0 since one can find an endomorphism of the vector space g which is the identity on r and zero on a vector space complement to
{"source": 6137, "title": "from dpo"}
is one of the characteristic effects of cancer-causing mutations. Warburg articulated his hypothesis in a paper entitled The Prime Cause and Prevention of Cancer which he presented in lecture at the meeting of the Nobel-Laureates on June 30, 1966 at Lindau, Lake Constance, Germany. In this speech, Warburg presented additional evidence supporting his theory that the elevated anaerobiosis seen in cancer cells was a consequence of damaged or insufficient respiration. Put in his own words, "the prime cause of cancer is the replacement of the respiration of oxygen in normal body cells by a fermentation of sugar." The body often kills damaged cells by apoptosis, a mechanism of self-destruction that involves mitochondria, but this mechanism fails in cancer cells where the mitochondria are shut down. The reactivation of mitochondria in cancer cells restarts their apoptosis program. == See also == Carcinogen Carcinogenesis 2-Deoxy-D-glucose Pyruvic acid Cellular respiration Inverse Warburg effect == References == == Further reading == Warburg O (24 February 1956). "On the Origin of Cancer Cells". Science. 123 (3191): 309–14. Bibcode:1956Sci...123..309W. doi:10.1126/science.123.3191.309. PMID 13298683.
{"page_id": 7806713, "title": "Warburg hypothesis"}
Breadcrumbs are a flour or crumbled-bread coating applied to foods prior to immersing them in hot oils or fats; the term is popular in North America. Breadcrumbs consist of flour or crumbled bread of varying dryness, sometimes with seasonings added, used for breading or crumbing foods, topping casseroles, stuffing poultry, thickening stews, adding inexpensive bulk to soups, meatloaves and similar foods, and making a crisp and crunchy covering for fried foods, especially breaded cutlets like tonkatsu and schnitzel. The Japanese variety of breadcrumbs is called "panko". == Types == === Dry === Dry breadcrumbs are made from dry breads which have been baked or toasted to remove most remaining moisture, and may have a sandy or even powdery texture. Breadcrumbs are most easily produced by pulverizing slices of bread in a food processor, using a steel blade to make coarse crumbs, or a grating blade to make fine crumbs. A grater or similar tool will also do. === Fresh === The breads used to make soft or fresh breadcrumbs are not quite as dry, so the crumbs are larger and produce a softer coating, crust, or stuffing. The crumb of bread also refers to the texture of the soft, inner part of a bread loaf, as distinguished from the crust, or "skin". === Panko === Panko (パン粉) is a type of flaky breadcrumbs used in Japanese cuisine as a crunchy coating for fried foods, such as tonkatsu. Panko is made from bread baked by passing an electrical current through the dough, which yields a bread without a crust; the bread is then ground to create fine slivers of crumb. It has a crisper, airier texture than most types of breading found in Western cuisine and maintains its texture when baked or deep-fried, resulting in a lighter coating. Outside Japan, its use has
{"page_id": 1277062, "title": "Breadcrumbs"}
its natural binding locations across a genome using formaldehyde. Cells are then collected, broken open, and the chromatin sheared and solubilized by sonication. An antibody is then used to immunoprecipitate the protein of interest, along with the crosslinked DNA. DNA PCR adaptors are then ligated to the ends, which serve as a priming point for second-strand DNA synthesis after the exonuclease digestion. Lambda exonuclease then digests double DNA strands from the 5′ end until digestion is blocked at the border of the protein-DNA covalent interaction. Most contaminating DNA is degraded by the addition of a second, single-strand-specific exonuclease. After the cross-linking is reversed, the primers to the PCR adaptors are extended to form double-stranded DNA, and a second adaptor is ligated to the 5′ ends to demarcate the precise location where exonuclease digestion ceased. The library is then amplified by PCR, and the products are identified by high-throughput sequencing. This method allows for resolution of up to a single base pair for any protein binding site within any genome, which is a much higher resolution than either ChIP-chip or ChIP-seq. == Advantages == ChIP-exo has been shown to give up to single-base-pair resolution in identifying protein binding locations. This is in contrast to ChIP-seq, which can locate a protein's binding site only to within ±300 base pairs. Contamination by non-protein-bound DNA fragments can result in a high rate of false positives and negatives in ChIP experiments. The addition of exonucleases to the process not only improves the resolution of binding-site calling, but removes contaminating DNA from the solution before sequencing. Proteins that are inefficiently bound to a nucleotide fragment are more likely to be detected by ChIP-exo. This has allowed, for example, the recognition of more CTCF transcription factor binding sites than previously discovered. Due to
{"page_id": 34858148, "title": "ChIP-exo"}
(Cint). Consider all possible non-overlapping subsets of Cint. There are at most 2^{O(r·√n)} such subsets. For each such subset, recursively compute the MDS of Cleft and the MDS of Cright, and return the largest combined set. The run time of this algorithm satisfies the following recurrence relation: T(1) = 1, and T(n) = 2^{O(r·√n)} · T(2n/3) if n > 1. The solution to this recurrence is T(n) = 2^{O(r·√n)}. == Local search algorithms == === Pseudo-disks: a PTAS === A pseudo-disks-set is a set of objects in which the boundaries of every pair of objects intersect at most twice (note that this definition relates to a whole collection, and does not say anything about the shapes of the specific objects in the collection). A pseudo-disks-set has a bounded union complexity, i.e., the number of intersection points on the boundary of the union of all objects is linear in the number of objects. For example, a set of squares or circles of arbitrary sizes is a pseudo-disks-set. Let C be a pseudo-disks-set with n objects. A local search algorithm by Chan and Har-Peled finds a disjoint set of size at least (1 − O(1/√b)) · |MDS(C)| in time O(n^{b+3}), for every integer constant b ≥ 0: INITIALIZATION: Initialize an empty set, S. SEARCH: Loop over all the subsets of C
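The swap-based local search that the text begins to describe can be sketched in Python for axis-aligned squares (which form a pseudo-disks-set). The square representation, the brute-force swap enumeration, and the default b = 1 are illustrative assumptions for a small sketch, not part of Chan and Har-Peled's presentation, whose stated running time relies on a more careful implementation.

```python
import itertools

def overlaps(a, b):
    # axis-aligned squares given as (x, y, side); True if interiors intersect
    ax, ay, asz = a
    bx, by, bsz = b
    return ax < bx + bsz and bx < ax + asz and ay < by + bsz and by < ay + asz

def is_disjoint(objs):
    return all(not overlaps(p, q) for p, q in itertools.combinations(objs, 2))

def local_search_mds(C, b=1):
    # grow a disjoint set S; repeat while swapping at most b objects out of S
    # for at most b + 1 objects of C \ S yields a larger disjoint set
    S = []
    improved = True
    while improved:
        improved = False
        rest = [o for o in C if o not in S]
        for k_out in range(b + 1):
            for k_in in range(k_out + 1, b + 2):
                for X in itertools.combinations(S, k_out):
                    for Y in itertools.combinations(rest, k_in):
                        cand = [o for o in S if o not in X] + list(Y)
                        if is_disjoint(cand):
                            S, improved = cand, True
                            break
                    if improved:
                        break
                if improved:
                    break
            if improved:
                break
    return S
```

On four unit squares of which only the first two overlap, the search settles on the three mutually disjoint squares, matching the optimum for that instance.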
{"page_id": 41701177, "title": "Maximum disjoint set"}
Hallin's spheres is a theory of news reporting and its rhetorical framing posited by journalism historian Daniel C. Hallin in his 1986 book The Uncensored War to explain the news coverage of the Vietnam War. Hallin divides the world of political discourse into three concentric spheres: consensus, legitimate controversy, and deviance. In the sphere of consensus, journalists assume everyone agrees. The sphere of legitimate controversy includes the standard political debates, and journalists are expected to remain neutral. The sphere of deviance falls outside the bounds of legitimate debate, and journalists can ignore it. These boundaries shift as public opinion shifts. Hallin's spheres, which deal with the media, are similar to the Overton window, which deals with public opinion generally and posits a sliding scale of public opinion on any given issue ranging from conventional wisdom to unacceptable. Hallin used the concept of framing to describe the presentation and reception of issues in public. For example, framing the use of drugs as criminal activity can encourage the public to consider that behavior anti-social. Hallin's work was later referred to in the controversial formulation of the concept of an opinion corridor, in which the range of acceptable public opinion narrows, and opinion outside that corridor moves from legitimate controversy into deviance. == Description == === Sphere of consensus === This sphere contains those topics on which there is widespread agreement, or at least the perception thereof. Within the sphere of consensus, 'journalists feel free to invoke a generalized "we" and to take for granted shared values and shared assumptions'. Examples include such things as motherhood and apple pie. For topics in this sphere, journalists feel free to act as advocates or cheerleaders; they do not have to remain neutral, present opposing viewpoints, or play the disinterested observer. === Sphere of legitimate controversy === For
{"page_id": 31244175, "title": "Hallin's spheres"}
160 feet (49 m) of alluvial deposits, varying from coarse, pervious sands and gravels to impermeable clays. Beneath these deposits lay a thick (approximately 1,000 feet (300 m)) deposit of Bear Paw shale. This shale is classified as a firm shale and contains thin (<1 to 6 in (2.5 to 15.2 cm)) layers of bentonite. The topmost layer of soft clay was removed from the alluvium in order to found the dam on the stable sandy deposits beneath, at an elevation of approximately 2,050 feet (620 m). The remaining deposits consisted of the alluvial materials mentioned above. These deposits had many interconnecting layers of coarse sands and gravels, necessitating the installation of a steel sheet pile wall down to the firm shale, from the left to the right abutment. As designed, the dam extends to an elevation of 2,275 feet (693 m), for a total height of 225 feet (69 m) from the cleared river bed and has a length from the left to the right abutment of approximately 10,500 feet (3,200 m). The upstream face was designed with an average slope of one vertical on four horizontal and included three horizontal shelves built into the slope. A flatter (1 on 7.5) berm was to be placed between stations 30+00 and 75+00 (approximately the center half of the length of the dam). Since the construction method of hydraulic fill was chosen, four electric dredges were built. Because of the distance of the site from the nearest shoreline, a shipyard was started on the site, affectionately dubbed "The Fort Peck Navy" and "The Biggest Shipyard in Montana" by the workers. These dredges would pump material from nearby borrow pits to the dam site where it was discharged by pipes along the outside edges of the fill. The coarser material settled out
{"page_id": 1477664, "title": "Fort Peck Dam"}
The conversational model of psychotherapy was devised by the English psychiatrist Robert Hobson, and developed by the Australian psychiatrist Russell Meares. Hobson listened to recordings of his own psychotherapeutic practice with more disturbed clients, and became aware of the ways in which a patient's self—their unique sense of personal being—can come alive and develop, or be destroyed, in the flux of the conversation in the consulting room. The conversational model views the aim of therapy as allowing the growth of the patient's self through encouraging a form of conversational relating called 'aloneness-togetherness'. This phrase is reminiscent of Winnicott's idea of the importance of being able to be 'alone in the presence of another'. The client comes to eventually feel recognised, accepted and understood as who they are; their sense of personal being, or self, is fostered; and they can start to drop the destructive defenses which disrupt their sense of personal being. The development of the self implies a capacity to embody and span the dialectic of 'aloneness-togetherness'—rather than being disposed toward either schizoid isolation (aloneness) or merging identification with the other (togetherness). Although the therapy is described as psychodynamic, and is accordingly concerned to identify activity and personal meaning in the midst of apparent passivity, it relies more on careful empathic listening and the development of a common 'feeling language' than it does on psychoanalytic interpretation. == Psychodynamic Interpersonal Therapy (PIT) == In its manualised form ('PIT'), the conversational model is presented as having seven interconnected components. 
These are: Developing an exploratory rationale: Together with the patient generate an understanding which links emotional or somatic symptoms with interpersonal difficulties Shared understanding: In developing a shared understanding, the therapist uses statements rather than questions, uses mutual ('I' and 'We') language, deploys conditional rather than absolute statements of understanding, allows metaphorical
{"page_id": 10607441, "title": "Conversational model"}
be telecast in black and white through the end of the 1964–65 season. ABC delayed its first color programs until 1962, but these were initially only broadcasts of the cartoon shows The Flintstones, The Jetsons and Beany and Cecil. The DuMont network, although it did have a television-manufacturing parent company, was in financial decline by 1954 and was dissolved two years later. The only known original color programming broadcast over the DuMont network was a high school football Thanksgiving game from New Jersey in 1957, a year after the network had ceased regular operations. The relatively small amount of network color programming, combined with the high cost of color television sets, meant that as late as 1964 only 3.1 percent of television households in the US had a color set. By the mid-1960s, however, color programming had turned into a ratings war. A 1965 American Research Bureau (ARB) study that identified an emerging trend in color television set sales convinced NBC that a full shift to color would gain a ratings advantage over its two competitors. As a result, NBC provided the catalyst for rapid color expansion by announcing that its prime time schedule for fall 1965 would be almost entirely in color. ABC and CBS followed suit and over half of their combined prime-time programming also moved to color that season, but they were still reluctant to telecast all their programming in color due to production costs. All three broadcast networks were airing full color prime time schedules by the 1966–67 broadcast season, and ABC aired its last new black-and-white daytime programming in December 1967. Public broadcasting networks like NET, however, did not use color for a majority of their programming until 1968. The number of color television sets sold in the US did not exceed black-and-white
{"page_id": 162843, "title": "Color television"}
A single-board microcontroller differs from a single-board computer in that it lacks the general-purpose user interface and mass storage interfaces that a more general-purpose computer would have. Compared to a microprocessor development board, a microcontroller board would emphasize digital and analog control interconnections to some controlled system, whereas a development board might have only a few or no discrete or analog input/output devices. The development board exists to showcase or train on some particular processor family and, therefore, internal implementation is more important than external function. == Internal bus == The bus of the early single-board devices, such as the Z80 and 6502, was universally a Von Neumann architecture. Program and data memory were accessed via the same shared bus, even though they were stored in fundamentally different types of memory: ROM for programs and RAM for data. This bus architecture was needed to economise the number of pins needed from the limited 40 available for the processor's ubiquitous dual-in-line IC package. It was common to offer access to the internal bus through an expansion connector, or at least provide space for a connector to be soldered on. This was a low-cost option and offered the potential for expansion, even if it was rarely used. Typical expansions would be I/O devices or additional memory. It was unusual to add peripheral devices such as tape or disk storage, or a CRT display. Later, when single-chip microcontrollers, such as the 8048, became available, the bus no longer needed to be exposed outside the package, as all necessary memory could be provided within the chip package. This generation of processors used a Harvard architecture with separate program and data buses, both internal to the chip. Many of these processors used a modified Harvard architecture, where some write access was possible to the
{"page_id": 23794004, "title": "Single-board microcontroller"}
Kimmen Sjölander (née Warnow) is professor emerita at the University of California, Berkeley in the Department of Bioengineering. She is well known for her work on protein sequence analysis. == Biography == Sjölander did both her undergraduate and graduate work at the University of California, Santa Cruz in the Department of Computer Science, earning a bachelor's degree in 1993 and a PhD in 1997 under the supervision of David Haussler. She was the chief scientist at the Molecular Applications Group (a company co-founded by Michael Levitt) from 1997-1999 and then principal scientist in Protein Informatics at Celera Genomics from 1999-2001, where she was a member of the team (along with J. Craig Venter and Gene Myers) that assembled and annotated the human genome. She joined the faculty at the University of California, Berkeley in the Department of Bioengineering in 2001 as an assistant professor. She was tenured in 2006, and promoted to full professor in 2012. == Awards == Sjölander received the NSF CAREER Award and the Presidential Early Career Award for Scientists and Engineers in 2003. == Research == Sjölander is best known for her work in phylogenomic methods for protein sequence analysis, including machine learning methods for functional site prediction and ortholog identification, and hidden Markov model (HMM) methods for protein structure prediction, functional subfamily and ortholog classification, remote homology detection, multiple sequence alignment, and phylogenetic tree estimation. Her algorithms were used in the functional annotation of the human genome at Celera Genomics, in the PhyloFacts bioinformatics databases and portals, and contributed to the ModBase database. == Personal == Sjölander's twin sister Tandy Warnow is a professor of computer science at the University of Illinois Urbana–Champaign. == Selected publications == Venter, J Craig; et al.
(2001), "The sequence of the human genome", Science, 291 (5507): 1304–1351, Bibcode:2001Sci...291.1304V, doi:10.1126/science.1058040,
{"page_id": 54241212, "title": "Kimmen Sjölander"}
close to the minimum, and three posted the maximum average precision score. Figure 1 shows the differences between the TOFIC2 results and the medians for average precision. Preliminary analysis indicates little correlation between performance and searcher expertise or job function. In fact, of the two queries that were complete failures (for topics 131 and 141), one was done by an expert and one by a novice; and of the three queries that did best in category A (for topics 105, 111, and 145), two were done by experts and one was done by a novice. We also did not find any interesting correlation between performance and the "hardness" of topics 101-150. [Footnote 4: As of this writing, we have not performed a detailed statistical examination of these correlations, so these observations should be treated with some caution.] In her RIAO'94 paper, Harman defines hardness in terms of the relative recall, which is a measure designed to show how systems perform at the early stage of retrieval, and while this appears to be a reasonable predictor of the median average precision scores for the TREC-3 routing experiment, we found it to be less effective in predicting our own average precision scores. The overall results for the routing experiment are somewhat surprising, since we had predicted that we would achieve significant performance gains vis-a-vis the TREC-2 results. Our tentative explanation is that while our interactive approach can be made to work very well indeed (vide the performance on topics 111 and 145), human searchers are not, in general, very effective at exploiting large amounts of training data compared with the statistical techniques used by the majority of the automatic methods participating in TREC-3. 4.2 The Adhoc Experiment All 50
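Average precision, the headline measure throughout these comparisons, can be computed from a ranked result list in a few lines; the document identifiers and relevance judgments below are invented for illustration and are not TREC data.

```python
def average_precision(ranked, relevant):
    # mean of the precision values at the rank of each relevant document
    # retrieved; unretrieved relevant documents contribute zero
    hits, precision_sum = 0, 0.0
    for rank, doc in enumerate(ranked, start=1):
        if doc in relevant:
            hits += 1
            precision_sum += hits / rank
    return precision_sum / len(relevant) if relevant else 0.0
```

For example, retrieving the two relevant documents at ranks 1 and 3 of a four-document list scores (1/1 + 2/3) / 2 ≈ 0.83.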
{"source": 1021, "title": "from dpo"}
student’s IEP. If the Committee determines that a student requires a special education service to address his/her language needs as they relate to the IEP, the applicable box in this section of the IEP would be checked "Yes." The Committee must ensure that a device or service, including an intervention, accommodation or other program modification needed for the student to receive a free appropriate public education is indicated in the IEP under the applicable section of the IEP. Not all students who are limited English proficient may need a special education service to address his/her language needs as they relate to the IEP. In this case, the "No" box would be checked. 1. **Why isn’t “Orientation and Mobility” included in the Student Needs Related to Special Factors section of the IEP for students who are blind and visually impaired?** **(Added 4/11)** The State’s IEP form includes in this section only those special factors a Committee must consider that are required by federal and State regulations and that are in addition to the factors that must be considered for all students (the results of the initial or most recent evaluation; the student’s strengths; the concerns of the parents for enhancing the education of their child; the academic, developmental and functional needs of the student). 1. **If “yes” is checked on the Consideration of Special Factors section for “student needs strategies, including positive behavioral interventions,” and “no” that they do not need a behavioral intervention plan (BIP), is there a way to document why a BIP is not needed?** **(Added 4/11)** The IEP does not need to specify why a BIP is not recommended. As applicable, consideration of a student's need for a BIP should be provided to a parent in prior written notice. However, there is a text box on the State’s IEP
{"source": 2814, "title": "from dpo"}
splashdown spot. The computers also sounded the alarm at the first sign of trouble; any deviation from the projected flight path, evidence of malfunction on board the capsule, or abnormal vital signs from the astronaut, which were also being monitored and transmitted to doctors on the ground, would send Mission Control into troubleshooting mode. The launch date for Project Mercury’s first manned mission slipped into 1961, a year that announced itself as unpredictable from the start: on January 3, the United States cut diplomatic relations with Cuba, another step down the road in the Cold War with the Soviet Union. President Dwight Eisenhower, in his farewell speech in January 1961, railed against the United States’ growing military-industrial complex. On March 6, 1961, President John F. Kennedy, newly inaugurated, announced Executive Order 10925, ordering the federal government and its contractors to take “affirmative action” to ensure equal opportunity for all of their employees and applicants, regardless of race, creed, color, or national origin. Through it all, the Space Task Group, the Langley Research Group, the other NASA centers, and thousands of NASA contractors pressed forward on their aerodynamic, structural, materials, and component tests, closing in on a target launch date in May. “We could have beaten them, we should have beaten them,” Project Mercury flight director Chris Kraft recalled decades later. In the midst of America’s high hopes for redemption in the heavens, the Soviets struck again. On April 12, 1961, Russian cosmonaut Yuri Gagarin became in one fell swoop the first human in space and the first human to orbit Earth. Unlike the disorientation, anxiety, and fear that Sputnik provoked, the agency absorbed the blow. It was painful, certainly, and embarrassing as well, but they turned the welter of emotion into renewed intensity for the mission, employing all of their
{"source": 5205, "title": "from dpo"}
and pioneering technology at companies such as Telefonica, one of the largest telecom companies in the world. Later, he would serve as CEO of New Business and Innovation, which led to an interest in spearheading blockchain adoption through the tokenization of traditional markets. Carlos holds a Ph.D. in computer science, showcasing his deep technical understanding and commitment to advancing technological frontiers. Multilingual with experience across global markets, he speaks English, Spanish, and Japanese fluently. Based in Miami, Carlos continues to be a driving force at the intersection of technology, finance, and innovation, consistently pushing the boundaries of what is possible in the digital age. **Billy Miller, Head of Transfer Agent, Securitize, LLC**: Billy is an accomplished executive in the Transfer Agent industry. Billy’s success as the COO of Pacific Stock Transfer led to a strategic acquisition by Securitize in early 2022 to bridge the gap between TradFi and digital processes. At Securitize, his experience was crucial in the institutional-level design, launch, and ongoing operations of the BUIDL fund as the transfer agent for BlackRock. Billy also serves as a Board member of the Securities Transfer Association, where he advocates for the modernization of the transfer agent industry through the adoption of digital processes leveraging blockchain technology. **Joe Nikolson, CEO & CCO, Securitize Markets**: Joe is ex-Anchorage & ex-Coinbase, and has over 25 years of experience in various senior roles across trad-fi & digital asset capital markets, with a particular emphasis on market structure evolution and digital asset securities. **The Fund’s Board of Directors** has overall responsibility for the management, operation, and administration of the Fund. **Noelle L’Heureux** (US) Managing Director, is the head of Private Markets Solutions within BlackRock’s Global Product Solutions (“GPS”). As head of Private Markets Solutions, Ms.
L’Heureux is responsible for innovation, strategy design and product
{"source": 6413, "title": "from dpo"}
defects and misalignment of the interface. The voids present in the interface are filled with air. Heat transfer is therefore due to conduction across the actual contact area and to conduction (or natural convection) and radiation across the gaps. If the contact area is small, as it is for rough surfaces, the major contribution to the resistance is made by the gaps. To decrease the thermal contact resistance, the surface roughness can be decreased while the interface pressure is increased. However, these improving methods are not always practical or possible for electronic equipment. Thermal interface materials (TIM) are a common way to overcome these limitations. Properly applied thermal interface materials displace the air that is present in the gaps between the two objects with a material that has a much higher thermal conductivity. Air has a thermal conductivity of 0.022 W/(m·K) while TIMs have conductivities of 0.3 W/(m·K) and higher. When selecting a TIM, care must be taken with the values supplied by the manufacturer. Most manufacturers give a value for the thermal conductivity of a material. However, the thermal conductivity does not take into account the interface resistances. Therefore, if a TIM has a high thermal conductivity, it does not necessarily mean that the interface resistance will be low. Selection of a TIM is based on three parameters: the interface gap which the TIM must fill, the contact pressure, and the electrical resistivity of the TIM. The contact pressure is the pressure applied to the interface between the two materials. The selection does not include the cost of the material. Electrical resistivity may be important depending upon electrical design details. === Light-emitting diode lamps === Light-emitting diode (LED) performance and lifetime are strong functions of their temperature. Effective cooling is therefore essential. A case study of an LED-based downlighter
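The effect of displacing air with a TIM can be illustrated with the one-dimensional conduction relation R = t / (k·A); the gap thickness and contact area below are assumed values chosen for illustration, and the conductivities are the air and entry-level TIM figures quoted above.

```python
def interface_resistance(thickness_m, conductivity_w_per_m_k, area_m2):
    # one-dimensional conduction across a uniform gap: R = t / (k * A), in K/W
    return thickness_m / (conductivity_w_per_m_k * area_m2)

gap = 50e-6   # assumed 50 micrometre effective gap
area = 1e-4   # assumed 1 cm^2 contact patch
r_air = interface_resistance(gap, 0.022, area)  # air-filled gap
r_tim = interface_resistance(gap, 0.3, area)    # entry-level TIM
```

Under this simple model the TIM cuts the gap resistance by exactly the ratio of the conductivities, roughly a factor of 14, which is why displacing the trapped air matters more than the TIM's absolute conductivity figure.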
{"page_id": 344123, "title": "Heat sink"}
Phosphopyruvate dehydrogenase phosphatase may refer to: (pyruvate dehydrogenase (acetyl-transferring))-phosphatase, an enzyme Phosphoprotein phosphatase, an enzyme
{"page_id": 39034937, "title": "Phosphopyruvate dehydrogenase phosphatase"}
finite set are in one-to-one correspondence with preorders on the set, and T0 topologies are in one-to-one correspondence with partial orders. Therefore, the number of topologies on a finite set is equal to the number of preorders and the number of T0 topologies is equal to the number of partial orders. The table below lists the number of distinct (T0) topologies on a set with n elements. It also lists the number of inequivalent (i.e. nonhomeomorphic) topologies. Let T(n) denote the number of distinct topologies on a set with n points. There is no known simple formula to compute T(n) for arbitrary n. The Online Encyclopedia of Integer Sequences presently lists T(n) for n ≤ 18. The number of distinct T0 topologies on a set with n points, denoted T0(n), is related to T(n) by the formula T(n) = Σ_{k=0}^{n} S(n,k) T0(k), where S(n,k) denotes the Stirling number of the second kind. == See also == Finite geometry Finite metric space Topological combinatorics == References == == External links == May, J.P. (2003). "Notes and reading materials on finite topological spaces" (PDF). Notes for REU.
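Since T(n) grows rapidly, brute force only reaches tiny n, but it suffices to check the smallest values of the sequence. The exhaustive enumeration below, with subsets encoded as bitmasks, is an illustrative sketch and not how the tabulated values for larger n were obtained.

```python
from itertools import combinations

def all_topologies(n):
    # count topologies on {0, ..., n-1}; each subset is an n-bit mask
    full = (1 << n) - 1
    others = [s for s in range(full + 1) if s not in (0, full)]
    count = 0
    for r in range(len(others) + 1):
        for extra in combinations(others, r):
            T = set(extra) | {0, full}
            # on a finite set, closure under pairwise union and intersection
            # is equivalent to the full topology axioms
            if all((a | b) in T and (a & b) in T for a in T for b in T):
                count += 1
    return count
```

The counts for n = 0, 1, 2, 3 come out as 1, 1, 4, 29, matching OEIS sequence A000798.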
{"page_id": 14682596, "title": "Finite topological space"}
Thushara Pillai is an Indian astrophysicist and astronomer with a senior research scientist position at Boston University's Institute for Astrophysical Research and MIT Haystack Observatory. Her research interests have included molecular clouds, high-mass star formation, magnetic fields, astrochemistry, and the Galactic Center. She is known for her work using observations of magnetized interstellar clouds to understand star formation, and Pillai is the first astronomer to capture images of magnetic fields reorienting near areas of star formation. == Early life and education == Pillai was born 20 June 1980 in Kerala, India to parents P. Gopalakrishna Pillai and K. S. Shyamala Kumar. She attended KV Pattom, a government day school where her mother was a teacher, graduating with the Class XII batch of 1997. Pillai's parents and teachers encouraged her from an early age to pursue physics and higher education. Pillai attended Government Women's College in Thiruvananthapuram to receive her BSc in physics. Then, she pursued her Masters in Physics at IIT Madras, where one summer, she was given the opportunity to participate in an astronomy project at the National Center for Radio Astronomy in Pune, pushing her interest toward astronomy and astrophysics. She completed her PhD in astronomy at the Max Planck Institute for Radio Astronomy in Germany. == Career == Pillai holds the position of Senior Research Scientist at Boston University's Institute for Astrophysical Research. She also held a guest researcher position at her alma mater, the Max Planck Institute for Radio Astronomy, where she worked in the Department of Millimeter and Submillimeter Astronomy. There, her research focused on early phase chemistry, infrared dark clouds, high mass star formation, and star formation and cloud evolution in the galactic center. == Research == Pillai is most known for her paper published in Nature Astronomy,
{"page_id": 69497612, "title": "Thushara Pillai"}
HD 70060 is a class A8V (white main-sequence) star in the constellation Puppis. Its apparent magnitude is 4.45 and it is approximately 93.4 light years away based on parallax. == References ==
{"page_id": 38332540, "title": "HD 70060"}
on April 26. A total solar eclipse on May 11. A total lunar eclipse on October 21. An annular solar eclipse on November 4. === Metonic === Preceded by: Solar eclipse of January 16, 2094 Followed by: Solar eclipse of August 24, 2101 === Tzolkinex === Preceded by: Solar eclipse of September 23, 2090 Followed by: Solar eclipse of December 17, 2104 === Half-Saros === Preceded by: Lunar eclipse of October 30, 2088 Followed by: Lunar eclipse of November 11, 2106 === Tritos === Preceded by: Solar eclipse of December 6, 2086 Followed by: Solar eclipse of October 4, 2108 === Solar Saros 154 === Preceded by: Solar eclipse of October 24, 2079 Followed by: Solar eclipse of November 16, 2115 === Inex === Preceded by: Solar eclipse of November 24, 2068 Followed by: Solar eclipse of October 16, 2126 === Triad === Preceded by: Solar eclipse of January 4, 2011 Followed by: Solar eclipse of September 4, 2184 === Solar eclipses of 2094–2098 === This eclipse is a member of a semester series. An eclipse in a semester series of solar eclipses repeats approximately every 177 days and 4 hours (a semester) at alternating nodes of the Moon's orbit. The solar eclipses on January 16, 2094 (total) and July 12, 2094 (partial) occur in the previous lunar year eclipse set, and the partial solar eclipses on April 1, 2098 and September 25, 2098 occur in the next lunar year eclipse set. === Saros 154 === This eclipse is a part of Saros series 154, repeating every 18 years, 11 days, and containing 71 events. The series started with a partial solar eclipse on July 19, 1917. It contains annular eclipses from October 3, 2043 through March 27, 2332; hybrid eclipses from April 7, 2350 through April 29, 2386; and
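The saros interval of "18 years, 11 days" quoted for Saros 154 is about 6,585.32 days; adding it to the previous member listed above lands within a day of this eclipse. The fixed 6,585.32-day step and midnight timestamps are simplifying assumptions, since the listing gives dates but not times of day, so agreement to within a day is the most this sketch can check.

```python
from datetime import date, datetime, timedelta

SAROS = timedelta(days=6585.32)  # approx. 18 years, 11 days, 8 hours

# dates taken from the Saros 154 listing; times of day are unknown, so
# predictions are only expected to land within a day of the true date
prev_member = datetime(2079, 10, 24)
this_eclipse = date(2097, 11, 4)

predicted = (prev_member + SAROS).date()
error_days = abs((predicted - this_eclipse).days)
```

Stepping forward from this eclipse by the same interval similarly lands on the listed successor of November 16, 2115.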
{"page_id": 25522861, "title": "Solar eclipse of November 4, 2097"}
the foundation. Careful attention to detail is required where the building interfaces with the ground, especially at entrances, stairways and ramps, to accommodate sufficient relative motion of those structural elements. === Supplementary dampers === Supplementary dampers absorb the energy of motion and convert it to heat, thus damping resonant effects in structures that are rigidly attached to the ground. In addition to adding energy dissipation capacity to the structure, supplementary damping can reduce the displacement and acceleration demand within the structures. In some cases, the threat of damage does not come from the initial shock itself, but rather from the periodic resonant motion of the structure that repeated ground motion induces. In the practical sense, supplementary dampers act similarly to the shock absorbers used in automotive suspensions. === Tuned mass dampers === Tuned mass dampers (TMD) employ movable weights on some sort of springs. These are typically employed to reduce wind sway in very tall, light buildings. Similar designs may be employed to impart earthquake resistance in eight- to ten-story buildings that are prone to destructive earthquake-induced resonances. === Slosh tank === A slosh tank is a large container of low-viscosity fluid (usually water) that may be placed at locations in a structure where lateral swaying motions are significant, such as the roof, and tuned to counter the local resonant dynamic motion. During a seismic (or wind) event the fluid in the tank will slosh back and forth, with the fluid motion usually directed and controlled by internal baffles – partitions that prevent the tank itself becoming resonant with the structure; see Slosh dynamics. The net dynamic response of the overall structure is reduced due to both the counteracting movement of mass, as well as the energy dissipation or vibration damping which occurs when the fluid's kinetic energy is
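A tuned mass damper works by matching the weight-and-spring assembly's natural frequency f = (1/2π)·√(k/m) to the structure's resonant frequency. The helper below simply inverts that relation for the required spring stiffness; the damper mass and sway frequency in the usage line are invented for illustration and do not describe any particular building.

```python
import math

def tmd_spring_stiffness(damper_mass_kg, target_freq_hz):
    # invert f = (1 / (2*pi)) * sqrt(k / m) for the spring stiffness k, in N/m
    return damper_mass_kg * (2.0 * math.pi * target_freq_hz) ** 2

# e.g. a 300-tonne damper tuned to a hypothetical 0.16 Hz building sway
k = tmd_spring_stiffness(300_000.0, 0.16)
```

Re-substituting k into f = (1/2π)·√(k/m) recovers the 0.16 Hz target, which is the tuning condition the device's name refers to.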
{"page_id": 923301, "title": "Seismic retrofit"}
A Language for Process Specification (ALPS) is a model and data exchange language developed by the National Institute of Standards and Technology in the early 1990s to capture and communicate process plans developed in the discrete and process manufacturing industries. == References ==
{"page_id": 27438387, "title": "A Language for Process Specification"}
HD 164604 b is an extrasolar planet discovered in January 2010 in association with the Magellan Planet Search Program. It has a minimum mass of 2.7 times the mass of Jupiter and an orbital period of 606.4 days. Its star is classified as a K2 V dwarf and is roughly 124 light-years away from Earth. HD 164604 b is named Caleuche. The name was selected in the NameExoWorlds campaign by Chile, during the 100th anniversary of the IAU. Caleuche is a large ghost ship from southern Chilean mythology which sails the seas around the island of Chiloé at night. An astrometric measurement of the planet's inclination and true mass was published in 2022 as part of Gaia DR3. == See also == HD 129445 b HD 152079 b HD 175167 b HD 86226 b == References ==
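Given the 606.4-day period, Kepler's third law in solar units (a³ = M★·P², with a in AU, P in years, and M★ in solar masses) gives a rough orbital distance. The 0.77 M☉ stellar mass below is an assumption, a typical value for a K2 V dwarf, and is not stated in the text.

```python
def semi_major_axis_au(period_days, stellar_mass_msun):
    # Kepler's third law in solar units: a^3 = M * P^2 (a in AU, P in years)
    period_years = period_days / 365.25
    return (stellar_mass_msun * period_years ** 2) ** (1.0 / 3.0)

a = semi_major_axis_au(606.4, 0.77)  # 0.77 Msun is an assumed K2 V mass
```

Under that assumption the orbit comes out at roughly 1.3 AU, a little wider than Earth's.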
{"page_id": 26165858, "title": "HD 164604 b"}
first five digits of π, 3.1415. The form of a cadae is based on pi on two levels. There are five stanzas, with 3, 1, 4, 1, and 5 lines each, respectively, for a total of fourteen lines in the poem. Each line of the poem also contains an appropriate number of syllables. The first line has three syllables, the second has one, the third has four, and so on, following the sequence of pi as it extends infinitely. Rachel Hommel wrote an untitled "Cadaeic Cadae", which uses the cadaeic form as explained above, and adds a level of complexity to it wherein the number of letters in each word represents a digit of pi. For his book, "The Burning Door," Tony Leuzzi wrote a series of 33 untitled poems in cadaeic form. === Cadaeic Cadenza === "Cadaeic Cadenza" is a 1996 short story by Mike Keith. It is an example of cadae and pilish; a cadenza is a solo passage in music. In addition to the main restriction, the author attempts to mimic portions, or entire works, of different types and pieces of literature ("The Raven", "Jabberwocky", the lyrics of Yes, "The Love Song of J. Alfred Prufrock", Rubaiyat, Hamlet, and Carl Sandburg's Grass) in story, structure, and rhyme. Some sections of the poem use words of more than ten letters to stand for two consecutive digits: an 11-letter word, for example, represents two consecutive "1"s in pi. The first part of Cadaeic Cadenza is slightly changed from an earlier version, "Near a Raven", which was a retelling of Edgar Allan Poe's "The Raven". The text of the poem begins: Poe, E. Near a Raven Midnights so dreary, tired and weary, Silently pondering volumes extolling all by-now obsolete lore. During my rather long nap - the weirdest tap! An ominous vibrating
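The letter-count constraint in "Near a Raven" can be checked mechanically against the digits of π. The sketch below covers only the first fifteen words of the quoted opening, deliberately stopping before the hyphenated "by-now" and the longer-word convention described above, which need extra handling.

```python
import re

PI_DIGITS = [3, 1, 4, 1, 5, 9, 2, 6, 5, 3, 5, 8, 9, 7, 9]  # first 15 digits

def letter_counts(text):
    # split on non-letters and count the letters in each word
    return [len(w) for w in re.findall(r"[A-Za-z]+", text)]

opening = ("Poe, E. Near a Raven Midnights so dreary, tired and weary, "
           "Silently pondering volumes extolling")
counts = letter_counts(opening)
```

"Poe" has three letters, "E" one, "Near" four, and so on: the counts reproduce 3.14159265358979 word by word.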
{"page_id": 26991991, "title": "Pilish"}
for a time adjustment. 4.3. Contract documentation supporting the requested time adjustment. 4.4. TIA. The TIA must demonstrate entitlement to a time adjustment. 5. Identification and copies of your documents and copies of communications supporting the potential claim, including certified payrolls, bills, canceled checks, job cost reports, payment records, and rental agreements 6. Relevant information, references, and arguments that support the potential claim The Department does not consider a Full and Final Potential Claim Record form that does not have the same nature, circumstances, and basis of claim as those specified on the Initial Potential Claim Record form and Supplemental Potential Claim Record form. The Engineer evaluates the information presented in the Full and Final Potential Claim Record form and responds within 30 days of its receipt unless the Full and Final Potential Claim Record form is submitted after Contract acceptance, in which case, a response may not be provided. The Engineer's receipt of the Full and Final Potential Claim Record form must be evidenced by postal return receipt or the Engineer's written receipt if delivered by hand. 5-1.43E Alternative Dispute Resolution 5-1.43E(1) General 5-1.43E(1)(a) General Section 5-1.43E applies to a contract with 100 or more original working days. The ADR process must be used for the timely resolution of disputes that arise out of the work. You must comply with section 5-1.43E to pursue a claim, file for arbitration, or file for litigation. The ADR process is not a substitute for submitting an RFI or a potential claim record. Do not use the ADR process for disputes between you and subcontractors or suppliers that have no grounds for a legal action against the Department. If you fail to comply with section 5-1.43 for a potential claim on behalf of a subcontractor or supplier, you release the Department of the
{"source": 1498, "title": "from dpo"}
for the reduction of the risk of drug-induced QT interval prolongation in hospitalized patients](
**Mousoulis, Charilaos** (2012) Integration and applications of fluorocarbon phase change liquids (FPCL) with MEMS and microfluidics
Interactive effects of elevated CO2 and salinity on three common grass species
Physicochemical properties of fish protein hydrolysates from invasive silver carp (Hypophthalmicthys molitrix) for use as value-added cryoprotectant ingredients
Deciphering the metabolic networks in flowers: An integrative approach
"Let women build houses": American middle-income, single-family housing in the 1950s and the 1956 Women's Congress on Housing
Two essays on technical efficiency of aquaculture production in Kenya: Parametric and non-parametric methodological approaches
Profiling the atmosphere with the airborne radio occultation technique
Natural user interfaces for engineering sketch understanding
Two degree-of-freedom hysteresis compensation for a dynamic mirror with antagonistic piezoelectric stack actuation
Electric utility planning methods for the design of one shot stability controls
Development of dimeric prodrug inhibitors of P-glycoprotein and ABCG2 to enhance brain penetration of antiretroviral agents
New computational approaches to the solution of mixed integer programs
Development of a novel atmospheric pressure chemical ionization (APCI) method using palladium chloride for identification of lignin degradation products containing 3-phenylallyl alcohol
Adapting a commercial power system simulator for smart grid based system study and vulnerability assessment
Value chain development for tilapia and catfish products: Opportunities for female participation in Kenya
[Discrete opto-fluidic chemical spectrophotometry system (DOCSS) for online batch-sampling of heavy metals and fabrication of dithizone based evanescent wave optical fiber sensor](
{"source": 3879, "title": "from dpo"}
\chi)}\textrm{rec}_{\Gamma, v} = {\mathcal{L}}_{\Sigma, \Gamma}^{\textrm{Gal}} \otimes \bigwedge_{v \in E(\Sigma^c, \chi)}\textrm{ord}_v\,. \end{align*}$$ Here, the equality takes place in $$ \begin{align*} & \det\left( {\operatorname{Hom}}_{\mathcal{O}}(\widetilde{H}_{\textrm{f}}^1(G_{K,S}, T,\Delta_{\Sigma}), \mathscr{A}_\Gamma^e/\mathscr{A}_\Gamma^{e+1})\right) \otimes_{{\mathbb{Z}}_p} {\mathbb{Q}}_p. \end{align*}$$ (ii) When $e = 0$, we put ${\mathcal{L}}_{\Sigma,\Gamma}^{\textrm{Gal}} = 1$. Lemma 3.24. There is a ${\mathbb{Z}}_p$-extension $K_\Gamma/K$ with Galois group $\Gamma$ such that ${\mathcal{L}}_{\Sigma,\Gamma}^{\textrm{Gal}} \neq 0$. Proof. If $K_\Gamma/K$ is unramified outside $\Sigma$, then we have $$ \begin{align*} & {\mathcal{L}}_{\Sigma, \Gamma}^{\textrm{Gal}} = \prod_{v \in E(\Sigma^c, \chi)} (\textrm{Frob}_v - 1) \in \mathscr{A}_\Gamma^e. \end{align*}$$ It therefore suffices to prove the existence of a ${\mathbb{Z}}_p$-extension $K_\Gamma/K$ that is unramified outside $\Sigma$ and such that no prime in $E(\Sigma^c,\chi)$ splits completely in $K_\Gamma$. By the adèlic description of class field theory, any ${\mathbb{Z}}_p$-extension $K_\Gamma/K$ that is unramified outside $\Sigma$ corresponds to a subspace $X \subset \prod_{v \in \Sigma}\widehat{{\mathcal{O}}_{K_v}^{\times}} \otimes_{{\mathbb{Z}}_p} {\mathbb{Q}}_p$ of dimension $[K \colon {\mathbb{Q}}] - 1$ containing the image of ${\mathcal{O}}_K^\times \otimes_{{\mathbb{Z}}} {\mathbb{Q}}_p$. Let $h_K$ denote the class number of $K$. For a prime $v^c \in \Sigma^c$, let $\pi_{v^c}$ be a generator of the principal ideal ${\mathfrak{p}}_{v^c}^{h_K}$, where ${\mathfrak{p}}_{v^c}$ is the prime ideal of ${\mathcal{O}}_K$ corresponding to $v^c$. Then, by class field theory, $v^c$ splits completely in the extension $K_\Gamma/K$ if and only if $\pi_{v^c} \in X$.
We therefore need to prove the existence of a subspace $X \subset \prod_{v \in \Sigma}\widehat{{\mathcal{O}}_{K_v}^{\times}} \otimes_{{\mathbb{Z}}_p} {\mathbb{Q}}_p$ of dimension $[K \colon {\mathbb{Q}}] - 1$ such that: $X$ contains the image of ${\mathcal{O}}_K^\times \otimes_{{\mathbb{Z}}} {\mathbb{Q}}_p$; and $\pi_{v^c} \not\in
{"source": 5818, "title": "from dpo"}
The Zwanzig projection operator is a mathematical device used in statistical mechanics. This projection operator acts in the linear space of phase space functions and projects onto the linear subspace of "slow" phase space functions. It was introduced by Robert Zwanzig to derive a generic master equation. It is mostly used in this or similar contexts in a formal way to derive equations of motion for some "slow" collective variables. == Slow variables and scalar product == The Zwanzig projection operator operates on functions in the $6N$-dimensional phase space $\Gamma = \{\mathbf{q}_i, \mathbf{p}_i\}$ of $N$ point particles with coordinates $\mathbf{q}_i$ and momenta $\mathbf{p}_i$. A special subset of these functions is an enumerable set of "slow variables" $A(\Gamma) = \{A_n(\Gamma)\}$. Candidates for some of these variables might be the long-wavelength Fourier components $\rho_k(\Gamma)$ of the mass density and the long-wavelength Fourier components $\boldsymbol{\pi}_{\mathbf{k}}(\Gamma)$ of the momentum density, with the wave vector $\mathbf{k}$ identified with $n$. The Zwanzig projection operator relies on these functions but does not tell how to find the slow variables of a given Hamiltonian $H(\Gamma)$. A scalar product between two arbitrary phase space functions $f_1(\Gamma)$ and $f_2(\Gamma)$ is defined by the equilibrium correlation $(f_1, f_2) = \int d\Gamma\, \rho_0($
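The projection described above can be illustrated on a toy, discrete model: replace phase space by a handful of states with equilibrium weights rho0, and project an arbitrary function onto a single slow variable A using the equilibrium-correlation scalar product. This is only a finite-dimensional sketch of the idea, not the full 6N-dimensional construction.

```python
# Toy Zwanzig-style projection onto one slow variable A, using the
# weighted scalar product (f, g) = sum_i rho0_i * f_i * g_i.
# All numbers are invented for illustration.

rho0 = [0.1, 0.2, 0.3, 0.4]      # equilibrium distribution (sums to 1)
A    = [1.0, -1.0, 2.0, 0.5]     # a "slow" variable
f    = [0.3, 1.2, -0.7, 2.0]     # an arbitrary phase-space function

def scalar(u, v):
    # Equilibrium-correlation scalar product.
    return sum(r * x * y for r, x, y in zip(rho0, u, v))

def project(g):
    # P g = A * (A, g) / (A, A): the component of g along A.
    c = scalar(A, g) / scalar(A, A)
    return [c * a for a in A]

Pf  = project(f)
PPf = project(Pf)

# P is idempotent (P^2 = P), and the residual f - Pf is orthogonal to A.
assert all(abs(x - y) < 1e-12 for x, y in zip(Pf, PPf))
assert abs(scalar(A, [x - y for x, y in zip(f, Pf)])) < 1e-12
```

The two assertions are exactly the defining properties of a projection with respect to this scalar product.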
{"page_id": 30838369, "title": "Zwanzig projection operator"}
A bioprocessor is a miniaturized bioreactor capable of culturing mammalian, insect and microbial cells. Bioprocessors are capable of mimicking performance of large-scale bioreactors, hence making them ideal for laboratory scale experimentation of cell culture processes. Bioprocessors are also used for concentrating bioparticles (such as cells) in bioanalytical systems. Microfluidic processes such as electrophoresis can be implemented by bioprocessors to aid in DNA isolation and purification. == References ==
{"page_id": 2186993, "title": "Bioprocessor"}
Nomenclature codes or codes of nomenclature are the various rulebooks that govern the naming of living organisms. Standardizing the scientific names of biological organisms allows researchers to discuss findings (including the discovery of new species). As the study of biology became increasingly specialized, specific codes were adopted for different types of organism. To an end-user who only deals with names of species, with some awareness that species are assignable to genera, families, and other taxa of higher ranks, it may not be noticeable that there is more than one code; but beyond this basic level, the codes are rather different in the way they work. == Binomial Nomenclature == In taxonomy, binomial nomenclature ("two-term naming system"), also called binary nomenclature, is a formal system of naming species of living things by giving each a name composed of two parts, both of which use Latin grammatical forms, although they can be based on words from other languages. Such a name is called a binomial name (which may be shortened to just "binomial"), a binomen, binominal name, or a scientific name; more informally, it is also historically called a Latin name. In the ICZN, the system is also called binominal nomenclature ("binominal" with an "n" before the "al" is not a typographic error), meaning "two-name naming system". The first part of the name – the generic name – identifies the genus to which the species belongs, whereas the second part – the specific name or specific epithet – distinguishes the species within the genus. For example, modern humans belong to the genus Homo and within this genus to the species Homo sapiens. Tyrannosaurus rex is likely the most widely known non-human binomial. The formal introduction of this system of naming species is credited to Carl Linnaeus, effectively beginning with his work Species
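The typographic conventions just described (capitalized genus, lowercase epithet) are mechanical enough to sketch in code. The genus-initial abbreviation shown below ("T. rex") is a common convention not discussed in the excerpt, included here as an assumption.

```python
# Format a binomial name per the conventions described above:
# genus capitalized, specific epithet lowercase; optionally
# abbreviate the genus to its initial (a common convention,
# assumed here rather than taken from the text).

def format_binomial(genus: str, epithet: str, abbreviate: bool = False) -> str:
    genus = genus.capitalize()
    epithet = epithet.lower()
    if abbreviate:
        return f"{genus[0]}. {epithet}"
    return f"{genus} {epithet}"

assert format_binomial("homo", "Sapiens") == "Homo sapiens"
assert format_binomial("Tyrannosaurus", "rex", abbreviate=True) == "T. rex"
```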
{"page_id": 2736805, "title": "Nomenclature codes"}
Oceans north of the Antarctic Polar Front, but juvenile males have been seen wandering as far north as Brazil and South Africa. == Behavior and ecology == Typically, fur seals gather during the summer in large rookeries at specific beaches or rocky outcrops to give birth and breed. All species are polygynous, meaning dominant males reproduce with more than one female. For most species, total gestation lasts about 11.5 months, including a several-month period of delayed implantation of the embryo. Northern fur seal males aggressively select and defend the specific females in their harems. Females typically reach sexual maturity around 3–4 years. The males reach sexual maturity around the same time, but do not become territorial or mate until 6–10 years. The breeding season typically begins in November and lasts 2–3 months. The northern fur seals begin their breeding season as early as June due to their region, climate, and resources. In all cases, the males arrive a few weeks early to fight for their territory and groups of females with which to mate. They congregate at rocky, isolated breeding grounds and defend their territory through fighting and vocalization. Males typically do not leave their territory for the entirety of the breeding season, fasting and competing until all energy sources are depleted. The Juan Fernandez fur seals deviate from this typical behavior, using aquatic breeding territories not seen in other fur seals. They use rocky sites for breeding, but males fight for territory on land, on the shoreline, and in the water. Upon arriving at the breeding grounds, females give birth to their pups from the previous season. About a week later, the females mate again and shortly after begin their feeding cycle, which typically consists of foraging and feeding at sea for about 5 days, then returning to
{"page_id": 11780, "title": "Fur seal"}
== A == A System of Logic -- A priori and a posteriori -- Abacus logic -- Abduction (logic) -- Abductive validation -- Academia Analitica -- Accuracy and precision -- Ad captandum -- Ad hoc hypothesis -- Ad hominem -- Affine logic -- Affirming the antecedent -- Affirming the consequent -- Algebraic logic -- Ambiguity -- Analysis -- Analysis (journal) -- Analytic reasoning -- Analytic–synthetic distinction -- Anangeon -- Anecdotal evidence -- Antecedent (logic) -- Antepredicament -- Anti-psychologism -- Antinomy -- Apophasis -- Appeal to probability -- Appeal to ridicule -- Archive for Mathematical Logic -- Arché -- Argument -- Argument by example -- Argument form -- Argument from authority -- Argument map -- Argumentation theory -- Argumentum ad baculum -- Argumentum e contrario -- Ariadne's thread (logic) -- Aristotelian logic -- Aristotle -- Association for Informal Logic and Critical Thinking -- Association for Logic, Language and Information -- Association for Symbolic Logic -- Attacking Faulty Reasoning -- Axiom -- Axiom independence -- Axiom of reducibility -- Axiomatic system -- Axiomatization -- == B == Backward chaining -- Barcan formula -- Begging the question -- Begriffsschrift -- Belief -- Belief bias -- Belief revision -- Benson Mates -- Bertrand Russell Society -- Biconditional elimination -- Biconditional introduction -- Bivalence and related laws -- Blue and Brown Books -- Boole's syllogistic -- Boolean algebra (logic) -- Boolean algebra (structure) -- Boolean network -- == C == Canonical form -- Canonical form (Boolean algebra) -- Cartesian circle -- Case-based reasoning -- Categorical logic -- Categories (Aristotle) -- Categories (Peirce) -- Category mistake -- Catuṣkoṭi -- Circular definition -- Circular reasoning -- Circular reference -- Circular reporting -- Circumscription (logic) -- Circumscription (taxonomy) -- Classical logic -- Clocked logic -- Cognitive bias -- Cointerpretability -- Colorless green ideas sleep furiously -- Combinational logic
{"page_id": 302178, "title": "Index of logic articles"}
data led the experimenters to believe the N2pc corresponds to a filtering process that occurs whenever people focus attention on one object while ignoring others. == Component characteristics == The component's name, N2pc, abbreviates its characteristics. The component belongs to the family of N2 ERP components, a negative deflection in the ERP waveform at a latency of approximately 200-300 ms following a stimulus. The "pc" stands for "posterior-contralateral", describing the topographic distribution of the component. It appears as a greater negativity at posterior electrode sites contralateral to the attended side of the visual field relative to ipsilateral electrode sites. For example, when a person pays attention to something in the left side of the visual field, an N2pc appears as a greater negativity over the right posterior areas of the brain than the left. MEG has been used to localize the N2pc primarily to lateral extrastriate cortex and inferotemporal visual areas, such as V4. == Classic paradigms and findings == The N2pc can be used flexibly in nearly any task in which one would like to study the direction and time course of selective attention. However, researchers have primarily used the N2pc in visual search paradigms to study the deployment of attention over time and test hypotheses of parallel and serial models of visual search. The first experiments to investigate specifically the N2pc used a visual search paradigm in which subjects reported the presence or absence of a pre-defined target (e.g., a green rectangle) in a display containing one "oddball" stimulus that differed on a single feature from a uniform background of items (e.g., a green square among blue squares). The oddball stimuli would "pop out" and attract attention, but were not necessarily targets. As a result, experimenters knew where subjects directed attention, but could simultaneously manipulate factors orthogonal to
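The contralateral-minus-ipsilateral logic described above can be sketched numerically. All amplitudes below are invented toy values (in microvolts), and the electrode names PO7/PO8 are assumed typical posterior sites, not taken from the excerpt.

```python
# Quantify a toy N2pc as the contralateral-minus-ipsilateral
# difference at posterior sites, averaged over the ~200-300 ms window.
# For a left-visual-field target, the right-hemisphere site (e.g. PO8)
# is contralateral and the left-hemisphere site (PO7) ipsilateral.
# All values are INVENTED for illustration.

def mean(xs):
    return sum(xs) / len(xs)

contra_uv = [-1.8, -2.1, -2.4, -2.0]   # samples in the 200-300 ms window
ipsi_uv   = [-0.9, -1.0, -1.2, -1.1]

n2pc = mean(contra_uv) - mean(ipsi_uv)  # more negative contra => N2pc
print(f"N2pc amplitude: {n2pc:.2f} uV")
```

A negative difference wave in this window is the signature the component's name encodes.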
{"page_id": 27380813, "title": "N2pc"}
them. As the changes of their bodies varied between each other, the concept of difference arose. The concepts of the beautiful and the ugly were born. The beautiful scorned the ugly, and they became arrogant because of their appearance. Then, after the turnips, the earth became covered with rice plants. The first rice plants were without husk and kernels. The sweet and honey-like rice produced seeds abundantly. The people consumed them for a very long time. But some people became greedy and lazy. They took more rice than they needed for one day's meals. They began to take two, four, eight, and sixteen days' worth of rice reserves, as they were too lazy to gather rice every day. Owing to this, many other creatures began to store and hoard the rice. The generation time for rice plants became slower and slower. Usually, it took only one night for the plant to grow and be ready to be consumed, but by the karmic power the plant began to grow more and more slowly. The rice also grew in kernels and husks, scattered, which the creatures had to work, nurse, maintain, harvest, and cook in order to obtain the white rice. By this time, the bodies of the creatures had become finely evolved. There was already the distinction between male and female. The man became preoccupied with women and vice versa. Then, as they were deeply attracted to each other, passion and desire were aroused, and they engaged in sexual relationships. The people who saw a couple engaged in sexual activity scolded them, and usually the couple were forbidden from entering the village for a certain period of time. Owing to this, the indulgent couples built closed dwellings where they indulged in sexual activity. == The Birth of Social Order == In
{"page_id": 4302669, "title": "Aggañña Sutta"}
injury and subsequent amputation of Malik's arm. Malik retrieves the Templar treasure that Altaïr failed to find and delivers it to Al Mualim. No longer able to operate as an Assassin, Malik is made the bureau leader of the Jerusalem Assassins. At first he is bitter towards Altaïr, but over time forgives him and acknowledges his own fault in his brother's death. When Altaïr returns to confront Al Mualim, Malik supports him, distracting the indoctrinated Assassins while Altaïr faces Al Mualim. After Al Mualim's death, Altaïr becomes the new Mentor of the Levantine Assassins and names Malik his second-in-command. Malik is appointed as a temporary leader in Altaïr's absence, though he would soon be imprisoned by Abbas Sofian in Masyaf's dungeons for almost two years on false charges of murder. When Altaïr returns from his quest across the Middle East in 1228, Abbas has Malik executed and sends his severed head to Altaïr as a threat. ==== Kadar Al-Sayf ==== Kadar Al-Sayf (died July 1191) is the younger brother of Malik Al-Sayf and a member of the Levantine Assassin Brotherhood. He is less experienced than his brother and, just like him, is a devoted follower of the Creed, although, unlike him, he admires and deeply respects Altaïr. In 1191, Kadar accompanies Altaïr and Malik to Solomon's Temple to retrieve the Apple of Eden from the Templars. However, Altaïr inadvertently sabotages the mission due to his arrogance, and Kadar, due to his lack of swordsmanship skills, is swiftly killed by the Templars while Altaïr and Malik manage to escape with the Apple. ==== Richard the Lionheart ==== Richard I of England (8 September 1157 – 6 April 1199) (voiced by Marcel Jeannin), commonly known as Richard the Lionheart, reigned as King of England from 1189 until his death in 1199. He is
{"page_id": 26888898, "title": "List of Assassin's Creed characters"}
A number of tools exist to generate computer network diagrams. Broadly, there are four types of tools that help create network maps and diagrams:
Hybrid tools
Network mapping tools
Network monitoring tools
Drawing tools
Network mapping and drawing software helps IT systems managers understand the hardware and software services on a network and how they are interconnected. Network maps and diagrams are a component of network documentation. They are required artifacts for better managing IT systems' uptime, performance, and security risks, and for planning network changes and upgrades. == Hybrid tools == These tools have capabilities in common with drawing tools and network monitoring tools. They are more specialized than general drawing tools and provide network engineers and IT systems administrators a higher level of automation and the ability to develop more detailed network topologies and diagrams. Typical capabilities include, but are not limited to:
Displaying port/interface information on connections between devices on the maps
Visualizing VLANs/subnets
Visualizing virtual servers and storage
Visualizing flow of network traffic across devices and networks
Displaying WAN and LAN maps by location
Importing network configuration files to generate topologies automatically
== Network mapping tools == These tools are specifically designed to generate automated network topology maps. These visual maps are automatically generated by scanning the network using network discovery protocols. Some of these tools integrate into documentation and monitoring tools. Typical capabilities include, but are not limited to:
Automatically scanning the network using SNMP, SSH, WMI, etc.
Scanning Windows and Unix servers
Scanning virtual hosts
Scanning routing protocols
Performing scheduled scans
Tracking changes to the network
Notifying users of changes to the network
== Network monitoring tools == Some network monitoring tools generate visual maps by automatically scanning the network using network discovery protocols. The maps are ideally suited for viewing network monitoring status
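The drawing end of these tools can be sketched with a tiny script: given a device inventory and link list (hard-coded here with invented names; a real tool would populate them via the SNMP/SSH/WMI discovery described above), emit Graphviz DOT text that a renderer turns into a network diagram.

```python
# Emit a Graphviz DOT description of a small network.
# Device and link names are INVENTED for illustration.

devices = {
    "core-sw1": "switch",
    "edge-rtr1": "router",
    "srv-web01": "server",
}
links = [("edge-rtr1", "core-sw1"), ("core-sw1", "srv-web01")]

def to_dot(devices, links):
    lines = ["graph network {"]
    for name, kind in devices.items():
        # \n inside a DOT label is rendered as a line break.
        lines.append(f'  "{name}" [label="{name}\\n({kind})"];')
    for a, b in links:
        lines.append(f'  "{a}" -- "{b}";')
    lines.append("}")
    return "\n".join(lines)

dot = to_dot(devices, links)
print(dot)
```

Piping this text through `dot -Tpng` (Graphviz) would produce the diagram image.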
{"page_id": 33743088, "title": "Network diagram software"}
space vehicles in order of heating applied to the airframe would have capsules at the top (re-entering quickly with very high heating loads), waveriders at the bottom (extremely long gliding profiles at high altitude), and the Space Shuttle somewhere in the middle. Simple waveriders have substantial design problems. First, the obvious designs only work at a particular Mach number, and the amount of lift captured will change dramatically as the vehicle changes speed. Another problem is that the waverider depends on radiative cooling, which is possible only as long as the vehicle spends most of its time at very high altitudes. However, these altitudes also demand a very large wing to generate the needed lift in the thin air, and that same wing can become rather unwieldy at lower altitudes and speeds. Because of these problems, waveriders have not found favor with practical aerodynamic designers, despite the fact that they might make long-distance hypersonic vehicles efficient enough to carry air freight. Some researchers controversially claim that there are designs that overcome these problems. One candidate for a multi-speed waverider is a "caret wing", operated at different angles of attack. A caret wing is a delta wing with longitudinal conical or triangular slots or strakes. It strongly resembles a paper airplane or Rogallo wing. The required angle of attack would have to become increasingly precise at higher Mach numbers, but this is a control problem that is theoretically solvable. The wing is said to perform even better if it can be constructed of tight mesh, because that reduces its drag while maintaining lift. Such wings are said to have the unusual attribute of operating at a wide range of Mach numbers in different fluids with a wide range of Reynolds numbers. The temperature problem can be solved with some combination of a transpiring surface, exotic materials,
{"page_id": 227086, "title": "Waverider"}
is offset by the heat given up by the water = W Cp ΔT, where W = weight of the water (= 250 gpm x 60 min/hr x 8.33 lb/gal) = 124,950 lb/hr. Therefore, 86,350 BTU/hr = W Cp ΔT = (124,950 lb/hr)(1 BTU/lb-°F)(ΔT). Solving for ΔT gives the temperature difference of the water (i.e., the temperature decrease in the water as a result of heat loss through the stripper shell) = 0.69°F. This gives a water temperature of 54.35°F minus 0.69°F = 53.66°F when the ambient air temperature is 0°F. The air stripper will perform as designed and will be guaranteed per the specification even if the water temperature drops to 50°F. Calculations Provided by Branch Environmental for the Impact of Air Stripper Operation with Water Temperature = 55°F and Ambient Air Temperature = 0°F: Treating the first air stripper as a heat exchanger, there will be two temperature impacts: (1) the change in water and air temperatures to reach equilibrium (i.e., the air will "warm up" and the water will "cool down" until they reach the same temperature, based on their relative thermal mass), and (2) the heat loss through the stripper fiberglass shell to the ambient air. (1) The design air-to-water volumetric ratio is 40:1. This equates to 3 lb. of air to 62.4 lb. of water (i.e., 40 cu. ft. of air weighs 3 lbs.; 1 cu. ft. of water weighs 62.4 lbs.). To reach temperature equilibrium, we take WA CpA ΔTA = WW CpW ΔTW, where W = weight of air (or water)
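The shell-loss arithmetic above can be laid out as a short script. The 86,350 figure is read here as BTU/hr of heat loss through the shell (the OCR'd units are unclear in the original), and Cp of water is taken as 1 BTU/(lb·°F) per the calculation shown.

```python
# Sensible-heat balance on the stripper water stream:
# Q = W * Cp * dT, so dT = Q / (W * Cp).
# 86,350 BTU/hr is read from the text as the shell heat loss.

Q_shell = 86_350            # BTU/hr lost through the fiberglass shell
flow_gpm = 250
water_lb_per_gal = 8.33
Cp_water = 1.0              # BTU/(lb*F)

W_water = flow_gpm * 60 * water_lb_per_gal   # lb/hr (= 124,950)
dT = Q_shell / (W_water * Cp_water)          # temperature drop, ~0.69 F

T_in = 54.35                                 # F, entering water
T_out = T_in - dT                            # ~53.66 F at 0 F ambient
print(f"{W_water:.0f} lb/hr, dT = {dT:.2f} F, outlet = {T_out:.2f} F")
```

The script reproduces the 0.69°F drop and 53.66°F outlet temperature quoted in the calculation.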
{"source": 957, "title": "from dpo"}
the memory hierarchy to program performance, not surprisingly many software optimizations were invented that can dramatically improve performance by reusing data within the cache and hence lower miss rates due to improved temporal locality. When dealing with arrays, we can get good performance from the memory system if we store the array in memory so that accesses to the array are sequential in memory. Suppose that we are dealing with multiple arrays, however, with some arrays accessed by rows and some by columns. Storing the arrays row-by-row (called row major order) or column-by-column (column major order) does not solve the problem because both rows and columns are used in every loop iteration. Instead of operating on entire rows or columns of an array, blocked algorithms operate on submatrices or blocks. The goal is to maximize accesses to the data loaded into the cache before the data are replaced; that is, improve temporal locality to reduce cache misses. For example, the inner loops of DGEMM (lines 4 through 9 of Figure 3.21 in Chapter 3) are

> for (int j = 0; j < n; ++j) {
>   double cij = C[i+j*n]; /* cij = C[i][j] */
>   for (int k = 0; k < n; k++)
>     cij += A[i+k*n] * B[k+j*n]; /* cij += A[i][k]*B[k][j] */
>   C[i+j*n] = cij; /* C[i][j] = cij */
> }

It reads all N-by-N elements of B, reads the same N elements in what corresponds to one row of A repeatedly, and writes what corresponds to one row of N elements of C. (The comments make the rows and columns of the matrices easier to identify.) Figure 5.20 gives a snapshot of the accesses to the three arrays. A dark shade indicates a recent access, a light shade indicates an older access, and
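The blocking idea described above can be sketched compactly (the book's example is C; Python is used here for a self-contained, runnable illustration): the loops walk BLOCK x BLOCK submatrices so each block of B is reused many times before it would be evicted from a cache.

```python
# Blocked matrix multiply: process BLOCK x BLOCK submatrices at a time
# to improve temporal locality. Matrix contents are arbitrary test data.

N, BLOCK = 8, 4

A = [[float(i * N + j) for j in range(N)] for i in range(N)]
B = [[float((i + j) % 5) for j in range(N)] for i in range(N)]
C = [[0.0] * N for _ in range(N)]

for jj in range(0, N, BLOCK):          # block of columns of C and B
    for kk in range(0, N, BLOCK):      # block of the k (inner-product) range
        for i in range(N):
            for j in range(jj, min(jj + BLOCK, N)):
                cij = C[i][j]          # running partial sum of C[i][j]
                for k in range(kk, min(kk + BLOCK, N)):
                    cij += A[i][k] * B[k][j]
                C[i][j] = cij
```

The result matches an unblocked multiply exactly; only the order in which the cache touches B changes.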
{"source": 2304, "title": "from dpo"}
concentration camp. It was the biggest Nazi concentration camp in Europe during World War II.
Unit 3 • The Challenge to Make a Difference 295 SAMPLE © 2021 College Board. All rights reserved.
the entrance on the inside—they always had a machine gun mounted with SS guards sitting on, and there was nobody there, the machine gun was empty. I came back and told people. Of course, nobody believed me ( laughing ) that this was happening. And then we just waited. And the shooting came closer. Then we began hearing small arms fire and suddenly sort of, I think it was in the early afternoon ... the camp had a big bell in the middle of this field, and a Russian soldier was ringing—was driven in with a jeep and was ringing the bell saying, "You're free." You know when I see pictures of people who were liberated by American troops, by British troops, they were liberated. We were sort of ... there were none of these scenes as far as I remember. The Russians just told us, "You can go." I mean, we felt a great sense of relief, because we expected to be shot. But I didn't have any sense of the tremendous joy that other people must have experienced. I was alone in many ways. I think if my parents had been there it would have been different. > 9 NARRATOR: Thomas had been separated from his parents for several months. He was taken in by members of a Polish Army unit under Russian command. The soldiers assumed that he was a Christian Polish child. And Thomas had experienced enough discrimination to know that it was not safe to tell them that he was Jewish and from Czechoslovakia. The Polish he had learned in the ghetto
{"source": 4933, "title": "from dpo"}
und Oktavengeometrie, Mathematisch Instituut der Rijksuniversiteit te Utrecht, Utrecht, 1951, 1960.
H. Freudenthal, Lie groups in the foundations of geometry, Adv. Math. 1 (1964), 145-190.
538 Bibliography
[Fr-dV] H. Freudenthal and H. de Vries, Linear Lie Groups, Academic Press, New York, 1969.
[Fro1] F. G. Frobenius, Über die Charaktere der symmetrischen Gruppe, Sitz. König. Preuss. Akad. Wissen. (1900), 516-534; Gesammelte Abhandlungen III, Springer-Verlag, Heidelberg, 1968, pp. 148-166.
[Fro2] F. G. Frobenius, Über die Charaktere der alternirenden Gruppe, Sitz. König. Preuss. Akad. Wissen. (1901), 303-315; Gesammelte Abhandlungen III, Springer-Verlag, Heidelberg, 1968, pp. 167-179.
[Fu] W. Fulton, Intersection Theory, Springer-Verlag, New York, 1984.
[G-H] P. Griffiths and J. Harris, Principles of Algebraic Geometry, Wiley-Interscience, New York, 1978.
[Gil] R. Gilmore, Lie Groups, Lie Algebras, and Some of Their Applications, Wiley, New York, 1974.
[G-N-W] C. Greene, A. Nijenhuis, and H. S. Wilf, A probabilistic proof of a formula for the number of Young tableaux of a given shape, Adv. Math. 31 (1979), 104-109.
[Gr] J. A. Green, The characters of the finite general linear group, Trans. Amer. Math. Soc. 80 (1955), 402-447.
[Gre] M. L. Green, Koszul cohomology and the geometry of projective varieties, I, II, J. Diff. Geom. 19 (1984), 125-171; 20 (1984), 279-289.
[Grie] R. L. Griess, Automorphisms of extra special groups and nonvanishing degree 2 cohomology, Pacific J. Math. 48 (1973), 403-422.
[Ha] J. Harris, Algebraic Geometry, Springer-Verlag, New York, to appear.
[Ham] M. Hamermesh, Group Theory and its Application to Physical Problems, Addison-Wesley, Reading, MA, 1962 and Dover, 1989.
[Har] G. H. Hardy, Ramanujan, Cambridge University Press, Cambridge, MA, 1940.
[Hel] S. Helgason, Differential Geometry, Lie Groups, and Symmetric Spaces, Academic Press, New York, 1978.
[Her] R. Hermann, Spinors, Clifford and Cayley Algebras, Interdisciplinary
{"source": 6137, "title": "from dpo"}
end joining (NHEJ) pathway factors, which perform a similar religation function to that of TOPIIβ, and the homologous recombination (HR) pathway, which uses the non-broken sister strand as a template to repair the damaged strand of DNA. Stimulation of neuronal activity, as previously mentioned in IEG expression, is another mechanism through which DSBs are generated. Changes in level of activity have been used in studies as a biomarker to trace the overlap between DSBs and increased histone H3K4 methylation in promoter regions of IEGs. Other studies have indicated that transposable elements (TEs) can cause DSBs through endogenous activity that involves using endonuclease enzymes to insert and cleave target DNA at random sites. ==== DSBs and memory reconsolidation ==== While accumulation of DSBs generally inhibits long-term memory consolidation, the process of memory reconsolidation, in contrast, is DSB-dependent. Memory reconsolidation involves the modification of existing memories stored in long-term memory. Research involving NPAS4, a gene that regulates neuroplasticity in the hippocampus during contextual learning and memory formation, has revealed a link between deletions in the coding region and impairments in recall of fear memories in transgenic rats. Moreover, the enzyme that catalyzes the demethylation of H3K4me3 (trimethylated histone H3K4) was upregulated at the promoter region of the NPAS4 gene during the reconsolidation process, while knockdown of the same enzyme impeded reconsolidation. A similar effect was observed with TOPIIβ, where knockdown also impaired the fear memory response in rats, indicating that DSBs, along with the enzymes that regulate them, influence memory formation at multiple stages. ==== DSBs and neurodegeneration ==== Buildup of DSBs more broadly leads to the degeneration of neurons, hindering the function of memory and learning processes. 
Due to their lack of cell division and high metabolic activity, neurons are especially prone to DNA damage. Additionally, an imbalance of
{"page_id": 37626088, "title": "DNA damage (naturally occurring)"}
processes shape the biota and biotic factors can shape land forming processes. == Origins and early work == The earliest work related to biogeomorphology was Charles Darwin's 1881 book titled The Formation of Vegetable Mould through the Action of Worms. Although the field of biogeomorphology had not yet been named, Darwin's work represents the earliest examination of a faunal organism influencing landscape process and form. Darwin begins his work on worms with an examination of behavior and physiology, which then moves towards topics related to geomorphology, pedogenesis, and bioturbation. Observations and measurements of soil moved by earthworms, and emphasis on the role of earthworms in formation of humus, fertility of soils, and mixing of soils were all described in the book, which began to change the perspective on earthworms from pest to critical agent of pedogenesis. Despite the popularity of Darwin's final work, the scientific community was slow to recognize the significance of examining the role of organisms in influencing landscapes. It was not until the late twentieth century that biogeomorphology began attracting the attention of more than a handful of researchers. == Research approaches == There are two approaches to research in biogeomorphology. The first is statistical and empirical, an approach commonly used in the fields of ecology and biology: it employs large replication studies and derives patterns from the statistical data. A more geomorphic research approach instead tends to derive patterns via theoretical knowledge and detailed measurements of multiple factors, and in turn uses smaller sample sizes than large replication studies. == Biogeomorphological processes == There are several biogeomorphological processes. Bioerosion is the weathering and removal of abiotic material via organic processes. This can either be passive or active. 
Moreover, bioerosion is the chemical and/or the
{"page_id": 3964562, "title": "Biogeomorphology"}
The Gosselin fracture is a V-shaped fracture of the distal tibia which extends into the ankle joint and fractures the tibial plafond into anterior and posterior fragments. The fracture was described by Leon Athanese Gosselin, chief of surgery at the Hôpital de la Charité in Paris. == References == == External links ==
{"page_id": 24831559, "title": "Gosselin fracture"}
five core facilities available to consortium members provide services for cloning, expression, and purification of proteins as well as crystallization and subsequent diffraction and data analysis of protein crystals. Furthermore, a database has been developed to record all activity done within the consortium. This database also tracks the movement of materials between members and allows up-to-the-minute status to be recorded and made available to all other members. == References == == External links == Homepage Archived 2017-03-02 at the Wayback Machine
{"page_id": 6050102, "title": "Tuberculosis Structural Genomics Consortium"}
that all the universe's hydrogen would be consumed in the first few minutes after the Big Bang. This "diproton argument" is disputed by other physicists, who calculate that as long as the increase in strength is less than 50%, stellar fusion could occur despite the existence of stable diprotons. The precise formulation of the idea is made difficult by the fact that it is not yet known how many independent physical constants there are. The standard model of particle physics has 25 freely adjustable parameters and general relativity has one more, the cosmological constant, which is known to be nonzero but profoundly small in value. Because physicists have not developed an empirically successful theory of quantum gravity, there is no known way to combine quantum mechanics, on which the standard model depends, and general relativity. Without knowledge of this more complete theory suspected to underlie the standard model, it is impossible to definitively count the number of truly independent physical constants. In some candidate theories, the number of independent physical constants may be as small as one. For example, the cosmological constant may be a fundamental constant but attempts have also been made to calculate it from other constants, and according to the author of one such calculation, "the small value of the cosmological constant is telling us that a remarkably precise and totally unexpected relation exists among all the parameters of the Standard Model of particle physics, the bare cosmological constant and unknown physics". == Examples == Martin Rees formulates the fine-tuning of the universe in terms of the following six dimensionless physical constants. N, the ratio of the electromagnetic force to the gravitational force between a pair of protons, is approximately 10³⁶. According to Rees, if it were significantly smaller, only a small and short-lived universe could exist.
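Rees's first constant can be sanity-checked numerically. The sketch below (our own variable names; CODATA-style approximate values) computes the ratio of the electrostatic to the gravitational force between two protons, which should come out near 10³⁶:

```python
import math

# Back-of-the-envelope check of Rees's N: the ratio of electrostatic
# repulsion to gravitational attraction between two protons.
# The separation r cancels, since both forces fall off as 1/r^2.
e = 1.602176634e-19       # elementary charge, C
eps0 = 8.8541878128e-12   # vacuum permittivity, F/m
G = 6.67430e-11           # gravitational constant, m^3 kg^-1 s^-2
m_p = 1.67262192369e-27   # proton mass, kg

coulomb = e**2 / (4 * math.pi * eps0)  # electrostatic force times r^2
gravity = G * m_p**2                   # gravitational force times r^2
N = coulomb / gravity
print(f"N ≈ {N:.2e}")  # on the order of 10^36
```

The distance-independence of the ratio is why N can be quoted as a single dimensionless number.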
{"page_id": 573880, "title": "Fine-tuned universe"}
1930s. Long-term water level decline continued, forcing an emergency release of water from the Flaming Gorge Reservoir in July 2021, and by April 22, 2022 Lake Powell was at 3,522.24 feet (1,073.58 m) in elevation – just 22.88% of capacity. This marks the lowest water level for Lake Powell since it was filled in 1963. Peer-reviewed studies indicate that storing water in Lake Mead rather than in Lake Powell would yield a savings of 300,000 acre feet of water or more per year, leading to calls by environmentalists to drain Lake Powell and restore Glen Canyon to its natural, free-flowing state. === Power generation === The other principal goal of Glen Canyon Dam is hydroelectricity generation. It is the second-biggest producer of hydroelectric power in the Southwestern United States, after Hoover Dam. Revenues derived from power sales were integral in paying off the bonds used to build the dam and have also been used to fund other Bureau of Reclamation projects, including environmental restoration programs in the Grand Canyon and elsewhere along the Colorado River. For this reason, it has long been known as a "cash register" dam. The dam also serves as a primary peaking power plant and black start power source for the Southwest electrical grid. The power plant has a total capacity of 1,320 megawatts from eight 165,000 kilowatt generators. Each generator is driven by a 254,000 horsepower vertical-axis Francis turbine. The gross hydraulic head is 510 feet (160 m). The units were installed between September 1964 and February 1966 at an original rating of 950 megawatts; an upgrade project between 1985 and 1997 brought it to its present capacity. Because of fluctuating demands on the electrical grid, the dam release into the Colorado River rises and falls dramatically on a daily basis. After the dam was
{"page_id": 344815, "title": "Glen Canyon Dam"}
Gene conversion is the process by which one DNA sequence replaces a homologous sequence such that the sequences become identical after the conversion. Gene conversion can be either allelic, meaning that one allele of the same gene replaces another allele, or ectopic, meaning that one paralogous DNA sequence converts another. == Allelic gene conversion == Allelic gene conversion occurs during meiosis when homologous recombination between heterozygotic sites results in a mismatch in base pairing. This mismatch is then recognized and corrected by the cellular machinery causing one of the alleles to be converted to the other. This can cause non-Mendelian segregation of alleles in germ cells. == Nonallelic/ectopic gene conversion == Recombination occurs not only during meiosis, but also as a mechanism for repair of double-strand breaks (DSBs) caused by DNA damage. These DSBs are usually repaired using the sister chromatid of the broken duplex and not the homologous chromosome, so they would not result in allelic conversion. Recombination also occurs between homologous sequences present at different genomic loci (paralogous sequences) which have resulted from previous gene duplications. Gene conversion occurring between paralogous sequences (ectopic gene conversion) is conjectured to be responsible for concerted evolution of gene families. == Mechanism == Conversion of one allele to the other is often due to base mismatch repair during homologous recombination: if one of the four chromatids during meiosis pairs up with another chromatid, as can occur because of sequence homology, DNA strand transfer can occur followed by mismatch repair. This can alter the sequence of one of the chromosomes, so that it is identical to the other. Meiotic recombination is initiated through formation of a double-strand break (DSB). The 5’ ends of the break are then degraded, leaving long 3’ overhangs of several hundred nucleotides. One of these 3’ single stranded DNA
{"page_id": 1695344, "title": "Gene conversion"}
or post-1974 British English word: trillion (10¹² in the short scale), and not billion (10⁹ in the short scale). On the other hand, the pre-1961 former French word billion, pre-1994 former Italian word bilione, Brazilian Portuguese word bilhão, and Welsh word biliwn all refer to 10⁹, being short scale terms. Each of these words translates to the American English or post-1974 British English word billion (10⁹ in the short scale). The term billion originally meant 10¹² when introduced. In long scale countries, milliard was defined to its current value of 10⁹, leaving billion at its original 10¹² value and so on for the larger numbers. Some of these countries, but not all, introduced new words billiard, trilliard, etc. as intermediate terms. In some short scale countries, milliard was defined to 10⁹ and billion dropped altogether, with trillion redefined down to 10¹² and so on for the larger numbers. In many short scale countries, milliard was dropped altogether and billion was redefined down to 10⁹, adjusting downwards the value of trillion and all the larger numbers. The root mil in million does not refer to the numeral 1. The word, million, derives from the Old French, milion, from the earlier Old Italian, milione, an intensification of the Latin word, mille, a thousand. That is, a million is a big thousand, much as a great gross is a dozen gross or 12 × 144 = 1728. The word milliard, or its translation, is found in many European languages and is used in those languages for 10⁹. However, it is not found in American English, which uses billion, and not used in British English, which preferred to use thousand million before the current usage of billion. The financial term yard, which derives from milliard, is used on financial markets, as, unlike the term
{"page_id": 949880, "title": "Long and short scales"}
to target. When performing marketing analysis, neural networks can assist in the gathering and processing of information ranging from consumer demographics and credit history to the purchase patterns of consumers. AI is allowing organizations to “deliver an ad experience that is more personalized for each user, shapes the customer journey, influences purchasing decisions, and builds brand loyalty” (“How”). AI technology allows marketers to separate their consumers into distinct personas and understand what motivates their consumers. Here they can then focus on the specific needs of their audience and create a long-lasting relationship with the brand (Kushmaro). Ultimately brands want to create that loyalty with a consumer, and AI will allow them to better achieve this. According to Pini Yakuel, founder and CEO of Optimove, “By analyzing customers based on their movement among segments over time, we can achieve dynamic micro-segmentation and predict future behavior in a very accurate fashion” (Kushmaro). Being able to predict future behaviors of consumers is very important. This way marketers can specifically market to consumers based on their current behaviors and the predictions of their future behaviors. This will allow for a loyal relationship between the consumer and the brand and will ultimately help businesses. == Application of artificial intelligence to marketing decision making == Marketing is a complex field of decision making which involves a large degree of both judgment and intuition on behalf of the marketer. The enormous increase in complexity that the individual decision-maker faces renders the decision-making process almost an impossible task. The marketing decision engine can help distill the noise. The generation of more efficient management procedures has been recognized as a necessity. 
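The micro-segmentation idea can be illustrated with a toy sketch. This is not Optimove's or any vendor's actual method; the segment names, thresholds, and customer fields are all invented for illustration:

```python
# Toy rule-based customer segmentation: assign each customer a persona
# from simple behavioral features. Real systems would learn segments
# from data rather than hard-code thresholds.
customers = [
    {"name": "A", "purchases_per_month": 8, "avg_spend": 120.0},
    {"name": "B", "purchases_per_month": 1, "avg_spend": 15.0},
    {"name": "C", "purchases_per_month": 5, "avg_spend": 60.0},
]

def segment(c):
    # Hypothetical thresholds separating personas.
    if c["purchases_per_month"] >= 6 and c["avg_spend"] >= 100:
        return "loyal high-value"
    if c["purchases_per_month"] >= 3:
        return "regular"
    return "at-risk"

personas = {c["name"]: segment(c) for c in customers}
# → {'A': 'loyal high-value', 'B': 'at-risk', 'C': 'regular'}
```

Tracking how customers move between such segments over time is what enables the predictive targeting described above.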
The application of artificial intelligence to decision making through a decision support system can aid the decision-maker in dealing with uncertainty in decision problems. Artificial intelligence techniques
{"page_id": 35591037, "title": "Marketing and artificial intelligence"}
David Phillips has been Deputy Scientist-in-Charge (DSIC) at the Hawaiian Volcano Observatory since 2020. He served as acting Scientist-in-Charge in the interim period between Tina Neal and Ken Hon. == Career == Prior to his post as DSIC, he was working out of Boulder, Colorado as a program manager for UNAVCO. Throughout his career, he has been consulted by the press as a volcano expert, notably when he spoke to the Associated Press multiple times on behalf of the Observatory about the 2022 eruption of Mauna Loa. == Footnotes == == External links == UH Hilo Alum Profile
{"page_id": 73902783, "title": "David Phillips (geologist)"}
even the most baroque systems of composite pronouns, found in the Bantu language Bamileke, are derived by combination of simpler categories. The interpretation of complex pronouns combines both the explicit, truth-functional semantics of a form with functional inferences made by speakers as to the felicitous use of the form. The interaction of inherent number and functional inference is then illustrated with the pronominal system of Nunggubuyu in section 2.2.6. Then, in section 2.2.7, I show that inherent number and functional inference explain the so-called 1st person Limited Inclusive in Sierra Popoluca, which is a prima facie counterexample to the claim that there is no feature [-3]. Finally, in sections 2.3 - 2.3.3, I argue that both inherent number of nouns and a syntactically-supplied class feature [INVERSE] interact in the Kiowa-Tanoan languages to give a complex relationship between semantic, morphological, and classificatory (agreement-relevant) number-class. 2.2.0 The Simple Cardinality Features In many languages the number features are uncomplicated and are mapped to simple rules governing the cardinality of the index set of the relevant argument:
(62) a. [+sg] → 1
b. [-sg] → > 1
c. dual → 2
d. trial → 3
e. quadral → 4
Beyond these relatively trivial observations, the number features do have interesting properties. First, while both values of [±sg] are morphologically active, defining the traditional classes singular and plural, all the evidence I have encountered is consistent with the hypothesis that the cardinality features [dual], [trial] and [quadral] are monovalent. In other words, the complements of these classes, e.g. [-trial], do not function as natural classes. In terms of markedness, the value [+sg] is more costly than the value [-sg]. We can then hypothesize that any language which has [+sg] morphologically active will also permit reference to [-sg], but a less costly system will impoverish
{"source": 1025, "title": "from dpo"}
say the preference matrix 𝐵 is fractionally separable for items if the following condition holds. Condition 2 (Fractional Separability Condition for Items) There exists a constant 0 1 𝐺 . So the probability that a user
{"source": 3332, "title": "from dpo"}
hoytema kratochvil curating bolzoni mesoamerican ivuti xcaret tulkowice ricegrass sansour carpilioidea palici kampeas kakeya curitiba palica silozi streblus palice faggs tidings hamishpaha kageaki tubby lanterna tubbs harthill tafuri grantown nekschot stawley greffulhe dembiya penises mardle kazantsev guarde oppdal laplanders dunchad brandstatter fryerning muricidae pensionary aadizookaanag quick quico bahia kilmister subbanna takhtajan monkiewicz fhia casanate wielka wielki guards diprotodon wiewiorow carlingford contracted northfleet lhasa radycza sunseri reply turkce dolgellau clausthal adumim bambous bhupatindra woliczka alkaline mediactive semmy tumbledown water eupatoria shaquille guasacaca mysticeti kozhencherry avaalaaqiaq buchheim bambouk avenging exallodontus gigaset koulikoro gambella lorikeet therapsids zobah edguy sunnyland ethiopian cabby fiumalbo nieman sorce ako mantorville hermits culdee balatonoszod pyramidulidae presezzo avdira szwarc dioicus wrvs tecnologica beals buehning filmer altay stradivarius kupimierz wiessee sessions altas altar hentzau kaysen altai drown altan ayyub sequoia aliya altaf insights wanda swinefleet michelade deobandi scoglitti wando flagship boonton ryusei wandy andreadis numbers streak viceroy plauer stream echochrome parvathy jiushao quacky znin matzenauer plauen streat inheritance stadsteater gigerwaldsee twining morganton jifar ralston giambi czerniew xiuqing srs bhupati dworzakowo atmel pompe guitarfreaks rinku kiernan laumer craigdarroch sciarra rinke disciotis renovation gorges deva badlands cluedo vincello gorgey kakegawa nakhla pinggu purification vrelo donbass pegana maredsous exclusive mcquade trafficked yorishima konigsberg hodiak ducklings teebee sidcup poljane mahomes alerces mierkalns rapport stanchinsky swani kajsa swann swano microcomputer helion dierhagen mudgeeraba seguier pyridoxine terramarque onehunga mkwawa swans swanu nyeri vaila rapamycin vaile postgres balchik mvpa alajuelense borophaginae qs 
dzbance tetraonides lateranensis aztecas dundrum littorinidae martensson lawwill strongsville qt kluft plyushch gandaki candyskins walpurgisnacht bergslien tabata vitesse riquier caghwell bathurst msciszow menagerie kruse krush bodeans mazatec hairoun bhavni kruss righetti authenticity barrog teshub jaswily hysteric hysteria hacker uduk binissalem saadeh eje middleman gaabu azalais ruck ruch nestande qays pecheurs hacken osakis attended fribourg bolts barrundia sumiswald glockner lentaigne bithynian radanova counteroffensive bolte naktong simek
{"source": 5213, "title": "from dpo"}
(Li × w7) + (SAS × w8) + …. Where: * SAS, CI, DRS, and CST, ELS are as defined above * I = Innovation Index described in the Innovation and value creation section * w[1…n] = Weighting factors based on the relative importance of collaboration, developer reputation, and community trust for the grant program. Program managers and relevant stakeholders would define them. * Multiple metrics can be added here, and interpretation is based on the weighting factors * The scale is optional; it can be normalized to a scale of 0 to 5 or a different one. This composite score gives an overall view of the project’s impact and effectiveness, balancing collaboration, developer reputation, and community trust. * * * 1. Ecosystem growth, community development and sustainable impact Defining and measuring how the grant program contributes to the long-term growth and sustainability of the Web3 ecosystem should be one of the priorities in grant programs. The list below captures broad formulas that can be considered to evaluate a grant program’s effectiveness in terms of ecosystem growth. It is important to mention that each of the formulas can and should be adapted to the scope and timeline of the grant program analyzed. Users is a broad term and can refer to: dapp/protocol users, lenders/borrowers, contributors, marketers, developers, community members, artists, people onboarded, etc. 1. User activity and engagement depth (UAED) Grants are growth tools, and growth involves increased user activity. This metric broadly helps us calculate the average number of meaningful interactions* (e.g., transactions, votes, contributions, etc.) per daily active user. * G = The total
{"source": 6413, "title": "from dpo"}
Mechanical vapor recompression (MVR) is an energy recovery process which recycles waste heat to improve efficiency. Typically, the compressed vapor is fed back to help heat the mother liquor in order to produce more vapor or steam. == Applications == === Current === Mechanical vapor recompression is used chiefly in industrial processes such as evaporation and distillation. Heat from the condenser, which would otherwise be lost, can be recovered and used in the evaporation process. === Past === MVR was successfully tested in a locomotive under the name of "The Anderson System". Testing found that it almost completely eliminated steam ejection, as well as greatly reduced operating noise. Harold Holcroft, organiser of the tests, wrote the following: "In the ordinary way this would have created much noise and clouds of steam, but with the condensing set in action it was all absorbed with the ease with which snow would melt in a furnace! The engine was as silent as an electric locomotive and the only faint noises were due to slight pounding of the rods and a small blow at a piston gland. This had to be experienced to be believed; but for the regulator being wide open and the reverser well over, one would have imagined that the second engine (an LSWR T14 class that had been provided as a back-up) was propelling the first". The trials continued until 1934 but various problems arose, mostly with the fan for forced draught, and the project went no further. The locomotive was converted back to standard form in 1935. MVR was also used in the Cristiani compressed steam system for locomotive transmission. Although it was technically feasible, it failed to become popular because of its complexity. == Benefits == The main benefit of mechanical vapour recompression (MVR) is that it
{"page_id": 21709412, "title": "Mechanical vapor recompression"}
(Skousen 2002, in Skousen et al. 2002, pp. 11–25, and Skousen 2003, both passim) === Formulas === Given a context with n elements: total number of pairings: n²; number of agreements for outcome i: nᵢ²; number of disagreements for outcome i: nᵢ(n − nᵢ); total number of agreements: ∑nᵢ²; total number of disagreements: ∑nᵢ(n − nᵢ) = n² − ∑nᵢ² === Example === This terminology is best understood through an example. In the example used in the second chapter of Skousen (1989), each context consists of three variables with potential values 0–3: Variable 1: 0,1,2,3; Variable 2: 0,1,2,3; Variable 3: 0,1,2,3. The two outcomes for the dataset are e and r, and the exemplars are:
3 1 0 e
0 3 2 r
2 1 0 r
2 1 2 r
3 1 1 r
We define a network of pointers like so: The solid lines represent pointers between exemplars with matching outcomes; the dotted lines represent pointers between exemplars with non-matching outcomes. The statistics for this example are as follows: n = 5; nᵣ = 4; nₑ = 1; total number of pairings: n² = 25; number of agreements for outcome r: nᵣ² = 16; number of agreements for outcome e: nₑ² = 1; number of disagreements for outcome r: nᵣ(n − nᵣ) = 4; number of disagreements for outcome e: nₑ(n − nₑ) = 4; total number of agreements:
{"page_id": 5481056, "title": "Analogical modeling"}
Renaissance, many popular textbooks were published to cover the practical calculations for commerce. The use of abacuses also became widespread in this period. In the 16th century, the mathematician Gerolamo Cardano conceived the concept of complex numbers as a way to solve cubic equations. The first mechanical calculators, such as Blaise Pascal's calculator and Gottfried Wilhelm Leibniz's stepped reckoner, were developed in the 17th century and greatly facilitated complex mathematical calculations. The 17th century also saw the discovery of the logarithm by John Napier. In the 18th and 19th centuries, mathematicians such as Leonhard Euler and Carl Friedrich Gauss laid the foundations of modern number theory. Another development in this period concerned work on the formalization and foundations of arithmetic, such as Georg Cantor's set theory and the Dedekind–Peano axioms used as an axiomatization of natural-number arithmetic. Computers and electronic calculators were first developed in the 20th century. Their widespread use revolutionized both the accuracy and speed with which even complex arithmetic computations can be calculated. == In various fields == === Education === Arithmetic education forms part of primary education. It is one of the first forms of mathematics education that children encounter. Elementary arithmetic aims to give students a basic sense of numbers and to familiarize them with fundamental numerical operations like addition, subtraction, multiplication, and division. It is usually introduced in relation to concrete scenarios, like counting beads, dividing the class into groups of children of the same size, and calculating change when buying items. Common tools in early arithmetic education are number lines, addition and multiplication tables, counting blocks, and abacuses. 
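The change-calculation scenario mentioned above can be written as a small program; this is a toy greedy coin count using US denominations, chosen purely for illustration:

```python
# Count out change for a purchase, largest denomination first.
# Amounts are in cents to avoid floating-point rounding issues.
def make_change(paid_cents, price_cents, coins=(100, 25, 10, 5, 1)):
    change = paid_cents - price_cents
    breakdown = {}
    for coin in coins:
        # divmod gives how many of this coin fit, and what remains.
        breakdown[coin], change = divmod(change, coin)
    return breakdown

# Paying $5.00 for a $3.67 item leaves $1.33 in change.
result = make_change(500, 367)
# → {100: 1, 25: 1, 10: 0, 5: 1, 1: 3}
```

The greedy approach happens to be optimal for these denominations, which is part of what makes the exercise tractable for young students.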
Later stages focus on a more abstract understanding and introduce the students to different types of numbers, such as negative numbers, fractions, real numbers, and complex numbers. They further cover more advanced numerical operations, like
{"page_id": 3118, "title": "Arithmetic"}
resulting atrophy of the affected organ. This would occur most likely through each cell shrinking in size in response to the energy deficit (and/or in extreme situations from some cells dying via either apoptosis or necrosis, depending on location). This may occur as a result of enough ATP not being available to maintain cellular functions - notably, failure of the Na/K ATPase, resulting in a loss of the gradient to drive the Na/Ca antiporter, which normally keeps Ca²⁺ out of cells so it does not build up to toxic levels that will rupture cell lysosomes leading to apoptosis. An additional feature of a low-energy state is failure to maintain axonal transport via dynein/kinesin ATPases, which in many diseases results in neuronal injury to both the brain and/or periphery. === Pathology === Very little is known about the pathology of Hashimoto's encephalopathy. Autopsies of some individuals have shown lymphocytic vasculitis of venules and veins in the brainstem and a diffuse gliosis involving gray matter more than white matter. As mentioned above, autoantibodies to alpha-enolase associated with Hashimoto's encephalopathy have thus far been the most hypothesized mechanism of injury. 
== Diagnosis == === Laboratory and radiological findings ===
Increased liver enzyme levels (55% of cases)
Increased thyroid-stimulating hormone (55% of cases)
Increased erythrocyte sedimentation rate (25% of cases)
Cerebrospinal fluid findings:
Raised protein (25% of cases)
Negative for 14–3–3 protein
May contain antithyroid antibodies
Magnetic resonance imaging abnormalities consistent with encephalopathy (26% of cases)
Single photon emission computed tomography shows focal and global hypoperfusion (75% of cases)
Cerebral angiography is normal
Thyroid hormone abnormalities are common (>80% of cases):
Subclinical hypothyroidism (35% of cases)
Overt hypothyroidism (20% of cases)
Hyperthyroidism (5% of cases)
Euthyroid on levothyroxine (10% of cases)
Euthyroid not on levothyroxine (20% of cases)
Thyroid antibodies – both antithyroid peroxidase antibodies (anti-TPO, antithyroid microsomal antibodies,
{"page_id": 9729302, "title": "Hashimoto's encephalopathy"}
Presentation Server). In this arrangement, Citrix has access to key source code for the Windows platform, enabling its developers to improve the security and performance of the Terminal Services platform. In late December 2004 the two companies announced a five-year renewal of this arrangement to cover Windows Vista. == Server components == The key server component of RDS is Terminal Server (termdd.sys), which listens on TCP port 3389. When a Remote Desktop Protocol (RDP) client connects to this port, it is tagged with a unique SessionID and associated with a freshly spawned console session (Session 0, keyboard, mouse and character mode UI only). The login subsystem (winlogon.exe) and the GDI graphics subsystem are then initiated, which handle the job of authenticating the user and presenting the GUI. These executables are loaded in a new session, rather than the console session. When creating the new session, the graphics and keyboard/mouse device drivers are replaced with RDP-specific drivers: RdpDD.sys and RdpWD.sys. RdpDD.sys is the device driver; it captures the UI rendering calls into a format that is transmittable over RDP. RdpWD.sys acts as keyboard and mouse driver; it receives keyboard and mouse input over the TCP connection and presents them as keyboard or mouse inputs. It also allows creation of virtual channels, which allow other devices, such as disks, audio, printers, and COM ports, to be redirected, i.e., the channels act as replacement for these devices. The channels connect to the client over the TCP connection; as the channels are accessed for data, the client is informed of the request, which is then transferred over the TCP connection to the application. This entire procedure is done by the terminal server and the client, with the RDP mediating the correct transfer, and is entirely transparent to the applications. RDP communications are
{"page_id": 16575110, "title": "Remote Desktop Services"}
maintainable. Teams can get together and review tests and test practices to share effective techniques and catch bad habits. === Practices to avoid, or "anti-patterns" === Having test cases depend on system state manipulated from previously executed test cases (i.e., you should always start a unit test from a known and pre-configured state). Dependencies between test cases. A test suite where test cases are dependent upon each other is brittle and complex. Execution order should not be presumed. Basic refactoring of the initial test cases or structure of the UUT causes a spiral of increasingly pervasive impacts in associated tests. Interdependent tests. Interdependent tests can cause cascading false negatives. A failure in an early test case breaks a later test case even if no actual fault exists in the UUT, increasing defect analysis and debug efforts. Testing precise execution, timing or performance. Building "all-knowing oracles". An oracle that inspects more than necessary is more expensive and brittle over time. This very common error is dangerous because it causes a subtle but pervasive time sink across the complex project. Testing implementation details. Slow running tests. == Comparison and demarcation == === TDD and ATDD === Test-driven development is related to, but different from acceptance test–driven development (ATDD). TDD is primarily a developer's tool to help create a well-written unit of code (function, class, or module) that correctly performs a set of operations. ATDD is a communication tool between the customer, developer, and tester to ensure that the requirements are well-defined. TDD requires test automation. ATDD does not, although automation helps with regression testing. Tests used in TDD can often be derived from ATDD tests, since the code units implement some portion of a requirement. ATDD tests should be readable by the customer. TDD tests do not need to be. === TDD and
{"page_id": 357881, "title": "Test-driven development"}
performed within a given power budget. While device shrinkage can reduce some parasitic capacitances, the number of devices on an integrated circuit chip has increased more than enough to compensate for the reduced capacitance of each individual device. Some circuits – dynamic logic, for example – require a minimum clock rate in order to function properly, wasting "dynamic power" even when they perform no useful computation. Other circuits – most prominently the RCA 1802, but also several later chips such as the WDC 65C02, the Intel 80C85, the Freescale 68HC11 and some other CMOS chips – use "fully static logic", which has no minimum clock rate: the clock can be stopped and the circuit holds its state indefinitely. When the clock is stopped, such circuits use no dynamic power, but they still have a small static power consumption caused by leakage current. As circuit dimensions shrink, subthreshold leakage current becomes more prominent. This leakage current results in power consumption even when no switching is taking place (static power consumption). In modern chips, this current generally accounts for half the power consumed by the IC.

=== Reducing power loss ===

Loss from subthreshold leakage can be reduced by raising the threshold voltage and lowering the supply voltage. Both of these changes slow down the circuit significantly. To address this issue, some modern low-power circuits use dual supply voltages to improve speed on critical paths of the circuit and lower power consumption on non-critical paths. Some circuits even use different transistors (with different threshold voltages) in different parts of the circuit, in an attempt to further reduce power consumption without significant performance loss. Another method used to reduce power consumption is power gating: the use of sleep transistors to disable entire blocks when not in use. Systems that are dormant for long periods
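Why lowering the supply voltage saves so much power follows from the textbook CMOS switching-power approximation P ≈ αCV²f. A small sketch with entirely made-up component values:

```python
def dynamic_power(activity, capacitance, vdd, freq):
    # Standard CMOS switching-power approximation: P = a * C * V^2 * f,
    # where a is the activity factor, C the switched capacitance,
    # V the supply voltage and f the clock frequency.
    return activity * capacitance * vdd ** 2 * freq

# Hypothetical values: 10% activity factor, 1 nF total switched
# capacitance, 2 GHz clock.
p_at_1v2 = dynamic_power(0.1, 1e-9, 1.2, 2e9)   # 0.288 W
p_at_0v9 = dynamic_power(0.1, 1e-9, 0.9, 2e9)   # 0.162 W

# Because power scales with V^2, dropping Vdd from 1.2 V to 0.9 V cuts
# dynamic power to (0.9/1.2)^2 = 56.25% of its original value.
```

The quadratic dependence on V is also why the dual-supply-voltage technique in the text pays off: only the speed-critical paths need the higher, more power-hungry voltage.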
{"page_id": 16390520, "title": "Low-power electronics"}
lead to poor estimates and large standard errors. As an example, say that we notice Alice wears her boots whenever it is raining, and that there are only puddles when it rains. Then we cannot tell whether she wears boots to keep the rain from landing on her feet, or to keep her feet dry if she steps in a puddle. The problem with trying to identify how much each of the two variables matters is that they are confounded with each other: our observations are explained equally well by either variable, so we do not know which of them causes the observed correlations. There are two ways to discover this information:
Using prior information or theory. For example, if we notice Alice never steps in puddles, we can reasonably argue that puddles are not why she wears boots, as she does not need the boots to avoid puddles.
Collecting more data. If we observe Alice enough times, we will eventually see her on days where there are puddles but no rain (e.g., because the rain stops before she leaves home).
This confounding becomes substantially worse when researchers attempt to ignore or suppress it by excluding these variables from the regression (see Multicollinearity § Misuse). Excluding multicollinear variables from regressions will invalidate causal inference and produce worse estimates by removing important confounders.

=== Remedies ===

There are many ways to prevent multicollinearity from affecting results by planning ahead of time. However, these methods all require a researcher to decide on a procedure and analysis before data has been collected (see post hoc analysis and Multicollinearity § Misuse).

==== Regularized estimators ====

Many regression methods are naturally "robust" to multicollinearity and generally perform better than ordinary least squares regression, even when variables are independent. Regularized regression techniques such as ridge regression, LASSO, elastic
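How a regularized estimator behaves under near-perfect collinearity can be sketched with synthetic data. This uses NumPy only, with ridge regression computed from its closed form (X'X + λI)⁻¹X'y rather than a statistics library; the data-generating process is entirely made up:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
x1 = rng.normal(size=n)
x2 = x1 + rng.normal(scale=1e-3, size=n)      # nearly collinear with x1
X = np.column_stack([x1, x2])
y = x1 + rng.normal(scale=0.1, size=n)        # true combined effect is 1

# OLS via the normal equations: X'X is nearly singular, so the two
# individual coefficients are unstable, with inflated standard errors.
beta_ols = np.linalg.solve(X.T @ X, X.T @ y)

# Ridge: adding lam * I to X'X conditions the problem and shrinks the
# coefficient vector toward zero (the closed form of ridge regression).
lam = 1.0
beta_ridge = np.linalg.solve(X.T @ X + lam * np.eye(2), X.T @ y)

# Neither fit can say whether x1 or x2 "causes" y — that is exactly the
# boots-vs-puddles confounding above. What stays well estimated is the
# combined effect, i.e. the sum of the two coefficients.
```

By construction the ridge solution never has a larger norm than OLS, which is the sense in which it tames the variance inflation; the identification problem itself, as the text stresses, is not solved by any estimator.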
{"page_id": 1486691, "title": "Multicollinearity"}