https://en.wikipedia.org/wiki/Complement%20control%20protein
Complement control proteins are proteins that interact with components of the complement system. The complement system is tightly regulated by a network of proteins known as "regulators of complement activation (RCA)" that help distinguish target cells as "self" or "non-self." A subset of this family of proteins, the complement control proteins (CCP), is characterized by domains of conserved repeats that direct interaction with components of the complement system. These "Sushi" domains have been used to identify other putative members of the CCP family. There are many other RCA proteins that do not fall into this family. Most CCPs prevent activation of the complement system on the surface of host cells and protect host tissues against damage caused by autoimmunity. Because of this, these proteins play important roles in autoimmune disorders and cancers. Members Most of the well-studied proteins within this family can be categorized in two classes: Membrane-bound complement regulators Membrane Cofactor Protein, MCP (CD46) Decay Accelerating Factor, DAF (CD55) Protectin (CD59) Complement C3b/C4b Receptor 1, CR1 (CD35) Complement Regulator of the Immunoglobulin Superfamily, CRIg Soluble complement regulators Factor H C4-Binding Protein (C4bp) Other proteins with characteristic CCP domains have been identified, including members of the sushi domain containing (SUSD) protein family and the human CUB and sushi multiple domains (CSMD) family. Mechanisms of protection Every cell in the human body is protected by one or more of the membrane-associated RCA proteins, CR1, DAF or MCP. Factor H and C4BP circulate in the plasma and are recruited to self-surfaces through binding to host-specific polysaccharides such as the glycosaminoglycans. Most CCPs function by preventing convertase activity. Convertases, specifically the C3 convertases C3b.Bb and C4b.2a, are the enzymes that drive complement activation by activating C3b, a central component of the complement syst
https://en.wikipedia.org/wiki/International%20Commission%20on%20Mathematical%20Instruction
The International Commission on Mathematical Instruction (ICMI) is a commission of the International Mathematical Union and is an internationally acting organization focussing on mathematics education. ICMI was founded in 1908 at the International Congress of Mathematicians (ICM) in Rome and aims to improve teaching standards around the world through programs, workshops, initiatives and publications. It also aims to work extensively with developing countries to raise teaching standards and improve education, which can improve quality of life and aid those countries. History ICMI was founded at the ICM, and mathematician Felix Klein was elected first president of the organisation. Henri Fehr and Charles Laisant created the international research journal L'Enseignement Mathématique in 1899, and from early on this journal became the official organ of ICMI. A bulletin is published twice a year by ICMI, and from December 1995 this bulletin has been available at the organisation's official website, in their 'digital library'. In the years between World War I and World War II there was little activity in the organization, but in 1952 ICMI was reconstituted. At this time the organization was reorganized, and it became an official commission of the International Mathematical Union (IMU). As a scientific organization, IMU is a member of the International Council for Science (ICSU). Although ICMI follows the general principles of IMU and ICSU, the organization has a large degree of autonomy. Structure All countries that are members of IMU are automatically members of ICMI; membership is also possible for non-IMU members. Currently, there are 90 member states of ICMI. Each member state has the right to appoint a national representative. As a commission, ICMI has two main bodies: the Executive Committee (EC), and the national representatives from the member countries. Together, these two constitute the General Assembly (GA) of ICMI. The GA is summoned every four years in connectio
https://en.wikipedia.org/wiki/Azimuth%20Systems
Azimuth Systems was a privately held company located near Boston, Massachusetts; in 2016, the company was acquired by Anritsu. The company's primary products include wireless channel emulators and wireless test equipment for LTE, WiMAX, 2G/3G cellular, and Wi-Fi networks. In 2009 Azimuth Systems wrote a white paper entitled Improving 4G Wireless Broadband Product Design through Effective Channel Emulation Testing. In April 2010, Azimuth Systems was awarded the Best in Test award from Test & Measurement World Magazine. Also, in 2009, Azimuth Systems was awarded the 4G Wireless Evolution LTE Visionary Award by TMC.net. In September 2016, test-solutions provider Anritsu Corporation acquired Azimuth Systems.
https://en.wikipedia.org/wiki/Guaiacol
Guaiacol is an organic compound with the formula C6H4(OH)(OCH3). It is a phenolic compound containing a methoxy functional group. Guaiacol appears as a viscous colorless oil, although aged or impure samples are often yellowish. It occurs widely in nature and is a common product of the pyrolysis of wood. Occurrence Guaiacol is usually derived from guaiacum or wood creosote. It is produced by a variety of plants. It is also found in essential oils from celery seeds, tobacco leaves, orange leaves, and lemon peels. The pure substance is colorless, but samples become yellow upon exposure to air and light. The compound is present in wood smoke, resulting from the pyrolysis of lignin. The compound contributes to the flavor of many substances such as whiskey and roasted coffee. Preparation The compound was first isolated by Otto Unverdorben in 1826. Guaiacol is produced by methylation of o-catechol, for example using potash and dimethyl sulfate: C6H4(OH)2 + (CH3O)2SO2 → C6H4(OH)(OCH3) + HO(CH3O)SO2 Laboratory methods Guaiacol can be prepared by diverse routes in the laboratory. o-Anisidine, derived in two steps from anisole, can be hydrolyzed via its diazonium derivative. Guaiacol can be synthesized by the dimethylation of catechol followed by selective mono-demethylation. C6H4(OCH3)2 + C2H5SNa → C6H4(OCH3)(ONa) + C2H5SCH3 Uses and chemical reactions Syringyl/guaiacyl ratio Lignin, comprising a major fraction of biomass, is sometimes classified according to the guaiacyl component. Pyrolysis of lignin from gymnosperms gives more guaiacol, resulting from removal of the propenyl group of coniferyl alcohol. These lignins are said to have a high guaiacyl (or G) content. In contrast, lignins derived from sinapyl alcohol afford syringol. A high syringyl (or S) content is indicative of lignin from angiosperms. Sugarcane bagasse is one useful source of guaiacol; pyrolysis of the bagasse lignins yields compounds including guaiacol, 4-methylguaiacol and
https://en.wikipedia.org/wiki/Cheminformatics
Cheminformatics (also known as chemoinformatics) refers to the use of physical chemistry theory with computer and information science techniques—so called "in silico" techniques—in application to a range of descriptive and prescriptive problems in the field of chemistry, including in its applications to biology and related molecular fields. Such in silico techniques are used, for example, by pharmaceutical companies and in academic settings to aid and inform the process of drug discovery, for instance in the design of well-defined combinatorial libraries of synthetic compounds, or to assist in structure-based drug design. The methods can also be used in chemical and allied industries, and such fields as environmental science and pharmacology, where chemical processes are involved or studied. History Cheminformatics has been an active field in various guises since the 1970s and earlier, with activity in academic departments and commercial pharmaceutical research and development departments. The term chemoinformatics was defined in its application to drug discovery by F.K. Brown in 1998: Chemoinformatics is the mixing of those information resources to transform data into information and information into knowledge for the intended purpose of making better decisions faster in the area of drug lead identification and optimization. Since then, both terms, cheminformatics and chemoinformatics, have been used, although, lexicographically, cheminformatics appears to be more frequently used, despite academics in Europe declaring for the variant chemoinformatics in 2006. In 2009, the Journal of Cheminformatics, a prominent Springer journal in the field, was founded by transatlantic executive editors. Background Cheminformatics combines the scientific working fields of chemistry, computer science, and information science—for example in the areas of topology, chemical graph theory, information retrieval and data mining in the chemical space. Cheminformatics can also be ap
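As a brief, hedged illustration of the kind of in silico task described above (not taken from the article), the following Python sketch uses the open-source RDKit toolkit to compare two small molecules by fingerprint similarity; the SMILES strings, fingerprint radius, and bit length are arbitrary example choices.

# Minimal cheminformatics sketch: structural similarity of two molecules.
# Assumes the open-source RDKit package is installed (pip install rdkit).
from rdkit import Chem, DataStructs
from rdkit.Chem import AllChem

mol_a = Chem.MolFromSmiles("COc1ccccc1O")  # guaiacol, as an arbitrary example
mol_b = Chem.MolFromSmiles("Oc1ccccc1")    # phenol

# Morgan (circular) fingerprints, radius 2, 2048 bits.
fp_a = AllChem.GetMorganFingerprintAsBitVect(mol_a, 2, nBits=2048)
fp_b = AllChem.GetMorganFingerprintAsBitVect(mol_b, 2, nBits=2048)

# Tanimoto similarity in [0, 1]; higher values indicate more shared substructure.
print(DataStructs.TanimotoSimilarity(fp_a, fp_b))

Ranking a library of candidate compounds by such similarity scores is one simple instance of the data mining in chemical space mentioned above.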
https://en.wikipedia.org/wiki/Padding%20%28cryptography%29
In cryptography, padding is any of a number of distinct practices which all include adding data to the beginning, middle, or end of a message prior to encryption. In classical cryptography, padding may include adding nonsense phrases to a message to obscure the fact that many messages end in predictable ways, e.g. sincerely yours. Classical cryptography Official messages often start and end in predictable ways: My dear ambassador, Weather report, Sincerely yours, etc. The primary use of padding with classical ciphers is to prevent the cryptanalyst from using that predictability to find known plaintext that aids in breaking the encryption. Random-length padding also prevents an attacker from knowing the exact length of the plaintext message. A famous example of classical padding which caused a great misunderstanding is "the world wonders" incident, which nearly caused an Allied loss at the WWII Battle off Samar, part of the larger Battle of Leyte Gulf. In that example, Admiral Chester Nimitz, the Commander in Chief, U.S. Pacific Fleet in World War II, sent a message to Admiral Bull Halsey, commander of the U.S. Third Fleet, at the Battle of Leyte Gulf on October 25, 1944, asking for the location of Task Force Thirty Four. Padding phrases and metadata were added to both ends of the message before transmission. Halsey's radio operator mistook some of the padding for the message itself, and so Admiral Halsey ended up reading a version that still contained the trailing padding phrase. Admiral Halsey interpreted the padding phrase "the world wonders" as a sarcastic reprimand, which caused him to have an emotional outburst and then lock himself in his bridge and sulk for an hour before he moved his forces to assist at the Battle off Samar. Halsey's radio operator should have been tipped off by the letters RR that "the world wonders" was padding; all other radio operators who received Admiral Nimitz's message correctly removed both padding phrases. Many classical ciphers arrange the plaintext into particular patterns (e.g., squares, rectangles, etc.)
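As a hedged sketch of the random-length padding idea mentioned above (an illustrative scheme, not a standardized one), the following Python snippet appends a random number of bytes to a message and records the pad length in the final byte, so the receiver can strip it after decryption:

# Toy random-length padding to obscure the true plaintext length before encryption.
import secrets

def pad(message: bytes, max_extra: int = 32) -> bytes:
    n = secrets.randbelow(max_extra) + 1          # random pad length, 1..max_extra bytes
    # n - 1 random filler bytes, then one byte recording the total pad length.
    return message + secrets.token_bytes(n - 1) + bytes([n])

def unpad(padded: bytes) -> bytes:
    return padded[:-padded[-1]]                   # strip the number of bytes named in the last byte

msg = b"Sincerely yours"
padded = pad(msg)
assert unpad(padded) == msg
print(len(msg), len(padded))                      # the padded length varies from run to run

Because the padded length varies from message to message, an eavesdropper who sees only ciphertext lengths learns less about the underlying plaintext.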
https://en.wikipedia.org/wiki/Bill%20Gosper
Ralph William Gosper Jr. (born April 26, 1943), known as Bill Gosper, is an American mathematician and programmer. Along with Richard Greenblatt, he may be considered to have founded the hacker community, and he holds a place of pride in the Lisp community. The Gosper curve and Gosper's algorithm are named after him. Becoming a hacker In high school, Gosper was interested in model rockets until one of his friends was injured in a rocketry accident and contracted a fatal brain infection. Gosper enrolled at MIT in 1961, and he received his bachelor's degree in mathematics from MIT in 1965 despite becoming disaffected with the mathematics department because of its anti-computer attitude. In his second year at MIT, Gosper took a programming course from John McCarthy and became affiliated with the MIT AI Lab. His contributions to computational mathematics include HAKMEM and the MIT Maclisp system. He made major contributions to Macsyma, Project MAC's computer algebra system. Gosper later worked with Symbolics and Macsyma, Inc. on commercial versions of Macsyma. In 1974, he moved to Stanford University, where he lectured and worked with Donald Knuth. Since that time, he has worked at or consulted for Xerox PARC, Symbolics, Wolfram Research, the Lawrence Livermore Laboratory, and Macsyma Inc. Key contributions Conway's Game of Life He became intensely interested in the Game of Life shortly after John Horton Conway had proposed it. Conway conjectured the existence of infinitely growing patterns, and offered a reward for an example. Gosper was the first to find such a pattern, the glider gun, and won the prize. Gosper was also the originator of the Hashlife algorithm that can speed up the computation of Life patterns by many orders of magnitude. Packing problems Gosper has created numerous packing problem puzzles, such as "Twubblesome Twelve". Symbolic computation Gosper was the first person to realize the possibilities of symbolic computation on a comput
https://en.wikipedia.org/wiki/Proxemics
Proxemics is the study of human use of space and the effects that population density has on behavior, communication, and social interaction. Proxemics is one among several subcategories in the study of nonverbal communication, including haptics (touch), kinesics (body movement), vocalics (paralanguage), and chronemics (structure of time). Edward T. Hall, the cultural anthropologist who coined the term in 1963, defined proxemics as "the interrelated observations and theories of humans' use of space as a specialized elaboration of culture". In his foundational work on proxemics, The Hidden Dimension, Hall emphasized the impact of proxemic behavior (the use of space) on interpersonal communication. According to Hall, the study of proxemics is valuable in evaluating not only the way people interact with others in daily life, but also "the organization of space in [their] houses and buildings, and ultimately the layout of [their] towns". Proxemics remains a hidden component of interpersonal communication that is uncovered through observation and strongly influenced by culture. Human distances The distance surrounding a person forms a space. The space within intimate distance and personal distance is called personal space. The space within social distance and out of personal distance is called social space, and the space within public distance is called public space. Personal space is the region surrounding a person which they regard as psychologically theirs. Most people value their personal space and feel discomfort, anger, or anxiety when their personal space is encroached upon. Permitting a person to enter personal space and entering somebody else's personal space are indicators of perception of those people's relationship. An intimate zone is reserved for close friends, lovers, children and close family members. Another zone is used for conversations with friends, to chat with associates, and in group discussions. A further zone is reserved for strangers, newly form
https://en.wikipedia.org/wiki/Phosphate%20solubilizing%20bacteria
Phosphate solubilizing bacteria (PSB) are beneficial bacteria capable of solubilizing inorganic phosphorus from insoluble compounds. The P-solubilization ability of rhizosphere microorganisms is considered to be one of the most important traits associated with plant phosphate nutrition. It is generally accepted that the mechanism of mineral phosphate solubilization by PSB strains is associated with the release of low molecular weight organic acids, whose hydroxyl and carboxyl groups chelate the cations (positively charged ions) bound to phosphate, thereby converting it into soluble forms. PSB have been introduced to the agricultural community as phosphate biofertilizers. Phosphorus (P) is one of the major essential macronutrients for plants and is applied to soil in the form of phosphate fertilizers. However, a large portion of soluble inorganic phosphate which is applied to the soil as chemical fertilizer is immobilized rapidly and becomes unavailable to plants. Currently, the main purpose in managing soil phosphorus is to optimize crop production and minimize P loss from soils. PSB have attracted the attention of agriculturists as soil inoculants to improve plant growth and yield. When PSB are used with rock phosphate, they can save about 50% of the crop requirement of phosphatic fertilizer. The use of PSB as inoculants increases P uptake by plants. Simple inoculation of seeds with PSB gives crop yield responses equivalent to 30 kg P2O5/ha, or 50 percent of the need for phosphatic fertilizers. Alternatively, PSB can be applied through fertigation or in hydroponic operations. Many different strains of these bacteria have been identified as PSB; the Pantoea agglomerans (P5), Microbacterium laevaniformans (P7) and Pseudomonas putida (P13) strains are highly efficient solubilizers of insoluble phosphate. Recently, researchers at Colorado State University demonstrated that a consortium of four bacteria synergistically solubilize phosphorus
https://en.wikipedia.org/wiki/Nuevo%20Milenio%20State%20Forest
Nuevo Milenio State Forest (Spanish: Bosque Estatal del Nuevo Milenio or Bosque Urbano del Nuevo Milenio) is one of the 20 forests that make up the public forest system of Puerto Rico. The forest is located east of the University of Puerto Rico Botanical Garden in the Sabana Llana Sur district of San Juan, making it one of the two state forests located within the capital's municipal boundaries (the other being San Patricio State Forest). History The area where the forest now stands had originally been deforested and was the site of several quarries. The forest reserve was first proclaimed in 1998 with the intention of preserving one of the last green areas in San Juan as a secondary forest, since the city is very densely populated and has lost most of its forests to urban sprawl. This was achieved thanks to the efforts of local residents, academics and environmentalists. In 2003 the forest was added to the San Juan Ecological Corridor management zone, also managed by the municipality of San Juan, which designates the area as ecologically important since it forms part of the hydrological basin of the San Juan Bay estuary. Between 1998 and 2005 the forest was used for research on the impact of Hurricane Georges on the destruction and regrowth of the forest. Ecology Flora The forest is very characteristic of the secondary forest ecosystem found throughout Puerto Rico. Some native tree species found in the forest are the yarumo (Cecropia schreberiana), the West Indian locust tree (Hymenaea courbaril), the Puerto Rican royal palm tree (Roystonea borinquena), and the pink trumpet tree (Tabebuia heterophylla) or Puerto Rican oak as it is colloquially known. Common introduced trees found in the forest are the flea tree (Albizia lebbeck), the karoi tree (Albizia procera) and the Malay pterocarpus (Pterocarpus indicus). Fauna The forest is home to native species such as the zenaida dove (Zenaida aurita), the scaly-naped pigeon (Patagioenas squamosa
https://en.wikipedia.org/wiki/Predual
In mathematics, the predual of an object D is an object P whose dual space is D. For example, the predual of the space of bounded operators is the space of trace class operators, and the predual of the space L∞(R) of essentially bounded functions on R is the Banach space L1(R) of integrable functions.
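As a standard illustration of the L1/L∞ example (background functional analysis, not spelled out in the article), the identification of L∞(R) with the dual of L1(R) comes from the pairing

\[ \langle g, f \rangle = \int_{\mathbf{R}} f(x)\, g(x)\, dx, \qquad f \in L^{1}(\mathbf{R}),\ g \in L^{\infty}(\mathbf{R}), \]

which satisfies \( |\langle g, f \rangle| \le \|f\|_{1}\, \|g\|_{\infty} \), so each g in L∞(R) acts as a bounded linear functional on L1(R); this is the sense in which L1(R) is a predual of L∞(R).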
https://en.wikipedia.org/wiki/Ignition%20timing
In a spark ignition internal combustion engine, ignition timing is the timing, relative to the current piston position and crankshaft angle, of the release of a spark in the combustion chamber near the end of the compression stroke. The timing of the spark needs to be advanced (or retarded) because fuel does not burn completely the instant the spark fires. The combustion gases take a period of time to expand, and the angular or rotational speed of the engine can lengthen or shorten the time frame in which the burning and expansion should occur. In the vast majority of cases, the angle will be described as a certain angle advanced before top dead center (BTDC). Advancing the spark BTDC means that the spark is energized prior to the point where the combustion chamber reaches its minimum size, since the purpose of the power stroke in the engine is to force the combustion chamber to expand. Sparks occurring after top dead center (ATDC) are usually counter-productive (producing wasted spark, back-fire, engine knock, etc.) unless there is need for a supplemental or continuing spark prior to the exhaust stroke. Setting the correct ignition timing is crucial in the performance of an engine. Sparks occurring too soon or too late in the engine cycle are often responsible for excessive vibrations and even engine damage. The ignition timing affects many variables including engine longevity, fuel economy, and engine power. Many variables also affect what the "best" timing is. Modern engines that are controlled in real time by an engine control unit use a computer to control the timing throughout the engine's RPM and load range. Older engines that use mechanical distributors rely on inertia (by using rotating weights and springs) and manifold vacuum in order to set the ignition timing throughout the engine's RPM and load range. Early cars required the driver to adjust timing via controls according to driving conditions, but this is now automated. There are many factors tha
https://en.wikipedia.org/wiki/Mineral%20wool
Mineral wool is any fibrous material formed by spinning or drawing molten mineral or rock materials such as slag and ceramics. Applications of mineral wool include thermal insulation (as both structural insulation and pipe insulation), filtration, soundproofing, and hydroponic growth medium. Naming Mineral wool is also known as mineral fiber, mineral cotton, man-made mineral fiber (MMMF), and man-made vitreous fiber (MMVF). Specific mineral wool products are stone wool and slag wool. In Europe the term also includes glass wool, which, together with ceramic fiber, is an entirely artificial fiber that can be made into different shapes and is spiky to the touch. History Slag wool was first made in 1840 in Wales by Edward Parry, "but no effort appears to have been made to confine the wool after production; consequently it floated about the works with the slightest breeze, and became so injurious to the men that the process had to be abandoned". A method of making mineral wool was patented in the United States in 1870 by John Player and first produced commercially in 1871 at Georgsmarienhütte in Osnabrück, Germany. The process involved blowing a strong stream of air across a falling flow of liquid iron slag, which was similar to the natural occurrence of fine strands of volcanic slag from Kilauea called Pele's hair, created by strong winds blowing apart the slag during an eruption. According to a mineral wool manufacturer, the first mineral wool intended for high-temperature applications was invented in the United States in 1942 but was not commercially viable until approximately 1953. More forms of mineral wool became available in the 1970s and 1980s. High-temperature mineral wool High-temperature mineral wool is a type of mineral wool created for use as high-temperature insulation and generally defined as being resistant to temperatures above 1,000 °C. This type of insulation is usually used in industrial furnaces and foundries. Because high-temperature mineral
https://en.wikipedia.org/wiki/Badge%20of%20shame
A badge of shame, also a symbol of shame, a mark of shame or a stigma, is typically a distinctive symbol required to be worn by a specific group or an individual for the purpose of public humiliation, ostracism or persecution. The term is also used metaphorically, especially in a pejorative sense, to characterize something associated with a person or group as shameful. In England, under the Poor Act 1697, paupers in receipt of parish relief were required to wear a badge of blue or red cloth on the shoulder of the right sleeve in an open and visible manner, in order to discourage people from collecting relief unless they were desperate, as while many would be willing to collect relief, few would be willing to do so if required to wear the "shameful" mark of the poor in public. The yellow badge that Jews were required to wear in parts of Europe during the Middle Ages, and later in Nazi Germany and German-occupied Europe, was effectively a badge of shame, as well as identification. Other identifying marks may include making shamed people go barefoot. The biblical "Mark of Cain" can be interpreted as synonymous with a badge of shame. History Depilation Punitive depilation of men, especially burning off pubic hair, was intended as a mark of shame in ancient Mediterranean cultures where male body hair was valued. Women who committed adultery have also been forced to wear specific icons or marks, or had their hair shorn, as a badge of shame. Many women who fraternized with the occupiers in German-occupied Europe had their heads shaved by angry mobs of their peers after liberation by the Allies of World War II. During World War II, the Nazis also used head shaving as a mark of shame to punish Germans like the youthful non-conformists known as the Edelweiss Pirates. Clothing In Ancient Rome, both men and women originally wore the toga, but over time matrons adopted the stola as the preferred form of dress, while prostitutes retained the toga. Later, under the Lex J
https://en.wikipedia.org/wiki/Spatial%20reference%20system
A spatial reference system (SRS) or coordinate reference system (CRS) is a framework used to precisely measure locations on the surface of Earth as coordinates. It is thus the application of the abstract mathematics of coordinate systems and analytic geometry to geographic space. A particular SRS specification (for example, "Universal Transverse Mercator WGS 84 Zone 16N") comprises a choice of Earth ellipsoid, horizontal datum, map projection (except in the geographic coordinate system), origin point, and unit of measure. Thousands of coordinate systems have been specified for use around the world or in specific regions and for various purposes, necessitating transformations between different SRS. Although they date to the Hellenic Period, spatial reference systems are now a crucial basis for the sciences and technologies of geoinformatics, including cartography, geographic information systems, surveying, remote sensing, and civil engineering. This has led to their standardization in international specifications such as the EPSG codes and ISO 19111:2019 Geographic information—Spatial referencing by coordinates, prepared by ISO/TC 211, also published by the Open Geospatial Consortium as Abstract Specification, Topic 2: Spatial referencing by coordinates. Types of systems The thousands of spatial reference systems used today are based on a few general strategies, which have been defined in the EPSG, ISO, and OGC standards: Geographic coordinate system (or geodetic) A spherical coordinate system measuring locations directly on the Earth (modeled as a sphere or ellipsoid) using latitude (degrees north or south of the equator) and longitude (degrees west or east of a prime meridian). Geocentric coordinate system (or Earth-centered Earth-fixed) A three-dimensional Cartesian coordinate system that models the Earth as a three-dimensional object, measuring locations from a center point, usually the center of mass of the Earth, along x, y, and z axes aligned with the equ
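As a hedged, concrete illustration (not part of the article), the Python sketch below uses the pyproj library, a common interface to the PROJ transformation engine, to convert a geographic WGS 84 coordinate (EPSG:4326) into the projected SRS named above, UTM Zone 16N on WGS 84 (EPSG:32616); the sample point is arbitrary.

# Transforming a coordinate between two spatial reference systems with pyproj.
from pyproj import Transformer

# Geographic WGS 84 (latitude/longitude) to projected UTM Zone 16N, WGS 84.
transformer = Transformer.from_crs("EPSG:4326", "EPSG:32616", always_xy=True)

lon, lat = -86.5, 39.2                       # arbitrary example point, in degrees
easting, northing = transformer.transform(lon, lat)
print(f"{easting:.1f} m east, {northing:.1f} m north")

The EPSG codes are exactly the standardized identifiers mentioned above; the transformation itself combines the datum, projection, and unit choices that make up each SRS.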
https://en.wikipedia.org/wiki/Babylon.js
Babylon.js is a real-time 3D engine, distributed as a JavaScript library, for displaying 3D graphics in a web browser via HTML5. The source code is available on GitHub and distributed under the Apache License 2.0. History and progress It was initially released in 2013 under the Microsoft Public License, having been developed by two Microsoft employees. David Catuhe created the 3D game engine and was helped by David Rousset (VR, Gamepad and IndexedDB support), mostly in their free time as a side project. They were also helped by artist Michel Rousseau, who contributed several 3D scenes. It is based on an earlier game engine for Silverlight's WPF-based 3D system. Catuhe's side project then became his full-time job and his team's primary focus. In 2015, it was presented at the WebGL Conference in Paris. As of 2018, it had more than 190 contributors, and it has been promoted and applied in games, including one by Ubisoft. Its use has developed into a variety of fields such as: virtual worlds; crime data visualization; education in medicine; fashion avatars; managing Kinect on the web; military training; modelling historic sites; product design; RDF graphs; urban underground infrastructure modelling. Technical description The source code is written in TypeScript and then compiled into a JavaScript version. The JavaScript version is available to end users via NPM or CDN, who then code their projects in JavaScript accessing the engine's API. The Babylon.js 3D engine and user code are natively interpreted by all web browsers supporting the HTML5 standard and WebGL to undertake the 3D rendering. Modeling methodology The 3D modeling process used is that of polygon modeling with triangular faces to be represented by shell models. Limited use of constructive solid geometry is possible, though only as a transitional method to create the union, subtraction and intersection of shell models. Once created, models are rendered on an HTML5 canvas element using a shader program which determi
https://en.wikipedia.org/wiki/Child%20psychopathology
Child psychopathology refers to the scientific study of mental disorders in children and adolescents. Oppositional defiant disorder, attention-deficit hyperactivity disorder, and autism spectrum disorder are examples of psychopathology that are typically first diagnosed during childhood. Mental health providers who work with children and adolescents are informed by research in developmental psychology, clinical child psychology, and family systems. Lists of child and adult mental disorders can be found in the International Statistical Classification of Diseases and Related Health Problems, 10th Edition (ICD-10), published by the World Health Organization (WHO) and in the Diagnostic and Statistical Manual of Mental Disorders, Fifth Edition (DSM-5), published by the American Psychiatric Association (APA). In addition, the Diagnostic Classification of Mental Health and Developmental Disorders of Infancy and Early Childhood (DC: 0-3R) is used in assessing mental health and developmental disorders in children up to age five. Causes The etiology of child psychopathology has many explanations which differ from case to case. Many psychopathological disorders in children involve genetic and physiological mechanisms, though there are still many without any physical grounds. Diagnosing psychopathology in children is daunting: it is influenced by development and context in addition to the traditional sources of information, so it is imperative that multiple sources of data be gathered. Interviews with parents about school and other settings are inadequate on their own; reports from teachers or direct observation by the professional are also critical. The disorders with physical or biological mechanisms are easier to diagnose in children and are often diagnosed earlier in childhood. As psychopathy exists on a spectrum, the initial indications of the disorder can differ greatly. Certain children may exhibit subtle indications as early as two or three years old, while in
https://en.wikipedia.org/wiki/Hosanna-Tabor%20Evangelical%20Lutheran%20Church%20%26%20School%20v.%20Equal%20Employment%20Opportunity%20Commission
Hosanna-Tabor Evangelical Lutheran Church and School v. Equal Employment Opportunity Commission, 565 U.S. 171 (2012), was a United States Supreme Court case in which the court unanimously ruled that federal discrimination laws do not apply to religious organizations' selection of religious leaders. Background In 1999, Cheryl Perich started teaching at Hosanna-Tabor Evangelical Lutheran Church and School, an affiliate of the Lutheran Church–Missouri Synod in Redford, Michigan. In addition to other duties, Perich led students in prayer and taught a religion class several days a week. In 2004, Perich left on disability and was diagnosed with narcolepsy. In 2005, after she was cleared by her doctors to go back to work, the school told her that it had already hired someone else. Perich threatened to file suit, and the school promptly fired her for "insubordination and disruptive behavior". Perich sued for unlawful dismissal under the federal Americans with Disabilities Act. Opinion of the Court All nine Supreme Court justices agreed with the decision written by Chief Justice John Roberts that "the Establishment Clause prevents the Government from appointing ministers, and the Free Exercise Clause prevents it from interfering with the freedom of religious groups to select their own". Moreover, because the respondent in this case was a minister within the meaning of the ministerial exception, the First Amendment requires dismissal of her employment discrimination suit against her religious employer. The decision explicitly left open the question of whether religious organizations could be sued for other reasons: "We express no view on whether the exception bars other types of suits, including actions by employees alleging breach of contract or tortious conduct". The Court also developed multiple factors to determine whether an employee qualifies as a minister within the meaning of the ministerial exception. These factors include how the school viewed the employee, the e
https://en.wikipedia.org/wiki/Dual%20control%20theory
Dual control theory is a branch of control theory that deals with the control of systems whose characteristics are initially unknown. It is called dual because in controlling such a system the controller's objectives are twofold: (1) Action: To control the system as well as possible based on current system knowledge (2) Investigation: To experiment with the system so as to learn about its behavior and control it better in the future. These two objectives may be partly in conflict. In the context of reinforcement learning, this is known as the exploration-exploitation trade-off (as in, for example, the multi-armed bandit problem). Dual control theory was developed by Alexander Aronovich Fel'dbaum in 1960. He showed that in principle the optimal solution can be found by dynamic programming, but this is often impractical; as a result a number of methods for designing sub-optimal dual controllers have been devised. Example To use an analogy: if you are driving a new car you want to get to your destination cheaply and smoothly, but you also want to see how well the car accelerates, brakes and steers so as to get a better feel for how to drive it, so you will do some test manoeuvres for this purpose. Similarly a dual controller will inject a so-called probing (or exploration) signal into the system that may detract from short-term performance but will improve control in the future.
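The exploration-exploitation tension can be made concrete with a toy sketch. The Python snippet below implements a simple epsilon-greedy multi-armed bandit, a related but much simpler scheme than Fel'dbaum's optimal dual controller: with probability epsilon it investigates (probes a random arm to learn), and otherwise it acts on its current estimates. All numbers are illustrative assumptions.

import random

true_means = [0.2, 0.5, 0.8]   # unknown to the controller; assumed here only for the simulation
estimates = [0.0, 0.0, 0.0]
counts = [0, 0, 0]
epsilon = 0.1                  # fraction of steps spent probing (investigation)

def pull(arm):
    return random.gauss(true_means[arm], 0.1)   # noisy reward around the arm's true mean

for step in range(1000):
    if random.random() < epsilon:
        arm = random.randrange(3)                               # investigate
    else:
        arm = max(range(3), key=lambda a: estimates[a])         # act on current knowledge
    reward = pull(arm)
    counts[arm] += 1
    estimates[arm] += (reward - estimates[arm]) / counts[arm]   # running-mean update

print(estimates)   # the estimates approach true_means, and the best arm is chosen most often

Like the probing signal of a dual controller, the random pulls sacrifice a little short-term reward in exchange for knowledge that improves control later.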
https://en.wikipedia.org/wiki/Non-evaporable%20getter
Non-evaporable getters (NEG), based on the principle of metallic surface sorption of gas molecules, are mostly porous alloys or powder mixtures of Al, Zr, Ti, V and Fe. They help to establish and maintain vacuums by soaking up or bonding to gas molecules that remain within a partial vacuum. This is done through the use of materials that readily form stable compounds with active gases. They are important tools for improving the performance of many vacuum systems. Sintered onto the inner surface of high vacuum vessels, the NEG coating can be applied even to spaces that are narrow and hard to pump out, which makes it very popular in particle accelerators where this is an issue. The main sorption parameters of this kind of NEG, such as pumping speed and sorption capacity, have low limits. A different type of NEG, which is not coated, is the Tubegetter. The activation of these getters is accomplished mechanically or at temperatures from 550 K. The temperature range is from 0 to 800 K under HV/UHV conditions. The NEG acts as a getter or getter pump that is able to reduce the pressure to less than 10−12 mbar.
https://en.wikipedia.org/wiki/Carlitz%E2%80%93Wan%20conjecture
In mathematics, the Carlitz–Wan conjecture classifies the possible degrees of exceptional polynomials over a finite field Fq of q elements. A polynomial f(x) in Fq[x] of degree d is called exceptional over Fq if every irreducible factor (differing from x − y) of (f(x) − f(y))/(x − y) over Fq becomes reducible over the algebraic closure of Fq. If q > d^4, then f(x) is exceptional if and only if f(x) is a permutation polynomial over Fq. The Carlitz–Wan conjecture states that there are no exceptional polynomials of degree d over Fq if gcd(d, q − 1) > 1. In the special case that q is odd and d is even, this conjecture was proposed by Leonard Carlitz (1966) and proved by Fried, Guralnick, and Saxl (1993). The general form of the Carlitz–Wan conjecture was proposed by Daqing Wan (1993) and later proved by Hendrik Lenstra (1995).
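As a small illustrative check of the permutation-polynomial notion used in the statement (not of the conjecture itself, and not from the article), the following Python sketch verifies over a few small prime fields that the monomial x^d permutes Fq exactly when gcd(d, q − 1) = 1:

# Brute-force check: x -> x^d is a bijection of the prime field F_q iff gcd(d, q - 1) = 1.
from math import gcd

def is_permutation(f, q):
    return len({f(x) % q for x in range(q)}) == q

for q in (5, 7, 11, 13):                      # small prime fields F_q
    for d in range(1, q):
        perm = is_permutation(lambda x, d=d: pow(x, d, q), q)
        assert perm == (gcd(d, q - 1) == 1)
print("x^d permutes F_q exactly when gcd(d, q - 1) = 1, for the fields tested")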
https://en.wikipedia.org/wiki/Active-matrix%20liquid-crystal%20display
An active-matrix liquid-crystal display (AMLCD) is a type of flat-panel display used in high-resolution TVs, computer monitors, notebook computers, tablet computers and smartphones with an LCD screen, due to low weight, very good image quality, wide color gamut and fast response time. The concept of active-matrix LCDs was proposed by Bernard J. Lechner at the RCA Laboratories in 1968. The first functional AMLCD with thin-film transistors was made by T. Peter Brody, Fang-Chen Luo and their team at Westinghouse Electric Corporation in 1972. However, it took years of additional research and development by others to launch successful products. Introduction The most common type of AMLCD contains, besides the polarizing sheets and cells of liquid crystal, a matrix of thin-film transistors to make a thin-film-transistor liquid-crystal display. These devices store the electrical state of each pixel on the display while all the other pixels are being updated. This method provides a much brighter, sharper display than a passive matrix of the same size. An important specification for these displays is their viewing-angle. Thin-film transistors are usually used for constructing an active matrix so that the two terms are often interchanged, even though a thin-film transistor is just one component in an active matrix and some active-matrix designs have used other components such as diodes. Whereas a passive matrix display uses a simple conductive grid to apply a voltage to the liquid crystals in the target area, an active-matrix display uses a grid of transistors and capacitors with the ability to hold a charge for a limited period of time. Because of the switching action of transistors, only the desired pixel receives a charge, and the pixel acts as a capacitor to hold the charge until the next refresh cycle, improving image quality over a passive matrix. This is a special version of a sample-and-hold circuit. See also Organic light-emitting diode Active-matrix organic
https://en.wikipedia.org/wiki/Unreferenced%20variable
An unreferenced variable in the source code of a computer program is a variable that is defined but which is never used. This may result in a harmless waste of memory. Many compilers detect such variables and do not allocate storage for them (i.e., "optimize away" their storage), generally also issuing a warning as they do so. Some coding guideline documents consider an unreferenced variable to be a symptom of a potential coding fault. On the other hand, unreferenced variables can be used as temporary placeholders to indicate expected future development of the code. Examples C:

#include <stdio.h>   /* required for printf */

int main(void)
{
    int i, j;        /* j is declared but never referenced */

    for (i = 0; i < 10; i++)
        printf("%d", i);

    return 0;
}

In this example, j is an unreferenced variable.
https://en.wikipedia.org/wiki/Leptocephaly
Leptocephaly is a rare form of complex craniosynostosis (usually considered a form of scaphocephaly) in which the sagittal and metopic sutures simultaneously close. Leptocephaly is characterized by equal narrowing of the head (from full sagittal fusion) and a tall and narrow (rather than long) head shape. Leptocephaly is usually nonsyndromic, but has been seen in individual cases such as Neu-Laxova syndrome. The term leptocephaly has also been used (erroneously) to refer to microcephaly. See also List of conditions with craniosynostosis Trigonocephaly (isolated metopic synostosis)
https://en.wikipedia.org/wiki/Power%20ISA
Power ISA is a reduced instruction set computer (RISC) instruction set architecture (ISA) currently developed by the OpenPOWER Foundation, led by IBM. It was originally developed by IBM and the now-defunct Power.org industry group. Power ISA is an evolution of the PowerPC ISA, created by the merger of the core PowerPC ISA and the optional Book E for embedded applications. The merger of these two components in 2006 was led by Power.org founders IBM and Freescale Semiconductor. Prior to version 3.0, the ISA is divided into several categories. Processors implement a set of these categories as required for their task. Different classes of processors are required to implement certain categories, for example a server-class processor includes the categories: Base, Server, Floating-Point, 64-Bit, etc. All processors implement the Base category. Power ISA is a RISC load/store architecture. It has multiple sets of registers: 32 × 32-bit or 64-bit general-purpose registers (GPRs) for integer operations. 64 × 128-bit vector scalar registers (VSRs) for vector operations and floating-point operations. 32 × 64-bit floating-point registers (FPRs) as part of the VSRs for floating-point operations. 32 × 128-bit vector registers (VRs) as part of the VSRs for vector operations. 8 × 4-bit condition register fields (CRs) for comparison and control flow. 11 special registers of various sizes: Counter Register (CTR), link register (LR), time base (TBU, TBL), alternate time base (ATBU, ATBL), accumulator (ACC), status registers (XER, FPSCR, VSCR, SPEFSCR). Instructions up to version 3.0 have a length of 32 bits, with the exception of the VLE (variable-length encoding) subset that provides for higher code density for low-end embedded applications, and version 3.1 which introduced prefixing to create 64-bit instructions. Most instructions are triadic, i.e. have two source operands and one destination. Single- and double-precision IEEE-754 compliant floating-point operations are supp
https://en.wikipedia.org/wiki/Cholesterol%20oxidase
In enzymology, a cholesterol oxidase is an enzyme that catalyzes the chemical reaction cholesterol + O2 ⇌ cholest-4-en-3-one + H2O2. Thus, the two substrates of this enzyme are cholesterol and O2, whereas its two products are cholest-4-en-3-one and H2O2. This enzyme belongs to the family of oxidoreductases, specifically those acting on the CH-OH group of donors with oxygen as acceptor. The systematic name of this enzyme class is cholesterol:oxygen oxidoreductase. Other names in common use include cholesterol–O2 oxidoreductase, 3beta-hydroxy steroid oxidoreductase, and 3beta-hydroxysteroid:oxygen oxidoreductase. This enzyme participates in bile acid biosynthesis. The substrate-binding domain found in some bacterial cholesterol oxidases is composed of an eight-stranded mixed beta-pleated sheet and six alpha-helices. This domain is positioned over the isoalloxazine ring system of the FAD cofactor bound by the FAD-binding domain and forms the roof of the active site cavity, allowing for catalysis of oxidation and isomerisation of cholesterol to cholest-4-en-3-one. Structural studies As of late 2007, 14 structures have been solved for this class of enzymes and deposited in the Protein Data Bank.
https://en.wikipedia.org/wiki/Competition%20%28biology%29
Competition is an interaction between organisms or species in which both require a resource that is in limited supply (such as food, water, or territory). Competition lowers the fitness of both organisms involved since the presence of one of the organisms always reduces the amount of the resource available to the other. In the study of community ecology, competition within and between members of a species is an important biological interaction. Competition is one of many interacting biotic and abiotic factors that affect community structure, species diversity, and population dynamics (shifts in a population over time). There are three major mechanisms of competition: interference, exploitation, and apparent competition (in order from most direct to least direct). Interference and exploitation competition can be classed as "real" forms of competition, while apparent competition is not, as organisms do not share a resource, but instead share a predator. Competition among members of the same species is known as intraspecific competition, while competition between individuals of different species is known as interspecific competition. According to the competitive exclusion principle, species less suited to compete for resources must either adapt or die out, although competitive exclusion is rarely found in natural ecosystems. According to evolutionary theory, competition within and between species for resources is important in natural selection. More recently, however, researchers have suggested that evolutionary biodiversity for vertebrates has been driven not by competition between organisms, but by these animals adapting to colonize empty livable space; this is termed the 'Room to Roam' hypothesis. Interference competition During interference competition, also called contest competition, organisms interact directly by fighting for scarce resources. For example, large aphids defend feeding sites on cottonwood leaves by ejecting smaller aphids from better sites.
https://en.wikipedia.org/wiki/Moore%20determinant%20of%20a%20Hermitian%20matrix
In mathematics, the Moore determinant is a determinant defined for Hermitian matrices over a quaternion algebra, introduced by E. H. Moore. See also Dieudonné determinant
https://en.wikipedia.org/wiki/Spy%20vs.%20Spy%20II%3A%20The%20Island%20Caper
Spy vs. Spy II: The Island Caper is a 1985 video game. It is the sequel to Spy vs. Spy. Gameplay Spy vs. Spy II: The Island Caper adds a side-scrolling play area. Spies no longer start with a fixed number of traps but must collect the raw materials to build them. Reception Steve Panak for ANALOG Computing wrote "you've got an action-packed game. Spy vs Spy II is a fantasy adventure recommended for all who enjoy a little harmless espionage." Computer and Video Games stated "The cartoon graphics are every bit as good as on Spy vs Spy and, with seven levels of action, you'll find it a real challenge." In a 92/100 review, Zzap! concluded "Fans of the original won't be disappointed." Reviews ASM (Aktueller Software Markt) - February 1986 Amiga Action - December 1989 Computer and Video Games - April 1990 Commodore Format - March 1994
https://en.wikipedia.org/wiki/Middleware
Middleware is a type of computer software that provides services to software applications beyond those available from the operating system. It can be described as "software glue". Middleware makes it easier for software developers to implement communication and input/output, so they can focus on the specific purpose of their application. It gained popularity in the 1980s as a solution to the problem of how to link newer applications to older legacy systems, although the term had been in use since 1968. In distributed applications The term is most commonly used for software that enables communication and management of data in distributed applications. An IETF workshop in 2000 defined middleware as "those services found above the transport (i.e. over TCP/IP) layer set of services but below the application environment" (i.e. below application-level APIs). In this more specific sense middleware can be described as the dash ("-") in client-server, or the -to- in peer-to-peer. Middleware includes web servers, application servers, content management systems, and similar tools that support application development and delivery. ObjectWeb defines middleware as: "The software layer that lies between the operating system and applications on each side of a distributed computing system in a network." Services that can be regarded as middleware include enterprise application integration, data integration, message oriented middleware (MOM), object request brokers (ORBs), and the enterprise service bus (ESB). Database access services are often characterised as middleware. Some of them are language-specific implementations that support heterogeneous features and other related communication features. Examples of database-oriented middleware include ODBC, JDBC and transaction processing monitors. Distributed computing system middleware can loosely be divided into two categories—those that provide human-time services (such as web request servicing) and those that perform in machine
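As a loose, in-process sketch of the message-oriented middleware idea (a toy illustration, not any specific product's API), the Python snippet below shows a tiny topic-based broker that decouples producers from consumers and takes over message serialization for them:

# Toy message-oriented 'middleware': topic-based publish/subscribe with JSON messages.
import json
import queue
from collections import defaultdict

class TinyBroker:
    def __init__(self):
        self.topics = defaultdict(list)      # topic name -> list of subscriber queues

    def subscribe(self, topic):
        q = queue.Queue()
        self.topics[topic].append(q)
        return q

    def publish(self, topic, payload):
        message = json.dumps(payload)        # serialization handled by the middleware layer
        for q in self.topics[topic]:
            q.put(message)

broker = TinyBroker()
inbox = broker.subscribe("orders")
broker.publish("orders", {"id": 42, "item": "widget"})
print(json.loads(inbox.get()))               # the consumer never talks to the producer directly

Real MOM products add persistence, routing, and delivery guarantees, but the basic role, a layer that applications talk to instead of talking to each other directly, is the same.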
https://en.wikipedia.org/wiki/BL%20Herculis%20variable
BL Herculis variables are a subclass of type II Cepheids with low luminosity and mass that have a period of less than eight days. They are pulsating stars with light curves that frequently show a bump on the descending side for stars of the shortest periods and on the ascending side for longer period stars. Like other type II Cepheids, they are very old population II stars found in the galaxy’s halo and globular clusters. Also, compared to other type II Cepheids, BL Herculis variables have shorter periods and are fainter than W Virginis variables. Pulsating stars vary in spectral class as they vary in brightness and BL Herculis variables are normally class A at their brightest and class F when most dim. When plotted on the Hertzsprung–Russell diagram they fall in-between W Virginis and RR Lyrae variables. The prototype star, BL Herculis, varies between magnitude 9.7 and 10.6 in a period of 1.3 days. The brightest BL Herculis variables, with their maximum magnitudes, are: VY Pyxidis, 7.7; V553 Centauri, 8.2; SW Tauri, 9.3; RT Trianguli Australis, 9.4; V351 Cephei, 9.5; BL Herculis, 9.7; BD Cassiopeiae, 10.8; UY Eridani, 10.9. The BL Herculis stars show a wide variety of light curves, temperatures, and luminosity, and three subdivisions of the class have been defined, with the acronym AHB referring to above horizontal branch: XX Virginis stars (AHB1), with very fast rises to maximum and low metallicity; CW stars (AHB2), W Virginis variables, longer periods, the bump on the ascending leg; BL Herculis stars (AHB3), shorter periods, the bump on the descending leg
https://en.wikipedia.org/wiki/Boletus%20mamorensis
Boletus mamorensis is an edible fungus of the genus Boletus native to Morocco. It is closely related to B. aereus. See also List of Boletus species
https://en.wikipedia.org/wiki/Susan%20Mackem
Susan Marie Mackem (born 1954) is an American anatomic pathologist and physician-scientist. She researches vertebrate primary axis formation and the regulation of patterning and differentiation during limb development. She is a senior investigator and head of the regulation of vertebrate morphogenesis section at the National Cancer Institute's Frederick National Laboratory for Cancer Research. She is also an attending pathologist at the National Institutes of Health Clinical Center. Education Mackem undertook her graduate research on the regulation of herpes immediate early gene expression by VP16 with Bernard Roizman and received her Ph.D. as an MSTP trainee at the University of Chicago. Her 1982 dissertation was titled, Construction of chimeric genes for studying the regulation of herpes simplex virus α gene expression. She completed an M.D. at the Johns Hopkins School of Medicine, and then went on to residency training in anatomic pathology at the National Cancer Institute (NCI). Career and research Mackem became a staff member in the Laboratory of Pathology at the NCI. In 2009, she joined the Cancer and Developmental Biology Laboratory at Frederick National Laboratory for Cancer Research. Mackem serves as an attending anatomic pathologist for the Laboratory of Pathology and National Institutes of Health Clinical Center. She is a senior investigator and head of the regulation of vertebrate morphogenesis section. Her research has focused on vertebrate primary axis formation and the regulation of patterning and differentiation during limb development. Mackem studies limb development as a model for learning how signaling networks orchestrate the formation of a complex 3-dimensional structure, using combined genetic, genomic, and biochemical approaches to study transcription factors and signaling cascades that regulate the formation and pattern of digits and unravel the regulatory hierarchy between early patterning and digit morphogenesis. A major focus of her
https://en.wikipedia.org/wiki/Arm%20and%20hammer
The arm and hammer is a symbol consisting of a muscular arm holding a hammer. Used in ancient times as a symbol of the god Vulcan, it came to be known as a symbol of industry, for example blacksmithing and gold-beating. It has been used as a symbol by many different kinds of organizations, including banks, local government, and socialist political parties. It has been used in heraldry, appearing in the coat of arms of Birmingham and the Seal of Wisconsin. The similarity to the name of the industrialist Armand Hammer is not a coincidence: he was named after the symbol, as his father Julius Hammer was a supporter of socialist causes, including the Socialist Labor Party of America, with its arm-and-hammer logo. The Arm & Hammer brand is a registered trademark of Church & Dwight, an American manufacturer of household products. According to the company, the logo originally represented Vulcan. Because of the similarity to his name, Armand Hammer made an offer to purchase the company outright; while this offer was refused, he eventually acquired enough stock to have a controlling interest and join the board of directors. He remained an owner until his death in 1990. An arm-and-hammer sign can be seen in Manette Street, Soho, symbolizing the trade of gold-beating carried on there in the nineteenth century. It is referred to by Charles Dickens in A Tale of Two Cities. As of 2016, the sign there is a replica, with the original being held in the Dickens Museum. One of the oldest depictions of the arm and hammer can be found on Svetitskhoveli Cathedral. It was completed in 1029 by the medieval Georgian architect Arsukidze, although the site itself dates back to the early fourth century. Gallery See also Hammer and pick Hammer and sickle Fist and rose Armand Hammer We Can Do It!
https://en.wikipedia.org/wiki/List%20of%20sums%20of%20reciprocals
In mathematics and especially number theory, the sum of reciprocals generally is computed for the reciprocals of some or all of the positive integers (counting numbers)—that is, it is generally the sum of unit fractions. If infinitely many numbers have their reciprocals summed, generally the terms are given in a certain sequence and the first n of them are summed, then one more is included to give the sum of the first n+1 of them, etc. If only finitely many numbers are included, the key issue is usually to find a simple expression for the value of the sum, or to require the sum to be less than a certain value, or to determine whether the sum is ever an integer. For an infinite series of reciprocals, the issues are twofold: First, does the sequence of sums diverge—that is, does it eventually exceed any given number—or does it converge, meaning there is some number that it gets arbitrarily close to without ever exceeding it? (A set of positive integers is said to be large if the sum of its reciprocals diverges, and small if it converges.) Second, if it converges, what is a simple expression for the value it converges to, is that value rational or irrational, and is that value algebraic or transcendental? Finitely many terms The harmonic mean of a set of positive integers is the number of numbers times the reciprocal of the sum of their reciprocals. The optic equation requires the sum of the reciprocals of two positive integers a and b to equal the reciprocal of a third positive integer c. All solutions are given by a = mn + m2, b = mn + n2, c = mn. This equation appears in various contexts in elementary geometry. The Fermat–Catalan conjecture concerns a certain Diophantine equation, equating the sum of two terms, each a positive integer raised to a positive integer power, to a third term that is also a positive integer raised to a positive integer power (with the base integers having no prime factor in common). The conjecture asks whether the equation has an infi
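As a quick worked check of the parametrization quoted above (a verification step added here, not part of the original list), substituting a = mn + m², b = mn + n² and c = mn into the optic equation gives

\[ \frac{1}{mn+m^{2}}+\frac{1}{mn+n^{2}} \;=\; \frac{1}{m(m+n)}+\frac{1}{n(m+n)} \;=\; \frac{n+m}{mn(m+n)} \;=\; \frac{1}{mn} \;=\; \frac{1}{c}, \]

so every pair (m, n) of positive integers does produce a solution of 1/a + 1/b = 1/c.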
https://en.wikipedia.org/wiki/Gauche%20effect
In the study of conformational isomerism, the Gauche effect is an atypical situation where a gauche conformation (groups separated by a torsion angle of approximately 60°) is more stable than the anti conformation (180°). There are both steric and electronic effects that affect the relative stability of conformers. Ordinarily, steric effects predominate to place large substituents far from each other. However, this is not the case for certain substituents, typically those that are highly electronegative. Instead, there is an electronic preference for these groups to be gauche. Typically studied examples include 1,2-difluoroethane (H2FCCFH2), ethylene glycol, and vicinal-difluoroalkyl structures. There are two main explanations for the gauche effect: hyperconjugation and bent bonds. In the hyperconjugation model, the donation of electron density from the C–H σ bonding orbital to the C–F σ* antibonding orbital is considered the source of stabilization in the gauche isomer. Due to the greater electronegativity of fluorine, the C–H σ orbital is a better electron donor than the C–F σ orbital, while the C–F σ* orbital is a better electron acceptor than the C–H σ* orbital. Only the gauche conformation allows good overlap between the better donor and the better acceptor. Key in the bent bond explanation of the gauche effect in difluoroethane is the increased p orbital character of both C–F bonds due to the large electronegativity of fluorine. As a result, electron density builds up above and below to the left and right of the central C–C bond. The resulting reduced orbital overlap can be partially compensated when a gauche conformation is assumed, forming a bent bond. Of these two models, hyperconjugation is generally considered the principal cause behind the gauche effect in difluoroethane. The molecular geometry of both rotamers can be obtained experimentally by high resolution infrared spectroscopy augmented with in silico work. In accordance with the model describ
https://en.wikipedia.org/wiki/Matrix%20%28mathematics%29
In mathematics, a matrix (plural: matrices) is a rectangular array or table of numbers, symbols, or expressions, arranged in rows and columns, which is used to represent a mathematical object or a property of such an object. For example, a matrix with two rows and three columns is often referred to as a "two by three matrix", a "2 × 3 matrix", or a matrix of dimension 2 × 3. Matrices are used to represent linear maps and allow explicit computations in linear algebra. Therefore, the study of matrices is a large part of linear algebra, and most properties and operations of abstract linear algebra can be expressed in terms of matrices. For example, matrix multiplication represents the composition of linear maps. Not all matrices are related to linear algebra. This is, in particular, the case for incidence matrices and adjacency matrices in graph theory. This article focuses on matrices related to linear algebra, and, unless otherwise specified, all matrices represent linear maps or may be viewed as such. Square matrices, matrices with the same number of rows and columns, play a major role in matrix theory. Square matrices of a given dimension form a noncommutative ring, which is one of the most common examples of a noncommutative ring. The determinant of a square matrix is a number associated to the matrix, which is fundamental for the study of a square matrix; for example, a square matrix is invertible if and only if it has a nonzero determinant, and the eigenvalues of a square matrix are the roots of its characteristic polynomial, which is defined by a determinant. In geometry, matrices are widely used for specifying and representing geometric transformations (for example rotations) and coordinate changes. In numerical analysis, many computational problems are solved by reducing them to a matrix computation, and this often involves computing with matrices of huge dimension. Matrices are used in most areas of mathematics and most scientific fields, either directly, or through their use in geometry and nume
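To make the statement that matrix multiplication represents the composition of linear maps concrete, here is a minimal sketch in plain Python (no external libraries; the particular 2 × 3 and 3 × 2 matrices are just illustrative values):

```python
def mat_mul(A, B):
    """Product of two matrices given as lists of rows."""
    rows, inner, cols = len(A), len(B), len(B[0])
    assert len(A[0]) == inner, "inner dimensions must match"
    return [[sum(A[i][k] * B[k][j] for k in range(inner)) for j in range(cols)]
            for i in range(rows)]

def apply(A, v):
    """Apply the linear map represented by A to the vector v."""
    return [sum(row[k] * v[k] for k in range(len(v))) for row in A]

A = [[1, 2, 3],
     [4, 5, 6]]          # 2 x 3: a linear map from R^3 to R^2
B = [[1, 0],
     [0, 1],
     [1, 1]]             # 3 x 2: a linear map from R^2 to R^3
v = [2, 5]

# Applying B and then A is the same as applying the single product matrix A·B.
assert apply(mat_mul(A, B), v) == apply(A, apply(B, v))
```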
https://en.wikipedia.org/wiki/Parallactic%20angle
In spherical astronomy, the parallactic angle is the angle between the great circle through a celestial object and the zenith, and the hour circle of the object. It is usually denoted q. In the triangle zenith—object—celestial pole, the parallactic angle will be the position angle of the zenith at the celestial object. Despite its name, this angle is unrelated to parallax. The parallactic angle is zero or 180° when the object crosses the meridian. Uses For ground-based observatories, the Earth's atmosphere acts like a prism which disperses light of different wavelengths such that a star generates a rainbow along the direction that points to the zenith. So given an astronomical picture with a coordinate system with a known direction to the celestial pole, the parallactic angle represents the direction of that prismatic effect relative to that reference direction. Knowledge of that angle is needed to align Atmospheric Dispersion Correctors with the beam axis of the telescope. Depending on the type of mount of the telescope, this angle may also affect the orientation of the celestial object's disk as seen in a telescope. With an equatorial mount, the cardinal points of the celestial object's disk are aligned with the vertical and horizontal direction of the view in the telescope. With an altazimuth mount, those directions are rotated by the amount of the parallactic angle. The cardinal points referred to here are the points on the limb located such that a line from the center of the disk through them will point to one of the celestial poles or 90° away from them; these are not the cardinal points defined by the object's axis of rotation. The orientation of the disk of the Moon, as related to the horizon, changes throughout its diurnal motion and the parallactic angle changes equivalently. This is also the case with other celestial objects. In an ephemeris, the position angle of the midpoint of the bright limb of the Moon or planets, and the position angles of thei
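The parallactic angle itself is not given by a formula in the text above, but a standard spherical-astronomy expression is tan q = sin H / (tan φ cos δ − sin δ cos H), with H the hour angle, δ the declination and φ the observer's latitude. A minimal Python sketch using that (assumed) formula:

```python
import math

def parallactic_angle(hour_angle_deg, dec_deg, lat_deg):
    """Parallactic angle q in degrees for the triangle zenith-object-celestial pole."""
    H = math.radians(hour_angle_deg)
    dec = math.radians(dec_deg)
    lat = math.radians(lat_deg)
    # tan q = sin H / (tan(lat) * cos(dec) - sin(dec) * cos(H))
    q = math.atan2(math.sin(H),
                   math.tan(lat) * math.cos(dec) - math.sin(dec) * math.cos(H))
    return math.degrees(q)

# On the meridian (hour angle 0) the parallactic angle is 0, as stated above.
print(parallactic_angle(0.0, 20.0, 48.0))    # 0.0
print(parallactic_angle(30.0, 20.0, 48.0))   # non-zero away from the meridian
```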
https://en.wikipedia.org/wiki/DnaB%20helicase
DnaB helicase is an enzyme in bacteria which opens the replication fork during DNA replication. Although the mechanism by which DnaB both couples ATP hydrolysis to translocation along DNA and denatures the duplex is unknown, a change in the quaternary structure of the protein involving dimerisation of the N-terminal domain has been observed and may occur during the enzymatic cycle. Initially when DnaB binds to dnaA, it is associated with dnaC, a negative regulator. After DnaC dissociates, DnaB binds dnaG. The N-terminal has a multi-helical structure that forms an orthogonal bundle. The C-terminal domain contains an ATP-binding site and is therefore probably the site of ATP hydrolysis. In eukaryotes, helicase function is provided by the MCM (Minichromosome maintenance) complex. The DnaB helicase is the product of the dnaB gene. The helicase enzyme that is produced is a hexamer in E. coli, as well as in many other bacteria. The energy for DnaB activity is provided by NTP hydrolysis. Mechanical energy moves the DnaB into the replication fork, physically splitting it in half. E. coli dnaB In E. coli, dnaB is a hexameric protein of six 471-residue subunits, which form a ring-shaped structure with threefold symmetry. During DNA replication, the lagging strand of DNA binds in the central channel of dnaB, and the second DNA strand is excluded. The binding of dNTPs causes a conformational change which allows the dnaB to translocate along the DNA, thus mechanically forcing the separation of the DNA strands. Mechanism of initiation of replication At least 10 different enzymes or proteins participate in the initiation phase of replication. They open the DNA helix at the origin and establish a prepriming complex for subsequent reactions. The crucial component in the initiation process is the DnaA protein, a member of the AAA+ ATPase protein family (ATPases associated with diverse cellular activities). Many AAA+ ATPases, including DnaA, form oligomers and hydrolyze ATP rel
https://en.wikipedia.org/wiki/Ruminococcus
Ruminococcus is a genus of bacteria in the class Clostridia. They are anaerobic, Gram-positive gut microbes. One or more species in this genus are found in significant numbers in the human gut microbiota. The type species is R. flavefaciens. As is often the case, bacterial taxonomy is in flux: Clostridia is paraphyletic, and some misclassified members of Ruminococcus have been reassigned to the new genus Blautia on the basis of 16S rRNA gene sequences. One of the most highly cited papers involving the genus Ruminococcus describes interspecies hydrogen transfer between Ruminococcus albus and Wolinella succinogenes. In 1972, Ruminococcus bromii was reported in the human gut, the first of several species discovered there. They may play a role in plant cell wall breakdown in the colon. One study found that R. albus, R. callidus, and R. bromii are less abundant in people with inflammatory bowel disease. Ruminococcus are also less abundant in patients with Parkinson's disease and amyotrophic lateral sclerosis. R. gnavus is associated with Crohn's disease. Species Ruminococcus albus Ruminococcus bromii Ruminococcus callidus Ruminococcus flavefaciens Species belonging to the Lachnospiraceae family and therefore in need of reclassification: Ruminococcus gauvreauii Ruminococcus gnavus Ruminococcus lactaris Ruminococcus obeum Ruminococcus torques
https://en.wikipedia.org/wiki/Intel%20Ct
Intel Ct is a programming model developed by Intel to ease the exploitation of its future multicore chips, as demonstrated by the Tera-Scale research program. It is based on the exploitation of SIMD to produce automatically parallelized programs. On August 19, 2009, Intel acquired RapidMind, a privately held company founded and headquartered in Waterloo, Ontario, Canada. RapidMind and Ct were combined into a successor, Intel Array Building Blocks (ArBB), released in September 2010.
https://en.wikipedia.org/wiki/NScripter
NScripter, officially abbreviated as Nscr and also known under its production title Scripter4, is a game engine developed by Naoki Takahashi between 1999 and 2018. It uses its own script language, which facilitates the creation of both visual and sound novels. The SDK is only available for Windows. From version 2.82, NScripter supports both Japanese characters - these are two bytes long - and any single-byte character; before that, it only supported Japanese characters. This engine was very popular in Japan because of its simplicity and because it was free for amateur game makers. Additionally, there are forks available to extend NScripter's capabilities to display characters from other languages, run a game on other platforms, etc. NScripter NScripter's development ranged from 1999 to 2018; it was first called by its production title Scripter4 because it was the successor to Scripter3, Naoki Takahashi's previous engine. The final version of NScripter was released in 2018. Characteristics The script is executed by the engine in an interpreter. The syntax is very simple, similar to the BASIC language. The functions needed to create visual novels and sound novels, such as displaying text, sprites and CG, playing music and handling choices, are built into the engine as a basic API. As a result, game creation is simplified by the ability to write a script that calls these functions directly. In order to meet specific needs, it is possible to use a method called 'system customisation' which modifies the behaviour of the engine itself in order to add features such as a save system, complex effects not provided in the basic API, or video management. To do this, external DLLs can be used. These functions can be used to create simulation games, etc. On the other hand, before version 2.92, object-oriented elements were not incorporated into the software and NScripter did not handle parallelism at all. The statement was used to try to do structured programming within NSc
https://en.wikipedia.org/wiki/K-approximation%20of%20k-hitting%20set
In computer science, k-approximation of k-hitting set is an approximation algorithm for weighted hitting set. The input is a collection S of subsets of some universe T and a mapping W from T to non-negative numbers called the weights of the elements of T. In k-hitting set the size of the sets in S cannot be larger than k; that is, |s| ≤ k for every s in S. The problem is now to pick some subset T' of T such that every set in S contains some element of T', and such that the total weight of all elements in T' is as small as possible. The algorithm For each set s in S a price p(s) is maintained, which is initially 0. For an element a in T, let S(a) be the collection of sets from S containing a. During the algorithm the following invariant is kept: for every element a in T, the sum of the prices p(s) over all sets s in S(a) is at most W(a). We say that an element a from T is tight if this sum equals W(a). The main part of the algorithm consists of a loop: As long as there is a set in S that contains no element from T which is tight, the price of this set is increased as much as possible without violating the invariant above. When this loop exits, all sets contain some tight element. Pick all the tight elements to be the hitting set. Correctness The algorithm always terminates because in each iteration of the loop the price of some set in S is increased enough to make one more element from T tight. If it cannot increase any price, it exits. It runs in polynomial time because the loop will not make more iterations than the number of elements in the union of all the sets of S. It returns a hitting set, because when the loop exits, all sets in S contain a tight element from T, and the set of these tight elements is returned. Note that for any hitting set T* and any prices where the invariant from the algorithm is true, the total weight of the hitting set is an upper limit to the sum over all members of T* of the sum of the prices of sets containing this element, that is: Σ_{a ∈ T*} Σ_{s ∈ S(a)} p(s) ≤ Σ_{a ∈ T*} W(a). This follows from the invariant property. Further, since the price of every set must occur at least once on the left hand side, we get
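A compact sketch of the pricing loop just described (illustrative Python; the identifiers weight, price and tight are mine, not from the article):

```python
def k_hitting_set(sets, weight):
    """Pick a hitting set by raising set prices while keeping, for every element a,
    the sum of prices of sets containing a at most weight[a]."""
    price = {s: 0 for s in map(frozenset, sets)}

    def paid(a):
        return sum(p for s, p in price.items() if a in s)

    def tight(a):
        return paid(a) >= weight[a]

    for s in price:
        if any(tight(a) for a in s):
            continue                     # this set already contains a tight element
        # Raise this set's price as much as the invariant allows;
        # the element attaining the minimum slack becomes tight.
        price[s] += min(weight[a] - paid(a) for a in s)

    return {a for a in weight if tight(a)}

print(k_hitting_set([{1, 2}, {2, 3}, {1, 3}], {1: 1, 2: 1, 3: 2}))   # e.g. {1, 2}
```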
https://en.wikipedia.org/wiki/List%20of%20PSPACE-complete%20problems
Here are some of the more commonly known problems that are PSPACE-complete when expressed as decision problems. This list is in no way comprehensive. Games and puzzles Generalized versions of: Amazons Atomix Checkers if a draw is forced after a polynomial number of non-jump moves Dyson Telescope Game Cross Purposes Geography Two-player game version of Instant Insanity Ko-free Go Ladder capturing in Go Gomoku Hex Konane Lemmings Node Kayles Poset Game Reversi River Crossing Rush Hour Finding optimal play in Mahjong solitaire Sokoban Super Mario Bros. Black Pebble game Black-White Pebble game Acyclic pebble game One-player pebble game Token on acyclic directed graph games: Logic Quantified boolean formulas First-order logic of equality Provability in intuitionistic propositional logic Satisfaction in modal logic S4 First-order theory of the natural numbers under the successor operation First-order theory of the natural numbers under the standard order First-order theory of the integers under the standard order First-order theory of well-ordered sets First-order theory of binary strings under lexicographic ordering First-order theory of a finite Boolean algebra Stochastic satisfiability Linear temporal logic satisfiability and model checking Lambda calculus Type inhabitation problem for simply typed lambda calculus Automata and language theory Circuit theory Integer circuit evaluation Automata theory Word problem for linear bounded automata Word problem for quasi-realtime automata Emptiness problem for a nondeterministic two-way finite state automaton Equivalence problem for nondeterministic finite automata Word problem and emptiness problem for non-erasing stack automata Emptiness of intersection of an unbounded number of deterministic finite automata A generalized version of Langton's Ant Minimizing nondeterministic finite automata Formal languages Word problem for context-sensitive language Intersect
https://en.wikipedia.org/wiki/Used%20coffee%20grounds
Used coffee grounds are the residue of brewing coffee, the final product left after coffee preparation. Despite having several highly desirable chemical components, used coffee grounds are generally regarded as waste, and they are usually thrown away or composted. As of 2019, it was estimated that over 15 million tonnes of spent coffee grounds are generated annually. Due to this quantity of waste and the chemical properties of used coffee grounds, potential uses for used coffee grounds are a hot topic of investigation as of the 2010s. In the late 19th century, used coffee grounds were used to adulterate pure coffee. Chemical composition Most used coffee grounds are similar in chemical composition, although coffee grounds used to make instant coffee have fewer chemicals in them due to a more extensive extraction process. Used coffee grounds are rich in sugars, which comprise about half of their weight. A further 20% is made up of proteins, and a further 20% is lignins. The dry coffee grounds contain significant amounts of potassium (11.7 g/kg), nitrogen (2.8 g/kg), magnesium (1.9 g/kg), and phosphorus (1.8 g/kg). The quantity of caffeine remaining in used coffee grounds is around 48% of that in fresh coffee grounds. There are significantly fewer tannins in used coffee grounds than in fresh coffee grounds. Production On average, 1 tonne of green coffee produces approximately 650 kg of spent coffee grounds, and over 15 million tonnes of spent coffee grounds are generated annually. In keeping with a life cycle approach to sustainability, this large quantity of waste requires waste management plans. Due to the amount of spent coffee grounds generated and the chemical properties of spent coffee grounds, the usage of spent coffee grounds is avidly investigated. Usage Precautions It is not recommended to burn dried used coffee grounds, as they give off hazardous nitrogen oxides when burnt. In gardens In gardens, coffee grounds may be used for composting or
https://en.wikipedia.org/wiki/Non-autonomous%20system%20%28mathematics%29
In mathematics, an autonomous system is a dynamic equation on a smooth manifold. A non-autonomous system is a dynamic equation on a smooth fiber bundle over . For instance, this is the case of non-autonomous mechanics. An r-order differential equation on a fiber bundle is represented by a closed subbundle of a jet bundle of . A dynamic equation on is a differential equation which is algebraically solved for the higher-order derivatives. In particular, a first-order dynamic equation on a fiber bundle is a kernel of the covariant differential of some connection on . Given bundle coordinates on and the adapted coordinates on a first-order jet manifold , a first-order dynamic equation reads For instance, this is the case of Hamiltonian non-autonomous mechanics. A second-order dynamic equation on is defined as a holonomic connection on a jet bundle . This equation is also represented by a connection on an affine jet bundle . Due to the canonical embedding , it is equivalent to a geodesic equation on the tangent bundle of . A free motion equation in non-autonomous mechanics exemplifies a second-order non-autonomous dynamic equation. See also Autonomous system (mathematics) Non-autonomous mechanics Free motion equation Relativistic system (mathematics)
https://en.wikipedia.org/wiki/Visvalingam%E2%80%93Whyatt%20algorithm
The Visvalingam–Whyatt algorithm, also known as Visvalingam's algorithm, is an algorithm that decimates a curve composed of line segments to a similar curve with fewer points. Idea Given a polygonal chain (often called a polyline), the algorithm attempts to find a similar chain composed of fewer points. Points are assigned an importance based on local conditions, and points are removed from the least important to most important. In Visvalingam's algorithm, the importance is related to the triangular area added by each point. Algorithm Given a chain of 2D points, the importance of each interior point is computed by finding the area of the triangle formed by it and its immediate neighbors. This can be done quickly using a matrix determinant. Alternatively, the equivalent shoelace-style formula can be used: A_i = ½ |x_{i−1}(y_i − y_{i+1}) + x_i(y_{i+1} − y_{i−1}) + x_{i+1}(y_{i−1} − y_i)|. The minimum importance point is located and marked for removal (note that the importances of its two neighbors will need to be recomputed). This process is repeated until either the desired number of points is reached, or the contribution of the least important point is too large to neglect. Advantages The algorithm is easy to understand and explain, but is often competitive with much more complex approaches. With the use of a priority queue, the algorithm is performant on large inputs, since the importance of each point can be computed using only its neighbors, and removing a point only requires recomputing the importance of two other points. It is simple to generalize to higher dimensions, since the area of the triangle between points has a consistent meaning. Disadvantages The algorithm does not differentiate between sharp spikes and shallow features, meaning that it will clean up sharp spikes that may be important. The algorithm simplifies the entire length of the curve evenly, meaning that curves with high and low detail areas will likely have their fine details eroded. See also Curve fitting Alternative algorithms for line simplification include: Ramer–Dougla
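A minimal sketch of the procedure (illustrative Python, using a straightforward O(n²) rescan rather than the priority queue mentioned under Advantages):

```python
def triangle_area(p, q, r):
    """Area of the triangle p-q-r, i.e. half the absolute value of the determinant."""
    return abs((q[0] - p[0]) * (r[1] - p[1]) - (r[0] - p[0]) * (q[1] - p[1])) / 2.0

def visvalingam_whyatt(points, min_area):
    """Remove interior points, least important first, until every remaining
    interior point adds a triangle of area at least min_area."""
    pts = list(points)
    while len(pts) > 2:
        areas = [triangle_area(pts[i - 1], pts[i], pts[i + 1])
                 for i in range(1, len(pts) - 1)]
        i_min = min(range(len(areas)), key=areas.__getitem__)
        if areas[i_min] >= min_area:
            break                  # least important point is now too important to drop
        del pts[i_min + 1]         # neighbours' areas are recomputed on the next pass
    return pts

line = [(0, 0), (1, 0.1), (2, -0.1), (3, 5), (4, 6), (5, 7), (6, 8.1), (7, 9), (8, 9), (9, 9)]
print(visvalingam_whyatt(line, min_area=0.5))
```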
https://en.wikipedia.org/wiki/Landscape%20evolution%20model
A landscape evolution model is a physically-based numerical model that simulates changing terrain over the course of time. The change in, or evolution of, terrain can be due to: glacial or fluvial erosion, sediment transport and deposition, regolith production, the slow movement of material on hillslopes, more intermittent events such as rockfalls, debris flows, landslides, and other surface processes. These changes occur in response to the land surface being uplifted above sea-level (or other base-level) by surface uplift, and also respond to subsidence. A typical landscape evolution model takes many of these factors into account. Landscape evolution models are used primarily in the field of geomorphology. As they improve, they are beginning to be consulted by land managers to aid in decision making, most recently in the area of degraded landscapes. The earliest landscape evolution models were developed in the 1970s. In those models, flow of water across a mesh was simulated, and cell elevations were changed in response to calculated erosional power. Modern landscape evolution models can leverage graphics processing units and other acceleration hardware and software to run more quickly. See also Hillslope evolution SIBERIA CAESAR-Lisflood LANDIS II, an open-source forest landscape model that simulates future forests pyBadlands Community Surface Dynamics Modeling System
https://en.wikipedia.org/wiki/Sacrococcygeal%20teratoma
Sacrococcygeal teratoma (SCT) is a type of tumor known as a teratoma that develops at the base of the coccyx (tailbone) and is thought to be primarily derived from remnants of the primitive streak. Sacrococcygeal teratomas are benign 75% of the time, malignant 12% of the time, and the remainder are considered "immature teratomas" that share benign and malignant features. Benign sacrococcygeal teratomas are more likely to develop in younger children who are less than 5 months old, and older children are more likely to develop malignant sacrococcygeal teratomas. The Currarino syndrome, due to an autosomal dominant mutation in the MNX1 gene, consists of a presacral mass (usually a mature teratoma or anterior meningocele), anorectal malformation and sacral dysgenesis. Presentation Complications Maternal complications of pregnancy may include mirror syndrome. Maternal complications of delivery may include a Cesarean section or, alternatively, a vaginal delivery with mechanical dystocia. Complications of the mass effect of a teratoma in general are addressed on the teratoma page. Complications of the mass effect of a large SCT may include hip dysplasia, bowel obstruction, urinary obstruction, hydronephrosis and hydrops fetalis. Even a small SCT can produce complications of mass effect, if it is presacral (Altman Type IV). In the fetus, severe hydronephrosis may contribute to inadequate lung development. Also in the fetus and newborn, the anus may be imperforate. Later complications of the mass effect and/or surgery may include neurogenic bladder, other forms of urinary incontinence, fecal incontinence, and other chronic problems resulting from accidental damage to or sacrifice of nerves and muscles within the pelvis. Removal of the coccyx may include additional complications. In one review of 25 patients, however, the most frequent complication was an unsatisfactory appearance of the surgical scar. Late effects Late effects are of two kinds: consequences of
https://en.wikipedia.org/wiki/Fuze%20Keeping%20Clock
The Fuze Keeping Clock (FKC) was a simplified version of the Royal Navy's High Angle Control System analogue fire control computer. It first appeared as the FKC MkII in destroyers of the 1938 , while later variants were used on sloops, frigates, destroyers, aircraft carriers and several cruisers. The FKC MkII was a non-tachymetric anti-aircraft fire control computer. It could accurately engage targets with a maximum speed of . Operation The FKC received vertical reference information from a Gyro Level Corrector and aircraft altitude, range, direction, and speed input information from the Rangefinder-Director, and output to the guns the elevation and deflection data needed to hit the target, along with the correct fuze timing information, so that the shells fired would explode in the vicinity of the target aircraft. Most guns controlled by the FKC had Fuze Setting Pedestals or Fuze Setting Trays where the correct fuze timing was set on a clockwork mechanism within the AA shell warhead. Development Type 285 radar was an early addition to the FKC system, being fitted on new destroyers from mid-1941 onward, and retrofitted to existing destroyers as time and opportunity permitted. Later variants increased the maximum target speed to , and were combined with Gyro Rate Units (GRU) which gave tachometric capabilities to the system, and radar which greatly improved ranging and rate keeping accuracy. Wartime use The FKC saw extensive use during the war on British Commonwealth naval ships, typically on destroyers and sloops. Prior to the widespread use of radar, optical detection and ranging on high altitude aerial targets was a daunting task, as shown by 's Report of Proceedings, for 3 September 1940: Occasionally conditions would conspire to favour the surface ships during an aerial attack, as again revealed by HMAS Parramatta's Report of Proceedings, for 20 September 1940: The FKC was used throughout the war and its effectiveness was increased by the use of radar for g
https://en.wikipedia.org/wiki/Sunflower%20seed
Sunflower seeds are the seeds of the sunflower (Helianthus annuus). There are three types of commonly used sunflower seeds: linoleic (most common), high oleic, and sunflower oil seeds. Each variety has its own unique levels of monounsaturated, saturated, and polyunsaturated fats. The information in this article refers mainly to the linoleic variety. For commercial purposes, sunflower seeds are usually classified by the pattern on their husks. If the husk is solid black, the seeds are called black oil sunflower seeds. The crops may be referred to as oilseed sunflower crops. These seeds are usually pressed to extract their oil. Striped sunflower seeds are primarily eaten as a snack food; as a result, they may be called confectionery sunflower seeds. The term "sunflower seed" is actually a misnomer when applied to the seed in its pericarp (hull). Botanically speaking, it is a cypsela. When dehulled, the edible remainder is called the sunflower kernel or heart. Production In 2020, global production of sunflower seeds added up to 50 million tonnes, led by Russia and Ukraine with 53% of the world total combined. Argentina, China, and Romania also contributed significant volumes. Usage Sunflower seeds are commonly eaten as a snack, but can also be consumed as part of a meal. They can be used as garnishes or ingredients in various recipes. The seeds may be sold as in-shell seeds or dehulled kernels or be sprouted and eaten in salads. When in-shell seeds are processed, they are first dried. Afterwards, they may be roasted or dusted with salt or flour for the preservation of flavor. Sunflower seeds sold by the bag are either eaten plain, salted (sometimes called 'plain') or with flavoring added by the manufacturer. Flavor examples include barbecue, pickle, hot sauce, bacon, ranch, and nacho cheese. In-shell sunflower seeds are particularly popular in Mediterranean, Eastern European, and Asian countries where they can be bought freshly roasted and are commonly cons
https://en.wikipedia.org/wiki/BRLESC
The BRLESC I (Ballistic Research Laboratories Electronic Scientific Computer) was one of the last of the first-generation electronic computers. It was built by the United States Army's Ballistic Research Laboratory (BRL) at Aberdeen Proving Ground with assistance from the National Bureau of Standards (now the National Institute of Standards and Technology), and was designed to take over the computational workload of EDVAC and ORDVAC, which themselves were successors of ENIAC. It began operation in 1962. The Ballistic Research Laboratory became a part of the U.S. Army Research Laboratory in 1992. BRLESC was designed primarily for scientific and military tasks requiring high precision and high computational speed, such as ballistics problems, army logistical problems, and weapons systems evaluations. It contained 1727 vacuum tubes and 853 transistors and had a memory of 4096 72-bit words. BRLESC employed punched cards, magnetic tape, and a magnetic drum as input-output devices, which could be operated simultaneously. It was capable of five million (bitwise) operations per second. A fixed-point addition took 5 microseconds, a floating-point addition took 5 to 10 microseconds, a multiplication (fixed- or floating-point) took 25 microseconds, and a division (fixed- or floating-point) took 65 microseconds. (These times include the memory access time, which was 4-5 microseconds.) It was the fastest computer in the world until the CDC 6600 was introduced in 1964. BRLESC and its predecessor, ORDVAC, used their own unique notation for hexadecimal numbers. Instead of the sequence A B C D E F universally used today, the digits 10 to 15 were represented by the letters K S N J F L, corresponding to the teletypewriter characters on five-track paper tape. The mnemonic phrase "King Size Numbers Just For Laughs" was used to remember the letter sequence. BRLESC II, using integrated circuits, became operational in November 1967; it was designed to be 200 times faster than
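The K S N J F L digit convention can be illustrated with a short snippet (Python; purely illustrative, not related to any BRLESC software):

```python
# BRLESC/ORDVAC hexadecimal digits: 10..15 are written K S N J F L
# ("King Size Numbers Just For Laughs") instead of A..F.
BRLESC_DIGITS = "0123456789KSNJFL"

def to_brlesc_hex(n):
    """Render a non-negative integer in BRLESC-style hexadecimal notation."""
    if n == 0:
        return "0"
    digits = []
    while n:
        digits.append(BRLESC_DIGITS[n % 16])
        n //= 16
    return "".join(reversed(digits))

print(to_brlesc_hex(202))   # 202 = 0xCA, written "NK" in BRLESC notation
```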
https://en.wikipedia.org/wiki/2-Nonenal
2-Nonenal is an unsaturated aldehyde. The colorless liquid is an important aroma component of aged beer and buckwheat. Odor characteristics The odor of this substance is perceived as orris, fat and cucumber. Its odor has been associated with human body odor alterations during aging.
https://en.wikipedia.org/wiki/The%20Arctic%20Fox%20Centre
The Arctic Fox Centre ( ) is a research centre with an enclosed exhibition and café in the municipality Súðavík in the Westfjords in Iceland. It focuses on the Arctic fox (Vulpes lagopus) which is the only native terrestrial mammal in Iceland. The centre was founded in 2007 by locals who are interested in the Arctic fox. It has a strong emphasis on ecotourism. The centre is a non-profit-partner of 1% for the planet and a member of The Wild North. The house The house is located in the new part of Súðavík, on the land of the former farm Eyrardalur. It was built in the last decade of the 19th century and has been renovated recently by the authorities of Súðavík. History Opened in 2010, the centre was founded by Páll Hersteinsson PhD and Ester Rut Unnsteinsdóttir MSc, Arctic fox experts as well as professor and pupil. Páll and Ester Rut originally started studying the Arctic fox together on the Hornstrandir Nature Reserve in 1998, one of the few areas in Iceland where the Arctic fox is protected from hunting, and wished to secure the continued study of the fox and to educate people on the facts of the Arctic fox, which in Iceland had a tarnished image and reputation. From this need grew the idea for the creation of the Arctic Fox Centre, a den for collecting and displaying information on Arctic foxes as well as creating a foundation for the research which they had begun in 1998. After introducing the idea to a mixed reception in a tourism conference in the town of Ísafjörður in 2005 the researchers began receiving supporters and backers, and by 2007 the Arctic Fox Centre was officially created as a non profit organisation with 42 shareholders. The municipality of Súðavíkurhreppur had donated the oldest, but dilapidated house in the town of Súðavík to the project, and financed the complete restoration of the building. During this time Páll and Ester Rut raised funds, collecting Arctic fox material and knowledge for the exhibitions as well as continuing with their re
https://en.wikipedia.org/wiki/LibreWolf
LibreWolf is a free and open-source web browser and fork of Firefox with an emphasis on privacy and security. Development LibreWolf was initially released for Linux operating systems on March 7, 2020. A community-maintained version for Windows was released a year later, with a macOS port planned. It can also be installed via a portable AppImage or via the Microsoft Store and Windows Package Manager. Features Being a privacy-focused browser, LibreWolf does not include telemetry and has uBlock Origin pre-installed. Certain features like Pocket are also disabled, and the auto-update function has been removed. License The browser is licensed under the Mozilla Public License 2.0, while the website is licensed under the GNU AGPL 3.0. See also Browser security Pale Moon Gecko (software) Waterfox
https://en.wikipedia.org/wiki/Trevor%20Paglen
Trevor Paglen (born 1974) is an American artist, geographer, and author whose work tackles mass surveillance and data collection. In 2016, Paglen won the Deutsche Börse Photography Foundation Prize and he has also won The Cultural Award from the German Society for Photography. In 2017, he was a recipient of a MacArthur Fellowship. Early life and education Paglen earned a B.A. degree in religious studies in 1998 from the University of California at Berkeley, a M.F.A. degree in 2002 from the School of the Art Institute of Chicago, and a Ph.D. in Geography in 2008 from the University of California at Berkeley. While at UC Berkeley, Paglen lived in the Berkeley Student Cooperative, residing in Chateau, Fenwick, and Rochdale co-ops. Work Sean O'Hagan, writing in The Guardian in 2015, said that Paglen, whose "ongoing grand project [is] the murky world of global state surveillance and the ethics of drone warfare", "is one of the most conceptually adventurous political artists working today, and has collaborated with scientists and human rights activists on his always ambitious multimedia projects." His visual work such as his "Limit Telephotography" and "The Other Night Sky" series have received widespread attention for both his technical innovations and for his conceptual project that involves simultaneously making and negating documentary-style truth-claims. The contrasts between secrecy and revelation, evidence and abstraction distinguish Paglen's work. With that the artist presents not so much "evidence" as admonitions to awareness. He was an Eyebeam Commissioned Artist in 2007. In 2008 the Berkeley Art Museum devoted a comprehensive solo exhibition to his work. In the next year, Paglen took part in the Istanbul Biennial, and in 2010 he exhibited at the Vienna Secession. Autonomy Cube was a project by Paglen and Jacob Appelbaum which placed relays for the anonymous communication network Tor in traditional art museums. Paglen is featured in the nerd culture d
https://en.wikipedia.org/wiki/RNA%20binding%20protein%2C%20fox-1%20homolog%20%28c.%20elegans%29%203
RNA binding protein, fox-1 homolog (C. elegans) 3 (Rbfox3) is a protein that in humans is encoded by the RBFOX3 gene. It is related to the alternative splicing factors Rbfox1 and Rbfox2, but instead of its involvement in splicing, it is most well known as the nuclear biomarker NeuN. See also NeuN Rbfox1 and Rbfox2 Alternative splicing
https://en.wikipedia.org/wiki/Cable%20Internet%20access
In telecommunications, cable Internet access, shortened to cable Internet, is a form of broadband internet access which uses the same infrastructure as cable television. Like digital subscriber line and fiber to the premises services, cable Internet access provides network edge connectivity (last mile access) from the Internet service provider to an end user. It is integrated into the cable television infrastructure analogously to DSL which uses the existing telephone network. Cable TV networks and telecommunications networks are the two predominant forms of residential Internet access. Recently, both have seen increased competition from fiber deployments, wireless, mobile networks and satellite internet access. Hardware and bit rates Broadband cable Internet access requires a cable modem at the customer's premises and a cable modem termination system (CMTS) at a cable operator facility, typically a cable television headend. The two are connected via coaxial cable to a hybrid fibre-coaxial (HFC) network. While access networks are referred to as last-mile technologies, cable Internet systems can typically operate where the distance between the modem and the termination system is up to . If the HFC network is large, the cable modem termination system can be grouped into hubs for efficient management. Several standards have been used for cable internet, but the most common is DOCSIS. A cable modem at the customer is connected via coaxial cable to an optical node, and thus into an HFC network. An optical node serves many modems as the modems are connected with coaxial cable to a coaxial cable "trunk" via distribution "taps" on the trunk, which then connects to the node, possibly using amplifiers along the trunk. The optical node converts the Radiofrequency (RF) signal in the coaxial cable trunk into light pulses to be sent through optical fibers in the HFC network. At the other end of the network, an optics platform or headend platform converts the light pulses into
https://en.wikipedia.org/wiki/Rex%20Black
Rex Black (born July 16, 1964) is a software engineer, entrepreneur and an author in the field of software testing. Black graduated from the University of California at Los Angeles (UCLA) in 1990 with a Bachelor of Science in computer science and engineering. In 1983, Black started work in the software engineering field and has spent more than 20 years in software testing. Black is chief executive officer and founder of RBCS, Inc. Black has also taught courses in managing the testing process at the University of Texas at Austin Center for Lifelong Engineering Education and is an adjunct faculty member and course developer of business analysis and software testing curricula for Villanova University. He serves on the editorial board of Software Test & Performance Magazine. Software quality advocacy From April 2005 to April 2009, Black served as president of the International Software Testing Qualifications Board (ISTQB), an international group of 40 international boards responsible for the international qualification scheme called "ISTQB Certified Tester." Black also co-authored the ISTQB Advanced Syllabus (released in July 2007), and is vice chair of the Working Party Foundation Level ISTQB Syllabus. From January 2005 to January 2008, Black was president of the American Software Testing Qualifications Board (ASTQB), a branch of the International Software Testing Qualification Board. He currently sits on the ASTQB Board of Directors. Black served on the Program Committee and as a Session Chair for the Third World Congress on Software Quality in Munich in September 2005. He has also presented at the 2008 World Congress on Software Quality in Maryland and the 2009-2013 Software Testing Conference (Softec09 and SoftecAsia13) in Kuala Lumpur. Selected bibliography Books (Includes Indian, Chinese, Russian, and Japanese editions.) Articles
https://en.wikipedia.org/wiki/Mir-365%20microRNA%20precursor%20family
In molecular biology mir-365 microRNA is a short RNA molecule. MicroRNAs function to regulate the expression levels of other genes by several mechanisms. See also MicroRNA
https://en.wikipedia.org/wiki/Carboxylesterase%20type%20B
Carboxylesterase, type B is a family of evolutionarily related proteins that belongs to the superfamily of proteins with the Alpha/beta hydrolase fold. Higher eukaryotes have many distinct esterases. The different types include those that act on carboxylic esters (). Carboxyl-esterases have been classified into three categories (A, B and C) on the basis of differential patterns of inhibition by organophosphates. The sequence of a number of type-B carboxylesterases indicates that the majority are evolutionarily related. As is the case for lipases and serine proteases, the catalytic apparatus of esterases involves three residues (catalytic triad): a serine, a glutamate or aspartate and a histidine. Subfamilies Neuroligin Cholinesterase Examples Human genes that encode proteins containing the carboxylesterase domain include: ACHE ARACHE BCHE CEL CES1 CES2 CES3 CES4 CES7 CES8 NLGN1 NLGN2 NLGN3 NLGN4X NLGN4Y TG See also Carboxylesterase
https://en.wikipedia.org/wiki/Paul%20M.%20G.%20L%C3%A9vy
Paul Michel Gabriel, Baron Lévy (27 November 1910 – 16 August 2002) was a Belgian journalist and professor. He was born in Brussels and was a Holocaust survivor. He worked for many years as Director of Information at the Council of Europe, helping to create the Flag of Europe in the 1950s in collaboration with Arsène Heitz. Early career Before the war, Lévy directed the information services of the Belgian national broadcaster, the Institut National de Radiodiffusion (INR). Under the occupation, he refused to collaborate with the German-backed radio and was sacked and arrested. He was sent to Fort Breendonk, a German prison camp near Mechelen. Released in 1941, he was placed under surveillance by the German authorities in Brussels, but succeeded in escaping to Britain via the Zéro network in July 1942 where he joined Antoine Delfosse, minister and commander of the Armée de la Libération (AL) the principal resistance group. He served alongside Delfosse in the Ministry of Justice of the Belgian government in London. He also spoke on Radio Belgique, the French and Dutch language radio station of the BBC broadcast to occupied Belgium. His principal work, however, was in the Commission belge pour l'étude des problèmes d'après-guerre ("Belgian Commission of study of post-war problems"), in which he founded Mission Samoyède which planned to set up radio broadcasting in Belgium soon after the liberation. Following the invasion of Europe by the Allies, he returned to the continent working as an interpreter and press officer alongside General Henning Linden. His coverage included the liberation of Dachau concentration camp. After the Liberation, and despite having Socialist leanings, he worked for the new Belgian Democratic Union (UDB-BDU) party. In 1946, he was elected deputy for the Nivelles region as the only successful candidate of the UDB-BDU. He resigned to return to radio work. He is said to have invented the neologism Irénologie which is the French term for the stud
https://en.wikipedia.org/wiki/Molecular%20geometry
Molecular geometry is the three-dimensional arrangement of the atoms that constitute a molecule. It includes the general shape of the molecule as well as bond lengths, bond angles, torsional angles and any other geometrical parameters that determine the position of each atom. Molecular geometry influences several properties of a substance including its reactivity, polarity, phase of matter, color, magnetism and biological activity. The angles between bonds that an atom forms depend only weakly on the rest of molecule, i.e. they can be understood as approximately local and hence transferable properties. Determination The molecular geometry can be determined by various spectroscopic methods and diffraction methods. IR, microwave and Raman spectroscopy can give information about the molecule geometry from the details of the vibrational and rotational absorbance detected by these techniques. X-ray crystallography, neutron diffraction and electron diffraction can give molecular structure for crystalline solids based on the distance between nuclei and concentration of electron density. Gas electron diffraction can be used for small molecules in the gas phase. NMR and FRET methods can be used to determine complementary information including relative distances, dihedral angles, angles, and connectivity. Molecular geometries are best determined at low temperature because at higher temperatures the molecular structure is averaged over more accessible geometries (see next section). Larger molecules often exist in multiple stable geometries (conformational isomerism) that are close in energy on the potential energy surface. Geometries can also be computed by ab initio quantum chemistry methods to high accuracy. The molecular geometry can be different as a solid, in solution, and as a gas. The position of each atom is determined by the nature of the chemical bonds by which it is connected to its neighboring atoms. The molecular geometry can be described by the positions
https://en.wikipedia.org/wiki/Bedtime
Bedtime (also called putting to bed or tucking in) is a ritual part of parenting to help children feel more secure and become accustomed to a more rigid schedule of sleep than they might prefer. The ritual of bedtime is aimed at facilitating the transition from wakefulness to sleep. It may involve bedtime stories, children's songs, nursery rhymes, bed-making and getting children to change into nightwear. In some religious households, prayers are said shortly before going to bed. Sleep training may be part of the bedtime ritual for babies and toddlers. In adult use, the term means simply "time for bed", similar to curfew, as in "It's past my bedtime". Some people are accustomed to drinking a nightcap or herbal tea at bedtime. Sleeping coaches are also used to help individuals reach their bedtime goals. Researchers studying sleep are finding patterns revealing that cell phone use at night disturbs going to sleep at one's bedtime and achieving a good night's sleep. Synonyms In boarding schools and on trips or holidays that involve young people, the equivalent of bedtime is lights out or lights-out - this term is also used in prisons, hospitals, in the military, and in sleep research. Newspapers In the pre-digital newspaper era, a newspaper, usually daily, was "put to bed" when editorial work on the issue had formally ceased, the content was fixed, and printing could begin. See also Crib talk Lullaby Sleep cycle
https://en.wikipedia.org/wiki/Archaeal%20Richmond%20Mine%20acidophilic%20nanoorganisms
Archaeal Richmond Mine acidophilic nanoorganisms (ARMAN) were first discovered in an extremely acidic mine located in northern California (Richmond Mine at Iron Mountain) by Brett Baker in Jill Banfield's laboratory at the University of California Berkeley. These novel groups of archaea named ARMAN-1, ARMAN-2 (Candidatus Micrarchaeum acidiphilum ARMAN-2), and ARMAN-3 were missed by previous PCR-based surveys of the mine community because the ARMANs have several mismatches with commonly used PCR primers for 16S rRNA genes. Baker et al. detected them in a later study using shotgun sequencing of the community. The three groups were originally thought to represent three unique lineages deeply branched within the Euryarchaeota, a subgroup of the Archaea. However, based on a more complete archaeal genomic tree, they were assigned to a new superphylum named DPANN. The ARMAN groups now comprise deeply divergent phyla named Micrarchaeota and Parvarchaeota. Their 16S rRNA genes differ by as much as 17% between the three groups. Prior to their discovery, all of the Archaea shown to be associated with Iron Mountain belonged to the order Thermoplasmatales (e.g., Ferroplasma acidarmanus). Distribution Examination of different sites in the mine using fluorescent probes specific to the ARMAN groups has revealed that they are always present in communities associated with acid mine drainage (AMD), at Iron Mountain in northern California, that have pH < 1.5. They are usually found in low abundance (5–25%) in the community. Recently, closely related organisms have been detected in an acidic boreal mire or bog in Finland, another acid mine drainage site in extreme environments of Rio Tinto, southwestern Spain, and at a weak-alkaline deep subsurface hot spring in Yunohama, Japan. Cell structure and ecology Using cryo-electron tomography, a 3D characterization of uncultivated ARMAN cells within mine biofilms revealed that they are right at the cell size predicted to be the lower l
https://en.wikipedia.org/wiki/Site%20reliability%20engineering
Site reliability engineering (SRE) is a set of principles and practices that applies aspects of software engineering to IT infrastructure and operations. SRE claims to create highly reliable and scalable software systems. Although they are closely related, SRE is slightly different from DevOps. History The field of site reliability engineering originated at Google with Ben Treynor Sloss, who founded a site reliability team after joining the company in 2003. In 2016, Google employed more than 1,000 site reliability engineers. After originating at Google in 2003, the concept spread into the broader software development industry, and other companies subsequently began to employ site reliability engineers. The position is more common at larger web companies, as small companies often do not operate at a scale that would require dedicated SREs. Organizations that have adopted the concept include Airbnb, Dropbox, IBM, LinkedIn, Netflix, and Wikimedia. According to a 2021 report by the DevOps Institute, 22% of organizations in a survey of 2,000 respondents had adopted the SRE model. Definition Site reliability engineering, as a job role, may be performed by individual contributors or organized in teams, responsible for a combination of the following within a broader engineering organization: System availability, latency, performance, efficiency, change management, monitoring, emergency response, and capacity planning. Site reliability engineers often have backgrounds in software engineering, system engineering, or system administration. Focuses of SRE include automation, system design, and improvements to system resilience. Site reliability engineering, as a set of principles and practices, can be performed by anyone. SRE is similar to security engineering in that everyone is expected to contribute to good security practices, but a company may decide to eventually hire staff specialists for the job. Conversely, for securing internet systems, companies may hire securit
https://en.wikipedia.org/wiki/Tom%20Bethell
Tom Bethell (; July 17, 1936 – February 12, 2021) was an American journalist who wrote mainly on economic and scientific issues. Life and career Bethell was born and raised in London, England. He was educated at Downside School and Trinity College, Oxford. A resident of the District of Columbia, he lived in Virginia, Louisiana, and California. From 1962 to 1965 he taught math at Woodberry Forest School, Virginia. He was married to Donna R. Fitzpatrick of Washington, D.C. He was a senior editor of The American Spectator and was for 25 years a media fellow of the Hoover Institution. He was Washington editor of Harper's, and an editor of the Washington Monthly. In 1980, he received a Gerald Loeb Award Honorable Mention for Columns/Editorial for "Fooling With the Budget." Jim Garrison investigation Bethell was hired as a researcher by New Orleans district attorney Jim Garrison to assist with his prosecution of Clay Shaw for conspiracy to assassinate John F. Kennedy. Bethell gave no credence to Garrison's charges that Shaw was involved. Shaw was acquitted after the jury deliberated for about an hour. Controversy In 1976, Bethell wrote a controversial article for Harper’s Magazine titled "Darwin's Mistake". According to Bethell there is no independent criterion of fitness and natural selection is a tautology. Bethell also stated that Darwin's theory was on "the verge of collapse" and natural selection had been "quietly abandoned" by his supporters. These claims were disputed by biologists. The paleontologist Stephen Jay Gould wrote a rebuttal to Bethell's arguments. Bethell was a member of the Group for the Scientific Reappraisal of the HIV-AIDS Hypothesis, which denies that HIV causes AIDS. In The Politically Incorrect Guide to Science (2005), he promoted denial of the existence of man-made global warming, AIDS denialism, and denial of evolution (which Bethell denied was "real science"). Bethell endorsed the intelligent design documentary Expelled: No Intelligence
https://en.wikipedia.org/wiki/Parallel%20Redundancy%20Protocol
Parallel Redundancy Protocol (PRP) is a network protocol standard for Ethernet that provides seamless failover against failure of any network component. This redundancy is invisible to the application. PRP nodes have two ports and are attached to two separate networks of similar topology. PRP can be implemented entirely in software, i.e. integrated in the network driver. Nodes with single attachment can be attached to one network only. This is in contrast to the companion standard HSR (IEC 62439-3 Clause 5), with which PRP shares the operating principle. PRP and HSR are independent of the application-protocol and can be used by most Industrial Ethernet protocols in the IEC 61784 suite. PRP and HSR are standardized in IEC 62439-3:2016. They have been adopted for substation automation in the framework of IEC 61850. PRP and HSR are suited for applications that require high availability and short switchover time, such as protection for electrical substations and synchronized drives, for instance in printing machines or high-power inverters. For such applications, the recovery time of commonly used protocols such as the Rapid Spanning Tree Protocol (RSTP) is too long. The cost of PRP is a duplication of all network elements that require it. Cost impact is low since it makes little difference if the spares lie on the shelf or are actually working in the plant. The maintenance interval is shortened since more components can fail in use, but such outages will remain invisible to the application. PRP does not cover end node failures, but redundant nodes may be connected via a PRP network. Topology Each PRP network node (DANP) has two Ethernet ports attached to two separate local area networks of arbitrary, but similar topology. The two LANs have no links connecting them and are assumed to be fail-independent, to avoid common mode failures. Nodes with single attachment (such as a printer) are either attached to one network only (and therefore can communicate
https://en.wikipedia.org/wiki/Ultrabarrelled%20space
In functional analysis and related areas of mathematics, an ultrabarrelled space is a topological vector space (TVS) for which every ultrabarrel is a neighbourhood of the origin. Definition A subset of a TVS is called an ultrabarrel if it is a closed and balanced subset of and if there exists a sequence of closed balanced and absorbing subsets of such that for all In this case, is called a defining sequence for A TVS is called ultrabarrelled if every ultrabarrel in is a neighbourhood of the origin. Properties A locally convex ultrabarrelled space is a barrelled space. Every ultrabarrelled space is a quasi-ultrabarrelled space. Examples and sufficient conditions Complete and metrizable TVSs are ultrabarrelled. If is a complete locally bounded non-locally convex TVS and if is a closed balanced and bounded neighborhood of the origin, then is an ultrabarrel that is not convex and has a defining sequence consisting of non-convex sets. Counter-examples There exist barrelled spaces that are not ultrabarrelled. There exist TVSs that are complete and metrizable (and thus ultrabarrelled) but not barrelled. See also Citations Bibliography Topological vector spaces
https://en.wikipedia.org/wiki/Anabasine
Anabasine is a pyridine and piperidine alkaloid found in the tree tobacco (Nicotiana glauca) plant, a close relative of the common tobacco plant (Nicotiana tabacum). It is a structural isomer of, and chemically similar to, nicotine. Its principal (historical) industrial use is as an insecticide. Anabasine is present in trace amounts in tobacco smoke, and can be used as an indicator of a person's exposure to tobacco smoke. Pharmacology Anabasine is a nicotinic acetylcholine receptor agonist. In high doses, it produces a depolarizing block of nerve transmission, which can cause symptoms similar to those of nicotine poisoning and, ultimately, death by asystole. In larger amounts it is thought to be teratogenic in swine. The intravenous LD50 of anabasine ranges from 11 mg/kg to 16 mg/kg in mice, depending on the enantiomer. Analogs B. Bhatti et al. synthesized several higher-potency, sterically strained bicyclic analogs of anabasine: 2-(Pyridin-3-yl)-1-azabicyclo[3.2.2]nonane (TC-1698), 2-(Pyridin-3-yl)-1-azabicyclo[2.2.2]octane, and 2-(Pyridin-3-yl)-1-azabicyclo[3.2.1]octane. See also Anatabine
https://en.wikipedia.org/wiki/Rough%20number
A k-rough number, as defined by Finch in 2001 and 2003, is a positive integer whose prime factors are all greater than or equal to k. k-roughness has alternately been defined as requiring all prime factors to strictly exceed k. Examples (after Finch) Every odd positive integer is 3-rough. Every positive integer that is congruent to 1 or 5 mod 6 is 5-rough. Every positive integer is 2-rough, since all its prime factors, being prime numbers, exceed 1. See also Buchstab function, used to count rough numbers Smooth number Notes
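A quick computational restatement of the definition (illustrative only; trial division is used for clarity and is practical only for small n):

```python
# A k-rough number has every prime factor greater than or equal to k.
def is_k_rough(n: int, k: int) -> bool:
    if n < 1:
        raise ValueError("n must be a positive integer")
    d = 2
    while d * d <= n:
        if n % d == 0:
            if d < k:
                return False        # found a prime factor smaller than k
            n //= d                 # divide out the factor and keep going
        else:
            d += 1
    return n == 1 or n >= k         # any remaining factor is prime

# Every odd number is 3-rough, e.g. 35 = 5 * 7; 21 is not 5-rough since 3 | 21.
assert is_k_rough(35, 3)
assert not is_k_rough(21, 5)
```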
https://en.wikipedia.org/wiki/Pyramid%20%28geometry%29
In geometry, a pyramid is a polyhedron formed by connecting a polygonal base and a point, called the apex. Each base edge and apex form a triangle, called a lateral face. It is a conic solid with polygonal base. A pyramid with an n-sided base has n + 1 vertices, n + 1 faces, and 2n edges. All pyramids are self-dual. Terminology A right pyramid has its apex directly above the centroid of its base. Nonright pyramids are called oblique pyramids. A regular pyramid has a regular polygon base and is usually implied to be a right pyramid. When unspecified, a pyramid is usually assumed to be a regular square pyramid, like the physical pyramid structures. A triangle-based pyramid is more often called a tetrahedron. Among oblique pyramids, like acute and obtuse triangles, a pyramid can be called acute if its apex is above the interior of the base and obtuse if its apex is above the exterior of the base. A right-angled pyramid has its apex above an edge or vertex of the base. In a tetrahedron these qualifiers change based on which face is considered the base. Pyramids are a class of the prismatoids. Pyramids can be doubled into bipyramids by adding a second offset point on the other side of the base plane. A pyramid cut off by a plane is called a truncated pyramid; if the truncation plane is parallel to the pyramid's base, it is called a frustum. Right pyramids with a regular base A right pyramid with a regular base has isosceles triangle sides, with symmetry group Cnv or [1,n], of order 2n. It can be given an extended Schläfli symbol ( ) ∨ {n}, representing a point, ( ), joined (orthogonally offset) to a regular polygon, {n}. A join operation creates a new edge between all pairs of vertices of the two joined figures. The trigonal or triangular pyramid with all equilateral triangle faces becomes the regular tetrahedron, one of the Platonic solids. A lower symmetry case of the triangular pyramid is C3v, which has an equilateral triangle base, and 3 identical isosceles triangle sides. Th
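The vertex, edge and face counts quoted above can be sanity-checked against Euler's polyhedron formula V − E + F = 2; the short sketch below is illustrative only.

```python
# Counts for a pyramid over an n-sided base, plus a check of Euler's formula.
def pyramid_counts(n: int):
    vertices = n + 1      # the n base vertices plus the apex
    edges = 2 * n         # n base edges and n lateral edges
    faces = n + 1         # n triangular lateral faces plus the base itself
    assert vertices - edges + faces == 2   # Euler: V - E + F = 2
    return vertices, edges, faces

print(pyramid_counts(4))  # square pyramid: (5, 8, 5)
```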
https://en.wikipedia.org/wiki/Phase%20margin
In electronic amplifiers, the phase margin (PM) is the difference between the phase lag (< 0) and -180°, for an amplifier's output signal (relative to its input) at zero dB gain, i.e. unity gain, where the output signal has the same amplitude as the input. For example, if the amplifier's open-loop gain crosses 0 dB at a frequency where the phase lag is -135°, then the phase margin of this feedback system is -135° -(-180°) = 45°. See Bode plot#Gain margin and phase margin for more details. Theory Typically the open-loop phase lag (relative to input, < 0) varies with frequency, progressively increasing to exceed 180°, at which frequency the output signal becomes inverted, or antiphase in relation to the input. The PM will be positive but decreasing at frequencies less than the frequency at which inversion sets in (at which PM = 0), and PM is negative (PM < 0) at higher frequencies. In the presence of negative feedback, a zero or negative PM at a frequency where the loop gain exceeds unity (1) guarantees instability. Thus positive PM is a "safety margin" that ensures proper (non-oscillatory) operation of the circuit. This applies to amplifier circuits as well as, more generally, to active filters, under various load conditions (e.g. reactive loads). In its simplest form, involving ideal negative feedback voltage amplifiers with non-reactive feedback, the phase margin is measured at the frequency where the open-loop voltage gain of the amplifier equals the desired closed-loop DC voltage gain. More generally, PM is defined as that of the amplifier and its feedback network combined (the "loop", normally opened at the amplifier input), measured at a frequency where the loop gain is unity, and prior to the closing of the loop, through tying the output of the open loop to the input source, in such a way as to subtract from it. In the above loop-gain definition, it is assumed that the amplifier input presents zero load. To make this work for non-zero-load input,
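The worked example above (find the 0 dB crossover, take the phase lag there, compute PM = phase lag − (−180°)) can be reproduced numerically; the open-loop transfer function, DC gain and pole frequencies below are made-up illustrative values, not figures from the article.

```python
# Numerical estimate of phase margin for an assumed two-pole open-loop gain
# L(jw) = A0 / ((1 + jw/w1)(1 + jw/w2)); all numbers are illustrative.
import numpy as np

A0, w1, w2 = 1e5, 1e2, 1e6                 # assumed DC gain and pole frequencies (rad/s)
w = np.logspace(1, 8, 100_000)             # frequency sweep
L = A0 / ((1 + 1j * w / w1) * (1 + 1j * w / w2))

i = np.argmin(np.abs(np.abs(L) - 1.0))     # index of the 0 dB (unity-gain) crossover
phase_lag = np.degrees(np.angle(L[i]))     # a negative number, e.g. about -162 degrees here
phase_margin = phase_lag - (-180.0)        # PM = phase lag - (-180 degrees)
print(f"crossover ~ {w[i]:.3g} rad/s, phase margin ~ {phase_margin:.1f} deg")
```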
https://en.wikipedia.org/wiki/The%20Solitude%20of%20Prime%20Numbers%20%28novel%29
The Solitude of Prime Numbers (original title: La solitudine dei numeri primi) is a novel by the Italian author Paolo Giordano, published in 2008. It won the 2008 Strega Prize. A cinematic adaptation of the novel was directed by Saverio Costanzo and released in 2010. Overview The novel narrates the childhood and early adulthood of a boy and girl who were each exposed to traumatic situations that followed them into adulthood. Both are outsiders, similar to how prime numbers are outsiders in relation to the other numbers. They befriend each other, forming a special relationship – becoming very close but never romantic. Their relationship appears to have run its course, but circumstances bring them together again and while they clearly love each other, they are unable to express their emotions. This relationship is compared to prime pairs: always together, but never touching. Although the setting is never mentioned explicitly, references to Gran Madre Church, the Basilica of Superga, the Maria Ausiliatrice hospital, and to Fraitève and Monte Fraiteve identify the locale as Turin. Plot As a seven-year-old girl, Alice Della Rocca is forced by her father to take skiing lessons, although she hates the ski school and has no particular aptitude for the sport. One morning, Alice is separated from the rest of the group and falls off a cliff, sustaining serious injuries. Alice will remain crippled for the rest of her life. Mattia Balossino is a gifted and intelligent child, unlike his twin sister Michela, who has an intellectual disability. Isolated from the rest of his peers because of his uncomfortable sister, Mattia lives his childhood in solitude. When he and his sister are invited to a classmate's birthday party, Mattia leaves Michela in a park so he can attend the party without her. Upon his return to the park a few hours later, Michela has disappeared, perhaps drowned in a nearby pond, and is never found despite a police search and investigation. These events deep
https://en.wikipedia.org/wiki/10K%20resolution
10K resolution refers to a horizontal display resolution of approximately 10,000 pixels. Unlike 4K UHD and 8K UHD, there are no 10K resolutions defined in the UHDTV broadcast standard. The first 10K displays demonstrated were ultrawide "21:9" screens with a resolution of 10240 × 4320, the same vertical resolution as 8K UHD. History On June 5, 2015, Chinese manufacturer BOE showed a 10K display with an aspect ratio of 64:27 (≈21:9) and a resolution of 10240 × 4320. In November 2016, the Consumer Technology Association published CTA-861-G, an update to their standard for digital video transmission formats. This revision added support for 10240 × 4320, a 10K resolution with an aspect ratio of 64:27 (≈21:9), at up to 120 Hz. On January 4, 2017, HDMI version 2.1 was officially announced, and was later released on November 28, 2017. HDMI 2.1 includes support for all the formats listed in the CTA-861-G standard, including 10K (10240 × 4320) at up to 120 Hz. HDMI 2.1 specifies a new Ultra High Speed HDMI cable which supports a bandwidth of up to 48 Gbit/s. Display Stream Compression (DSC) 1.2a is used for video formats higher than 8K resolution with 4:2:0 chroma subsampling. 10K resolutions are also sometimes seen in the case of gaming, for instance high-resolution screenshots in the case of Minecraft with the OptiFine mod. Cameras There are multiple companies producing photo cameras capable of 10K and higher resolutions, such as Phase One, Fujifilm, Hasselblad, and Sony. Other companies also create sensors capable of 10K resolution, though they are mostly not available to the general public, and are often used for scientific or industrial purposes. Blackmagic Design is the only company producing a video camera capable of filming in resolutions 10K or higher with their URSA Mini Pro 12K. See also Ultrawide formats
https://en.wikipedia.org/wiki/Hartogs%20number
In mathematics, specifically in axiomatic set theory, a Hartogs number is an ordinal number associated with a set. In particular, if X is any set, then the Hartogs number of X is the least ordinal α such that there is no injection from α into X. If X can be well-ordered then the cardinal number of α is a minimal cardinal greater than that of X. If X cannot be well-ordered then there cannot be an injection from X to α. However, the cardinal number of α is still a minimal cardinal not less than or equal to the cardinality of X. (If we restrict to cardinal numbers of well-orderable sets then that of α is the smallest that is not less than or equal to that of X.) The map taking X to α is sometimes called Hartogs's function. This mapping is used to construct the aleph numbers, which are all the cardinal numbers of infinite well-orderable sets. The existence of the Hartogs number was proved by Friedrich Hartogs in 1915, using Zermelo–Fraenkel set theory alone (that is, without using the axiom of choice). Hartogs's theorem Hartogs's theorem states that for any set X, there exists an ordinal α such that |α| ≰ |X|; that is, such that there is no injection from α to X. As ordinals are well-ordered, this immediately implies the existence of a Hartogs number for any set X. Furthermore, the proof is constructive and yields the Hartogs number of X. Proof See . Let α be the class of all ordinal numbers β for which an injective function exists from β into X. First, we verify that α is a set. X × X is a set, as can be seen in Axiom of power set. The power set of X × X is a set, by the axiom of power set. The class W of all reflexive well-orderings of subsets of X is a definable subclass of the preceding set, so it is a set by the axiom schema of separation. The class of all order types of well-orderings in W is a set by the axiom schema of replacement, as (Domain(w), w) ≅ (β, ≤) can be described by a simple formula. But this last set is exactly α. Now, because a transitive set
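In the notation above, the Hartogs number (often written ℵ(X)) can be stated compactly as a minimum over the ordinals; this display simply restates the definition already given.

```latex
\aleph(X) \;=\; \min\{\,\alpha \in \mathrm{Ord} \;:\; \text{there is no injection } \alpha \hookrightarrow X \,\}
```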
https://en.wikipedia.org/wiki/Mono-BOC-cystamine
Mono-BOC-cystamine (mono-BOC-protected cystamine) is a tert-butyloxycarbonyl (BOC) derivative of cystamine used as a crosslinker in biotechnology and molecular biology applications. This compound was originally reported by Hansen et al. Uses The disulfide bond allows mono-BOC-cystamine to be easily cleaved, allowing removal of the tagging residue when desired. Mono-BOC-cystamine is used as a crosslinker for the synthesis of a cleavable photo-cross-linking reagent. Mono-BOC-cystamine is used as a crosslinker for the synthesis of a biodegradable cystamine spacer in PGA-cystamine-Gd-DO3A, which shows improved MRI contrast for breast carcinoma imaging in mice. Related compounds Biotin-Peg2-Amine Biotin-Peg2-Maleimide Biotin-Peg3-Amine Biotin-Peg4-NHS
https://en.wikipedia.org/wiki/Outline%20of%20transport
The following outline is provided as an overview of and topical guide to transport: Transport or transportation – movement of people and goods from one place to another. Essence of transport Driving involves controlling a vehicle, usually a motor vehicle such as a truck, bus, or automobile. For motorcycles, bicycles and animals, it is called riding. Shipping, transporting of goods and cargo, by land, sea, and air Travel, movement of people, by land, sea, and air Types of transport By availability Private transport Public transport (public transit) Modes and vehicles Intermodal passenger transport Aviation Aviation Fixed-wing aircraft Airship (dirigible) Autogyro Balloon Blimp Helicopter Human-powered aircraft Parachute (downward air transport only) Rocket Projectile (goods only, normally explosives) / Human cannonball Supersonic transport Zeppelin Animal-powered transport Animal-powered transport Animals domesticated for transport camel, Arabian, and Bactrian carabao deer dog sled dog Dogcart (dog-drawn) elephant equine donkey mule hinny horse pack horse draught horse riding horse coach horse llama moose ostrich ox reindeer sheep yak Turtles were used for riding as a sport in early 20th-century Australia Dolphins (to carry markers to attach to detected mines) Pigeon (for carrying messages) Animal-powered vehicles barge (sometimes pulled by humans) berlin (vehicle) Brougham (carriage) carriage cart chaise charabanc chariot (ancient form sometimes used in combat, later a racing machine, later a name for something entirely different in carriages) coach Conestoga wagon curricle dogcart dray ferry float gig governess cart Hansom cab horsecar horse-drawn boat horse-powered boat Experiment (horse-powered boat) howdah litter (vehicle) (sometimes carried by humans, mainly used with equines, though occasionally camels) mail coach Michigan logging wheels omnibus bullock cart pantechnicon van
https://en.wikipedia.org/wiki/Cognitive%20architecture
A cognitive architecture refers to both a theory about the structure of the human mind and to a computational instantiation of such a theory used in the fields of artificial intelligence (AI) and computational cognitive science. The formalized models can be used to further refine a comprehensive theory of cognition and as a useful artificial intelligence program. Successful cognitive architectures include ACT-R (Adaptive Control of Thought - Rational) and SOAR. The research on cognitive architectures as software instantiation of cognitive theories was initiated by Allen Newell in 1990. The Institute for Creative Technologies defines cognitive architecture as: "hypothesis about the fixed structures that provide a mind, whether in natural or artificial systems, and how they work together – in conjunction with knowledge and skills embodied within the architecture – to yield intelligent behavior in a diversity of complex environments." History Herbert A. Simon, one of the founders of the field of artificial intelligence, stated that the 1960 thesis by his student Ed Feigenbaum, EPAM provided a possible "architecture for cognition" because it included some commitments for how more than one fundamental aspect of the human mind worked (in EPAM's case, human memory and human learning). John R. Anderson started research on human memory in the early 1970s and his 1973 thesis with Gordon H. Bower provided a theory of human associative memory. He included more aspects of his research on long-term memory and thinking processes into this research and eventually designed a cognitive architecture he eventually called ACT. He and his students were influenced by Allen Newell's use of the term "cognitive architecture". Anderson's lab used the term to refer to the ACT theory as embodied in a collection of papers and designs (there was not a complete implementation of ACT at the time). In 1983 John R. Anderson published the seminal work in this area, entitled The Architecture o
https://en.wikipedia.org/wiki/German%20Army%20cryptographic%20systems%20of%20World%20War%20II
German Army cryptographic systems of World War II were based on the use of three types of cryptographic machines that were used to encrypt communications between units at the division level. These were the Enigma machine, the teleprinter cipher attachment (Lorenz cipher), and the cipher teleprinters, the Siemens and Halske T52 and the Siemens T-43. All were considered insecure. Introduction Machine ciphers The first cipher attachment, the SZ-40 original mode, was introduced into the Army, probably in 1940, although Erich Hüttenhain, a cryptographer assigned to the Cipher Department of the High Command of the Wehrmacht (OKW/Chi), stated that the Army had been experimenting with this type of cryptographic apparatus from as early as 1937. It was replaced by the SZ-40 regular mode and this was succeeded by the SZ-42a and SZ-42b, both developed by Werner Liebknecht, Erich Hüttenhain and Fritz Menzer. The SZ-42c was also developed and 30 or 40 test sets built, but the apparatus was evidently not used. The first cipher teleprinter, the T-52a, was introduced in 1939. Newer versions, the T-52c, T-52d and T-52e, were also in use. The one-time tape cipher teleprinter designated the SFM T-43 was developed in 1943 and introduced in 1944. The machine was theoretically unbreakable, if the key tape was truly random. However, the key tape was pseudo-random, as it was generated by the T-52e, and therefore insecure. Hand Ciphers The German Army used hand ciphers below division level. The manually operated hand systems of the Army that were used between 1939 and 1942 were listed by Erich Hüttenhain as follows: A monoalphabetic-type substitution using a keyword-mixed alphabet in a 5 x 5 square A comb transposition A book key A double transposition cipher 4-S-40 This system was used until 1926 or 1927 The Playfair cipher in Single Playfair (TS-42) The Double Playfair Message Cypher 42 (NS-42) In 1942, Walter Fricke of the General der Nachrichtenaufklärung, decl
https://en.wikipedia.org/wiki/Alkaliphile
Alkaliphiles are a class of extremophilic microbes capable of survival in alkaline (pH roughly 8.5–11) environments, growing optimally around a pH of 10. These bacteria can be further categorized as obligate alkaliphiles (those that require high pH to survive), facultative alkaliphiles (those able to survive in high pH, but also grow under normal conditions) and haloalkaliphiles (those that require high salt content to survive). Background information Microbial growth in alkaline conditions presents several complications to normal biochemical activity and reproduction, as high pH is detrimental to normal cellular processes. For example, alkalinity can lead to denaturation of DNA, instability of the plasma membrane and inactivation of cytosolic enzymes, as well as other unfavorable physiological changes. Thus, to adequately circumvent these obstacles, alkaliphiles must either possess specific cellular machinery that works best in the alkaline range, or they must have methods of acidifying the cytosol in relation to the extracellular environment. To determine which of the above possibilities an alkaliphile uses, experimentation has demonstrated that alkaliphilic enzymes possess relatively normal pH optimums. The determination that these enzymes function most efficiently near physiologically neutral pH ranges (about 7.5–8.5) was one of the primary steps in elucidating how alkaliphiles survive intensely basic environments. Since the cytosolic pH must remain nearly neutral, alkaliphiles must have one or more mechanisms of acidifying the cytosol when in the presence of a highly alkaline environment. Mechanisms of cytosolic acidification Alkaliphiles maintain cytosolic acidification through both passive and active means. In passive acidification, it has been proposed that cell walls contain acidic polymers composed of residues such as galacturonic acid, gluconic acid, glutamic acid, aspartic acid, and phosphoric acid. Together, these residues form an acidic matrix that
https://en.wikipedia.org/wiki/Bhavnagar%20Amreli%20Forest
Bhavnagar Amreli Forest is a reserved area for the conservation of Asiatic lions. The new location is on the east side of Gir National Park, in the Amreli district of Gujarat. After the inclusion of the New Jesal sanctuary, the area of this forest will increase to 1,600 km2, which is larger than the Gir sanctuary. The Gujarat state government insisted on shifting the lions to this forest, claiming that the lions were the pride of Gujarat. However, the central government is pressuring the Gujarat state government to shift some lions to Kuno Wildlife Sanctuary. There are almost 110 lions outside the protected area in the Gir forest. A pride of about 20 lions will be introduced in three different areas of this forest. A large number of lions have been wiped out of Tanzania's Serengeti National Park. To prevent this from happening at the Gir forest, the Gujarat state government has proposed this habitat. Distribution of Asiatic lions according to the 2010 census: According to the 2010 census, 141 lions live in Bhavnagar and Amreli districts.
https://en.wikipedia.org/wiki/Perseus%20%28geometer%29
Perseus (; c. 150 BC) was an ancient Greek geometer, who invented the concept of spiric sections, in analogy to the conic sections studied by Apollonius of Perga. Life Few details of Perseus' life are known, as he is mentioned only by Proclus and Geminus; none of his own works have survived. Spiric sections The spiric sections result from the intersection of a torus with a plane that is parallel to the rotational symmetry axis of the torus. Consequently, spiric sections are fourth-order (quartic) plane curves, whereas the conic sections are second-order (quadratic) plane curves. Spiric sections are a special case of a toric section, and were the first toric sections to be described. Examples The most famous spiric section is the Cassini oval, which is the locus of points having a constant product of distances to two foci. For comparison, an ellipse has a constant sum of focal distances, a hyperbola has a constant difference of focal distances, and a circle has a constant ratio of focal distances.
https://en.wikipedia.org/wiki/Acoustical%20measurements%20and%20instrumentation
Analysis of sound and acoustics plays a role in such engineering tasks as product design, production test, machine performance, and process control. For instance, product design can require modification of sound level or noise for compliance with standards from ANSI, IEC, and ISO. The work might also involve design fine-tuning to meet market expectations. Here, examples include tweaking an automobile door latching mechanism to impress a consumer with a satisfying click or modifying an exhaust manifold to change the tone of an engine's rumble. Aircraft designers are also using acoustic instrumentation to reduce the noise generated on takeoff and landing. Acoustical measurements and instrumentation range from a handheld sound level meter to a 1000-microphone phased array. Components Most of the acoustical measurement and instrumentation systems can be broken down into three components: sensors, data acquisition and analysis. Sensors The most common sensor used for acoustic measurement is the microphone. Measurement-grade microphones are different from typical recording-studio microphones because they can provide a detailed calibration for their response and sensitivity. Other sensors include hydrophones for measuring sound in water, particle velocity probes for localizing acoustic leakage, sound intensity probes for quantifying acoustic emission and ranking, or accelerometers for measuring vibrations causing sound. The three main groups of microphones are pressure, free-field, and random-incidence, each with their own correction factors for different applications. Well-known acoustic sensor suppliers include PCB Piezotronics, Brüel & Kjær, GRAS and Audio Precision. Data acquisition Data acquisition hardware for acoustic measurements typically utilizes 24-bit analog-to-digital converters (ADCs), anti-aliasing filters, and other signal conditioning. This signal conditioning may include amplification, filtering, sensor excitation, and input configuration. An
https://en.wikipedia.org/wiki/Homeotropic%20alignment
In liquid crystals, homeotropic alignment is one of the ways of alignment of liquid crystalline molecules. Homeotropic alignment is the state in which a rod-like liquid crystalline molecule aligns perpendicularly to the substrate. In the polydomain state, the parts are also called homeotropic domains. In contrast, the state in which the molecule aligns parallel to the substrate is called homogeneous alignment. There are various other ways of alignment in liquid crystals. Because homeotropic alignment is not optically anisotropic, a dark field is observed between crossed polarizers in polarizing optical microscopy. By conoscope observation, however, a cross image is observed in the homeotropic alignments. Homeotropic alignment often appears in the smectic A phase (SA). In discotic liquid crystals homeotropic alignment is defined as the state in which an axis of the column structure, which is formed by disc-like liquid crystalline molecules, aligns perpendicularly to the substrate. In other words, this alignment looks like a state in which columns formed by piled-up coins are arranged in an orderly way on a table. In practice, the homeotropic alignment is usually achieved with surfactants and detergents, for example lecithin, some silanes, or some special polyimides (PI 1211). Generally liquid crystals align homeotropically at an air or glass interface.
https://en.wikipedia.org/wiki/Grand%20unification%20energy
The grand unification energy, or the GUT scale, is the energy level above which, it is believed, the electromagnetic force, weak force, and strong force become equal in strength and unify to one force governed by a simple Lie group. The exact value of the grand unification energy (if grand unification is indeed realized in nature) depends on the precise physics present at shorter distance scales not yet explored by experiments. If one assumes the Desert and supersymmetry, it is at around 10^25 eV or 10^16 GeV (≈ 1.6 megajoules). Some Grand Unified Theories (GUTs) can predict the grand unification energy but, usually, with large uncertainties due to model-dependent details such as the choice of the gauge group, the Higgs sector, the matter content or further free parameters. Furthermore, at the moment it seems fair to state that there is no agreed minimal GUT. The unification of the electroweak force and the strong force with the gravitational force in a so-called "Theory of Everything" requires an even higher energy level which is generally assumed to be close to the Planck scale of about 10^19 GeV. In theory, at such short distances, gravity becomes comparable in strength to the other three forces of nature known to date. This statement is modified if there exist additional dimensions of space at intermediate scales. In this case, the strength of gravitational interactions increases faster at smaller distances and the energy scale at which all known forces of nature unify can be considerably lower. This effect is exploited in models of large extra dimensions. The most powerful collider to date, the Large Hadron Collider (LHC), is designed to reach about 10^4 GeV in proton–proton collisions. The scale of 10^16 GeV is only a few orders of magnitude below the Planck energy of 10^19 GeV, and thus not within reach of man-made earth-bound colliders. See also Desert (particle physics) Standard Model Timeline of the Big Bang
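The megajoule figure in parentheses is just the electron-volt value converted to joules; a one-line check (standard conversion factor, not from the article) is shown below.

```python
# Convert 10^25 eV to joules using the standard eV-to-joule factor.
EV_IN_JOULES = 1.602176634e-19
print(1e25 * EV_IN_JOULES)   # ~1.6e6 J, i.e. about 1.6 megajoules
```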
https://en.wikipedia.org/wiki/Morningness%E2%80%93eveningness%20questionnaire
The morningness–eveningness questionnaire (MEQ) is a self-assessment questionnaire developed by researchers James A. Horne and Olov Östberg in 1976. Its main purpose is to measure whether a person's circadian rhythm (biological clock) produces peak alertness in the morning, in the evening, or in between. The original study showed that the subjective time of peak alertness correlates with the time of peak body temperature; morning types (early birds) have an earlier temperature peak than evening types (night owls), with intermediate types having temperature peaks between the morning and evening chronotype groups. The MEQ is widely used in psychological and medical research and has been professionally cited more than 4,000 times. MEQ questions The standard MEQ consists of 19 multiple-choice questions, each having four or five response options. Some example questions are: Responses to the questions are combined to form a composite score that indicates the degree to which the respondent favors morning versus evening. Subsequent researchers have created shorter versions with four, five, or six questions. Related research According to a 1997 study of identical and fraternal twins, 54% of variance in morningness–eveningness was due to genetic variability, 3% was due to age, and the rest was explained by non-shared environmental influences and errors in measurement. A study in 2000 showed that both "morningness" and "eveningness" participants performed poorly in the morning on the Multidimensional Aptitude Battery (MAB) tests. It thus did not support the hypothesis that there is a reliable relationship between morningness–eveningness, time of day, and cognitive ability. A study in 2008 examined the relationship between morningness and anxiety in adults aged 40–63. It found a negative correlation in women, but not in men, suggesting that gender-related variables may be attributed to morningness and eveningness when looking at mood. A study in 2009 examined differences
https://en.wikipedia.org/wiki/Diffeomorphometry
Diffeomorphometry is the metric study of imagery, shape and form in the discipline of computational anatomy (CA) in medical imaging. The study of images in computational anatomy relies on high-dimensional diffeomorphism groups which generate orbits of the form , in which images can be dense scalar magnetic resonance or computed axial tomography images. For deformable shapes these are the collection of manifolds , points, curves and surfaces. The diffeomorphisms move the images and shapes through the orbit according to which are defined as the group actions of computational anatomy. The orbit of shapes and forms is made into a metric space by inducing a metric on the group of diffeomorphisms. The study of metrics on groups of diffeomorphisms and the study of metrics between manifolds and surfaces has been an area of significant investigation. In Computational anatomy, the diffeomorphometry metric measures how close and far two shapes or images are from each other. Informally, the metric is constructed by defining a flow of diffeomorphisms which connect the group elements from one to another, so for then . The metric between two coordinate systems or diffeomorphisms is then the shortest length or geodesic flow connecting them. The metric on the space associated to the geodesics is given by. The metrics on the orbits are inherited from the metric induced on the diffeomorphism group. The group is thus made into a smooth Riemannian manifold with Riemannian metric associated to the tangent spaces at all . The Riemannian metric satisfies the condition that at every point of the manifold there is an inner product inducing the norm on the tangent space that varies smoothly across . Oftentimes, the familiar Euclidean metric is not directly applicable because the patterns of shapes and images don't form a vector space. In the Riemannian orbit model of Computational anatomy, diffeomorphisms acting on the forms don't act linearly. There are many ways to define metrics, an
https://en.wikipedia.org/wiki/Program%20evaluation%20and%20review%20technique
The program evaluation and review technique (PERT) is a statistical tool used in project management, which was designed to analyze and represent the tasks involved in completing a given project. First developed by the United States Navy in 1958, it is commonly used in conjunction with the critical path method (CPM) that was introduced in 1957. Overview PERT is a method of analyzing the tasks involved in completing a given project, especially the time needed to complete each task, and to identify the minimum time needed to complete the total project. It incorporates uncertainty by making it possible to schedule a project while not knowing precisely the details and durations of all the activities. It is more of an event-oriented technique rather than start- and completion-oriented, and is used more in those projects where time is the major factor rather than cost. It is applied on very large-scale, one-time, complex, non-routine infrastructure and on Research and Development projects. PERT offers a management tool, which relies "on arrow and node diagrams of activities and events: arrows represent the activities or work necessary to reach the events or nodes that indicate each completed phase of the total project." PERT and CPM are complementary tools, because "CPM employs one time estimation and one cost estimation for each activity; PERT may utilize three time estimates (optimistic, expected, and pessimistic) and no costs for each activity. Although these are distinct differences, the term PERT is applied increasingly to all critical path scheduling." History PERT was developed primarily to simplify the planning and scheduling of large and complex projects. It was developed for the U.S. Navy Special Projects Office in 1957 to support the U.S. Navy's Polaris nuclear submarine project. It found applications all over industry. An early example is when it was used for the 1968 Winter Olympics in Grenoble which applied PERT from 1965 until the opening of the 1968
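Where three time estimates are used, they are conventionally combined with the weighted average te = (o + 4m + p) / 6 and standard deviation (p − o) / 6; the snippet below is an illustrative sketch, and the activity durations are made up rather than taken from the article.

```python
# Standard PERT three-point estimate for a single activity.
def pert_estimate(optimistic: float, most_likely: float, pessimistic: float):
    expected = (optimistic + 4 * most_likely + pessimistic) / 6   # weighted average
    std_dev = (pessimistic - optimistic) / 6                      # spread of the estimate
    return expected, std_dev

te, sd = pert_estimate(optimistic=2, most_likely=4, pessimistic=12)
print(te, sd)   # 5.0 days expected, standard deviation ~1.67 days
```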
https://en.wikipedia.org/wiki/Project%20Canterbury
Project Canterbury (sometimes abbreviated as PC) is an online archive of material related to the history of Anglicanism. It was founded by Richard Mammana, Jr. in 1999 with a grant from Episcopal Church Presiding Bishop Frank T. Griswold, and is hosted by the non-profit Society of Archbishop Justus. The episcopal patron of the site is Terry Brown, retired bishop of Malaita in the Church of the Province of Melanesia; Geoffrey Rowell, Bishop of Gibraltar in Europe, had served in this capacity from 1999 until his death. Volunteer transcribers prepare material for the site, which incorporates modern scholarly material, primary source texts, photographic images and engravings. Imprint Since 2018, Project Canterbury has also been an imprint of Anglican historical material in printed formats. Titles Our Aunt: Low Church Observations of American Anglo-Catholicism (New Haven, 2018) Moravians and Anglicans: Ecumenical Sources (Resica Falls, Pennsylvania, 2021) Intercommunion between the Episcopal Church and the Polish National Catholic Church: An Introduction and Sourcebook (Resica Falls, Pennsylvania, 2022)
https://en.wikipedia.org/wiki/Tetramer%20assay
A tetramer assay (also known as a tetramer stain) is a procedure that uses tetrameric proteins to detect and quantify T cells that are specific for a given antigen within a blood sample. The tetramers used in the assay are made up of four major histocompatibility complex (MHC) molecules, which are found on the surface of most cells in the body. MHC molecules present peptides to T-cells as a way to communicate the presence of viruses, bacteria, cancerous mutations, or other antigens in a cell. If a T-cell's receptor matches the peptide being presented by an MHC molecule, an immune response is triggered. Thus, MHC tetramers that are bioengineered to present a specific peptide can be used to find T-cells with receptors that match that peptide. The tetramers are labeled with a fluorophore, allowing tetramer-bound T-cells to be analyzed with flow cytometry. Quantification and sorting of T-cells by flow cytometry enables researchers to investigate immune response to viral infection and vaccine administration as well as functionality of antigen-specific T-cells. Generally, if a person's immune system has encountered a pathogen, the individual will possess T cells with specificity toward some peptide on that pathogen. Hence, if a tetramer stain specific for a pathogenic peptide results in a positive signal, this may indicate that the person's immune system has encountered and built a response to that pathogen. History This methodology was first published in 1996 by a lab at Stanford University. Previous attempts to quantify antigen-specific T-cells involved the less accurate limiting dilution assay, which estimates numbers of T-cells at 50-500 times below their actual levels. Stains using soluble MHC monomers were also unsuccessful due to the low binding affinity of T-cell receptors and MHC-peptide monomers. MHC tetramers can bind to more than one receptor on the target T-cell, resulting in an increased total binding strength and lower dissociation rates. Uses CD8+ T-ce
https://en.wikipedia.org/wiki/Green%20manure
In agriculture, a green manure is a crop specifically cultivated to be incorporated into the soil while still green. Typically, the green manure's biomass is incorporated with a plow or disk, as is often done with (brown) manure. The primary goal is to add organic matter to the soil for its benefits. Green manuring is often used with legume crops to add nitrogen to the soil for following crops, especially in organic farming, but is also used in conventional farming. Method of Application Farmers apply green manure by blending available plant discards into the soil. Farmers begin the process of green manuring by growing legumes or collecting tree/shrub clippings. Harvesters gather the green manure crops and mix the plant material into the soil. The un-decomposed plants prepare the ground for cash crops by slowly releasing nutrients like nitrogen into the soil. Farmers may decide to add the green manure into the soil before or after cash crop planting. This variety in planting schedules can be seen in rice farming. Green manures are meant to be used alongside chemical fertilizers, as a way to reduce the amount of chemical fertilizer required; when used alone, they do not provide all of these advantages. Functions Green manures usually perform multiple functions that include soil improvement and soil protection: Leguminous green manures such as clover and vetch contain nitrogen-fixing symbiotic bacteria in root nodules that fix atmospheric nitrogen in a form that plants can use. This performs the vital function of fertilization. Depending on the species of cover crop grown, the amount of nitrogen released into the soil lies between 40 and 200 pounds per acre. With green manure use, the amount of nitrogen that is available to the succeeding crop is usually in the range of 40-60% of the total amount of nitrogen that is contained within the green manure crop. {| class="wikitable" |- |+Average biomass and nitrogen yields of several legumes ! Crop ! Biomass (tons
https://en.wikipedia.org/wiki/Microponics
Microponics, in agricultural practice, is a symbiotic integration of fish, plants, and micro-livestock within a semi-controlled environment, designed to enhance soil fertility and crop productivity. Coined by Gary Donaldson, an Australian urban farmer, in 2008, the term was used to describe his innovative concept of integrated backyard food production. It is important to note that while "microponics" had been previously used to refer to an obscure grafting method in hydroponics, Donaldson's application of the term was derived from the amalgamation of micro-livestock (micro-farming) and the cultivation of fish and plants, a practice commonly known as aquaponics. History The origins of microponics can be traced back to the integrated aquaculture experiments conducted by the New Alchemy Institute during the late 1960s and early 1970s. The New Alchemists developed innovative food production models that revolved around the integration of various elements, including fish, plants, ducks, rabbits, and other organisms, all housed within their solar and wind-powered Cape Cod Ark bio-shelter. The concept of integrated aquaculture, which involves converting the by-products (waste) of one species into the feedstock (food, fertilizer, etc.) for another species, deeply resonated with Gary Donaldson when he was introduced to it in the mid-1970s. His vision was to create a backyard food production system that empowered ordinary households to cultivate their own fresh and clean food using modest skills and appropriate technology. Recognizing that water was a fundamental necessity for all plant and animal species, Donaldson believed that aquaculture should play a central role in any integrated backyard food production system. While inspired by the idea of integration from the New Alchemists, Donaldson faced a challenge regarding scale. The Cape Cod Ark, which measured just under 30 meters in length and 6 meters in height, proved to be too large for an average backyard setting. Fur
https://en.wikipedia.org/wiki/Positive-real%20function
Positive-real functions, often abbreviated to PR function or PRF, are a kind of mathematical function that first arose in electrical network synthesis. They are complex functions, Z(s), of a complex variable, s. A rational function is defined to have the PR property if it has a positive real part and is analytic in the right half of the complex plane and takes on real values on the real axis. In symbols the definition is: Z(s) is real for real s, and Re[Z(s)] ≥ 0 whenever Re(s) ≥ 0. In electrical network analysis, Z(s) represents an impedance expression and s is the complex frequency variable, often expressed in terms of its real and imaginary parts, s = σ + jω, in which terms the PR condition can be stated: Re[Z(σ + jω)] ≥ 0 for all σ ≥ 0. The importance to network analysis of the PR condition lies in the realisability condition. Z(s) is realisable as a one-port rational impedance if and only if it meets the PR condition. Realisable in this sense means that the impedance can be constructed from a finite (hence rational) number of discrete ideal passive linear elements (resistors, inductors and capacitors in electrical terminology). Definition The term positive-real function was originally defined by Otto Brune to describe any function Z(s) which is rational (the quotient of two polynomials), is real when s is real, and has positive real part when s has a positive real part. Many authors strictly adhere to this definition by explicitly requiring rationality, or by restricting attention to rational functions, at least in the first instance. However, a similar more general condition, not restricted to rational functions, had earlier been considered by Cauer, and some authors ascribe the term positive-real to this type of condition, while others consider it to be a generalization of the basic definition. History The condition was first proposed by Wilhelm Cauer (1926) who determined that it was a necessary condition. Otto Brune (1931) coined the term positive-real for the condition and proved that it was both necessary and sufficient for realisability. Properties The sum of two
https://en.wikipedia.org/wiki/Ardipithecus%20ramidus
Ardipithecus ramidus is a species of australopithecine from the Afar region of Early Pliocene Ethiopia 4.4 million years ago (mya). A. ramidus, unlike modern hominids, has adaptations for both walking on two legs (bipedality) and life in the trees (arboreality). However, it would not have been as efficient at bipedality as humans, nor at arboreality as non-human great apes. Its discovery, along with Miocene apes, has reworked academic understanding of the chimpanzee–human last common ancestor from appearing much like modern-day chimpanzees, orangutans and gorillas to being a creature without a modern anatomical cognate. The facial anatomy suggests that A. ramidus males were less aggressive than those of modern chimps, which is correlated to increased parental care and monogamy in primates. It has also been suggested that it was among the earliest of human ancestors to use some proto-language, possibly capable of vocalizing at the same level as a human infant. This is based on evidence of human-like skull architecture, cranial base angle and vocal tract dimensions, all of which in A. ramidus are paedomorphic when compared to chimpanzees and bonobos. This suggests the trend toward paedomorphic or juvenile-like form evident in human evolution, may have begun with A. ramidus. Given these unique features, it has been argued that in A. ramidus we may have the first evidence of human-like forms of social behaviour, vocally mediated sociality as well as increased levels of prosociality via the process of self-domestication - all of which seem to be associated with the same underlying changes in skull architecture. A. ramidus appears to have inhabited woodland and bushland corridors between savannas, and was a generalized omnivore. Taxonomy The first remains were described in 1994 by American anthropologist Tim D. White, Japanese paleoanthropologist Gen Suwa, and Ethiopian paleontologist Berhane Asfaw. The holotype specimen, ARA-VP-6/1, comprised an associated set of 10 t
https://en.wikipedia.org/wiki/AltiVec
AltiVec is a single-precision floating point and integer SIMD instruction set designed and owned by Apple, IBM, and Freescale Semiconductor (formerly Motorola's Semiconductor Products Sector) — the AIM alliance. It is implemented on versions of the PowerPC processor architecture, including Motorola's G4, IBM's G5 and POWER6 processors, and P.A. Semi's PWRficient PA6T. AltiVec is a trademark owned solely by Freescale, so the system is also referred to as Velocity Engine by Apple and VMX (Vector Multimedia Extension) by IBM and P.A. Semi. While AltiVec refers to an instruction set, the implementations in CPUs produced by IBM and Motorola are separate in terms of logic design. To date, no IBM core has included an AltiVec logic design licensed from Motorola or vice versa. AltiVec is a standard part of the Power ISA v.2.03 specification. It was never formally a part of the PowerPC architecture until this specification although it used PowerPC instruction formats and syntax and occupied the opcode space expressly allocated for such purposes. Comparison to x86-64 SSE Both VMX/AltiVec and SSE feature 128-bit vector registers that can represent sixteen 8-bit signed or unsigned chars, eight 16-bit signed or unsigned shorts, four 32-bit ints or four 32-bit floating-point variables. Both provide cache-control instructions intended to minimize cache pollution when working on streams of data. They also exhibit important differences. Unlike SSE2, VMX/AltiVec supports a special RGB "pixel" data type, but it does not operate on 64-bit double-precision floats, and there is no way to move data directly between scalar and vector registers. In keeping with the "load/store" model of the PowerPC's RISC design, the vector registers, like the scalar registers, can only be loaded from and stored to memory. However, VMX/AltiVec provides a much more complete set of "horizontal" operations that work across all the elements of a vector; the allowable combinations of data type and opera
https://en.wikipedia.org/wiki/Nuisance%20wildlife%20management
Nuisance wildlife management is the selective removal of problem individuals or populations of specific species of wildlife. Other terms for the field include wildlife damage management, wildlife control, and animal damage control. Some wild animal species may get used to human presence, causing property damage or risking the transfer of diseases (zoonoses) to humans or pets. Many wildlife species coexist with humans very successfully, such as commensal rodents which have become more or less dependent on humans. Common nuisance species Wild animals that can cause problems in homes, gardens or yards include armadillos, skunks, boars, foxes, squirrels, snakes, rats, groundhogs, beavers, opossums, raccoons, bats, moles, deer, mice, coyotes, bears, ravens, seagulls, woodpeckers and pigeons. In the United States, some of these species are protected, such as bears, ravens, bats, deer, woodpeckers, and coyotes, and a permit may be required to control some species. Conflicts between people and wildlife arise in certain situations, such as when an animal's population becomes too large for a particular area to support. Human-induced changes in the environment will often result in increased numbers of a species. For example, piles of scrap building material make excellent sites where rodents can nest. Food left out for household pets is often equally attractive to some wildlife species. In these situations, the wildlife have suitable food and habitat and may become a nuisance. The Humane Society of the United States (HSUS) provides strategies for the control of species such as bats, bears, chipmunks, coyotes, deer, mice, raccoons and snakes. Control methods The most commonly used methods for controlling nuisance wildlife around homes and gardens include exclusion, habitat modification, repellents, toxic baits, glue boards, traps and frightening. Exclusion Exclusion techniques refer to sealing a home to prevent the entry of wildlife such as rodents (squirrels, rats, mice
https://en.wikipedia.org/wiki/Plate%20notation
In Bayesian inference, plate notation is a method of representing variables that repeat in a graphical model. Instead of drawing each repeated variable individually, a plate or rectangle is used to group variables that repeat together into a subgraph, and a number is drawn on the plate to represent the number of repetitions of the subgraph in the plate. The assumptions are that the subgraph is duplicated that many times, the variables in the subgraph are indexed by the repetition number, and any links that cross a plate boundary are replicated once for each subgraph repetition. Example In this example, we consider Latent Dirichlet allocation, a Bayesian network that models how documents in a corpus are topically related. There are two variables not in any plate; α is the parameter of the uniform Dirichlet prior on the per-document topic distributions, and β is the parameter of the uniform Dirichlet prior on the per-topic word distribution. The outermost plate represents all the variables related to a specific document, including θ_i, the topic distribution for document i. The M in the corner of the plate indicates that the variables inside are repeated M times, once for each document. The inner plate represents the variables associated with each of the words in document i: z_ij is the topic for the jth word in document i, and w_ij is the actual word used. The N in the corner represents the repetition of the variables in the inner plate N_i times, once for each word in document i. The circle representing the individual words is shaded, indicating that each w_ij is observable, and the other circles are empty, indicating that the other variables are latent variables. The directed edges between variables indicate dependencies between the variables: for example, each w_ij depends on z_ij and β. Extensions A number of extensions have been created by various authors to express more information than simply the conditional relationships. However, few of these have become s
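The generative process that the LDA plate diagram encodes can be sketched directly in code; the topic count K, vocabulary size V, document count M and document lengths N_i below are made-up values, and the Poisson document-length choice is an illustrative convention rather than part of plate notation itself.

```python
# Sketch of the LDA generative process described by the plate diagram above.
import numpy as np

rng = np.random.default_rng(0)
K, V, M = 3, 10, 5                 # topics, vocabulary size, number of documents (assumed)
alpha, beta = 0.5, 0.1             # uniform Dirichlet hyperparameters

phi = rng.dirichlet(beta * np.ones(V), size=K)     # per-topic word distributions (from beta)
corpus = []
for i in range(M):                                 # outer plate: repeated M times
    theta_i = rng.dirichlet(alpha * np.ones(K))    # topic distribution for document i
    N_i = rng.poisson(8) + 1                       # length of document i
    doc = []
    for j in range(N_i):                           # inner plate: repeated N_i times
        z_ij = rng.choice(K, p=theta_i)            # topic for the j-th word
        w_ij = rng.choice(V, p=phi[z_ij])          # observed word, depends on z_ij and beta
        doc.append(w_ij)
    corpus.append(doc)
print(corpus[0])
```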
https://en.wikipedia.org/wiki/Direct%20comparison%20test
In mathematics, the comparison test, sometimes called the direct comparison test to distinguish it from similar related tests (especially the limit comparison test), provides a way of deducing the convergence or divergence of an infinite series or an improper integral. In both cases, the test works by comparing the given series or integral to one whose convergence properties are known. For series In calculus, the comparison test for series typically consists of a pair of statements about infinite series with non-negative (real-valued) terms: If the infinite series ∑b_n converges and 0 ≤ a_n ≤ b_n for all sufficiently large n (that is, for all n > N for some fixed value N), then the infinite series ∑a_n also converges. If the infinite series ∑b_n diverges and 0 ≤ b_n ≤ a_n for all sufficiently large n, then the infinite series ∑a_n also diverges. Note that the series having larger terms is sometimes said to dominate (or eventually dominate) the series with smaller terms. Alternatively, the test may be stated in terms of absolute convergence, in which case it also applies to series with complex terms: If the infinite series ∑b_n is absolutely convergent and |a_n| ≤ |b_n| for all sufficiently large n, then the infinite series ∑a_n is also absolutely convergent. If the infinite series ∑b_n is not absolutely convergent and |b_n| ≤ |a_n| for all sufficiently large n, then the infinite series ∑a_n is also not absolutely convergent. Note that in this last statement, the series ∑a_n could still be conditionally convergent; for real-valued series, this could happen if the a_n are not all nonnegative. The second pair of statements are equivalent to the first in the case of real-valued series because ∑a_n converges absolutely if and only if ∑|a_n|, a series with nonnegative terms, converges. Proof The proofs of all the statements given above are similar. Here is a proof of the third statement. Let ∑a_n and ∑b_n be infinite series such that ∑b_n converges absolutely (thus ∑|b_n| converges), and without loss of generality assume that |a_n| ≤ |b_n| for all positive integers n. Consider the partial sum
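As a concrete illustration (an added example, not taken from the article), the first statement yields convergence of a series dominated term by term by a convergent p-series:

```latex
0 \le \frac{1}{n^2 + n} \le \frac{1}{n^2} \quad (n \ge 1),
\qquad
\sum_{n=1}^{\infty} \frac{1}{n^2} \text{ converges}
\;\Longrightarrow\;
\sum_{n=1}^{\infty} \frac{1}{n^2 + n} \text{ converges}.
```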
https://en.wikipedia.org/wiki/Trizol
TRIzol is a chemical solution widely used in the extraction of DNA, RNA, and proteins from cells. The solution was initially used and published by Piotr Chomczyński and Nicoletta Sacchi in 1987. TRIzol is the brand name of guanidinium thiocyanate from the Ambion part of Life Technologies, and Tri-Reagent is the brand name from MRC, which was founded by Chomczynski. Uses in extraction The correct name of the method is guanidinium thiocyanate-phenol-chloroform extraction. The use of TRIzol can result in DNA yields comparable to other extraction methods, and it leads to a more than 50% larger RNA yield. An alternative method for RNA extraction is phenol extraction and TCA/acetone precipitation. Chloroform should be replaced with 1-bromo-3-chloropropane when using the new-generation TRI Reagent. DNA and RNA from TRIzol and TRI Reagent can also be extracted using the Direct-zol Miniprep kit by Zymo Research. This method eliminates the use of chloroform and 1-bromo-3-chloropropane completely, bypassing phase-separation and precipitation steps. TRIzol is light-sensitive and is often stored in a dark-colored glass container covered in foil. It is stored at room temperature. In use, it is bright pink and resembles cough syrup. The smell of the phenol is extremely strong. TRIzol works by maintaining RNA integrity during tissue homogenization, while at the same time disrupting and breaking down cells and cell components. Hazards Vigilant caution should be taken while using TRIzol (due to the phenol and chloroform). TRIzol is labeled for acute oral, dermal, and inhalation toxicity, as well as skin corrosion/irritation, in the manufacturer's safety data sheet (SDS). Exposure to TRIzol can be a serious health hazard. Exposure can lead to serious chemical burns, permanent scarring and kidney failure. Experiments should be performed under a chemical hood, with a lab coat, nitrile gloves and a plastic apron. TRIzol waste should never be mixed with bleach or acids: the guanidinium thiocyanate in TRIzol rea