| source | text |
|---|---|
https://en.wikipedia.org/wiki/Time%20slicing%20%28digital%20broadcasting%29 | Time slicing is a technique used by the DVB-H and ATSC-M/H technologies for achieving power-savings on mobile terminal devices. It is based on the time-multiplexed transmission of different services.
DVB-H and ATSC-M/H transmit large pieces of data in bursts, allowing the receiver to be switched off during inactive periods. The result is a power saving of up to 90%, and the otherwise idle receiver can be used to monitor neighboring cells for seamless handovers.
Detailed description
Motivation
A particular problem for mobile terminals is their limited battery capacity. Being compatible with a broadband terrestrial service places a burden on the mobile terminal, because demodulating and decoding a high-data-rate stream dissipates considerable power in the tuner and the demodulator. An investigation at the beginning of DVB-H development showed that the total power consumption of a DVB-T front end exceeded 1 W at the time of the examination and was not expected to fall below 600 mW before 2006; a somewhat lower value now seems possible, but the envisaged target of at most 100 mW for the entire front end of a DVB-H terminal is still unobtainable for a DVB-T receiver.
A considerable drawback for battery-operated terminals is the fact that with DVB-T or ATSC, the whole data stream has to be decoded before any one of the services (TV programmes) of the multiplex can be accessed. The power saving made possible by time slicing is derived from the fact that essentially only those parts of the stream which carry the data of the service currently selected have to be processed. However, the data stream needs to be reorganized in a suitable way for that purpose. In DVB-H and ATSC-M/H, service multiplexing is performed in a pure time-division multiplex. The data of one particular service are therefore not transmitted continuously but in compact periodical bursts with interruptions in between. Multiplexing of severa |
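The power saving from time slicing follows directly from the receiver's duty cycle. The sketch below computes it from a burst length, burst cycle, and wake-up margin; the numbers are illustrative, not actual DVB-H parameters.

```python
# Power saving from time slicing: the front end is powered only for the
# burst plus a synchronization margin, instead of running continuously.

def time_slice_power_saving(burst_ms, cycle_ms, sync_margin_ms=0.0):
    """Fraction of front-end power saved versus continuous reception."""
    on_time = burst_ms + sync_margin_ms
    duty_cycle = on_time / cycle_ms
    return 1.0 - duty_cycle

# Illustrative example: a 100 ms burst every 2 s with a 100 ms wake-up margin
saving = time_slice_power_saving(burst_ms=100, cycle_ms=2000, sync_margin_ms=100)
print(f"{saving:.0%}")  # 90%, matching the saving quoted above
```

Shorter bursts or longer cycles push the saving higher, at the cost of more buffering in the terminal.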
https://en.wikipedia.org/wiki/Dirty%20paper%20coding | In telecommunications, dirty paper coding (DPC) or Costa precoding is a technique for efficient transmission of digital data through a channel subjected to some interference known to the transmitter. The technique consists of precoding the data in order to cancel the interference. Dirty-paper coding achieves the channel capacity, without a power penalty and without requiring the receiver to know the interfering signal.
The term dirty paper coding was coined by Max Costa who compared the technique to writing a message on a piece of paper which is partially soiled with random ink strokes or spots. By erasing and adding ink in the proper places, the writer can convey just as much information as if the paper were clean, even though the reader does not know where the dirt was. In this analogy, the paper is the channel, the dirt is interference, the writer is the transmitter, and the reader is the receiver.
Note that DPC at the encoder is an information-theoretic dual of Wyner-Ziv coding at the decoder.
Variants
Instances of dirty paper coding include Costa precoding (1983). Suboptimal approximations of dirty paper coding include Tomlinson-Harashima precoding (THP) published in 1971 and the vector perturbation technique of Hochwald et al. (2005).
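Tomlinson-Harashima precoding, the suboptimal approximation named above, can be sketched for a scalar channel y = x + s + n where the interference s is known at the transmitter. The modulo base M and the 4-PAM alphabet below are illustrative choices, not values from the cited papers.

```python
import numpy as np

M = 8.0  # assumed modulo base, chosen to cover the 4-PAM alphabet {-3,-1,1,3}

def mod(v):
    # symmetric modulo onto [-M/2, M/2)
    return v - M * np.floor(v / M + 0.5)

def thp_encode(data, interference):
    # pre-subtract the known interference, then fold back into the modulo
    # region so transmit power stays bounded regardless of |interference|
    return mod(data - interference)

def thp_decode(received):
    # the receiver applies the same modulo; it never needs to know s
    return mod(received)

rng = np.random.default_rng(0)
data = rng.choice([-3.0, -1.0, 1.0, 3.0], size=5)  # 4-PAM symbols
s = rng.uniform(-20, 20, size=5)                   # strong known interference
x = thp_encode(data, s)                            # |x| <= M/2: no power penalty
recovered = thp_decode(x + s)                      # noiseless channel, for clarity
print(np.allclose(recovered, data))  # True
```

Because x + s = data + kM for some integer k, the receiver's modulo strips the interference exactly, which is the "dirty paper" idea in its simplest form.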
Design considerations
DPC and DPC-like techniques require knowledge of the interference state in a non-causal manner, such as the channel state information of all users and other users' data. Hence, the design of a DPC-based system should include a procedure to feed this side information to the transmitters.
Applications
In 2003, Caire and Shamai applied DPC to the multi-antenna multi-user downlink, which information theorists refer to as the 'broadcast channel'. Since then, DPC has seen widespread use in wireless networks and has been extended into interference-aware coding techniques for dynamic wireless networks.
Recently, DPC has also been used for "informed digital watermarking" and is the modulation mechanism used by |
https://en.wikipedia.org/wiki/Berendsen%20thermostat | The Berendsen thermostat is an algorithm to re-scale the velocities of particles in molecular dynamics simulations to control the simulation temperature.
Basic description
In this scheme, the system is weakly coupled to a heat bath with temperature T0. The thermostat suppresses fluctuations of the kinetic energy of the system and therefore cannot produce trajectories consistent with the canonical ensemble. The temperature of the system is corrected such that the deviation from T0 decays exponentially with some time constant τ.
Though the thermostat does not generate a correct canonical ensemble (especially for small systems), for large systems on the order of hundreds or thousands of atoms/molecules, the approximation yields roughly correct results for most calculated properties. The scheme is widely used due to the efficiency with which it relaxes a system to some target (bath) temperature. In many instances, systems are initially equilibrated using the Berendsen scheme, while properties are calculated using the widely known Nosé–Hoover thermostat, which correctly generates trajectories consistent with a canonical ensemble. However, the Berendsen thermostat can result in the flying ice cube effect, an artifact which can be eliminated by using the more rigorous Bussi–Donadio–Parrinello thermostat; for this reason, it has been recommended that usage of the Berendsen thermostat be discontinued in almost all cases except for replication of prior studies.
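The rescaling step itself is simple: at each timestep the velocities are multiplied by λ = [1 + (Δt/τ)(T0/T − 1)]^(1/2), so the kinetic temperature relaxes toward the bath temperature. The sketch below uses illustrative values for Δt and τ.

```python
import math

# Minimal sketch of the Berendsen velocity-rescaling step.

def berendsen_lambda(T, T0, dt, tau):
    """Velocity scaling factor for one timestep of length dt."""
    return math.sqrt(1.0 + (dt / tau) * (T0 / T - 1.0))

def rescale(velocities, T, T0, dt, tau):
    lam = berendsen_lambda(T, T0, dt, tau)
    return [lam * v for v in velocities]

# A system at 350 K weakly coupled to a 300 K bath; kinetic energy (hence T)
# scales as lambda**2 each step. Units (ps) and values are illustrative.
T, T0, dt, tau = 350.0, 300.0, 0.002, 0.1
for _ in range(200):
    T *= berendsen_lambda(T, T0, dt, tau) ** 2
print(round(T, 1))  # 300.9: the deviation has decayed most of the way to 300 K
```

Note how the update is equivalent to T ← T + (Δt/τ)(T0 − T), i.e. pure exponential relaxation, with no stochastic term; this is exactly why the scheme suppresses the canonical fluctuations mentioned above.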
See also
Molecular mechanics
Software for molecular mechanics modeling |
https://en.wikipedia.org/wiki/Torsion-free%20abelian%20group | In mathematics, specifically in abstract algebra, a torsion-free abelian group is an abelian group which has no non-trivial torsion elements; that is, a group in which the group operation is commutative and the identity element is the only element with finite order.
While finitely generated abelian groups are completely classified, not much is known about infinitely generated abelian groups, even in the torsion-free countable case.
Definitions
An abelian group is said to be torsion-free if no element other than the identity is of finite order. Explicitly, for any n > 0, the only element g for which ng = 0 is g = 0.
A natural example of a torsion-free group is the additive group of integers Z, as only the integer 0 can be added to itself finitely many times to reach 0. More generally, the free abelian group Z^n is torsion-free for any n. An important step in the proof of the classification of finitely generated abelian groups is that every such torsion-free group is isomorphic to a Z^n.
A non-finitely generated countable example is given by the additive group of the polynomial ring Z[X] (the free abelian group of countable rank).
More complicated examples are the additive group of the rational field Q, or its subgroups such as Z[1/p] (rational numbers whose denominator is a power of the prime p). Yet more involved examples are given by groups of higher rank.
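The contrast between torsion and torsion-free behaviour can be checked computationally on small examples; the helper below is illustrative.

```python
# Torsion vs. torsion-free on two familiar groups: Z/nZ and Z.

def order_in_Zn(g, n):
    """Order of g in the cyclic group Z/nZ (smallest k >= 1 with k*g = 0 mod n)."""
    k, acc = 1, g % n
    while acc != 0:
        acc = (acc + g) % n
        k += 1
    return k

# In Z/6Z every nonzero element has finite order, so the group has torsion.
print([order_in_Zn(g, 6) for g in range(6)])  # [1, 6, 3, 2, 3, 6]

# In Z, n*g == 0 forces g == 0 for every n >= 1: no nonzero torsion element
# (checked here on a finite sample, since Z itself is infinite).
assert all(n * g != 0 for g in range(-100, 101) if g != 0 for n in range(1, 50))
```

Any finite abelian group is all torsion, which is why the interesting torsion-free examples in the text are necessarily infinite.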
Groups of rank 1
Rank
The rank of an abelian group A is the dimension of the Q-vector space A ⊗ Q. Equivalently it is the maximal cardinality of a linearly independent (over Z) subset of A.
If A is torsion-free then it injects into A ⊗ Q. Thus, torsion-free abelian groups of rank 1 are exactly the subgroups of the additive group Q.
Classification
Torsion-free abelian groups of rank 1 have been completely classified. To do so one associates to a group G a subset S of the prime numbers, as follows: pick any nonzero x ∈ G; for a prime p we say that p ∈ S if and only if x ∈ p^k G for every k ≥ 0. This does not depend on the choice of x, since for another nonzero y ∈ G there exist nonzero integers a and b such that ax = by. Baer proved that S is a complete isomorphism invari |
https://en.wikipedia.org/wiki/Flat%20neighborhood%20network | Flat Neighborhood Network (FNN) is a topology for distributed computing and other computer networks. Each node connects to two or more switches which, ideally, entirely cover the node collection, so that each node can connect to any other node in two "hops" (jump up to one switch and down to the other node). This contrasts with topologies that use fewer cables per node and reach remote nodes via intermediate nodes, as in the hypercube (see The Connection Machine).
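The defining two-hop property can be checked directly from a node-to-switch assignment: every pair of nodes must share at least one switch. The four-node, three-switch example below is made up for illustration.

```python
from itertools import combinations

# A hypothetical node-to-switch assignment; names are invented.
assignment = {
    "n0": {"sw_a", "sw_b"},
    "n1": {"sw_a", "sw_c"},
    "n2": {"sw_b", "sw_c"},
    "n3": {"sw_a", "sw_b"},
}

def is_flat_neighborhood(assignment):
    """True iff every pair of nodes shares a switch (two-hop reachability)."""
    return all(assignment[u] & assignment[v]
               for u, v in combinations(assignment, 2))

print(is_flat_neighborhood(assignment))  # True
```

Designing such an assignment with as few switch ports as possible is the hard part of building a real FNN; the check itself is just this pairwise-intersection test.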
See also
Thinking Machines Corporation built the Connection Machine employing hypercube topology for its compute nodes.
Kentucky's Linux/Athlon Testbed KLAT2 is an archetypal implementation.
External links
The Aggregate (at the University of Kentucky) defines FNN and includes a bibliography.
Supercomputers |
https://en.wikipedia.org/wiki/Human%20impact%20on%20the%20nitrogen%20cycle | Human impact on the nitrogen cycle is diverse. Agricultural and industrial nitrogen (N) inputs to the environment currently exceed inputs from natural N fixation. As a consequence of anthropogenic inputs, the global nitrogen cycle (Fig. 1) has been significantly altered over the past century. Global atmospheric nitrous oxide (N2O) mole fractions have increased from a pre-industrial value of ~270 nmol/mol to ~319 nmol/mol in 2005. Human activities account for over one-third of N2O emissions, most of which are due to the agricultural sector. This article is intended to give a brief review of the history of anthropogenic N inputs, and reported impacts of nitrogen inputs on selected terrestrial and aquatic ecosystems.
History of anthropogenic nitrogen inputs
Approximately 78% of earth's atmosphere is N gas (N2), which is an inert compound and biologically unavailable to most organisms. In order to be utilized in most biological processes, N2 must be converted to reactive nitrogen (Nr), which includes inorganic reduced forms (NH3 and NH4+), inorganic oxidized forms (NO, NO2, HNO3, N2O, and NO3−), and organic compounds (urea, amines, and proteins). N2 has a strong triple bond, and so a significant amount of energy (226 kcal mol−1) is required to convert N2 to Nr. Prior to industrial processes, the only sources of such energy were solar radiation and electrical discharges. Utilizing a large amount of metabolic energy and the enzyme nitrogenase, some bacteria and cyanobacteria convert atmospheric N2 to NH3, a process known as biological nitrogen fixation (BNF). The anthropogenic analogue to BNF is the Haber-Bosch process, in which H2 is reacted with atmospheric N2 at high temperatures and pressures to produce NH3. Lastly, N2 is converted to NO by energy from lightning, which is negligible in current temperate ecosystems, or by fossil fuel combustion.
Until 1850, natural BNF, cultivation-induced BNF (e.g., planting of leguminous crops), and incorporated organic matter wer |
https://en.wikipedia.org/wiki/List%20of%20system%20on%20a%20chip%20suppliers | List of system-on-a-chip suppliers.
Actions Semiconductor
Advanced Micro Devices (AMD)
Advanced Semiconductor Engineering (ASE)
Alchip
Allwinner Technology
Altera
Amkor Technology
Amlogic
Analog Devices
Apple Inc.
Applied Micro Circuits Corporation (AMCC)
ARM Holdings
ASIX Electronics
Atheros
Atmel
Axis Communications
Broadcom
Cambridge Silicon Radio
Cavium Networks
CEVA, Inc.
Cirrus Logic
Conexant
Cortina Systems
Cypress Semiconductor
Freescale Semiconductor
Fujifilm
HiSilicon
Imagination Technologies
Infineon Technologies
Integra Technologies
Intel Corporation
InvenSense
Lattice Semiconductor
Leadcore Technology
LSI Corporation
Marvell Technology Group
MediaTek
Maxim Integrated Products
Milkymist
MIPS Technologies
MStar Semiconductor
Nokia
NVIDIA
NXP Semiconductors (formerly Philips Semiconductors)
Open-Silicon
PMC-Sierra
Qualcomm
Redpine Signals
Renesas
Rockchip
Ruselectronics
Samsung Exynos
Sharp
Sigma Designs
SigmaTel
Silicon Integrated Systems
Silicon Motion
Skyworks Solutions
Socionext
SolidRun
Spreadtrum
STMicroelectronics
ST-Ericsson
Telechips
Tensilica
Teridian Semiconductor
Texas Instruments
Transmeta
Vimicro
Virage Logic
WonderMedia
Xilinx
Zoran Corporation
See also
List of countries by integrated circuit exports
List of integrated circuit manufacturers
Electronic design
Lists of technology companies
System on a chip |
https://en.wikipedia.org/wiki/Blood%20as%20food | Blood as food is the use of blood in food, a practice shaped by religion and culture. Many cultures consume blood, often in combination with meat. The blood may be in the form of blood sausage, as a thickener for sauces, a cured salted form for times of food scarcity, or in a blood soup. This is a product from domesticated animals, obtained at a place and time where the blood can run into a container and be swiftly consumed or processed. In many cultures, the animal is slaughtered. In some cultures, blood is a taboo food.
Blood is the most important byproduct of slaughtering. It consists predominantly of protein and water, and is sometimes called "liquid meat" because its composition is similar to that of lean meat. Blood collected hygienically can be used for human consumption, otherwise it is converted to blood meal. Certain fractions of animal blood are used in human medicine.
Methods of preparation
Sausage
Blood sausage is any sausage made by cooking animal blood with a filler until it is thick enough to congeal when cooled. Pig or cattle blood is most often used. Typical fillers include meat, fat, suet, bread, rice, barley and oatmeal. Varieties include biroldo, black pudding, blood tongue, blutwurst, drisheen, kishka (kaszanka), morcilla, moronga, mustamakkara, sundae, verivorst, and many types of boudin.
Pancakes
Blood pancakes are encountered in Galicia (filloas), Scandinavia, and the Baltic; for example, Swedish blodplättar, Finnish veriohukainen, and Estonian veripannkoogid.
Soups, stews and sauces
Blood soups and stews, which use blood as part of the broth, include czernina, dinuguan, haejangguk, mykyrokka, pig's organ soup, tiet canh and svartsoppa.
Blood is also used as a thickener in sauces, such as coq au vin or pressed duck, and puddings, such as tiết canh. It can provide flavor or color for meat, as in cabidela.
Solidified
Blood can also be used as a solid ingredient, either by allowing it to congeal before use, or by cooking it to accelerate the proce |
https://en.wikipedia.org/wiki/Locality-sensitive%20hashing | In computer science, locality-sensitive hashing (LSH) is a fuzzy hashing technique that hashes similar input items into the same "buckets" with high probability. (The number of buckets is much smaller than the universe of possible input items.) Since similar items end up in the same buckets, this technique can be used for data clustering and nearest neighbor search. It differs from conventional hashing techniques in that hash collisions are maximized, not minimized. Alternatively, the technique can be seen as a way to reduce the dimensionality of high-dimensional data; high-dimensional input items can be reduced to low-dimensional versions while preserving relative distances between items.
Hashing-based approximate nearest-neighbor search algorithms generally use one of two main categories of hashing methods: either data-independent methods, such as locality-sensitive hashing (LSH); or data-dependent methods, such as locality-preserving hashing (LPH).
Locality-preserving hashing was initially devised as a way to facilitate data pipelining in implementations of massively parallel algorithms that use randomized routing and universal hashing to reduce memory contention and network congestion.
Definitions
An LSH family
is defined for
a metric space M = (M, d),
a threshold r > 0,
an approximation factor c > 1,
and probabilities P1 and P2.
This family F is a set of functions h : M → S that map elements of the metric space to buckets s ∈ S. An LSH family must satisfy the following conditions for any two points p, q ∈ M and any hash function h chosen uniformly at random from F:
if d(p, q) ≤ r, then h(p) = h(q) (i.e., p and q collide) with probability at least P1,
if d(p, q) ≥ cr, then h(p) = h(q) with probability at most P2.
A family is interesting when P1 > P2. Such a family F is called (r, cr, P1, P2)-sensitive.
Alternatively it is defined with respect to a universe of items U that have a similarity function φ. An LSH scheme is a family of hash functions H coupled with a probability distribution D over H such that a function h ∈ H chosen according to D satisfies the prope |
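A classic concrete LSH family, for cosine similarity, hashes a vector by the signs of its projections onto random hyperplanes: nearby vectors (small angle) agree on most sign bits, distant vectors on about half. The dimensions and seed below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(42)
dim, n_hashes = 50, 256

# One random hyperplane (normal vector) per hash function.
planes = rng.standard_normal((n_hashes, dim))

def lsh_bits(v):
    # h_i(v) = sign(<r_i, v>); collision probability for two vectors at
    # angle theta is 1 - theta/pi per bit.
    return planes @ v > 0

x = rng.standard_normal(dim)
near = x + 0.1 * rng.standard_normal(dim)  # small perturbation of x
far = rng.standard_normal(dim)             # unrelated vector

collide_near = np.mean(lsh_bits(x) == lsh_bits(near))
collide_far = np.mean(lsh_bits(x) == lsh_bits(far))
print(collide_near > collide_far)  # True: near points collide far more often
```

A nearest-neighbor index would bucket items by short substrings of these bit vectors, so candidate pairs are mostly the high-collision (similar) ones.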
https://en.wikipedia.org/wiki/Citarum%20River | The Citarum River () is the longest and largest river in West Java, Indonesia. It is the third longest river in Java, after Bengawan Solo and Brantas. It plays an important role in the life of the people of West Java. It has been noted for being considered one of the most polluted rivers in the world.
History
In Indonesian history the Citarum is linked with the 4th-century Tarumanagara kingdom, as the kingdom and the river shared the same etymology, derived from the word "tarum" (Sundanese for indigo plant). The earlier 4th-century BCE prehistoric Buni clay pottery-making culture flourished near the river's mouth. Stone inscriptions, Chinese sources, and archaeological sites such as Batujaya and Cibuaya suggest that human habitation and civilization flourished in and around the river estuaries and river valley as early as the 4th century and even earlier.
Geography
The river flows in the northwest area of Java with a predominantly tropical monsoon climate. The annual average temperature in the area is 24 °C. The warmest month is May, when the average temperature is around 26 °C, and the coldest is January, at 22 °C. The average annual rainfall is 2646 mm. The wettest month is January, with an average of 668 mm rainfall, and the driest month is September, with 14 mm rainfall.
Hydroelectric and irrigation dams
Three hydroelectric powerplant dams are installed along the Citarum: Saguling, Cirata, and Ir. H. Djuanda (Jatiluhur), all supplying the electricity for the Bandung and Greater Jakarta areas. The waters from these dams are also used to irrigate vast rice paddies in Karawang and Bekasi area, making northern West Java lowlands one of the most productive rice farming areas.
The Jatiluhur Dam with a 3 billion cubic meter storage capacity has the largest reservoir in Indonesia.
The river makes up around 80 percent of the surface water available to the people who use it. Pollution has affected agriculture so much that farmers have sold their rice paddies for hal |
https://en.wikipedia.org/wiki/Amylin%20Pharmaceuticals | Amylin Pharmaceuticals is a biopharmaceutical founded in 1987 that was based in San Diego, California. The company was engaged in the discovery, development, and commercialization of drug candidates for the treatment of diabetes, obesity, and other diseases. Amylin produced three drugs: Symlin (pramlintide acetate), Byetta (exenatide) and Bydureon (exenatide extended release).
History
1987–1992: Founding and IPO
In 1987, Amylin Pharmaceuticals was co-founded by Howard E. Greene Jr., former CEO of San Diego biotech pioneer Hybritech, to develop a treatment for diabetes from a synthetic analog of amylin. Amylin was discovered by researchers at Oxford University earlier that year. Greene served as CEO from 1987 to 1996. Amylin completed its IPO in 1992.
1992–1998: Invention of Pramlintide and partnership with Johnson & Johnson
Amylin, in its natural form, is sticky, clumping on needles and forming little rocks in the pancreas. To create a synthetic version that was more reliable and easy to work with, researchers at Amylin Pharmaceuticals altered amino acids in the molecule. The result was a new drug named pramlintide.
In 1995, Amylin Pharmaceuticals signed an agreement with Johnson & Johnson's LifeScan division to further develop pramlintide. A Phase II study made public in January 1997 showed that pramlintide was safe to mix with leading short-acting and intermediate-acting commercial insulin products, with preliminary results suggesting it might improve glycemic control.
Initial Phase III trial results released in August 1997 demonstrated statistically significant results for type 1 (juvenile-onset) diabetes, helping modestly to improve glucose control without increasing the risk of hypoglycemia (low blood sugar) while also improving weight and cholesterol levels. In patients with adult-onset type 2 diabetes, pramlintide showed significant benefits at six months but not after 12 months. In March 1998, seven months before the next trial data were due, Johns |
https://en.wikipedia.org/wiki/Key%20Underwood%20Coon%20Dog%20Memorial%20Graveyard | The Key Underwood Coon Dog Memorial Graveyard is a specialized and restricted pet cemetery and memorial in rural Colbert County, Alabama, US. It is reserved specifically for the burials of coon dogs. The cemetery was established by Key Underwood on September 4, 1937. Underwood buried his own dog there, choosing a spot that had previously been a popular hunting camp where "Troop" gave 15 years of service. More than 300 dogs have since been buried in the graveyard.
Criteria for burial are fairly well established, albeit subject to interpretation and application. Only bona fide "coonhounds" are to be buried there. The exact measure of that standard depends on breeding, experience and performance; and seemingly on who tells the tale, when it is told, and who makes the determination.
History
Key Underwood established the cemetery on September 4, 1937, interring his coon dog, Troop, in an old hunting camp located in rural Colbert County, Alabama, US. The closest town is Cherokee, Alabama. At the time, Underwood only intended to bury Troop in a place they had coon hunted together for 15 years. The memorial was a serendipitous afterthought. Underwood buried Troop there, three feet deep, with an engraved old chimney stone for a marker. Later, other bereaved hunters followed his example when their dogs died, and the cemetery flourished as a result.
The entrance is marked by a statue of two coonhounds treeing a raccoon. During a 1984 interview with columnist Rheta Grimsley Johnson, Underwood said that burying Troop was doing "something special for a special coon dog". Allowance of mere pets is contraindicated. "It would reveal that you must not know much about coon hunters and their dogs, if you think we would contaminate this burial place with poodles and lap dogs."
Dogs must meet three requirements to qualify for burial at the cemetery:
the owner must verify that their dog was a purebred coonhound
a witness must declare that the dead animal is a coon dog
a member of the local coonhunters |
https://en.wikipedia.org/wiki/GB-PVR | GB-PVR was a PVR (personal video recorder aka digital video recorder) application, running on Microsoft Windows, whose main function was scheduling TV recordings and playing back live TV. GB-PVR is no longer under active development and has been superseded by NextPVR, also known as nPVR.
GB-PVR also acts as a home media center software with a digital video recorder, a radio station online tuner, a music and movie player, a library of images and other features.
Although GB-PVR supports open interfaces, the core engine code is closed. However, developing personal plug-ins is an option to extend the application, and these can be closed or open source, depending on the developer's interests. These plug-ins can be developed in C#, VB.NET or C++, and some examples are available on the GB-PVR official forums and the GB-PVR documentation wiki. The software was developed with an interface that allows users to change the skin or other graphical elements such as the wallpaper.
GB-PVR is mostly an MPEG recording and playback system, but it can also play non-MPEG content such as AVI (DivX/Xvid), WMV, and other formats supported by the codecs installed on the computer.
It requires a supported TV tuner card, a VMR9 capable display adapter (video card), and a supported MPEG2 Decoder. Other requirements are listed on the GB-PVR web site.
Features
Integrated graphical user interface to manage all functionality
10-foot user interface for large screen displays
TV Guide for scheduling of recordings
Support for season recordings
Support for automatically converting recordings to DivX/Xvid/WMV/iPod etc.
Support for manual recordings on a specified channel at a specified time
Timeshift television allowing for pausing live TV etc.
Multidec support enabling the use of a wide range of softcams and other DVB plugins.
Teletext
DVB Subtitles
Support for recording multiple digital channels at the same time with 1 tuner card when channels are on the same frequency
|
https://en.wikipedia.org/wiki/Global%20Public%20Health%20Intelligence%20Network | The Global Public Health Intelligence Network (GPHIN) is an electronic public health early warning system developed by Canada's Public Health Agency, and is part of the World Health Organization's (WHO) Global Outbreak Alert and Response Network (GOARN). This system monitors internet media, such as news wires and websites, in nine languages in order to help detect and report potential disease or other health threats around the world. The system has been credited with detecting early signs of the 2009 swine flu pandemic in Mexico, Zika in West Africa, H5N1 in Iran, MERS and Ebola.
The system came to greater public awareness after it was revealed that Canada's Federal Government effectively shut it down in May 2019, ultimately preventing the system from providing an early warning of COVID-19. In August 2020, the system began issuing alerts again.
History
Ronald St. John, then a government epidemiologist, created GPHIN in 1994 as a way to improve Canada's intelligence surrounding outbreaks. Growing in parallel with ProMED-mail, GPHIN was Canada's major contribution to the World Health Organization (WHO), which at one point credited the system with supplying 20 per cent of its "epidemiological intelligence" and described the system as "the foundation" of a global pandemic early-warning system.
After the 2003 SARS outbreak, the system became central to Canada's pandemic preparedness. The system, which eventually fell under the Centre for Emergency Preparedness and Response in the PHAC, detected early signs of the 2009 swine flu pandemic in Mexico, Zika in West Africa, H5N1 in Iran, MERS and Ebola.
2019–2020 silence
A July 2020 investigation by The Globe and Mail revealed that Canada's Federal Government effectively shut down GPHIN in May 2019, ultimately preventing the system from providing an early warning of COVID-19. After the government directed the system toward a more domestic focus, the Public Health Agency of Canada (PHAC) assigned employees to different tasks in the depart |
https://en.wikipedia.org/wiki/Logic%20alphabet | The logic alphabet, also called the X-stem Logic Alphabet (XLA), constitutes an iconic set of symbols that systematically represents the sixteen possible binary truth functions of logic. The logic alphabet was developed by Shea Zellweger. The major emphasis of his iconic "logic alphabet" is to provide a more cognitively ergonomic notation for logic. Zellweger's visually iconic system more readily reveals, to the novice and expert alike, the underlying symmetry relationships and geometric properties of the sixteen binary connectives within Boolean algebra.
Truth functions
Truth functions are functions from sequences of truth values to truth values. A unary truth function, for example, takes a single truth value and maps it to another truth value. Similarly, a binary truth function maps ordered pairs of truth values to truth values, while a ternary truth function maps ordered triples of truth values to truth values, and so on.
In the unary case, there are two possible inputs, viz. T and F, and thus four possible unary truth functions: one mapping T to T and F to F, one mapping T to F and F to F, one mapping T to T and F to T, and finally one mapping T to F and F to T, this last one corresponding to the familiar operation of logical negation. In the form of a table, the four unary truth functions may be represented as follows.
In the binary case, there are four possible inputs, viz. (T, T), (T, F), (F, T), and (F, F), thus yielding sixteen possible binary truth functions – in general, there are 2^(2^n) n-ary truth functions for each natural number n. The sixteen possible binary truth functions are listed in the table below.
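The enumeration described above is easy to carry out directly: each binary truth function is determined by its output column over the four inputs.

```python
from itertools import product

# The four binary inputs, in the order (T,T), (T,F), (F,T), (F,F).
inputs = list(product([True, False], repeat=2))

# Each 4-bit code 0..15 yields one output column, i.e. one truth function.
functions = [tuple(bool((code >> i) & 1) for i in range(3, -1, -1))
             for code in range(16)]
print(len(set(functions)))  # 16 distinct binary truth functions

# Conjunction's column, computed from the inputs, is (T, F, F, F):
conj = tuple(a and b for a, b in inputs)
assert conj == (True, False, False, False)

# In general there are 2**(2**n) n-ary truth functions:
print([2 ** (2 ** n) for n in [1, 2, 3]])  # [4, 16, 256]
```

Zellweger's letter shapes are essentially a mnemonic encoding of these sixteen output columns, chosen so that symmetries of a column correspond to geometric symmetries of its letter.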
Content
Zellweger's logic alphabet offers a visually systematic way of representing each of the sixteen binary truth functions. The idea behind the logic alphabet is to first represent the sixteen binary truth functions in the form of a square matrix rather than the more familiar tabular format seen in the table above, and then to assign a letter shape |
https://en.wikipedia.org/wiki/Oskar%20%28gene%29 | oskar is a gene required for the development of the Drosophila embryo. It defines the posterior pole during early embryogenesis. Its two isoforms, short and long, play different roles in Drosophila embryonic development. oskar was named after the main character from the Günter Grass novel The Tin Drum, who refuses to grow up.
Evolutionary history
oskar displays a unique evolutionary origin resulting from a horizontal domain transfer from a probable bacterial endosymbiont onto an ancestral insect genome. The OSK domain is of bacterial origin and fused with the LOTUS domain through a linker domain. This event must have happened just prior to the divergence from the crustaceans, the insects' sister group, as oskar can be found as early as the Zygentoma but does not seem to exist in crustaceans.
Translational-level regulation
oskar is translationally repressed prior to reaching the posterior pole of the oocyte by Bruno, which binds to three bruno response elements (BREs) on the 3' end of the transcribed oskar mRNA. The Bruno inhibitor has two distinct modes of action: recruiting the Cup eIF4E binding protein, which is also required for oskar mRNA localization due to interactions with the Barentsz microtubule-linked transporter, and promoting oligomerization of oskar mRNA. Oskar mRNA harbours a stem-loop structure in the 3’UTR, called the oocyte entry signal (OES), that promotes dynein-based mRNA accumulation in the oocyte.
P granule formation
oskar plays a role in recruiting other germ line genes to the germ plasm for PGC (primordial germ cell) specification. oskar mRNA locates to the posterior end of an oocyte and, once translated, the short isoform of oskar (Short oskar) recruits germ plasm components such as the protein Vasa and the RNA-binding proteins of the Piwi family, among many others. The long isoform of oskar (Long oskar) has been implicated in creating an actin network at the posterior pole.
A second role has been discovered that relates to the fo |
https://en.wikipedia.org/wiki/Structured%20English | Structured English is the use of the English language with the syntax of structured programming to communicate the design of a computer program to non-technical users by breaking it down into logical steps using straightforward English words. Structured English aims to combine the benefits of programming logic and natural language: program logic helps to attain precision, whilst natural language adds the familiarity of the spoken word.
It is the basis of some programming languages such as SQL (Structured Query Language) "for use by people who have need for interaction with a large database but who are not trained programmers".
Elements
Structured English is a limited form of "pseudocode" and consists of the following elements:
Operation statements written as English phrases executed from the top down
Conditional blocks indicated by keywords such as IF, THEN, and ELSE
Repetition blocks indicated by keywords such as DO, WHILE, and UNTIL
The following guidelines are used when writing Structured English:
All logic should be expressed in operational, conditional, and repetition blocks
Statements should be clear and unambiguous
Logical blocks should be indented to show relationship and hierarchy
Use one line per logical element, or indent the continuation line
Keywords should be capitalized
Group blocks of statements together, with a capitalized name that describes their function and end with an EXIT.
Underline words or phrases defined in a data dictionary
Mark comment lines with an asterisk
Example of Structured English
APPROVE LOAN
IF customer has a Bank Account THEN
    IF Customer has no dues from previous account THEN
        Allow loan facility
    ELSE
        IF Management Approval is obtained THEN
            Allow loan facility
        ELSE
            Reject
        ENDIF
    ENDIF
ELSE
    Reject
ENDIF
EXIT
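The APPROVE LOAN logic above maps directly onto ordinary conditional statements. A minimal Python sketch (the boolean parameters are hypothetical stand-ins for the English predicates, not part of the original example):

```python
def approve_loan(has_bank_account: bool,
                 has_no_previous_dues: bool,
                 has_management_approval: bool) -> str:
    """Decide a loan application following the APPROVE LOAN structured-English logic."""
    if has_bank_account:
        if has_no_previous_dues:
            return "Allow loan facility"
        elif has_management_approval:
            return "Allow loan facility"
        else:
            return "Reject"
    else:
        return "Reject"

# A customer with an account and no outstanding dues is approved:
print(approve_loan(True, True, False))  # → Allow loan facility
```

Note how each IF/ELSE/ENDIF block corresponds one-to-one with an indented `if`/`else` suite, which is exactly the translation step Structured English is designed to make easy.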
Criticism
Though useful for planning programs, modules and routines, or describing algorithms |
https://en.wikipedia.org/wiki/Fundamental%20modeling%20concepts | Fundamental modeling concepts (FMC) provide a framework for describing software-intensive systems. FMC strongly emphasizes communication about software-intensive systems by means of a semi-formal graphical notation that can easily be understood.
Introduction
FMC distinguishes three perspectives to look at a software system:
Structure of the system
Processes in the system
Value domains of the system
FMC defines a dedicated diagram type for each perspective. FMC diagrams use a simple and lean notation. The purpose of FMC diagrams is to facilitate communication about a software system, not only among technical experts but also between technical experts and business or domain experts. The comprehensibility of FMC diagrams has made them popular among their supporters.
The common approach when working with FMC is to start with a high-level diagram of the compositional structure of a system. This “big picture” diagram serves as a reference in the communication with all involved stakeholders of the project. Later on, the high-level diagram is iteratively refined to model technical details of the system. Complementary diagrams for processes observed in the system or value domains found in the system are introduced as needed.
Diagram Types
FMC uses three diagram types to model different aspects of a system:
Compositional Structure Diagram depicts the static structure of a system. This diagram type is also known as FMC Block Diagram
Dynamic Structure Diagram depicts processes that can be observed in a system. This diagram type is also known as FMC Petri-net
Value Range Structure Diagram depicts structures of values found in the system. This diagram type is also known as FMC E/R Diagram
All FMC diagrams are bipartite graphs. Each bipartite graph consists of two disjoint sets of vertices with the condition that no vertex is connected to another vertex of the same set. In FMC diagrams, members of one set are represented by angular shapes, and members of the other set ar |
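The bipartiteness constraint can be checked mechanically with the standard two-coloring algorithm. A minimal Python sketch (the example diagram and its vertex names are hypothetical, chosen only to echo FMC's agent/channel/storage vocabulary):

```python
from collections import deque

def is_bipartite(adjacency: dict) -> bool:
    """Two-color the graph by BFS; it is bipartite iff no edge joins same-colored vertices."""
    color = {}
    for start in adjacency:
        if start in color:
            continue
        color[start] = 0
        queue = deque([start])
        while queue:
            u = queue.popleft()
            for v in adjacency.get(u, ()):
                if v not in color:
                    color[v] = 1 - color[u]   # neighbors get the opposite color
                    queue.append(v)
                elif color[v] == color[u]:    # edge within one set: not bipartite
                    return False
    return True

# Active components only connect to passive ones, never to each other:
diagram = {"Agent": ["Storage", "Channel"], "Storage": ["Agent"], "Channel": ["Agent"]}
print(is_bipartite(diagram))  # → True
```

A triangle of mutually connected vertices would fail the check, which is exactly the configuration FMC diagrams rule out.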
https://en.wikipedia.org/wiki/Pruinescence | Pruinescence , or pruinosity, is a "frosted" or dusty-looking coating on top of a surface. It may also be called a pruina (plural: pruinae), from the Latin word for hoarfrost. The adjectival form is pruinose .
Entomology
In insects, a "bloom" caused by wax particles on top of an insect's cuticle covers up the underlying coloration, giving a dusty or frosted appearance. The pruinescence is commonly white to pale blue in color but can be gray, pink, purple, or red; these colors may be produced by Tyndall scattering of light. When pale in color, pruinescence often strongly reflects ultraviolet.
Pruinescence is found in many species of Odonata, particularly damselflies of the families Lestidae and Coenagrionidae, where it occurs on the wings and body. Among true dragonflies it is most common on male Libellulidae (skimmers).
In the common whitetail and blue dasher dragonflies (Plathemis lydia and Pachydiplax longipennis), males display the pruinescence on the back of the abdomen to other males as a territorial threat. Other Odonata may use pruinescence to recognize members of their own species or to cool their bodies by reflecting radiation away.
Plants, fungi, and lichens
The term pruinosity is also applied to "blooms" on plants—for example, on the skin of grapes—and to powderings on the cap and stem of mushrooms, which can be important for identification.
An epinecral layer is "a layer of horny dead fungal hyphae with indistinct lumina in or near the cortex above the algal layer". |
https://en.wikipedia.org/wiki/GOST%2010859 | GOST 10859 (1964) is a standard of the Soviet Union which defined how to encode data on punched cards. This standard allowed a variable word size, depending on the type of data being encoded, but only uppercase characters.
These include the non-ASCII "decimal exponent symbol" ⏨. It was used to express real numbers in scientific notation. For example: 6.0221415⏨23.
The character was also part of the ALGOL programming language specifications and was incorporated into the then German character encoding standard ALCOR. GOST 10859 also included numerous other non-ASCII characters/symbols useful to ALGOL programmers, e.g.: ∨, ∧, ⊃, ≡, ¬, ≠, ↑, ↓, ×, ÷, ≤, ≥, °, &, ∅, compare with ALGOL operators.
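In modern floating-point notation the ⏨ symbol (U+23E8) plays exactly the role of the letter `e`, so converting such literals is a one-line substitution. A small sketch (the helper name is ours, not part of any standard):

```python
def parse_gost_real(text: str) -> float:
    """Convert a GOST 10859/ALGOL-style real like '6.0221415⏨23' to a float."""
    # U+23E8 DECIMAL EXPONENT SYMBOL separates mantissa and decimal exponent,
    # exactly like 'e' in modern scientific notation.
    return float(text.replace("\u23e8", "e"))

print(parse_gost_real("6.0221415\u23e823"))  # → 6.0221415e+23
```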
Character sets
See also
KOI-7 (GOST 13052-67)
KOI-8 (GOST 19768-74) |
https://en.wikipedia.org/wiki/Scur | A scur is an incompletely developed horn growth. In cattle, scurs are not attached to the skull, whereas horns are attached and have blood vessels and nerves.
Scurs may also occur in sheep.
Genetic Inheritance
The gene for scurs is inherited separately from the polled gene in cattle. Not all polled animals lack the scur gene. Since horned is recessive to polled, no horned cattle carry the polled allele, but they may nevertheless carry the scur gene.
In cattle, genetic expression of the scur gene is different from that of the dominant polled gene, in that the scur gene's expression depends on the sex of the animal. The scur gene is dominant in males and recessive in females.
See also
Horn (anatomy)
Polled livestock
Livestock dehorning |
https://en.wikipedia.org/wiki/511%20%28number%29 | 511 is the natural number following 510 and preceding 512.
It is a Mersenne number, being one less than a power of 2: 511 = 2⁹ − 1.
As a result, 511 is a palindromic number and a repdigit in base 2 (111111111₂). It is also palindromic and a repdigit in base 8 (777₈).
It is a generalized heptagonal number, since 511 = n(5n − 3)/2 when n = −14.
It is a Harshad number in bases 3, 5, 7, 10, 13 and 15.
Special use in computers
The octal representation of 511 (777₈) is used with Perl's -0 command-line switch (as perl -0777) to specify a record separator that matches no character, causing the input to be "slurped" as a whole rather than line-by-line (i.e. separated at newline characters).
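These numeric properties are easy to verify programmatically; a quick check in Python:

```python
n = 511

assert n == 2**9 - 1                 # Mersenne number: one less than a power of 2
assert bin(n)[2:] == "1" * 9         # repdigit (and palindrome) in base 2
assert oct(n)[2:] == "777"           # repdigit (and palindrome) in base 8
assert n % sum(int(d) for d in str(n)) == 0  # Harshad in base 10: 5+1+1 = 7 divides 511

print("all checks pass")
```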
https://en.wikipedia.org/wiki/Key%20risk%20indicator | A key risk indicator (KRI) is a measure used in management to indicate how risky an activity is. Key risk indicators are metrics used by organizations to provide an early signal of increasing risk exposures in various areas of the enterprise. A KRI differs from a key performance indicator (KPI) in that the latter measures how well something is being done, while the former indicates the possibility of future adverse impact. KRIs give early warning of potential events that may harm the continuity of an activity or project.
KRIs are a mainstay of operational risk analysis.
Definitions
According to OECD
A risk indicator is an indicator that estimates the potential for some form of resource degradation using mathematical formulas or models.
Risk management
Security risk management
According to the Risk IT framework by ISACA, key risk indicators are metrics capable of showing that the organization is subject, or has a high probability of being subject, to a risk that exceeds the defined risk appetite.
Organizations differ in size and environment, so every enterprise should choose its own KRIs, taking into account the following steps:
Consider the different stakeholders of the organization
Make a balanced selection of risk indicators, covering performance indicators, lead indicators and trends
Ensure that the selected indicators drill down to the root cause of the events
Choose indicators that are highly relevant and have a high probability of predicting important risks, i.e. those with:
High business impact
Ease of measurement
High correlation with the risk
Sensitivity
Determine thresholds and triggers for the set of KRIs
Locate and fold in data sources that contribute or feed data into KRI triggers
Determine notification methods, recipients, and action or response sequences
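The threshold-and-trigger step above can be sketched in code. A hypothetical example (the indicator name, threshold values, and alert levels are all illustrative, not drawn from any standard):

```python
from dataclasses import dataclass

@dataclass
class KeyRiskIndicator:
    name: str
    warning_threshold: float   # early-warning level
    critical_threshold: float  # level at which the defined risk appetite is breached

    def evaluate(self, value: float) -> str:
        """Map a measured value to an alert level for notification routing."""
        if value >= self.critical_threshold:
            return "CRITICAL"
        if value >= self.warning_threshold:
            return "WARNING"
        return "OK"

# Hypothetical KRI: annual staff turnover as a percentage.
turnover = KeyRiskIndicator("staff turnover %", warning_threshold=10.0,
                            critical_threshold=20.0)
print(turnover.evaluate(14.0))  # → WARNING
```

In practice each alert level would be wired to the notification methods and response sequences chosen in the last step.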
The constant measurement of KRIs can bring the following benefits to the organization:
Provide an early warning: a proactive action can take place
Provide a backward looking view on risk events, so lesson |
https://en.wikipedia.org/wiki/Permitted%20attached%20private%20lines | Permitted attached private lines, abbreviated PAPL, are voice-grade telephone wires that run point-to-point (rather than point-to-exchange) between locations in the telephone company's copper network. Data can travel across the PAPL link (at a distance of up to 3.5 km) at speeds of around 2 Mbit/s. Originally, PAPLs were intended to act as basic alarm circuits for fire or security systems, though in recent years they have been used to carry DSL data signals.
https://en.wikipedia.org/wiki/Weigert%27s%20elastic%20stain | Weigert's elastic stain is a combination of stains used in histology which is useful in identifying elastic fibers. Often orcein or a combination of resorcinol and fuchsine is used for staining. For counterstaining cell nuclei, nuclear fast red or hematoxylin is used. After staining, elastic fibers appear blue, while cell nuclei appear red or blue.
See also
Karl Weigert
Masson's trichrome stain
External links
Histology
Staining |
https://en.wikipedia.org/wiki/Spacetime%20diagram | A spacetime diagram is a graphical illustration of locations in space at various times, especially in the special theory of relativity. Spacetime diagrams can show the geometry underlying phenomena like time dilation and length contraction without mathematical equations.
The history of an object's location through time traces out a line or curve on a spacetime diagram, referred to as the object's world line. Each point in a spacetime diagram represents a unique position in space and time and is referred to as an event.
The most well-known class of spacetime diagrams are known as Minkowski diagrams, developed by Hermann Minkowski in 1908. Minkowski diagrams are two-dimensional graphs that depict events as happening in a universe consisting of one space dimension and one time dimension. Unlike a regular distance-time graph, the distance is displayed on the horizontal axis and time on the vertical axis. Additionally, the time and space units of measurement are chosen in such a way that an object moving at the speed of light is depicted as following a 45° angle to the diagram's axes.
Introduction to kinetic diagrams
Position versus time graphs
In the study of 1-dimensional kinematics, position vs. time graphs (called x-t graphs for short) provide a useful means to describe motion. Kinematic features besides the object's position are visible by the slope and shape of the lines. In Fig 1-1, the plotted object moves away from the origin at a positive constant velocity (1.66 m/s) for 6 seconds, halts for 5 seconds, then returns to the origin over a period of 7 seconds at a non-constant speed (but negative velocity).
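The motion described for Fig 1-1 can be reproduced from the segment endpoints, since the slope of each x-t segment is the average velocity over that interval. A small sketch (the coordinates are read off the figure's description, so treat them as illustrative):

```python
def average_velocity(t0, x0, t1, x1):
    """Slope of an x-t segment = average velocity over that interval (m/s)."""
    return (x1 - x0) / (t1 - t0)

# Segments: outbound 6 s at ~1.66 m/s, a 5 s halt, then a 7 s return to the origin.
x_turn = 1.66 * 6  # position reached before halting (~9.96 m)

v_out = average_velocity(0, 0, 6, x_turn)         # positive slope: moving away
v_halt = average_velocity(6, x_turn, 11, x_turn)  # zero slope while halted
v_back = average_velocity(11, x_turn, 18, 0)      # negative slope on the return

print(round(v_out, 2), v_halt, round(v_back, 2))  # → 1.66 0.0 -1.42
```

The non-constant return speed in the figure would show as a curved segment; the value computed here is only its average slope.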
At its most basic level, a spacetime diagram is merely a time vs position graph, with the directions of the axes in a usual p-t graph exchanged; that is, the vertical axis refers to temporal and the horizontal axis to spatial coordinate values. Especially when used in special relativity (SR), the temporal axes of a spacetime diagram are often scaled with |
https://en.wikipedia.org/wiki/Fibulin | Fibulin (FY-beau-lin) (now known as Fibulin-1 FBLN1) is the prototypic member of a multigene family, currently with seven members. Fibulin-1 is a calcium-binding glycoprotein. In vertebrates, fibulin-1 is found in blood and extracellular matrices. In the extracellular matrix, fibulin-1 associates with basement membranes and elastic fibers. The association with these matrix structures is mediated by its ability to interact with numerous extracellular matrix constituents including fibronectin, proteoglycans, laminins and tropoelastin. In blood, fibulin-1 binds to fibrinogen and incorporates into clots.
Fibulins are secreted glycoproteins that become incorporated into a fibrillar extracellular matrix when expressed by cultured cells or added exogenously to cell monolayers. The five known members of the family share an elongated structure and many calcium-binding sites, owing to the presence of tandem arrays of epidermal growth factor-like domains. They have overlapping binding sites for several basement-membrane proteins, tropoelastin, fibrillin, fibronectin and proteoglycans, and they participate in diverse supramolecular structures. The amino-terminal domain I of fibulin consists of three anaphylatoxin-like (AT) modules, each approximately 40 residues long and containing four or six cysteines. The structure of an AT module was determined for the complement-derived anaphylatoxin C3a, and was found to be a compact alpha-helical fold that is stabilized by three disulphide bridges in the pattern Cys14, Cys25 and Cys36 (where Cys is cysteine). The bulk of the remaining portion of the fibulin molecule is a series of nine EGF-like repeats.
Genes
FBLN1, FBLN2, FBLN3, FBLN4, FBLN5, FBLN7, and HMCN1, which is also known as "fibulin-6".
https://en.wikipedia.org/wiki/Video%20buffering%20verifier | The Video Buffering Verifier (VBV) is a theoretical MPEG video buffer model, used to ensure that an encoded video stream can be correctly buffered, and played back at the decoder device.
By definition, the VBV shall not overflow nor underflow when its input is a compliant stream (except in the case of low_delay). It is therefore important when encoding such a stream that it comply with the VBV requirements.
One way to think of the VBV is as the combination of a maximum bitrate and a maximum buffer size. Because encoded video data arrives at a constantly changing bitrate, there is no single number describing how fast the buffer fills; the key question is how long a given bitrate can be sustained before the buffer overflows. A larger buffer size simply means that the decoder will tolerate high bitrates for longer periods of time, but no buffer is infinite, so eventually even a large buffer will overflow.
Operation Modes
There are two operational modes of VBV: Constant Bit Rate (CBR) and Variable Bit Rate (VBR). In CBR, the decoder's buffer is filled over time at a constant data rate. In VBR, the buffer is filled at a non-constant rate. In both cases, data is removed from the buffer in varying chunks, depending on the actual size of the coded frames.
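The CBR mode can be sketched as a simple fill-and-drain loop. A toy simulation (the numbers and frame sizes are illustrative, not taken from the MPEG specification):

```python
def vbv_compliant(frame_bits, bits_per_frame_interval, buffer_size_bits):
    """Simulate a CBR VBV: fill at a constant rate, drain one coded frame
    per interval; report False on buffer overflow or underflow."""
    fullness = 0
    for coded_size in frame_bits:
        fullness += bits_per_frame_interval  # constant-rate fill from the channel
        if fullness > buffer_size_bits:      # frames too small: buffer overflows
            return False
        if coded_size > fullness:            # frame bigger than buffered data:
            return False                     # buffer underflows
        fullness -= coded_size               # decoder removes the whole frame
    return True

# Varying frame sizes that average the channel rate fit a modest buffer:
print(vbv_compliant([900, 1100, 1000, 1000], 1000, 4000))  # → True
# A first frame larger than anything buffered so far underflows:
print(vbv_compliant([5000, 100], 1000, 4000))              # → False
```

An encoder's rate control performs essentially this bookkeeping in reverse, adjusting frame sizes so the modelled buffer never leaves its bounds.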
Standards
In the H.264 and VC-1 standards, the VBV is replaced with generalized version called Hypothetical Reference Decoder (HRD). |
https://en.wikipedia.org/wiki/Visual%20Intercept | Visual Intercept is a Microsoft Windows based software defect tracking system produced by Elsinore Technologies Inc. Visual Intercept was actively sold from 1995 until early 2006 when it was integrated as a single solution in the broader IssueNet issue management system, also produced by Elsinore Technologies Inc.
Version History
1.x
Intercept 1.0 was released in 1996. In 1998 the product line was expanded to include Intercept Relay for beta testers, Intercept SDK for integrations, and Intercept Web for Web Access. One of the distinguishing features of early versions of Intercept was its integration to Microsoft Visual SourceSafe.
2.x
Version 2.0 released in 2000 provided major and minor enhancements to all products in the Intercept product line. In addition to enhancements to existing capabilities Intercept 2.0 introduced integration to Visual Studio and VBA (Visual Basic for Applications) integration for implementing custom workflow rules. In the 2.0 time frame Elsinore Technologies also released Visual Intercept Project and integration to the products in the Microsoft Office suite. Visual Intercept Project was designed to help project managers dynamically update project plans by integrating them with the real time issue management data captured and tracked in Visual Intercept.
3.x
In 2002 Elsinore Technologies released Visual Intercept 3.0. The 3.0 release focused on updating the functionality of Visual Intercept Web and Web Relay to match the desktop suite and integrations to Visual Studio and Microsoft Office. After the initial release of version 3.0, Elsinore continued to release major enhancements to the 3.0 version in services releases as it completed its IssueNet platform which would serve as the software platform for version 4.0.
See also
IssueNet
Business software
Bug and issue tracking software |
https://en.wikipedia.org/wiki/Prevalent%20and%20shy%20sets | In mathematics, the notions of prevalence and shyness are notions of "almost everywhere" and "measure zero" that are well-suited to the study of infinite-dimensional spaces and make use of the translation-invariant Lebesgue measure on finite-dimensional real spaces. The term "shy" was suggested by the American mathematician John Milnor.
Definitions
Prevalence and shyness
Let V be a real topological vector space and let S be a Borel-measurable subset of V. S is said to be prevalent if there exists a finite-dimensional subspace P of V, called the probe set, such that for all v ∈ V we have v + p ∈ S for λ_P-almost all p ∈ P, where λ_P denotes the dim(P)-dimensional Lebesgue measure on P. Put another way, for every v ∈ V, Lebesgue-almost every point of the hyperplane v + P lies in S.
A non-Borel subset of V is said to be prevalent if it contains a prevalent Borel subset.
A Borel subset of V is said to be shy if its complement is prevalent; a non-Borel subset of V is said to be shy if it is contained within a shy Borel subset.
An alternative, and slightly more general, definition is to define a set S to be shy if there exists a transverse measure for S (other than the trivial measure).
Local prevalence and shyness
A subset S of V is said to be locally shy if every point v ∈ V has a neighbourhood whose intersection with S is a shy set. S is said to be locally prevalent if its complement is locally shy.
Theorems involving prevalence and shyness
If S is shy, then so is every subset of S and every translate of S.
Every shy Borel set admits a transverse measure that is finite and has compact support. Furthermore, this measure can be chosen so that its support has arbitrarily small diameter.
Any finite or countable union of shy sets is also shy. Analogously, any countable intersection of prevalent sets is prevalent.
Any shy set is also locally shy. If V is a separable space, then every locally shy subset of V is also shy.
A subset of n-dimensional Euclidean space ℝⁿ is shy if and only if it has Lebesgue measure zero.
Any prevalent subs |
https://en.wikipedia.org/wiki/Semliki%20harpoon | The Semliki harpoon, also known as the Katanda harpoon, refers to a group of complex barbed harpoon heads carved from bone, which were found at an archaeological site on the Semliki River in the Democratic Republic of the Congo (formerly Zaire); the artifacts date back approximately 90,000 years. The initial discovery of the first harpoon head was made in 1988. When the artifact was dated to 88,000 BCE, there was skepticism within the archaeological community about the accuracy of the stated age, as the object seemed too advanced for human cultures of that era. However, the site has yielded multiple other examples of similar harpoons, and the dates have been confirmed.
It seemed to substantiate that fishing and an "aquatic civilization" were likely present in the region across eastern and northern Africa during the wetter climatic conditions of the early to mid-Holocene, as shown by other evidence at the lakeshore site of Ishango.
The site is littered with catfish bones, and the harpoons are the correct size to catch adult catfish, so investigators suspect the fishermen came to the site every year "to catch giant catfish."
It is unlikely that the harpoons are much different from those used today (see reference for photos).
The archaeologic site coincides with the range of the Efé Pygmies, which have been shown by mitochondrial DNA analyses to be of extremely ancient and distinct lineage. |
https://en.wikipedia.org/wiki/AFGROW | AFGROW (Air Force Grow) is a Damage Tolerance Analysis (DTA) computer program that calculates crack initiation, fatigue crack growth, and fracture to predict the life of metallic structures. Originally developed by the Air Force Research Laboratory, AFGROW is mainly used for aerospace applications, but can be applied to any type of metallic structure that experiences fatigue cracking.
History
AFGROW's history traces back to a crack growth life prediction program (ASDGRO) which was written in BASIC for IBM-PCs by E. Davidson at ASD/ENSF in the early-mid-1980s. In 1985, ASDGRO was used as the basis for crack growth analysis for the Sikorsky H-53 helicopter under contract to Warner-Robins ALC. The program was modified to utilize very large load spectra, approximate stress intensity solutions for cracks in arbitrary stress fields, and use a tabular crack growth rate relationship based on the Walker equation on a point-by-point basis (Harter T-Method). The point loaded crack solution from the Tada, Paris, and Irwin Stress Intensity Factor Handbook
was originally used to determine K (for arbitrary stress fields) by integration over the crack length using the unflawed stress distribution independently for each crack dimension. A new method was developed by F. Grimsley (AFWAL/FIBEC) to determine stress intensity, which used a 2-D Gaussian integration scheme with Richardson Extrapolation which was optimized by G. Sendeckyj (AFWAL/FIBEC). The resulting program was named MODGRO since it was a modified version of ASDGRO.
Many modifications were made during the late 1980s and early 1990s. The primary modification was changing the coding language from BASIC to Turbo Pascal and C. Numerous small changes/repairs were made based on errors that were discovered. During this time period, NASA/Dryden implemented MODGRO in the analysis for the flight test program for the X-29.
In 1993, the Navy was interested in using MODGRO to assist in a program to assess the effect of certain |
https://en.wikipedia.org/wiki/Happy%20Cube | Happy Cubes are a set of mechanical puzzles created in 1986 by the Belgian toy inventor Dirk Laureyssens. The company "Happy bvba" has the exclusive license to manufacture and sell these puzzles. Happy Cubes are also known by a number of other names, among them: "Cube It!" cubes, "Wirrel Warrel" (in the Netherlands), "I.Q.ubes" and "Cococrash" (in Spain and Portugal).
The Happy Cubes are made of 8 mm-thick ethylene-vinyl acetate (EVA) foam mats. The tiles are based on a 5×5 matrix in which the outer squares may be present or absent; the central 3×3 kernel is fixed. Initially the puzzle comes assembled as a flat, 2-dimensional 2×3 rectangle fitted into a frame. The basic challenge is to construct a perfect, 6-sided cube out of these 6 pieces.
Usually there is only one way to fit the pieces into a complete cube; the solution is found by trial and error, at varying levels of difficulty.
Variations
There are four families of Happy Cube puzzles, each containing 6 different cubes. Each family has a unique texture, style and difficulty level. Within each puzzle family the cubes are differentiated by color. The four families of Happy Cube puzzles with names of each puzzle are:
Junior (formerly The Little Genius) - Textured with various icons of foods, vehicles, emotions, etc. These are the easiest cubes to build and are designed for children aged three to seven years.
Original (formerly The Happy Cube) - Plainly colored with a single color each and with no texture, medium difficulty. These are the original cubes that were designed by the inventor in 1986.
Pro (formerly The Profi Cube) - Textured with two interleaving colors, rated by the manufacturer as slightly harder to build than the Happy Cubes (but see next section). There are two versions of this family: the first is colored with one dominant color and a black background, and the newer version is much lighter in shade, replacing the black background with a second dominant color similar i
https://en.wikipedia.org/wiki/Slicer%20%28guitar%20effect%29 | A slicer is an effects unit which is similar to a tremolo, vibrato, phaser, or autopan. It combines a modulation sequence with a noise gate or envelope filter to create a percussive and rhythmic effect like a helicopter, with rapid cutting out and coming in—on and off. Most have variable speeds and depths, creating different sounds. It may be implemented through an effects unit or a VST. The Boss SL-20 is an example of a slicer effect in a guitar pedal. |
https://en.wikipedia.org/wiki/FGED%20Society | The Functional GEnomics Data Society (FGED), formerly known as the MGED Society, was a non-profit, volunteer-run international organization of biologists, computer scientists, and data analysts that aimed to facilitate biological and biomedical discovery through data integration. The approach of FGED was to promote the sharing of basic research data generated primarily via high-throughput technologies that produce large data sets within the domain of functional genomics. Members of the FGED Society worked with other organizations to support the effective sharing and reproducibility of functional genomics data; facilitate the creation of standards and software tools that leverage those standards; and promote the sharing of high-quality, well-annotated data within the life sciences and biomedical communities.
In September 2021, the FGED Society ceased operations.
History
The FGED Society was formed in 1999 at a meeting on Microarray Gene Expression Databases, in recognition of the need to establish standards for sharing and storing data from DNA microarray experiments. Originally named the "MGED Society," the society began with a focus on DNA microarrays and gene expression data.
The original MGED Society was incorporated in 2002 as a non-profit public benefit organization with the title Microarray Gene Expression Data Society and obtained permanent charity status in 2007. The MGED name was legally changed in 2007 to Micro
https://en.wikipedia.org/wiki/Coronal%20loop | In solar physics, a coronal loop is a well-defined arch-like structure in the Sun's atmosphere made up of relatively dense plasma confined and isolated from the surrounding medium by magnetic flux tubes. Coronal loops begin and end at two footpoints on the photosphere and project into the transition region and lower corona. They typically form and dissipate over periods of seconds to days and vary greatly in length.
Coronal loops are often associated with the strong magnetic fields located within active regions and sunspots. The number of coronal loops varies with the 11-year solar cycle.
Origin and physical features
Due to a natural process called the solar dynamo driven by heat produced in the Sun's core, convective motion of the electrically conductive plasma which makes up the Sun creates electric currents, which in turn create powerful magnetic fields in the Sun's interior. These magnetic fields are in the form of closed loops of magnetic flux, which are twisted and tangled by solar differential rotation (the different rotation rates of the plasma at different latitudes of the solar sphere). A coronal loop occurs when a curved arc of the magnetic field projects through the visible surface of the Sun, the photosphere, protruding into the solar atmosphere.
Within a coronal loop, the paths of the moving electrically charged particles which make up its plasma—electrons and ions—are sharply bent by the Lorentz force when moving transverse to the loop's magnetic field. As a result, they can only move freely parallel to the magnetic field lines, tending to spiral around these lines. Thus, the plasma within a coronal loop cannot escape sideways out of the loop and can only flow along its length. This is known as the frozen-in condition.
The strong interaction of the magnetic field with the dense plasma on and below the Sun's surface tends to tie the magnetic field lines to the motion of the Sun's plasma; thus, the two footpoints (the location where the |
https://en.wikipedia.org/wiki/Reduction%20%28music%29 | In music, a reduction is an arrangement or transcription of an existing score or composition in which complexity is lessened to make analysis, performance, or practice easier or clearer; the number of parts may be reduced or rhythm may be simplified, such as through the use of block chords.
Orchestral
An orchestral reduction is a sheet music arrangement of a work originally for full symphony orchestra (such as a symphony, overture, or opera), rearranged for a single instrument (typically piano or organ), a smaller orchestra, or a chamber ensemble with or without a keyboard (e.g. a string quartet). A reduction for solo piano is sometimes called a piano reduction or piano score.
During opera rehearsals, a répétiteur (piano player) will typically read from a piano reduction of the opera. When a choir is learning a work scored for choir and full orchestra, the initial rehearsals will usually be done with a pianist playing a piano reduction of the orchestra part. Before the advent of the phonograph, arrangements of orchestral works for solo piano or piano four hands were in common use for enjoyment at home.
A reduction for a smaller orchestra or chamber ensemble may be used when not enough players are available, when a venue is too small to accommodate the full orchestra, to accompany less powerful voices, or to save money by hiring fewer players.
Piano
A piano reduction or piano transcription is sheet music for the piano (a piano score) that has been compressed and/or simplified so as to fit on a two-line staff and be playable on the piano. It is also considered a style of orchestration or music arrangement less well known as contraction scoring, a subset of elastic scoring.
The most notable example is Franz Liszt's transcriptions for solo piano of Ludwig van Beethoven's symphonies.
According to Arnold Schoenberg, a piano reduction should "only be like the view of a sculpture from one viewpoint", and he advises that timbre and thickness should largely be ignored |
https://en.wikipedia.org/wiki/Punchscan | Punchscan is an optical scan vote counting system invented by cryptographer David Chaum. Punchscan is designed to offer integrity, privacy, and transparency. The system is voter-verifiable, provides an end-to-end (E2E) audit mechanism, and issues a ballot receipt to each voter. The system won grand prize at the 2007 University Voting Systems Competition.
The computer software which Punchscan incorporates is open-source; the source code was released on 2 November 2006 under a revised BSD licence. However, Punchscan is software independent; it draws its security from cryptographic functions instead of relying on software security like DRE voting machines. For this reason, Punchscan can be run on closed source operating systems, like Microsoft Windows, and still maintain unconditional integrity.
The Punchscan team, with additional contributors, has since developed Scantegrity.
Voting procedure
A Punchscan ballot has two layers of paper. On the top layer, the candidates are listed with a symbol or letter beside their name. Below the candidate list, there are a series of round holes in the top layer of the ballot. Inside the holes on the bottom layer, the corresponding symbols are printed.
To cast a vote for a candidate, the voter must locate the hole with the symbol corresponding to the symbol beside the candidate's name. This hole is marked with a Bingo-style ink dauber, which is purposely larger than the hole. The voter then separates the ballot, chooses either the top or the bottom layer to keep as a receipt, and shreds the other layer. The receipt is scanned at the polling station for tabulation.
The order of the symbols beside the candidate names is generated randomly for each ballot, and thus differs from ballot to ballot. Likewise for the order of the symbols in the holes. For this reason, the receipt does not contain enough information to determine which candidate the vote was cast for. If the top layer is kept, the order of the symbols through the holes |
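The privacy argument above can be illustrated with a small simulation (a hedged sketch, not the actual Punchscan implementation; the candidate count, symbol set, and uniformity check are illustrative assumptions):

```python
import random

SYMBOLS = ["A", "B", "C"]  # one symbol per candidate (illustrative)

def make_ballot(rng):
    """Two-layer ballot: symbol orders shuffled independently per layer."""
    top = SYMBOLS[:]     # symbol printed beside each candidate (top layer)
    bottom = SYMBOLS[:]  # symbol visible through each hole (bottom layer)
    rng.shuffle(top)
    rng.shuffle(bottom)
    return top, bottom

def vote(ballot, candidate_index):
    """Return the hole position the voter daubs for the chosen candidate."""
    top, bottom = ballot
    wanted = top[candidate_index]  # symbol beside the candidate's name
    return bottom.index(wanted)    # hole showing that symbol

# Everyone votes for candidate 0; tally which hole position gets daubed.
rng = random.Random(42)
counts = [0] * len(SYMBOLS)
for _ in range(3000):
    hole = vote(make_ballot(rng), candidate_index=0)
    counts[hole] += 1

# The daub position is (near-)uniform: one kept layer plus the mark position
# is consistent with a vote for any candidate, so the receipt leaks nothing.
print(counts)
```

Because the two layers are shuffled independently, the marked position carries no information about the candidate once the other layer is shredded.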
https://en.wikipedia.org/wiki/Steinhaus%20theorem | In the mathematical field of real analysis, the Steinhaus theorem states that the difference set of a set of positive measure contains an open neighbourhood of zero. It was first proved by Hugo Steinhaus.
Statement
Let A be a Lebesgue-measurable set on the real line such that the Lebesgue measure of A is not zero. Then the difference set
A − A = {a − b : a, b ∈ A}
contains an open neighbourhood of the origin.
The general version of the theorem, first proved by André Weil, states that if G is a locally compact group, and A ⊂ G a subset of positive (left) Haar measure, then
AA⁻¹ = {ab⁻¹ : a, b ∈ A}
contains an open neighbourhood of unity.
The theorem can also be extended to nonmeagre sets with the Baire property. The proof of these extensions, sometimes also called Steinhaus theorem, is almost identical to the one below.
Proof
The following simple proof can be found in a collection of problems by the late professor H. M. Martirosian of Yerevan State University, Armenia (in Russian).
Let's keep in mind that for any ε > 0, there exists an open set 𝒰 such that A ⊂ 𝒰 and μ(𝒰) < μ(A) + ε. As a consequence, for a given ε ∈ (0, 1/2), we can find an appropriate interval (a, b) so that, taking just an appropriate part of positive measure of the set A, we can assume that A ⊂ (a, b) and that μ(A) > (1 − ε)(b − a).
Now assume that |x| < δ, where δ = (1 − 2ε)(b − a). We'll show that there are common points in the sets A and A + x. Otherwise μ(A ∪ (A + x)) = 2μ(A) > 2(1 − ε)(b − a). But since A ⊂ (a, b), and
A ∪ (A + x) ⊂ (min(a, a + x), max(b, b + x)),
we would get 2(1 − ε)(b − a) < 2μ(A) ≤ b − a + |x| < (2 − 2ε)(b − a), which contradicts the initial property of the set. Hence, since x ∈ A − A when |x| < δ, it follows immediately that (−δ, δ) ⊂ A − A, which is what we needed to establish.
Corollary
A corollary of this theorem is that any measurable proper subgroup of (ℝ, +) is of measure zero.
See also
Falconer's conjecture
Notes |
https://en.wikipedia.org/wiki/Andr%C3%A9%20N%C3%A9ron | André Néron (November 30, 1922, La Clayette, France – April 6, 1985, Paris, France) was a French mathematician at the Université de Poitiers who worked on elliptic curves and abelian varieties. He discovered the Néron minimal model of an elliptic curve or abelian variety, the Néron differential, the Néron–Severi group, the Néron–Ogg–Shafarevich criterion, the local height and Néron–Tate height of rational points on an abelian variety over a discrete valuation ring or Dedekind domain, and classified the possible fibers of an elliptic fibration.
Life and career
He was a student of Albert Châtelet, and his PhD students were Jean-Louis Colliot-Thélène and Gérard Ligozat.
He gave invited talks at the International Congress of Mathematicians in 1954 and 1966. In 1983 the Académie des sciences awarded him the Émile Picard Medal.
He died of cancer in 1985.
Publications |
https://en.wikipedia.org/wiki/SCSI%20Trade%20Association | The SCSI Trade Association, or SCSITA (sometimes STA), is an industry trade group which exists to promote the use of SCSI technology. It was formed in 1996. Sponsor members include HP, Intel, LSI Logic, PMC-Sierra, and Seagate. Requirements for membership are (1) manufacturing or selling SCSI-related products or services and (2) paying dues and fees, which start at $4,500/year.
The SCSITA does not define SCSI technical standards; that is the job of the INCITS T10 Committee. Rather, SCSITA promotes the use of SCSI and establishes standard marketing terminology and trademarks. They also foster vendor inter-operability.
See also
SCSI – Small Computer System Interface
INCITS – International Committee for Information Technology Standards
Notes
External links
SCSI Trade Association – Official website
SCSI
Technology trade associations
Trade associations based in the United States |
https://en.wikipedia.org/wiki/Quotient%20of%20subspace%20theorem | In mathematics, the quotient of subspace theorem is an important property of finite-dimensional normed spaces, discovered by Vitali Milman.
Let (X, ||·||) be an N-dimensional normed space. There exist subspaces Z ⊂ Y ⊂ X such that the following holds:
The quotient space E = Y / Z is of dimension dim E ≥ c N, where c > 0 is a universal constant.
The induced norm ‖ · ‖ on E, defined by
‖e‖ = min{ ‖y‖ : y ∈ Y, y + Z = e },  e ∈ E,
is uniformly isomorphic to Euclidean. That is, there exists a positive quadratic form ("Euclidean structure") Q on E, such that
Q(e)^(1/2) / K ≤ ‖e‖ ≤ K · Q(e)^(1/2)  for e ∈ E,
with K > 1 a universal constant.
The statement is relatively easy to prove by induction on the dimension of Z (even for Y = X, Z = 0, c = 1) with a K that depends only on N; the point of the theorem is that K is independent of N.
In fact, the constant c can be made arbitrarily close to 1, at the expense of the
constant K becoming large. The original proof allowed
Notes |
https://en.wikipedia.org/wiki/Dini%27s%20theorem | In the mathematical field of analysis, Dini's theorem says that if a monotone sequence of continuous functions converges pointwise on a compact space and if the limit function is also continuous, then the convergence is uniform.
Formal statement
If X is a compact topological space, and (f_n) is a monotonically increasing sequence (meaning f_n(x) ≤ f_{n+1}(x) for all n and x) of continuous real-valued functions on X which converges pointwise to a continuous function f, then the convergence is uniform. The same conclusion holds if (f_n) is monotonically decreasing instead of increasing. The theorem is named after Ulisse Dini.
This is one of the few situations in mathematics where pointwise convergence implies uniform convergence; the key is the greater control implied by the monotonicity. The limit function must be continuous, since a uniform limit of continuous functions is necessarily continuous. The continuity of the limit function cannot be inferred from the other hypotheses (consider f_n(x) = x^n on [0, 1].)
Proof
Let ε > 0 be given. For each n, let g_n = f − f_n, and let E_n be the set of those x ∈ X such that g_n(x) < ε. Each g_n is continuous, and so each E_n is open (because each E_n is the preimage of the open set (−∞, ε) under g_n, a continuous function). Since (f_n) is monotonically increasing, (g_n) is monotonically decreasing, and it follows that the sequence (E_n) is ascending (i.e. E_n ⊂ E_{n+1} for all n). Since (f_n) converges pointwise to f, it follows that the collection (E_n) is an open cover of X. By compactness, there is a finite subcover, and since the E_n are ascending the largest of these is a cover too. Thus we obtain that there is some positive integer N such that E_N = X. That is, if n ≥ N and x is a point in X, then |f(x) − f_n(x)| < ε, as desired.
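The theorem can also be watched numerically (an illustration, not part of the article's proof; the sequence f_n(x) = (1 + x/n)^n increasing to e^x on [0, 1] is an assumed example):

```python
import math

def f(n, x):
    # a monotonically increasing sequence of continuous functions on [0, 1]
    return (1 + x / n) ** n

def sup_gap(n, samples=1001):
    # approximate sup over x in [0, 1] of |e^x - f_n(x)| on a uniform grid
    return max(abs(math.exp(i / (samples - 1)) - f(n, i / (samples - 1)))
               for i in range(samples))

# The limit e^x is continuous and [0, 1] is compact, so Dini's theorem
# promises uniform convergence: the sup-norm gap must shrink to 0.
gaps = [sup_gap(n) for n in (1, 10, 100, 1000)]
print(gaps)
```

The printed gaps decrease monotonically toward zero, exactly the uniform convergence the theorem guarantees.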
Notes |
https://en.wikipedia.org/wiki/Complex%20conjugate%20root%20theorem | In mathematics, the complex conjugate root theorem states that if P is a polynomial in one variable with real coefficients, and a + bi is a root of P with a and b real numbers, then its complex conjugate a − bi is also a root of P.
It follows from this (and the fundamental theorem of algebra) that, if the degree of a real polynomial is odd, it must have at least one real root. That fact can also be proved by using the intermediate value theorem.
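The conjugation symmetry behind the theorem is easy to check numerically; the sketch below evaluates a real-coefficient cubic at a root and at its conjugate (the particular polynomial and the helper `p` are illustrative choices):

```python
def p(z, coeffs):
    """Evaluate a polynomial (coefficients given highest degree first)
    via Horner's method; works for real and complex z."""
    result = 0
    for c in coeffs:
        result = result * z + c
    return result

# P(x) = x^3 - 7x^2 + 41x - 87 has the real root 3 and the pair 2 +/- 5i.
coeffs = [1, -7, 41, -87]
z = 2 + 5j

assert abs(p(z, coeffs)) < 1e-9              # 2 + 5i is a root ...
assert abs(p(z.conjugate(), coeffs)) < 1e-9  # ... and so is its conjugate

# More generally, real coefficients force P(conj(z)) == conj(P(z)):
w = 1.3 - 0.7j
assert abs(p(w.conjugate(), coeffs) - p(w, coeffs).conjugate()) < 1e-12
```

The last assertion is the heart of the proof: conjugation commutes with sums and products, and fixes real coefficients.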
Examples and consequences
The polynomial x² + 1 has roots ±i.
Any real square matrix of odd order has at least one real eigenvalue, since its characteristic polynomial has real coefficients and odd degree. For example, if the matrix is orthogonal, then 1 or −1 is an eigenvalue.
The polynomial
x³ − 7x² + 41x − 87
has roots
3, 2 + 5i, 2 − 5i,
and thus can be factored as
(x − 3)(x − 2 − 5i)(x − 2 + 5i).
In computing the product of the last two factors, the imaginary parts cancel, and we get
(x − 3)(x² − 4x + 29).
The non-real factors come in pairs which when multiplied give quadratic polynomials with real coefficients. Since every polynomial with complex coefficients can be factored into 1st-degree factors (that is one way of stating the fundamental theorem of algebra), it follows that every polynomial with real coefficients can be factored into factors of degree no higher than 2: just 1st-degree and quadratic factors.
If the roots are a + bi and a − bi, they form a quadratic
x² − 2ax + (a² + b²).
If the third root is c, this becomes
(x − c)(x² − 2ax + a² + b²).
Corollary on odd-degree polynomials
It follows from the present theorem and the fundamental theorem of algebra that if the degree of a real polynomial is odd, it must have at least one real root.
This can be proved as follows.
Since non-real complex roots come in conjugate pairs, there are an even number of them;
But a polynomial of odd degree has an odd number of roots;
Therefore some of them must be real.
This requires some care in the presence of multiple roots; but a complex root and its conjugate do have the same multiplicity (and this lemma is not hard to prove). It can also be worked around by considering only irreducible polynomials; any real polynomial o |
https://en.wikipedia.org/wiki/Multi-threshold%20CMOS | Multi-threshold CMOS (MTCMOS) is a variation of CMOS chip technology which has transistors with multiple threshold voltages (Vth) in order to optimize delay or power. The Vth of a MOSFET is the gate voltage where an inversion layer forms at the interface between the insulating layer (oxide) and the substrate (body) of the transistor. Low Vth devices switch faster, and are therefore useful on critical delay paths to minimize clock periods. The penalty is that low Vth devices have substantially higher static leakage power. High Vth devices are used on non-critical paths to reduce static leakage power without incurring a delay penalty. Typical high Vth devices reduce static leakage by 10 times compared with low Vth devices.
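The roughly 10x leakage figure is consistent with a first-order subthreshold model in which leakage current scales as exp(−Vth/(n·vT)). The sketch below is a back-of-the-envelope illustration; the ideality factor n ≈ 1.4 and the 85 mV shift are plausible assumptions, not values from the text:

```python
import math

def leakage_ratio(delta_vth_mV, n=1.4, temp_K=300):
    """First-order subthreshold model: leakage ~ exp(-Vth / (n * vT)).
    Returns the factor by which leakage drops when Vth rises by delta_vth_mV."""
    k_over_q = 8.617e-5             # Boltzmann constant / electron charge, V/K
    vT = k_over_q * temp_K * 1000   # thermal voltage in mV (~25.8 mV at 300 K)
    return math.exp(delta_vth_mV / (n * vT))

# With n ~ 1.4 at room temperature, raising Vth by ~85 mV cuts subthreshold
# leakage by roughly a factor of 10, consistent with the figure quoted for
# typical high-Vth devices.
print(round(leakage_ratio(85), 1))
```

The exponential dependence is why even a modest Vth increase on non-critical paths buys a large static-power saving.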
One method of creating devices with multiple threshold voltages is to apply different bias voltages (Vb) to the base or bulk terminal of the transistors. Other methods involve adjusting the gate oxide thickness, gate oxide dielectric constant (material type), or dopant concentration in the channel region beneath the gate oxide.
A common method of fabricating multi-threshold CMOS involves simply adding additional photolithography and ion implantation steps. For a given fabrication process, the Vth is adjusted by altering the concentration of dopant atoms in the channel region beneath the gate oxide. Typically, the concentration is adjusted by ion implantation method. For example, photolithography methods are applied to cover all devices except the p-MOSFETs with photoresist. Ion implantation is then completed, with ions of the chosen dopant type penetrating the gate oxide in areas where no photoresist is present. The photoresist is then stripped. Photolithography methods are again applied to cover all devices except the n-MOSFETs. Another implantation is then completed using a different dopant type, with ions penetrating the gate oxide. The photoresist is stripped. At some point during the subsequent fabrication process, implanted ions are activa |
https://en.wikipedia.org/wiki/M.%20Riesz%20extension%20theorem | The M. Riesz extension theorem is a theorem in mathematics, proved by Marcel Riesz during his study of the problem of moments.
Formulation
Let E be a real vector space, F ⊂ E be a vector subspace, and K ⊂ E be a convex cone.
A linear functional φ: F → ℝ is called K-positive, if it takes only non-negative values on the cone K:
φ(x) ≥ 0 for x ∈ F ∩ K.
A linear functional ψ: E → ℝ is called a K-positive extension of φ, if it is identical to φ in the domain of φ, and also returns a value of at least 0 for all points in the cone K:
ψ|_F = φ and ψ(x) ≥ 0 for x ∈ K.
In general, a K-positive linear functional on F cannot be extended to a K-positive linear functional on E. Already in two dimensions one obtains a counterexample. Let E = ℝ², K = {(x, y) : y > 0} ∪ {(x, 0) : x > 0}, and F be the x-axis. The positive functional φ(x, 0) = x cannot be extended to a positive functional on E.
However, the extension exists under the additional assumption that E ⊂ K + F, namely for every y ∈ E there exists an x ∈ F such that y − x ∈ K.
Proof
The proof is similar to the proof of the Hahn–Banach theorem (see also below).
By transfinite induction or Zorn's lemma it is sufficient to consider the case dim(E/F) = 1.
Choose any y ∈ E \ F. Set
a = sup{ φ(x) : x ∈ F, y − x ∈ K },  b = inf{ φ(x) : x ∈ F, x − y ∈ K }.
We will prove below that −∞ < a ≤ b. For now, choose any c satisfying a ≤ c ≤ b, and set ψ(y) = c, ψ|_F = φ, and then extend ψ to all of E by linearity. We need to show that ψ is K-positive. Suppose z ∈ K. Then either z ∈ F, or z = p(x + y) or z = p(x − y) for some p > 0 and x ∈ F. If z ∈ F, then ψ(z) = φ(z) ≥ 0. In the first remaining case x + y = z/p ∈ K, and so
a ≥ φ(−x) = −φ(x)
by definition. Thus
ψ(z) = p(φ(x) + c) ≥ p(φ(x) + a) ≥ 0.
In the second case, x − y = z/p ∈ K, and so similarly
b ≤ φ(x)
by definition and so
ψ(z) = p(φ(x) − c) ≥ p(φ(x) − b) ≥ 0.
In all cases, ψ(z) ≥ 0, and so ψ is K-positive.
We now prove that −∞ < a ≤ b. Notice by assumption there exists at least one x ∈ F for which y − x ∈ K, and so a > −∞. However, it may be the case that there are no x ∈ F for which x − y ∈ K, in which case b = +∞ and the inequality is trivial (in this case notice that the third case above cannot happen). Therefore, we may assume that b < +∞ and there is at least one x ∈ F for which x − y ∈ K. To prove the inequality, it suffices to show that whenever x ∈ F and y − x ∈ K, and x′ ∈ F and x′ − y ∈ K, then φ(x) ≤ φ(x′). Indeed,
x′ − x = (x′ − y) + (y − x) ∈ K
since K is a convex cone, and so
0 ≤ φ(x′ − x) = φ(x′) − φ(x)
since φ is K-positive.
Corollary: Krein's extension theorem
Let E be a real linear space, and let K ⊂ E be a convex cone. Let x ∈ E\(− |
https://en.wikipedia.org/wiki/Methyllysine | Methyllysine is a derivative of the amino acid residue lysine in which the side-chain ammonium group has been methylated one or more times.
Such methylated lysines play an important role in epigenetics; the methylation of specific lysines of certain histones in a nucleosome alters the binding of the surrounding DNA to those histones, which in turn affects the expression of genes on that DNA. The binding is affected because the effective radius of the positive charge is increased (methyl groups are larger than the hydrogen atoms they replace), reducing the strongest potential electrostatic attraction with the negatively charged DNA.
It is thought that the methylation of lysine (and arginine) on histone tails does not directly affect their binding to DNA. Rather, such methyl marks recruit other proteins that modulate chromatin structure.
In Protein Data Bank files, methylated lysines are indicated by the MLY or MLZ acronyms. |
https://en.wikipedia.org/wiki/Sand%20boil | Sand boils or sand volcanoes occur when water under pressure wells up through a bed of sand. The water looks like it is boiling up from the bed of sand, hence the name.
Sand volcano
A sand volcano or sand blow is a cone of sand formed by the ejection of sand onto a surface from a central point. The sand builds up as a cone with slopes at the sand's angle of repose. A crater is commonly seen at the summit. The cone looks like a small volcanic cone and can range in size from millimetres to metres in diameter.
The process is often associated with soil liquefaction and the ejection of fluidized sand that can occur in water-saturated sediments during an earthquake. The New Madrid Seismic Zone exhibited many such features during the 1811–12 New Madrid earthquakes. Adjacent sand blows aligned in a row along a linear fracture within fine-grained surface sediments are just as common, and can still be seen in the New Madrid area.
In the past few years, much effort has gone into the mapping of liquefaction features to study ancient earthquakes. The basic idea is to map zones that are susceptible to the process and then go in for a closer look. The presence or absence of soil liquefaction features is strong evidence of past earthquake activity, or lack thereof.
These are to be contrasted with mud volcanoes, which occur in areas of geyser or subsurface gas venting.
Flood protection structures
Sand boils can be a mechanism contributing to liquefaction and levee failure during floods. This effect is caused by a difference in pressure on two sides of a levee or dike, most likely during a flood. This process can result in internal erosion, whereby the removal of soil particles results in a pipe through the embankment. The creation of the pipe will quickly pick up pace and will eventually result in failure of the embankment.
A sand boil is difficult to stop. The most effective method is by creating a body of water above the boil to create enough pressure to slow the flow of |
https://en.wikipedia.org/wiki/Open%20Source%20Cluster%20Application%20Resources | Open Source Cluster Application Resources (OSCAR) is a Linux-based software installation for high-performance cluster computing. OSCAR allows users to install a Beowulf type high performance computing cluster.
See also
TORQUE Resource Manager
Maui Cluster Scheduler
Beowulf cluster
External links
Official OSCAR site
github repository
Cluster computing
Parallel computing |
https://en.wikipedia.org/wiki/Tonic%20sol-fa | Tonic sol-fa (or tonic sol-fah) is a pedagogical technique for teaching sight-singing, invented by Sarah Ann Glover (1785–1867) of Norwich, England and popularised by John Curwen, who adapted it from a number of earlier musical systems. It uses a system of musical notation based on movable do solfège, whereby every note is given a name according to its relationship with other notes in the key: the usual staff notation is replaced with anglicized solfège syllables (e.g. do, re, mi, fa, sol, la, ti, do) or their abbreviations (d, r, m, f, s, l, t, d). "Do" is chosen to be the tonic of whatever key is being used (thus the terminology moveable Do in contrast to the fixed Do system used by John Pyke Hullah). The original solfège sequence started with "Ut", the first syllable of the hymn Ut queant laxis, which later became "Do".
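The movable-do idea, with "do" pinned to the tonic of whatever key is in use, can be sketched as a simple lookup (a simplified illustration that spells every note with sharps and ignores proper enharmonic spelling):

```python
# Map movable-do solfege syllables to pitches in any major key.
SYLLABLES = ["do", "re", "mi", "fa", "sol", "la", "ti"]
MAJOR_STEPS = [0, 2, 4, 5, 7, 9, 11]   # semitones above the tonic
NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F",
              "F#", "G", "G#", "A", "A#", "B"]

def syllable_to_note(tonic, syllable):
    """Return the note name for a syllable, with 'do' fixed to the tonic."""
    start = NOTE_NAMES.index(tonic)
    step = MAJOR_STEPS[SYLLABLES.index(syllable)]
    return NOTE_NAMES[(start + step) % 12]

# In C major "sol" is G; move 'do' to D and the same syllable becomes A.
print(syllable_to_note("C", "sol"), syllable_to_note("D", "sol"))
```

This is the contrast with fixed-do systems, where each syllable would always denote the same absolute pitch regardless of key.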
Overview
Glover developed her method in Norwich from 1812, resulting in the "Norwich Sol-fa Ladder" which she used to teach children to sing. She published her work in the Manual of the Norwich Sol-fa System (1845) and Tetrachordal System (1850).
Curwen was commissioned by a conference of Sunday school teachers in 1841 to find and promote a way of teaching music for Sunday school singing. He took elements of the Norwich Sol-fa and other techniques, later adding hand signals. It was intended that his method could teach singing initially from the Sol-fa, followed by a transition to staff notation.
Curwen brought out his Grammar of Vocal Music in 1843, and in 1853 started the Tonic Sol-Fa Association. The Standard Course of Lessons on the Tonic Sol-fa Method of Teaching to Sing was published in 1858.
In 1872, Curwen changed his former course of using the Sol-fa system as an aid to sight reading, when that edition of his Standard Course of Lessons excluded the staff and relied solely on Tonic Sol-fa.
In 1879 the Tonic Sol-Fa College was opened. Curwen also began publishing, and brought out a periodical called the Tonic Sol-fa Reporter and Magazine of |
https://en.wikipedia.org/wiki/Diskless%20shared-root%20cluster | A diskless shared-root cluster is a way to manage several machines at the same time. Instead of each having its own operating system (OS) on its local disk, there is only one image of the OS available on a server, and all the nodes use the same image. (SSI cluster = single-system image)
The simplest way to achieve this is to use an NFS server configured to host the generic boot image for the SSI cluster nodes (PXE + DHCP + TFTP + NFS).
To ensure that there is no single point of failure, the NFS export for the boot-image should be hosted on a two node cluster.
The architecture of a diskless computer cluster makes it possible to separate servers and storage array. The operating system as well as the actual reference data (user files, databases or websites) are stored on the attached storage system in a centralized manner. Any server that acts as a cluster node can easily be exchanged on demand.
The additional abstraction layer between storage system and computing power eases the scale-out of the infrastructure. Most notably, the storage capacity, the computing power and the network bandwidth can be scaled independently of one another.
A similar technology can be found in VMScluster (OpenVMS) and TruCluster (Tru64 UNIX).
The open-source implementation of a diskless shared-root cluster is known as Open-Sharedroot.
Literature
Marc Grimme, Mark Hlawatschek, Thomas Merz: Data sharing with a Red Hat GFS storage cluster
Marc Grimme, Mark Hlawatschek, German whitepaper: Der Diskless Shared-root Cluster (PDF file; 1.1 MB)
Kenneth W. Preslan: Red Hat GFS 6.1 – Administrator’s Guide |
https://en.wikipedia.org/wiki/Process%20migration | In computing, process migration is a specialized form of process management whereby processes are moved from one computing environment to another. This originated in distributed computing, but is now used more widely. On multicore machines (multiple cores on one processor or multiple processors) process migration happens as a standard part of process scheduling, and it is quite easy to migrate a process within a given machine, since most resources (memory, files, sockets) do not need to be changed, only the execution context (primarily program counter and registers).
The traditional form of process migration is in computer clusters where processes are moved from machine to machine, which is significantly more difficult, as it requires serializing the process image and migrating or reacquiring resources at the new machine. The first implementation of process migration was in the DEMOS/MP operating system project at the University of California, Berkeley and was described in a 1983 paper by Barton Miller and Michael Powell. Process migration is implemented in, among others, OpenMosix and the Sprite OS from the University of California, Berkeley.
Varieties
Process migration in computing comes in two flavors:
Non-preemptive process migration: Process migration that takes place before execution of the process starts (i.e., migration whereby a process need not be preempted). This type of process migration is relatively cheap, since relatively little administrative overhead is involved.
Preemptive process migration: Process migration whereby a process is preempted, migrated, and continues processing in a different execution environment. This type of process migration is relatively expensive, since it involves recording, migrating, and recreating the process state, as well as reconstructing any inter-process communication channels to which the migrating process is connected.
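The record/transfer/recreate cycle of preemptive migration can be caricatured with ordinary serialization (a toy analogy only; a real implementation must capture memory, registers, open files, and sockets, not a small dictionary):

```python
import pickle

def run(state, steps):
    """Advance a toy computation; `state` plays the role of a process image."""
    for _ in range(steps):
        state["total"] += state["next"]
        state["next"] += 1
    return state

# Start a "process" on machine A and stop it partway through.
state = run({"total": 0, "next": 1}, steps=5)   # has summed 1..5 so far

# Preemptive migration: record the state by serializing it ...
image = pickle.dumps(state)

# ... ship `image` to machine B, recreate the state, and resume execution.
resumed = run(pickle.loads(image), steps=5)     # now has summed 1..10
print(resumed["total"])  # 55, i.e. sum(1..10)
```

The hard part in real systems is exactly what this sketch omits: the execution context that is *not* plain data.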
Problems
Several problems occur when a running process moves to another machine. Some of these prob |
https://en.wikipedia.org/wiki/Alveolar%20gas%20equation | The alveolar gas equation is the method for calculating partial pressure of alveolar oxygen (PAO2). The equation is used in assessing if the lungs are properly transferring oxygen into the blood. The alveolar air equation is not widely used in clinical medicine, probably because of the complicated appearance of its classic forms.
The partial pressure of oxygen (pO2) in the pulmonary alveoli is required to calculate both the alveolar-arterial gradient of oxygen and the amount of right-to-left cardiac shunt, which are both clinically useful quantities. However, it is not practical to take a sample of gas from the alveoli in order to directly measure the partial pressure of oxygen. The alveolar gas equation allows the calculation of the alveolar partial pressure of oxygen from data that is practically measurable. It was first characterized in 1946.
Assumptions
The equation relies on the following assumptions:
Inspired gas contains no carbon dioxide (CO2)
Nitrogen (and any other gases except oxygen) in the inspired gas is in equilibrium with its dissolved state in the blood
Inspired and alveolar gases obey the ideal gas law
Carbon dioxide (CO2) in the alveolar gas is in equilibrium with the arterial blood i.e. that the alveolar and arterial partial pressures are equal
The alveolar gas is saturated with water
Equation
The equation (pressures in mmHg) is:
P_AO2 = F_IO2 (P_atm − p_H2O) − p_aCO2 · (F_IO2 + (1 − F_IO2)/R)
If F_IO2(1 − R) is small compared with 1 (in particular when breathing air, F_IO2 = 0.21), the equation can be simplified to:
P_AO2 ≈ F_IO2 (P_atm − p_H2O) − p_aCO2 / R
where:
F_IO2 = fraction of inspired oxygen (0.21 for room air)
P_atm = atmospheric pressure (760 mmHg at sea level)
p_H2O = saturated vapour pressure of water at body temperature (47 mmHg at 37 °C)
p_aCO2 = arterial partial pressure of carbon dioxide (about 40 mmHg)
R = respiratory quotient (about 0.8)
Sample values are given for air at sea level at 37 °C.
Doubling F_IO2 will double P_IO2.
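A hedged sketch of the simplified form with the standard sea-level sample values (F_IO2 = 0.21, 760 mmHg, 47 mmHg, p_aCO2 = 40 mmHg, R = 0.8); the function name and its defaults are illustrative:

```python
def alveolar_PO2(FiO2=0.21, Patm=760.0, PH2O=47.0, PaCO2=40.0, R=0.8):
    """Simplified alveolar gas equation (all pressures in mmHg):
    PAO2 = FiO2 * (Patm - PH2O) - PaCO2 / R."""
    return FiO2 * (Patm - PH2O) - PaCO2 / R

# Room air at sea level, body temperature (water vapour pressure 47 mmHg):
pao2 = alveolar_PO2()
print(round(pao2, 1))  # ~99.7 mmHg, the textbook value of roughly 100 mmHg

# Alveolar-arterial (A-a) gradient for a measured arterial PaO2 of 90 mmHg:
print(round(pao2 - 90.0, 1))
```

The second print illustrates the clinically useful quantity mentioned above: the alveolar value is what makes the A-a gradient computable from practically measurable data.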
Other possible equations exist to calculate the alveolar air.
Abbreviated alveolar air equation
PAO2, PEO2, and PiO2 are the partial pressures of oxygen in alveolar, expired, and inspired gas, respectively, and VD/VT is the ratio of physiologic dead space over tidal volume.
Respiratory quotient (R)
Physiologic dead space over tidal volume (VD/VT)
See also
Pulmonary gas pressures |
https://en.wikipedia.org/wiki/Fixation%20%28population%20genetics%29 | In population genetics, fixation is the change in a gene pool from a situation in which at least two variants of a particular gene (allele) exist in a given population to a situation where only one of the alleles remains. That is, the allele becomes fixed.
In the absence of mutation or heterozygote advantage, any allele must eventually be lost completely from the population or fixed (permanently established at 100% frequency in the population). Whether a gene will ultimately be lost or fixed is dependent on selection coefficients and chance fluctuations in allelic proportions. Fixation can refer to a gene in general or particular nucleotide position in the DNA chain (locus).
In the process of substitution, a previously non-existent allele arises by mutation and undergoes fixation by spreading through the population by random genetic drift or positive selection. Once the frequency of the allele is at 100%, i.e. being the only gene variant present in any member, it is said to be "fixed" in the population.
Similarly, genetic differences between taxa are said to have been fixed in each species.
History
The earliest mention of gene fixation in published works was found in Motoo Kimura's 1962 paper "On Probability of Fixation of Mutant Genes in a Population". In the paper, Kimura uses mathematical techniques to determine the probability of fixation of mutant genes in a population. He showed that the probability of fixation depends on the initial frequency of the allele and the mean and variance of the gene frequency change per generation.
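Kimura's observation that fixation probability depends on the initial allele frequency can be checked for the neutral case with a small Wright-Fisher simulation, where the fixation probability of a neutral allele equals its starting frequency (an illustrative sketch; the population size, seed, and trial count are arbitrary assumptions):

```python
import random

def wright_fisher_fixed(p0, N, rng):
    """Simulate neutral drift in a Wright-Fisher population of N haploid
    individuals; return True if the allele fixes, False if it is lost."""
    count = int(p0 * N)
    while 0 < count < N:
        # each of the N offspring draws a parent allele at random
        count = sum(rng.random() < count / N for _ in range(N))
    return count == N

rng = random.Random(1)
N, p0, trials = 50, 0.2, 1000
fixed = sum(wright_fisher_fixed(p0, N, rng) for _ in range(trials))
# Under drift alone, a neutral allele fixes with probability equal to its
# initial frequency, so about 20% of runs should end in fixation.
print(fixed / trials)
```

Every run ends in loss or fixation, mirroring the statement above that in the absence of mutation or heterozygote advantage an allele must eventually do one or the other.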
Probability
Under conditions of genetic drift alone, every finite set of genes or alleles has a "coalescent point" at which all descendants converge to a single ancestor (i.e. they 'coalesce'). This fact can be used to derive the rate of gene fixation of a neutral allele (that is, one not under any form of selection) for a population of varying size (provided that it is finite and nonzero). Because the effect of natural selecti |
https://en.wikipedia.org/wiki/LWT%20%28journal%29 | LWT - Food Science and Technology, formerly known as Lebensmittel-Wissenschaft & Technologie, is a peer-reviewed scientific journal published by Elsevier. It is the official journal of the Swiss Society of Food Science and Technology and the International Union of Food Science and Technology. The editor-in-chief is Rakesh K. Singh. According to the Journal Citation Reports, the journal's 2020 impact factor is 4.952.
From January 2022 LWT will become an open access journal. |
https://en.wikipedia.org/wiki/Ena/Vasp%20homology%20proteins | ENA/VASP homology proteins or EVH proteins are a family of closely related proteins involved in cell motility in vertebrate and invertebrate animals. EVH proteins are modular proteins that are involved in actin polymerization, as well as interactions with other proteins. Within the cell, Ena/VASP proteins are found at the leading edge of lamellipodia and at the tips of filopodia. Ena, the founding member of the family, was discovered in a Drosophila genetic screen for mutations that act as dominant suppressors of the Abl non-receptor tyrosine kinase. Invertebrate animals have one Ena homologue, whereas mammals have three, named Mena, VASP, and Evl.
Ena/VASP proteins promote the spatially regulated actin polymerization required for efficient chemotaxis in response to attractive and repulsive guidance cues. Mice lacking functional copies of all three family members display pleiotropic phenotypes including exencephaly, edema, failures in neurite formation, and embryonic lethality.
A sub-domain of EVH is the EVH1 domain.
VASP
Vasodilator-stimulated phosphoprotein (VASP) contains a 45-residue-long tetramerisation domain and regulates actin dynamics in the cytoskeleton. This is vital for processes such as cell adhesion and cell migration.
Function
Ena/VASP proteins are actin cytoskeletal regulatory proteins. Ena/VASP proteins are often found in dynamic actin structures like filopodia and lamellipodia, but their precise function in the formation of these structures is controversial. Ena/VASP proteins remain processively bound to the growing barbed (+) ends of actin filaments. They promote actin filament elongation both by delivering monomeric actin to the barbed (+) ends and by protecting these ends from F-actin capping protein.
Structure
The tetramerisation domain has a right-handed alpha helical coiled-coil structure. |
https://en.wikipedia.org/wiki/Cuspate%20foreland | Cuspate forelands, also known as cuspate barriers or nesses in Britain, are geographical features found on coastlines and lakeshores that are created primarily by longshore drift. Formed by accretion and progradation of sand and shingle, they extend outwards from the shoreline in a triangular shape.
Some cuspate forelands may be stabilised by vegetation, while others may migrate down the shoreline. Because some cuspate forelands provide an important habitat for flora and fauna, effective management is required to reduce the impacts from both human activities and physical factors such as climate change and sea level rise.
Formation
The debate over how cuspate forelands form is ongoing. However, the most widely accepted process of formation involves longshore drift. Where longshore drift occurs in opposite directions, two spits merge into a triangular protrusion along a coastline or lakeshore. Their formation is also dependent on dominant and prevailing winds working in opposite directions. Formation can also occur when waves are diffracted around a barrier.
Cuspate forelands can form both along coastlines and along lakeshores. Those formed along coastlines can be in the lee of an offshore island, along a coastline that has no islands in the vicinity, or at a stream mouth where deposition occurs.
Formation in narrow straits or on open coastlines
A cuspate foreland can form in a strait or along a coastline that has no islands or shoals in the area. In this case, longshore drift as well as prevailing wind and waves bring sediment together from opposite directions. If there is a large angle between the waves and the shoreline, the sediment converges, accumulates, and forms beach ridges. Over time, a cuspate foreland forms as a result of continued accretion and progradation. An example of this type of cuspate foreland is the one found at Dungeness along the southern coast of Britain. This cuspate foreland has formed as a result of the merging of SW waves fro |
https://en.wikipedia.org/wiki/Inter-server | In computer network protocol design, inter-server communication is an extension of the client–server model in which data are exchanged directly between servers. In some fields server-to-server (S2S) is used as an alternative, and the term inter-domain can in some cases be used interchangeably.
Protocols
Protocols that have inter-server functions as well as the regular client–server communications include the following:
IPsec, secure network protocol that can be used to secure a host-to-host connection
The domain name system (DNS), which uses an inter-server protocol for zone transfers;
The Dynamic Host Configuration Protocol (DHCP);
FXP, allowing file transfer directly between FTP servers;
The Inter-Asterisk eXchange (IAX);
InterMUD;
The IRC, an Internet chat system with an inter-server protocol allowing clients to be distributed across many servers;
The Network News Transfer Protocol (NNTP);
The Protocol for SYnchronous Conferencing (PSYC);
SIP, a signaling protocol commonly used for Voice over IP;
SILC, a secure Internet conferencing protocol;
The Extensible Messaging and Presence Protocol (XMPP, formerly named Jabber).
ActivityPub, a client-to-server API for creating, updating and deleting content, as well as a federated server-to-server API for delivering notifications and content;
SMTP, which accepts both MUA-to-MTA and MTA-to-MTA traffic, although it is usually recommended that different ports be used for the two roles.
Some of these protocols employ multicast strategies to efficiently deliver information to multiple servers at once.
See also
Overlay network
Internet Relay Chat
Network protocols |
https://en.wikipedia.org/wiki/QuickCheck | QuickCheck is a software library, specifically a combinator library, originally written in the programming language Haskell, designed to assist in software testing by generating test cases for test suites – an approach known as property testing.
Software
It is compatible with the Glasgow Haskell Compiler (GHC) and with the Hugs interpreter (Haskell User's Gofer System). It is free and open-source software released under a BSD-style license.
In QuickCheck, assertions are written about logical properties that a function should fulfill. Then QuickCheck attempts to generate a test case that falsifies such assertions. Once such a test case is found, QuickCheck tries to reduce it to a minimal failing subset by removing or simplifying input data that are unneeded to make the test fail.
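The generate-then-shrink loop can be sketched in a few lines of Python (an illustrative toy, not the actual QuickCheck implementation or API; every name here is invented):

```python
import random

random.seed(1)

def check(prop, tries=200):
    """Generate random integer lists; return a shrunk counterexample or None."""
    for _ in range(tries):
        case = [random.randint(-10, 10) for _ in range(random.randint(0, 8))]
        if not prop(case):
            return shrink(prop, case)
    return None

def shrink(prop, case):
    """Greedily delete elements while the property still fails."""
    changed = True
    while changed:
        changed = False
        for i in range(len(case)):
            smaller = case[:i] + case[i + 1:]
            if not prop(smaller):
                case, changed = smaller, True
                break
    return case

# A deliberately false property: "every list is sorted".
counterexample = check(lambda xs: xs == sorted(xs))
print(counterexample)  # a minimal failing case: two elements out of order
```

Shrinking by deletion always ends at a two-element inversion here, because any longer unsorted list still fails after removing some element.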
The project began in 1999. Besides being used to test regular programs, QuickCheck is also useful for building up a functional specification, for documenting what functions should be doing, and for testing compiler implementations.
Re-implementations of QuickCheck exist for several languages:
C
C++
Chicken
Clojure
Common Lisp
Coq
D
Elm
Elixir
Erlang
F#, C#, and Visual Basic .NET (VB.NET)
Factor
Go
Io
Java
JavaScript
Julia
Logtalk
Lua
Mathematica
Objective-C
OCaml
Perl
Prolog
PHP
Pony
Python
R
Racket
Ruby
Rust
Scala
Scheme
Smalltalk
Standard ML
Swift
TypeScript
Whiley
See also
SPIN model checker
Software testing#Property testing |
https://en.wikipedia.org/wiki/N%C3%A9ron%20model | In algebraic geometry, the Néron model (or Néron minimal model, or minimal model)
for an abelian variety AK defined over the field of fractions K of a Dedekind domain R is the "push-forward" of AK from Spec(K) to Spec(R), in other words the "best possible" group scheme AR defined over R corresponding to AK.
They were introduced by André Néron for abelian varieties over the quotient field of a Dedekind domain R with perfect residue fields; the construction was later extended to semiabelian varieties over all Dedekind domains.
Definition
Suppose that R is a Dedekind domain with field of fractions K, and suppose that AK is a smooth separated scheme over K (such as an abelian variety). Then a Néron model of AK is defined to be a smooth separated scheme AR over R with generic fiber AK that is universal in the following sense.
If X is a smooth separated scheme over R then any K-morphism from XK to AK can be extended to a unique R-morphism from X to AR (Néron mapping property).
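In symbols (standard notation, not spelled out in this excerpt), the Néron mapping property says that restriction to generic fibers is a bijection

```latex
\operatorname{Hom}_{R}(X, A_R) \;\xrightarrow{\ \sim\ }\; \operatorname{Hom}_{K}(X_K, A_K),
\qquad f \longmapsto f_K,
```

for every smooth separated scheme X over R.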
In particular, the canonical map is an isomorphism. If a Néron model exists then it is unique up to unique isomorphism.
In terms of sheaves, any scheme A over Spec(K) represents a sheaf on the category of schemes smooth over Spec(K) with the smooth Grothendieck topology, and this has a pushforward by the injection map from Spec(K) to Spec(R), which is a sheaf over Spec(R). If this pushforward is representable by a scheme, then this scheme is the Néron model of A.
In general the scheme AK need not have any Néron model.
For abelian varieties AK Néron models exist and are unique (up to unique isomorphism) and are commutative quasi-projective group schemes over R. The fiber of a Néron model over a closed point of Spec(R) is a smooth commutative algebraic group, but need not be an abelian variety: for example, it may be disconnected or a torus. Néron models exist as well for certain commutative groups other than abelian varieties such as tori, but these are only locally of finite type. Néron models do not exis |
https://en.wikipedia.org/wiki/Readex | Readex, a division of NewsBank, publishes collections of primary source research materials.
History
In 1950, publisher Albert Boni, co-founder of the Modern Library, formed the Readex Microprint Corporation in New York City. Some of the companies Readex partnered with early on included the American Antiquarian Society and the Library of Congress. Early printing projects included the “British House of Commons Sessional Papers”, the “Early American Imprints, 1639-1800”, and the “English and American Drama of the 19th Century”.
In 1983, Readex was acquired by NewsBank. Since the acquisition, the company has been known primarily as NewsBank.
In the 2000s, several collections became available in searchable digital editions.
Partnerships formed in the 2000s with the Library of Congress, Dartmouth College Library, University of Vermont Libraries, and the United States Senate Library led to publication of digital editions of the American State Papers and the United States Congressional Serial Set.
In 2006, Readex launched America's Historical Newspapers, which includes Early American Newspapers, 1690-1922, and American Ethnic Newspapers.
In 2007, Readex announced a Web-based edition of the Foreign Broadcast Information Service (FBIS) Daily Reports, 1941-1996—the record of political and historical open-source intelligence for the United States government. Also developed was a digital edition of Joint Publications Research Service (JPRS) Reports, 1957-1994.
In 2008, Readex announced a partnership with the Center for Research Libraries to launch an online World Newspaper Archive.
Collections
More recent collections created in partnership with the American Antiquarian Society include The American Civil War Collection, 1860-1922; The American Slavery Collection, 1820-1922; and Caribbean Newspapers, 1718-1876. Also part of America’s Historical Imprints is American Pamphlets, 1820-1922: From the New-York Historical Society.
The latter includes Ethnic American Newspaper |
https://en.wikipedia.org/wiki/Wholesale%20line%20rental | Wholesale line rental (WLR) is a service in which a telecommunications operator takes control of all the connections made through a telephone line from the native operator and collects the subscription fee from the subscribers.
With WLR the alternative telecoms provider buys a wholesale product from the incumbent (usually in conjunction with a wholesale call product such as CPS) and is then able to produce a single bill for the end user covering calls and line rental. Broadband services can also be provided by the WLR operator (and included in a single bill) if a separate wholesale DSL product is purchased from the incumbent, but this is optional.
As part of the United Kingdom PSTN switch-off, BT Openreach is due to retire its Wholesale Line Rental service. Providers are expected to implement Voice over IP to replace copper PSTN voice and ISDN connections. |
https://en.wikipedia.org/wiki/External%20variable | In the C programming language, an external variable is a variable defined outside any function block. On the other hand, a local (automatic) variable is a variable defined inside a function block.
Definition, declaration and the extern keyword
To understand how external variables relate to the extern keyword, it is necessary to know the difference between defining and declaring a variable. When a variable is defined, the compiler allocates memory for that variable and possibly also initializes its contents to some value. When a variable is declared, the compiler requires that the variable be defined elsewhere. The declaration informs the compiler that a variable by that name and type exists, but the compiler does not need to allocate memory for it since it is allocated elsewhere.
The extern keyword means "declare without defining". In other words, it is a way to explicitly declare a variable, or to force a declaration without a definition. It is also possible to explicitly define a variable, i.e. to force a definition. It is done by assigning an initialization value to a variable. If neither the extern keyword nor an initialization value are present, the statement can be either a declaration or a definition. It is up to the compiler to analyse the modules of the program and decide.
A variable must be defined exactly once in one of the modules of the program. If there is no definition or more than one, an error is produced, possibly in the linking stage. A variable may be declared many times, as long as the declarations are consistent with each other and with the definition (something which header files facilitate greatly). It may be declared in many modules, including the module where it was defined, and even many times in the same module. But it is usually pointless to declare it more than once in a module.
An external variable may also be declared inside a function. In this case the extern keyword must be used, otherwise the compiler will consider it a definit |
https://en.wikipedia.org/wiki/Compound%20heterozygosity | In medical genetics, compound heterozygosity is the condition of having two or more heterogeneous recessive alleles at a particular locus that can cause genetic disease in a heterozygous state; that is, an organism is a compound heterozygote when it has two recessive alleles for the same gene, but with those two alleles being different from each other (for example, both alleles might be mutated but at different locations). Compound heterozygosity reflects the diversity of the mutation base for many autosomal recessive genetic disorders; mutations in most disease-causing genes have arisen many times. This means that many cases of disease arise in individuals who have two unrelated alleles, who technically are heterozygotes, but both the alleles are defective.
These disorders are often best known in some classic form, such as the homozygous recessive case of a particular mutation that is widespread in some population. In its compound heterozygous forms, the disease may have lower penetrance, because the mutations involved are often less deleterious in combination than for a homozygous individual with the classic symptoms of the disease. As a result, compound heterozygotes often become ill later in life, with less severe symptoms. Although compound heterozygosity as a cause of genetic disease had been suspected much earlier, widespread confirmation of the phenomenon was not feasible until the 1980s, when polymerase chain reaction techniques for amplification of DNA made it cost-effective to sequence genes and identify polymorphic alleles.
Cause
Compound heterozygosity is one of the causes of variation in genetic disease. The diagnosis and nomenclature for such disorders sometimes reflects history, because most diseases were first observed and classified based on biochemistry and pathophysiology before genetic diagnosis was available. Some genetic disorders are really a family of related disorders that occur in the same metabolic pathway, or in related pathways. Na |
https://en.wikipedia.org/wiki/Skip%20counting | Skip counting is a mathematics technique taught as a kind of multiplication in reform mathematics textbooks such as TERC. In older textbooks, this technique is called counting by twos (threes, fours, etc.).
In skip counting by twos, a person can count to 10 by only naming every other even number: 2, 4, 6, 8, 10. Combining the base (two, in this example) with the number of groups (five, in this example) produces the standard multiplication equation: two multiplied by five equals ten. |
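The example can be reproduced with a short snippet (Python used purely for illustration):

```python
# Skip counting by twos: name only every second number up to 10.
steps = list(range(2, 11, 2))
print(steps)  # [2, 4, 6, 8, 10]

# The base (2) multiplied by the number of groups (5) gives the product.
print(2 * len(steps))  # 10
```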
https://en.wikipedia.org/wiki/Countryman%20line | In mathematics, a Countryman line (named after Roger Simmons Countryman Jr.) is an uncountable linear ordering whose square is the union of countably many chains. The existence of Countryman lines was first proven by Shelah. Shelah also conjectured that, assuming PFA, every Aronszajn line contains a Countryman line. This conjecture, which remained open for three decades, was proven by Justin Moore. |
https://en.wikipedia.org/wiki/Open%20Base%20Station%20Architecture%20Initiative | The Open Base Station Architecture Initiative (OBSAI) was a trade association created by Hyundai, LG Electronics, Nokia, Samsung and ZTE in September 2002 with the aim of creating an open market for cellular network base stations. The hope was that an open market would reduce the development effort and costs traditionally associated with creating base station products.
Goal
The OBSAI specifications provided the architecture, function descriptions and minimum requirements for integration of a set of common modules into a base transceiver station (BTS). It:
defined an internal modular structure of wireless base stations.
defined a set of standard BTS modules with specified form, fit and function such that BTS vendors can acquire and integrate modules from multiple vendors in an OEM fashion.
defined internal digital interfaces between BTS modules to assure interoperability and compatibility.
supported different access technologies such as GSM, Enhanced Data Rates for GSM Evolution (EDGE), CDMA2000, WCDMA or IEEE 802.16 marketed as WiMAX.
This was intended to provide the BTS integrator with flexibility.
A version 2.0 system reference document was published in 2006.
BTS structure
The OBSAI Reference Architecture defines four functional blocks, interfaces between them, and requirements for external interfaces.
Functional blocks
A base transceiver station (BTS) has four main blocks or logical entities: Radio Frequency (RF) block, Baseband block, Control and Clock block, and Transport block.
The Radio Frequency Block sends and receives signals to/from portable devices (via the air interface) and converts between digital data and antenna signal.
Some of the main functions are D/A and A/D conversion, up/down conversion, carrier selection, linear power amplification, diversity transmit and receive, RF combining and RF filtering.
The Baseband Block processes the baseband signal. The functions include encoding/decoding, ciphering/deciphering, frequency hopping (GS |
https://en.wikipedia.org/wiki/CryptoGraf | CryptoGraf is a secure messaging application for smartphones running Symbian OS and Windows Mobile. It allows the user to compose and send SMS and MMS messages that are encrypted and digitally signed using methods that are based on the S/MIME standard. Secure e-mail messaging is not supported.
The cryptographic algorithms supported by CryptoGraf include AES, RSA and SHA-256.
RSA public keys of other users are stored in a "Crypto Contacts" list. The user sends an encrypted SMS or MMS to a recipient listed in Crypto Contacts. Keys must be exchanged before messages can be sent. The way a Crypto Contact is received determines the trust level assigned to the key:
High trust for Crypto Contacts received by Bluetooth.
Medium trust for Crypto Contacts received via High trust contacts.
Low trust for Crypto Contacts received by SMS or MMS.
The Crypto Contacts list is based on a trust model similar to the Web of trust known from PGP. Crypto Contacts are compatible with X.509 digital certificates and contain RSA (1024/2048 bit) public keys. Messages are encrypted using AES-256 bit and digitally signed using RSA (1024/2048 bit) with SHA-256. CryptoGraf is integrated with standard messaging application in both Symbian and Windows Mobile and stores messages in the default Inbox, Sent and other folders.
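The three trust levels above amount to a lookup keyed on the channel a contact arrived through; a sketch (invented names, not actual CryptoGraf code):

```python
# Illustrative sketch of the trust levels described above; names are
# invented and this is not actual CryptoGraf code.
TRUST_BY_CHANNEL = {
    "bluetooth": "high",                 # key exchanged in person
    "via_high_trust_contact": "medium",  # forwarded by a trusted contact
    "sms": "low",
    "mms": "low",
}

def trust_level(channel):
    # Unknown channels conservatively default to low trust.
    return TRUST_BY_CHANNEL.get(channel, "low")

print(trust_level("bluetooth"))  # high
print(trust_level("mms"))        # low
```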
CryptoGraf in the press
CryptoGraf received attention from a local newspaper after its first release.
The Nation, 23 January 2007. Data-encryption firm upbeat
See also
S/MIME
PGP
Public key infrastructure
Random number generator
X.509
Web of trust
External links
CryptoGraf website
CryptoGraf documentation
CryptoGraf FAQ
Cryptographic software
Pocket PC software
Symbian instant messaging clients |
https://en.wikipedia.org/wiki/Tend%20and%20befriend | Tend-and-befriend is a behavior exhibited by some animals, including humans, in response to threat. It refers to protection of offspring (tending) and seeking out their social group for mutual defense (befriending). In evolutionary psychology, tend-and-befriend is theorized as having evolved as the typical female response to stress.
The tend-and-befriend theoretical model was originally developed by Shelley E. Taylor and her research team at the University of California, Los Angeles and first described in a Psychological Review article published in the year 2000.
Biological bases
According to the Polyvagal theory developed by Dr. Stephen Porges, the "Social Nervous System" is an affiliative neurocircuitry that prompts affiliation, particularly in response to stress. This system is described as regulating social approach behavior. A biological basis for this regulation appears to be oxytocin.
Oxytocin has been tied to a broad array of social relationships and activities, including peer bonding, sexual activity, and affiliative preferences. Oxytocin is released in humans in response to a broad array of stressors, especially those that may trigger affiliative needs. Oxytocin promotes affiliative behavior, including maternal tending and social contact with peers. Thus, affiliation under stress serves tending needs, including protective responses towards offspring. Affiliation may also take the form of befriending, namely seeking social contact for one's own protection, the protection of offspring, and the protection of the social group. These social responses to threat reduce biological stress responses, including lowering heart rate, blood pressure, and hypothalamic pituitary adrenal axis (HPA) stress activity, such as cortisol responses.
Women are more likely to respond to stress through tending and befriending than men. Paralleling this behavioral sex difference, estrogen enhances the effects of oxytocin, whereas androgens inhibit oxytocin release.
Tending unde |
https://en.wikipedia.org/wiki/Sazonov%27s%20theorem | In mathematics, Sazonov's theorem, named after Vyacheslav Vasilievich Sazonov, is a theorem in functional analysis.
It states that a bounded linear operator between two Hilbert spaces is γ-radonifying if it is a Hilbert–Schmidt operator. The result is also important in the study of stochastic processes and the Malliavin calculus, since results concerning probability measures on infinite-dimensional spaces are of central importance in these fields. Sazonov's theorem also has a converse: if the map is not Hilbert–Schmidt, then it is not γ-radonifying.
Statement of the theorem
Let G and H be two Hilbert spaces and let T : G → H be a bounded operator from G to H. Recall that T is said to be γ-radonifying if the push-forward of the canonical Gaussian cylinder set measure on G is a bona fide measure on H. Recall also that T is said to be a Hilbert–Schmidt operator if there is an orthonormal basis {e_i : i ∈ I} of G such that ∑_{i∈I} ‖Te_i‖² < ∞.
Then Sazonov's theorem is that T is γ-radonifying if it is a Hilbert–Schmidt operator.
The proof uses Prokhorov's theorem.
Remarks
The canonical Gaussian cylinder set measure on an infinite-dimensional Hilbert space can never be a bona fide measure; equivalently, the identity function on such a space cannot be γ-radonifying.
See also |
https://en.wikipedia.org/wiki/Dorsal%20carpometacarpal%20ligaments | The dorsal carpometacarpal ligaments, the strongest and most distinct carpometacarpal ligaments, connect the carpal and metacarpal bones on their dorsal surfaces.
The second metacarpal bone receives two fasciculi, one from the greater, the other from the lesser multangular.
The third metacarpal receives two, one each from the lesser multangular and capitate.
The fourth receives two, one each from the capitate and hamate.
The fifth receives a single fasciculus from the hamate, and this is continuous with a similar ligament on the volar surface, forming an incomplete capsule. |
https://en.wikipedia.org/wiki/Palmar%20carpometacarpal%20ligaments | The Palmar carpometacarpal ligaments (or volar) are a series of bands on the palmar surface of the carpometacarpal joints that connect the carpal bones to the second through fifth metacarpal bones. The second metacarpal is connected to the trapezium. The third metacarpal is connected to the trapezium, to the capitate, and to the hamate. The fourth and fifth metacarpals are connected to the hamate.
The palmar carpometacarpal ligaments have a somewhat similar arrangement to the dorsal carpometacarpal ligaments, with the exception of those of the third metacarpal, which are three in number:
a lateral one from the greater multangular, situated superficial to the sheath of the tendon of the Flexor carpi radialis;
an intermediate one from the capitate;
and a medial one from the hamate. |
https://en.wikipedia.org/wiki/Gaussian%20free%20field | In probability theory and statistical mechanics, the Gaussian free field (GFF) is a Gaussian random field, a central model of random surfaces (random height functions). Sheffield gives a mathematical survey of the Gaussian free field.
The discrete version can be defined on any graph, usually a lattice in d-dimensional Euclidean space. The continuum version is defined on Rd or on a bounded subdomain of Rd. It can be thought of as a natural generalization of one-dimensional Brownian motion to d time (but still one space) dimensions: it is a random (generalized) function from Rd to R. In particular, the one-dimensional continuum GFF is just the standard one-dimensional Brownian motion or Brownian bridge on an interval.
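As a toy illustration of the one-dimensional case just mentioned (a pure-Python sketch under that analogy, not taken from the article), a Gaussian random walk pinned to zero at both ends gives a discrete Brownian bridge:

```python
import random

random.seed(0)

def gaussian_walk(n):
    """Partial sums of i.i.d. standard Gaussians: a discrete Brownian motion."""
    h = [0.0]
    for _ in range(n):
        h.append(h[-1] + random.gauss(0.0, 1.0))
    return h

def bridge(walk):
    """Subtract the linear interpolation so both endpoints are pinned to 0."""
    n = len(walk) - 1
    return [x - walk[-1] * (i / n) for i, x in enumerate(walk)]

w = gaussian_walk(100)
b = bridge(w)
print(len(b), b[0], b[-1])  # 101 0.0 0.0
```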
In the theory of random surfaces, it is also called the harmonic crystal. It is also the starting point for many constructions in quantum field theory, where it is called the Euclidean bosonic massless free field. A key property of the 2-dimensional GFF is conformal invariance, which relates it in several ways to the Schramm–Loewner evolution.
Similarly to Brownian motion, which is the scaling limit of a wide range of discrete random walk models (see Donsker's theorem), the continuum GFF is the scaling limit of not only the discrete GFF on lattices, but of many random height function models, such as the height function of uniform random planar domino tilings. The planar GFF is also the limit of the fluctuations of the characteristic polynomial of a random matrix model, the Ginibre ensemble.
The structure of the discrete GFF on any graph is closely related to the behaviour of the simple random walk on the graph. For instance, the discrete GFF plays a key role in the proof of several conjectures about the cover time of graphs (the expected number of steps it takes for the random walk to visit all the vertices).
Definition of the discrete GFF
Let P(x, y) be the transition kernel of the Markov chain given by a random walk on a finite |
https://en.wikipedia.org/wiki/Gankyil | The Gankyil or "wheel of joy" is a symbol and ritual tool used in Tibetan and East Asian Buddhism. It is composed of three (sometimes two or four) swirling and interconnected blades. The traditional spinning direction is clockwise (right-turning), but counter-clockwise versions are also common.
The gankyil, as the inner wheel of the dharmachakra, is depicted on the flags of Sikkim and Joseon, and on the Flag of Tibet and Emblem of Tibet.
Exegesis
In addition to linking the gankyil with the "wish-fulfilling jewel" (Skt. cintamani), Robert Beer makes the following connections:
The "victory" referred to above is symbolised by the dhvaja or "victory banner".
Wallace (2001: p. 77) identifies the ānandacakra with the heart of the "cosmic body" of which Mount Meru is the epicentre:
Associated triunes
Ground, path, and fruit
"ground", "base" ()
"path", "method" ()
"fruit", "product" ()
Three humours of traditional Tibetan medicine
Attributes connected with the three humors (Sanskrit: tridoshas, Tibetan: nyi pa gsum):
Desire (Tibetan: འདོད་ཆགས། ’dod chags) is aligned with the humor Wind (Tibetan: rlung, Sanskrit: vata - "air and aether constitution")
Hatred (Tibetan: ཞེ་སྡང་། zhe sdang) is aligned with the humor Bile (Tibetan: mkhris pa, tripa, Sanskrit: pitta - "fire and water constitution")
Ignorance (Tibetan: གཏི་མུག gti mug) is aligned with the humor Phlegm (Tibetan: bad kan, béken, Sanskrit: kapha - "earth and water constitution").
Study, reflection, and meditation
Study (Tibetan: ཐོས་པ། thos pa)
Reflection (Tibetan: བསམ་པ། bsam pa)
Meditation (Tibetan: སྒོམ་པ། sgom pa)
These three aspects are the mūlaprajñā of the sādhanā of the prajñāpāramitā, the "pāramitā of wisdom". Hence, these three are related to, but distinct from, the Prajñāpāramitā that denotes a particular cycle of discourse in the Buddhist literature that relates to the doctrinal field (kṣetra) of the second turning of the dharmacakra.
Mula dharmas of the path
The Dzogchen teachings f |
https://en.wikipedia.org/wiki/Cell%20survival%20curve | A cell survival curve is a curve used in radiobiology. It depicts the relationship between the fraction of cells retaining their reproductive integrity and the absorbed dose of radiation. Conventionally, the surviving fraction is depicted on a logarithmic scale, and is plotted on the y-axis against dose on the x-axis.
The linear quadratic model is now most often used to describe the cell survival curve, assuming that there are two mechanisms to cell death by radiation: A single lethal event or an accumulation of harmful but non-lethal events. Cell survival fractions are exponential functions with a dose-dependent term in the exponent due to the Poisson statistics underlying the stochastic process. Whereas single lethal events lead to an exponent that is linearly related to dose, the survival fraction function for a two-stage mechanism carries an exponent proportional to the square of dose. The coefficients must be inferred from measured data, such as the Hiroshima Leukemia data. With higher orders being of lesser importance and the total survival fraction being the product of the two functions, the model is aptly called linear-quadratic.
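In the standard notation (not reproduced in this excerpt), the linear-quadratic survival fraction is written

```latex
S(D) = e^{-\left(\alpha D + \beta D^{2}\right)}
```

where D is the absorbed dose, the αD term models single lethal events, the βD² term models death from the accumulation of two sublethal events, and the coefficients α and β are fitted to measured survival data.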
See also
Dose fractionation
Dose–response relationship
Chronic radiation syndrome
External links
Curves
Radiobiology |
https://en.wikipedia.org/wiki/Software%20remastering | Software remastering is software development that recreates system software and applications while incorporating customizations, with the intent that it is copied and run elsewhere for "off-label" usage. The term comes from remastering in media production, where it is similarly distinguished from mere copying.
If the codebase does not continue to parallel an ongoing, upstream software development, then it is a fork, not a remastered version. If a codebase replicates the behaviour of the original but does not derive from the original codebase then it is a clone.
Common examples of software remastering include Linux and Unix-like distributions, and video games. Remastered Linux, BSD and OpenSolaris operating system distributions are common because they are not copy-protected, and because such operating systems allow an application to take a snapshot of the running system and install it onto bootable media such as a thumb drive, or into a virtual machine in a hypervisor. Since 2001 over 1000 computer operating systems have become available for download from the Internet. A global community of Linux providers pushes the practice of remastering through developer switching, project overtaking or merging, and sharing over the Internet. Most distributions start as a remastered version of another distribution, as evidenced by the announcements made at DistroWatch. Notably, remastering SLS Linux forked Slackware; remastering Red Hat Linux helped fork Yellow Dog Linux, Mandriva and TurboLinux; and remastering a Debian distribution started Ubuntu, which is itself remastered by the Linux Mint team. These changes might involve critical system software, but the customizations made in remastering can be as trivial as a change in a default setting of the distribution and subsequent provision to an acquaintance on installation media. When a remastered version becomes public it becomes a distribution.
Microsoft Windows has also been modified and remastered |
https://en.wikipedia.org/wiki/Biolab | Biolab (Biological Experiment Laboratory) is a single-rack multi-user science payload designed for use in the Columbus laboratory of the International Space Station. Biolab supports biological research on small plants, small invertebrates, microorganisms, animal cells, and tissue cultures. It includes an incubator equipped with centrifuges in which these experimental subjects can be subjected to controlled levels of acceleration.
These experiments help to identify "the role that microgravity plays at all levels of an organism, from the effects on a single cell up to a complex organism including humans."
Description
Summary:
BioLab provides an on-orbit biology laboratory that enables scientists to study the effects of microgravity and space radiation on unicellular and multicellular organisms, including bacteria, insects, protists (simple eukaryotic organisms), seeds, and cells.
The BioLab facility includes an incubator, microscope, spectrophotometer (instrument used to measure the spectrum of light absorbed by a sample), and two centrifuges to provide artificial gravity. BioLab allows researchers to illuminate and observe individual experiment containers (ECs), and BioLab's life support system can regulate the content of the atmosphere (including humidity).
BioLab is integrated into a single International Standard Payload Rack (ISPR) within the European Columbus laboratory, which was launched on space shuttle mission STS-122.
Results from BioLab experiments could affect biomedical research in areas such as immunology, pharmacology, bone demineralization, cellular signal transduction (the processing of electrochemical stimuli in cells), cellular repair, and biotechnology.
The BioLab facility, which has been integrated into a single International Standard Payload Rack (ISPR) in the European Columbus laboratory, is divided into two sections: the automated section, or core unit, and the manual section, designed for crew interaction with the experiments. The |
https://en.wikipedia.org/wiki/List%20of%20PSPACE-complete%20problems | Here are some of the more commonly known problems that are PSPACE-complete when expressed as decision problems. This list is in no way comprehensive.
Games and puzzles
Generalized versions of:
Amazons
Atomix
Checkers if a draw is forced after a polynomial number of non-jump moves
Dyson Telescope Game
Cross Purposes
Geography
Two-player game version of Instant Insanity
Ko-free Go
Ladder capturing in Go
Gomoku
Hex
Konane
Lemmings
Node Kayles
Poset Game
Reversi
River Crossing
Rush Hour
Finding optimal play in Mahjong solitaire
Sokoban
Super Mario Bros.
Black Pebble game
Black-White Pebble game
Acyclic pebble game
One-player pebble game
Token on acyclic directed graph games:
Logic
Quantified boolean formulas
First-order logic of equality
Provability in intuitionistic propositional logic
Satisfaction in modal logic S4
First-order theory of the natural numbers under the successor operation
First-order theory of the natural numbers under the standard order
First-order theory of the integers under the standard order
First-order theory of well-ordered sets
First-order theory of binary strings under lexicographic ordering
First-order theory of a finite Boolean algebra
Stochastic satisfiability
Linear temporal logic satisfiability and model checking
Lambda calculus
Type inhabitation problem for simply typed lambda calculus
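The first entry in the list above, evaluating quantified Boolean formulas (TQBF), is the canonical PSPACE-complete problem; a naive evaluator that uses polynomial space but exponential time is easy to sketch (illustrative Python with an invented formula representation):

```python
# quantifiers: a list of ('A', var) or ('E', var) pairs, outermost first.
# matrix: a function from an assignment dict to a bool.
def eval_qbf(quantifiers, matrix, assignment=None):
    assignment = dict(assignment or {})
    if not quantifiers:
        return matrix(assignment)
    (q, var), rest = quantifiers[0], quantifiers[1:]
    outcomes = []
    for value in (False, True):
        assignment[var] = value
        outcomes.append(eval_qbf(rest, matrix, assignment))
    # 'A' (for all) needs both branches true; 'E' (exists) needs one.
    return all(outcomes) if q == 'A' else any(outcomes)

phi = lambda a: a['x'] == a['y']                # the matrix: x <-> y
print(eval_qbf([('A', 'x'), ('E', 'y')], phi))  # True:  for every x there is y = x
print(eval_qbf([('E', 'x'), ('A', 'y')], phi))  # False: no single x equals every y
```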
Automata and language theory
Circuit theory
Integer circuit evaluation
Automata theory
Word problem for linear bounded automata
Word problem for quasi-realtime automata
Emptiness problem for a nondeterministic two-way finite state automaton
Equivalence problem for nondeterministic finite automata
Word problem and emptiness problem for non-erasing stack automata
Emptiness of intersection of an unbounded number of deterministic finite automata
A generalized version of Langton's Ant
Minimizing nondeterministic finite automata
Formal languages
Word problem for context-sensitive language
Intersect |
https://en.wikipedia.org/wiki/Test%20fixture | A test fixture is a device used to consistently test some item, device, or piece of software. Test fixtures are used in the testing of electronics, software and physical devices.
Electronics
In testing electronic equipment such as circuit boards, electronic components, and chips, a test fixture is a device or setup designed to hold the device under test in place and allow it to be tested by being subjected to controlled electronic test signals. Examples are a bed of nails tester or smart fixture.
Software
In the context of software a test fixture (also called "test context") is used to set up system state and input data needed for test execution. For example, the Ruby on Rails web framework uses YAML to initialize a database with known parameters before running a test. This allows for tests to be repeatable, which is one of the key features of an effective test framework.
Setup
Test fixtures can be set up three different ways: in-line, delegate, and implicit.
In-line setup creates the test fixture in the same method as the rest of the test. While in-line setup is the simplest test fixture to create, it leads to duplication when multiple tests require the same initial data.
Delegate setup places the test fixture in a separate standalone helper method that is accessed by multiple test methods.
Implicit setup places the test fixture in a setup method which is used to set up multiple test methods. This differs from delegate setup in that the overall setup of multiple tests is in a single setup method where the test fixture gets created rather than each test method having its own setup procedures and linking to an external test fixture.
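The three setup styles can be sketched with Python's built-in unittest framework. The `Account` class and its numbers are hypothetical, invented only to make the fixture visible:

```python
import unittest

class Account:
    """Minimal hypothetical class under test."""
    def __init__(self, balance=0):
        self.balance = balance
    def deposit(self, amount):
        self.balance += amount

class InlineSetupTest(unittest.TestCase):
    def test_deposit(self):
        account = Account(balance=100)   # in-line setup: fixture built inside the test itself
        account.deposit(50)
        self.assertEqual(account.balance, 150)

class DelegateSetupTest(unittest.TestCase):
    def make_account(self):              # delegate setup: a standalone helper method
        return Account(balance=100)
    def test_deposit(self):
        account = self.make_account()
        account.deposit(50)
        self.assertEqual(account.balance, 150)

class ImplicitSetupTest(unittest.TestCase):
    def setUp(self):                     # implicit setup: the framework calls this before every test
        self.account = Account(balance=100)
    def test_deposit(self):
        self.account.deposit(50)
        self.assertEqual(self.account.balance, 150)
```

With implicit setup, `setUp` runs before every test method, so each test starts from an identical, freshly built fixture.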
Advantages and disadvantages
The main advantage of a test fixture is that it allows for tests to be repeatable since each test is always starting with the same setup. Test fixtures also ease test code design by allowing the developer to separate methods into different functions and reuse each function for other tests. Furthe |
https://en.wikipedia.org/wiki/Fourier%E2%80%93Motzkin%20elimination | Fourier–Motzkin elimination, also known as the FME method, is a mathematical algorithm for eliminating variables from a system of linear inequalities. It can output real solutions.
The algorithm is named after Joseph Fourier who proposed the method in 1826 and Theodore Motzkin who re-discovered it in 1936.
Elimination
The elimination of a set of variables, say V, from a system of relations (here linear inequalities) refers to the creation of another system of the same sort, but without the variables in V, such that both systems have the same solutions over the remaining variables.
If all variables are eliminated from a system of linear inequalities, then one obtains a system of constant inequalities. It is then trivial to decide whether the resulting system is true or false. It is true if and only if the original system has solutions. As a consequence, elimination of all variables can be used to detect whether a system of inequalities has solutions or not.
Consider a system S of n inequalities with variables x_1 to x_r, with x_r the variable to be eliminated. The linear inequalities in the system can be grouped into three classes depending on the sign (positive, negative or null) of the coefficient of x_r:
those inequalities that are of the form x_r ≥ b_i − Σ_{j=1..r−1} a_{ij} x_j; denote these by x_r ≥ A_i(x_1, ..., x_{r−1}), for i ranging from 1 to n_A, where n_A is the number of such inequalities;
those inequalities that are of the form x_r ≤ b_i − Σ_{j=1..r−1} a_{ij} x_j; denote these by x_r ≤ B_i(x_1, ..., x_{r−1}), for i ranging from 1 to n_B, where n_B is the number of such inequalities;
those inequalities in which x_r plays no role, grouped into a single conjunction φ.
The original system is thus equivalent to
max(A_1(x_1, ..., x_{r−1}), ..., A_{n_A}(x_1, ..., x_{r−1})) ≤ x_r ≤ min(B_1(x_1, ..., x_{r−1}), ..., B_{n_B}(x_1, ..., x_{r−1})) ∧ φ.
Elimination consists in producing a system equivalent to ∃x_r S. Obviously, this formula is equivalent to
max(A_1(x_1, ..., x_{r−1}), ..., A_{n_A}(x_1, ..., x_{r−1})) ≤ min(B_1(x_1, ..., x_{r−1}), ..., B_{n_B}(x_1, ..., x_{r−1})) ∧ φ.
The inequality
max(A_1, ..., A_{n_A}) ≤ min(B_1, ..., B_{n_B})
is equivalent to the n_A·n_B inequalities A_i ≤ B_j, for i ranging from 1 to n_A and j ranging from 1 to n_B.
We have therefore transformed the original system into another system where x_r is eliminated. Note that the output system has (n − n_A − n_B) + n_A·n_B inequalities. In particular, if n_A = n_B = n/2, then the number of output inequalities is n²/4.
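A single elimination step can be sketched in Python. This is a minimal illustration, not an optimized or numerically careful implementation; each inequality is represented as a pair `(coeffs, b)`, meaning `sum(coeffs[i] * x[i]) <= b`:

```python
from itertools import product

def fm_eliminate(ineqs, v):
    """One Fourier-Motzkin step: eliminate variable v from a list of
    inequalities (coeffs, b), each meaning sum(coeffs[i] * x[i]) <= b."""
    lower, upper, rest = [], [], []
    for coeffs, b in ineqs:
        c = coeffs[v]
        if c > 0:
            upper.append((coeffs, b, c))    # gives an upper bound on x_v
        elif c < 0:
            lower.append((coeffs, b, -c))   # gives a lower bound on x_v
        else:
            rest.append((coeffs, b))        # x_v plays no role
    out = list(rest)
    # Pair every upper bound with every lower bound; the positive
    # combination a_l * (upper) + a_u * (lower) cancels x_v exactly.
    for (cu, bu, au), (cl, bl, al) in product(upper, lower):
        coeffs = [al * cu[i] + au * cl[i] for i in range(len(cu))]
        out.append((coeffs, al * bu + au * bl))
    return out

# Eliminate y (index 1) from {x + y <= 4, -y <= 0, x - y <= 1}:
print(fm_eliminate([([1, 1], 4), ([0, -1], 0), ([1, -1], 1)], 1))
# → [([1, 0], 4), ([2, 0], 5)]   i.e. x <= 4 and 2x <= 5
```

As in the description above, the output size is the count of untouched inequalities plus one inequality per (upper bound, lower bound) pair.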
Example
Consider the followi |
https://en.wikipedia.org/wiki/Plant%20Resources%20of%20Tropical%20Africa |
Plant Resources of Tropical Africa, known by its acronym PROTA, is a retired NGO and interdisciplinary documentation programme active between 2000 and 2013. PROTA produced a large database and various publications about Africa's useful plants.
Purpose
PROTA was concerned with increasing accessibility to traditional knowledge and scientific information about many types of African plants including: dyes & tannins, fibers, medicinal plants, stimulants, tropical timbers, vegetables, tubers (carbohydrates), oil seeds, ornamental plants, forage plants, and cereals. PROTA supported the sustainable use of these useful plants to preserve culture, reduce poverty and hunger, and respond to climate change. To this end, PROTA's overall goal was to synthesize diverse, published information for approximately 8,000 plants used in tropical Africa, then make it widely accessible through an online database and various book publications. In other words, PROTA was dedicated to making the useful plant biodiversity of tropical Africa better known and respected.
PROTA's database and various publications are considered unique in their epistemological approach because they were compiled as much from obscure publications as from peer-reviewed and popular literature, gathered throughout Africa and Europe. In this way PROTA publications include Africa-centered references and perspectives, which is a major focus of the broader discipline of African studies. PROTA also was an international NGO registered in Nairobi, Kenya that used information from its publications to structure a number of community projects involving over 800 farmers in Benin, Botswana, Burkina Faso, Kenya, and Madagascar.
Some of PROTA's other goals included:
to promote the sustainable use of plants to the public and private sectors
to facilitate socially inclusive, collaborative research about African plants from experts in Africa and elsewhere
to make research about African plants more accessible
to support intel |
https://en.wikipedia.org/wiki/Medial%20calcaneal%20branches%20of%20the%20tibial%20nerve | The medial calcaneal branches of the tibial nerve (internal calcaneal branches) perforate the laciniate ligament, and supply the skin of the heel and medial side of the sole of the foot.
Structure
The medial calcaneal nerve originates either from the tibial nerve or the lateral plantar nerve. It splits into two cutaneous branches.
Function
The medial calcaneal nerve provides sensory innervation to the medial side of the heel.
See also
Cutaneous innervation of the lower limbs |
https://en.wikipedia.org/wiki/Poultry%20Science%20Association | The Poultry Science Association (PSA) is an American non-profit professional organization for the advancement of poultry science. Founded in 1908, the PSA is headquartered in Champaign, Illinois.
Consisting of 1800 members, PSA is involved in research, education, nutrition, and processing of poultry-based products, including chicken, quail, turkey, and duck. Its two journals are Poultry Science and Journal of Applied Poultry Research.
Its youngest ever President was Prof Frederick Hutt of the University of Minnesota in 1932 (then aged 34).
See also
Poultry farming in the United States |
https://en.wikipedia.org/wiki/Metal%E2%80%93insulator%20transition | Metal–insulator transitions are transitions of a material from a metal (material with good electrical conductivity of electric charges) to an insulator (material where conductivity of charges is quickly suppressed). These transitions can be achieved by tuning various ambient parameters such as temperature, pressure or, in case of a semiconductor, doping.
History
The basic distinction between metals and insulators was proposed by Hans Bethe, Arnold Sommerfeld and Felix Bloch in 1928-1929. It distinguished between conducting metals (with partially filled bands) and nonconducting insulators. However, in 1937 Jan Hendrik de Boer and Evert Verwey reported that many transition-metal oxides (such as NiO) with a partially filled d-band were poor conductors, often insulating. In the same year, the importance of the electron-electron correlation was stated by Rudolf Peierls. Since then, these materials as well as others exhibiting a transition between a metal and an insulator have been extensively studied, e.g. by Sir Nevill Mott, after whom the insulating state is named Mott insulator.
The first metal-insulator transition to be found was the Verwey transition of magnetite in the 1940s.
Theoretical description
The classical band structure of solid state physics predicts the Fermi level to lie in a band gap for insulators and in the conduction band for metals, which means metallic behavior is seen for compounds with partially filled bands. However, some compounds have been found which show insulating behavior even for partially filled bands. This is due to the electron-electron correlation, since electrons cannot be seen as noninteracting. Mott considers a lattice model with just one electron per site. Without taking the interaction into account, each site could be occupied by two electrons, one with spin up and one with spin down. Due to the interaction the electrons would then feel a strong Coulomb repulsion, which Mott argued splits the band in two. Having one electr |
https://en.wikipedia.org/wiki/American%20Dairy%20Science%20Association | The American Dairy Science Association (ADSA) is a non-profit professional organization for the advancement of dairy science. ADSA is headquartered in Champaign, Illinois.
Consisting of 4500 members, ADSA is involved in research, education, and industry relations. Areas of ADSA focus include:
care and nutrition of dairy animals;
management, economics and marketing of dairy farms and product manufacturing;
sanitation throughout the dairy industry; and,
processing of dairy-based products, including processing and foods manufacturing (milk, cheese, yogurt, and ice cream).
ADSA's top priorities are the Journal of Dairy Science, annual meetings, scientific liaisons with other organizations and agencies, and international development. ADSA is attempting to add value to potential new members through an emphasis on "integration of dairy disciplines from the farm to the table."
History
In the summer of 1905, the Graduate School of Agriculture was held at Ohio State University. Professor Wilber J. Fraser of the University of Illinois at Urbana-Champaign suggested a permanent "Dairy Instructors and Investigators Association". Attendees decided that Professor Fraser should discuss the matter further with university leaders and, if enough interest was indicated, call an organizational meeting at the 1906 Graduate School of Agriculture to be held at the University of Illinois, Urbana. Apparently, sufficient interest was raised, because Professor Fraser called interested parties to attend an inaugural meeting on July 17, 1906. Although 19 persons appear on the photograph of that first meeting, records indicate only 17 or 18 charter members joined what was then called "National Association of Dairy Instructors and Investigators". At this time, dairy schools existed at Cornell, Iowa State, Wisconsin, Purdue, Penn State, Ohio State, Missouri, Minnesota, Guelph (Ontario), and Illinois.
The second meeting was at the National Dairy Show in Chicago on 11 Oct 1907. Only 11 members |
https://en.wikipedia.org/wiki/QuarkNet | QuarkNet is a long-term, research-based teacher professional development program in the United States jointly funded by the National Science Foundation and the US Department of Energy. Since 1999, QuarkNet has established centers at universities and national laboratories conducting research in particle physics (also called high-energy physics) across the United States, and has been bringing such physics to high school classrooms. QuarkNet programs are designed and conducted according to “best practices” described in the National Research Council National Science Education Standards report (1995) and support the Next Generation Science Standards (2013).
Data Camp
The summer Data Camp is an annual national activity allowing teachers to see detectors and colliders, as well as form research groups to process experimental data. Teachers have been working in separate groups investigating triggers released by CMS since early 2011. The groups search the data for evidence of the J/Psi, Z and W bosons. They use Excel to reconstruct the invariant mass of a particle when given the four-vectors of that particle's decay products. In addition, participants attend several talks and go on tours of technical areas.
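The invariant-mass reconstruction mentioned here is simple four-vector arithmetic: m² = (ΣE)² − |Σp|². A Python sketch with illustrative numbers (these are made up for the example, not real CMS data):

```python
import math

def invariant_mass(particles):
    """particles: iterable of four-vectors (E, px, py, pz), e.g. in GeV.
    Returns the invariant mass of the combined system,
    m^2 = (sum E)^2 - |sum p|^2."""
    E  = sum(p[0] for p in particles)
    px = sum(p[1] for p in particles)
    py = sum(p[2] for p in particles)
    pz = sum(p[3] for p in particles)
    return math.sqrt(max(E * E - px * px - py * py - pz * pz, 0.0))

# Two back-to-back 45.6 GeV muons (muon masses neglected) reconstruct
# to ~91.2 GeV, consistent with a Z boson decay:
mu1 = (45.6,  45.6, 0.0, 0.0)
mu2 = (45.6, -45.6, 0.0, 0.0)
print(invariant_mass([mu1, mu2]))  # ≈ 91.2
```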
Cosmic Ray Studies
The main QuarkNet student investigations supported at the national level are cosmic ray studies. Working with Fermilab technicians and research physicists, QuarkNet staff have developed a classroom cosmic ray muon detector that uses the same technologies as the largest detectors at Fermilab and CERN. To support interschool collaboration, QuarkNet collaborates with the Interactions in Understanding the Universe Project (I2U2) to develop and support the Cosmic Ray e-Lab. An e-Lab is a student-led, teacher-guided investigation using experimental data. Students have an opportunity to organize and conduct authentic research and experience the environment of a scientific collaboration. Participating schools set up a detector somewhere at the school. Student |
https://en.wikipedia.org/wiki/Robert%20R.%20Williams | Robert Runnels Williams (February 16, 1886 – October 2, 1965) was an American chemist, known for being the first to chemically fully characterize and then synthesize thiamine (vitamin B1). He first isolated thiamine in 1933, and synthesized it in 1935, reporting this in 1936. Williams also provided the modern name "thiamine" from the molecule's sulfur atom, and it being a vitamin (a class ultimately named for the earlier-known amine of thiamine itself).
Among his awards were the Elliott Cresson Medal in 1940 and the Perkin Medal in 1947. He was elected to both the American Philosophical Society and the United States National Academy of Sciences. His brother was Roger J. Williams, another important chemist at the time and discoverer of Vitamin B5.
Life
He was born in Nellore, India to Baptist missionaries. He moved to the United States when he was ten. In the early 1900s, Williams studied at Ottawa University and eventually procured a master's degree at the University of Chicago in 1908. He then spent some time teaching in the Philippines. After returning to the United States, he worked for Bell Telephone Laboratories from 1915, until he retired in 1945.
A resident of Summit, New Jersey, Williams died there at the age of 79 on October 2, 1965.
Work
1933-4 - developed a way of isolating 1/3 of an ounce of thiamine from a ton of rice polishings.
1935 - Worked out its molecular structure and named it "thiamine" from its sulfur atom and amino group
1935 - Synthesized thiamine (vitamin B1), reporting the work in 1936. |
https://en.wikipedia.org/wiki/Solder%20mask | Solder mask, solder stop mask or solder resist is a thin lacquer-like layer of polymer that is usually applied to the copper traces of a printed circuit board (PCB) for protection against oxidation and to prevent solder bridges from forming between closely spaced solder pads. A solder bridge is an unintended electrical connection between two conductors by means of a small blob of solder. PCBs use solder masks to prevent this from happening. Solder mask is not always used for hand soldered assemblies, but is essential for mass-produced boards that are soldered automatically using reflow or wave soldering techniques. Once applied, openings must be made in the solder mask wherever components are soldered, which is accomplished using photolithography. Solder mask is traditionally green, but is also available in many other colors.
Solder mask comes in different media depending upon the demands of the application. The lowest-cost solder mask is epoxy liquid that is silkscreened through the pattern onto the PCB. Other types are the liquid photoimageable solder mask (LPSM or LPI) inks and dry-film photoimageable solder mask (DFSM). LPSM can be silkscreened or sprayed on the PCB, exposed to the pattern and developed to provide openings in the pattern for parts to be soldered to the copper pads. DFSM is vacuum-laminated on the PCB then exposed and developed. All three processes typically go through a thermal cure of some type after the pattern is defined although LPI solder masks are also available in ultraviolet (UV) cure.
The solder stop layer on a flexible board is also called coverlay or coverfilm.
In electronic design automation, the solder mask is treated as part of the layer stack of the printed circuit board, and is described in individual Gerber files for the top and bottom side of the PCB like any other layer (such as the copper and silk-screen layers). Typical names for these layers include tStop/bStop aka STC/STS or TSM/BSM (EAGLE), F.Mask/B.Mask (KiCad), StopT |
https://en.wikipedia.org/wiki/Rabin%20fingerprint | The Rabin fingerprinting scheme is a method for implementing fingerprints using polynomials over a finite field. It was proposed by Michael O. Rabin.
Scheme
Given an n-bit message m_0, ..., m_{n−1}, we view it as a polynomial f(x) of degree n − 1 over the finite field GF(2):
f(x) = m_0 + m_1 x + ... + m_{n−1} x^{n−1}.
We then pick a random irreducible polynomial p(x) of degree k over GF(2), and we define the fingerprint of the message m to be the remainder r(x) after division of f(x) by p(x) over GF(2), which can be viewed as a polynomial of degree k − 1 or as a k-bit number.
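Because a polynomial over GF(2) packs naturally into the bits of an integer (bit i holding the coefficient of x^i), the fingerprint can be computed with plain XOR arithmetic. A didactic sketch; a production implementation would reduce incrementally rather than materialize the whole message polynomial:

```python
def poly_mod(f, p):
    """Remainder of f modulo p, both polynomials over GF(2) encoded
    as Python ints with bit i = coefficient of x^i."""
    dp = p.bit_length() - 1              # degree of p
    while f.bit_length() - 1 >= dp:
        shift = (f.bit_length() - 1) - dp
        f ^= p << shift                  # subtraction over GF(2) is XOR
    return f

def rabin_fingerprint(bits, p):
    """Fingerprint of bits m0..m_{n-1}, read as
    f(x) = m0 + m1*x + ... + m_{n-1}*x^(n-1), reduced mod p."""
    f = 0
    for i, bit in enumerate(bits):
        f |= bit << i
    return poly_mod(f, p)

# x^3 + x + 1 (0b1011) is irreducible over GF(2), so k = 3 and every
# fingerprint fits in 3 bits:
print(rabin_fingerprint([0, 0, 0, 1], 0b1011))  # x^3 mod p = x + 1 → 3
print(rabin_fingerprint([1, 1, 0, 1], 0b1011))  # f equals p itself → 0
```

Choosing p both irreducible and random is what makes collisions between distinct short messages provably unlikely.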
Applications
Many implementations of the Rabin–Karp algorithm internally use Rabin fingerprints.
The Low Bandwidth Network Filesystem (LBFS) from MIT uses Rabin fingerprints to implement variable size shift-resistant blocks.
The basic idea is that the filesystem computes the cryptographic hash of each block in a file. To save on transfers between the client and server,
they compare their checksums and only transfer blocks whose checksums differ. But one problem with this scheme is that a single insertion at the beginning of the file will cause every checksum to change if fixed-sized (e.g. 4 KB) blocks are used. So the idea is to select blocks not based on a specific offset but rather by some property of the block contents. LBFS does this by sliding a 48 byte window over the file and computing the Rabin fingerprint of each window. When the low 13 bits of the fingerprint are zero LBFS calls those 48 bytes a breakpoint and ends the current block and begins a new one. Since the output of Rabin fingerprints are pseudo-random the probability of any given 48 bytes being a breakpoint is (1 in 8192). This has the effect of shift-resistant variable size blocks. Any hash function could be used to divide a long file into blocks (as long as a cryptographic hash function is then used to find the checksum of each block): but the Rabin fingerprint is an efficient rolling hash, since the computation of the Rabin fingerprint of region B can reuse some of the computation of the |
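The breakpoint selection just described can be sketched as follows. This is a simplified model: a plain polynomial rolling hash stands in for a true Rabin fingerprint (the breakpoint logic is the same), and the demo passes a smaller mask than LBFS's 13 bits so that breakpoints show up in a short input:

```python
import random

def chunk_boundaries(data, window=48, mask=(1 << 13) - 1):
    """Content-defined chunking in the style of LBFS: slide a
    `window`-byte window over `data`; wherever the low bits of the
    window's hash (selected by `mask`) are all zero, end the block."""
    BASE = 257
    MOD = (1 << 61) - 1
    lead = pow(BASE, window - 1, MOD)    # weight of the byte leaving the window
    boundaries, h = [], 0
    for i, byte in enumerate(data):
        if i >= window:                  # drop the byte that slid out
            h = (h - data[i - window] * lead) % MOD
        h = (h * BASE + byte) % MOD      # bring in the new byte
        if i >= window - 1 and (h & mask) == 0:
            boundaries.append(i + 1)     # block ends just after this byte
    return boundaries

random.seed(0)
data = bytes(random.randrange(256) for _ in range(20000))
b1 = chunk_boundaries(data, mask=(1 << 6) - 1)            # small mask for demo
b2 = chunk_boundaries(b"\x01" + data, mask=(1 << 6) - 1)
# Prepending one byte shifts every old breakpoint by one instead of
# invalidating them all, unlike fixed-size blocks:
print(set(x + 1 for x in b1) <= set(b2))  # → True
```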
https://en.wikipedia.org/wiki/Thermoelectric%20generator | A thermoelectric generator (TEG), also called a Seebeck generator, is a solid state device that converts heat flux (temperature differences) directly into electrical energy through a phenomenon called the Seebeck effect (a form of thermoelectric effect). Thermoelectric generators function like heat engines, but are less bulky and have no moving parts. However, TEGs are typically more expensive and less efficient.
Thermoelectric generators could be used in power plants and factories to convert waste heat into additional electrical power and in automobiles as automotive thermoelectric generators (ATGs) to increase fuel efficiency. Radioisotope thermoelectric generators use radioisotopes to generate the required temperature difference to power space probes. Thermoelectric generators can also be used alongside solar panels.
History
In 1821, Thomas Johann Seebeck discovered that a thermal gradient formed between two dissimilar electrical conductors can produce electricity. At the heart of the thermoelectric effect is the fact that a temperature gradient in a conducting material results in heat flow; this results in the diffusion of charge carriers. The flow of charge carriers between the hot and cold regions in turn creates a voltage difference. In 1834, Jean Charles Athanase Peltier discovered the reverse effect, that running an electric current through the junction of two dissimilar conductors could, depending on the direction of the current, cause it to act as a heater or cooler.
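As a back-of-envelope illustration of the Seebeck effect: the open-circuit voltage of a thermoelectric couple is V = S·ΔT, and a resistive load then draws power from it like any source with internal resistance. The coefficient and resistances below are assumed round numbers for illustration, not values from the article:

```python
def teg_output(seebeck, delta_t, r_internal, r_load):
    """Open-circuit voltage V = S * dT of a thermoelectric couple,
    and the power delivered into a resistive load."""
    v_oc = seebeck * delta_t                 # volts
    current = v_oc / (r_internal + r_load)   # amperes
    return v_oc, current ** 2 * r_load       # (V, W)

# A hypothetical Bi2Te3 couple: S = 400 µV/K, dT = 100 K, 1 Ω internal.
v_oc, p_load = teg_output(400e-6, 100.0, 1.0, 1.0)
print(v_oc)    # ≈ 0.04 V open-circuit
print(p_load)  # ≈ 0.0004 W, maximal when the load matches r_internal
```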
Efficiency
The typical efficiency of TEGs is around 5–8%, although it can be higher. Older devices used bimetallic junctions and were bulky. More recent devices use highly doped semiconductors made from bismuth telluride (Bi2Te3), lead telluride (PbTe), calcium manganese oxide (Ca2Mn3O8), or combinations thereof, depending on application temperature. These are solid-state devices and unlike dynamos have no moving parts, with the occasional exception of a fan or |
https://en.wikipedia.org/wiki/Rhind%20Mathematical%20Papyrus | The Rhind Mathematical Papyrus (RMP; also designated as papyrus British Museum 10057 and pBM 10058) is one of the best known examples of ancient Egyptian mathematics. It is named after Alexander Henry Rhind, a Scottish antiquarian, who purchased the papyrus in 1858 in Luxor, Egypt; it was apparently found during illegal excavations in or near the Ramesseum. It dates to around 1550 BC. The British Museum, where the majority of the papyrus is now kept, acquired it in 1865 along with the Egyptian Mathematical Leather Roll, also owned by Henry Rhind. There are a few small fragments held by the Brooklyn Museum in New York City and a central section is missing. It is one of the two well-known Mathematical Papyri along with the Moscow Mathematical Papyrus. The Rhind Papyrus is larger than the Moscow Mathematical Papyrus, while the latter is older.
The Rhind Mathematical Papyrus dates to the Second Intermediate Period of Egypt. It was copied by the scribe Ahmes (i.e., Ahmose; Ahmes is an older transcription favoured by historians of mathematics) from a now-lost text from the reign of king Amenemhat III (12th dynasty). Written in the hieratic script, this Egyptian manuscript consists of multiple parts. The papyrus began to be transliterated and mathematically translated in the late 19th century. The mathematical translation aspect remains incomplete in several respects. The document is dated to Year 33 of the Hyksos king Apophis and also contains a separate later historical note on its verso likely dating from the period ("Year 11") of his successor, Khamudi.
In the opening paragraphs of the papyrus, Ahmes presents the papyrus as giving "Accurate reckoning for inquiring into things, and the knowledge of all things, mysteries ... all secrets". He continues with:
This book was copied in regnal year 33, month 4 of Akhet, under the majesty of the King of Upper and Lower Egypt, Awserre, given life, from an ancient copy made |
https://en.wikipedia.org/wiki/Suillus%20grevillei | Suillus grevillei (commonly known as Greville's bolete and larch bolete) is a mycorrhizal mushroom with a tight, brilliantly coloured cap, shiny and wet looking with its mucous slime layer. The hymenium easily separates from the flesh of the cap, with a central stalk that is quite slender. The species has a ring or a tight-fitting annular zone.
Description
Suillus grevillei is a mushroom with a 5–10 cm (2–4 in) cap colored from citrus yellow to burnt orange, that is at first hemispherical, then bell-shaped, and finally flattened. It has a sticky skin, often with veil remnants on the edge, short tubes of yellow (possibly staining brownish) which descend down to the bottom of its cylindrical stalk (6–10 x 1–2 cm), which is yellowish above the ring area with streaks of reddish brown below. The flesh is yellow, staining brown.
The thin flesh is firm at first but quickly becomes soft. It has an odor reminiscent of crushed Pelargonium (geranium) leaves.
It grows in the soil of mixed forests, not always at the foot of larch (can be quite some distance away) with which it lives in symbiosis. It grows from June until November.
Suillus grevillei is an edible mushroom (though with little texture or flavor) if the slimy cuticle is removed from the cap. This mucous skin layer is known to cause intestinal issues, as is the case with several other Suillus species such as Slippery Jack (S. luteus) or Slippery Jill (S. salmonicolor); the species is often considered not worth the effort.
Its name is derived from Robert Kaye Greville.
Habitat and distribution
Grows only under larch trees. Widespread in North America and Europe. In Asia, it has been recorded from Taiwan.
Chemistry
The fungus produces grevillin which is characteristic of this fungus. The genetic and enzymatic basis for atromentin, the precursor to various pulvinic acid-type pigments, has been characterized (an atromentin synthetase by the name, GreA). A cosmid library (31 249 bp in total) has been made from the genome. The estim |
https://en.wikipedia.org/wiki/Gyula%20O.%20H.%20Katona | Gyula O. H. Katona (born 16 March 1941 in Budapest) is a Hungarian mathematician known for his work in combinatorial set theory, and especially for the Kruskal–Katona theorem and his elegant proof of the Erdős–Ko–Rado theorem, in which he discovered a new method, now called Katona's cycle method. Since then, this method has become a powerful tool in proving many interesting results in extremal set theory. He is affiliated with the Alfréd Rényi Institute of Mathematics of the Hungarian Academy of Sciences.
Katona was secretary-general of the János Bolyai Mathematical Society from 1990 to 1996. In 1966 and 1968 he won the Grünwald Prize, awarded by the Bolyai Society to outstanding young mathematicians, he was awarded the Alfréd Rényi Prize of the Hungarian Academy of Sciences in 1975, and the same academy awarded him the Prize of the Academy in 1989. In 2011 the Alfréd Rényi Institute, the János Bolyai Society and the Hungarian Academy of Sciences organized a conference in honor of Katona's 70th birthday.
Gyula O.H. Katona is the father of Gyula Y. Katona, another Hungarian mathematician with similar research interests to those of his father. |
https://en.wikipedia.org/wiki/Aronszajn%20line | In mathematical set theory, an Aronszajn line (named after Nachman Aronszajn) is a linear ordering of cardinality ℵ₁ which contains no subset order-isomorphic to
ω₁ with the usual ordering,
the reverse of ω₁, or
an uncountable subset of the real numbers with the usual ordering.
Unlike Suslin lines, the existence of Aronszajn lines is provable using the standard axioms of set theory. A linear ordering is an Aronszajn line if and only if it is the lexicographical ordering of some Aronszajn tree. |
https://en.wikipedia.org/wiki/List%20of%20impossible%20puzzles | This is a list of puzzles that cannot be solved. An impossible puzzle is a puzzle that cannot be resolved, either due to lack of sufficient information, or any number of logical impossibilities.
15 puzzle – Slide fifteen numbered tiles into numerical order. Impossible for half of the starting positions.
Five room puzzle – Cross each wall of a diagram exactly once with a continuous line.
MU puzzle – Transform the string MI into the string MU according to a set of rules.
Mutilated chessboard problem – Place 31 dominoes of size 2×1 on a chessboard with two opposite corners removed.
Coloring the edges of the Petersen graph with three colors.
Seven Bridges of Königsberg – Walk through a city while crossing each of seven bridges exactly once.
Three cups problem – Turn three cups right-side up after starting with one wrong and turning two at a time.
Three utilities problem – Connect three cottages to gas, water, and electricity without crossing lines.
Thirty-six officers problem – Arrange six regiments consisting of six officers each of different ranks in a 6 × 6 square so that no rank or regiment is repeated in any row or column.
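The "half of the starting positions" fact for the 15 puzzle follows from a parity invariant preserved by every slide. A sketch of the standard solvability test for the 4×4 board:

```python
def solvable_15(tiles):
    """tiles: 16 numbers in row-major order, 0 = blank. For a 4x4
    board the position is solvable iff (#inversions + blank's row
    counted from the bottom) is odd - a quantity every move preserves."""
    flat = [t for t in tiles if t != 0]
    inversions = sum(
        1
        for i in range(len(flat))
        for j in range(i + 1, len(flat))
        if flat[i] > flat[j]
    )
    blank_row_from_bottom = 4 - tiles.index(0) // 4
    return (inversions + blank_row_from_bottom) % 2 == 1

solved = list(range(1, 16)) + [0]
swapped = solved[:]                       # Sam Loyd's famous 14-15 position
swapped[12], swapped[13] = swapped[13], swapped[12]
print(solvable_15(solved))   # → True
print(solvable_15(swapped))  # → False
```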
See also
Impossible Puzzle, or "Sum and Product Puzzle", which is not impossible
-gry, a word puzzle
List of undecidable problems, no algorithm can exist to answer a yes–no question about the input
https://en.wikipedia.org/wiki/Field%20of%20definition | In mathematics, the field of definition of an algebraic variety V is essentially the smallest field to which the coefficients of the polynomials defining V can belong. Given polynomials, with coefficients in a field K, it may not be obvious whether there is a smaller field k, and other polynomials defined over k, which still define V.
The issue of field of definition is of concern in diophantine geometry.
Notation
Throughout this article, k denotes a field. The algebraic closure of a field is denoted by adding a superscript of "alg", e.g. the algebraic closure of k is kalg. The symbols Q, R, C, and Fp represent, respectively, the field of rational numbers, the field of real numbers, the field of complex numbers, and the finite field containing p elements. Affine n-space over a field F is denoted by An(F).
Definitions for affine and projective varieties
Results and definitions stated below, for affine varieties, can be translated to projective varieties, by replacing An(kalg) with projective space of dimension n − 1 over kalg, and by insisting that all polynomials be homogeneous.
A k-algebraic set is the zero-locus in An(kalg) of a subset of the polynomial ring k[x1, ..., xn]. A k-variety is a k-algebraic set that is irreducible, i.e. is not the union of two strictly smaller k-algebraic sets. A k-morphism is a regular function between k-algebraic sets whose defining polynomials' coefficients belong to k.
One reason for considering the zero-locus in An(kalg) and not An(k) is that, for two distinct k-algebraic sets X1 and X2, the intersections X1∩An(k) and X2∩An(k) can be identical; in fact, the zero-locus in An(k) of any subset of k[x1, ..., xn] is the zero-locus of a single element of k[x1, ..., xn] if k is not algebraically closed.
A k-variety is called a variety if it is absolutely irreducible, i.e. is not the union of two strictly smaller kalg-algebraic sets. A variety V is defined over k if every polynomial in kalg[x1, ..., xn] that vanishes on V is |
https://en.wikipedia.org/wiki/Gyula%20Y.%20Katona | Gyula Y. Katona (born 4 December 1965) is a Hungarian mathematician, the son of mathematician Gyula O. H. Katona. He received his Ph.D. in 1997 from Hungarian Academy of Sciences, with a dissertation titled Paths and Cycles in Graphs and Hypergraphs under the advisement of László Lovász and András Recski, and is on the faculty of the Budapest University of Technology and Economics.
Katona is the coauthor of three textbooks, Introduction to Computer Science (Typotex, Budapest, 2002), Introduction to Finite Mathematics, (Eötvös L. University, Budapest, 1993), and Combinatorics, Graph Theory and Algorithms (Technical University of Budapest, 1993). In addition his research publications include several works on Hamiltonian cycles and related properties of graphs.
External links
Katona's web site
Katona at the Mathematics Genealogy Project
https://en.wikipedia.org/wiki/Champa%20rice | Champa rice is a quick-maturing, drought resistant rice that can allow two harvests of sixty days each per growing season. Champa rice is from the aus sub-population, which shares similarities with both the japonica and the indica rice varieties. Likely originating from Eastern India, champa rice was introduced into the Champa Kingdom from the Vietnamese Empire in the late 10th century. Champa rice was then sent to Song China in the 11th century as a tribute gift from Champa during the reign of Emperor Zhenzong of Song (r. 997–1022). Song dynasty officials gave the quick-growing champa rice to peasants across China in order to boost their crop yields, and its rapid growth time was crucial in feeding the burgeoning Chinese population of over 100 million.
See also
List of rice varieties |
https://en.wikipedia.org/wiki/Gullstrand%E2%80%93Painlev%C3%A9%20coordinates | Gullstrand–Painlevé coordinates are a particular set of coordinates for the Schwarzschild metric – a solution to the Einstein field equations which describes a black hole. The ingoing coordinates are such that the time coordinate follows the proper time of a free-falling observer who starts from far away at zero velocity, and the spatial slices are flat. There is no coordinate singularity at the Schwarzschild radius (event horizon). The outgoing ones are simply the time reverse of ingoing coordinates (the time is the proper time along outgoing particles that reach infinity with zero velocity).
The solution was proposed independently by Paul Painlevé in 1921 and Allvar Gullstrand in 1922. It was not shown explicitly until Lemaître's 1933 paper that these solutions were simply coordinate transformations of the usual Schwarzschild solution, although Einstein immediately believed that to be true.
Derivation
The derivation of GP coordinates requires defining the following coordinate systems and understanding how data measured for events in one coordinate system is interpreted in another coordinate system.
Convention: All variables are expressed in geometrized units: time and mass are measured in meters, the speed of light in flat spacetime has the value 1, and the gravitational constant has the value 1.
The metric is expressed in the +−−− sign convention.
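For reference in the derivation that follows, the ingoing GP line element can be written in the stated +−−− convention and geometrized units as below. This is a standard form rather than one quoted from this excerpt; here T denotes the rain-frame time coordinate and M the black hole's mass.

```latex
ds^2 = \left(1 - \frac{2M}{r}\right) dT^2
     - 2\sqrt{\frac{2M}{r}}\, dT\, dr
     - dr^2 - r^2\left(d\theta^2 + \sin^2\theta\, d\varphi^2\right)
```

Setting dT = 0 leaves the flat Euclidean metric on each spatial slice, and no term diverges at r = 2M, matching the two properties described above.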
Schwarzschild coordinates
A Schwarzschild observer is a far observer or a bookkeeper. He does not directly make measurements of events that occur in different places. Instead, he is far away from the black hole and the events. Observers local to the events are enlisted to make measurements and send the results to him. The bookkeeper gathers and combines the reports from various places. The numbers in the reports are translated into data in Schwarzschild coordinates, which provide a systematic means of evaluating and describing the events globally. Thus, the physicist can compare and interpret the data inte |
https://en.wikipedia.org/wiki/Hat%20notation | A "hat" (circumflex (ˆ)), placed over a symbol is a mathematical notation with various uses.
Estimated value
In statistics, a circumflex (ˆ), called a "hat", is used to denote an estimator or an estimated value. For example, in the context of errors and residuals, the "hat" over a letter indicates an observable estimate (the residuals) of an unobservable quantity (the statistical errors).
Another example of the hat operator denoting an estimator occurs in simple linear regression. Assuming a model y_i = β0 + β1 x_i + ε_i, with observations x_i of the independent variable and y_i of the dependent variable, the estimated model is of the form ŷ_i = β̂0 + β̂1 x_i, where the sum of squared residuals Σ (y_i − ŷ_i)² is commonly minimized via least squares by finding optimal values of the estimates β̂0 and β̂1 for the observed data.
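The least-squares estimation just described can be sketched in a few lines. This is an illustrative example; the function and variable names are assumptions, not taken from the article.

```python
# Minimal sketch of least-squares estimation for the simple linear model
# y = b0 + b1*x + error; the "hatted" values b0_hat, b1_hat are the estimates.

def fit_simple_ols(xs, ys):
    """Return (b0_hat, b1_hat) minimizing the sum of squared residuals."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # Closed-form solution: slope = S_xy / S_xx, intercept from the means.
    sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sxx = sum((x - mean_x) ** 2 for x in xs)
    b1_hat = sxy / sxx
    b0_hat = mean_y - b1_hat * mean_x
    return b0_hat, b1_hat

# Noise-free data on the line y = 2 + 3x recovers the coefficients exactly.
b0_hat, b1_hat = fit_simple_ols([0, 1, 2, 3], [2, 5, 8, 11])
```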
Hat matrix
In statistics, the hat matrix H projects the observed values y of the response variable to the predicted values ŷ: ŷ = Hy, where H = X(XᵀX)⁻¹Xᵀ and X is the design matrix.
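A small numerical sketch of the hat-matrix projection, using the standard formula H = X(XᵀX)⁻¹Xᵀ (an assumption of this example, not a formula quoted from this excerpt):

```python
# Sketch: the hat matrix H maps observed responses y to fitted values y_hat.
import numpy as np

X = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [1.0, 2.0],
              [1.0, 3.0]])           # design matrix: intercept column plus x
y = np.array([2.0, 5.0, 8.0, 11.0])  # lies exactly on y = 2 + 3x

H = X @ np.linalg.inv(X.T @ X) @ X.T
y_hat = H @ y

# H is a projection matrix: applying it twice changes nothing (H @ H == H),
# and since y lies in the column space of X here, y_hat equals y.
```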
Cross product
In screw theory, one use of the hat operator is to represent the cross product operation. Since the cross product is a linear transformation, it can be represented as a matrix. The hat operator takes a vector and transforms it into its equivalent matrix.
For example, in three dimensions, the vector a = (a1, a2, a3) maps to the skew-symmetric matrix â = [[0, −a3, a2], [a3, 0, −a1], [−a2, a1, 0]], so that âb = a × b for every vector b.
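The three-dimensional hat map can be sketched numerically as follows. This is illustrative code; the function name `hat` is an assumption, not notation from the article.

```python
# Sketch of the "hat" map sending a 3-vector a to the skew-symmetric
# matrix a_hat, such that a_hat @ b equals the cross product a x b.
import numpy as np

def hat(a):
    """Return the 3x3 skew-symmetric matrix representing cross product by a."""
    return np.array([[0.0,  -a[2],  a[1]],
                     [a[2],  0.0,  -a[0]],
                     [-a[1], a[0],  0.0]])

a = np.array([1.0, 2.0, 3.0])
b = np.array([4.0, 5.0, 6.0])
result = hat(a) @ b  # same as np.cross(a, b)
```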
Unit vector
In mathematics, a unit vector in a normed vector space is a vector (often a spatial vector) of length 1. A unit vector is often denoted by a lowercase letter with a circumflex, or "hat", as in v̂ (pronounced "v-hat").
Fourier transform
The Fourier transform of a function f is traditionally denoted by f̂.
See also
Exterior algebra
Top-hat filter
Circumflex, noting that precomposed glyphs [letter-with-circumflex] do not exist for all letters. |