| source | text |
|---|---|
https://en.wikipedia.org/wiki/NMR%20tube | An NMR tube is a thin-walled glass tube used to contain samples in nuclear magnetic resonance spectroscopy. NMR tubes typically come in 5 mm diameters, though 10 mm and 3 mm sizes are also used. It is important that the tubes are uniformly thick and well-balanced so that the NMR tube spins at a regular rate (i.e., does not wobble), usually about 20 Hz, in the NMR spectrometer.
Construction
NMR tubes are typically made of borosilicate glass. They are available in seven- and eight-inch lengths; a 5 mm tube outer diameter is most common, but 3 mm and 10 mm outer diameters are available as well. Where boron NMR is desired, quartz NMR tubes, which contain far lower concentrations of boron than borosilicate glass, are available. Specialized closures such as J. Young valves and screw-cap closures are available aside from the more common polyethylene caps.
Two common specifications for NMR tubes are concentricity and camber. Concentricity refers to the variation in the radial centers, measured at the inner and outer walls. Camber refers to the "straightness" of the tube. Poor values for either may cause poorer quality spectra by reducing the homogeneity of the sample. In particular, an NMR tube which has poor camber may wobble when rotated, giving rise to spinning side bands. With modern manufacturing techniques even cheap tubes give good spectra for routine applications.
Sample preparation
Usually, a small amount of the sample is dissolved in an appropriate solvent. For 1H NMR experiments, this will usually be a deuterated solvent such as CDCl3. Sufficient solvent should be used to fill the tube to a depth of 4–5 cm (depending on the spectrometer). Protein NMR is usually performed in a 90% H2O (or buffer)/10% D2O mixture.
The sample may be sonicated or agitated to aid dissolution, and solids are removed by filtration through a plug of celite layered on a cotton-wool plug in a Pasteur pipette, directly into the NMR tube.
The NMR tube is then usually sealed with a polyethylene cap, but |
https://en.wikipedia.org/wiki/Richard%20D.%20Gill | Richard David Gill (born 1951) is a British-Dutch mathematician. He has held academic positions in the Netherlands. As a probability theorist and statistician, Gill has researched counting processes. He is also known for his consulting and advocacy on behalf of alleged victims of statistical misrepresentation, including the reversal of the murder conviction of a Dutch nurse who had been jailed for six years.
Education
Gill studied mathematics at the University of Cambridge (1970–1973), and subsequently followed the Diploma of Statistics course there (1973–1974). He obtained a Ph.D. in mathematics in 1979 with the thesis Censoring and Stochastic Integrals, supervised by Jacobus Oosterhoff at the Vrije Universiteit, which awarded the doctorate.
Gill has said that he was "not much of an activist" as a student, but now feels guilty about not speaking up more at the time about perceived injustices, saying that this is partly because of an incident when working as a statistician in the 1970s when he helped on an experiment that severed the front legs of rats to investigate whether it would lead to the reshaping of their skulls. Gill said that this incident has stayed with him, as "what upset me most is that I didn’t have the strength of character to refuse to do that job".
Career
In 1974 Gill was appointed at the Mathematical Centre (later renamed Centrum Wiskunde & Informatica, or CWI) of Amsterdam. After receiving his Ph.D., he continued to collaborate with Danish and Norwegian statisticians for ten years, co-authoring Statistical models based on counting processes, by Andersen, Borgan, Gill, and Keiding.
Gill became head of the Department of Mathematical Statistics at CWI in 1983. In 1988, Gill moved to the Department of Mathematics of Utrecht University, where he held the chair in mathematical stochastics. His PhD students include Sara van de Geer, and Mark van der Laan (co-advised by Peter Bickel). In 2006, Gill moved to the Department of Mathematics |
https://en.wikipedia.org/wiki/Novell%20BorderManager | BorderManager is a multi-purpose network security application developed by Novell, Inc. BorderManager is designed as a proxy server, firewall, and VPN access point. Novell has announced SuperLumin 4.0 Proxy Cache as "Novell's preferred firewall and proxy solution for NetWare customers upgrading to Novell Open Enterprise Server on Linux."
History
BorderManager was designed to run on top of the NetWare kernel and takes advantage of the fast file services that the NetWare kernel delivers. Aside from the more easily copied firewall and VPN access point services, Novell designed the proxy services to retrieve web data with a server-to-server connection rather than the client-to-server connection that all of the prior proxy servers on the market had used. This retrieval method, along with NetWare's fast file I/O and other proprietary code, made BorderManager's proxy engine one of the fastest in existence.
In 2003, Novell announced the successor product to NetWare: Open Enterprise Server (OES). First released in March 2005, OES completes the separation of the services traditionally associated with NetWare, i.e. file and print, from the underlying kernel. This makes it possible for the customer to choose whether those services run on a NetWare or a Linux kernel.
At this time Novell all but announced the end of development for the NetWare kernel (through numerous public and private statements that there is no 64-bit future for NetWare and that Linux is the path to 64-bit computing for OES). To follow through on this migration path, Novell began porting all applications to Linux. The company began looking at alternate ways to deliver these same services, as firewall and VPN access point services of equivalent functionality are readily available in the free/open-source community, and basic proxy services exist there as well (e.g. Squid). The desire to deliver a functional equivalent could not be met by a full software code port, as much of the cache engine was sold as part of the Volera Excelerator |
https://en.wikipedia.org/wiki/Acoustical%20measurements%20and%20instrumentation | Analysis of sound and acoustics plays a role in such engineering tasks as product design, production test, machine performance, and process control. For instance, product design can require modification of sound level or noise for compliance with standards from ANSI, IEC, and ISO. The work might also involve design fine-tuning to meet market expectations. Here, examples include tweaking an automobile door latching mechanism to impress a consumer with a satisfying click or modifying an exhaust manifold to change the tone of an engine's rumble. Aircraft designers are also using acoustic instrumentation to reduce the noise generated on takeoff and landing.
Acoustical measurements and instrumentation range from a handheld sound level meter to a 1000-microphone phased array.
Components
Most acoustical measurement and instrumentation systems can be broken down into three components: sensors, data acquisition, and analysis.
Sensors
The most common sensor used for acoustic measurement is the microphone. Measurement-grade microphones differ from typical recording-studio microphones in that they come with a detailed calibration of their response and sensitivity. Other sensors include hydrophones for measuring sound in water, particle velocity probes for localizing acoustic leakage, sound intensity probes for quantifying and ranking acoustic emission, and accelerometers for measuring the vibrations that cause sound. The three main groups of microphones are pressure, free-field, and random-incidence, each with its own correction factors for different applications. Well-known acoustic sensor suppliers include PCB Piezotronics, Brüel & Kjær, GRAS and Audio Precision.
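A microphone's calibrated sensitivity is what turns a raw voltage into a sound pressure level. As a rough illustration (the values and function names are invented, not from the article), a sketch assuming a hypothetical sensitivity of 50 mV/Pa:

// Convert a measured RMS voltage to sound pressure level (dB SPL).
// The sensitivity figure comes from the microphone's calibration sheet;
// 50 mV/Pa is merely an assumed, plausible value.
const SENSITIVITY = 0.050; // volts per pascal (assumed)
const P_REF = 20e-6;       // reference pressure: 20 micropascals

function splFromVrms(vrms) {
  const pascals = vrms / SENSITIVITY;      // pressure in Pa
  return 20 * Math.log10(pascals / P_REF); // level in dB SPL
}

console.log(splFromVrms(0.050).toFixed(1)); // 1 Pa -> "94.0" (dB SPL)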
Data acquisition
Data acquisition hardware for acoustic measurements typically utilizes 24-bit analog-to-digital converters (ADCs), anti-aliasing filters, and other signal conditioning. This signal conditioning may include amplification, filtering, sensor excitation, and input configuration. An |
https://en.wikipedia.org/wiki/Math%20wars | Math wars is the debate over modern mathematics education, textbooks and curricula in the United States that was triggered by the publication in 1989 of the Curriculum and Evaluation Standards for School Mathematics by the National Council of Teachers of Mathematics (NCTM) and subsequent development and widespread adoption of a new generation of mathematics curricula inspired by these standards.
While the discussion about math skills has persisted for many decades, the term "math wars" was coined by commentators such as John A. Van de Walle and David Klein. The debate is over traditional mathematics and reform mathematics philosophy and curricula, which differ significantly in approach and content.
Advocates of reform
The largest supporter of reform in the US has been the National Council of Teachers of Mathematics.
One aspect of the debate is over how explicitly children must be taught skills based on formulas or algorithms (fixed, step-by-step procedures for solving math problems) versus a more inquiry-based approach in which students are exposed to real-world problems that help them develop fluency in number sense, reasoning, and problem-solving skills. In this latter approach, conceptual understanding is a primary goal and algorithmic fluency is expected to follow secondarily. Some parents and other stakeholders blame educators, saying that failures occur not because the method is at fault, but because these educational methods require a great deal of expertise and have not always been implemented well in actual classrooms.
A backlash against what advocates call "poorly understood reform efforts" and critics call "a complete abandonment of instruction in basic mathematics" resulted in "math wars" between reform and traditional methods of mathematics education.
Critics of reform
Those who disagree with the inquiry-based philosophy maintain that students must first develop computational skills before they can understand concepts of mathematics. These skills shoul |
https://en.wikipedia.org/wiki/Contact%20resistance | The term contact resistance refers to the contribution to the total resistance of a system that can be attributed to the contacting interfaces of electrical leads and connections, as opposed to the intrinsic resistance. This effect is described by the term electrical contact resistance (ECR) and arises from the limited areas of true contact at an interface and from the presence of resistive surface films or oxide layers. ECR may vary with time, most often decreasing, in a process known as resistance creep. The idea of a potential drop on the injection electrode was introduced by William Shockley to explain the difference between experimental results and the model of the gradual channel approximation. Besides ECR, the terms interface resistance, transitional resistance, or simply correction term are also used. The term parasitic resistance is more general; contact resistance is usually assumed to be a major component of it.
Experimental characterization
Here one needs to distinguish contact resistance evaluation in two-electrode systems (e.g. diodes) from that in three-electrode systems (e.g. transistors).
For two-electrode systems, the specific contact resistivity is experimentally defined as the slope of the I–V curve at $V = 0$:
$$r_c = \left( \frac{\partial J}{\partial V} \right)^{-1} \Bigg|_{V=0}$$
where $J$ is the current density, or current per area. The units of specific contact resistivity are therefore typically ohm-square metres, or $\Omega \cdot \mathrm{m}^2$. When the current is a linear function of the voltage, the device is said to have ohmic contacts.
The resistance of contacts can be crudely estimated by comparing the results of a four terminal measurement to a simple two-lead measurement made with an ohmmeter. In a two-lead experiment, the measurement current causes a potential drop across both the test leads and the contacts so that the resistance of these elements is inseparable from the resistance of the actual device, with which they are in series. In a four-point probe measurement, one pair of leads is used to inj |
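As a rough numerical illustration of that comparison (the values below are invented, not from any particular experiment), the lead-plus-contact contribution is estimated by subtracting the four-terminal reading from the two-lead reading:

// Estimate contact + lead resistance by comparing two measurement styles.
const twoLeadOhms = 10.47;  // ohmmeter reading: device + leads + contacts
const fourWireOhms = 10.02; // four-terminal (Kelvin) reading: device only

// In the two-lead setup the leads and contacts sit in series with the
// device, so the difference approximates their combined resistance.
const contactAndLeads = twoLeadOhms - fourWireOhms;
console.log(`~${contactAndLeads.toFixed(2)} ohms from leads and contacts`);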
https://en.wikipedia.org/wiki/Streaker%20%28video%20game%29 | Streaker is an action game published by Mastertronic on their "Bulldog" label in 1987. The game was released for the Amstrad CPC, MSX, and ZX Spectrum.
Plot
The hero of the game is Carlin, a naked man who has been mugged and stripped, and needs to find all of his clothes. Carlin is in a town on the planet Zuggi, which resembles a town on Earth in many ways, with locations including a hotel, a cafe, a supermarket and a chemist's shop. As Streaker finds more clothes, he is able to enter more and more locations. The game is won when he is fully dressed.
Gameplay
Gameplay is controlled through a nested menu system, as in Mastertronic's Spellbound. There are a multitude of items in the game, many of which can be used and some of which seem to have no purpose. Streaker can pick up keys to open doors, eat food items to raise his hunger levels, and needs to solve a number of simple puzzles.
While most aspects of life on Zuggi are very similar to Earth, there are a number of fantastic elements in the world. For example, Streaker may teleport ("beam transfer") by using a number of different coloured beamers. These will teleport Streaker to the location of the corresponding beampad. These beampads can be moved by Streaker and are a good way to avoid thieves. Nice touches include Streaker's ability to advance time by acquiring a stopwatch (the game is played in real time, and you often need to wait for a particular location to open) and the "Save" and "Load" options appearing after Streaker picks up a Tape Recorder.
Streaker's quest is made more difficult by the presence of a number of thieves. They will steal Streaker's clothes on contact, or cause him to lose a life if he is naked. The clothes may be regained from the thieves by finding and offering the thief an item that he wants.
Reception |
https://en.wikipedia.org/wiki/Levilactobacillus%20brevis | Levilactobacillus brevis is a gram-positive, rod-shaped species of lactic acid bacteria which is heterofermentative, creating CO2, lactic acid, and acetic acid or ethanol during fermentation. L. brevis is the type species of the genus Levilactobacillus (previously the L. brevis group), which comprises 24 species. It can be found in many different environments, such as fermented foods, and as normal microbiota. L. brevis is found in food such as sauerkraut and pickles. It is also one of the most common causes of beer spoilage. Ingestion has been shown to improve human immune function, and it has been patented several times. As part of the normal gut microbiota, L. brevis is found in the human intestines, vagina, and feces.
L. brevis is one of the major lactobacilli found in tibicos grains, used to make kefir, but Lentilactobacillus species are responsible for the production of the polysaccharide (dextran and kefiran) that forms the grains. Major metabolites of L. brevis include lactic acid and ethanol. Strains of L. brevis and L. hilgardii have been found to produce the biogenic amines tyramine and phenylethylamine.
History
E. B. Fred, W. H. Peterson, and J. A. Anderson first described the species in 1921, and it was categorized based on its ability to metabolize certain carbon compounds such as sugars. This early study showed that the species can produce acetic acid, carbon dioxide, and large amounts of mannitol; mannitol is itself a carbon source that can be used to produce lactic acid.
Growth and metabolism
L. brevis has been shown to actively transport glucose and galactose. When fructose is used as a carbon source there is only some growth, and L. brevis partially metabolizes the fructose to mannitol. Some strains metabolize glucose poorly but prefer disaccharides as a carbon source.
Its fermentation pathway yields lactic acid and acetic acid as end products. It appears that under high-temperature (50°C) and acidic conditions the |
https://en.wikipedia.org/wiki/Total%20viable%20organism | Total viable organism (or TVO) is a term used in microbiology to quantify the amount of microorganisms present in a sample. Each sample is usually cultured on a variety of agar plates (petri dishes) often containing different types of selective media. The colony-forming units (CFUs) are calculated after allowing time for growth.
TVO numbers are used to quantify the CFUs for a given amount of sample and often include dilution factors. For example, a 1 mL sample of water containing 10 CFUs on one plate would have a TVO value of 10 cfu/mL. |
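As a worked illustration of that calculation (the numbers below are hypothetical), the plate count is multiplied by the dilution factor and divided by the volume plated:

// TVO (CFU/mL) = colonies counted x dilution factor / volume plated (mL).
function tvo(colonies, dilutionFactor, volumeMl) {
  return (colonies * dilutionFactor) / volumeMl;
}

console.log(tvo(10, 1, 1));      // undiluted 1 mL sample with 10 CFUs -> 10
console.log(tvo(42, 1000, 0.1)); // 0.1 mL of a 10^-3 dilution -> 420000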
https://en.wikipedia.org/wiki/Siegel%20modular%20form | In mathematics, Siegel modular forms are a major type of automorphic form. These generalize conventional elliptic modular forms which are closely related to elliptic curves. The complex manifolds constructed in the theory of Siegel modular forms are Siegel modular varieties, which are basic models for what a moduli space for abelian varieties (with some extra level structure) should be and are constructed as quotients of the Siegel upper half-space rather than the upper half-plane by discrete groups.
Siegel modular forms are holomorphic functions on the set of symmetric n × n matrices with positive definite imaginary part; the forms must satisfy an automorphy condition. Siegel modular forms can be thought of as multivariable modular forms, i.e. as special functions of several complex variables.
Siegel modular forms were first investigated by Carl Ludwig Siegel for the purpose of studying quadratic forms analytically. They arise primarily in various branches of number theory, such as arithmetic geometry and elliptic cohomology. Siegel modular forms have also been used in some areas of physics, such as conformal field theory and black hole thermodynamics in string theory.
Definition
Preliminaries
Let $g, N \in \mathbb{N}$ and define
$$\mathcal{H}_g = \left\{ \tau \in M_{g \times g}(\mathbb{C}) \;\middle|\; \tau^{\mathrm{T}} = \tau, \; \operatorname{Im}(\tau) > 0 \right\},$$
the Siegel upper half-space. Define the symplectic group of level $N$, denoted by $\Gamma_g(N)$, as
$$\Gamma_g(N) = \left\{ \gamma \in \mathrm{Sp}(2g, \mathbb{Z}) \;\middle|\; \gamma \equiv I_{2g} \pmod{N} \right\},$$
where $I_{2g}$ is the $2g \times 2g$ identity matrix. Finally, let
$$\rho \colon \mathrm{GL}(g, \mathbb{C}) \to \mathrm{GL}(V)$$
be a rational representation, where $V$ is a finite-dimensional complex vector space.
Siegel modular form
Given
$$\gamma = \begin{pmatrix} A & B \\ C & D \end{pmatrix} \in \Gamma_g(N)$$
and $\tau \in \mathcal{H}_g$, define the notation
$$\gamma \cdot \tau = (A\tau + B)(C\tau + D)^{-1}.$$
Then a holomorphic function $f \colon \mathcal{H}_g \to V$ is a Siegel modular form of degree $g$ (sometimes called the genus), weight $\rho$, and level $N$ if
$$f(\gamma \cdot \tau) = \rho(C\tau + D) f(\tau)$$
for all $\gamma \in \Gamma_g(N)$.
In the case that $g = 1$, we further require that $f$ be holomorphic 'at infinity'. This assumption is not necessary for $g > 1$ due to the Koecher principle, explained below. Denote the space of weight $\rho$, degree $g$, and level $N$ Siegel modular forms by $M_\rho(\Gamma_g(N))$.
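For orientation, when $\rho$ is the one-dimensional representation $M \mapsto \det(M)^k$, the condition above reduces to the scalar-weight transformation law (a standard specialization, stated here only for illustration):
$$f\bigl((A\tau + B)(C\tau + D)^{-1}\bigr) = \det(C\tau + D)^{k} \, f(\tau),$$
and for $g = 1$ this is exactly the transformation law of a classical elliptic modular form of weight $k$.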
Examples
Some methods for constructing Siegel modular forms include:
Eisenstein series
Theta functions of lattices (possibly with |
https://en.wikipedia.org/wiki/P16 | p16 (also known as p16INK4a, cyclin-dependent kinase inhibitor 2A, CDKN2A, multiple tumor suppressor 1 and numerous other synonyms), is a protein that slows cell division by slowing the progression of the cell cycle from the G1 phase to the S phase, thereby acting as a tumor suppressor. It is encoded by the CDKN2A gene. A deletion (the omission of a part of the DNA sequence during replication) in this gene can result in insufficient or non-functional p16, accelerating the cell cycle and resulting in many types of cancer.
p16 can be used as a biomarker to improve the histological diagnostic accuracy of grade 3 cervical intraepithelial neoplasia (CIN). p16 is also implicated in the prevention of melanoma, oropharyngeal squamous cell carcinoma, cervical cancer, vulvar cancer and esophageal cancer.
p16 was discovered in 1993. It is a protein with 148 amino acids and a molecular weight of 16 kDa that comprises four ankyrin repeats. The name of p16 is derived from its molecular weight, and the alternative name p16INK4a refers to its role in inhibiting cyclin-dependent kinase CDK4.
Nomenclature
p16 is also known as:
p16INK4A
p16Ink4
Cyclin-dependent kinase inhibitor 2A (CDKN2A)
CDKN2
CDK 4 Inhibitor
Multiple Tumor Suppressor 1 (MTS1)
TP16
ARF
MLM
P14
Gene
In humans, p16 is encoded by the CDKN2A gene, located on chromosome 9 (9p21.3). This gene generates several transcript variants that differ in their first exons. At least three alternatively spliced variants encoding distinct proteins have been reported, two of which encode structurally related isoforms known to function as inhibitors of CDK4. The remaining transcript includes an alternate exon 1 located 20 kb upstream of the remainder of the gene; this transcript contains an alternate open reading frame (ARF) that specifies a protein that is structurally unrelated to the products of the other variants. The ARF product functions as a stabilizer of the tumor suppressor protein p53, as it can interact with a |
https://en.wikipedia.org/wiki/Ronald%20Ernest%20Aitchison | Ronald Ernest Aitchison (29 December 1921 – 9 March 1996) was an Australian physicist and electronics engineer who contributed to a range of fields and technologies from solid-state devices to satellite imaging. He was born in Hurstville, New South Wales, Australia on 29 December 1921.
Career
From 1942 to 1945 Aitchison worked as an engineer with the Amalgamated Wireless Valve Company on the design and production of klystrons and radar magnetrons, which were new devices important to the war effort. He was also involved in work on semiconductor diodes, which were the forerunners of the revolution in electronics brought about by the advent of solid-state semiconductor components. In 1945 he joined the National Acoustic Laboratories where he worked on the design and construction of hearing aids for children.
Aitchison was appointed as senior lecturer in Communications Engineering at the University of Sydney, which was the start of his 25-year teaching experience at that institution, culminating in his appointment as associate professor. His interest in solid-state physics took him to Bristol University, UK, for a year, and he also spent a year at Stanford University, California, on a Fulbright scholarship, working at the forefront of electronics research. In 1970, he accepted an offer from Macquarie University to become the founding professor of electronics and took up the post in 1971.
Macquarie University
At Macquarie, Aitchison was hired as foundation professor of electronics. He taught this subject, with an emphasis on semiconductor physics, to advanced undergraduates and graduate students. In his teaching he emphasized understanding of principles over memorization of facts.
As a researcher, he created a state-of-the-art electronics laboratory and led several successful projects of a highly practical nature including pioneering work on the reception of satellite weather pictures that were shown every evening in Sydney's TV newscasts. In addition, he serve |
https://en.wikipedia.org/wiki/List%20of%20Heroes%20characters | This is a list of fictional characters in the television series Heroes, the Heroes graphic novels, and the Heroes webisodes.
Main characters
Character duration
In its inaugural season, Heroes featured an ensemble cast of twelve main characters. During the first season, the NBC Heroes cast page listed ten characters among the cast; Leonard Roberts arrived later, and Jack Coleman was promoted to series regular as of the eleventh episode.
For the second season of the show, Santiago Cabrera, Tawny Cypress, and Leonard Roberts left the main cast. Zachary Quinto and James Kyson Lee, who were recurring characters in the first season, were added to the main cast, and were joined by new cast members David Anders, Kristen Bell, Dana Davis and Dania Ramirez. Anders was originally meant to be a recurring character, but was promoted to a series regular prior to the start of the season. He is credited as a guest star for the first four episodes of season two.
For the third season, Cristine Rose, recurring in the first two seasons, was promoted to series regular. David Anders, Kristen Bell, and Noah Gray-Cabey were taken off the main cast and became special guest stars. Additionally, Dana Davis was no longer part of the main cast, with scenes involving her in the third season being cut.
For the fourth season, a new character, Samuel Sullivan (portrayed by Robert Knepper), was added as a series regular. Originally cast as a recurring part, the role was changed to a starring one. Dania Ramirez left the main cast as well.
Character profiles
Other characters with special abilities
Introduced in Season One
Charlie Andrews
Charlene "Charlie" Andrews, portrayed by Jayma Mays (with K Callan playing an elderly Charlie in one episode), is a waitress at the Burnt Toast Diner in Midland, Texas, where Hiro Nakamura and Ando Masahashi stop to eat on their road trip to New York. After she reveals to Hiro that she had recently developed the ability to quickly memorize and recall a |
https://en.wikipedia.org/wiki/Biseriate | Biseriate is a botanical term applied to both plants and fungi, meaning 'arranged in two rows'.
The term can refer to any number of structures found within these kingdoms, from arrangement of leaves to the placement of spores.
It becomes useful in taxonomy for placing a species within a certain genus, family, or even order, based upon morphology, when making an initial choice or when DNA evidence is inconclusive. |
https://en.wikipedia.org/wiki/Phialide | The phialide (from the Greek diminutive of phiale, a broad, flat vessel) is a flask-shaped projection from the vesicle (the dilated part of the top of a conidiophore) of certain fungi. It projects from the mycelium without increasing in length unless a subsequent increase in the formation of conidia occurs.
It is the end cell of a phialosphore.
See also
Ascomycete |
https://en.wikipedia.org/wiki/Dig%20%28command%29 | dig is a network administration command-line tool for querying the Domain Name System (DNS).
dig is useful for network troubleshooting and for educational purposes. It can operate based on command-line options and flag arguments, or in batch mode by reading requests from an operating system file. When a specific name server is not specified in the command invocation, dig uses the operating system's default resolver, usually configured in the file resolv.conf. Without any arguments it queries the DNS root zone.
dig supports Internationalized domain name (IDN) queries.
dig is a component of the domain name server software suite BIND. In functionality, dig supersedes older tools such as nslookup and host; however, the older tools are still used in a complementary fashion.
Example usage
Basic
In this example, dig is used to query for any type of record information in the domain example.com:
$ dig example.com any
; <<>> DiG 9.6.1 <<>> example.com any
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 4016
;; flags: qr rd ra; QUERY: 1, ANSWER: 4, AUTHORITY: 0, ADDITIONAL: 0
;; QUESTION SECTION:
;example.com. IN ANY
;; ANSWER SECTION:
example.com. 172719 IN NS a.iana-servers.net.
example.com. 172719 IN NS b.iana-servers.net.
example.com. 172719 IN A 208.77.188.166
example.com. 172719 IN SOA dns1.icann.org. hostmaster.icann.org. 2007051703 7200 3600 1209600 86400
;; Query time: 1 msec
;; SERVER: ::1#53(::1)
;; WHEN: Wed Aug 12 11:40:43 2009
;; MSG SIZE rcvd: 154
The number 172719 in the above example is the time to live (TTL) value, which indicates the period of validity of the data.
The any DNS query is a special meta-query which is now deprecated. Since around 2019, most public DNS servers have stopped answering most DNS ANY queries usefully.
If ANY queries do not enumerate multiple records, the only option i |
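For comparison (an illustrative session reusing the records shown above, not part of the article's original example), each record type can be requested individually; the +short option limits output to the answer data, and @server directs the query at a specific name server:

$ dig example.com a +short
208.77.188.166
$ dig @a.iana-servers.net example.com ns +short
a.iana-servers.net.
b.iana-servers.net.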
https://en.wikipedia.org/wiki/Thermal%20death%20time | Thermal death time is how long it takes to kill a specific bacterium at a specific temperature. It was originally developed for food canning and has found applications in cosmetics, producing salmonella-free feeds for animals (e.g. poultry) and pharmaceuticals.
History
In 1895, William Lyman Underwood of the Underwood Canning Company, a food company founded in 1822 at Boston, Massachusetts and later relocated to Watertown, Massachusetts, approached William Thompson Sedgwick, chair of the biology department at the Massachusetts Institute of Technology, about losses his company was suffering due to swollen and burst cans despite the newest retort technology available. Sedgwick gave his assistant, Samuel Cate Prescott, a detailed assignment on what needed to be done. Prescott and Underwood worked on the problem every afternoon from late 1895 to late 1896, focusing on canned clams. They first discovered that the clams contained heat-resistant bacterial spores that were able to survive the processing; then that these spores' presence depended on the clams' living environment; and finally that these spores would be killed if processed at 250 ˚F (121 ˚C) for ten minutes in a retort.
These studies prompted the similar research of canned lobster, sardines, peas, tomatoes, corn, and spinach. Prescott and Underwood's work was first published in late 1896, with further papers appearing from 1897 to 1926. This research, though important to the growth of food technology, was never patented. It would pave the way for thermal death time research that was pioneered by Bigelow and C. Olin Ball from 1921 to 1936 at the National Canners Association (NCA).
Bigelow and Ball's research focused on the thermal death time of Clostridium botulinum (C. botulinum) that was determined in the early 1920s. Research continued with inoculated canning pack studies that were published by the NCA in 1968.
Mathematical formulas
Thermal death time can be determined in one of two ways: 1) by using graphs |
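The excerpt cuts off before the formulas, but the standard relationships used in thermal death time work (textbook forms, not necessarily the notation of this article) are the decimal reduction time $D_T$ and the z-value:
$$\log_{10} N_0 - \log_{10} N = \frac{t}{D_T},$$
where $N_0$ and $N$ are the viable counts before and after heating for time $t$ at temperature $T$, and
$$\frac{D_{T_1}}{D_{T_2}} = 10^{(T_2 - T_1)/z},$$
where the z-value is the temperature rise that reduces $D$ tenfold.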
https://en.wikipedia.org/wiki/International%20Partnership%20for%20Microbicides | The International Partnership for Microbicides or IPM is a non-profit product development partnership (PDP) founded by Dr. Zeda Rosenberg in 2002 to prevent HIV transmission by accelerating the development and availability of a safe and effective microbicide for use by women in developing countries.
Since its inception, IPM has focused on developing HIV-prevention products for women including gels, films, tablets and rings that contain antiretroviral (ARV)-based microbicides. Rights to incorporate existing ARVs into products developed specifically for use in developing countries have been negotiated with pharmaceutical companies working in the HIV field.
See also
International AIDS Society
Joint United Nations Programme on HIV/AIDS (UNAIDS)
Prince Leopold Institute of Tropical Medicine
Tibotec |
https://en.wikipedia.org/wiki/Markus%E2%80%93Yamabe%20conjecture | In mathematics, the Markus–Yamabe conjecture is a conjecture on global asymptotic stability. If the Jacobian matrix of a dynamical system at a fixed point is Hurwitz, then the fixed point is asymptotically stable. The Markus–Yamabe conjecture asks whether a similar result holds globally. Precisely, the conjecture states that if a continuously differentiable map on an $n$-dimensional real vector space has a fixed point, and its Jacobian matrix is everywhere Hurwitz, then the fixed point is globally stable.
The conjecture is true for the two-dimensional case. However, counterexamples have been constructed in higher dimensions. Hence, in the two-dimensional case only, it can also be referred to as the Markus–Yamabe theorem.
Related mathematical results concerning global asymptotic stability, which are applicable in dimensions higher than two, include various autonomous convergence theorems. An analog of the conjecture for nonlinear control systems with scalar nonlinearity is known as the Kalman conjecture.
Mathematical statement of conjecture
Let $f \colon \mathbb{R}^n \to \mathbb{R}^n$ be a $C^1$ map with $f(0) = 0$ and Jacobian matrix $Df(x)$ which is Hurwitz stable for every $x \in \mathbb{R}^n$.
Then $0$ is a global attractor of the dynamical system $\dot{x} = f(x)$.
The conjecture is true for $n \le 2$ and false in general for $n \ge 3$. |
https://en.wikipedia.org/wiki/Erwise | Erwise is an early discontinued web browser, and the first that was available for the X Window System.
Released in April 1992, the browser was written for Unix computers running X and used the W3 common access library. Erwise was the combined master's project of four Finnish students at the Helsinki University of Technology (now merged into Aalto University): Kim Nyberg, Teemu Rantanen, Kati Suominen and Kari Sydänmaanlakka. The group decided to make a web browser at the suggestion of Robert Cailliau, who was visiting the university, and were supervised by Ari Lemmke.
The development of Erwise halted after the students graduated and went on to other projects. Tim Berners-Lee, the creator of the World Wide Web, travelled to Finland to encourage the group to continue with the project. However, none of the project members could afford to continue with the project without proper funding.
The name Erwise originates from otherwise and the name of the project group, OHT.
Development
For the web to be popularized, Tim Berners-Lee knew that what people wanted was a GUI-based browser – one that could target multiple operating systems and, most importantly, be easy to use for the technologically challenged. At the time, personal computers were also confusing to some people who were not experienced with technology.
History
Extremely pre-documented (in Finnish).
Serious coding started around March 1992.
Alpha release available by anonymous FTP from info.cern.ch—binaries only (sun4 works, decstation too, display requires Motif) as of 15 April 1992.
Source code released on www-talk August 92.
Characteristics
The following are significant characteristics of the browser:
It used multi-font text.
Links in Erwise were underlined; to visit a link, the user double-clicked it.
Erwise could operate with multiple windows, though an optional single-window mode was also available.
Erwise could open local files.
Erwise had little English docume |
https://en.wikipedia.org/wiki/Tefkat | Tefkat is a model transformation language and a model transformation engine. The language is based on F-logic and the theory of stratified logic programs. The engine is an Eclipse plug-in for the Eclipse Modeling Framework (EMF).
History
Tefkat was one of the sub-projects of the Pegamento project at the Distributed Systems Technology Centre (DSTC), Australia. Although the project was already underway, the most active research occurred during the preparation of a response to the OMG's MOF 2.0 Queries/Views/Transformations Request for Proposals.
Tefkat was open-sourced before the closure of the DSTC in June 2006.
Brief description
Tefkat defines a mapping from a set of source metamodels to a set of target metamodels. A Tefkat transformation consists of rules, patterns and templates. Rules contain a source term and a target term. Patterns are simply named composite source terms, and templates are simply named composite target terms. These elements are based on F-logic and pure logic programming; however, the absence of function symbols brings a significant reduction in complexity.
Tefkat has two more significant language elements: trackings and injections. Trackings allow arbitrary relationships to be preserved in a trace model. Injections allow the identity of target objects to be specified in terms of a function symbol. Injections are thus similar to (but more powerful than) QVT's keys, which specify a target object's identity as a function of its type and some of its properties.
The declarative semantics of a Tefkat transformation is the perfect model of traces and targets that satisfies all the rules. A more imperative semantics of a Tefkat transformation is the iterated least fixed-point of the immediate consequence of each rule. Due to stratification, these semantics are equivalent and unambiguous. Tefkat does not use explicit rule-calling; all (non-abstract) rules fire independently from all others, however rules can be loosely coupled using |
https://en.wikipedia.org/wiki/Inner-platform%20effect | The inner-platform effect is the tendency of software architects to create a system so customizable as to become a replica, and often a poor replica, of the software development platform they are using. This is generally inefficient and such systems are often considered to be examples of an anti-pattern.
Examples
Examples are visible in plugin-based software such as some text editors and web browsers which often have developers create plugins that recreate software that would normally run on top of the operating system itself. The Firefox add-on mechanism has been used to develop a number of FTP clients and file browsers, which effectively replicate some of the features of the operating system, albeit on a more restricted platform.
In the database world, developers are sometimes tempted to bypass the RDBMS, for example by storing everything in one big table with three columns labelled entity ID, key, and value. While this entity-attribute-value model allows the developer to break out from the structure imposed by an SQL database, it loses out on all the benefits, since all of the work that could be done efficiently by the RDBMS is forced onto the application instead. Queries become much more convoluted, the indexes and query optimizer can no longer work effectively, and data validity constraints are not enforced. Performance and maintainability can be extremely poor.
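A sketch of how such a three-column "inner platform" pushes work onto the application (the names are invented for illustration): reassembling a single logical record means filtering and folding key/value rows by hand, where a proper table would need only one indexed lookup.

// Entity-attribute-value rows standing in for a proper "users" table.
const rows = [
  { entityId: 1, key: "name",  value: "Ada" },
  { entityId: 1, key: "email", value: "ada@example.com" },
  { entityId: 2, key: "name",  value: "Bob" },
];

// Rebuilding one record: the grouping an RDBMS would normally do for us.
function getEntity(id) {
  return rows
    .filter(r => r.entityId === id)
    .reduce((obj, r) => ({ ...obj, [r.key]: r.value }), { id });
}

console.log(getEntity(1)); // { id: 1, name: 'Ada', email: 'ada@example.com' }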
A similar temptation exists for XML, where developers sometimes favor generic element names and use attributes to store meaningful information. For example, every element might be named item and have attributes type and value. This practice requires joins across multiple attributes in order to extract meaning. As a result, XPath expressions are more convoluted, evaluation is less efficient, and structural validation provides little benefit.
Another example is the phenomenon of web desktops, where a whole desktop environment—often including a web browser—runs inside a browser (which itself typically |
https://en.wikipedia.org/wiki/Anatoli%20Bugorski | Anatoli Petrovich Bugorski (; born 25 June 1942) is a Russian retired particle physicist. He is known for surviving a radiation accident in 1978, when a high-energy proton beam from a particle accelerator passed through his brain.
Accident
As a researcher at the Institute for High Energy Physics in Protvino, Russian SFSR, Anatoli Bugorski worked with the largest particle accelerator in the Soviet Union, the U-70 synchrotron. On 13 July 1978, Bugorski was checking a malfunctioning piece of equipment when the safety mechanisms failed. Bugorski was leaning over the equipment when his head was struck by the 76 GeV proton beam. Reportedly, he saw a flash "brighter than a thousand suns" but did not feel any pain. The beam passed through the back of his head, the occipital and temporal lobes of his brain, the left middle ear, and out through the left-hand side of his nose. The exposed parts of his head received a local dose of 200,000 to 300,000 roentgens (2,000 to 3,000 sieverts). Bugorski understood the severity of what had happened, but continued working on the malfunctioning equipment, and initially opted not to tell anyone what had happened.
Aftermath
The left half of Bugorski's face swelled up beyond recognition and, over the next several days, the skin started to peel, revealing the path that the proton beam had burned through parts of his face, his bone, and the brain tissue underneath. As it was believed that he had received far in excess of a fatal dose of radiation, Bugorski was taken to a clinic in Moscow where the doctors could observe his expected demise. However, Bugorski survived, completed his PhD, and continued working as a particle physicist. There was virtually no damage to his intellectual capacity, but the fatigue of mental work increased markedly. Bugorski completely lost hearing in the left ear, replaced by a form of tinnitus. The left half of his face was paralysed due to the destruction of nerves. He was able to function well, except |
https://en.wikipedia.org/wiki/Ad%20hoc%20testing | Ad hoc testing is a commonly used term for planned software testing that is performed without initial test case documentation; however, ad hoc testing can also be applied to other scientific research and quality control efforts. Ad hoc tests are useful for adding additional confidence to a resulting product or process, as well as quickly spotting important defects or inefficiencies, but they have some disadvantages, such as having inherent uncertainties in their performance and not being as useful without proper documentation post-execution and -completion. Occasionally, ad hoc testing is compared to exploratory testing as being less rigorous, though others argue that ad hoc testing still has value as "improvised testing that deals well with verifying a specific subject."
Ad hoc testing of software
When testing software, that testing may be methodical or more improvisational. Methodical testing will include written test cases, which detail their own set of specified inputs, execution conditions, testing procedures, and expected results as a means of achieving a particular software testing objective. Ad hoc testing may have a more "improvisational" feel to it, as initial test cases are not documented and the tester's intuition, skill set, and experience are more relevant; however, ad hoc testing of software is still largely a planned activity. The tester still intends to apply—as part of the overall software development process—their own methodology to find bugs not anticipated by planned test cases, using any means that seem appropriate given the situation. Ad hoc testing can, for example, be an extension of existing documented test cases, applying invented variations of those test cases improvisationally without formally documenting the specifics beforehand. However, as Desikan notes, to get the most from an ad hoc test and limit its downsides, the test should be properly documented post-execution and -completion, and the results report should address h |
https://en.wikipedia.org/wiki/Entropy%20%28order%20and%20disorder%29 | In thermodynamics, entropy is often associated with the amount of order or disorder in a thermodynamic system. This stems from Rudolf Clausius' 1862 assertion that any thermodynamic process always "admits to being reduced [reduction] to the alteration in some way or another of the arrangement of the constituent parts of the working body" and that internal work associated with these alterations is quantified energetically by a measure of "entropy" change, according to the following differential expression:
$$dS = \frac{\delta q_{\mathrm{rev}}}{T}$$
where $\delta q_{\mathrm{rev}}$ = motional energy (“heat”) that is transferred reversibly to the system from the surroundings and $T$ = the absolute temperature at which the transfer occurs.
In the years to follow, Ludwig Boltzmann translated these 'alterations of arrangement' into a probabilistic view of order and disorder in gas-phase molecular systems. In the context of entropy, "perfect internal disorder" has often been regarded as describing thermodynamic equilibrium, but since the thermodynamic concept is so far from everyday thinking, the use of the term in physics and chemistry has caused much confusion and misunderstanding.
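Boltzmann's probabilistic reading of these 'alterations of arrangement' was later condensed into his celebrated entropy formula (a standard result, quoted here for reference, not taken from this excerpt):
$$S = k_{\mathrm{B}} \ln W,$$
where $W$ is the number of microscopic arrangements (microstates) consistent with the macroscopic state and $k_{\mathrm{B}}$ is the Boltzmann constant.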
In recent years, to interpret the concept of entropy, by further describing the 'alterations of arrangement', there has been a shift away from the words 'order' and 'disorder', to words such as 'spread' and 'dispersal'.
History
This "molecular ordering" entropy perspective traces its origins to molecular movement interpretations developed by Rudolf Clausius in the 1850s, particularly with his 1862 visual conception of molecular disgregation. Similarly, in 1859, after reading a paper on the diffusion of molecules by Clausius, Scottish physicist James Clerk Maxwell formulated the Maxwell distribution of molecular velocities, which gave the proportion of molecules having a certain velocity in a specific range. This was the first-ever statistical law in physics.
In 1864, Ludwig Boltzmann, a young student in Vienna, came across Maxwell's paper and was so inspired |
https://en.wikipedia.org/wiki/Loop-switch%20sequence | A loop-switch sequence (also known as the for-case paradigm or Anti-Duff's Device) is a programming antipattern where a clear set of steps is implemented as a switch-within-a-loop. The loop-switch sequence is a specific derivative of spaghetti code.
It is not necessarily an antipattern to use a switch statement within a loop—it is only considered incorrect when used to model a known sequence of steps. The most common example of the correct use of a switch within a loop is an inversion of control such as an event handler. In event handler loops, the sequence of events is not known at compile-time, so the repeated switch is both necessary and correct (see event-driven programming, event loop and event-driven finite state machine).
This is not a performance antipattern, though it may lead to an inconsequential performance penalty due to the lack of an unrolled loop. Rather, it is a clarity antipattern, as in any non-trivial example it is much more difficult to decipher the intent and actual function of the code than the more straightforward refactored solution.
Example
An event-driven solution would implement a listener interface:
String key = null;
String value = null;
List<String> params = null;
int column = 0;
public void addToken(String token) {
// parse a key, a value, then three parameters
switch (column) {
case 0:
params = new LinkedList<String>();
key = token;
break;
case 1:
value = token;
break;
default:
params.add(token);
break;
}
if (++column >= 5) {
column = 0;
completeRow(key, value, params);
}
}
But without the listener, it becomes an example of the antipattern:
// parse a key, a value, then three parameters
String key = null;
String value = null;
List<String> params = new LinkedList<String>();
for (int i = 0; i < 5; i++) {
switch (i) {
case 0:
key = stream.parse();
break;
|
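The listing above is cut off, but the "more straightforward refactored solution" mentioned earlier simply writes the known sequence of steps as a sequence of statements. A sketch, assuming the same hypothetical stream and completeRow API as the examples above:

// The known sequence of steps, written out in order.
String key = stream.parse();
String value = stream.parse();
List<String> params = new LinkedList<String>();
for (int i = 0; i < 3; i++) {
    params.add(stream.parse()); // the three parameters
}
completeRow(key, value, params);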
https://en.wikipedia.org/wiki/Random%20digit%20dialing | Random digit dialing (RDD) is a method for selecting people for involvement in telephone statistical surveys by generating telephone numbers at random. Random digit dialing has the advantage that it includes unlisted numbers that would be missed if the numbers were selected from a phone book. In populations with a high telephone-ownership rate, it can be a cost-efficient way to get complete coverage of a geographic area.
RDD is widely used for statistical surveys, including election opinion polling and selection of experimental control groups.
When the desired coverage area matches up closely enough with country codes and area codes, random digits can be chosen within the desired area codes. In cases where the desired region doesn't match area codes (for instance, electoral districts), surveys must rely on telephone databases, falling back on self-reported address information for unlisted numbers. Increasing use of mobile phones (although there are currently techniques which allow infusion of wireless phones into the RDD sampling frame), number portability, and VoIP have begun to decrease the ability of RDD to target specific areas within a country and achieve complete coverage.
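A toy sketch of the basic idea (hypothetical, and ignoring real numbering-plan rules such as reserved exchanges, which real RDD frames must screen out):

// Generate a random US-style number within one of the given area codes.
function randomDigitDial(areaCodes) {
  const area = areaCodes[Math.floor(Math.random() * areaCodes.length)];
  const local = String(Math.floor(Math.random() * 1e7)).padStart(7, "0");
  return `${area}-${local.slice(0, 3)}-${local.slice(3)}`;
}

console.log(randomDigitDial(["312", "773"])); // e.g. "773-555-0192"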
See also
Autodialer |
https://en.wikipedia.org/wiki/Debye%E2%80%93H%C3%BCckel%20theory | The Debye–Hückel theory was proposed by Peter Debye and Erich Hückel as a theoretical explanation for departures from ideality in solutions of electrolytes and plasmas.
It is a linearized Poisson–Boltzmann model, which assumes an extremely simplified model of electrolyte solution but nevertheless gave accurate predictions of mean activity coefficients for ions in dilute solution. The Debye–Hückel equation provides a starting point for modern treatments of non-ideality of electrolyte solutions.
Overview
In the chemistry of electrolyte solutions, an ideal solution is a solution whose colligative properties are proportional to the concentration of the solute. Real solutions may show departures from this kind of ideality. In order to accommodate these effects in the thermodynamics of solutions, the concept of activity was introduced: the properties are then proportional to the activities of the ions. Activity, $a$, is proportional to concentration, $c$. The proportionality constant is known as an activity coefficient, $\gamma$.
In an ideal electrolyte solution the activity coefficients for all the ions are equal to one. Ideality of an electrolyte solution can be achieved only in very dilute solutions. Non-ideality of more concentrated solutions arises principally (but not exclusively) because ions of opposite charge attract each other due to electrostatic forces, while ions of the same charge repel each other. In consequence ions are not randomly distributed throughout the solution, as they would be in an ideal solution.
Activity coefficients of single ions cannot be measured experimentally because an electrolyte solution must contain both positively charged ions and negatively charged ions. Instead, a mean activity coefficient, $\gamma_\pm$, is defined. For example, with the electrolyte NaCl,
$$\gamma_\pm = \left( \gamma_{\mathrm{Na^+}} \, \gamma_{\mathrm{Cl^-}} \right)^{1/2}.$$
In general, the mean activity coefficient of a fully dissociated electrolyte of formula $\mathrm{A}_n\mathrm{B}_m$ is given by
$$\gamma_\pm = \left( \gamma_+^{\,n} \, \gamma_-^{\,m} \right)^{1/(n+m)}.$$
Activity coefficients are themselves functions of concentration as the amount of inter-ionic |
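The excerpt ends before the theory's central result; for reference, the Debye–Hückel limiting law for the mean activity coefficient in very dilute solution takes the standard form
$$\log_{10} \gamma_\pm = -A \, |z_+ z_-| \sqrt{I},$$
where $z_+$ and $z_-$ are the ionic charges, $I$ is the ionic strength, and $A \approx 0.509$ for water at 25 °C.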
https://en.wikipedia.org/wiki/Epic%20of%20evolution | In social, cultural and religious studies in the United States, the "epic of evolution" is a narrative that blends religious and scientific views of cosmic, biological and sociocultural evolution in a mythological manner. According to The Encyclopedia of Religion and Nature, an "epic of evolution" encompasses
History
"Epic of evolution" seems to have originated from the sociobiologist Edward O. Wilson's use of the phrase "evolutionary epic" in 1978. Wilson was not the first to use the term but his prominence prompted its usage as the morphed phrase 'epic of evolution'. In later years, he also used the latter term.
Naturalistic and liberal religious writers have picked up on Wilson's term and have used it in a number of texts. These authors however have at times used other terms to refer to the idea: Universe Story (Brian Swimme, John F. Haught), Great Story (Connie Barlow, Michael Dowd), Everybody's Story (Loyal Rue), New Story (Thomas Berry, Al Gore, Brian Swimme) and Cosmic Evolution (Eric Chaisson).
Narrative
Evolution generally refers to biological evolution, but here it means a process in which the whole universe is a progression of interrelated phenomena, a gradual process in which something changes into a different and usually more complex form (emergence). It should not be "biologized" as it includes many areas of science. In addition, outside of the scientific community, the term evolution is frequently used differently from scientists' usage. This often leads to misunderstanding since scientists are viewing evolution from a different perspective. The same applies to the use of the term theory as used in the theory of evolution (see references for Evolution as theory and fact).
This epic is not a long narrative poem but a series of events that form the proper subject for a laudable kind of tale. It is mythic in that it is a story of ostensibly historical events that serves to unfold part of the worldview of a people and explains a natural phenomenon. |
https://en.wikipedia.org/wiki/Herbrand%20interpretation | In mathematical logic, a Herbrand interpretation is an interpretation in which all constants and function symbols are assigned very simple meanings. Specifically, every constant is interpreted as itself, and every function symbol is interpreted as the application function on terms. The interpretation also defines predicate symbols as denoting a subset of the relevant Herbrand base, effectively specifying which ground atoms are true in the interpretation. This allows the symbols in a set of clauses to be interpreted in a purely syntactic way, separated from any real instantiation.
The importance of Herbrand interpretations is that, if there exists an interpretation that satisfies a given set of clauses S then there is a Herbrand interpretation that satisfies the clauses. Moreover, Herbrand's theorem states that if S is unsatisfiable then there is a finite unsatisfiable set of ground instances from the Herbrand universe defined by S. Since this set is finite, its unsatisfiability can be verified in finite time. However, there may be an infinite number of such sets to check.
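As a small illustration of the Herbrand universe underlying these notions (the signature here is invented, not from the article), the ground terms can be enumerated level by level by closing the constants under the function symbols:

// Enumerate the Herbrand universe, up to a given nesting depth, for a
// hypothetical signature with one constant "a", a unary f and a binary g.
function herbrandUniverse(constants, depth) {
  let terms = [...constants];
  for (let d = 0; d < depth; d++) {
    const next = new Set(terms);
    for (const t of terms) next.add(`f(${t})`);
    for (const t of terms) for (const u of terms) next.add(`g(${t},${u})`);
    terms = [...next];
  }
  return terms; // depth 0: ["a"]; depth 1: ["a", "f(a)", "g(a,a)"]; ...
}

console.log(herbrandUniverse(["a"], 1)); // [ 'a', 'f(a)', 'g(a,a)' ]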
Herbrand interpretations are named after Jacques Herbrand.
See also
Herbrand structure
Interpretation (logic)
Interpretation (model theory)
Notes
Mathematical logic |
https://en.wikipedia.org/wiki/Gamma%20secretase | Gamma secretase is a multi-subunit protease complex, itself an integral membrane protein, that cleaves single-pass transmembrane proteins at residues within the transmembrane domain. Proteases of this type are known as intramembrane proteases. The most well-known substrate of gamma secretase is amyloid precursor protein, a large integral membrane protein that, when cleaved by both gamma and beta secretase, produces a short 37-43 amino acid peptide called amyloid beta whose abnormally folded fibrillar form is the primary component of amyloid plaques found in the brains of Alzheimer's disease patients. Gamma secretase is also critical in the related processing of several other type I integral membrane proteins, such as Notch, ErbB4, E-cadherin, N-cadherin, ephrin-B2, or CD44.
Subunits and assembly
The gamma secretase complex consists of four individual proteins: PSEN1 (presenilin-1), nicastrin, APH-1 (anterior pharynx-defective 1), and PEN-2 (presenilin enhancer 2). Recent evidence suggests that a fifth protein, known as CD147, is a non-essential regulator of the complex whose absence increases activity. Presenilin, an aspartyl protease, is the catalytic subunit; mutations in the presenilin gene have been shown to be a major genetic risk factor for Alzheimer's disease and to modulate immune cell activity. In humans, two forms of presenilin and two forms of APH-1 have been identified in the genome; one of the APH-1 homologs can also be expressed in two isoforms via alternative splicing, leading to at least six different possible gamma secretase complexes that may have tissue- or cell-type specificity.
The proteins in the gamma secretase complex are heavily modified by proteolysis during assembly and maturation of the complex; a required activation step is in the autocatalytic cleavage of presenilin to N- and C-terminal fragments. Nicastrin's primary role is in maintaining the stability of the assembled complex and regulating intracellular protein trafficking. PEN-2 asso |
https://en.wikipedia.org/wiki/Presenilin | Presenilins are a family of related multi-pass transmembrane proteins which constitute the catalytic subunits of the gamma-secretase intramembrane protease protein complex. They were first identified in screens for mutations causing early onset forms of familial Alzheimer's disease by Peter St George-Hyslop. Vertebrates have two presenilin genes, called PSEN1 (located on chromosome 14 in humans) that codes for presenilin 1 (PS-1) and PSEN2 (on chromosome 1 in humans) that codes for presenilin 2 (PS-2). Both genes show conservation between species, with little difference between rat and human presenilins. The nematode worm C. elegans has two genes that resemble the presenilins and appear to be functionally similar, sel-12 and hop-1.
Presenilins undergo cleavage in an alpha helical region of one of the cytoplasmic loops to produce a large N-terminal and a smaller C-terminal fragment that together form part of the functional protein. Cleavage of presenilin 1 can be prevented by a mutation that causes the loss of exon 9, and results in loss of function. Presenilins play a key role in the modulation of intracellular Ca2+ involved in presynaptic neurotransmitter release and long-term potentiation induction.
Structure
Presenilins are transmembrane proteins with nine alpha helices. Structures have been solved of the assembled gamma secretase complex by cryo-electron microscopy, demonstrating significant conformational flexibility in the structure of the presenilin subunit of the complex in response to ligand or inhibitor binding. Presenilins undergo autocatalytic proteolytic processing after expression, cleaving a cytoplasmic loop region between the sixth and seventh helices to produce a large N-terminal and a smaller C-terminal fragment. The two fragments remain in contact with each other in the mature protein. The two catalytic aspartate active site residues required for aspartyl protease activity are located in the sixth and seventh helices.
The structure and membran |
https://en.wikipedia.org/wiki/Nested%20quotation | A nested quotation is a quotation that is encapsulated inside another quotation, forming a hierarchy with multiple levels. When focusing on a certain quotation, one must interpret it within its scope. Nested quotation can be used in literature (as in nested narration), speech, and computer science (as in "meta"-statements that refer to other statements as strings). Nested quotation can be very confusing until evaluated carefully and until each quotation level is put into perspective.
In literature
In languages that allow for nested quotes and use quotation mark punctuation to indicate direct speech, hierarchical quotation sublevels are usually punctuated by alternating between primary quotation marks and secondary quotation marks. For a comprehensive analysis of the major quotation mark systems employed in major writing systems, see Quotation mark.
In JavaScript programming
Nested quotes often become an issue when using the eval keyword. The eval function converts and interprets a string as actual JavaScript code, and runs that code. If that string is specified as a literal, then the code must be written as a quote itself (and escaped accordingly).
For example:
eval("var a=3; alert();");
This code declares a variable a, assigns it the value 3, and pops up a blank alert window to the user.
Nested strings (level 2)
Suppose we had to make a quote inside the quoted interpreted code. In JavaScript, you can only have one unescaped quote sublevel, which has to be the alternate of the top-level quote. If the 2nd-level quote symbol is the same as the first-level symbol, these quotes must be escaped. For example:
alert("I don't need to escape here");
alert('Nor is it "required" here');
alert('But now I do or it won\'t work');
Nested strings (level 3 and beyond)
Furthermore, (unlike in the literature example), the third-level nested quote must be escaped in order not to conflict with either the first- or second-level quote delimiters. Thi |
https://en.wikipedia.org/wiki/ACF2 | ACF2 (Access Control Facility 2) is a commercial, discretionary access control software security system developed for the MVS (z/OS today), VSE (z/VSE today) and VM (z/VM today) IBM mainframe operating systems by SKK, Inc. Barry Schrager, Eberhard Klemens, and Scott Krueger combined to develop ACF2 at London Life Insurance in London, Ontario in 1978. The "2" was added to the ACF2 name by Cambridge Systems (who had the North American marketing rights for the product) to differentiate it from the prototype, which was developed by Schrager and Klemens at the University of Illinois—the prototype name was ACF. The "2" also helped to distinguish the product from IBM's ACF/VTAM.
ACF2 was developed in response to IBM's RACF product (developed in 1976), which was IBM's answer to the 1974 SHARE Security and Data Management project's requirement whitepaper. ACF2's design was guided by these requirements, taking a resource-rule oriented approach. Unique to ACF2 were the concepts of "Protection by Default" and resource pattern masking.
As a result of the competitive tension between RACF and ACF2, IBM matured the SAF (System Authorization Facility) interface in MVS (now z/OS), which allowed any security product to process operating system ("OS"), third-party software and application security calls, enabling the mainframe to secure all facets of mainframe operations.
SKK and ACF2 were sold to UCCEL Corporation in 1986, which in turn was purchased by Computer Associates International, Inc. in 1987. Broadcom Inc. now (2019) markets ACF2 as CA ACF2. |
https://en.wikipedia.org/wiki/WirelessHD | WirelessHD, also known as UltraGig, is a proprietary standard owned by Silicon Image (originally SiBeam) for wireless transmission of high-definition video content for consumer electronics products. The consortium currently has over 40 adopters; key members behind the specification include Broadcom, Intel, LG, Panasonic, NEC, Samsung, SiBEAM, Sony, Philips and Toshiba. The founders intend the technology to be used for Consumer Electronic devices, PCs, and portable devices.
The specification was finalized in January 2008.
Technology
The WirelessHD specification is based on a 7 GHz channel in the 60 GHz Extremely High Frequency radio band. It allows either lightly compressed (proprietary wireless link-aware codec) or uncompressed digital transmission of high-definition video, audio and data signals, essentially making it a wireless equivalent of HDMI. First-generation implementations achieve data rates of 4 Gbit/s, but the core technology allows theoretical data rates as high as 25 Gbit/s (compared to 10.2 Gbit/s for HDMI 1.3 and 21.6 Gbit/s for DisplayPort 1.2), permitting WirelessHD to scale to higher resolutions, color depth, and range. The 1.1 version of the specification increases the maximum data rate to 28 Gbit/s, and supports common 3D formats, 4K resolution, WPAN data, a low-power mode for portable devices, and HDCP 2.0 content protection.
The 60 GHz band usually requires line of sight between transmitter and receiver, and the WirelessHD specification ameliorates this limitation through the use of beam forming at the receiver and transmitter antennas to increase the signal's effective radiated power, find the best path, and utilise wall reflections. The goal range for the first products will be in-room, point-to-point, non line-of-sight (NLOS) at up to 10 meters. The atmospheric absorption of 60 GHz energy by oxygen molecules limits undesired propagation over long distances and helps control intersystem interference and long distance reception, which is a |
https://en.wikipedia.org/wiki/Physica%20Scripta | Physica Scripta is an international scientific journal for experimental and theoretical physics. It was established in 1970 as the successor of Arkiv för Fysik and published by the Royal Swedish Academy of Sciences (KVA). Since 2006, it has been published by IOP Publishing with the endorsement of the KVA. The journal covers both experimental and theoretical physics, with an accent on atomic, molecular and optical physics, plasma physics, condensed matter physics and mathematical physics.
Abstracting, indexing, and impact factor
According to the Journal Citation Reports, the journal has a 2021 impact factor of 3.081.
It is indexed in the following bibliographic databases:
Chemical Abstracts
Compendex
GeoRef
Inspec
Scopus
Zentralblatt MATH |
https://en.wikipedia.org/wiki/Cage%20%28graph%20theory%29 | In the mathematical field of graph theory, a cage is a regular graph that has as few vertices as possible for its girth.
Formally, an (r, g)-graph is defined to be a graph in which each vertex has exactly r neighbors, and in which the shortest cycle has length exactly g.
An (r, g)-cage is an (r, g)-graph with the smallest possible number of vertices, among all (r, g)-graphs. A (3, g)-cage is often called a g-cage.
It is known that an (r, g)-graph exists for any combination of r ≥ 2 and g ≥ 3. It follows that all (r, g)-cages exist.
If a Moore graph exists with degree r and girth g, it must be a cage. Moreover, the bounds on the sizes of Moore graphs generalize to cages: any cage with odd girth g must have at least
1 + r · Σ_{i=0}^{(g−3)/2} (r − 1)^i
vertices, and any cage with even girth g must have at least
2 · Σ_{i=0}^{(g−2)/2} (r − 1)^i
vertices. Any (r, g)-graph with exactly this many vertices is by definition a Moore graph and therefore automatically a cage.
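These lower bounds are straightforward to evaluate. A minimal Python sketch (the helper name moore_bound is ours, not standard):

def moore_bound(r, g):
    """Moore lower bound on the number of vertices of an (r, g)-cage."""
    if g % 2 == 1:
        k = (g - 1) // 2          # odd girth g = 2k + 1
        return 1 + r * sum((r - 1) ** i for i in range(k))
    k = g // 2                    # even girth g = 2k
    return 2 * sum((r - 1) ** i for i in range(k))

print(moore_bound(3, 5))   # 10, met by the Petersen graph
print(moore_bound(3, 6))   # 14, met by the Heawood graph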
There may exist multiple cages for a given combination of r and g. For instance there are three nonisomorphic (3, 10)-cages, each with 70 vertices: the Balaban 10-cage, the Harries graph and the Harries–Wong graph. But there is only one (3, 11)-cage: the Balaban 11-cage (with 112 vertices).
Known cages
A 1-regular graph has no cycle, and a connected 2-regular graph has girth equal to its number of vertices, so cages are only of interest for r ≥ 3. The (r,3)-cage is a complete graph Kr+1 on r+1 vertices, and the (r,4)-cage is a complete bipartite graph Kr,r on 2r vertices.
Notable cages include:
(3,5)-cage: the Petersen graph, 10 vertices
(3,6)-cage: the Heawood graph, 14 vertices
(3,7)-cage: the McGee graph, 24 vertices
(3,8)-cage: the Tutte–Coxeter graph, 30 vertices
(3,10)-cage: the Balaban 10-cage, 70 vertices
(3,11)-cage: the Balaban 11-cage, 112 vertices
(4,5)-cage: the Robertson graph, 19 vertices
(7,5)-cage: The Hoffman–Singleton graph, 50 vertices.
When r − 1 is a prime power, the (r,6) cages are the incidence graphs of projective planes.
When r − 1 is a prime power, the (r,8) and (r,12) cages are generalized polygons.
The numbers of vertices in the known (r,g) cages, for values of r > |
https://en.wikipedia.org/wiki/E-function | In mathematics, E-functions are a type of power series that satisfy particular arithmetic conditions on the coefficients. They are of interest in transcendental number theory, and are more special than G-functions.
Definition
A function f(x) is called of type E, or an E-function, if the power series
f(x) = Σ_{n=0}^∞ c_n xⁿ / n!
satisfies the following three conditions:
All the coefficients c_n belong to the same algebraic number field, K, which has finite degree over the rational numbers;
For all ε > 0,
|c_n|‾ = O(n^{εn}) as n → ∞,
where the left hand side represents the maximum of the absolute values of all the algebraic conjugates of c_n;
For all ε > 0 there is a sequence of natural numbers q_0, q_1, q_2, … such that q_n c_k is an algebraic integer in K for k = 0, 1, 2, …, n, and for which
q_n = O(n^{εn}).
The second condition implies that f(x) is an entire function of x.
Uses
E-functions were first studied by Siegel in 1929. He found a method to show that the values taken by certain E-functions were algebraically independent. This was a result which established the algebraic independence of classes of numbers rather than just linear independence. Since then these functions have proved somewhat useful in number theory and in particular they have application in transcendence proofs and differential equations.
The Siegel–Shidlovsky theorem
Perhaps the main result connected to E-functions is the Siegel–Shidlovsky theorem (also known as the Siegel and Shidlovsky theorem), named after Carl Ludwig Siegel and Andrei Borisovich Shidlovsky.
Suppose that we are given n E-functions, f_1(x), …, f_n(x), that satisfy a system of homogeneous linear differential equations
y′_i = Σ_{j=1}^n f_{ij}(x) y_j   (1 ≤ i ≤ n),
where the f_{ij} are rational functions of x, and the coefficients of each f and f_{ij} are elements of an algebraic number field K. Then the theorem states that if f_1(x), …, f_n(x) are algebraically independent over K(x), then for any non-zero algebraic number α that is not a pole of any of the f_{ij} the numbers f_1(α), …, f_n(α) are algebraically independent.
Examples
Any polynomial with algebraic coefficients is a simple example of an E-function.
The exponential function is an E-function, in its case |
https://en.wikipedia.org/wiki/Trichocyst | A trichocyst is an organelle found in certain ciliates and dinoflagellates.
A trichocyst can be found in Tetrahymena and along cilia pathways of several metabolic systems.
It is also a structure in the cortex of certain ciliate and flagellate protozoans consisting of a cavity and long, thin threads that can be ejected in response to certain stimuli. Trichocysts may be widely distributed over an organism or restricted to certain areas (e.g., tentacles, papillae, around the mouth). There are several types. Mucoid trichocysts are elongated inclusions that may be ejected as visible bodies after artificial stimulation. Filamentous trichocysts in Paramecium and other ciliates are discharged as filaments composed of a cross-striated shaft and a tip. Toxicysts (in Dileptus and certain other carnivorous protozoans) tend to be localized around the mouth. When discharged, a toxicyst expels a long, nonstriated filament with a rodlike tip, which paralyzes or kills other microorganisms; this filament is used to capture food and, presumably, in defense.
The functional significance of other trichocysts is uncertain, although those of Paramecium apparently can be extruded for anchorage during feeding. |
https://en.wikipedia.org/wiki/Hack%20%28falconry%29 | Hacking is a training method that helps young birds of prey reach their hunting potential by giving them exercise and experience. This technique is used to prepare the falcon to become an independent hunter. The sequence of the procedure includes captivity, releasing, flight, and either the falcon will be recaptured for falconry or released into the wild. This has also been adapted to other raptor species to preserve the population. Generally, falconers agree that hacked falcons are better and more preferred in the field. Hacking is beneficial , not only for the falconers, but for the bird itself and the species; however, there are some criticism and restrictions that come along with this method.
Procedure
Captivity
Hacking sites are usually large tracts of land. These areas have to be similar to the natural surroundings of a wild nest. The young raptors are put in a “hack box”, a box containing a nest that protects them from predators, usually placed on a high site, e.g. on cliffs or atop poles. Eggs are either captive bred or taken from wild nests, and the chicks are placed in the boxes a couple of weeks before they reach their fledge age of six weeks. Until then, the birds are closely looked after and provided food without too much human contact.
Releasing
The box is opened after they have been in there for about 5 to 10 days, which gives them the freedom to do what they want around the box. However, they are not ready to fly yet. At this point they can climb down from their box, flap their wings to get a feel for the wind, and exercise their muscles. Just as they would in the wild, the young stay close to the safe area.
Flight
The first flight is usually taken about 3 days after the hack box is opened, and covers a distance of only a few dozen feet. Afterwards, the young start to fly further away for longer spans. Hacking has an indefinite time period because it depends on the weather conditions and the personality of the bird |
https://en.wikipedia.org/wiki/Die%20%28integrated%20circuit%29 | A die, in the context of integrated circuits, is a small block of semiconducting material on which a given functional circuit is fabricated. Typically, integrated circuits are produced in large batches on a single wafer of electronic-grade silicon (EGS) or other semiconductor (such as GaAs) through processes such as photolithography. The wafer is cut (diced) into many pieces, each containing one copy of the circuit. Each of these pieces is called a die.
There are three commonly used plural forms: dice, dies, and die. To simplify handling and integration onto a printed circuit board, most dies are packaged in various forms.
Manufacturing process
Most dies are composed of silicon and used for integrated circuits. The process begins with the production of monocrystalline silicon ingots. These ingots are then sliced into disks with a diameter of up to 300 mm.
These wafers are then polished to a mirror finish before going through photolithography. In many steps the transistors are manufactured and connected with metal interconnect layers. These prepared wafers then go through wafer testing to test their functionality. The wafers are then sliced and sorted to filter out the faulty dies. Functional dies are then packaged and the completed integrated circuit is ready to be shipped.
Uses
A die can host many types of circuits. One common use case of an integrated circuit die is in the form of a Central Processing Unit (CPU). Through advances in modern technology, the size of the transistor within the die has shrunk exponentially, following Moore's Law. Other uses for dies can range from LED lighting to power semiconductor devices.
Images
See also
Die preparation
Integrated circuit design
Wire bonding and ball bonding |
https://en.wikipedia.org/wiki/Cyclic%20ADP-ribose | Cyclic ADP-ribose, frequently abbreviated as cADPR, is a cyclic adenine nucleotide (like cAMP) with two phosphate groups present on 5' OH of the adenosine (like ADP), further connected to another ribose at the 5' position, which, in turn, closes the cycle by glycosidic bonding to the nitrogen 1 (N1) of the same adenine base (whose position N9 has the glycosidic bond to the other ribose). The N1-glycosidic bond to adenine is what distinguishes cADPR from ADP-ribose (ADPR), the non-cyclic analog. cADPR is produced from nicotinamide adenine dinucleotide (NAD+) by ADP-ribosyl cyclases (EC 3.2.2.5) as part of a second messenger system.
Function
cADPR is a cellular messenger for calcium signaling. It stimulates calcium-induced calcium release at lower cytosolic concentrations of Ca2+. The primary target of cADPR is the endoplasmic reticulum Ca2+ uptake mechanism. cADPR mobilizes Ca2+ from the endoplasmic reticulum by activation of ryanodine receptors, a critical step in muscle contraction.
cADPR also acts as an agonist for the TRPM2 channel, but less potently than ADPR. cADPR and ADPR act synergistically, with both molecules enhancing the action of the other molecule in activating the TRPM2 channel.
Potentiation of Ca2+ release by cADPR is mediated by increased accumulation of Ca2+ in the sarcoplasmic reticulum.
Metabolism
cADPR and ADPR are synthesized from NAD+ by the bifunctional ectoenzymes of the CD38 family (also includes the GPI-anchored CD157 and the specific, monofunctional ADP ribosyl cyclase of the mollusc Aplysia). The same enzymes are also capable of hydrolyzing cADPR to ADPR. Catalysis proceeds via a covalently bound intermediate. The hydrolysis reaction is inhibited by ATP, and cADPR may accumulate. Synthesis and degradation of cADPR by enzymes of the CD38 family involve, respectively, the formation and the hydrolysis of the N1-glycosidic bond. In 2009, the first enzyme able to hydrolyze the phosphoanhydride linkage of cADPR, i.e. the one between the tw |
https://en.wikipedia.org/wiki/Phase%20factor | For any complex number written in polar form (such as ), the phase factor is the complex exponential factor (). The phase factor is a unit complex number, i.e. a complex number of absolute value 1. It is commonly used in quantum mechanics. It is a special case of phasors, which may have arbitrary magnitude (i.e. not necessarily on the unit circle in the complex plane).
The variable θ appearing in such an expression is generally referred to as the phase. Multiplying the equation of a plane wave A·e^{i(k·x − ωt)} by a phase factor e^{iθ} shifts the phase of the wave by θ:
e^{iθ} · A·e^{i(k·x − ωt)} = A·e^{i(k·x − ωt + θ)}.
In quantum mechanics, a phase factor is a complex coefficient e^{iθ} that multiplies a ket |ψ⟩ or bra ⟨φ|. It does not, in itself, have any physical meaning, since the introduction of a phase factor does not change the expectation values of a Hermitian operator. That is, the values of ⟨ψ|A|ψ⟩ and ⟨ψ|e^{−iθ} A e^{iθ}|ψ⟩ are the same. However, differences in phase factors between two interacting quantum states can sometimes be measurable (such as in the Berry phase) and this can have important consequences.
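This invariance is easy to verify numerically. A minimal Python sketch, with an arbitrary made-up Hermitian operator and state (the values carry no physical meaning):

import numpy as np

rng = np.random.default_rng(0)
M = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
A = (M + M.conj().T) / 2                 # a random Hermitian operator
psi = rng.normal(size=3) + 1j * rng.normal(size=3)
psi /= np.linalg.norm(psi)               # a normalized state

theta = 0.7
phi = np.exp(1j * theta) * psi           # same state up to a phase factor

exp_psi = np.vdot(psi, A @ psi).real     # <psi|A|psi>
exp_phi = np.vdot(phi, A @ phi).real     # expectation value after the phase
print(np.isclose(exp_psi, exp_phi))      # True: a global phase is unobservable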
In optics, the phase factor is an important quantity in the treatment of interference.
See also
Berry phase
Bra-ket notation
Euler's formula
Phasor
Plane wave
The circle group U(1)
Notes |
https://en.wikipedia.org/wiki/Nicotinic%20acid%20adenine%20dinucleotide%20phosphate | Nicotinic acid adenine dinucleotide phosphate, (NAADP), is a Ca2+-mobilizing second messenger synthesised in response to extracellular stimuli. Like its mechanistic cousins, IP3 and cyclic adenosine diphosphoribose (Cyclic ADP-ribose), NAADP binds to and opens Ca2+ channels on intracellular organelles, thereby increasing the intracellular Ca2+ concentration which, in turn, modulates sundry cellular processes (see Calcium signalling). Structurally, it is a dinucleotide that only differs from the house-keeping enzyme cofactor, NADP by a hydroxyl group (replacing the nicotinamide amino group) and yet this minor modification converts it into the most potent Ca2+-mobilizing second messenger yet described. NAADP acts across phyla from plants to humans.
Discovery
In their landmark 1987 paper, Hon Cheung Lee and colleagues discovered not one but two Ca2+-mobilizing second messengers, cADPR and NAADP from the effects of nucleotides on Ca2+ release in sea urchin egg homogenates. It turns out that NAADP was a contaminant in commercial sources of NADP, but it was not until 1995 that its structure was solved. The first demonstration that NAADP could act in mammalian cells (pancreas) came four years later. Subsequently, NAADP has been detected in sources as diverse as human sperm, red and white blood cells, liver, and pancreas, to name but a few.
Synthesis and degradation
Kinetics and transduction
The first demonstration that NAADP levels increase in response to an extracellular stimulus arose from studying sea urchin fertilization (NAADP changed in both the eggs and sperm upon contact). Subsequently, other cell types have followed suit, as exemplified by the pancreas (acinar and beta cells), T-cells, and smooth muscle. Levels increase very rapidly — and possibly precede the increase in the other messengers IP3 and cADPR— but can be very transient (spiking and returning to basal levels within seconds). The transduction mechanisms that couple cell stimuli to such NAADP increa |
https://en.wikipedia.org/wiki/Peristimulus%20time%20histogram | In neurophysiology, peristimulus time histogram and poststimulus time histogram, both abbreviated PSTH or PST histogram, are histograms of the times at which neurons fire. It is also sometimes called pre event time histogram or PETH. These histograms are used to visualize the rate and timing of neuronal spike discharges in relation to an external stimulus or event. The peristimulus time histogram is sometimes called perievent time histogram, and post-stimulus and peri-stimulus are often hyphenated.
The prefix peri, meaning around, is typically used in the case of periodic stimuli, in which case the PSTH shows neuron firing times wrapped to one cycle of the stimulus. The prefix post is used when the PSTH shows the timing of neuron firings in response to a stimulus event or onset.
To make a PSTH, a spike train recorded from a single neuron is aligned with the onset, or a fixed phase point, of an identical stimulus repeatedly presented to an animal. The aligned sequences are superimposed in time, and then used to construct a histogram.
Construction procedure
Align spike sequences with the onset of a stimulus that is repeated n times. For periodic stimuli, wrap the response sequence back to time zero after each time period T, and count n as the total number of periods of data.
Divide the stimulus period or observation period T into N bins of size Δ.
Count the number of spikes k_i from all n sequences that fall in bin i.
Draw a bar-graph histogram with the bar-height of bin i given by k_i/(nΔ), in units of estimated spikes per second at time iΔ.
The optimal bin size (assuming an underlying Poisson point process) Δ is a minimizer of the formula (2k̄ − v)/Δ²,
where k̄ and v are the mean and variance of k_i. |
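The construction above translates directly into code. A minimal Python sketch (the function names psth and bin_cost are ours; spike times are assumed to be already aligned to stimulus onset and pooled across trials):

import numpy as np

def psth(aligned_spike_times, n_trials, T, n_bins):
    """Estimated firing rate per bin: k_i / (n * delta), spikes per second."""
    delta = T / n_bins
    counts, edges = np.histogram(aligned_spike_times, bins=n_bins, range=(0.0, T))
    return counts / (n_trials * delta), edges

def bin_cost(aligned_spike_times, n_trials, T, n_bins):
    """Cost (2*mean - var)/delta^2; scan n_bins and keep the minimizer."""
    delta = T / n_bins
    counts, _ = np.histogram(aligned_spike_times, bins=n_bins, range=(0.0, T))
    return (2 * counts.mean() - counts.var()) / delta**2

# Made-up pooled spike times (s) from n = 20 trials over T = 1 s:
times = np.array([0.11, 0.12, 0.35, 0.36, 0.37, 0.80])
rate, edges = psth(times, n_trials=20, T=1.0, n_bins=10)
print(rate)   # spikes per second in each 0.1 s bin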
https://en.wikipedia.org/wiki/Oncogenomics | Oncogenomics is a sub-field of genomics that characterizes cancer-associated genes. It focuses on genomic, epigenomic and transcript alterations in cancer.
Cancer is a genetic disease caused by accumulation of DNA mutations and epigenetic alterations leading to unrestrained cell proliferation and neoplasm formation. The goal of oncogenomics is to identify new oncogenes or tumor suppressor genes that may provide new insights into cancer diagnosis, predicting clinical outcome of cancers and new targets for cancer therapies. The success of targeted cancer therapies such as Gleevec, Herceptin and Avastin raised the hope for oncogenomics to elucidate new targets for cancer treatment.
Besides understanding the underlying genetic mechanisms that initiate or drive cancer progression, oncogenomics targets personalized cancer treatment. Cancer develops due to DNA mutations and epigenetic alterations that accumulate randomly. Identifying and targeting the mutations in an individual patient may lead to increased treatment efficacy.
The completion of the Human Genome Project facilitated the field of oncogenomics and increased the abilities of researchers to find oncogenes. Sequencing technologies and global methylation profiling techniques have been applied to the study of oncogenomics.
History
The genomics era began in the 1990s, with the generation of DNA sequences of many organisms. In the 21st century, the completion of the Human Genome Project enabled the study of functional genomics and examining tumor genomes. Cancer is a main focus.
The epigenomics era largely began more recently, about 2000. One major source of epigenetic change is altered methylation of CpG islands at the promoter region of genes (see DNA methylation in cancer). A number of recently devised methods can assess the DNA methylation status in cancers versus normal tissues. Some methods assess methylation of CpGs located in different classes of loci, including CpG islands, shores, and shelves as wel |
https://en.wikipedia.org/wiki/Incipient%20wetness%20impregnation | Incipient wetness impregnation (IW or IWI), also called capillary impregnation or dry impregnation, is a commonly used technique for the synthesis of heterogeneous catalysts. Typically, the active metal precursor is dissolved in an aqueous or organic solution. Then the metal-containing solution is added to a catalyst support containing the same pore volume as the volume of the solution that was added. Capillary action draws the solution into the pores. Solution added in excess of the support pore volume causes the solution transport to change from a capillary action process to a diffusion process, which is much slower. The catalyst can then be dried and calcined to drive off the volatile components within the solution, depositing the metal on the catalyst surface. The maximum loading is limited by the solubility of the precursor in the solution. The concentration profile of the impregnated compound depends on the mass transfer conditions within the pores during impregnation and drying. |
https://en.wikipedia.org/wiki/Phosphatidylmyo-inositol%20mannosides | Phosphatidylmyo-inositol Mannosides (PIMs) are a family of glycolipids found in the cell wall of Mycobacterium tuberculosis. PIMs influence the interaction of the immune system with M. tuberculosis, and mice that develop antibodies for this family of glycolipids are better at sustaining or defeating a M. tuberculosis infection. Thus, PIMs are important glycolipids associated with M. tuberculosis, but are also likely involved with the process by which M. tuberculosis subverts the immune system. |
https://en.wikipedia.org/wiki/Conditional%20event%20algebra | A standard, Boolean algebra of events is a set of events related to one another by the familiar operations and, or, and not. A conditional event algebra (CEA) contains not just ordinary events but also conditional events, which have the form "if A, then B". The usual purpose of a CEA is to enable the defining of a probability function, P, that satisfies the equation P(if A then B) = P(A and B) / P(A).
Motivation
In standard probability theory, an event is a set of outcomes, any one of which would be an occurrence of the event. P(A), the probability of event A, is the sum of the probabilities of all A-outcomes, P(B) is the sum of the probabilities of all B-outcomes, and P(A and B) is the sum of the probabilities of all outcomes that are both A-outcomes and B-outcomes. In other words, and, customarily represented by the logical symbol ∧, is interpreted as set intersection: P(A ∧ B) = P(A ∩ B). In the same vein, or, ∨, becomes set union, ∪, and not, ¬, becomes set complementation, ′. Any combination of events using the operations and, or, and not is also an event, and assigning probabilities to all outcomes generates a probability for every event. In technical terms, this means that the set of events and the three operations together constitute a Boolean algebra of sets, with an associated probability function.
P(if A, then B) is normally interpreted not as an ordinary probability—not, specifically, as P(A′ ∨ B)—but as the conditional probability of B given A, P(B | A) = P(A ∧ B) / P(A). This raises a question: what about a probability like P(if A, then B, and if C, then D)? For this, there is no standard answer. What would be needed, for consistency, is a treatment of if-then as a binary operation, →, such that for conditional events A → B and C → D, P(A → B) = P(B | A), P(C → D) = P(D | C), and P((A → B) ∧ (C → D)) is well-defined and reasonable. This treatment is what conditional event algebras try to provide.
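The target equation can be illustrated on a finite outcome space. A minimal Python sketch using a fair six-sided die (the events chosen are arbitrary examples):

from fractions import Fraction

outcomes = range(1, 7)                       # fair six-sided die
P = {w: Fraction(1, 6) for w in outcomes}

A = {w for w in outcomes if w % 2 == 0}      # event "even"
B = {w for w in outcomes if w >= 4}          # event "at least 4"

P_A = sum(P[w] for w in A)
P_AB = sum(P[w] for w in A & B)
print(P_AB / P_A)                            # P(if A then B) = P(A and B)/P(A) = 2/3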
Types of conditional event algebra
Ideally, a con |
https://en.wikipedia.org/wiki/Bracht%E2%80%93Wachter%20bodies | Bracht–Wachter bodies are a finding in infective endocarditis consisting of yellow-white miliary spots in the myocardium.
Histologically, these are collections of chronic inflammatory cells, mainly lymphocytes and histiocytes.
History
They were described by two Germans, Erich Franz Eugen Bracht, a pathologist and obstetrician-gynecologist, and Hermann Julius Gustav Wächter, a physician.
Related findings
Other findings in infective endocarditis are:
Osler's nodes
Janeway lesions
Roth's spots
Flea-bitten kidneys (pyemic spots) |
https://en.wikipedia.org/wiki/Elasticity%20tensor | The elasticity tensor is a fourth-rank tensor describing the stress-strain relation in
a linear elastic material. Other names are elastic modulus tensor and stiffness tensor. Common symbols include and .
The defining equation can be written as
where and are the components of the Cauchy stress tensor and infinitesimal strain tensor, and are the components of the elasticity tensor. Summation over repeated indices is implied. This relationship can be interpreted as a generalization of Hooke's law to a 3D continuum.
A general fourth-rank tensor in 3D has 3⁴ = 81 independent components, but the elasticity tensor has at most 21 independent components. This fact follows from the symmetry of the stress and strain tensors, together with the requirement that the stress derives from an elastic energy potential. For isotropic materials, the elasticity tensor has just two independent components, which can be chosen to be the bulk modulus and shear modulus.
Definition
The most general linear relation between two second-rank tensors T, E is
T^{ij} = C^{ijkl} E_{kl},
where C^{ijkl} are the components of a fourth-rank tensor C. The elasticity tensor is defined as C for the case where T and E are the stress and strain tensors, respectively.
The compliance tensor K is defined from the inverse stress–strain relation:
E^{ij} = K^{ijkl} T_{kl}.
The two are related by
K_{ijpq} C^{pqkl} = (1/2)(δ_i^k δ_j^l + δ_i^l δ_j^k),
where δ is the Kronecker delta.
Unless otherwise noted, this article assumes C is defined from the stress–strain relation of a linear elastic material, in the limit of small strain.
Special cases
Isotropic
For an isotropic material, C^{ijkl} simplifies to
C^{ijkl} = λ g^{ij} g^{kl} + μ (g^{ik} g^{jl} + g^{il} g^{jk}),
where λ and μ are scalar functions of the material coordinates X, and g is the metric tensor in the reference frame of the material. In an orthonormal Cartesian coordinate basis, there is no distinction between upper and lower indices, and the metric tensor can be replaced with the Kronecker delta:
C_{ijkl} = λ δ_{ij} δ_{kl} + μ (δ_{ik} δ_{jl} + δ_{il} δ_{jk}).
Substituting the first equation into the stress–strain relation and summing over repeated indices gives
T^{ij} = λ (Tr E) g^{ij} + 2μ E^{ij},
where Tr E is the trace of E.
In this |
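The Cartesian isotropic form above is easy to check numerically. A minimal Python sketch (the values of lam, mu, and the strain tensor are made-up):

import numpy as np

lam, mu = 1.2, 0.8
d = np.eye(3)                                # Kronecker delta

# C_ijkl = lam*d_ij*d_kl + mu*(d_ik*d_jl + d_il*d_jk)
C = (lam * np.einsum('ij,kl->ijkl', d, d)
     + mu * (np.einsum('ik,jl->ijkl', d, d) + np.einsum('il,jk->ijkl', d, d)))

E = np.array([[1.0, 0.3, 0.0],
              [0.3, 2.0, 0.1],
              [0.0, 0.1, 0.5]])              # a symmetric strain tensor

T = np.einsum('ijkl,kl->ij', C, E)           # stress from the defining equation
T_direct = lam * np.trace(E) * d + 2 * mu * E
print(np.allclose(T, T_direct))              # True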
https://en.wikipedia.org/wiki/Control%20track | A control track is a track that runs along an outside edge of a standard analog videotape (including VHS). The control track encodes a series of pulses, each pulse corresponding to the beginning of each frame. This allows the video tape player to synchronize its scan speed and tape speed to the speed of the recording. Thus, the recorded control track defines the speed of playback (e.g. SP, LP, EP, etc.), and it is also what drives the relative counter clock that most VCRs have.
Significance and use
The control track is used to fine-tune the tape speed during playback, so that the high-speed rotating heads remain exactly on their helical tracks rather than somewhere between two adjacent tracks (known as "tracking"). Since good tracking depends on precise distances between the rotating drum and the fixed control/audio head reading the linear tracks, which usually vary by a couple of micrometers between machines due to manufacturing tolerances, most VCRs offer tracking adjustment, either manual or automatic, to correct such mismatches.
The control track is also used to hold index marks, which were normally written at the beginning of each recording session, and can be found using the VCR's index search function: this will fast-wind forward or backward to the nth specified index mark, and resume playback from there. At times, higher-end VCRs provided functions for the user to manually add and remove these marks — so that, for example, they coincide with the actual start of the television program — but this feature later became hard to find.
By the late 1990s, some high-end VCRs offered more sophisticated indexing. For example, Panasonic's Tape Library system assigned an ID number to each cassette, and logged recording information (channel, date, time and optional program title entered by the user) both on the cassette and in the VCR's memory for up to 900 recordings (600 with titles).
Control track damage
A gap in the control track signal of a videotape usua |
https://en.wikipedia.org/wiki/Wigner%20D-matrix | The Wigner D-matrix is a unitary matrix in an irreducible representation of the groups SU(2) and SO(3). It was introduced in 1927 by Eugene Wigner, and plays a fundamental role in the quantum mechanical theory of angular momentum. The complex conjugate of the D-matrix is an eigenfunction of the Hamiltonian of spherical and symmetric rigid rotors. The letter stands for Darstellung, which means "representation" in German.
Definition of the Wigner D-matrix
Let J_x, J_y, J_z be generators of the Lie algebra of SU(2) and SO(3). In quantum mechanics, these three operators are the components of a vector operator known as angular momentum. Examples are the angular momentum of an electron in an atom, electronic spin, and the angular momentum of a rigid rotor.
In all cases, the three operators satisfy the following commutation relations,
[J_x, J_y] = i J_z,  [J_z, J_x] = i J_y,  [J_y, J_z] = i J_x,
where i is the purely imaginary number and the Planck constant ħ has been set equal to one. The Casimir operator
J² = J_x² + J_y² + J_z²
commutes with all generators of the Lie algebra. Hence, it may be diagonalized together with J_z.
This defines the spherical basis used here. That is, there is a complete set of kets |j m⟩ (i.e. an orthonormal basis of joint eigenvectors labelled by quantum numbers that define the eigenvalues) with
J² |j m⟩ = j(j + 1) |j m⟩,  J_z |j m⟩ = m |j m⟩,
where j = 0, 1/2, 1, 3/2, 2, ... for SU(2), and j = 0, 1, 2, ... for SO(3). In both cases, m = −j, −j + 1, ..., j.
A 3-dimensional rotation operator can be written as
R(α, β, γ) = e^{−iα J_z} e^{−iβ J_y} e^{−iγ J_z},
where α, β, γ are Euler angles (characterized by the keywords: z-y-z convention, right-handed frame, right-hand screw rule, active interpretation).
The Wigner D-matrix is a unitary square matrix of dimension 2j + 1 in this spherical basis with elements
D^j_{m′m}(α, β, γ) = ⟨j m′| R(α, β, γ) |j m⟩ = e^{−im′α} d^j_{m′m}(β) e^{−imγ},
where
d^j_{m′m}(β) = ⟨j m′| e^{−iβ J_y} |j m⟩
is an element of the orthogonal Wigner's (small) d-matrix.
That is, in this basis,
D^j_{m′m}(α, 0, 0) = e^{−im′α} δ_{m′m}
is diagonal, like the γ matrix factor, but unlike the above β factor.
Wigner (small) d-matrix
Wigner gave the following expression:
d^j_{m′m}(β) = [(j + m′)! (j − m′)! (j + m)! (j − m)!]^{1/2} Σ_s [(−1)^{m′−m+s} / ((j + m − s)! s! (m′ − m + s)! (j − m′ − s)!)] · (cos(β/2))^{2j+m−m′−2s} (sin(β/2))^{m′−m+2s}.
The sum over s is over such values that the factorials are nonnegative, i.e. s ranges from max(0, m − m′) to min(j + m, j − m′).
Note: The d-matrix elements defined here are real. |
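The factorial-sum formula can be evaluated directly. A minimal Python sketch (restricted, for simplicity, to integer j, m′, m; the function name wigner_small_d is ours):

import math

def wigner_small_d(j, mp, m, beta):
    """Wigner's (small) d-matrix element d^j_{m'm}(beta) via the factorial sum."""
    pref = math.sqrt(math.factorial(j + mp) * math.factorial(j - mp)
                     * math.factorial(j + m) * math.factorial(j - m))
    total = 0.0
    # s ranges over values keeping all factorial arguments nonnegative:
    for s in range(max(0, m - mp), min(j + m, j - mp) + 1):
        den = (math.factorial(j + m - s) * math.factorial(s)
               * math.factorial(mp - m + s) * math.factorial(j - mp - s))
        total += ((-1) ** (mp - m + s) / den
                  * math.cos(beta / 2) ** (2 * j + m - mp - 2 * s)
                  * math.sin(beta / 2) ** (mp - m + 2 * s))
    return pref * total

# Sanity check: d^1_{00}(beta) equals cos(beta):
print(round(wigner_small_d(1, 0, 0, 0.3), 6), round(math.cos(0.3), 6))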
https://en.wikipedia.org/wiki/Semantically%20Interlinked%20Online%20Communities | Semantically Interlinked Online Communities Project (SIOC ( )) is a Semantic Web technology. SIOC provides methods for interconnecting discussion methods such as blogs, forums and mailing lists to each other. It consists of the SIOC ontology, an open-standard machine-readable format for expressing the information contained both explicitly and implicitly in Internet discussion methods, of SIOC metadata producers for a number of popular blogging platforms and content management systems, and of storage and browsing/searching systems for leveraging this SIOC data.
The SIOC vocabulary is based on RDF and is defined using RDFS. SIOC documents may use other existing ontologies to enrich the information described. Additional information about the creator of the post can be described using FOAF Vocabulary and the foaf:maker property. Rich content of the post (e.g., an HTML representation) can be described using the AtomOWL or RSS 1.0 Content module.
The SIOC project was started in 2004 by John Breslin and Uldis Bojars at DERI, NUI Galway. In 2007, SIOC became a W3C Member Submission.
Example
<sioc:Post rdf:about="http://johnbreslin.com/blog/2006/09/07/creating-connections-between-discussion-clouds-with-sioc/">
<dc:title>Creating connections between discussion clouds with SIOC</dc:title>
<dcterms:created>2006-09-07T09:33:30Z</dcterms:created>
<sioc:has_container rdf:resource="http://johnbreslin.com/blog/index.php?sioc_type=site#weblog"/>
<sioc:has_creator>
<sioc:UserAccount rdf:about="http://johnbreslin.com/blog/author/cloud/" rdfs:label="Cloud">
<rdfs:seeAlso rdf:resource="http://johnbreslin.com/blog/index.php?sioc_type=user&amp;sioc_id=1"/>
</sioc:UserAccount>
</sioc:has_creator>
<foaf:maker rdf:resource="http://johnbreslin.com/blog/author/cloud/#foaf"/>
<sioc:content>SIOC provides a unified vocabulary for content and interaction description: a semantic layer that can co-exist with existing discussion platforms.
</s |
https://en.wikipedia.org/wiki/Parry%E2%80%93Sullivan%20invariant | In mathematics, the Parry–Sullivan invariant (or Parry–Sullivan number) is a numerical quantity of interest in the study of incidence matrices in graph theory, and of certain one-dimensional dynamical systems. It provides a partial classification of non-trivial irreducible incidence matrices.
It is named after the English mathematician Bill Parry and the American mathematician Dennis Sullivan, who introduced the invariant in a joint paper published in the journal Topology in 1975.
Definition
Let A be an n × n incidence matrix. Then the Parry–Sullivan number of A is defined to be
PS(A) = det(I − A),
where I denotes the n × n identity matrix.
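A minimal Python sketch of the definition (the matrix below is an arbitrary made-up example, and the function name parry_sullivan is ours):

import numpy as np

def parry_sullivan(A):
    """Parry-Sullivan number PS(A) = det(I - A) for an n x n incidence matrix."""
    A = np.asarray(A)
    return round(np.linalg.det(np.eye(A.shape[0]) - A))

print(parry_sullivan([[1, 1],
                      [1, 0]]))   # det([[0, -1], [-1, 1]]) = -1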
Properties
It can be shown that, for nontrivial irreducible incidence matrices, flow equivalence is completely determined by the Parry–Sullivan number and the Bowen–Franks group. |
https://en.wikipedia.org/wiki/Automatic%20sequence | In mathematics and theoretical computer science, an automatic sequence (also called a k-automatic sequence or a k-recognizable sequence when one wants to indicate that the base of the numerals used is k) is an infinite sequence of terms characterized by a finite automaton. The n-th term of an automatic sequence a(n) is a mapping of the final state reached in a finite automaton accepting the digits of the number n in some fixed base k.
An automatic set is a set of non-negative integers S for which the sequence of values of its characteristic function χS is an automatic sequence; that is, S is k-automatic if χS(n) is k-automatic, where χS(n) = 1 if n ∈ S and 0 otherwise.
Definition
Automatic sequences may be defined in a number of ways, all of which are equivalent. Four common definitions are as follows.
Automata-theoretic
Let k be a positive integer, and let D = (Q, Σk, δ, q0, Δ, τ) be a deterministic finite automaton with output, where
Q is the finite set of states;
the input alphabet Σk consists of the set {0,1,...,k-1} of possible digits in base-k notation;
δ : Q × Σk → Q is the transition function;
q0 ∈ Q is the initial state;
the output alphabet Δ is a finite set; and
τ : Q → Δ is the output function mapping from the set of internal states to the output alphabet.
Extend the transition function δ from acting on single digits to acting on strings of digits by defining the action of δ on a string s consisting of digits s1s2...st as:
δ(q,s) = δ(δ(q, s1s2...st-1), st).
Define a function a from the set of positive integers to the output alphabet Δ as follows:
a(n) = τ(δ(q0,s(n))),
where s(n) is n written in base k. Then the sequence a = a(1)a(2)a(3)... is a k-automatic sequence.
An automaton reading the base k digits of s(n) starting with the most significant digit is said to be direct reading, while an automaton starting with the least significant digit is reverse reading. The above definition holds whether s(n) is direct or reverse reading.
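As a concrete illustration, the Thue–Morse sequence (whose n-th term is the parity of the number of 1s in the binary expansion of n) is a classic 2-automatic sequence. A minimal direct-reading Python sketch, with the helper name automatic_term being ours:

def automatic_term(n, k, delta, q0, tau):
    """n-th term of a k-automatic sequence from a DFAO (direct reading)."""
    digits = []
    while n:
        digits.append(n % k)
        n //= k
    state = q0
    for d in reversed(digits):       # feed the most significant digit first
        state = delta[state][d]
    return tau[state]

# DFAO for Thue-Morse: states track the parity of 1-digits read so far.
delta = {0: {0: 0, 1: 1},
         1: {0: 1, 1: 0}}
tau = {0: 0, 1: 1}
print([automatic_term(n, 2, delta, 0, tau) for n in range(8)])  # [0,1,1,0,1,0,0,1]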
Substitution
L |
https://en.wikipedia.org/wiki/Tris-buffered%20saline | Tris-buffered saline (TBS) is a buffer used in some biochemical techniques to maintain the pH within a relatively narrow range. Tris (with HCl) has a slightly alkaline buffering capacity in the 7–9.2 range. The conjugate acid of Tris has a pKa of 8.07 at 25 °C. The pKa declines approximately 0.03 units per degree Celsius rise in temperature. This can lead to relatively dramatic pH shifts when there are shifts in solution temperature. Sodium chloride concentration may vary from 100 to 200 mM, tris concentration from 5 to 100 mM and pH from 7.2 to 8.0. A common formulation of TBS is 150 mM NaCl, 50 mM Tris-HCl, pH 7.6. TBS can also be prepared by using commercially made TBS buffer tablets or pouches.
Applications
TBS is isotonic and non-toxic. It can be used to dilute substances used in laboratory experiments, and additives can be included to extend a formulation's functionality.
TBS is often used in immuno-blotting for both membrane washing and antibody dilution. |
https://en.wikipedia.org/wiki/Rectus%20sheath | The rectus sheath (also called the rectus fascia) is a tough fibrous compartment formed by the aponeuroses of the transverse abdominal muscle, and the internal and external oblique muscles. It contains the rectus abdominis and pyramidalis muscles, as well as vessels and nerves.
Structure
The rectus sheath extends between the inferior costal margin and costal cartilages of ribs 5-7 superiorly, and the pubic crest inferiorly.
Studies indicate that all three aponeuroses constituting the rectus sheath are in fact bilaminar.
Below the costal margin
Superficial/anterior to the anterior layer of the rectus sheath are the following two layers:
Camper's fascia (anterior part of superficial fascia)
Scarpa's fascia (posterior part of the superficial fascia)
Deep/posterior to the posterior layer of the rectus sheath (where present) are the following three layers:
transversalis fascia
extraperitoneal fat
parietal peritoneum
Above the costal margin
Since the tendons of the internal oblique and transversus abdominis only reach as high as the costal margin, it follows that above this level the sheath of the rectus is deficient behind, the muscle resting directly on the cartilages of the ribs, and being covered only by the tendons of the external obliques.
Clinical significance
The rectus sheath is a useful attachment for surgical meshes during abdominal surgery. This has a higher risk of infection than many other attachment sites.
Additional images |
https://en.wikipedia.org/wiki/Adobe%20AIR | Adobe AIR (also known as Adobe Integrated Runtime and codenamed Apollo) is a cross-platform runtime system currently developed by Harman International, in collaboration with Adobe Inc., for building desktop applications and mobile applications, programmed using Adobe Animate, ActionScript, and optionally Apache Flex. It was originally released in 2008. The runtime supports installable applications on Windows, macOS, and mobile operating systems, including Android, iOS, and BlackBerry Tablet OS.
AIR is a runtime environment that allows Adobe Animate content and ActionScript 3.0 coders to construct applications and video games that run as a stand-alone executable and behave similar to a native application on supported platforms. An HTML5 application used in a browser does not require installation, while AIR applications require installation from an installer file (Windows and OS X) or the appropriate App Store (iOS and Android). AIR applications have unrestricted access to local storage and file systems, while browser-based applications only have access to individual files selected by users.
AIR internally uses a shared codebase with the Flash Player rendering engine and ActionScript 3.0 as the primary programming language. Applications must specifically be built for AIR to use additional features provided, such as multi-touch, file system integration, native client extensions, integration with Taskbar or Dock, and access to accelerometer and GPS devices. HTML5 applications may run on the WebKit engine included in AIR.
Notable applications built with Adobe AIR include eBay Desktop, Pandora One desktop, TweetDeck, the former Adobe Media Player, Angry Birds, and Machinarium, among other multimedia and task management applications. According to Adobe, over 100,000 unique applications have been built on AIR, and over 1 billion installations of the same were logged from users across the world, as of May 2014. Adobe AIR was voted as the Best Mobile Application Developme |
https://en.wikipedia.org/wiki/E-patient | An e-patient is a health consumer who participates fully in his/her medical care, primarily by gathering information about medical conditions that impact them and their families, using the Internet and other digital tools. The term encompasses those who seek guidance for their own ailments and the friends and family members who go online on their behalf. E-patients report two effects of their health research: "better health information and services, and different, but not always better, relationships with their doctors."
E-patients are active in their care and demonstrate the power of the participatory medicine or Health 2.0/Medicine 2.0 model of care. The "e" can stand for "electronic" but has also been used to refer to other terms, such as "equipped", "enabled", "empowered" and "expert".
The current state of knowledge on the impact of e-patients on the healthcare system and the quality of care received indicates:
A growing number of people say the internet played a crucial or important role as they helped another person cope with a major illness.
Many clinicians underestimated the benefits and overestimated the risks of online health resources for patients.
Medical online support groups are an important healthcare resource.
"the net friendliness of clinicians and provider organizations—as rated by the e-patients they serve—is becoming an important new aspect of healthcare quality."
According to one study, the advent of patients as partners is one of the most important cultural medical revolutions of the past century.
In order to understand the impact of the e-patient, clinicians will likely need to move beyond "pre-internet medical constructs".
Medical education must adapt to take the e-patient into account, and to prepare students for medical practice that includes the e-patient.
A 2011 study of European e-patients found that they tended to be "inquisitive and autonomous" and that they noted that the number of e-patients in Europe appeared to be rising. A |
https://en.wikipedia.org/wiki/Architectural%20model | An architectural model is a type of scale model made to study aspects of an architectural design or to communicate design intent. They are made using a variety of materials including paper, plaster, plastic, resin, wood, glass, and metal.
Models are built either with traditional handcraft techniques or via 3D printing technologies such as stereolithography, fused filament fabrication, and selective laser sintering.
History
The use of architectural models dates to pre-history. Some of the oldest standing models were found in Malta at Tarxien Temples. Those models are now stored at the National Museum of Archaeology in Malta.
Purpose
Architectural models are used by architects for a range of purposes, including:
Ad hoc or "sketch" models are sometimes made to study the interaction of volumes, different viewpoints, or concepts during the design process. They are useful in explaining a complicated or unusual design to builders, and they also serve as a focus for discussion between architects, engineers, and town planners.
Presentation models can be used to exhibit, visualize, or sell a final design.
A model also serves as a show piece. Once a building is finished, the model is sometimes featured in a common area of the building.
Types of models include:
Exterior models are models of buildings that usually include some landscaping or civic spaces around the building.
Interior models are models showing interior space planning, finishes, colors, furniture, and beautification.
Landscaping design models are models of landscape design and development, representing features such as walkways, small bridges, pergolas, vegetation patterns, and beautification. Landscape design models usually represent public spaces and, in some cases, include buildings as well.
Urban models are typically built at a much smaller scale (starting from 1:500 and less, 1:700, 1:1000, 1:1200, 1:2000, and 1:20,000), representing several city blocks, even a town or village, a large resort, a cam |
https://en.wikipedia.org/wiki/Diel%20vertical%20migration | Diel vertical migration (DVM), also known as diurnal vertical migration, is a pattern of movement used by some organisms, such as copepods, living in the ocean and in lakes. The word "diel" (IPA: , ) comes from , and means a 24-hour period. The migration occurs when organisms move up to the uppermost layer of the sea at night and return to the bottom of the daylight zone of the oceans or to the dense, bottom layer of lakes during the day. It is important to the functioning of deep-sea food webs and the biologically driven sequestration of carbon.
In terms of biomass, it is the largest synchronous migration in the world. It is not restricted to any one taxon as examples are known from crustaceans (copepods), molluscs (squid), and ray-finned fishes (trout).
The phenomenon may be advantageous for a number of reasons, most typically to access food and avoid predators.
It is triggered by various stimuli, the most prominent being response to changes in light intensity, though evidence suggests that biological clocks are an underlying stimulus as well. While this mass migration is generally nocturnal, with the animals ascending from the depths at nightfall and descending at sunrise, the timing can be altered in response to the different cues and stimuli that trigger it. Some unusual events impact vertical migration: DVM can be absent during the midnight sun in Arctic regions and vertical migration can occur suddenly during a solar eclipse. The phenomenon also demonstrates cloud-driven variations.
The common swift is an exception among birds in that it ascends and descends into high altitudes at dusk and dawn, similar to the vertical migration of aquatic lifeforms.
Discovery
The phenomenon was first documented by French naturalist Georges Cuvier in 1817. He noted that daphnia, a type of plankton, appeared and disappeared according to a diurnal pattern.
During World War II the U.S. Navy was taking sonar readings of the ocean when they discovered the deep scatteri |
https://en.wikipedia.org/wiki/Need%20for%20cognition | The need for cognition (NFC), in psychology, is a personality variable reflecting the extent to which individuals are inclined towards effortful cognitive activities.
Need for cognition has been variously defined as "a need to structure relevant situations in meaningful, integrated ways" and "a need to understand and make reasonable the experiential world". Higher NFC is associated with increased appreciation of debate, idea evaluation, and problem solving. Those with a high need for cognition may be inclined towards high elaboration. Those with a lower need for cognition may display the opposite tendencies, and may process information more heuristically, often through low elaboration.
Need for cognition is closely related to the five factor model domain openness to experience, typical intellectual engagement, and epistemic curiosity (see below).
History
Cohen, Stotland and Wolfe (1955), in their work on individual differences in cognitive motivation, identified a "need for cognition" which they defined as "the individual's need to organize his experience meaningfully", the "need to structure relevant situations in meaningful, integrated ways", and "need to understand and make reasonable the experiential world" (p. 291). They argued that, if this "need" were frustrated, it would generate "feelings of tension and deprivation" that would instigate "active efforts to structure the situation and increase understanding" (p. 291), though the particular situations arousing and satisfying the need may vary (p. 291). Cohen argued that even in structured situations, people high in NFC see ambiguity and strive for higher standards of cognitive clarity.
Cohen and colleagues themselves identified multiple prior identifications of need for cognition, citing works by Murphy, Maslow, Katz, Harlow and Asch. They distinguished their concept from the apparently similar "intolerance of ambiguity" proposed by Frenkel-Brunswik, arguing that NFC does not reflect the need to experien |
https://en.wikipedia.org/wiki/Automation%20and%20Remote%20Control | Automation and Remote Control () is a Russian scientific journal published by MAIK Nauka/Interperiodica Press and distributed in English by Springer Science+Business Media.
The journal was established in April 1936 by the USSR Academy of Sciences Department of Control Processes Problems. Cofounders were the Trapeznikov Institute of Control Sciences and the Institute of Information Transmission Problems. The journal covers research on control theory problems and applications. The editor-in-chief is Andrey A. Galyaev. According to the Journal Citation Reports, the journal has a 2022 impact factor of 0.7.
History
The journal was established in April 1936 and published bimonthly. Since 1956 the journal has been a monthly publication and was translated into English and published in the United States under the title Automation and Remote Control by Plenum Publishing Corporation. During its existence, the scope of the journal substantially evolved and expanded to reflect virtually all subjects concerned in one way or another with the current science of automation and control systems. The journal publishes surveys, original papers, and short communications. |
https://en.wikipedia.org/wiki/Cultural%20universal | A cultural universal (also called an anthropological universal or human universal) is an element, pattern, trait, or institution that is common to all known human cultures worldwide. Taken together, the whole body of cultural universals is known as the human condition. Evolutionary psychologists hold that behaviors or traits that occur universally in all cultures are good candidates for evolutionary adaptations. Some anthropological and sociological theorists that take a cultural relativist perspective may deny the existence of cultural universals: the extent to which these universals are "cultural" in the narrow sense, or in fact biologically inherited behavior is an issue of "nature versus nurture". Prominent scholars on the topic include Emile Durkheim, George Murdock, Claude Lévi-Strauss, and Donald Brown.
Donald Brown's list in Human Universals
In his book Human Universals (1991), Donald Brown defines human universals as comprising "those features of culture, society, language, behavior, and psyche for which there are no known exceptions", providing a list of hundreds of items he suggests as universal. Among the cultural universals listed by Donald Brown are:
Language and cognition
Society
Beliefs
Technology
Shelter
Control of fire
Tools, tool making
Weapons, spear
Containers
Cooking
Lever
Rope
Non-nativist explanations
The observation of the same or similar behavior in different cultures does not prove that they are the results of a common underlying psychological mechanism. One possibility is that they may have been invented independently due to a common practical problem.
Outside influence could be an explanation for some cultural universals. This does not preclude multiple independent inventions of civilization and is therefore not the same thing as hyperdiffusionism; it merely means that cultural universals are not proof of innateness.
See also
Animal culture
Archetype
Biocultural anthropology
Culture
Social learning
Social norm |
https://en.wikipedia.org/wiki/Leccinum%20aurantiacum | Leccinum aurantiacum is a species of fungus in the genus Leccinum found in forests of Eurasia and North America. It has a large, characteristically red-capped fruiting body. In North America, it is sometimes referred to by the common name red-capped scaber stalk. Some uncertainties exist regarding the taxonomic classification of this species in Europe and North America. It is considered edible, but must be cooked thoroughly.
Description
The cap is orange-red and measures across. Its flesh is white, bruising at first burgundy, then grayish or purple-black. The underside of the cap has very small, whitish pores that bruise olive-brown. The stem measures tall and thick and can bruise blue-green. It is whitish, with short, rigid projections or scabers that turn to brown to black with age.
Distribution and habitat
L. aurantiacum can be found fruiting during summer and autumn in forests throughout Europe and North America. The association between fungus and host tree is mycorrhizal. In Europe, it has traditionally been associated with poplar trees. Some debate exists about the classification of L. aurantiacum and L. quercinum as separate species. According to authors who do not recognise the distinction, L. aurantiacum is also found among oak trees. Additionally, L. aurantiacum has been recorded with various other deciduous trees, including beech, birch, chestnut, willow, and trees of the genus Tilia. L. aurantiacum is not known to associate with conifers in Europe.
North American populations have been recorded in coniferous and deciduous forests, though it remains uncertain whether collections from coniferous forests are instead L. vulpinum. In addition, L. aurantiacum may be absent altogether from North America, with collections from deciduous forests attributed to the other North American species L. insigne and L. brunneum.
Edibility
This is a favorite species for eating and can be prepared like other edible boletes. Its flesh turns very dark on cooking. Li |
https://en.wikipedia.org/wiki/Ectomesenchyme | Ectomesenchyme has properties similar to mesenchyme. The origin of the ectomesenchyme is disputed: it either arises, like mesenchyme, from mesodermal cells, or instead from neural crest cells. The neural crest is a critical group of cells that form in the cranial region during early vertebrate development. Ectomesenchyme plays a critical role in the formation of the hard and soft tissues of the head and neck such as bones, muscles, teeth and, notably, the pharyngeal arches. |
https://en.wikipedia.org/wiki/Reverse%20Monte%20Carlo | The Reverse Monte Carlo (RMC) modelling method is a variation of the standard Metropolis–Hastings algorithm to solve an inverse problem whereby a model is adjusted until its parameters have the greatest consistency with experimental data. Inverse problems are found in many branches of science and mathematics, but this approach is probably best known for its applications in condensed matter physics and solid state chemistry.
Applications in condensed matter sciences
Basic method
This method is often used in condensed matter sciences to produce atom-based structural models that are consistent with experimental data and subject to a set of constraints.
An initial configuration is constructed by placing atoms in a periodic boundary cell, and one or more measurable quantities are calculated based on the current configuration. Commonly used data include the pair distribution function and its Fourier transform, the latter of which is derived directly from neutron or x-ray scattering data (see small-angle neutron scattering, wide-angle X-ray scattering, small-angle X-ray scattering, and X-ray diffraction). Other data that are used include Bragg diffraction data for crystalline materials, and EXAFS data. The comparison with experiment is quantified using a function of the form

χ² = Σi [ (yi,obs − yi,calc)² / σi² ]
where yi,obs and yi,calc are the observed (measured) and calculated quantities respectively, and σi is a measure of the accuracy of the measurement. The sum is over all independent measurements, which will include the sum over all points in a function such as the pair distribution function.
An iterative procedure is run where one randomly chosen atom is moved a random amount, followed by a new calculation of the measurable quantities. Such a process will cause χ² to either increase or decrease in value by an amount Δχ². The move is accepted with probability min[1, exp(−Δχ²/2)] according to the normal Metropolis–Hastings algorithm, ensuring that moves that give better agreement with experimental data are accepted, and moves that worsen the agreement can still be accepted with the appropriate probability. |
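A minimal sketch of this acceptance loop in Python (not from the article): it assumes a user-supplied calc_quantity function that returns the modelled observable (for example, a pair distribution function) on the same grid as the experimental data, and it omits periodic-boundary bookkeeping.

import numpy as np

def chi_squared(calc, obs, sigma):
    # Sum of squared, error-weighted residuals over all data points
    return np.sum(((obs - calc) / sigma) ** 2)

def rmc_step(config, obs, sigma, calc_quantity, max_move=0.1, rng=None):
    # One Reverse Monte Carlo move: displace a random atom, then accept or
    # reject using the Metropolis criterion on the change in chi-squared.
    rng = rng or np.random.default_rng()
    old = chi_squared(calc_quantity(config), obs, sigma)
    trial = config.copy()                                  # config: (N, 3) array
    i = rng.integers(len(trial))
    trial[i] += rng.uniform(-max_move, max_move, size=3)   # random displacement
    new = chi_squared(calc_quantity(trial), obs, sigma)
    delta = new - old
    # Improvements are always kept; worsening moves survive with prob exp(-delta/2)
    if delta <= 0 or rng.random() < np.exp(-delta / 2.0):
        return trial, new
    return config, old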
https://en.wikipedia.org/wiki/BestCrypt | BestCrypt, developed by Jetico, is a commercial disk encryption app available for Windows, Linux, macOS and Android.
BestCrypt comes in two editions: BestCrypt Volume Encryption to encrypt entire disk volumes; BestCrypt Container Encryption to encrypt virtual disks stored as computer files.
BestCrypt also provides the complimentary data erasure utility BCWipe.
Cryptographic Algorithms
BestCrypt supports a wide variety of block cipher algorithms including AES, Serpent, Blowfish, Twofish, DES, Triple DES, GOST 28147-89. All ciphers support CBC and LRW modes of operation while AES, Twofish and Serpent also support XTS mode.
Features
Create and mount a virtual drive encrypted using AES, Blowfish, Twofish, CAST-128 and various other encryption methods. BestCrypt v.8 and higher can alternatively mount a subfolder on an NTFS disk instead of a drive. Encrypted virtual disk images are compatible across Windows, Linux and Mac OS X.
Encrypt a set of files into a single, self-extracting archive.
Transparently encrypt entire partitions or volumes together with pre-boot authentication for encrypted boot partitions.
Two-factor authentication.
Support for size-efficient Dynamic Containers with the Smart Free Space Monitoring technology.
Hardware accelerated encryption.
Anti-keylogging facilities to protect container and volume passwords.
Data erasure utility BCWipe to erase unprotected copies of data to complement encryption.
Secret sharing and Public Key authentication methods in addition to basic password-based authentication.
See also
Comparison of disk encryption software |
https://en.wikipedia.org/wiki/Aspartic%20protease | Aspartic proteases (also "aspartyl proteases", "aspartic endopeptidases") are a catalytic type of protease enzymes that use an activated water molecule bound to one or more aspartate residues for catalysis of their peptide substrates. In general, they have two highly conserved aspartates in the active site and are optimally active at acidic pH. Nearly all known aspartyl proteases are inhibited by pepstatin.
Aspartic endopeptidases of vertebrate, fungal and retroviral origin have been characterised. More recently, aspartic endopeptidases associated with the processing of bacterial type 4 prepilin and archaean preflagellin have been described.
Eukaryotic aspartic proteases include pepsins, cathepsins, and renins. They have a two-domain structure, arising from ancestral duplication. Retroviral and retrotransposon proteases (retroviral aspartyl proteases) are much smaller and appear to
be homologous to a single domain of the eukaryotic aspartyl proteases. Each domain contributes a catalytic Asp residue, with an extended active site cleft localized between the two lobes of the molecule. One lobe has probably evolved from the other through a gene duplication event in the distant past. In modern-day enzymes, although the three-dimensional structures are very similar, the amino acid sequences are more divergent, except for the catalytic site motif, which is very conserved. The presence and position of disulfide bridges are other conserved features of aspartic peptidases.
Catalytic mechanism
Aspartyl proteases are a highly specific family of proteases – they tend to cleave dipeptide bonds that have hydrophobic residues as well as a beta-methylene group. Unlike serine or cysteine proteases, these proteases do not form a covalent intermediate during cleavage. Proteolysis therefore occurs in a single step.
While a number of different mechanisms for aspartyl proteases have been proposed, the most widely accepted is a general acid-base mechanism involving coordination of a w |
https://en.wikipedia.org/wiki/Tinapa | Tinapa, a Filipino term, is fish cooked or preserved through the process of smoking. It is a native delicacy in the Philippines and is often made from blackfin scad (Alepes melanoptera, known locally as galunggong), or from milkfish, which is locally known as bangus.
Though canned tinapa in tomato sauce is common and sold commercially throughout the country, it is also still produced and sold traditionally or prepared at home. The tinapa recipe mainly involves washing the fish, soaking it in brine for an extended period (usually 5–6 hours), air-drying it, and finally smoking it. The fish species commonly used for making tinapa are galunggong (scads) and bangus (milkfish).
The term tinapa means "prepared by smoking". Tapa in Philippine languages originally meant fish or meat preserved by smoking. In the Spanish Philippines, it came to refer to meats (modern tapa) preserved by other means. It is derived from Proto-Malayo-Polynesian *tapa, which in turn is derived from Proto-Austronesian *Capa.
See also
Tortang sardinas
Daing
Odong |
https://en.wikipedia.org/wiki/The%20Inner%20Life%20of%20the%20Cell | The Inner Life of the Cell is an 8.5-minute 3D computer graphics animation illustrating the molecular mechanisms that occur when a white blood cell in the blood vessels of the human body is activated by inflammation (Leukocyte extravasation). It shows how a white blood cell rolls along the inner surface of the capillary, flattens out, and squeezes through the cells of the capillary wall to the site of inflammation where it contributes to the immune reaction.
When teaching biology, professors will often generate 3D animations to demonstrate certain concepts to their students in a much more visual way than would otherwise be possible. In the case of The Inner Life of the Cell the creators aimed for a more cinematic, as opposed to academic, feel.
Production
David Bolinsky, former lead medical illustrator at Yale, lead animator John Liebler, and Mike Astrachan are some of the creators at XVIVO who made the movie. The audio track was composed, recorded, and produced by Matt Berky. They created the animation for Harvard's Department of Molecular and Cellular Biology.
Most of the animated processes were the result of the work of Alain Viel, Ph.D., who described the processes to the team. Alain Viel is an associate director of undergraduate research at Harvard University.
The film took 14 months to create for 8.5 minutes of animation. It was first seen by a wide audience at the 2006 SIGGRAPH conference in Boston. |
https://en.wikipedia.org/wiki/Screening%20router | A screening router performs packet-filtering and is used as a firewall. In some cases a screening router may be used as perimeter protection for the internal network or as the entire firewall solution. |
https://en.wikipedia.org/wiki/Deductive%20closure | In mathematical logic, a set T of logical formulae is deductively closed if it contains every formula φ that can be logically deduced from T; formally, if T ⊢ φ always implies φ ∈ T. If T is a set of formulae, the deductive closure of T is its smallest superset that is deductively closed.
The deductive closure of a theory T is often denoted Ded(T) or Th(T). This is a special case of the more general mathematical concept of closure — in particular, the deductive closure of T is exactly the closure of T with respect to the operation of logical consequence (Cn).
Examples
In propositional logic, the set of all true propositions is deductively closed. This is to say that only true statements are derivable from other true statements.
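As a concrete illustration (a sketch, not drawn from the article), the deductive closure of a finite set of atomic facts under Horn-style rules can be computed by forward chaining in Python:

def closure(facts, rules):
    # facts: set of atoms; rules: list of (premises, conclusion) pairs.
    # Returns the smallest superset of facts closed under the rules.
    closed = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if conclusion not in closed and all(p in closed for p in premises):
                closed.add(conclusion)
                changed = True
    return closed

# From {p} with rules p -> q and q -> r, the closure is {p, q, r}.
print(closure({"p"}, [({"p"}, "q"), ({"q"}, "r")]))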
Epistemic closure
In epistemology, many philosophers have and continue to debate whether particular subsets of propositions—especially ones ascribing knowledge or justification of a belief to a subject—are closed under deduction. |
https://en.wikipedia.org/wiki/Opportunistic%20TLS | Opportunistic TLS (Transport Layer Security) refers to extensions in plain text communication protocols, which offer a way to upgrade a plain text connection to an encrypted (TLS or SSL) connection instead of using a separate port for encrypted communication. Several protocols use a command named "STARTTLS" for this purpose. It is a form of opportunistic encryption and is primarily intended as a countermeasure to passive monitoring.
The STARTTLS command for IMAP and POP3 is defined in RFC 2595, for SMTP in RFC 3207, for XMPP in RFC 6120 and for NNTP in RFC 4642. For IRC, the IRCv3 Working Group has defined the STARTTLS extension. FTP uses the command "AUTH TLS" defined in RFC 4217 and LDAP defines a protocol extension OID in RFC 2830. HTTP uses an upgrade header.
Layering
TLS is application-neutral; in the words of RFC 5246 (the TLS 1.2 specification):
One advantage of TLS is that it is application protocol independent. Higher-level protocols can layer on top of the TLS protocol transparently. The TLS standard, however, does not specify how protocols add security with TLS; the decisions on how to initiate TLS handshaking and how to interpret the authentication certificates exchanged are left to the judgment of the designers and implementors of protocols that run on top of TLS.
The style used to specify how to use TLS matches the same layer distinction that is also conveniently supported by several library implementations of TLS. E.g., the SMTP STARTTLS extension (RFC 3207) illustrates with the following dialog how a client and server can start a secure session:
S: <waits for connection on TCP port 25>
C: <opens connection>
S: 220 mail.example.org ESMTP service ready
C: EHLO client.example.org
S: 250-mail.example.org offers a warm hug of welcome
S: 250 STARTTLS
C: STARTTLS
S: 220 Go ahead
C: <starts TLS negotiation>
C & S: <negotiate a TLS session>
C & S: <check result of negotiation>
C: EHLO client.example.org
. . .
The last EHLO command above is issued over a secure channel. Note that authentication is optional in SMTP, |
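A minimal sketch of the same upgrade using Python's standard smtplib module; the host names are the placeholders from the dialog above:

import smtplib
import ssl

context = ssl.create_default_context()
with smtplib.SMTP("mail.example.org", 25) as smtp:
    smtp.ehlo("client.example.org")       # plaintext EHLO
    if smtp.has_extn("starttls"):         # server advertised 250 STARTTLS
        smtp.starttls(context=context)    # client sends STARTTLS, TLS handshake runs
        smtp.ehlo("client.example.org")   # EHLO again, now over the secure channel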
https://en.wikipedia.org/wiki/Inertial%20wave | Inertial waves, also known as inertial oscillations, are a type of mechanical wave possible in rotating fluids. Unlike surface gravity waves commonly seen at the beach or in the bathtub, inertial waves flow through the interior of the fluid, not at the surface. Like any other kind of wave, an inertial wave is caused by a restoring force and characterized by its wavelength and frequency. Because the restoring force for inertial waves is the Coriolis force, their wavelengths and frequencies are related in a peculiar way. Inertial waves are transverse. Most commonly they are observed in atmospheres, oceans, lakes, and laboratory experiments. Rossby waves, geostrophic currents, and geostrophic winds are examples of inertial waves. Inertial waves are also likely to exist in the molten core of the rotating Earth.
Restoring force
Inertial waves are restored to equilibrium by the Coriolis force, a result of rotation. To be precise, the Coriolis force arises (along with the centrifugal force) in a rotating frame to account for the fact that such a frame is always accelerating. Inertial waves, therefore, cannot exist without rotation. More complicated than tension on a string, the Coriolis force acts at a 90° angle to the direction of motion, and its strength depends on the rotation rate of the fluid. These two properties lead to the peculiar characteristics of inertial waves.
Characteristics
Inertial waves are possible only when a fluid is rotating, and exist in the bulk of the fluid, not at its surface. Like light waves, inertial waves are transverse, which means that their vibrations occur perpendicular to the direction of wave travel. One peculiar geometrical characteristic of inertial waves is that their phase velocity, which describes the movement of the crests and troughs of the wave, is perpendicular to their group velocity, which is a measure of the propagation of energy.
Whereas a sound wave or an electromagnetic wave of any frequency is possible, inertial waves can exist only over a restricted range of frequencies, from zero up to twice the rotation rate of the fluid. |
https://en.wikipedia.org/wiki/Psychopathy | Psychopathy is a mental health condition characterized by persistent antisocial behavior, impaired empathy and remorse, and bold, disinhibited, and egotistical traits. Different conceptions of psychopathy have been used throughout history that are only partly overlapping and may sometimes be contradictory.
Hervey M. Cleckley, an American psychiatrist, influenced the initial diagnostic criteria for antisocial personality reaction/disturbance in the Diagnostic and Statistical Manual of Mental Disorders (DSM), as did American psychologist George E. Partridge. The DSM and International Classification of Diseases (ICD) subsequently introduced the diagnoses of antisocial personality disorder (ASPD) and dissocial personality disorder (DPD) respectively, stating that these diagnoses have been referred to (or include what is referred to) as psychopathy or sociopathy. The creation of ASPD and DPD was driven by the fact that many of the classic traits of psychopathy were impossible to measure objectively. Canadian psychologist Robert D. Hare later repopularized the construct of psychopathy in criminology with his Psychopathy Checklist.
Although no psychiatric or psychological organization has sanctioned a diagnosis titled "psychopathy", assessments of psychopathic characteristics are widely used in criminal justice settings in some nations and may have important consequences for individuals. The study of psychopathy is an active field of research. The term is also used by the general public, popular press, and in fictional portrayals. While the abbreviated term "psycho" is often employed in common usage in general media along with "crazy", "insane", and "mentally ill", there is a categorical difference between psychosis and psychopathy.
History
Etymology
The word psychopathy is a joining of the Greek words psyche () "soul" and pathos () "suffering, feeling". The first documented use is from 1847 in Germany as psychopatisch, and the noun psychopath has been traced to 1885. |
https://en.wikipedia.org/wiki/Tribler | Tribler is an open source decentralized BitTorrent client which allows anonymous peer-to-peer by default. Tribler is based on the BitTorrent protocol and uses an overlay network for content searching.
Due to this overlay network, Tribler does not require an external website or indexing service to discover content. The user interface of Tribler is very basic and focused on ease of use instead of diversity of features. Tribler is available for Linux, Windows, and OS X.
Tribler has run trials for a video streamer known as SwarmPlayer.
History
The name Tribler stems from the word tribe, referring to the usage of social networks in this P2P client. The first version of Tribler was an enhancement of ABC aka Yet Another BitTorrent Client.
In 2009, the development team behind Tribler stated that their efforts for the coming years were focused on the integration of Tribler with television hardware.
In 2014, with the release of version 6.3.1, a custom built-in onion routing network was introduced as part of Tribler. Users can load any clearnet torrent and, by leaving the anonymity box ticked, have the files routed through other Tribler clients. Because the custom onion network does not use Tor exit nodes, every Tribler user is made to function as a relay.
Features
Tribler adds keyword search ability to the BitTorrent file download protocol using a gossip protocol, somewhat similar to the eXeem network which was shut down in 2005. The software includes the ability to recommend content. After a dozen downloads the Tribler software can roughly estimate the download taste of the user and
recommend content. This feature is based on collaborative filtering, also featured on websites such as Last.fm and Amazon.com. Another feature of Tribler is a limited form of social networking and donation of upload capacity. Tribler borrows bandwidth capacity from connected nodes regarded as helpful to boost the download speed of files.
SwarmPlayer
The SwarmPlayer is a Pyt |
https://en.wikipedia.org/wiki/History%20of%20artificial%20life | Humans have considered and tried to create non-biological life for at least 3000 years. As seen in tales ranging from Pygmalion to Frankenstein, humanity has long been intrigued by the concept of artificial life.
Pre-computer
The earliest examples of artificial life involve sophisticated automata constructed using pneumatics, mechanics, and/or hydraulics. The first automata, conceived during the third and second centuries BC, were demonstrated by the theorems of Hero of Alexandria, which included sophisticated mechanical and hydraulic solutions. Many of his notable works were included in the book Pneumatics, which was also used for constructing machines until early modern times. In 1490, Leonardo da Vinci also constructed an armored knight, which is considered the first humanoid robot in Western civilization.
Other early famous examples include al-Jazari's humanoid robots. This Arabic inventor once constructed a band of automata, which could be commanded to play different pieces of music. There is also the case of Jacques de Vaucanson's artificial duck exhibited in 1735, which had thousands of moving parts and was one of the first machines to mimic a biological system. The duck could reportedly eat and digest, drink, quack, and splash in a pool. It was exhibited all over Europe until it fell into disrepair.
In the late 1600s, following René Descartes' claims that animals could be understood as purely physical machines, there was increasing interest in the question of whether a machine could be designed that, like an animal, could generate offspring (a self-replicating machine). After the climax of the British Industrial Revolution in the early 1800s, and the publication of Charles Darwin's On The Origin of Species in 1859, various writers in the late 1800s explored the idea that it might be possible to build machines that could not only self-reproduce, but also evolve and become increasingly intelligent.
However, it wasn't until the invention of cheap computin |
https://en.wikipedia.org/wiki/Kyorochan | is a fictional bird that serves as a mascot for a Japanese brand of Morinaga chocolate, known as ChocoBall. He first appeared in 1967 in the anime television series Uchuu-shonen Soran (Space Boy Soran). In 1967 Kyorochan also replaced Chappy, a space-themed squirrel who had been the original mascot since first appearing in 1965.
Kyorochan's popularity began to take off in 1987, when TV commercials starring Kyorochan were made, along with commercial songs performed by famous artists. In 1991, the name "Kyorochan" was printed on the boxes of ChocoBall candies. However, that same year, the sales of merchandise, such as stuffed animals and related products, exceeded the sales of the ChocoBall brand itself.
Anime
An anime adaptation starring Kyorochan, with the same name, was produced by TV Tokyo, NAS, and SPE Visual Works and animated by Group TAC. The series focuses on the adventures of Kyorochan as he lives on Angel Island, a large village that is home to various other birds. The first theme song is "Halation Summer", performed by Coconuts Musume, while the first ending theme was "Tsuukagu Ro", performed by Whiteberry. Both were replaced by original songs from episode 27.
The series was released in very limited amounts in the DVD format, with box-sets being rare. International releases of the anime include Hungary (Kukucska Kalandjai), Romania (with the name intact), Taiwan (大嘴鳥), the Czech Republic (Červánek), and South Korea (왕부리 팅코). The Indian television channel Pogo began broadcasting Kyorochan from May 31, 2010, a decade after the original anime.
An obscure English dub appears to have been made of the series, with Richie Campos the only known voice actor. Campos's name is curiously mentioned on the series' page in Anime News Network's encyclopedia, voicing Don Girori, Makumou, Dementon, Girosshu and the narrator. Other information and footage about this lost dub remain unknown.
Characters
Kyoro-chan (voiced by Miyako Ito) - The titular character and a cute par |
https://en.wikipedia.org/wiki/Andrews%E2%80%93Curtis%20conjecture | In mathematics, the Andrews–Curtis conjecture states that every balanced presentation of the trivial group can be transformed into a trivial presentation by a sequence of Nielsen transformations on the relators together with conjugations of relators. It is named after James J. Andrews and Morton L. Curtis, who proposed it in 1965. It is difficult to verify whether the conjecture holds for a given balanced presentation or not.
It is widely believed that the Andrews–Curtis conjecture is false. While there are no counterexamples known, there are numerous potential counterexamples. It is known that the Zeeman conjecture on collapsibility implies the Andrews–Curtis conjecture. |
https://en.wikipedia.org/wiki/Stride%20scheduling | Stride scheduling is a type of scheduling mechanism that has been introduced as a simple concept to achieve proportional central processing unit (CPU) capacity reservation among concurrent processes. Stride scheduling aims to sequentially allocate a resource for the duration of standard time-slices (quantum) in a fashion, that performs periodic recurrences of allocations. Thus, a process p1 which has reserved twice the share of a process p2 will be allocated twice as often as p2. In particular, process p1 will even be allocated two times every time p2 is waiting for allocation, assuming that neither of the two processes performs a blocking operation.
See also
Computer multitasking
Concurrency control
Concurrent computing
Resource contention
Time complexity
Thread (computing) |
https://en.wikipedia.org/wiki/Carleson%20measure | In mathematics, a Carleson measure is a type of measure on subsets of n-dimensional Euclidean space Rn. Roughly speaking, a Carleson measure on a domain Ω is a measure that does not vanish at the boundary of Ω when compared to the surface measure on the boundary of Ω.
Carleson measures have many applications in harmonic analysis and the theory of partial differential equations, for instance in the solution of Dirichlet problems with "rough" boundary. The Carleson condition is closely related to the boundedness of the Poisson operator. Carleson measures are named after the Swedish mathematician Lennart Carleson.
Definition
Let n ∈ N and let Ω ⊂ Rn be an open (and hence measurable) set with non-empty boundary ∂Ω. Let μ be a Borel measure on Ω, and let σ denote the surface measure on ∂Ω. The measure μ is said to be a Carleson measure if there exists a constant C > 0 such that, for every point p ∈ ∂Ω and every radius r > 0,

μ( Br(p) ∩ Ω ) ≤ C σ( Br(p) ∩ ∂Ω ),

where

Br(p) = { x ∈ Rn | |x − p| < r }

denotes the open ball of radius r about p.
Carleson's theorem on the Poisson operator
Let D denote the unit disc in the complex plane C, equipped with some Borel measure μ. For 1 ≤ p < +∞, let Hp(∂D) denote the Hardy space on the boundary of D and let Lp(D, μ) denote the Lp space on D with respect to the measure μ. Define the Poisson operator P : Hp(∂D) → Lp(D, μ)

by

P(f)(z) = (1/2π) ∫0^2π Re[ (e^it + z) / (e^it − z) ] f(e^it) dt.

Then P is a bounded linear operator if and only if the measure μ is Carleson.
Other related concepts
The infimum of the set of constants C > 0 for which the Carleson condition

μ( Br(p) ∩ Ω ) ≤ C σ( Br(p) ∩ ∂Ω ) for all p ∈ ∂Ω and all r > 0

holds is known as the Carleson norm of the measure μ.
If C(R) is defined to be the infimum of the set of all constants C > 0 for which the restricted Carleson condition

μ( Br(p) ∩ Ω ) ≤ C σ( Br(p) ∩ ∂Ω ) for all p ∈ ∂Ω and all 0 < r ≤ R

holds, then the measure μ is said to satisfy the vanishing Carleson condition if C(R) → 0 as R → 0. |
https://en.wikipedia.org/wiki/Shark%20agonistic%20display | Agonism is a broad term which encompasses many behaviours that result from, or are triggered by biological conflict between competing organisms. Approximately 23 shark species are capable of producing such displays when threatened by intraspecific or interspecific competitors, as an evolutionary strategy to avoid unnecessary combat. The behavioural, postural, social and kinetic elements which comprise this complex, ritualized display can be easily distinguished from normal, or non-display behaviour, considered typical of that species' life history. The display itself confers pertinent information to the foe regarding the displayer's physical fitness, body size, inborn biological weaponry, confidence and determination to fight. This behaviour is advantageous because it is much less biologically taxing for an individual to display its intention to fight than the injuries it would sustain during conflict, which is why agonistic displays have been reinforced through evolutionary time, as an adaptation to personal fitness. Agonistic displays are essential to the social dynamics of many biological taxa, extending far beyond sharks.
Characteristics
Definition
Agonistic displays are ritualized sequences of actions, produced by animals belonging to almost all biological taxa, in response to conflict with other organisms. If challenged or threatened, animals may employ a suite of adaptive behaviours, which are used to reinforce the chances of their own survival. Behaviours which arise from agonistic conflict include:
fight or flight response
threat display to warn competitors and signal honest intentions
defence behaviour
simulated paralysis
avoidance behaviour
withdrawal
settling behaviour.
Each of these listed strategies constitute some manifestation of agonistic behaviour, and have been observed in numerous shark species, among many higher taxa in Kingdom Animalia. Displays of this nature are influenced and reinforced by natural selection, as an optimal strategy for |
https://en.wikipedia.org/wiki/Hand-stopping | Hand-stopping is a technique by which a natural horn or a natural trumpet can be made to produce notes outside of its normal harmonic series. By inserting the hand, cupped, into the bell, the player can reduce the pitch of a note by a semitone or more. This, combined with the use of crooks changing the key of the instrument, allowed composers to write fully chromatic music for the horn and almost fully chromatic music for the trumpet before the invention of piston and valve horns and trumpets in the early 19th Century. A stopped note is called gestopft in German and bouché in French.
The technique was invented in Europe in the mid 18th Century, and its first celebrated exponent was Giovanni Punto, who learned the technique from A. J. Hampel and subsequently taught it to the Court orchestra of George III.
In addition to the change in pitch, the timbre is changed, sounding somewhat muted. Some pieces call for notes to be played stopped (sometimes written as gestopft in the score) specifically in order to produce this muted tone. This can clearly be heard on recordings of natural horns playing pre-valve repertoire such as the Punto concertino (a recording by Anthony Halstead and the Hanover Band is available which demonstrates this to particularly good effect).
The pitch is controlled by the degree to which the right hand closes the bell. As the palm closes the bell, the effective tube length is increased, lowering the pitch (up to about a semitone for horns in the range D through G). But when the hand stops the bell completely, the tube length is effectively shortened, raising the pitch by about a semitone for horns tuned near to the key of F.
https://en.wikipedia.org/wiki/Marshall%20Hall%20%28mathematician%29 | Marshall Hall Jr. (17 September 1910 – 4 July 1990) was an American mathematician who made significant contributions to group theory and combinatorics.
Education and career
Hall studied mathematics at Yale University, graduating in 1932. He studied for a year at Cambridge University under a Henry Fellowship working with G. H. Hardy. He returned to Yale to take his Ph.D. in 1936 under the supervision of Øystein Ore.
He worked in Naval Intelligence during World War II, including six months in 1944 at Bletchley Park, the center of British wartime code breaking. In 1946 he took a position at Ohio State University. In 1959 he moved to the California Institute of Technology where, in 1973, he was named the first IBM Professor at Caltech, the first named chair in mathematics. After retiring from Caltech in 1981, he accepted a post at Emory University in 1985.
Hall died in 1990 in London on his way to a conference to mark his 80th birthday.
Contributions
He wrote a number of papers of fundamental importance in group theory, including his solution of Burnside's problem for groups of exponent 6, showing that a finitely generated group in which the order of every element divides 6 must be finite.
His work in combinatorics includes an important paper of 1943 on projective planes, which for many years was one of the most cited mathematics research papers. In this paper he constructed a family of non-Desarguesian planes which are known today as Hall planes. He also worked on block designs and coding theory.
His classic book on group theory was well received when it came out and is still useful today. His book Combinatorial Theory came out in a second edition in 1986, published by John Wiley & Sons.
He proposed Hall's conjecture on the differences between perfect squares and perfect cubes, which remains an open problem as of 2015.
Publications
1943: "Projective Planes", Transactions of the American Mathematical Society 54(2): 229–77
1959: The Theory of Groups, Macmil |
https://en.wikipedia.org/wiki/Crown-rump%20length | Crown-rump length (CRL) is the measurement of the length of human embryos and fetuses from the top of the head (crown) to the bottom of the buttocks (rump). It is typically determined from ultrasound imagery and can be used to estimate gestational age.
Introduction
The embryo and fetus float in the amniotic fluid inside the uterus of the mother, usually in a curved posture resembling the letter C. The measurement can vary slightly if the fetus is temporarily stretching (straightening) its body, so it must be taken with the body in its natural, unstretched, C-shaped state. The measurement of CRL is useful in determining the gestational age (menstrual age starting from the first day of the last menstrual period) and thus the expected date of delivery (EDD). Different human fetuses grow at different rates and thus the gestational age is an approximation. Recent evidence has indicated that CRL growth (and thus the approximation of gestational age) may be influenced by maternal factors such as age, smoking, and folic acid intake. Early in pregnancy, at a gestational age of around 8 weeks, it is accurate to within about ±5 days, but later in pregnancy the accuracy is lower because of differing growth rates, and other parameters can then be used in addition to CRL. The length of the umbilical cord is approximately equal to the CRL throughout pregnancy.
Gestational age is not the same as fertilization age. It takes about 14 days from the first day of the last menstrual period for conception to take place and thus for the conceptus to form. The age from this point in time (conception) is called the fertilization age and is thus 2 weeks shorter than the gestational age. Thus a 6-week gestational age would be a 4-week fertilization age. Some authorities however casually interchange these terms and the reader is advised to be cautious. An average gestational period (duration of pregnancy from the first day of the last menstrual period up to delivery) is |
https://en.wikipedia.org/wiki/Thermodynamic%20databases%20for%20pure%20substances | Thermodynamic databases contain information about thermodynamic properties for substances, the most important being enthalpy, entropy, and Gibbs free energy. Numerical values of these thermodynamic properties are collected as tables or are calculated from thermodynamic datafiles. Data is expressed as temperature-dependent values for one mole of substance at the standard pressure of 101.325 kPa (1 atm), or 100 kPa (1 bar). Both of these definitions for the standard condition for pressure are in use.
Thermodynamic data
Thermodynamic data is usually presented as a table or chart of function values for one mole of a substance (or in the case of the steam tables, one kg). A thermodynamic datafile is a set of equation parameters from which the numerical data values can be calculated. Tables and datafiles are usually presented at a standard pressure of 1 bar or 1 atm, but in the case of steam and other industrially important gases, pressure may be included as a variable. Function values depend on the state of aggregation of the substance, which must be defined for the value to have any meaning. The state of aggregation for thermodynamic purposes is the standard state, sometimes called the reference state, and defined by specifying certain conditions. The normal standard state is commonly defined as the most stable physical form of the substance at the specified temperature and a pressure of 1 bar or 1 atm. However, since any non-normal condition could be chosen as a standard state, it must be defined in the context of use. A physical standard state is one that exists for a time sufficient to allow measurements of its properties. The most common physical standard state is one that is stable thermodynamically (i.e., the normal one). It has no tendency to transform into any other physical state. If a substance can exist but is not thermodynamically stable (for example, a supercooled liquid), it is called a metastable state. A non-physical standard state is one whose pro |
https://en.wikipedia.org/wiki/Reconvergent%20fan-out | Reconvergent fan-out is a technique to make VLSI logic simulation less pessimistic.
Static timing analysis tries to figure out the best- and worst-case timing estimates for each signal as it passes through an electronic device. Whenever a signal passes through a node, a bit of uncertainty must be added to the time required for the signal to transit that device. These uncertain delays add up, so after passing through many devices the worst-case timing for a signal could be unreasonably pessimistic.
It is common for two signals to share an identical path, branch and follow different paths for a while, then converge back to the same point to produce a result. When this happens, you can remove a fair amount of uncertainty from the total delay because you know that they shared a common path for a while. Even though each signal has an uncertain delay, because their delays were identical for part of the journey the total uncertainty can be reduced. This tightens up the worst-case estimation for the signal delay, and usually allows a small but important speedup of the overall device.
This term is starting to be used in a more generic sense as well. Any time a signal splits into two and then reconverges, certain optimizations can be made. The term reconvergent fan-out has been used to describe similar optimizations in graph theory and static code analysis.
See also
Fan-out
Fan-in
External links
An example of reconvergent fan-out
Logic gates |
https://en.wikipedia.org/wiki/Fiber%20bundle%20construction%20theorem | In mathematics, the fiber bundle construction theorem is a theorem which constructs a fiber bundle from a given base space, fiber and a suitable set of transition functions. The theorem also gives conditions under which two such bundles are isomorphic. The theorem is important in the associated bundle construction where one starts with a given bundle and surgically replaces the fiber with a new space while keeping all other data the same.
Formal statement
Let X and F be topological spaces and let G be a topological group with a continuous left action on F. Given an open cover {Ui} of X and a set of continuous functions

tij : Ui ∩ Uj → G

defined on each nonempty overlap, such that the cocycle condition

tik(x) = tij(x) tjk(x)   for all x ∈ Ui ∩ Uj ∩ Uk

holds, there exists a fiber bundle E → X with fiber F and structure group G that is trivializable over {Ui} with transition functions tij.
Let E′ be another fiber bundle with the same base space, fiber, structure group, and trivializing neighborhoods, but transition functions t′ij. If the action of G on F is faithful, then E′ and E are isomorphic if and only if there exist functions

ti : Ui → G

such that

t′ij(x) = ti(x)⁻¹ tij(x) tj(x)   for all x ∈ Ui ∩ Uj.

Taking ti to be constant functions to the identity in G, we see that two fiber bundles with the same base, fiber, structure group, trivializing neighborhoods, and transition functions are isomorphic.
A similar theorem holds in the smooth category, where X and F are smooth manifolds, G is a Lie group with a smooth left action on F, and the maps tij are all smooth.
Construction
The proof of the theorem is constructive. That is, it actually constructs a fiber bundle with the given properties. One starts by taking the disjoint union of the product spaces Ui × F

T = ⊔i (Ui × F) = { (i, x, y) : x ∈ Ui, y ∈ F }

and then forms the quotient by the equivalence relation

(i, x, y) ~ (j, x, tji(x)·y)   whenever x ∈ Ui ∩ Uj.

The total space E of the bundle is T/~ and the projection π : E → X is the map which sends the equivalence class of (i, x, y) to x. The local trivializations

φi : π⁻¹(Ui) → Ui × F

are then defined by

φi⁻¹(x, y) = [(i, x, y)].
Associated bundle
Let E → X a fiber bundle with fiber F and structure group G, and let F′ be anothe |
https://en.wikipedia.org/wiki/Signal%20edge | In electronics, a signal edge is a transition of a digital signal from low to high or from high to low:
A rising edge (or positive edge) is the low-to-high transition.
A falling edge (or negative edge) is the high-to-low transition.
In the case of a pulse, which consists of two edges:
The leading edge (or front edge) is the first edge of the pulse.
The trailing edge (or back edge) is the second edge of the pulse.
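As a small illustration (a sketch, not part of the original description), transitions in a sampled two-level signal can be classified as follows in Python:

def edges(samples):
    # Classify transitions in a sampled digital signal with 0/1 levels.
    result = []
    for prev, curr in zip(samples, samples[1:]):
        if prev == 0 and curr == 1:
            result.append("rising")
        elif prev == 1 and curr == 0:
            result.append("falling")
    return result

# A single pulse 0,1,1,0: the leading edge is rising, the trailing edge falling.
print(edges([0, 1, 1, 0]))  # ['rising', 'falling']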
See also
Flip-flop (electronics), an edge-triggered circuit
Rise time, for a signal transition |
https://en.wikipedia.org/wiki/Petname | Petname systems are naming systems that claim to possess all three naming properties of Zooko's triangle - global, secure, and memorable. Software that uses such a system can satisfy all three requirements. Such systems can be used to enhance security, such as preventing phishing attacks.
Unlike traditional identity systems, which focus on the service provider, Petname systems are decentralized and designed to facilitate the needs of the end user as they interact with multiple services.
History
Though the Petname model was formally described in 2005 by Mark Stiegler, the potential of the system was discovered by several people successively.
Examples
The GNU Name System (GNS) – a decentralized alternative to DNS based on the principle of a petname system
CapDesk – a distributed desktop environment
Petname Tool (discontinued) – a browser extension for Firefox that allowed pet names to be assigned to secure websites; use of this extension could help prevent phishing attacks.
PetName Markup Language
The PetName Markup Language (PNML) is a proposal for embedding Petname information into other systems using a custom markup language.
PNML consists of two tags:
pet-name-string
stringified-cryptographic-key |
https://en.wikipedia.org/wiki/List%20of%20flags%20of%20Peru | This is a list of flags used in or otherwise associated with Peru. For further information, see Flag of Peru.
National flags
Current
Historical
Government
Military
Political flags
Proposed flags
Subnational flags
Departments
Provinces
Districts
Cities
See also
Coat of arms of Peru
National Anthem of Peru
Vexillology |
https://en.wikipedia.org/wiki/Rademacher%27s%20theorem | In mathematical analysis, Rademacher's theorem, named after Hans Rademacher, states the following: If U is an open subset of Rn and f : U → Rm is Lipschitz continuous, then f is differentiable almost everywhere in U; that is, the points in U at which f is not differentiable form a set of Lebesgue measure zero. Differentiability here refers to infinitesimal approximability by a linear map, which in particular asserts the existence of the coordinate-wise partial derivatives.
Sketch of proof
The one-dimensional case of Rademacher's theorem is a standard result in introductory texts on measure-theoretic analysis. In this context, it is natural to prove the more general statement that any single-variable function of bounded variation is differentiable almost everywhere. (This one-dimensional generalization of Rademacher's theorem fails to extend to higher dimensions.)
One of the standard proofs of the general Rademacher theorem was found by Charles Morrey. In the following, let f denote a Lipschitz-continuous function on Rn. The first step of the proof is to show that, for any fixed unit vector v, the v-directional derivative of f exists almost everywhere. This is a consequence of a special case of the Fubini theorem: a measurable set in Rn has Lebesgue measure zero if its restriction to every line parallel to v has (one-dimensional) Lebesgue measure zero. Considering in particular the set in Rn where the v-directional derivative of f fails to exist (which must be proved to be measurable), the latter condition is met due to the one-dimensional case of Rademacher's theorem.
The second step of Morrey's proof establishes the linear dependence of the v-directional derivative of f upon v. This is based upon the following identity, valid for every smooth, compactly supported test function ζ and every h ≠ 0:

∫Rn [ (f(x + hv) − f(x)) / h ] ζ(x) dx = − ∫Rn f(x) [ (ζ(x) − ζ(x − hv)) / h ] dx
Using the Lipschitz assumption on f, the dominated convergence theorem can be applied to replace the two difference quotients in the above expression by the corresponding v-directional derivatives. Then, based upon the known linear dependence of the v-directional derivative of ζ up |
https://en.wikipedia.org/wiki/Fenske%20equation | The Fenske equation in continuous fractional distillation is an equation used for calculating the minimum number of theoretical plates required for the separation of a binary feed stream by a fractionation column that is being operated at total reflux (i.e., which means that no overhead product distillate is being withdrawn from the column).
The equation was derived in 1932 by Merrell Fenske, a professor who served as the head of the chemical engineering department at the Pennsylvania State University from 1959 to 1969.
When designing large-scale, continuous industrial distillation towers, it is very useful to first calculate the minimum number of theoretical plates required to obtain the desired overhead product composition.
Common versions of the Fenske equation
This is one of the many different but equivalent versions of the Fenske equation valid only for binary mixtures:

N = log[ (Xd / (1 − Xd)) × ((1 − Xb) / Xb) ] / log(αavg)

where:
N is the minimum number of theoretical plates required at total reflux (of which the reboiler is one),
Xd is the mole fraction of more volatile component in the overhead distillate,
Xb is the mole fraction of more volatile component in the bottoms,
αavg is the average relative volatility of the more volatile component to the less volatile component.
For a multi-component mixture the following formula holds.
For ease of expression, the more volatile and the less volatile components are commonly referred to as the light key (LK) and the heavy key (HK), respectively. Using that terminology, the above equation may be expressed as:

N = log[ (XLK,d / XHK,d) × (XHK,b / XLK,b) ] / log(αavg)
If the relative volatility of the light key to the heavy key is constant from the column top to the column bottom, then αavg is simply that constant value. If the relative volatility is not constant from top to bottom of the column, then the following approximation may be used:

αavg = (αt × αb)^(1/2)

where:
αt is the relative volatility of light key to heavy key at top of column,
αb is the relative volatility of light key to heavy key at bottom of column.
The above forms of the Fenske equation can |
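A minimal numerical sketch of the binary form above in Python, with illustrative values (the geometric-mean rule is used when top and bottom volatilities differ):

from math import log, sqrt

def fenske_min_stages(x_d, x_b, alpha_top, alpha_bottom=None):
    # x_d, x_b: mole fractions of the more volatile component in the
    # distillate and in the bottoms; the result is the minimum number of
    # theoretical plates at total reflux (the reboiler counts as one).
    alpha_avg = alpha_top if alpha_bottom is None else sqrt(alpha_top * alpha_bottom)
    separation = (x_d / (1.0 - x_d)) * ((1.0 - x_b) / x_b)
    return log(separation) / log(alpha_avg)

# 95 mol% light component overhead, 5 mol% in the bottoms, constant alpha of 2.5:
print(fenske_min_stages(0.95, 0.05, 2.5))  # about 6.4 stages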
https://en.wikipedia.org/wiki/Borsuk%27s%20conjecture | The Borsuk problem in geometry, for historical reasons incorrectly called Borsuk's conjecture, is a question in discrete geometry. It is named after Karol Borsuk.
Problem
In 1932, Karol Borsuk showed that an ordinary 3-dimensional ball in Euclidean space can be easily dissected into 4 solids, each of which has a smaller diameter than the ball, and generally any n-dimensional ball can be covered with n + 1 compact sets of diameters smaller than the ball. At the same time he proved that n such subsets are not enough in general. The proof is based on the Borsuk–Ulam theorem. That led Borsuk to a general question: Can every bounded subset E of the space Rn be partitioned into n + 1 sets, each of which has a smaller diameter than E?
The question was answered in the positive in the following cases:
n = 2 — which is the original result by Karol Borsuk (1932).
n = 3 — shown by Julian Perkal (1947), and independently, 8 years later, by H. G. Eggleston (1955). A simple proof was found later by Branko Grünbaum and Aladár Heppes.
For all n for smooth convex bodies — shown by Hugo Hadwiger (1946).
For all n for centrally-symmetric bodies — shown by A.S. Riesling (1971).
For all n for bodies of revolution — shown by Boris Dekster (1995).
The problem was finally solved in 1993 by Jeff Kahn and Gil Kalai, who showed that the general answer to Borsuk's question is no. They claim that their construction shows that n + 1 pieces do not suffice for n = 1325 and for each n > 2014. However, as pointed out by Bernulf Weißbach, the first part of this claim is in fact false. But after improving a suboptimal conclusion within the corresponding derivation, one can indeed verify one of the constructed point sets as a counterexample for n = 1325 (as well as all higher dimensions up to 1560).
Their result was improved in 2003 by Hinrichs and Richter, who constructed finite sets for n ≥ 298, which cannot be partitioned into n + 11 parts of smaller diameter.
In 2013, Andriy V. Bondarenko showed that Borsuk's conjecture is false for all n ≥ 65. Shortly after, Thomas Jenrich derived a 64-dimensional counterexample from Bondarenko's construction, giving the best bound up to now.
|
https://en.wikipedia.org/wiki/Association%20of%20Internet%20Researchers | The Association of Internet Researchers (AoIR) is a learned society dedicated to the advancement of the transdisciplinary field of Internet studies. Founded in 1999, it is an international, member-based support network promoting critical and scholarly Internet research, independent from traditional disciplines and existing across academic borders.
AoIR was formally founded on May 30, 1999, at a meeting of nearly sixty scholars at the San Francisco Hilton and Towers, following initial discussions at a 1998 conference at Drake University entitled "The World Wide Web and Contemporary Cultural Theory: Metaphor, Magic & Power". The inaugural conference was organised by Nancy Baym, Jeremy Hunsinger and Steve Jones at the University of Kansas in 2000, and attracted 300 scholars. As the Chronicle of Higher Education noted, its rapid growth during the first few years of its existence marked the coming of age of Internet studies. It has continued to grow, with a membership of approximately 400 scholars. It supports AIR-L, a mailing list with over 5,000 subscribers.
AoIR holds an annual academic conference, as well as promoting online discussion and collaboration through a long-running mailing list, and other venues.
Activities
The Association supports scholarly communication in a number of ways:
It organizes an annual, peer-reviewed scholarly conference, which accepts paper and presentation submissions from all disciplines.
As part of its annual conference, it hosts an annual one day interdisciplinary Doctoral Colloquium for Ph.D. students and an Early Career Researchers event for professionals who are in their first academic positions following the completion of the Ph.D.
It hosts the AIR-L mailing list with over 5000 subscribers.
It has published multiple editions of the Internet Research Annual with Peter Lang
It hosts working groups that produce reports of interest to researchers in the field, most notably the AoIR Guide on Ethical Online Research.
It co-sponso |
https://en.wikipedia.org/wiki/Microarray%20analysis%20techniques | Microarray analysis techniques are used in interpreting the data generated from experiments on DNA (Gene chip analysis), RNA, and protein microarrays, which allow researchers to investigate the expression state of a large number of genes - in many cases, an organism's entire genome - in a single experiment. Such experiments can generate very large amounts of data, allowing researchers to assess the overall state of a cell or organism. Data in such large quantities is difficult - if not impossible - to analyze without the help of computer programs.
Introduction
Microarray data analysis is the final step in reading and processing data produced by a microarray chip. Samples undergo various processes including purification and scanning using the microchip, which then produces a large amount of data that requires processing via computer software. The analysis involves several distinct steps, and changing any one of them will change the outcome, so the MAQC Project was created to identify a set of standard strategies. Companies exist that use the MAQC protocols to perform a complete analysis.
Techniques
Most microarray manufacturers, such as Affymetrix and Agilent, provide commercial data analysis software alongside their microarray products. There are also open source options that utilize a variety of methods for analyzing microarray data.
Aggregation and normalization
Comparing two different arrays or two different samples hybridized to the same array generally involves making adjustments for systematic errors introduced by differences in procedures and dye intensity effects. Dye normalization for two color arrays is often achieved by local regression. LIMMA provides a set of tools for background correction and scaling, as well as an option to average on-slide duplicate spots. A common method for evaluating how well normalized an array is, is to plot an MA plot of the data. MA plots can be produced using programs and language |
https://en.wikipedia.org/wiki/Anamorphism | In computer programming, an anamorphism is a function that generates a sequence by repeated application of the function to its previous result. You begin with some value A and apply a function f to it to get B. Then you apply f to B to get C, and so on until some terminating condition is reached. The anamorphism is the function that generates the list of A, B, C, etc. You can think of the anamorphism as unfolding the initial value into a sequence.
The above layman's description can be stated more formally in category theory: the anamorphism of a coinductive type denotes the assignment of a coalgebra to its unique morphism to the final coalgebra of an endofunctor. These objects are used in functional programming as unfolds.
The categorical dual (aka opposite) of the anamorphism is the catamorphism.
Anamorphisms in functional programming
In functional programming, an anamorphism is a generalization of the concept of unfolds on coinductive lists. Formally, anamorphisms are generic functions that can corecursively construct a result of a certain type and which is parameterized by functions that determine the next single step of the construction.
The data type in question is defined as the greatest fixed point ν X . F X of a functor F. By the universal property of final coalgebras, there is a unique coalgebra morphism A → ν X . F X for any other F-coalgebra a : A → F A. Thus, one can define functions from a type A into a coinductive datatype by specifying a coalgebra structure a on A.
Example: Potentially infinite lists
As an example, the type of potentially infinite lists (with elements of a fixed type value) is given as the fixed point [value] = ν X . value × X + 1, i.e. a list consists either of a value and a further list, or it is empty. A (pseudo-)Haskell-Definition might look like this:
data [value] = (value:[value]) | []
It is the fixed point of the functor F value, where:
data Maybe a = Just a | Nothing
data F value x = Maybe (value, x)
One can eas |
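The same idea can be sketched outside Haskell; in Python, for example, an anamorphism can be written as a generator parameterized by a coalgebra-like step function that either stops (None) or returns a value together with the next seed:

def unfold(step, seed):
    # step: seed -> None (stop) or (value, next_seed); this generator is
    # the anamorphism of step, unfolding the seed into a sequence.
    while True:
        result = step(seed)
        if result is None:
            return
        value, seed = result
        yield value

# Unfolding a countdown list from the seed 5:
print(list(unfold(lambda n: None if n == 0 else (n, n - 1), 5)))  # [5, 4, 3, 2, 1]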
https://en.wikipedia.org/wiki/K-server%20problem | The -server problem is a problem of theoretical computer science in the category of online algorithms, one of two abstract problems on metric spaces that are central to the theory of competitive analysis (the other being metrical task systems). In this problem, an online algorithm must control the movement of a set of k servers, represented as points in a metric space, and handle requests that are also in the form of points in the space. As each request arrives, the algorithm must determine which server to move to the requested point. The goal of the algorithm is to keep the total distance all servers move small, relative to the total distance the servers could have moved by an optimal adversary who knows in advance the entire sequence of requests.
The problem was first posed by Mark Manasse, Lyle A. McGeoch and Daniel Sleator (1988). The most prominent open question concerning the k-server problem is the so-called k-server conjecture, also posed by Manasse et al. This conjecture states that there is an algorithm for solving the k-server problem in an arbitrary metric space and for any number k of servers that has competitive ratio exactly k. Manasse et al. were able to prove their conjecture when k = 2, and for more general values of k for some metric spaces restricted to have exactly k+1 points. Chrobak and Larmore (1991) proved the conjecture for tree metrics. The special case of metrics in which all distances are equal is called the paging problem because it models the problem of page replacement algorithms in memory caches, and was also already known to have a -competitive algorithm (Sleator and Tarjan 1985). Fiat et al. (1990) first proved that there exists an algorithm with finite competitive ratio for any constant k and any metric space, and finally Koutsoupias and Papadimitriou (1995) proved that Work Function Algorithm (WFA) has competitive ratio 2k - 1. However, despite the efforts of many other researchers, reducing the competitive ratio to or provid |
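As an illustration of the one-dimensional case, the following Python sketch implements the Double Coverage algorithm of Chrobak and Larmore, which is k-competitive on the line; the server positions and request values are arbitrary examples:

def double_coverage(positions, requests):
    # Double Coverage on the real line: for a request strictly between two
    # servers, move both adjacent servers toward it at equal speed until one
    # arrives; for a request outside all servers, move the nearest server.
    pos = sorted(float(p) for p in positions)
    cost = 0.0
    for r in requests:
        if r in pos:                      # some server already covers the request
            continue
        left = [p for p in pos if p < r]
        right = [p for p in pos if p > r]
        if left and right:
            a, b = max(left), min(right)  # the two servers adjacent to r
            d = min(r - a, b - r)
            pos.remove(a); pos.remove(b)
            pos.extend([a + d, b - d])    # one of the two now sits exactly on r
            cost += 2 * d
        else:
            nearest = max(left) if left else min(right)
            cost += abs(r - nearest)
            pos.remove(nearest)
            pos.append(float(r))
        pos.sort()
    return cost

# Three servers at 0, 4 and 10 serving requests at 6, 1 and 12:
print(double_coverage([0, 4, 10], [6, 1, 12]))  # total movement cost: 10.0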