| source | text |
|---|---|
https://en.wikipedia.org/wiki/2023%20Bangladesh%20Government%20website%20data%20breach | In June and July 2023, a major data breach occurred in a Bangladesh Government website, resulting in the unauthorized exposure and compromise of personal data belonging to more than 50 million Bangladeshi citizens.
Background
On July 7, 2023, it was discovered that a government website in Bangladesh had inadvertently exposed the personal data of citizens due to security vulnerabilities. The breach was not the result of a deliberate hack, but rather a consequence of weaknesses in the website's infrastructure and data-protection practices. The exposed data included sensitive information such as names, addresses, phone numbers, and national identification numbers. Since October 2023, the leaked NID data of Bangladeshi citizens has been openly accessible on Telegram channels.
Breach incident
The breach was initially reported by the American technology news website TechCrunch on July 7, 2023. According to its reports, the exposed data was accessible via the government website, potentially allowing unauthorized individuals to access and misuse citizens' personal information. TechCrunch initially did not reveal the website's name, as the breached data were still accessible; it later identified the affected site as the website of the Office of the Registrar General, Birth & Death Registration. The incident raised concerns about privacy and data security, causing alarm among affected individuals.
Zunaid Ahmed Palak, the state minister for Information and Communication Technology in Bangladesh, acknowledged the breach and clarified that it was not the result of hacking but rather a consequence of the security weaknesses present in the websites. Palak further explained that the websites had vulnerabilities that were exploited, resulting in the exposure of citizens' personal data.
Government Response
In response to the data breach, the Bangladesh government took action to address the situation. On July 10, 2023, the government announced the takedown of the exposed citize |
https://en.wikipedia.org/wiki/Acoustic%20droplet%20vaporization | Acoustic droplet vaporization (ADV) is the process by which superheated liquid droplets are phase-transitioned into gas bubbles by means of ultrasound. Perfluorocarbons and halocarbons are often used for the dispersed medium, which forms the core of the droplet. The surfactant, which forms a stabilizing shell around the dispersed medium, is usually composed of albumin or lipids.
There exist two main hypotheses that explain the mechanism by which ultrasound induces vaporization. One posits that the ultrasonic field interacts with the dispersed medium so as to cause vaporization in the bubble core. The other suggests that shockwaves from inertial cavitation, occurring near or within the droplet, cause the dispersed medium to vaporize.
See also
Acoustic droplet ejection |
https://en.wikipedia.org/wiki/B-cell%20linker | B-cell linker (BLNK) protein is expressed in B cells and macrophages and plays a large role in B cell receptor signaling. Like all adaptor proteins, BLNK has no known intrinsic enzymatic activity. Its function is to temporally and spatially coordinate and regulate downstream signaling effectors in B cell receptor (BCR) signaling, which is important in B cell development. Binding of these downstream effectors is dependent on BLNK phosphorylation. BLNK is encoded by the BLNK gene and is also known as SLP-65, BASH, and BCA.
Structure and localization
BLNK consists of an N-terminal leucine zipper motif followed by an acidic region, a proline-rich region, and a C-terminal SH2 domain. The leucine zipper motif allows BLNK to localize to the plasma membrane, presumably through coiled-coil interactions with a membrane protein. This leucine zipper motif distinguishes BLNK from lymphocyte cytosolic protein 2, also known as LCP-2 or SLP-76, which plays a similar role in T cell receptor signaling. Although LCP-2 has an N-terminal heptad-like organization of leucine and isoleucine residues like BLNK, it has not been experimentally shown to have a leucine zipper motif. Recruitment of BLNK to the plasma membrane is also achieved by binding of the SH2 domain of BLNK to a non-ITAM phospho-tyrosine on the cytoplasmic domain of CD79A (Igα), which is part of the B cell receptor complex.
Function
BLNK's function and importance in B cell development were first illustrated in BLNK-deficient DT40 cells, a chicken B cell line. These cells showed interrupted B cell development: compared to wild-type DT40 cells, there was no calcium mobilization response upon BCR activation, impaired activation of the mitogen-activated protein (MAP) kinases p38 and JNK, and somewhat inhibited ERK activation. In knockout mice, BLNK deficiency results in a partial block in B cell development, and in humans BLNK deficiency results in a much more profound block in B cell development.
Linker or ada |
https://en.wikipedia.org/wiki/Association%20of%20Teachers%20of%20Mathematics | The Association of Teachers of Mathematics (ATM) was established by Caleb Gattegno in 1950 to encourage the development of mathematics education to be more closely related to the needs of the learner. ATM is a membership organisation representing a community of students, nursery, infant, primary, secondary and tertiary teachers, numeracy consultants, overseas teachers, academics and anybody interested in mathematics education.
Aims
The stated aims of the Association of Teachers of Mathematics are to support the teaching and learning of mathematics by:
encouraging increased understanding and enjoyment of mathematics
encouraging increased understanding of how people learn mathematics
encouraging the sharing and evaluation of teaching and learning strategies and practices
promoting the exploration of new ideas and possibilities
initiating and contributing to discussion of and developments in mathematics education at all levels
Guiding principles
ATM lists as its guiding principles:
The ability to operate mathematically is an aspect of human functioning which is as universal as language itself. Attention needs constantly to be drawn to this fact. Any possibility of intimidating with mathematical expertise is to be avoided.
The power to learn rests with the learner. Teaching has a subordinate role. The teacher has a duty to seek out ways to engage the power of the learner.
It is important to examine critically approaches to teaching and to explore new possibilities, whether deriving from research, from technological developments or from the imaginative and insightful ideas of others.
Teaching and learning are cooperative activities. Encouraging a questioning approach and giving due attention to the ideas of others are attitudes to be encouraged. Influence is best sought by building networks of contacts in professional circles.
Structure
There are about 3500 members, mainly teachers in primary and secondary schools. It is a registered charity and all profits |
https://en.wikipedia.org/wiki/EXL%20100 | The EXL 100 is a computer released in 1984 by the French brand Exelvision, based on the TMS 7020 microprocessor from Texas Instruments. This was an uncommon design choice (at the time almost all home computers used either 6502 or Z80 microprocessors), but it was justified by the fact that the engineering team behind the machine (Jacques Palpacuer, Victor Zebrouck and Christian Petiot) came from Texas Instruments.
It was part of the French government's Computing for All plan, and 9,000 units were used in schools.
The design is unusual compared with similar machines of the time, as it had a separate central processing unit. Two keyboards were available: one with rubber keys and another with a more standard feel. The keyboard and joysticks were connected to the central unit not by cable but by infrared link, and were battery powered. Many extensions were available: a modem, a floppy disk drive, and a 16 KB CMOS RAM expansion powered by an integrated lithium battery.
Its TMS 5220 sound processor was capable of French speech synthesis, another unusual feature.
The machine came with a BASIC version on cartridge named ExelBasic.
Specifications
Release price: 3,190 French Francs
CPU: TMS 7020 at 4.9 MHz
Graphics chip: TMS 3556 (40 x 25 character text mode, 320 x 250 pixel graphics mode, 8 colors)
Sound: TMS 5220 (with speech synthesis in French)
Storage: cartridge port, cassettes, optional floppy disk drive
Memory: 34KB RAM (2KB RAM + 32KB Shared VRAM), 4 to 32KB ROM
Variants: A version with an integrated V23 modem named Exeltel was released in 1986 |
https://en.wikipedia.org/wiki/Antiphospholipid%20syndrome | Antiphospholipid syndrome, or antiphospholipid antibody syndrome (APS or APLS), is an autoimmune, hypercoagulable state caused by antiphospholipid antibodies. APS provokes blood clots (thrombosis) in both arteries and veins as well as pregnancy-related complications such as miscarriage, stillbirth, preterm delivery, and severe preeclampsia. Although the exact etiology of APS is still not clear, genetics is believed to play a key role in the development of the disease. The diagnostic criteria require one clinical event (i.e. thrombosis or pregnancy complication) and two positive blood test results spaced at least three months apart that detect lupus anticoagulant, anti-apolipoprotein antibodies, or anti-cardiolipin antibodies.
Antiphospholipid syndrome can be primary or secondary. Primary antiphospholipid syndrome occurs in the absence of any other related disease. Secondary antiphospholipid syndrome occurs with other autoimmune diseases, such as systemic lupus erythematosus. In rare cases, APS leads to rapid organ failure due to generalised thrombosis; this is termed "catastrophic antiphospholipid syndrome" (CAPS or Asherson syndrome) and is associated with a high risk of death.
Antiphospholipid syndrome often requires treatment with anticoagulant medication such as heparin to reduce the risk of further episodes of thrombosis and improve the prognosis of pregnancy. Warfarin (brand name Coumadin) is not used during pregnancy because it can cross the placenta, unlike heparin, and is teratogenic.
Signs and symptoms
The presence of antiphospholipid antibodies (aPL) in the absence of blood clots or pregnancy-related complications does not indicate APS (see below for the diagnosis of APS).
Antiphospholipid syndrome can cause arterial or venous blood clots, in any organ system, or pregnancy-related complications. In APS patients, the most common venous event is deep vein thrombosis of the lower extremities, and the most common arterial event is stroke. In pregnant women |
https://en.wikipedia.org/wiki/Light%20in%20school%20buildings | Light in school buildings traditionally is from a combination of daylight and electric light to illuminate learning spaces (e.g. classrooms, labs, studios, etc.), hallways, cafeterias, offices and other interior areas. Light fixtures currently in use usually provide students and teachers with satisfactory visual performance, i.e., the ability to read a book, have lunch, or play basketball in a gymnasium. However, classroom lighting may also affect students' circadian systems, which may in turn affect test scores, attendance and behavior.
Exposure to light, or lack thereof, plays a significant role in sleep cycles. All animals, including humans, have evolved circadian rhythms, which respond to the earth's 24-hour cycle. These rhythms include the sleep–wake cycle, hormone production, and core body temperature cycles. The timing of these patterns is set by the 24-hour light–dark cycle. In particular, short-wavelength "blue" light in the daylight spectrum has maximal effect on human circadian rhythms. Research has shown that when these patterns are disrupted, individuals are more susceptible to ailments such as breast cancer, obesity, sleep deprivation, mood disorders, and other health problems.
According to Energy Star, after salaries and benefits, energy costs are the largest operating expense for school districts. Fluorescent lighting systems are the most prevalent sources of illumination in schools. These systems provide low cost, long life, high efficacy, good color, and low levels of noise and flicker. Lighting systems should be designed with respect to the requirements of the activity to be performed. For instance, lighting over a desk should be different than light required in cafeterias or hallways. Current sustainable design guidelines for schools usually focus only on energy-conserving luminaires with consideration only for visual needs. Several aspects of building performance, including lighting, are fundamental in providing an environment that is conduciv |
https://en.wikipedia.org/wiki/Random%20group | In mathematics, random groups are certain groups obtained by a probabilistic construction. They were introduced by Misha Gromov to answer questions such as "What does a typical group look like?"
It so happens that, once a precise definition is given, random groups satisfy some properties with very high probability, whereas other properties fail with very high probability. For instance, very probably random groups are hyperbolic groups. In this sense, one can say that "most groups are hyperbolic".
Definition
The definition of random groups depends on a probabilistic model on the set of possible groups. Various such probabilistic models yield different (but related) notions of random groups.
Any group can be defined by a group presentation involving generators and relations. For instance, the Abelian group ℤ² has a presentation with two generators a and b, and the relation ab = ba, or equivalently aba⁻¹b⁻¹ = e. The main idea of random groups is to start with a fixed number m of group generators a_1, ..., a_m, and to impose relations of the form r = e, where each r is a random word involving the letters a_1, ..., a_m and their formal inverses. To specify a model of random groups is to specify a precise way in which m, the number N of relations, and the random relations themselves are chosen.
Once the random relations r_1, ..., r_N have been chosen, the resulting random group G is defined in the standard way for group presentations, namely: G is the quotient of the free group F_m with generators a_1, ..., a_m by the normal subgroup R of F_m generated by the relations r_1, ..., r_N seen as elements of F_m:

G = F_m / R.
The few-relator model of random groups
The simplest model of random groups is the few-relator model. In this model, a number m of generators and a number N of relations are fixed. Fix an additional parameter ℓ (the length of the relations), which is typically taken very large.
Then, the model consists in choosing the N relations at random, uniformly and independently among all possible reduced words of length at most ℓ involving the letters a_1, ..., a_m and their formal inverses.
This model is especially interesting when |
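The sampling step of the few-relator model — drawing relators uniformly from all reduced words of bounded length — can be sketched in Python. The representation of the letters and their formal inverses as signed integers (+i / −i) is an implementation choice, not part of the model:

```python
import random

def random_reduced_word(m, max_len):
    """Uniformly random reduced word of length <= max_len in the letters
    a_1..a_m and their formal inverses (encoded as +i / -i).
    A word is reduced if no letter is immediately followed by its inverse."""
    # There are 2m * (2m-1)^(k-1) reduced words of length k >= 1,
    # plus the empty word; pick a length proportionally to these counts.
    counts = [1] + [2 * m * (2 * m - 1) ** (k - 1) for k in range(1, max_len + 1)]
    k = random.choices(range(max_len + 1), weights=counts)[0]
    word = []
    for _ in range(k):
        letters = [s * i for i in range(1, m + 1) for s in (1, -1)]
        if word:
            letters.remove(-word[-1])  # forbid cancelling the previous letter
        word.append(random.choice(letters))
    return word

random.seed(0)
relators = [random_reduced_word(m=2, max_len=10) for _ in range(3)]
print(relators)
```

Choosing the length first with the correct weights, then extending letter by letter (2m choices for the first letter, 2m − 1 thereafter), yields the uniform distribution on all reduced words of length at most max_len.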
https://en.wikipedia.org/wiki/Pollen%20tube | A pollen tube is a tubular structure produced by the male gametophyte of seed plants when it germinates. Pollen tube elongation is an integral stage in the plant life cycle. The pollen tube acts as a conduit to transport the male gamete cells from the pollen grain—either from the stigma (in flowering plants) to the ovules at the base of the pistil or directly through ovule tissue in some gymnosperms. In maize, this single cell can grow long enough to traverse the length of the pistil.
Pollen tubes were first discovered by Giovanni Battista Amici in the 19th century.
They are used as a model for understanding plant cell behavior. Research is ongoing to comprehend how the pollen tube responds to extracellular guidance signals to achieve fertilization.
Description
Pollen tubes are produced by the male gametophytes of seed plants. Pollen tubes act as conduits to transport the male gamete cells from the pollen grain—either from the stigma (in flowering plants) to the ovules at the base of the pistil or directly through ovule tissue in some gymnosperms. Pollen tubes are unique to seed plants and their structures have evolved over their history since the Carboniferous period. Pollen tube formation is complex and the mechanism is not fully understood, but is of great interest to scientists because pollen tubes transport the male gametes produced by pollen grains to the female gametophyte. Once a pollen grain has implanted on a compatible stigma, its germination is initiated. During this process, the pollen grain begins to bulge outwards to form a tube-like structure, known as the pollen tube. The pollen tube structure rapidly descends down the length of the style via tip-directed growth, reaching rates of 1 cm/h, whilst carrying two non-motile sperm cells. Upon reaching the ovule the pollen tube ruptures, thereby delivering the sperm cells to the female gametophyte. In flowering plants a double fertilization event occurs. The first fertilization event produces a diplo |
https://en.wikipedia.org/wiki/Great%20deluge%20algorithm | The Great deluge algorithm (GD) is a generic algorithm applied to optimization problems. It is similar in many ways to the hill-climbing and simulated annealing algorithms.
The name comes from the analogy that in a great deluge a person climbing a hill will try to move in any direction that does not get his/her feet wet in the hope of finding a way up as the water level rises.
In a typical implementation of the GD, the algorithm starts with a poor approximation, S, of the optimum solution. A numerical value called the badness is computed based on S and it measures how undesirable the initial approximation is. The higher the value of badness the more undesirable is the approximate solution. Another numerical value called the tolerance is calculated based on a number of factors, often including the initial badness.
A new approximate solution S', called a neighbour of S, is calculated based on S. The badness of S', written b', is computed and compared with the tolerance. If b' is lower than the tolerance, then the algorithm restarts with S := S' and tolerance := decay(tolerance), where decay is a function that lowers the tolerance (representing a rise in water levels). If b' exceeds the tolerance, a different neighbour S* of S is chosen and the process repeated. If all the neighbours of S produce approximate solutions beyond the tolerance, the algorithm terminates and S is put forward as the best approximate solution obtained.
See also
de:Gunter Dueck |
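The loop described above can be sketched in Python. The quadratic badness function, the ±1 neighbour move, and the multiplicative decay schedule are illustrative assumptions; for simplicity, a rejected neighbour is re-sampled rather than enumerated, and a fixed step budget replaces the "all neighbours exceed tolerance" termination test:

```python
import random

def great_deluge(initial, badness, neighbour, tolerance, decay, max_steps=20000):
    """Generic great deluge search: accept a neighbour only while its
    badness stays below the current tolerance (the "water level")."""
    s = initial
    best = s
    for _ in range(max_steps):
        candidate = neighbour(s)
        if badness(candidate) < tolerance:
            s = candidate
            tolerance = decay(tolerance)  # the water level rises
            if badness(s) < badness(best):
                best = s
    return best

# Toy problem: minimise badness(x) = (x - 7)^2 over the integers.
random.seed(1)
result = great_deluge(
    initial=100,
    badness=lambda x: (x - 7) ** 2,
    neighbour=lambda x: x + random.choice([-1, 1]),
    tolerance=(100 - 7) ** 2 + 1,
    decay=lambda t: t * 0.999,
)
print(result)
```

Because the tolerance decays only on accepted moves, the search wanders freely at first and is then ratcheted toward the optimum as the water level rises.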
https://en.wikipedia.org/wiki/VectorBase | VectorBase is one of the five Bioinformatics Resource Centers (BRC) funded by the National Institute of Allergy and Infectious Diseases (NIAID), a component of the National Institutes of Health (NIH), which is an agency of the United States Department of Health and Human Services. VectorBase is focused on invertebrate vectors of human pathogens, working with the sequencing centers and the research community to curate vector genomes (mainly genome annotation).
Genomes covered in the VectorBase database
Aedes aegypti
Anopheles gambiae
Culex quinquefasciatus
Ixodes scapularis
Pediculus humanus
Rhodnius prolixus
Tools available through the VectorBase site
Genome browser
Community annotation system
Microarray and gene expression repository
Controlled vocabularies for anatomy, insecticide resistance and vector-borne diseases (malaria and dengue fever)
BLAST searches for all covered genomes
See also
Vectors in epidemiology |
https://en.wikipedia.org/wiki/Eugenia%20Malinnikova | Eugenia Malinnikova (born 23 April 1974) is a mathematician, winner of the 2017 Clay Research Award which she shared with Aleksandr Logunov "in recognition of their introduction of a novel geometric combinatorial method to study doubling properties of solutions to elliptic eigenvalue problems".
She competed three times in the International Mathematical Olympiad, winning three Gold medals (including two perfect scores). She is a member of the International Mathematical Olympiad Hall of Fame.
She received her PhD from St. Petersburg State University in 1999, under the supervision of Viktor Petrovich Havin. She currently works as a professor of mathematics at Stanford University, after previously working at the Norwegian University of Science and Technology. In 2018 she was inducted into the Norwegian Academy of Science and Letters. She is also a member of the Royal Norwegian Society of Sciences and Letters and the Norwegian Academy of Technological Sciences. |
https://en.wikipedia.org/wiki/Hericium%20clathroides | Hericium clathroides is a species of an edible fungus in the Hericiaceae family.
Characteristics
This species was distinguished by some authors from H. coralloides based on its substrate - beech wood (rarely other hardwoods). Another feature used to distinguish the two species is that H. coralloides grows hymenophore spines in tufts. Young fruitbodies are edible.
Taxonomy
The species was described under the name Hydnum clathroides by Peter Simon Pallas in the second volume of Reise durch verschiedene Provinzen des rußischen Reichs, published in 1773. It was moved to the genus Hericium by Christiaan Hendrik Persoon in 1797. The sanctioning work for this taxon is the first volume of Systema Mycologicum by Elias Magnus Fries, published in 1821, where it was classified in the division Merisma within the genus Hydnum.
In 1959, Rudolf Arnold Maas Geesteranus demonstrated that the epithet had been misapplied when it was sanctioned by Elias Fries, and proposed a change in the treatment of the taxon H. clathroides. Until then, specimens found on fir (Abies) had traditionally been given this name, but Maas Geesteranus assigned it to the species fruiting on beech (Fagus). For the species developing on fir, he proposed following Scopoli's approach and classifying it as H. coralloides. Some mycologists, including Theodore Louis Jahn, accepted the new definitions. A 1983 analysis of European Hericium (including interbreeding experiments between the beech and fir species) cast doubt on the separation of the two species and proposed a neotype for H. coralloides. Some later authors followed this approach. Index Fungorum considers this taxon verified. |
https://en.wikipedia.org/wiki/GABRP | Gamma-aminobutyric acid receptor subunit pi is a protein that in humans is encoded by the GABRP gene.
The gamma-aminobutyric acid (GABA) A receptor is a multisubunit chloride channel that mediates the fastest inhibitory synaptic transmission in the central nervous system. The subunit encoded by this gene is expressed in several non-neuronal tissues including the uterus and ovaries. This subunit can assemble with known GABA A receptor subunits, and the presence of this subunit alters the sensitivity of recombinant receptors to modulatory agents such as pregnanolone.
See also
GABAA receptor |
https://en.wikipedia.org/wiki/Quantum%20supremacy | In quantum computing, quantum supremacy or quantum advantage is the goal of demonstrating that a programmable quantum computer can solve a problem that no classical computer can solve in any feasible amount of time, irrespective of the usefulness of the problem. The term was coined by John Preskill in 2012, but the concept dates back to Yuri Manin's 1980 and Richard Feynman's 1981 proposals of quantum computing.
Conceptually, quantum supremacy involves both the engineering task of building a powerful quantum computer and the computational-complexity-theoretic task of finding a problem that can be solved by that quantum computer and has a superpolynomial speedup over the best known or possible classical algorithm for that task.
Examples of proposals to demonstrate quantum supremacy include the boson sampling proposal of Aaronson and Arkhipov, D-Wave's specialized frustrated cluster loop problems, and sampling the output of random quantum circuits. The output distributions that are obtained by making measurements in boson sampling or quantum random circuit sampling are flat, but structured in a way so that one cannot classically efficiently sample from a distribution that is close to the distribution generated by the quantum experiment. For this conclusion to be valid, only very mild assumptions in the theory of computational complexity have to be invoked. In this sense, quantum random sampling schemes can have the potential to show quantum supremacy.
A notable property of quantum supremacy is that it can be feasibly achieved by near-term quantum computers, since it does not require a quantum computer to perform any useful task or use high-quality quantum error correction, both of which are long-term goals. Consequently, researchers view quantum supremacy as primarily a scientific goal, with relatively little immediate bearing on the future commercial viability of quantum computing. Due to unpredictable possible improvements in classical computers and algorithms, |
https://en.wikipedia.org/wiki/Quantum%20money | A quantum money scheme is a quantum cryptographic protocol that creates and verifies banknotes that are resistant to forgery. It is based on the principle that quantum states cannot be perfectly duplicated (the no-cloning theorem), making it impossible to forge quantum money by including quantum systems in its design.
The concept was first proposed by Stephen Wiesner circa 1970 (though it remained unpublished until 1983), and later influenced the development of quantum key distribution protocols used in quantum cryptography.
Wiesner's quantum money scheme
Wiesner's quantum money scheme was first published in 1983. A formal proof of security, using techniques from semidefinite programming, was given in 2013.
In addition to a unique serial number on each bank note (these notes are actually more like cheques, since a verification step with the bank is required for each transaction), there is a series of isolated two-state quantum systems. For example, photons in one of four polarizations could be used: at 0°, 45°, 90° and 135° to some axis, which is referred to as the vertical. Each of these is a two-state system in one of two bases: the horizontal basis has states with polarizations at 0° and 90° to the vertical, and the diagonal basis has states at 45° and 135° to the vertical.
At the bank, there is a record of all the polarizations and the corresponding serial numbers. On the bank note, the serial number is printed, but the polarizations are kept secret. The bank can always verify the note by measuring the polarization of each photon in the correct basis without introducing any disturbance. A would-be counterfeiter, however, does not know which basis each photon was prepared in: even knowing the two possible bases, whenever he chooses the wrong one to measure a photon, the measurement changes that photon's polarization, and the copy he prepares will carry this wrong polarization.
For each photon, the would-be c |
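The measurement statistics behind this argument can be illustrated with a toy classical simulation that tracks outcome probabilities rather than actual quantum states (the 'H'/'D' basis labels, the single-photon notes, and the forge-then-verify protocol below are illustrative assumptions). A forger who guesses each basis passes verification with probability 3/4 per photon:

```python
import random

def make_secret(n):
    """Bank's record for one note: a random basis ('H' or 'D') and a
    random bit for each of the n photons."""
    return [(random.choice("HD"), random.randrange(2)) for _ in range(n)]

def counterfeit(secret):
    """Forger measures each photon in a guessed basis and prepares a copy
    in that basis with the observed outcome; a wrong-basis measurement
    yields a uniformly random result."""
    copy = []
    for basis, bit in secret:
        guess = random.choice("HD")
        outcome = bit if guess == basis else random.randrange(2)
        copy.append((guess, outcome))
    return copy

def bank_accepts(secret, note):
    """Bank re-measures each photon of the note in the true basis; a
    photon prepared in the wrong basis collapses to a random bit."""
    for (basis, bit), (prep_basis, prep_bit) in zip(secret, note):
        observed = prep_bit if prep_basis == basis else random.randrange(2)
        if observed != bit:
            return False
    return True

random.seed(42)
trials = 20000
passes = 0
for _ in range(trials):
    secret = make_secret(1)
    passes += bank_accepts(secret, counterfeit(secret))
rate = passes / trials
print(rate)  # close to 3/4 for a single photon
```

With a correct basis guess (probability 1/2) the copy always verifies; with a wrong guess it verifies with probability 1/2, giving 1/2 + 1/2 · 1/2 = 3/4 per photon, so a note with n photons is forged successfully with probability only (3/4)^n.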
https://en.wikipedia.org/wiki/Plant%20stem%20cell |
Plant stem cells are innately undifferentiated cells located in the meristems of plants. Plant stem cells serve as the origin of plant vitality, as they maintain themselves while providing a steady supply of precursor cells to form differentiated tissues and organs in plants. Two distinct areas of stem cells are recognised: the apical meristem and the lateral meristem.
Plant stem cells are characterized by two distinctive properties: the ability to create all differentiated cell types and the ability to self-renew such that the number of stem cells is maintained. Plant stem cells never undergo an aging process but indefinitely give rise to new specialized and unspecialized cells, and they have the potential to grow into any organ, tissue, or cell in the plant body. Thus they are totipotent cells equipped with regenerative powers that facilitate plant growth and the production of new organs throughout the plant's lifetime.
Unlike animals, plants are immobile. As plants cannot escape danger by moving away, they need a special mechanism to withstand various, and sometimes unforeseen, environmental stresses. What empowers them to withstand harsh external influences and preserve life is their stem cells. In fact, plants comprise some of the oldest and largest living organisms on earth, including Bristlecone Pines in California, U.S. (4,842 years old), and the Giant Sequoia in mountainous regions of California, U.S. (87 meters in height and 2,000 tons in weight). This is possible because they have a modular body plan that enables them to survive substantial damage by initiating continuous and repetitive formation of new structures and organs such as leaves and flowers.
Plant stem cells are also characterized by their location in specialized structures called meristematic tissues, which are located in the root apical meristem (RAM), the shoot apical meristem (SAM), and the vascular system ((pro)cambium or vascular meristem).
Research and development
Traditionally, plant stem ce |
https://en.wikipedia.org/wiki/Tunnel%20network | In transport, tunnels can be connected together to form a tunnel network. These can be used in mining to reach ore below ground, in cities for underground rapid transit systems, in sewer systems, in warfare to avoid enemy detection or attacks, as maintenance access routes beneath sites with high ground-traffic such as airports and amusement parks, or to extend public living areas or commercial access while avoiding outdoor weather.
In warfare
Sieges
Tunnel networks were sometimes developed during siege warfare, dating back to classical antiquity. A single tunnel dug to undermine a wall might be detected by the defenders and met with counter-tunnels, leading to tunnel warfare. Defenders might also create a series of underground listening posts to preempt such mining attacks.
Trench systems
Any time the use of trenches becomes extensive, this naturally leads to connecting them with tunnel networks for safe passage both along the trench lines and with rear areas. In World War I, when given enough time and resources, the underground components of the defenses could become more extensive than those above ground.
The French Maginot Line, constructed from 1929 to 1939, was a chain of fortresses, bunkers, retractable turrets, outposts, obstacles, and sunken artillery emplacements, all linked by an extensive shell-proof tunnel network. It included underground barracks, shelters, ammo dumps and depots, and even had its own underground narrow gauge railways.
In Vietnam
The tunnels of Củ Chi are an immense network of connecting tunnels located in the Củ Chi District of Ho Chi Minh City (Saigon), Vietnam, and are part of a much larger network of tunnels that underlie much of the country. The Củ Chi tunnels were the location of several military campaigns during the Vietnam War, and were the Viet Cong's base of operations for the Tết Offensive in 1968.
The tunnels were used by Viet Cong soldiers as hiding spots during combat, as well as serving |
https://en.wikipedia.org/wiki/CJK%20Symbols%20and%20Punctuation | CJK Symbols and Punctuation is a Unicode block containing symbols and punctuation used for writing the Chinese, Japanese and Korean languages. It also contains one Chinese character.
Block
The block has variation sequences defined for East Asian punctuation positional variants. They use U+FE00 (VS01) and U+FE01 (VS02):
Chinese character
The CJK Symbols and Punctuation block contains one Chinese character: 〇 (U+3007 IDEOGRAPHIC NUMBER ZERO). Although it is not covered under "Unified Ideographs", it is treated as a CJK character for all other intents and purposes.
Emoji
The CJK Symbols and Punctuation block contains two emoji: U+3030 〰 WAVY DASH and U+303D 〽 PART ALTERNATION MARK.
The block has four standardized variants defined to specify emoji-style (U+FE0F VS16) or text presentation (U+FE0E VS15) for the two emoji, both of which default to a text presentation.
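The presentation mechanism can be demonstrated directly, since the variation selectors are ordinary code points appended after the base character; a small Python sketch:

```python
# The two emoji of the block and the two presentation selectors.
WAVY_DASH = "\u3030"              # 〰
PART_ALTERNATION_MARK = "\u303D"  # 〽
VS15 = "\uFE0E"  # variation selector-15: request text presentation
VS16 = "\uFE0F"  # variation selector-16: request emoji presentation

for base in (WAVY_DASH, PART_ALTERNATION_MARK):
    text_style = base + VS15   # explicit text style (also the default)
    emoji_style = base + VS16  # explicit emoji style
    print(f"U+{ord(base):04X}: text={text_style!r} emoji={emoji_style!r}")
```

Whether a renderer honors the selectors depends on the platform's fonts; the code points themselves are fixed by the Unicode standardized variation sequences.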
History
In Unicode 1.0.1, two changes were made to this block in order to make Unicode 1.0.1 a proper subset of ISO 10646:
U+3004 IDEOGRAPHIC DITTO MARK was merged with U+4EDD (仝) in the CJK Unified Ideographs block, freeing up code point U+3004
U+32FF JAPANESE INDUSTRIAL STANDARD SYMBOL was moved from the Enclosed CJK Letters and Months block to U+3004 (〄)
The following Unicode-related documents record the purpose and process of defining specific characters in the CJK Symbols and Punctuation block:
See also
Hangul Jamo (Unicode block)
Ideographic Symbols and Punctuation |
https://en.wikipedia.org/wiki/Bacterial%20cell%20structure | The bacterium, despite its simplicity, contains a well-developed cell structure which is responsible for some of its unique biological structures and pathogenicity. Many structural features are unique to bacteria and are not found among archaea or eukaryotes. Because of the simplicity of bacteria relative to larger organisms and the ease with which they can be manipulated experimentally, the cell structure of bacteria has been well studied, revealing many biochemical principles that have been subsequently applied to other organisms.
Cell morphology
Perhaps the most elemental structural property of bacteria is their morphology (shape). Typical examples include:
coccus (circle or spherical)
bacillus (rod-like)
coccobacillus (between a sphere and a rod)
spiral (corkscrew-like)
filamentous (elongated)
Cell shape is generally characteristic of a given bacterial species, but can vary depending on growth conditions. Some bacteria have complex life cycles involving the production of stalks and appendages (e.g. Caulobacter) and some produce elaborate structures bearing reproductive spores (e.g. Myxococcus, Streptomyces). Bacteria generally form distinctive cell morphologies when examined by light microscopy and distinct colony morphologies when grown on Petri plates.
Perhaps the most obvious structural characteristic of bacteria is (with some exceptions) their small size. For example, Escherichia coli cells, an "average" sized bacterium, are about 2 µm (micrometres) long and 0.5 µm in diameter, with a cell volume of 0.6–0.7 µm³. This corresponds to a wet mass of about 1 picogram (pg), assuming that the cell consists mostly of water. The dry mass of a single cell can be estimated as 23% of the wet mass, amounting to 0.2 pg. About half of the dry mass of a bacterial cell consists of carbon, and also about half of it can be attributed to proteins. Therefore, a typical fully grown 1-liter culture of Escherichia coli (at an optical density of 1.0, corresponding to c. 10⁹ |
https://en.wikipedia.org/wiki/Molecular%20Biology%20of%20the%20Cell | Molecular Biology of the Cell is a biweekly peer-reviewed scientific journal published by the American Society for Cell Biology. It covers research on the molecular basis of cell structure and function. According to the Journal Citation Reports, the journal has a 2012 impact factor of 4.803. It was originally established as Cell Regulation in 1989.
The Editor-in-Chief is Matthew Welch (University of California, Berkeley). Previous Editors-in-Chief include: Erkki Ruoslahti (of Cell Regulation) and David Botstein and Keith Yamamoto (of MBoC) and their successors Sandra Schmid and David Drubin. |
https://en.wikipedia.org/wiki/Chemistry%20of%20ascorbic%20acid | Ascorbic acid is an organic compound with formula , originally called hexuronic acid. It is a white solid, but impure samples can appear yellowish. It dissolves well in water to give mildly acidic solutions. It is a mild reducing agent.
Ascorbic acid exists as two enantiomers (mirror-image isomers), commonly denoted "" (for "levo") and "" (for "dextro"). The isomer is the one most often encountered: it occurs naturally in many foods, and is one form ("vitamer") of vitamin C, an essential nutrient for humans and many animals. Deficiency of vitamin C causes scurvy, formerly a major disease of sailors in long sea voyages. It is used as a food additive and a dietary supplement for its antioxidant properties. The "" form can be made via chemical synthesis but has no significant biological role.
History
The antiscorbutic properties of certain foods were demonstrated in the 18th century by James Lind. In 1907, Axel Holst and Theodor Frølich discovered that the antiscorbutic factor was a water-soluble chemical substance, distinct from the one that prevented beriberi. Between 1928 and 1932, Albert Szent-Györgyi isolated a candidate for this substance, which he called "hexuronic acid", first from plants and later from animal adrenal glands. In 1932, Charles Glen King confirmed that it was indeed the antiscorbutic factor.
In 1933, sugar chemist Walter Norman Haworth, working with samples of "hexuronic acid" that Szent-Györgyi had isolated from paprika and sent him the previous year, deduced the correct structure and optical-isomeric nature of the compound, and in 1934 reported its first synthesis. In reference to the compound's antiscorbutic properties, Haworth and Szent-Györgyi proposed the name "a-scorbic acid" for the compound, and later specifically -ascorbic acid. For this work, the 1937 Nobel Prizes in Chemistry and in Physiology or Medicine were awarded to Haworth and Szent-Györgyi, respectively.
Independently, ascorbic acid was synthetized |
https://en.wikipedia.org/wiki/Taste%20detection%20threshold | Taste detection threshold is the minimum concentration of a flavoured substance detectable by the sense of taste. Sweetness detection thresholds are usually measured relative to that of sucrose, sourness relative to dilute hydrochloric acid, saltiness relative to table salt (NaCl), and bitterness to quinine. These substances have a reference index of 1. Thresholds for bitter substances can be considerably lower than those for other flavoured substances; this may be due to the importance of ingesting large amounts of energy-rich or salty food while avoiding even small quantities of poisonous substances, which are often bitter.
Variation in sensitivity among individuals plays a role in dietary selection and there is evidence that diet reciprocally affects taste sensitivity. One study found that non-vegetarians had less sensitivity to sweetness while vegetarians had higher sensitivity to caffeine, a bitter substance.
See also
Odor detection threshold |
https://en.wikipedia.org/wiki/Boaz%20Tsaban | Boaz Tsaban (born February 1973) is an Israeli mathematician on the faculty of Bar-Ilan University. His research interests include selection principles within set theory and nonabelian cryptology, within mathematical cryptology.
Biography
Boaz Tsaban grew up in Or Yehuda, a city near Tel Aviv. At the age of 16 he was selected with other high school students to attend the first cycle of a special preparation program in mathematics, at Bar-Ilan University, being admitted to regular mathematics courses at the University a year later. He completed his B.Sc., M.Sc. and Ph.D. degrees with highest distinctions.
Two years as a post-doctoral fellow at Hebrew University were followed by a three-year Koshland Fellowship at the Weizmann Institute of Science before he joined the Department of Mathematics, Bar-Ilan University in 2007.
Academic career
In the field of selection principles, Tsaban devised the method of omission of intervals for establishing covering properties of sets of real numbers that have certain combinatorial structures. In nonabelian cryptology he devised the algebraic span method that solved a number of computational problems that underlie a number of proposals for nonabelian public-key cryptographic schemes (such as the commutator key exchange).
Awards and recognition
Tsaban's doctoral dissertation, supervised by Hillel Furstenberg, won, with Irit Dinur, the Nessyahu prize for the best Ph.D. in mathematics in Israel in 2003.
In 2009 he won the Wolf Foundation Krill Prize
for Excellence in Scientific Research. |
https://en.wikipedia.org/wiki/Carrenza | Carrenza was a cloud-computing company based in London, United Kingdom. The company was acquired by Six Degrees in 2016.
Operations
Carrenza was a UK-based IT company that provided cloud computing technologies. It offered a range of public cloud, private cloud and hybrid cloud services, including Infrastructure as a Service (IaaS), Platform as a Service (PaaS), enterprise application integration and system integration. Carrenza partnered with several enterprise IT providers and was an accredited VMware Enterprise Service Partner and HP (Hewlett-Packard) Cloud Agile Partner.
The company was based on Commercial Street, in the heart of the East London Tech City district, which is host to a large number of technology companies.
History
Carrenza was formed in 2001 as a consultancy by chief executive and founder Dan Sutherland. It began trading in 2004 and launched its first enterprise cloud computing platform in 2006, becoming one of the first companies in Europe to provide this type of hosting service.
In 2009, it formed a partnership with Comic Relief and its affiliated campaigns Red Nose Day and Sport Relief to provide IT infrastructure services to the charity, an arrangement that won industry recognition.
In 2013 it launched its first overseas services, with a mainland Europe cloud node based in Amsterdam.
Partnerships and customers
Carrenza had formed partnerships with a range of IT providers. It was one of the first companies in Europe to become an HP Cloud Agile partner, using HP blade servers and HP 3PAR SAN technology to power its cloud computing services. The company's products also used VMware vCloud IaaS tools, and it took part in the VMware lighthouse initiative, helping develop the next generation of VMware products and services.
Other technology companies that Carrenza worked closely with include Cisco, for enterprise security and load-balancing services, and Oracle. The company was the first to deploy Oracle Database 11g stretched RAC in pro |
https://en.wikipedia.org/wiki/Model%20steam%20engine | A model steam engine is a small steam engine not built for serious use. Often they are built as an educational toy for children, in which case it is also called a toy steam engine, or for live steam enthusiasts. Between the 18th and early 20th centuries, demonstration models were also in use at universities and engineering schools, frequently designed and built by students as part of their curriculum.
Model steam engines have been made in many forms by a number of manufacturers, but building model steam engines from scratch is popular among adult steam enthusiasts, although this generally requires access to a lathe and/or milling machine. Those without a lathe can alternatively purchase prefabricated parts.
History
In the late 19th century, manufacturers such as German toy company Bing introduced the two main types of model/toy steam engines, namely stationary engines with accessories that were supposed to mimic a 19th-century factory, and mobile engines such as steam locomotives and boats. Later, especially in the early 20th century, steam rollers, fire engines, traction engines and steam wagons began to appear. At the peak of their popularity, around the mid 20th century, there were hundreds of companies making steam toys and models. Today, companies such as Wilesco (Germany), Mamod (UK), and Jensen (US) continue to produce model/toy steam engines.
Design features
Toy steam engines will commonly have fewer features (such as mechanical lubricators or governors), and operate at lower pressures, while model steam engines will place more emphasis on similarity to life-sized engines. Manufacturers such as Wilesco sell both simple toy engines for beginners (e.g. the D3) and more intricate model engines that are meant to be used to drive things like workshops or boats.
Model steam engines typically use hexamine fuel tablets, methylated spirits (aka meths or denatured alcohol), butane gas, or electricity to heat the boiler. Cylinders are either oscillating (single-a |
https://en.wikipedia.org/wiki/Load%20%28computing%29 | In UNIX computing, the system load is a measure of the amount of computational work that a computer system performs. The load average represents the average system load over a period of time. It conventionally appears in the form of three numbers which represent the system load during the last one-, five-, and fifteen-minute periods.
Unix-style load calculation
All Unix and Unix-like systems generate a dimensionless metric of three "load average" numbers in the kernel. Users can easily query the current result from a Unix shell by running the uptime command:
$ uptime
14:34:03 up 10:43, 4 users, load average: 0.06, 0.11, 0.09
The w and top commands show the same three load average numbers, as do a range of graphical user interface utilities.
In operating systems based on the Linux kernel, this information can be easily accessed by reading the /proc/loadavg file.
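The three averages can be pulled out of a /proc/loadavg-style line with a few lines of string handling (shown here with a hard-coded sample rather than a live read, so it runs anywhere):

```python
def parse_loadavg(line: str) -> tuple[float, float, float]:
    """Return the 1-, 5- and 15-minute load averages from the first
    three whitespace-separated fields of a /proc/loadavg-style line."""
    one, five, fifteen = (float(field) for field in line.split()[:3])
    return one, five, fifteen

sample = "0.06 0.11 0.09 1/437 21187"   # typical /proc/loadavg contents
print(parse_loadavg(sample))            # (0.06, 0.11, 0.09)
```

The remaining fields (running/total tasks and last PID) are Linux-specific and ignored here.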
To explore this kind of information in depth, according to Linux's Filesystem Hierarchy Standard, architecture-dependent information is exposed in the file /proc/stat.
An idle computer has a load number of 0 (the idle process is not counted). Each process using or waiting for CPU (the ready queue or run queue) increments the load number by 1. Each process that terminates decrements it by 1. Most UNIX systems count only processes in the running (on CPU) or runnable (waiting for CPU) states. However, Linux also includes processes in uninterruptible sleep states (usually waiting for disk activity), which can lead to markedly different results if many processes remain blocked in I/O due to a busy or stalled I/O system. This, for example, includes processes blocking due to an NFS server failure or too slow media (e.g., USB 1.x storage devices). Such circumstances can result in an elevated load average, which does not reflect an actual increase in CPU use (but still gives an idea of how long users have to wait).
Systems calculate the load average as the exponentially damped/weighted moving average o |
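The exponentially damped moving average can be sketched in a few lines. This is a simplified floating-point model of the sampling scheme commonly described for the Linux kernel (one sample roughly every five seconds), not the kernel's fixed-point implementation:

```python
import math

def update_load(load: float, nrun: int, period_s: float,
                interval_s: float = 5.0) -> float:
    """One damping step: `nrun` is the current run-queue length and
    `period_s` the averaging window (60, 300 or 900 seconds)."""
    e = math.exp(-interval_s / period_s)
    return load * e + nrun * (1.0 - e)

load_1min = 0.0
for _ in range(720):                    # one simulated hour at 2 runnable tasks
    load_1min = update_load(load_1min, 2, 60.0)
print(round(load_1min, 3))              # converges toward 2.0
```

With a constant run-queue length the average converges to that length; shorter periods converge faster, which is why the one-minute figure reacts first.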
https://en.wikipedia.org/wiki/Alpha%20Microsystems | Alpha Microsystems, Inc., often shortened to Alpha Micro, was an American computer company founded in 1977 in Costa Mesa, California, by John French, Dick Wilcox and Bob Hitchcock. During the dot-com boom, the company changed its name to AlphaServ, then NQL Inc., reflecting its pivot toward being a provider of Internet software. However, the company soon reverted to its original Alpha Microsystems name after the dot-com bubble burst.
Products
The first Alpha Micro computer was the S-100 AM-100, based upon the WD16 microprocessor chipset from Western Digital. As of 1982, AM-100/L and the AM-1000 were based on the Motorola 68000 and succeeding processors, though Alpha Micro swapped several addressing lines to create byte-ordering compatibility with their earlier processor.
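The byte-ordering difference that the address-line swap papered over can be illustrated with Python's struct module (a generic illustration of endianness, not a model of Alpha Micro's hardware trick):

```python
import struct

value = 0x12345678
big_endian = struct.pack(">I", value)      # Motorola 68000 convention
little_endian = struct.pack("<I", value)   # x86/PC convention

print(big_endian.hex())      # 12345678
print(little_endian.hex())   # 78563412
```

The same 32-bit value is stored as mirror-image byte sequences, which is why data written under one convention is garbled when read naively under the other.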
Early peripherals included standard computer terminals (such models as Soroc, Hazeltine 1500, and Wyse WY50), Fortran punch card readers, 100 baud rate acoustic coupler modems (later upgraded to 300 baud modems), and 10 MB CDC Hawk hard drives with removable disk packs.
The company's primary claim to fame was selling inexpensive minicomputers that provided multi-user power using a proprietary operating system called AMOS (Alpha Micro Operating System). The operating system on the 68000 machines was called AMOS/L. The operating system had major similarities to the operating system of the DEC DECsystem-10. This may not be coincidental; legend has it that the founders based their operating system on "borrowed" source code from DEC, and DEC, perceiving the same, unsuccessfully tried to sue Alpha Micro over the similarities in 1984.
As Motorola stopped developing their 68000 product, Alpha Micro started to move to the x86 CPU family, used in common PCs. This was initially done with the Falcon cards, allowing standard DOS and later Windows-based PCs to run AMOS applications on the 68000-series CPU on the Falcon card. The work done on AMPC became the fo |
https://en.wikipedia.org/wiki/Country%20code | A country code is a short alphanumeric identification code for countries and dependent areas. Its primary use is in data processing and communications. Several identification systems have been developed.
The term country code frequently refers to ISO 3166-1 alpha-2, as well as the telephone country code, which is embodied in the E.164 recommendation by the International Telecommunication Union (ITU).
ISO 3166-1
The standard ISO 3166-1 defines short identification codes for most countries and dependent areas:
ISO 3166-1 alpha-2: two-letter code
ISO 3166-1 alpha-3: three-letter code
ISO 3166-1 numeric: three-digit code
The two-letter codes are used as the basis for other codes and applications, for example,
for ISO 4217 currency codes
with deviations, for country code top-level domain names (ccTLDs) on the Internet: list of Internet TLDs.
Other applications are defined in ISO 3166-1 alpha-2.
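The three ISO 3166-1 code forms line up one-to-one per country; a toy lookup table (a handful of illustrative entries, not the full standard) shows the shape of the data:

```python
# alpha-2 -> (alpha-3, numeric); a small illustrative subset of ISO 3166-1.
ISO_3166_1 = {
    "DE": ("DEU", "276"),
    "FR": ("FRA", "250"),
    "JP": ("JPN", "392"),
    "US": ("USA", "840"),
}

def alpha2_to_alpha3(alpha2: str) -> str:
    """Map a two-letter code to its three-letter counterpart."""
    return ISO_3166_1[alpha2.upper()][0]

print(alpha2_to_alpha3("jp"))   # JPN
```

Note that derived applications such as ccTLDs usually reuse the alpha-2 form lowercased, but with the deviations mentioned above, so a real implementation should consult the authoritative lists rather than assume the mapping.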
ITU country codes
In telecommunication, a country code, or international subscriber dialing (ISD) code, is a telephone number prefix used in international direct dialing (IDD) and for destination routing of telephone calls to a country other than the caller's. A country or region with an autonomous telephone administration must apply for membership in the International Telecommunication Union (ITU) to participate in the international public switched telephone network (PSTN). Country codes are defined by the ITU-T section of the ITU in standards E.123 and E.164.
Country codes constitute the international telephone numbering plan, and are dialed only when calling a telephone number in another country. They are dialed before the national telephone number. International calls require at least one additional prefix to be dialed before the country code, to connect the call to international circuits: the international call prefix. When printing telephone numbers, this is indicated by a plus sign (+) in front of a complete international telephone number, per recommendation E.164 by the |
https://en.wikipedia.org/wiki/Benedikt%20L%C3%B6we | Benedikt Löwe (born 1972) is a German mathematician and logician working at the
universities of Amsterdam, Hamburg, and Cambridge.
He is known for his work on mathematical logic and the foundations of mathematics, as well as for initiating the interdisciplinary conference series Foundations of the Formal Sciences (FotFS; 1999–2013) and Computability in Europe (CiE; since 2005).
Biography
Löwe studied mathematics and philosophy at the universities of Hamburg, Tübingen, HU Berlin, and Berkeley. In 2001, he completed his PhD entitled Blackwell Determinacy about determinacy under supervision of Donald A. Martin and Ronald Björn Jensen.
He has worked at the Institute for Logic, Language and Computation of the University of Amsterdam since 2003 and was appointed professor for mathematical logic and interdisciplinary applications of logic at the University of Hamburg in 2009.
Currently, he is also an extraordinary fellow at Churchill College of the University of Cambridge.
Löwe was Managing Editor of the journal Mathematical Logic Quarterly from 2011 to 2022.
He is the Secretary General of the Division for Logic, Methodology and Philosophy of Science and Technology of the International Union of History and Philosophy of Science and Technology
and a member of the International Academy for Philosophy of Science
and the Academia Europaea.
From 2012 to 2022, he was the President of the German Association for Mathematical Logic and for Basic Research in the Exact Sciences (DVMLG).
Co-edited Volumes (a selection)
2006. Logical approaches to computational barriers : Second Conference on Computability in Europe, CiE 2006, Swansea, UK, June 30 – July 5, 2006; proceedings. Co-edited with Arnold Beckmann, Ulrich Berger and John V. Tucker.
2008. Games, scales, and Suslin cardinals. Co-edited with Alexander S. Kechris and John R. Steel. Cambridge : Cambridge University
2008. Logic and theory of algorithms : 4th Conference on Computability in Europe, CiE 2008, Athens, Greece, June |
https://en.wikipedia.org/wiki/Moment%20matrix | In mathematics, a moment matrix is a special symmetric square matrix whose rows and columns are indexed by monomials. The entries of the matrix depend on the product of the indexing monomials only (cf. Hankel matrices).
Moment matrices play an important role in polynomial fitting, polynomial optimization (since positive semidefinite moment matrices correspond to polynomials which are sums of squares) and econometrics.
Application in regression
A multiple linear regression model can be written as
y_t = β₁x_{t1} + β₂x_{t2} + … + β_k x_{tk} + u_t,
where y_t is the explained variable, x_{t1}, …, x_{tk} are the explanatory variables, u_t is the error, and β₁, …, β_k are unknown coefficients to be estimated. Given T observations (t = 1, …, T), we have a system of T linear equations that can be expressed in matrix notation
or
Y = Xβ + U,
where Y and U are each a vector of dimension T, X is the design matrix of order T × k, and β is a vector of dimension k. Under the Gauss–Markov assumptions, the best linear unbiased estimator of β is the linear least squares estimator b = (XᵀX)⁻¹XᵀY, involving the two moment matrices XᵀX and XᵀY defined as
XᵀX = Σ_{t=1}^{T} x_t x_tᵀ and XᵀY = Σ_{t=1}^{T} x_t y_t,
where XᵀX is a square normal matrix of dimension k × k, and XᵀY is a vector of dimension k.
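Numerically, the two moment matrices and the resulting least-squares estimator can be formed directly. A small NumPy sketch with synthetic data (the coefficient values are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
T, k = 200, 3
# design matrix: an intercept column plus two random regressors (T x k)
X = np.column_stack([np.ones(T), rng.normal(size=(T, k - 1))])
beta = np.array([1.0, 2.0, -0.5])
y = X @ beta + 0.01 * rng.normal(size=T)   # small noise term

XtX = X.T @ X      # k x k moment matrix
Xty = X.T @ y      # k-vector of cross moments

b = np.linalg.solve(XtX, Xty)  # least-squares estimator b = (X'X)^{-1} X'y
print(np.round(b, 2))
```

Solving the normal equations via `np.linalg.solve` avoids forming the explicit inverse; with the tiny noise level the estimate lands very close to the true coefficients.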
See also
Design matrix
Gramian matrix
Projection matrix |
https://en.wikipedia.org/wiki/Electronic%20game | An electronic game is a game that uses electronics to create an interactive system with which a player can play. Video games are the most common form today, and for this reason the two terms are often used interchangeably. There are other common forms of electronic game including handheld electronic games, standalone systems (e.g. pinball, slot machines, or electro-mechanical arcade games), and exclusively non-visual products (e.g. audio games).
Teletype games
The earliest form of computer game to achieve any degree of mainstream use was the text-based Teletype game. Teletype games lack video display screens and instead present the game to the player by printing a series of characters on paper which the player reads as it emerges from the platen. Practically this means that each action taken will require a line of paper and thus a hard-copy record of the game remains after it has been played. This naturally tends to reduce the size of the gaming universe or alternatively to require a great amount of paper. As computer screens became standard during the rise of the third generation computer, text-based command line-driven language parsing Teletype games transitioned into visual interactive fiction allowing for greater depth of gameplay and reduced paper requirements. This transition was accompanied by a simultaneous shift from the mainframe environment to the personal computer. Several of these subsequently were ported to systems with video displays, eliminating the need for a teletype printer.
Examples of text-based Teletype games include:
The Oregon Trail (1971)
Trek73 (1973)
Dungeon (1975)
Super Star Trek (1975)
Colossal Cave Adventure (1976)
Zork (1977)
Electronic handhelds
The earliest form of dedicated console, handheld electronic games are characterized by their size and portability. Used to play interactive games, handheld electronic games are often miniaturized versions of video games. The controls, display and speakers are all part of a single unit, an |
https://en.wikipedia.org/wiki/Conceptual%20model%20%28computer%20science%29 | In computer science, a conceptual model, or domain model, represents concepts (entities) and relationships between them.
Overview
In the field of computer science a conceptual model aims to express the meaning of terms and concepts used by domain experts to discuss the problem, and to find the correct relationships between different concepts. The conceptual model is explicitly chosen to be independent of design or implementation concerns, for example, concurrency or data storage. Conceptual modeling in computer science should not be confused with other modeling disciplines within the broader field of conceptual models such as data modelling, logical modelling and physical modelling.
The conceptual model attempts to clarify the meaning of various, usually ambiguous terms, and ensure that confusion caused by different interpretations of the terms and concepts cannot occur. Such differing interpretations could easily cause confusion amongst stakeholders, especially those responsible for designing and implementing a solution, where the conceptual model provides a key artifact of business understanding and clarity. Once the domain concepts have been modeled, the model becomes a stable basis for subsequent development of applications in the domain. The concepts of the conceptual model can be mapped into physical design or implementation constructs using either manual or automated code generation approaches. The realization of conceptual models of many domains can be combined to a coherent platform.
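A conceptual model's concepts and relationships can be mirrored directly in code. The hypothetical `Customer`/`Order` domain below (names invented for illustration) keeps the model free of storage or concurrency concerns, as the text prescribes:

```python
from dataclasses import dataclass

@dataclass
class Customer:           # a concept from the domain model
    name: str

@dataclass
class Order:              # another concept
    number: str
    customer: Customer    # "placed by" relationship between the two concepts

order = Order("A-1001", Customer("Alice"))
print(order.customer.name)   # Alice
```

Nothing here fixes how customers or orders are persisted or synchronized; those are design-level decisions mapped in later, which is exactly the separation the conceptual model is meant to preserve.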
A conceptual model can be described using various notations, such as UML, ORM or OMT for object modelling, ITE, or IDEF1X for Entity Relationship Modelling. In UML notation, the conceptual model is often described with a class diagram in which classes represent concepts, associations represent relationships between concepts and role types of an association represent role types taken by instances of the modelled concepts in various situations. In ER notation, the conceptual m |
https://en.wikipedia.org/wiki/Natural%20border | A natural border is a border between states or their subdivisions which is concomitant with natural formations such as rivers or mountain ranges. The "doctrine of natural boundaries" developed in Western culture in the 18th century, based upon the "natural" ideas of Jean-Jacques Rousseau and developing concepts of nationalism. A similar concept in China developed earlier from natural zones of control.
Natural borders have historically been strategically useful because they are easily defended. Natural borders remain meaningful in modern warfare even though military technology and engineering have somewhat reduced their strategic value.
Expanding until natural borders are reached, and maintaining those borders once conquered, have been a major policy goal for a number of states. For example, the Roman Republic, and later, the Roman Empire expanded continuously until it reached certain natural borders: first the Alps, later the Rhine river, the Danube river and the Sahara desert. From the Middle Ages onwards until the 19th century, France sought to expand its borders towards the Alps, the Pyrenees, and the Rhine River.
Natural borders can be a source of territorial disputes when they shift. One such example is the Rio Grande, which defines part of the border between the United States and Mexico, whose movement has led to multiple conflicts.
Natural borders are not to be confused with landscape borders, which are also geographical features that demarcate political boundaries. Although landscape borders, like natural borders, also take the form of forests, water bodies, and mountains, they are manmade instead of natural. Installing a landscape border, usually motivated by demarcating treaty-designated political boundaries, goes against nature by modifying the borderland's natural geography. For example, China's Song Dynasty built an extensive defensive forest along its northern border to thwart the nomadic Khitan people.
Criticism
In Chapter IV of his 1916 book The N |
https://en.wikipedia.org/wiki/Authorize.Net | Authorize.Net, A Visa Solution is a United States-based payment gateway service provider, allowing merchants to accept credit card and electronic check payments through their website and over an Internet Protocol (IP) connection. Founded in 1996 as Authorize.Net, Inc., the company is now a subsidiary of Visa Inc. Its service permits customers to enter credit card and shipping information directly onto a web page, in contrast to some alternatives that require the customer to sign up for a payment service before performing a transaction.
History
Authorize.Net was founded in 1996, in Utah, by Jeff Knowles. As of 2004, it had about 90,000 customers.
Authorize.Net was one of several companies acquired by Go2Net, a company backed by Microsoft co-founder Paul Allen, in 1999, for $90.5 million in cash and stock. Go2Net was acquired by InfoSpace in 2000 for about $4 billion; Authorize.Net was acquired by Lightbridge in 2004 for $82 million and then by CyberSource in 2007.
Visa Inc. acquired CyberSource in 2010 for $2 billion. Visa has maintained Authorize.Net and Cybersource as separate services, with Authorize.Net concentrating on small- to medium-sized businesses and Cybersource concentrating on international and large-scale payment processing. At the time of the 2010 acquisition, the company's CEO identified three priorities: expanding the ecommerce market, enhancing fraud detection and prevention, and improving data security. As of 2014, along with parent CyberSource, it had about 450,000 customers.
Outages
In September 2004, Authorize.Net's servers were hit by a distributed denial-of-service (DDoS) attack. The DDoS attack lasted for over one week and caused a virtual shut down of the payment gateway's service. The attackers demanded money from Authorize.net in exchange for stopping the attack.
On July 2, 2009, at 11:00 pm PST, the entire web infrastructure for Authorize.Net (main website, merchant gateway website, etc.) went offline and stayed down all morning July 3 |
https://en.wikipedia.org/wiki/List%20of%20American%20Physical%20Society%20prizes%20and%20awards | The American Physical Society gives out a number of awards for research excellence and conduct; topics include outstanding leadership, computational physics, lasers, mathematics, and more.
Prizes
David Adler Lectureship Award in the Field of Materials Physics
The David Adler Lectureship Award in the Field of Materials Physics is a prize that has been awarded annually by the American Physical Society since 1988. The recipient is chosen for being "an outstanding contributor to the field of materials physics, who is noted for the quality of his/her research, review articles and lecturing." The prize is named after physicist David Adler with contributions to the endowment by friends of David Adler and Energy Conversion Devices, Inc. The winner receives a $5,000 honorarium.
Will Allis Prize for the Study of Ionized Gases
Will Allis Prize for the Study of Ionized Gases is awarded biannually "for outstanding contributions to understanding the physics of partially ionized plasmas and gases" in honour of Will Allis. The $10,000 prize was founded in 1989 by contributions from AT&T, General Electric, GTE, International Business Machines, and Xerox Corporations.
Early Career Award for Soft Matter Research
This award recognizes outstanding and sustained contributions by an early-career researcher to the soft matter field.
LeRoy Apker Award
The LeRoy Apker Award was established in 1978 to recognize outstanding achievements in physics by undergraduate students. Two awards are presented each year, one to a student from a Ph.D. granting institution, and one to a student from a non-Ph.D. granting institution.
APS Medal for Exceptional Achievement in Research
The APS Medal for Exceptional Achievement in Research was established in 2016 to recognize contributions of the highest level that advance our knowledge and understanding of the physical universe. The medal carries with it a prize of $50,000 and is the largest APS prize to recognize the achievement of researchers from acro |
https://en.wikipedia.org/wiki/Tur%C3%A1n%20number | In mathematics, the Turán number T(n,k,r) for r-uniform hypergraphs of order n is the smallest number of r-edges such that every induced subgraph on k vertices contains an edge. This number was determined for r = 2 by , and the problem for general r was introduced in . The paper gives a survey of Turán numbers.
Definitions
Fix a set X of n vertices. For given r, an r-edge or block is a set of r vertices. A set of blocks is called a Turán (n,k,r) system (n ≥ k ≥ r) if every k-element subset of X contains a block.
The Turán number T(n,k,r) is the minimum size of such a system.
Example
The complements of the lines of the Fano plane form a Turán (7,5,4)-system. T(7,5,4) = 7.
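The example and the definition are small enough to check by brute force. A minimal sketch (the Fano-plane labelling below is one standard choice, an assumption; `turan_number` is feasible only for tiny parameters):

```python
from itertools import combinations

def is_turan_system(blocks, n, k):
    # every k-subset of {1..n} must contain at least one block
    return all(any(b <= set(ks) for b in blocks)
               for ks in combinations(range(1, n + 1), k))

def turan_number(n, k, r):
    # smallest number of r-blocks forming a Turán (n, k, r) system
    all_blocks = [set(b) for b in combinations(range(1, n + 1), r)]
    for size in range(1, len(all_blocks) + 1):
        for cand in combinations(all_blocks, size):
            if is_turan_system(cand, n, k):
                return size

# complements of the Fano plane's lines form a Turán (7,5,4) system of size 7
fano_lines = [{1,2,3}, {1,4,5}, {1,6,7}, {2,4,6}, {2,5,7}, {3,4,7}, {3,5,6}]
complements = [set(range(1, 8)) - line for line in fano_lines]
```

`is_turan_system(complements, 7, 5)` returns True, confirming the example; exhaustive search is practical only for cases such as T(5,4,3), since the number of candidate block sets grows combinatorially.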
Relations to other combinatorial designs
It can be shown that
Equality holds if and only if there exists a Steiner system S(n - k, n - r, n).
An (n, r, k, r)-lotto design is an (n, k, r)-Turán system. Thus, T(n, k, r) = L(n, r, k, r).
See also
Forbidden subgraph problem
Combinatorial design |
https://en.wikipedia.org/wiki/Permutation%20category | In mathematics, the permutation category is a category where
the objects are the natural numbers,
the morphisms from a natural number n to itself are the elements of the symmetric group Sn, and
there are no morphisms from m to n if m ≠ n.
It is equivalent as a category to the category of finite sets and bijections between them. |
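The definition can be made concrete by encoding permutations of {0, …, n−1} as tuples (a sketch; the tuple encoding is an assumption, not part of the definition):

```python
from itertools import permutations

def hom(m, n):
    """Morphisms m -> n in the permutation category:
    the symmetric group S_n when m == n, empty otherwise."""
    return set(permutations(range(n))) if m == n else set()

def compose(p, q):
    """Composition of two endomorphisms of n: (p ∘ q)(i) = p[q[i]]."""
    return tuple(p[q[i]] for i in range(len(p)))
```

Composition of two morphisms n → n is again a morphism n → n, and the identity is `tuple(range(n))`, so each hom-set hom(n, n) is a group, as the definition requires.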
https://en.wikipedia.org/wiki/Pressure-correction%20method | Pressure-correction method is a class of methods used in computational fluid dynamics for numerically solving the Navier-Stokes equations normally for incompressible flows.
Common properties
The equations solved in this approach arise from the implicit time integration of the incompressible Navier–Stokes equations.
Due to the non-linearity of the convective term in the momentum equation, this problem is solved with a nested-loop approach. So-called global iterations represent the real time steps; they are used to update the variables, based on a linearized system and on the boundary conditions, while an outer loop updates the coefficients of the linearized system.
The outer iterations comprise two steps:
Solve the momentum equation for a provisional velocity based on the velocity and pressure of the previous outer loop.
Plug the newly obtained velocity into the continuity equation to obtain a correction.
With incompressible flow, the continuity equation reduces to the non-divergence criterion, and the velocity correction obtained from it is computed by first calculating a residual value, resulting from spurious mass flux, and then using this mass imbalance to obtain a new pressure value. The pressure value sought is such that, when plugged into the momentum equations, a divergence-free velocity field results. The mass imbalance is often also used for control of the outer loop.
The name of this class of methods stems from the fact that the correction of the velocity field is computed through the pressure-field.
The discretization of this is typically done with either the finite element method or the finite volume method. With the latter, one might also encounter the dual mesh, i.e. the computation grid obtained from connecting the centers of the cells that the initial subdivision into finite elements of the computation domain yielded.
Implicit split-update procedures
|
https://en.wikipedia.org/wiki/Dijen%20K.%20Ray-Chaudhuri | Dwijendra Kumar Ray-Chaudhuri (born November 1, 1933) is a professor emeritus at Ohio State University. He and his student R. M. Wilson together solved Kirkman's schoolgirl problem in 1968 which contributed to developments in design theory.
He received his M.Sc. (1956) in mathematics from Rajabazar Science College, University of Calcutta, and a Ph.D. in combinatorics (1959) from the University of North Carolina at Chapel Hill. He served as a consultant at Cornell Medicine and Sloan Kettering, as professor and chairman of the Department of Mathematics at Ohio State University, and as a visiting professor at the University of Göttingen and the University of Erlangen in Germany, the University of London, and the Tata Institute of Fundamental Research in Mumbai.
He is best known for his work in design theory and the theory of error-correcting codes, in which the class of BCH codes is partly named after him and his Ph.D. advisor Bose. Ray-Chaudhuri is the recipient of the Euler Medal by the Institute of Combinatorics and its Applications for his career contributions to combinatorics. In 2000, a festschrift appeared on the occasion of his 65th birthday. In 2012 he became a fellow of the American Mathematical Society.
Honors, Awards, and Fellowships
Senior U.S. Scientist Award of the Humboldt Foundation of Germany
Distinguished Senior Research Award from Ohio State University
President for Forum in New Delhi
Foundation Fellow of the ICA
Euler Medal of the ICA
Fellow of the American Mathematical Society
Selected publications
R. C. Bose and D. K. Ray-Chaudhuri: On a class of error correcting binary group codes. Information and Control 3(1): 68-79 (March 1960).
C. T. Abraham, S. P Ghosh and D. K. Ray-Chaudhuri: File organization schemes based on finite geometries. Information and control, 1968.
D. K. Ray-Chaudhuri and R. M. Wilson: Solution of Kirkman's schoolgirl problem. Proc. symp. pure Math, 1971.
D. K. Ray-Chaudhuri and R. M. Wilson: On t-designs. Osaka Journal of Ma |
https://en.wikipedia.org/wiki/Nina%20Gantert | Nina Gantert is a Swiss and German probability theorist, and a Fellow of the Institute of Mathematical Statistics. She holds the chair for probability in the department of mathematics at the Technical University of Munich, a position she has held since 2011 when the chair was established.
Research
Her research interests include the use of random walks to model transport in disordered media, and stochastic processes more generally. She is also interested in physical and biological applications of probability theory.
Education and career
After studying at ETH Zurich,
Gantert earned her PhD from the University of Bonn in 1991. Her dissertation, Einige große Abweichungen der Brownschen Bewegung [some large deviations for Brownian motion] was supervised by Hans Föllmer.
After postdoctoral research and a habilitation at the Technical University of Berlin, she held faculty positions at the Karlsruhe Institute of Technology and the University of Münster before moving to Munich in 2011. |
https://en.wikipedia.org/wiki/Carath%C3%A9odory%20metric | In mathematics, the Carathéodory metric is a metric defined on the open unit ball of a complex Banach space that has many similar properties to the Poincaré metric of hyperbolic geometry. It is named after the Greek mathematician Constantin Carathéodory.
Definition
Let (X, || ||) be a complex Banach space and let B be the open unit ball in X. Let Δ denote the open unit disc in the complex plane C, thought of as the Poincaré disc model for 2-dimensional real/1-dimensional complex hyperbolic geometry. Let the Poincaré metric ρ on Δ be given by
(thus fixing the curvature to be −4). Then the Carathéodory metric d on B is defined by
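In standard notation, the Poincaré distance with curvature −4 and the resulting Carathéodory metric read (standard forms, restated here as a sketch since the displays are not reproduced above):

```latex
\rho(z, w) = \tanh^{-1}\left|\frac{z - w}{1 - \bar{w} z}\right|,
\qquad
d(x, y) = \sup\bigl\{\, \rho(F(x), F(y)) \;\big|\; F \colon B \to \Delta \text{ holomorphic} \,\bigr\}.
```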
What it means for a function on a Banach space to be holomorphic is defined in the article on Infinite dimensional holomorphy.
Properties
For any point x in B,
d can also be given by the following formula, which Carathéodory attributed to Erhard Schmidt:
For all a and b in B,
with equality if and only if either a = b or there exists a bounded linear functional ℓ ∈ X∗ such that ||ℓ|| = 1, ℓ(a + b) = 0 and
Moreover, any ℓ satisfying these three conditions has |ℓ(a − b)| = ||a − b||.
Also, there is equality in (1) if ||a|| = ||b|| and ||a − b|| = ||a|| + ||b||; one way to achieve this is to take b = −a.
If there exists a unit vector u in X that is not an extreme point of the closed unit ball in X, then there exist points a and b in B such that there is equality in (1) but b ≠ ±a.
Carathéodory length of a tangent vector
There is an associated notion of Carathéodory length for tangent vectors to the ball B. Let x be a point of B and let v be a tangent vector to B at x; since B is the open unit ball in the vector space X, the tangent space TxB can be identified with X in a natural way, and v can be thought of as an element of X. Then the Carathéodory length of v at x, denoted α(x, v), is defined by
One can show that α(x, v) ≥ ||v||, with equality when x = 0.
See also
Earle–Hamilton fixed point theorem |
https://en.wikipedia.org/wiki/Velocity%20dispersion | In astronomy, the velocity dispersion (σ) is the statistical dispersion of velocities about the mean velocity for a group of astronomical objects, such as an open cluster, globular cluster, galaxy, galaxy cluster, or supercluster. By measuring the radial velocities of the group's members through astronomical spectroscopy, the velocity dispersion of that group can be estimated and used to derive the group's mass from the virial theorem. Radial velocity is found by measuring the Doppler width of spectral lines of a collection of objects; the more radial velocities one measures, the more accurately one knows their dispersion. A central velocity dispersion refers to the σ of the interior regions of an extended object, such as a galaxy or cluster.
The relationship between velocity dispersion and matter (or the observed electromagnetic radiation emitted by this matter) takes several forms in astronomy based on the object(s) being observed. For instance, the M–σ relation was found for material circling black holes, the Faber–Jackson relation for elliptical galaxies, and the Tully–Fisher relation for spiral galaxies. For example, the σ found for objects about the Milky Way's supermassive black hole (SMBH) is about 100 km/s. The Andromeda Galaxy (Messier 31) hosts a SMBH about 10 times larger than our own, and has a .
Groups and clusters of galaxies have a wider range of velocity dispersions than smaller objects. For example, our own poor group, the Local Group, has a . But rich clusters of galaxies, such as the Coma Cluster, have a . The dwarf elliptical galaxies within Coma have their own internal velocity dispersion for their stars, which is a , typically. Normal elliptical galaxies, by comparison, have an average .
For spiral galaxies, the increase in velocity dispersion in population I stars is a gradual process which likely results from the random momentum exchanges, known as dynamical friction, between individual stars and large interstellar media (gas and dust c |
https://en.wikipedia.org/wiki/Friedmann%20equations | The Friedmann equations are a set of equations in physical cosmology that govern the expansion of space in homogeneous and isotropic models of the universe within the context of general relativity. They were first derived by Alexander Friedmann in 1922 from Einstein's field equations of gravitation for the Friedmann–Lemaître–Robertson–Walker metric and a perfect fluid with a given mass density and pressure . The equations for negative spatial curvature were given by Friedmann in 1924.
Assumptions
The Friedmann equations start with the simplifying assumption that the universe is spatially homogeneous and isotropic, that is, the cosmological principle; empirically, this is justified on scales larger than the order of 100 Mpc. The cosmological principle implies that the metric of the universe must be of the form
where is a three-dimensional metric that must be one of (a) flat space, (b) a sphere of constant positive curvature or (c) a hyperbolic space with constant negative curvature. This metric is called Friedmann–Lemaître–Robertson–Walker (FLRW) metric. The parameter discussed below takes the value 0, 1, −1, or the Gaussian curvature, in these three cases respectively. It is this fact that allows us to sensibly speak of a "scale factor" .
Einstein's equations now relate the evolution of this scale factor to the pressure and energy of the matter in the universe. From FLRW metric we compute Christoffel symbols, then the Ricci tensor. With the stress–energy tensor for a perfect fluid, we substitute them into Einstein's field equations and the resulting equations are described below.
Equations
There are two independent Friedmann equations for modelling a homogeneous, isotropic universe. The first is:
which is derived from the 00 component of the Einstein field equations. The second is:
which is derived from the first together with the trace of Einstein's field equations (the dimension of both equations is time⁻²).
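For reference, the two equations in their standard form (with scale factor a, curvature parameter k, and cosmological constant Λ) are:

```latex
\left(\frac{\dot a}{a}\right)^2 = \frac{8\pi G}{3}\rho - \frac{k c^2}{a^2} + \frac{\Lambda c^2}{3},
\qquad
\frac{\ddot a}{a} = -\frac{4\pi G}{3}\left(\rho + \frac{3p}{c^2}\right) + \frac{\Lambda c^2}{3}.
```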
is the scale factor, , , and are u |
https://en.wikipedia.org/wiki/Point%20mutation | A point mutation is a genetic mutation where a single nucleotide base is changed, inserted or deleted from a DNA or RNA sequence of an organism's genome. Point mutations have a variety of effects on the downstream protein product—consequences that are moderately predictable based upon the specifics of the mutation. These consequences can range from no effect (e.g. synonymous mutations) to deleterious effects (e.g. frameshift mutations), with regard to protein production, composition, and function.
Causes
Point mutations usually take place during DNA replication. DNA replication occurs when one double-stranded DNA molecule creates two single strands of DNA, each of which serves as a template for the creation of a complementary strand. A single point mutation changes just one nucleotide, but changing one purine or pyrimidine may change the amino acid that the nucleotides code for.
Point mutations may arise from spontaneous mutations that occur during DNA replication. The rate of mutation may be increased by mutagens. Mutagens can be physical, such as radiation from UV rays, X-rays or extreme heat, or chemical (molecules that misplace base pairs or disrupt the helical shape of DNA). Mutagens associated with cancers are often studied to learn about cancer and its prevention.
There are multiple ways for point mutations to occur. First, ultraviolet (UV) light and higher-frequency light are capable of ionizing electrons, which in turn can affect DNA. Second, reactive oxygen molecules with free radicals, a byproduct of cellular metabolism, can be very harmful to DNA; these reactants can lead to both single-stranded and double-stranded DNA breaks. Third, bonds in DNA eventually degrade, which makes it difficult to maintain the integrity of DNA. Finally, there can be replication errors that lead to substitution, insertion, or deletion mutations.
Categorization
Transition/transversion categorization
In 1959 Ernst Freese coined the terms |
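The transition/transversion distinction is mechanical enough to express in code. A toy sketch (DNA base symbols only; no real variant-calling is implied):

```python
PURINES = frozenset("AG")
PYRIMIDINES = frozenset("CT")

def substitution_class(ref, alt):
    """Classify a single-base substitution: a transition swaps within a
    chemical family (purine<->purine or pyrimidine<->pyrimidine); any
    cross-family swap is a transversion."""
    if ref == alt or {ref, alt} - (PURINES | PYRIMIDINES):
        raise ValueError("need two distinct DNA bases from A, C, G, T")
    same_family = (ref in PURINES) == (alt in PURINES)
    return "transition" if same_family else "transversion"
```

Each base has one transition partner but two transversion partners, which is why, absent mutational bias, transversions would be twice as common.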
https://en.wikipedia.org/wiki/Centro%20Brasileiro%20de%20Pesquisas%20F%C3%ADsicas | The Brazilian Center for Research in Physics (Centro Brasileiro de Pesquisas Físicas, CBPF) is a physics research center in the Urca neighborhood of Rio de Janeiro sponsored by the Brazilian National Council for Scientific and Technological Development (CNPq), linked to the Ministry of Science and Technology. CBPF was founded in 1949 from a joint effort of Cesar Lattes, José Leite Lopes, and Jayme Tiomno. Throughout its existence, CBPF became an internationally renowned research institution, organizing several international meetings and hosting many renowned physicists, like Richard Feynman and J. Robert Oppenheimer. It was also the starting point of important Brazilian institutions, like the National Institute for Pure and Applied Mathematics (IMPA), the National Laboratory for Scientific Computation (LNCC) and the National Laboratory of Synchrotron Light (LNLS). Since its creation, CBPF has been one of the most important physics research institutions in Brazil, and its graduate program ranks among the best in the country.
See also
Maria Laura Moura Mouzinho Leite Lopes
External links
Official CBPF Homepage
Research institutes in Brazil
Physics research institutes |
https://en.wikipedia.org/wiki/Rostral%20ventrolateral%20medulla | The rostral ventrolateral medulla (RVLM), also known as the pressor area of the medulla, is a brain region that is responsible for basal and reflex control of sympathetic activity associated with cardiovascular function. Abnormally elevated sympathetic activity in the RVLM is associated with various cardiovascular diseases, such as heart failure and hypertension. The RVLM is notably involved in the baroreflex.
It receives inhibitory GABAergic input from the caudal ventrolateral medulla (CVLM). The RVLM is a primary regulator of the sympathetic nervous system; it sends catecholaminergic projections to the sympathetic preganglionic neurons in the intermediolateral nucleus of the spinal cord via reticulospinal tract.
Physostigmine, a cholinesterase inhibitor, elevates endogenous levels of acetylcholine and causes a rise in blood pressure by stimulation of the RVLM. Orexinergic neurons from the lateral hypothalamus project to the RVLM.
See also
Vasomotor center |
https://en.wikipedia.org/wiki/Aerospace%20physiology | Aerospace physiology is the study of the effects of high altitudes on the body, such as different pressures and levels of oxygen. At different altitudes the body may react in different ways, provoking greater cardiac output and producing more erythrocytes. These changes increase the body's energy expenditure and can cause muscle fatigue, with the degree of effect depending on the altitude.
Effects of altitude
The physics that affect the body in the sky or in space differ from those on the ground. For example, barometric pressure changes with altitude: at sea level it is 760 mmHg, at 3,048 m above sea level it is 523 mmHg, and at 15,240 m it is 87 mmHg. As the barometric pressure decreases, the atmospheric partial pressure of oxygen decreases with it, always remaining just below 20% of the total barometric pressure. At sea level, the alveolar partial pressure of oxygen is 104 mmHg. At 6,000 meters above sea level, this pressure falls to about 40 mmHg in a non-acclimatized person, but only to about 52 mmHg in an acclimatized person, because alveolar ventilation increases more with acclimatization. Aviation physiology also covers the effects on humans and animals exposed for long periods to pressurized cabins.
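The quoted pressures match the International Standard Atmosphere. A sketch using the standard published ISA constants (the assumption here is only that the figures were derived this way):

```python
import math

def isa_pressure_mmHg(h):
    """Pressure at geopotential altitude h (meters) from the two lowest
    layers of the International Standard Atmosphere, in mmHg."""
    P0, T0, L = 1013.25, 288.15, 0.0065      # hPa, K, K/m (troposphere)
    g, R = 9.80665, 287.053                  # m/s^2, J/(kg K)
    if h <= 11000.0:
        # troposphere: linear temperature lapse
        P = P0 * (1.0 - L * h / T0) ** (g / (R * L))
    else:
        # lower stratosphere: isothermal at 216.65 K above the tropopause
        P11 = P0 * (1.0 - L * 11000.0 / T0) ** (g / (R * L))
        P = P11 * math.exp(-g * (h - 11000.0) / (R * 216.65))
    return P * 0.750062                      # hPa -> mmHg
```

This reproduces the three values in the text: roughly 760 mmHg at sea level, 523 mmHg at 3,048 m, and 87 mmHg at 15,240 m.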
The other main issue with altitude is hypoxia, caused by the lower barometric pressure and the consequent decrease in oxygen as the body ascends. With exposure to higher altitudes, alveolar carbon dioxide partial pressure (PCO2) decreases from its sea-level value of 40 mmHg. In an acclimatized person, ventilation increases about fivefold and the carbon dioxide partial pressure decreases to as little as 6 mmHg. Up to an altitude of 3,040 meters, arterial oxygen saturation remains around 90%, but above this altitude it decreases rapidly, to as low as 70% at 6,000 m, and it decreases further at higher altitudes.
g-forces
g-forces are mostly experience |
https://en.wikipedia.org/wiki/Dirichlet%20series%20inversion | In analytic number theory, a Dirichlet series, or Dirichlet generating function (DGF), of a sequence is a common way of understanding and summing arithmetic functions in a meaningful way. A little known, or at least often forgotten about, way of expressing formulas for arithmetic functions and their summatory functions is to perform an integral transform that inverts the operation of forming the DGF of a sequence. This inversion is analogous to performing an inverse Z-transform to the generating function of a sequence to express formulas for the series coefficients of a given ordinary generating function.
For now, we will use this page as a compendium of "oddities" and oft-forgotten facts about transforming and inverting Dirichlet series, DGFs, and relating the inversion of a DGF of a sequence to the sequence's summatory function. We also use the notation for coefficient extraction usually applied to formal generating functions in some complex variable, by denoting for any positive integer , whenever
denotes the DGF (or Dirichlet series) of f which is taken to be absolutely convergent whenever the real part of s is greater than the abscissa of absolute convergence, .
The relation of the Mellin transformation of the summatory function of a sequence to the DGF of a sequence provides us with a way of expressing arithmetic functions such that , and the corresponding Dirichlet inverse functions, , by inversion formulas involving the summatory function, defined by
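In symbols, the DGF, the summatory function, and the Mellin-type relation between them are the standard identities (restated here as a sketch, following the usual conventions):

```latex
D_f(s) := \sum_{n \ge 1} \frac{f(n)}{n^s}, \qquad
S_f(x) := \sum_{n \le x} f(n), \qquad
D_f(s) = s \int_1^{\infty} \frac{S_f(x)}{x^{s+1}}\, dx
\quad (\Re(s) > \sigma_{a,f}),
```

where the integral formula follows from Abel summation and absolute convergence to the right of the abscissa.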
In particular, provided that the DGF of some arithmetic function f has an analytic continuation to , we can express the Mellin transform of the summatory function of f by the continued DGF formula as
It is often also convenient to express formulas for the summatory functions over the Dirichlet inverse function of f using this construction of a Mellin inversion type problem.
Preliminaries: Notation, conventions and known results on DGFs
DGFs for Dirichlet inverse functions
Recall that an arithmetic funct |
https://en.wikipedia.org/wiki/5%27-3%27%20exoribonuclease%202 | 5'-3' Exoribonuclease 2 (XRN2) also known as Dhm1-like protein is an exoribonuclease enzyme that in humans is encoded by the XRN2 gene.
The human gene encoding XRN2 shares similarity with the mouse Dhm1 gene and with the yeast genes Dhp1 (Schizosaccharomyces pombe) and RAT1 (Saccharomyces). The yeast gene is involved in homologous recombination and in RNA metabolism, including RNA synthesis, trafficking, and termination. Complementation studies show that Dhm1 has a function in mouse similar to that of Dhp1 in yeast.
Function
Human XRN2 is involved in the torpedo model of transcription termination.
The C. elegans homologue, XRN-2, is involved in the degradation of certain mature miRNAs and in dislodging them from miRISC.
In yeast, the Rat1 protein has been shown to also be involved in the torpedo transcription termination model. When a polyadenylation site on the nascent RNA has been detected and the RNA cleaved, the Rtt103 factor recruits Rat1 and attaches it to the free end. The exonuclease activity of Rat1 degrades the RNA strand and halts transcription upon catching up to the polymerase.
See also
Xrn1 |
https://en.wikipedia.org/wiki/Transcriptional%20memory | Transcriptional memory is a biological phenomenon, initially discovered in yeast, in which cells primed with a particular cue show increased rates of gene expression after re-stimulation at a later time. This event was shown to take place in yeast during growth in galactose and during inositol starvation, in plants during environmental stress, and in mammalian cells during LPS and interferon induction. Prior work has shown that certain characteristics of chromatin may contribute to the poised transcriptional state allowing faster re-induction. These include: activity of specific transcription factors; retention of RNA polymerase II at the promoters of poised genes; activity of chromatin-remodeling complexes; propagation of the H3K4me2 and H3K36me3 histone modifications; occupancy of the H3.3 histone variant; and binding of nuclear pore components. Moreover, locally bound cohesin was shown to inhibit establishment of transcriptional memory in human cells during interferon gamma stimulation.
https://en.wikipedia.org/wiki/Watchful%20waiting | Watchful waiting (also watch and wait or WAW) is an approach to a medical problem in which time is allowed to pass before medical intervention or therapy is used. During this time, repeated testing may be performed.
Related terms include expectant management, active surveillance, and masterly inactivity. The term masterly inactivity is also used in nonmedical contexts.
A distinction can be drawn between watchful waiting and medical observation, but some sources equate the terms. Usually, watchful waiting is an outpatient process and may have a duration of months or years. In contrast, medical observation is usually an inpatient process, often involving frequent or even continuous monitoring and may have a duration of hours or days.
Medical uses
Watchful waiting is often recommended in situations with a high likelihood of self-resolution, when there is high uncertainty concerning the diagnosis, or when the risks of intervention or therapy may outweigh the benefits.
Watchful waiting is often recommended for many common illnesses such as ear infections in children; because the majority of cases resolve spontaneously, antibiotics will often be prescribed only after several days of symptoms. It is also a strategy frequently used in surgery prior to a possible operation, when it is possible for a symptom (for example abdominal pain) to either improve naturally or become worse.
Other examples include:
the diagnosis and treatment of benign prostatic hyperplasia
depression
otitis media
inguinal hernia
odd behaviors in infants
active surveillance of prostate cancer
non-symptomatic kidney stones
Process
Watchful waiting
In many applications, a key component of watchful waiting is the use of an explicit decision tree or other protocol to ensure a timely transition from watchful waiting to another form of management, as needed. This is particularly common in the post-surgical management of cancer survivors, in whom cancer recurrence is a significant concern.
Medical ob |
https://en.wikipedia.org/wiki/World%20Emoji%20Day | World Emoji Day is an annual unofficial holiday occurring on 17 July each year, intended to celebrate emoji; in the years since the earliest observance, it has become a popular date to make product or other announcements and releases relating to emoji.
Origins and celebrations
The date originally referred to the day Apple premiered its iCal calendar application in 2002. The day, July 17, was displayed on the Apple Color Emoji version of the calendar emoji (📅) as an Easter egg.
World Emoji Day was created on 17 July 2014 by Jeremy Burge, the founder of Emojipedia.
The New York Times reported that Burge chose 17 July "based on the way the calendar emoji is shown on iPhones". For the first World Emoji Day, Burge told The Independent "there were no formal plans put in place" other than choosing the date. The Washington Post suggested in 2018 that readers use this day to "communicate with only emoji".
NBC reported that the day was Twitter's top trending item on 17 July in 2015.
In 2016, Google changed the appearance of its Unicode calendar character to display 17 July on Android, Gmail, Hangouts, and ChromeOS products. As of 2020, all major platforms except Microsoft had switched to show 17 July on this emoji, to avoid confusion on World Emoji Day.
The most recent World Emoji Day occurred on 17 July 2023; the next, World Emoji Day 2024, is scheduled for 17 July 2024.
Announcements
Since 2017, Apple has used each World Emoji Day to announce upcoming expansions to the range of emojis on iOS.
On World Emoji Day 2015, Pepsi launched PepsiMoji which included an emoji keyboard and custom World Emoji Day Pepsi cans and bottles. These were initially released in Canada and expanded to 100 markets in 2016.
In 2016, Sony Pictures Animation used World Emoji Day to announce T.J. Miller as the first cast member for The Emoji Movie, Go |
https://en.wikipedia.org/wiki/Stein%20Institute%20for%20Research%20on%20Aging | Sam and Rose Stein Institute for Research on Aging (Stein Institute for Research on Aging) is a non-profit, multidisciplinary research institute at the University of California, San Diego School of Medicine located in La Jolla, California. Established in 1983, it researches healthy aging through the development and application of the latest advances in biomedical and behavioral sciences.
The more than 150 scientists at the Stein Institute are investigating predictors and associations of successful cognitive and emotional aging. Understanding these processes requires contributions from basic sciences like neurobiology and genetics, along with the input from clinical medicine and social sciences, such as medical anthropology.
History
The focus of the Stein Institute's research has shifted over the years since its inception in 1983. In the beginning, the primary emphasis was on Alzheimer's disease. Later, this scope was broadened to include various age-related disorders such as cancer and arthritis. Dilip V. Jeste, taking over as director in 2004, set the Institute's main focus on successful aging.
Activities
Over the past 25 years, the Stein Institute has brought together many scientists, encouraged and funded research published in top scientific journals (such as JAMA), supported the education of students, including medical students participating in the National Institute on Aging–funded MSTAR program, and presented and broadcast about 300 public lectures on aging as part of its community outreach. Views and downloads of the Stein lectures from UCSD-TV and UCTV, as well as YouTube and iTunes, have exceeded 1.2 million in the last couple of years. The Institute's work has been cited in the media, including the BBC, New York Times, NPR, U.S. News & World Report, Huffington Post, USA Today, London Times, and Scientific American, among others.
Successful Aging Evaluation (SAGE) Study
Stein Institute has developed the UCSD Successful Aging Evaluat |
https://en.wikipedia.org/wiki/Arthur%20Lennox%20Butler | Arthur Lennox Butler (22 February 1873 – 29 December 1939) was a British naturalist. Born in Karachi, he became a curator of a natural history museum in Kuala Lumpur, Malaysia. He later became the superintendent of a game preserve in Sudan before returning to England. He is commemorated in the scientific names of four species of reptile, a bird, and an amphibian.
Early life and education
Butler was born on 22 February 1873 in Karachi, British India. His father was the British ornithologist Edward Arthur Butler and his mother was Clara Francis Butler. Butler attended Fauconberg School in Beccles. In 1891 at the age of eighteen, Butler traveled to Ceylon (now Sri Lanka) to become a tea-planter, which he abandoned to become a scientific collector.
Career
Butler became a scientific collector after moving to Ceylon, collecting specimens for the Marsden and Tring museums. In 1898, Butler was appointed curator at the State Museum at Kuala Lumpur in Malaysia. In 1899, he was elected as a member of the British Ornithologists' Union. Beginning in 1901, he was the superintendent of game preservation in Anglo-Egyptian Sudan, a position he held until 1915. In 1921, he became a member of the British Ornithologists' Club.
Personal life
In 1908, Butler married his cousin, Rose Boughton-Leigh. The couple had no children.
Later life and death
In 1915 Butler returned to England, living in St. Leonard's Park near Horsham for the rest of his life. The last several years of his life were spent in poor health, and he was unable to attend any meetings of the British Ornithologists' Club from 1932 onward. Butler died on 29 December 1939 at age 66.
Legacy and honors
The scientific name of the Nicobar sparrowhawk (Accipiter butleri) commemorates Butler, as do the scientific names of four species of reptiles (Gehyra butleri, Lycodon butleri, Chilorhinophis butleri, and Tytthoscincus butleri) and an amphibian (Microhyla butleri). Gehyra butleri is now included as a synonym of Gehyra mutil |
https://en.wikipedia.org/wiki/Ned%20Nefer%20and%20Teagan | Ned Nefer and Teagan are a man/mannequin couple who gained national attention during a walk through rural New York.
They gained national attention in June 2011 when they walked, with Teagan seated in a wheelchair, from Syracuse, New York, to Watertown, New York, and later from Syracuse to Dansville and Gainesville, New York. During the walk, Nefer was interviewed by the Jefferson County Sheriff and social welfare officials, who did not find him dangerous and declined to take him into custody, saying: "This is definitely one of the very oddest things I've ever come across, but he seems very happy. I wouldn't classify him as dangerous at all. He seemed quite happy in his own little world."
Nefer and Teagan became Web celebrities, appearing in Facebook fan pages and YouTube videos. Nefer deflected the internet fame to Teagan, saying that "I've heard about the Facebook page and that's great, I guess, but she's really the star."
According to Nefer, he met a bodyless Teagan in 1986; she "told him how to build her." The Village Voice wryly described this origin story as "way more than a lot of men do for their women." Nefer claims that they were married in California on October 31, 1986. Nefer described his marriage to Teagan as such: "For us it's real. We weren't legally married. But on the ocean, we took our vows; we said words to each other and we've done our best to live by them." |
https://en.wikipedia.org/wiki/Polymer%20field%20theory | A polymer field theory is a statistical field theory describing the statistical behavior of a neutral or charged polymer system. It can be derived by transforming the partition function from its standard many-dimensional integral representation over the particle degrees of freedom into a functional integral representation over an auxiliary field function, using either the Hubbard–Stratonovich transformation or the delta-functional transformation. Computer simulations based on polymer field theories have been shown to deliver useful results, for example to calculate the structures and properties of polymer solutions (Baeurle 2007, Schmid 1998), polymer melts (Schmid 1998, Matsen 2002, Fredrickson 2002) and thermoplastics (Baeurle 2006).
Canonical ensemble
Particle representation of the canonical partition function
The standard continuum model of flexible polymers, introduced by Edwards (Edwards 1965), treats a solution composed of linear monodisperse homopolymers as a system of coarse-grained polymers, in which the statistical mechanics of the chains is described by the continuous Gaussian thread model (Baeurle 2007) and the solvent is taken into account implicitly. The Gaussian thread model can be viewed as the continuum limit of the discrete Gaussian chain model, in which the polymers are described as continuous, linearly elastic filaments. The canonical partition function of such a system, kept at an inverse temperature and confined in a volume , can be expressed as
where is the potential of mean force given by,
representing the solvent-mediated non-bonded interactions among the segments, while represents the harmonic binding energy of the chains. The latter energy contribution can be formulated as
where is the statistical segment length and the polymerization index.
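As a numerical aside, the discrete Gaussian chain model that the thread model is the continuum limit of can be sampled directly. The names used below, b for the statistical segment length and N for the polymerization index, are just notation for the quantities named in the text; the numbers are arbitrary.

```python
import numpy as np

# Sketch of the discrete Gaussian chain: each bond vector is drawn from an
# isotropic Gaussian with mean-square length b^2, so the end-to-end vector R
# of an ideal (non-interacting) chain satisfies <R^2> = N * b^2.
rng = np.random.default_rng(0)
b, N, chains = 1.0, 100, 20000

# Components ~ Normal(0, b^2/3) so that <|bond|^2> = b^2.
bonds = rng.normal(0.0, b / np.sqrt(3), size=(chains, N, 3))
R = bonds.sum(axis=1)                  # end-to-end vector of each chain
R2_mean = (R ** 2).sum(axis=1).mean()  # Monte Carlo estimate of <R^2>

print(R2_mean)  # close to N * b^2 = 100
```

The harmonic binding energy of the continuous thread plays exactly the role of these Gaussian bond statistics in the continuum limit.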
Field-theoretic transformation
To derive the basic field-theoretic representation of the canonical partition function, one introduces in the following the segment density operator of th |
https://en.wikipedia.org/wiki/Physics%20of%20firearms | From the viewpoint of physics (dynamics, to be exact), a firearm, like most weapons, is a system for delivering maximum destructive energy to the target with minimum delivery of energy to the shooter. The momentum delivered to the target, however, cannot exceed the momentum delivered (as recoil) to the shooter. This follows from conservation of momentum, which dictates that the momentum imparted to the bullet is equal and opposite to that imparted to the gun-shooter system.
Firearm energy efficiency
From a thermodynamic point of view, a firearm is a special type of piston engine, or more generally a heat engine, in which the bullet plays the role of the piston. The energy conversion efficiency of a firearm depends strongly on its construction, especially on its caliber and barrel length.
However, for illustration, here is the energy balance of a typical small firearm for .300 Hawk ammunition:
Barrel friction 2%
Projectile motion 32%
Hot gases 34%
Barrel heat 30%
Unburned propellant 1%
which is comparable with the energy balance of a typical piston engine.
Higher efficiency can be achieved in longer-barreled firearms because they have a better expansion (volume) ratio. However, the efficiency gain is less than the volume ratio would suggest, because the expansion is not truly adiabatic: the burnt gas cools through heat exchange with the barrel. Large firearms (such as cannons) suffer smaller barrel-heating losses because they have a better volume-to-surface ratio.
A larger barrel diameter also helps, because the friction induced by sealing is smaller relative to the accelerating force: at the same pressure, the accelerating force is proportional to the square of the barrel diameter, while the sealing requirement is proportional only to the perimeter.
Force
According to Newtonian mechanics, if the gun and shooter are at rest initially, the force on the bullet will be equal to that on the gun-shooter. This is due to Newton's third law of motion (For every action, there is an equal and opposite reaction). Consider a system wh |
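The equal-momentum, unequal-energy point above can be illustrated numerically. The masses and muzzle velocity below are assumed, typical values, not figures from the source.

```python
# Conservation of momentum gives the gun-shooter system the same momentum
# magnitude as the bullet, but far less kinetic energy, since at fixed
# momentum p the kinetic energy is KE = p^2 / (2m).
m_bullet = 0.010   # kg   (assumed bullet mass)
v_bullet = 800.0   # m/s  (assumed muzzle velocity)
m_gun    = 5.0     # kg   (assumed gun plus effective shooter mass)

p = m_bullet * v_bullet           # bullet momentum = recoil momentum
v_recoil = p / m_gun              # recoil velocity of the gun-shooter system
ke_bullet = 0.5 * m_bullet * v_bullet ** 2
ke_recoil = p ** 2 / (2 * m_gun)

print(p, v_recoil, ke_bullet, ke_recoil)
# momenta match (8 kg m/s each) while the energy ratio is m_gun/m_bullet = 500
```

The energy ratio equals the mass ratio, which is why the bullet, not the shooter, receives nearly all the destructive energy.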
https://en.wikipedia.org/wiki/Quantum%20revival | In quantum mechanics, a quantum revival is a periodic recurrence of the quantum wave function from its original form during the time evolution, either many times in space as multiple scaled fractions of the initial wave function (fractional revival) or approximately or exactly in its original form (full revival). A wave function that is periodic in time therefore exhibits a full revival every period. The phenomenon of revivals is most readily observable for wave functions that are well-localized wave packets at the beginning of the time evolution, for example in the hydrogen atom. For hydrogen, the fractional revivals show up as multiple angular Gaussian bumps around the circle drawn by the radial maximum of the leading circular-state component (the one with the highest amplitude in the eigenstate expansion) of the original localized state, and the full revival appears as the original Gaussian.
Full revivals are exact for the infinite quantum well, the harmonic oscillator, and the hydrogen atom, while at shorter times revivals are approximate for the hydrogen atom and for many other quantum systems.
[Figure: plot of the collapses and revivals of the quantum oscillations of the JCM atomic inversion.]
Example: arbitrary truncated wave function of a quantum system with rational energies
Consider a quantum system with energies and eigenstates ,
and let the energies be rational fractions of some constant
(for example, for the hydrogen atom, , , ).
Then the truncated (to states) solution of the time-dependent Schrödinger equation is
.
Let be the lowest common multiple of all and the greatest common divisor of all ;
then for each the is an integer, for each the is an integer, is a full multiple of the angle , and
after the full revival time
.
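The revival argument for rational energy fractions can be checked numerically. All notation below is assumed for illustration (energies E_n = (M_n/N_n)·ħω, amplitudes c_n, revival time T = 2πL/ω with L the lowest common multiple of the denominators), since the source's own symbols were lost in extraction.

```python
import numpy as np
from math import lcm

# If E_n = (M_n / N_n) * hbar * omega, then at T = 2*pi*L/omega with
# L = lcm of the N_n, every phase E_n*T/hbar = 2*pi*M_n*(L/N_n) is an
# integer multiple of 2*pi, so the truncated state returns to itself.
hbar = omega = 1.0
M = np.array([1, 3, 7, 2])          # assumed numerators
N = np.array([2, 4, 5, 3])          # assumed denominators
E = M / N * hbar * omega

c = np.array([0.5, 0.5, 0.5, 0.5])  # arbitrary normalized amplitudes
L = lcm(*(int(n) for n in N))
T = 2 * np.pi * L / omega

# |<psi(0)|psi(T)>| should equal 1: a full revival.
overlap = np.abs(np.sum(np.abs(c) ** 2 * np.exp(-1j * E * T / hbar)))
print(L, overlap)
```

Because L grows rapidly with the number of states, the revival time can become astronomically long, which is the point of the remark about hydrogen below.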
For a quantum system as small as the hydrogen atom, with as small as 100, it may take quadrillions of years until it fully revives. Especially once created by fields, the Trojan wave packet in a
hydroge |
https://en.wikipedia.org/wiki/Haustorium | In botany and mycology, a haustorium (plural haustoria) is a rootlike structure that grows into or around another structure to absorb water or nutrients. For example, in mistletoe or members of the broomrape family, the structure penetrates the host's tissue and draws nutrients from it. In mycology, it refers to the appendage or portion of a parasitic fungus (the hyphal tip), which performs a similar function. Microscopic haustoria penetrate the host plant's cell wall and siphon nutrients from the space between the cell wall and plasma membrane but do not penetrate the membrane itself. Larger (usually botanical, not fungal) haustoria do this at the tissue level.
The etymology of the name corresponds to the Latin word haustor meaning the one who draws, drains or drinks, and refers to the action performed by the outgrowth.
In fungi
Fungi in all major divisions form haustoria, and these take several forms. Generally, on penetration the fungus increases the surface area in contact with the host plasma membrane, releasing enzymes that break up the cell walls and enabling greater potential movement of organic carbon from host to fungus. Thus, an insect hosting a parasitic fungus such as Cordyceps may look as though it is being "eaten from the inside out" as the haustoria expand inside it.
The simplest forms of haustoria are small spheres. The largest are complex formations adding significant mass to a cell, expanding between the cell wall and cell membrane. In the Chytridiomycota, the entire fungus may become enclosed in the cell, and it is arguable whether this should be considered analogous to a haustorium.
Haustoria arise from intercellular hyphae, appressoria, or external hyphae. The hypha narrows as it passes through the cell wall and then expands on invaginating the cell. A thickened, electron-dense collar of material is deposited around the hypha at the point of invagination. Further, the host cell wall becomes highly modified in the invaginated zone. Inclusions |
https://en.wikipedia.org/wiki/Excision%20theorem | In algebraic topology, a branch of mathematics, the excision theorem is a theorem about relative homology and one of the Eilenberg–Steenrod axioms. Given a topological space and subspaces and such that is also a subspace of , the theorem says that under certain circumstances, we can cut out (excise) from both spaces such that the relative homologies of the pairs into are isomorphic.
This assists in computation of singular homology groups, as sometimes after excising an appropriately chosen subspace we obtain something easier to compute.
Theorem
Statement
If are as above, we say that can be excised if the inclusion map of the pair into induces an isomorphism on the relative homologies:
The theorem states that if the closure of is contained in the interior of , then can be excised.
Often, subspaces that do not satisfy this containment criterion still can be excised—it suffices to be able to find a deformation retract of the subspaces onto subspaces that do satisfy it.
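For reference, the statement can be written out in conventional notation. The letters Z ⊆ A ⊆ X are an assumption of notation (following standard textbook treatments), since the source's own symbols were lost in extraction.

```latex
% If Z \subseteq A \subseteq X and the closure of Z is contained in the
% interior of A, then the inclusion of pairs
% (X \setminus Z, A \setminus Z) \hookrightarrow (X, A)
% induces isomorphisms on relative homology:
\[
  H_n(X \setminus Z,\; A \setminus Z) \;\cong\; H_n(X,\, A)
  \qquad \text{for all } n .
\]
```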
Proof Sketch
The proof of the excision theorem is quite intuitive, though the details are rather involved. The idea is to subdivide the simplices in a relative cycle in to get another chain consisting of "smaller" simplices, and continuing the process until each simplex in the chain lies entirely in the interior of or the interior of . Since these form an open cover for and simplices are compact, we can eventually do this in a finite number of steps. This process leaves the original homology class of the chain unchanged (this says the subdivision operator is chain homotopic to the identity map on homology).
In the relative homology , then, this says all the terms contained entirely in the interior of can be dropped without affecting the homology class of the cycle. This allows us to show that the inclusion map is an isomorphism, as each relative cycle is equivalent to one that avoids entirely.
Applications
Eilenberg–Steenrod Axioms
The excision theorem is taken to be |
https://en.wikipedia.org/wiki/Choquet%20game | The Choquet game is a topological game named after Gustave Choquet, who in 1969 was the first to investigate such games. A closely related game is known as the strong Choquet game.
Let be a non-empty topological space. The Choquet game of , , is defined as follows: Player I chooses , a non-empty open subset of , then Player II chooses , a non-empty open subset of , then Player I chooses , a non-empty open subset of , etc. The players continue this process, constructing a decreasing sequence of non-empty open sets. If the intersection of this sequence is empty, Player I wins; otherwise Player II wins.
It was proved by John C. Oxtoby that a non-empty topological space is a Baire space if and only if Player I has no winning strategy. A nonempty topological space in which Player II has a winning strategy is called a Choquet space. (Note that it is possible that neither player has a winning strategy.) Thus every Choquet space is Baire. On the other hand, there are Baire spaces (even separable metrizable ones) which are not Choquet spaces, so the converse fails.
The strong Choquet game of , , is defined similarly, except that Player I chooses , then Player II chooses , then Player I chooses , etc., such that for all . A topological space in which Player II has a winning strategy for is called a strong Choquet space. Every strong Choquet space is a Choquet space, although the converse does not hold.
All nonempty complete metric spaces and compact T2 spaces are strong Choquet. (In the first case, Player II, given , chooses such that and . Then the sequence for all .) Any subset of a strong Choquet space which is a set is strong Choquet. Metrizable spaces are completely metrizable if and only if they are strong Choquet. |
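The parenthetical completeness argument can be sketched more explicitly. The notation below is an assumption, following standard treatments of the strong Choquet game, since the source's own symbols were lost in extraction.

```latex
% In a non-empty complete metric space (X, d), Player I plays pairs
% (x_n, U_n) with x_n \in U_n open; Player II replies with a small ball:
\[
  V_n = B(x_n, r_n), \qquad
  x_n \in V_n \subseteq \overline{V_n} \subseteq U_n, \qquad
  r_n < 2^{-n} .
\]
% Since every later move is contained in V_n, the points (x_n) form a
% Cauchy sequence; by completeness they converge, and the limit lies in
% each \overline{V_n}, so \bigcap_n V_n \neq \emptyset and Player II wins.
```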
https://en.wikipedia.org/wiki/Epigenetics%20of%20bipolar%20disorder | Epigenetics of bipolar disorder is the effect that epigenetics has on triggering and maintaining bipolar disorder.
Bipolar disorder is a chronic mood disorder, characterized by manic and depressive episodes. The symptoms of a manic episode include high mood, low sleep, and reduced inhibition, while the symptoms of a depressive episode include low mood, lethargy, and reduced motivation. There are different types of bipolar disorders; the two most common are bipolar I and bipolar II. Patients are diagnosed with bipolar disorder I if their manic episodes last at least seven consecutive days and they experience major depressive symptoms over the course of two weeks. In bipolar disorder II, patients experience shorter hypomanic episodes or manic symptoms that have less disruptive impacts on their daily lives. Sometimes patients can experience extreme cycling where they experience four or more episodes of mania and major depression in one year. In addition to affecting mood, people who have bipolar disorder often deal with impaired cognitive abilities, where memory, speech, attention and decision-making skills are all impacted. Bipolar disorder has one of the highest rates of suicide amongst psychiatric disorders, as well as high comorbidity rates with alcohol and substance use disorders.
Bipolar disorder has a genetic component. This means that the sequence of nucleotides in DNA contains information that can lead to bipolar disorder in individuals. Researchers determined that bipolar disorder has a genetic component by comparing individuals who have been diagnosed with the disorder and those who have not. However, findings are inconsistent as to what specific genes are involved.
The trouble that researchers have had in conclusively identifying genes that cause bipolar disorder has led them to search for an epigenetic component to bipolar disorder. Epigenetics is the study of heritable phenotypes that occur without changes to the primary DNA sequence. Typically, epigen |
https://en.wikipedia.org/wiki/Recrudescence | Recrudescence is the recurrence of an undesirable condition. In medicine, it is usually defined as the recurrence of symptoms after a period of remission or quiescence, in which sense it can sometimes be synonymous with relapse. In a narrower sense, it can also be such a recurrence with higher severity than before the remission. "Relapse" conventionally has a specific (albeit somewhat illogical) meaning when used in relation to malaria (see below).
Malaria
In malaria, recurrence can take place due to recrudescence; or relapse; or re-infection (via mosquito transmission). Relapse means that a recurrence has been precipitated by a dormant stage in the liver called a "hypnozoite". Thus, relapse is applied only for those plasmodial species that have hypnozoites in the life cycle, such as Plasmodium vivax and P. ovale. On the other hand, recrudescence means that circulating, multiplying parasites are detected after having persisted in the bloodstream (or elsewhere) at undetectable levels for a period of time, as merozoites (as opposed to hypnozoites). This term is applied for Plasmodium species that are not associated with hypnozoite-mediated recurrences, such as P. falciparum, P. malariae, and P. knowlesi. Recrudescence is also used for malarial recurrence caused by drug-resistant strains of P. vivax and P. ovale where parasites remained in the bloodstream despite treatment.
Melioidosis
In melioidosis, a recurrent infection can be due to re-infection or relapse. Re-infection is a recurrence of symptoms caused by infection with a new strain of Burkholderia pseudomallei after the eradication therapy of melioidosis has been completed. Relapse, meanwhile, refers to patients who present with melioidosis symptoms because the infection in the bloodstream was not cleared despite completion of eradication therapy. Recrudescence, on the other hand, is the recurrence of melioidosis symptoms during the eradication therapy.
Bovine viral diarrhoea
The bovine viral diarrhoea virus (bovine virus diarrhea) i |
https://en.wikipedia.org/wiki/Strain%20%28chemistry%29 | In chemistry, a molecule experiences strain when its chemical structure undergoes some stress which raises its internal energy in comparison to a strain-free reference compound. The internal energy of a molecule consists of all the energy stored within it. A strained molecule has an additional amount of internal energy which an unstrained molecule does not. This extra internal energy, or strain energy, can be likened to a compressed spring. Much like a compressed spring must be held in place to prevent release of its potential energy, a molecule can be held in an energetically unfavorable conformation by the bonds within that molecule. Without the bonds holding the conformation in place, the strain energy would be released.
Summary
Thermodynamics
The equilibrium of two molecular conformations is determined by the difference in Gibbs free energy of the two conformations. From this energy difference, the equilibrium constant for the two conformations can be determined.
If there is a decrease in Gibbs free energy from one state to another, this transformation is spontaneous and the lower energy state is more stable. A highly strained, higher energy molecular conformation will spontaneously convert to the lower energy molecular conformation.
Enthalpy and entropy are related to Gibbs free energy through the equation (at a constant temperature): ΔG = ΔH − TΔS.
Enthalpy is typically the more important thermodynamic function for determining a more stable molecular conformation. While there are different types of strain, the strain energy associated with all of them is due to the weakening of bonds within the molecule. Since enthalpy is usually more important, entropy can often be ignored. This isn't always the case; if the difference in enthalpy is small, entropy can have a larger effect on the equilibrium. For example, n-butane has two possible conformations, anti and gauche. The anti conformation is more stable by 0.9 kcal mol−1. We would expect that butane is roughly |
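The link between the free-energy difference and the equilibrium constant can be made concrete with the butane numbers quoted above. This is a sketch: the gas constant value and room temperature are assumed, and the twofold degeneracy of the gauche conformation is ignored for simplicity.

```python
from math import exp

# Boltzmann estimate of the per-conformer anti/gauche ratio of n-butane,
# using the 0.9 kcal/mol stability difference quoted in the text.
R = 1.987e-3          # kcal mol^-1 K^-1 (gas constant)
T = 298.0             # K (assumed room temperature)
dG = -0.9             # kcal/mol, anti relative to gauche (anti more stable)

K = exp(-dG / (R * T))    # equilibrium constant K = [anti]/[gauche]
frac_anti = K / (K + 1)   # fraction of molecules in the anti conformation
print(K, frac_anti)       # K is roughly 4.6, i.e. mostly anti
```

Counting both gauche rotamers would lower the anti fraction somewhat, which is why the degeneracy caveat matters when comparing with experiment.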
https://en.wikipedia.org/wiki/Tortilla%20Wall | The Tortilla Wall is a term given to a 14-mile (22.5 kilometer) section of United States border fence between the Otay Mesa Border Crossing in San Diego, California and the Pacific Ocean.
This "San Diego wall" was completed in the early 1990s. While there are other walls at various points along the border, the Tortilla Wall is the longest to date. No other wall section has acquired a distinct name, so "the Tortilla Wall" is often used to describe the entire set of walled defensive structures.
The Tortilla Wall is marked with graffiti, crosses, photos, pictures and remembrances of migrants who died trying to illegally enter the United States.
Effectiveness
The effectiveness of the wall has been significant according to U.S. Congressional testimony by Representative Ed Royce:
...apprehensions along the region with a security fence dropped from 202,000 in 1992 to 9,000 in 1994.
The building of the Tortilla Wall is generally considered by Mexicans to be an unfriendly gesture.
It is a symbol of the controversial immigration issue. It is argued that the wall simply forces illegal border crossings to be moved to the more dangerous area of the Arizona desert.
Expansion of the wall
In 2006, the U.S. Congress passed the Secure Fence Act of 2006
which authorized spending $1.2 billion to build 700 miles (1,100 km) of additional fencing on the southern border facing Mexico.
Anecdotal wall stories
Tunnels under the wall are still a common way to illegally cross the border. Some tunnels are quite sophisticated. One such tunnel created by smugglers ran from Tijuana to San Diego, was a half mile long, and included a concrete floor as well as electricity. Other tunnels have included steel rails, while some tunnels are simply dirt passageways or connect to sewer or drain systems.
As a stunt, a circus cannon was placed on the south side of the wall and an acrobat was blasted over the wall into the Border Field State Park in the U.S. He had his passport with him.
See also |
https://en.wikipedia.org/wiki/Teacher%20Institute%20for%20Evolutionary%20Science | The Teacher Institute for Evolutionary Science (TIES) is a project of the Richard Dawkins Foundation for Reason and Science and a program of the Center for Inquiry which provides free workshops and materials to elementary, middle school, and, more recently, high school science teachers to enable them to effectively teach evolution based on the Next Generation Science Standards.
History
In 2013, Bertha Vazquez, TIES director and middle school science teacher in Miami, met Richard Dawkins at the University of Miami and discussed evolution education with him and a number of science professors. The discussion centered on teachers feeling unprepared to teach evolution. This encounter and the understanding that teachers learn the most from each other inspired her to conduct workshops on evolution for her fellow teachers. After hearing about Vazquez's work, Dawkins followed up with a visit to Vazquez's school in 2014 to speak to teachers from the Miami-Dade County school district. Dawkins eventually asked Vazquez if she would be willing to take her workshop project nationwide. With the encouragement of Dawkins and funding from his foundation, and also with encouragement from Robyn Blumner of the Center for Inquiry, the Teacher Institute for Evolutionary Science began offering workshops in 2015.
Activity
The first TIES workshop was in April 2015 in collaboration with the Miami Science Museum. A total of ten workshops took place in 2015. Since then, the program has expanded, as of 2020, to over 200 workshops in all 50 states. While Bertha Vazquez presented many of the workshops earlier on, over 80 presenters are now active in the nationwide program. Presenters are usually high school or college biology educators in the states in which their workshops take place, and workshops take into account the given state's evolution education standards. Workshops vary in length, and in cases of longer workshops or webinars, scientists and other relevant guests are also |
https://en.wikipedia.org/wiki/Solar%20Physics%20Division | The Solar Physics Division of the American Astronomical Society (AAS/SPD) or (AAS-SPD), often referred to as simply the "Solar Physics Division" (SPD), is the primary professional organization of solar physicists in the U.S. It exists to advance the study of the Sun and to coordinate such research with other branches of science. The SPD organizes meetings and awards certain solar-physics-specific prizes, occasionally advocates for solar physics in the political arena, and promotes outreach via formal and informal educational projects.
The SPD awards the George Ellery Hale Prize for outstanding contributions over an extended period of time to solar astronomy, and the Karen Harvey Prize for a significant contribution to the study of the Sun early in a scientist's professional career. The SPD also gives popular writing awards, and holds a student poster contest at its meetings. Contestants are judged on readability, flow, quality of appearance and proportion of independent work done by the student. Judges also take into account the oral presentation by the student, how well the conclusions line up with the aim or purpose and the overall quality of the work. |
https://en.wikipedia.org/wiki/ZNF444 | Zinc finger protein 444 is a protein that in humans is encoded by the ZNF444 gene.
Function
This gene encodes a zinc finger protein which activates transcription of a scavenger receptor gene involved in the degradation of acetylated low-density lipoprotein (Ac-LDL). This gene is located in a cluster of zinc finger genes on chromosome 19 at q13.4. A pseudogene of this gene is located on chromosome 15. Multiple transcript variants encoding different isoforms have been found for this gene.
https://en.wikipedia.org/wiki/ADP%20ribosylation%20factor | ADP ribosylation factors (ARFs) are members of the ARF family of GTP-binding proteins of the Ras superfamily. ARF family proteins are ubiquitous in eukaryotic cells, and six highly conserved members of the family have been identified in mammalian cells. Although ARFs are soluble, they generally associate with membranes because of N-terminus myristoylation. They function as regulators of vesicular traffic and actin remodelling.
The small ADP ribosylation factor (Arf) GTP-binding proteins are major regulators of vesicle biogenesis in intracellular traffic. They are the founding members of a growing family that includes Arl (Arf-like), Arp (Arf-related proteins) and the remotely related Sar (Secretion-associated and Ras-related) proteins. Arf proteins cycle between inactive GDP-bound and active GTP-bound forms that bind selectively to effectors. The classical structural GDP/GTP switch is characterised by conformational changes at the so-called switch 1 and switch 2 regions, which bind tightly to the gamma-phosphate of GTP but poorly or not at all to the GDP nucleotide. Structural studies of Arf1 and Arf6 have revealed that although these proteins feature the switch 1 and 2 conformational changes, they depart from other small GTP-binding proteins in that they use an additional, unique switch to propagate structural information from one side of the protein to the other.
The GDP/GTP structural cycles of human Arf1 and Arf6 feature a unique conformational change that affects the beta2beta3 strands connecting switch 1 and switch 2 (interswitch) and also the amphipathic helical N-terminus. In GDP-bound Arf1 and Arf6, the interswitch is retracted and forms a pocket to which the N-terminal helix binds, the latter serving as a molecular hasp to maintain the inactive conformation. In the GTP-bound form of these proteins, the interswitch undergoes a two-residue register shift that pulls switch 1 and switch 2 up, restoring an active conformation that can bind GTP. In this confo |
https://en.wikipedia.org/wiki/Epik | Epik is an American domain registrar and web hosting company known for providing services to alt-tech websites that host far-right, neo-Nazi, and other extremist materials. It has been described as a "safehaven for the extreme right" because of its willingness to provide services to far-right websites that have been denied service by other Internet service providers.
Some of Epik's notable clients have included social network Gab and the imageboard website 8chan. In 2021, the Parler social network moved its domain registration to Epik when it was denied hosting and other web services after it was used to help plan the 2021 storming of the U.S. Capitol. Epik has also provided hosting and registrar services to Patriots.win, formerly TheDonald.win, an independent far-right forum that has served as the successor for the r/The_Donald subreddit that was banned in June 2020.
Epik was founded in 2009 by Rob Monster, and is based in Washington State. In September and October 2021, hackers identifying themselves as a part of Anonymous released several caches of data obtained from Epik in a large-scale data breach.
History
Epik was founded in 2009 by Rob Monster, who served as the company's chief executive officer until 2022. Until 2018, Epik primarily focused on domain trading and mostly stayed out of the public spotlight. In 2018, the company came to public attention when they decided to provide services to Gab.
Epik is primarily known for its domain name registration services, and describes itself as the "Swiss bank of the domain industry". In the late 2010s, following a series of acquisitions, Epik also began providing an increasing variety of other web services including web hosting, content delivery network (CDN) services, and DDoS protection.
Acquisitions
In February 2019, it was announced that Epik had acquired BitMitigate, an American cybersecurity company based in Vancouver, Washington. BitMitigate protects websites against potential threats including distribut |
https://en.wikipedia.org/wiki/Prenatal%20perception | Prenatal perception is the study of the extent of somatosensory and other types of perception during pregnancy. In practical terms, this means the study of fetuses; none of the accepted indicators of perception are present in embryos. Studies in the field inform the abortion debate, along with certain related pieces of legislation in countries affected by that debate. As of 2022, there is no scientific consensus on whether a fetus can feel pain.
Prenatal hearing
Numerous studies have found evidence indicating a fetus's ability to respond to auditory stimuli. The earliest fetal response to a sound stimulus has been observed at 16 weeks' gestational age, while the auditory system is fully functional at 25–29 weeks' gestation. At 33–41 weeks' gestation, the fetus is able to distinguish its mother's voice from others.
Prenatal pain
The hypothesis that human fetuses are capable of perceiving pain in the first trimester has little support, although fetuses at 14 weeks may respond to touch. A multidisciplinary systematic review from 2005 found limited evidence that thalamocortical pathways begin to function "around 29 to 30 weeks' gestational age", only after which a fetus is capable of feeling pain.
In March 2010, the Royal College of Obstetricians and Gynecologists submitted a report, concluding that "Current research shows that the sensory structures are not developed or specialized enough to respond to pain in a fetus of less than 24 weeks".
The report specifically identified the anterior cingulate as the area of the cerebral cortex responsible for pain processing. The anterior cingulate is part of the cerebral cortex, which begins to develop in the fetus at week 26. A co-author of that report revisited the evidence in 2020, specifically the functionality of the thalamic projections into the cortical subplate, and posited "an immediate and unreflective pain experience...from as early as 12 weeks."
There is a consensus among developmental neurobiologists that the |
https://en.wikipedia.org/wiki/Bertrand%27s%20box%20paradox | Bertrand's box paradox is a veridical paradox in elementary probability theory. It was first posed by Joseph Bertrand in his 1889 work Calcul des Probabilités.
There are three boxes:
a box containing two gold coins,
a box containing two silver coins,
a box containing one gold coin and one silver coin.
The question is: after choosing a box at random and withdrawing one coin at random, if that coin happens to be gold, what is the probability that the next coin drawn from the same box is also gold?
A veridical paradox is a puzzle whose correct solution appears counterintuitive. It may seem intuitive that the probability that the remaining coin is gold should be 1/2, but the probability is actually 2/3. However, this is not the paradox Bertrand referred to. He showed that if 1/2 were correct, it would lead to a contradiction, so 1/2 cannot be correct.
This simple but counterintuitive puzzle is used as a standard example in teaching probability theory. The solution illustrates some basic principles, including the Kolmogorov axioms.
Solution
The problem can be reframed by describing the boxes as each having one drawer on each of two sides. Each drawer contains a coin. One box has a gold coin on each side (GG), one a silver coin on each side (SS), and the other a gold coin on one side and a silver coin on the other (GS). A box is chosen at random, a random drawer is opened, and a gold coin is found inside it. What is the chance of the coin on the other side being gold?
The following faulty reasoning appears to give a probability of 1/2:
Originally, all three boxes were equally likely to be chosen.
The chosen box cannot be box SS.
So it must be box GG or GS.
The two remaining possibilities are equally likely. So the probability that the box is GG, and the other coin is also gold, is 1/2.
The flaw is in the last step. While those two cases were originally equally likely, the fact that you are certain to find a gold coin if you had chosen the GG box, but |
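The correct answer of 2/3 can be checked with a quick Monte Carlo simulation. This is a sketch; the box encoding and trial count are arbitrary choices, not from the source.

```python
import random

# Monte Carlo check of Bertrand's box paradox: conditioned on the first coin
# drawn being gold, the other coin in the same box is gold about 2/3 of the
# time, because the GG box is twice as likely to yield a gold first draw.
random.seed(1)
boxes = [("G", "G"), ("S", "S"), ("G", "S")]

gold_first = other_gold = 0
for _ in range(200_000):
    box = list(random.choice(boxes))  # pick a box uniformly at random
    random.shuffle(box)               # pick a random coin from that box
    if box[0] == "G":                 # condition on the first coin being gold
        gold_first += 1
        other_gold += (box[1] == "G")

print(other_gold / gold_first)        # close to 2/3, not 1/2
```

The simulation makes the flaw visible: roughly two thirds of the gold first draws come from the GG box, so the two remaining possibilities are not equally likely.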
https://en.wikipedia.org/wiki/SIM%20card | A SIM card (full form: Subscriber Identity Module or Subscriber Identification Module) is an integrated circuit (IC) intended to securely store an international mobile subscriber identity (IMSI) number and its related key, which are used to identify and authenticate subscribers on mobile telephony devices (such as mobile phones and laptops). Technically the actual physical card is known as a universal integrated circuit card (UICC); this smart card is usually made of PVC with embedded contacts and semiconductors, with the SIM as its primary component. In practice the term "SIM card" refers to the entire unit and not simply the IC.
A SIM contains a unique serial number, integrated circuit card identification (ICCID), international mobile subscriber identity (IMSI) number, security authentication and ciphering information, temporary information related to the local network, a list of the services the user has access to, and four passwords: a personal identification number (PIN) for ordinary use, and a personal unblocking key (PUK) for PIN unlocking as well as a second pair (called PIN2 and PUK2 respectively) which are used for managing fixed dialing number and some other functionality. In Europe, the serial SIM number (SSN) is also sometimes accompanied by an international article number (IAN) or a European article number (EAN) required when registering online for the subscription of a prepaid card. It is also possible to store contact information on many SIM cards.
SIMs are always used on GSM phones; for CDMA phones, they are needed only for LTE-capable handsets. SIM cards can also be used in satellite phones, smart watches, computers, or cameras.
The first SIM cards were the size of credit and bank cards; sizes were reduced several times over the years, usually keeping electrical contacts the same, so that a larger card could be cut down to a smaller size.
SIMs are transferable between different mobile devices by removing the card itself. eSIM is replacing physi |
https://en.wikipedia.org/wiki/Cognitive%20password | A cognitive password is a form of knowledge-based authentication that requires a user to answer a question, presumably something they intrinsically know, to verify their identity. Cognitive password systems have been researched for many years and are currently commonly used as a form of secondary access. They were developed to overcome the common memorability vs. strength problem that exists with the traditional password. Cognitive passwords, when compared to other password systems, can be measured through the usage of a memorability vs. guessability ratio.
History
Research on passwords as an authentication method has struggled between memorability and strong security. Passwords that are easily remembered are easily cracked by attackers. On the other hand, strong passwords are difficult to crack but also difficult to remember.
When passwords are difficult to remember, users may write them down, and the secrecy of the password is compromised. Early research into this trade-off between security and usability aimed to develop a password system that utilized easily remembered personal facts and encouraged user participation. This line of research resulted in the concept of the associative password, a password system based on user-selected cues and responses. This concept of associative passwords was extended to a pre-specified set of questions and answers that users would be expected to know and could easily recall. Empirical analysis of passwords and human cognition resulted in a recommendation that people should not be expected to remember more than four complex passwords.
Building upon the idea of questions, later researchers developed a series of innovations for cognitive passwords. Passfaces used the ability to identify individuals in a social network and the particular cognitive strength of recognizing faces. Later work evaluating these cues reinforced the recommendation of four passwords as a reasonable cognitive expectation.
A historical overview of the us |
https://en.wikipedia.org/wiki/TXT%20e-solutions | TXT e-solutions is an international software development and consulting company.
Founded in 1989, TXT e-solutions has been listed since July 2000 on the Borsa Italiana–London Stock Exchange (TXT.MI) STAR segment. The company has its headquarters in Milan with offices in Italy, France, United Kingdom, Germany, Switzerland and the United States.
Overview
TXT operates across Aerospace, Aviation, Defense, Industrial, Government and Fintech. TXT is headquartered in Milan and has subsidiaries in Italy, Germany, the United Kingdom, France, Switzerland and the United States. The holding company TXT e-solutions S.p.A, has been listed on the Italian Stock Exchange, STAR segment (TXT.MI), since July 2000.
Another focus of TXT e-solutions is Research & Development. Its Corporate Research team is involved in several national (Italian) and international research programs.
History
TXT e-solutions was founded in 1989. During the 1990s, it released software products dedicated to Production Planning & Scheduling and then its first software suite for Supply Chain Management (SCM). Soon after its IPO in 2000, it began international operations and started up subsidiaries in France, Spain, the United Kingdom and Germany. In 2002,
TXT e-solutions released its first solutions for Supply Chain Management specialized by business process: Demand Management and Sales and Operations Planning. The year after, it released its new software products for Product Data Management (PDM). TXT Polymedia, a former subsidiary of TXT e-solutions, began in the late 1990s with Multichannel Content Management software and extended its offering to Digital Terrestrial Television.
In 2004, the TXT PERFORM software products for Demand & Supply Chain Management and Sales & Operations Planning were released, as well as a new Polymedia Mobile Platform. In 2006, Microsoft selected TXT e-solutions for the Industry Builder Initiative to cover the Supply Chain Planning needs of consumer-driven industries. The yea
https://en.wikipedia.org/wiki/Altran%20Praxis | Altran UK (formerly known as Altran Praxis, Praxis High Integrity Systems, Praxis Critical Systems, Altran Xype, Xype and Altran Technologies) is a division of parent company Altran. Altran Praxis was a British software house that specialised in critical systems. This role is continued under the banner of high-tech engineering consultancy services provided by the rest of the Altran group.
The division formerly known as Praxis (the critical systems specialists) is based in SouthGate, Bath, England, close to Bath Spa railway station, and also has offices in London, Loughborough, Paris, Sophia Antipolis, and Bangalore.
Altran UK as a whole has offices in Bath, Bristol, London, Loughborough, Manchester, Slough and Coventry.
History
The company Praxis Systems Limited was founded by Martyn Thomas and David Bean in 1983:
it was incorporated on 1 June 1983 and commenced business on 1 July 1983. On 28 June 1985 it became a public limited company, Praxis Systems plc.
Until 1988, Praxis was owned almost entirely by its staff.
In 1988 Praxis obtained venture capital finance in order to provide funds for future acquisitions and working capital for continued growth.
On 27 November 1992 Praxis was acquired by Deloitte Consulting (then known as Touche Ross), an international firm of accountants and management consultants.
The critical systems part of the company was acquired by the Altran Group in 1997.
In 2004, Praxis Critical Systems and HIS Consulting merged to form Praxis High Integrity Systems. In January 2010, the company was merged with SC2 by Altran to form Altran Praxis. The company has since been rebranded to Altran along with Altran Xype and Altran Technologies.
In December 2012, AdaCore along with Altran Praxis released SPARK Pro 11. In 2013, Altran acquired Sentaca, a specialty telecoms consultancy.
A distinguishing feature of the former Praxis is its extensive use of formal methods such as the Z notation and the SPARK toolset (acquired through the takeo
https://en.wikipedia.org/wiki/Selenography | Selenography is the study of the surface and physical features of the Moon (also known as geography of the Moon, or selenodesy). Like geography and areography, selenography is a subdiscipline within the field of planetary science. Historically, the principal concern of selenographists was the mapping and naming of the lunar terrane identifying maria, craters, mountain ranges, and other various features. This task was largely finished when high resolution images of the near and far sides of the Moon were obtained by orbiting spacecraft during the early space era. Nevertheless, some regions of the Moon remain poorly imaged (especially near the poles) and the exact locations of many features (like crater depths) are uncertain by several kilometers. Today, selenography is considered to be a subdiscipline of selenology, which itself is most often referred to as simply "lunar science." The word selenography is derived from the Greek word Σελήνη (Selene, meaning Moon) and γράφω graphō, meaning to write.
History
The idea that the Moon is not perfectly smooth dates back at least to Democritus, who asserted that the Moon's "lofty mountains and hollow valleys" were the cause of its markings. However, not until the end of the 15th century AD did serious study of selenography begin. Around AD 1603, William Gilbert made the first lunar drawing based on naked-eye observation. Others soon followed, and when the telescope was invented, initial drawings of poor accuracy were made, but they soon improved in tandem with optics. In the early 18th century, the librations of the Moon were measured, which revealed that more than half of the lunar surface was visible to observers on Earth. In 1750, Johann Meyer produced the first reliable set of lunar coordinates that permitted astronomers to locate lunar features.
Lunar mapping became systematic in 1779 when Johann Schröter began meticulous observation and measurement of lunar topography. In 1834 Johann Heinrich von Mädler pub |
https://en.wikipedia.org/wiki/Carl%20Hewitt | Carl Eddie Hewitt (; 1944 – 7 December 2022) was an American computer scientist who designed the Planner programming language for automated planning and the actor model of concurrent computation, which have been influential in the development of logic, functional and object-oriented programming. Planner was the first programming language based on procedural plans invoked using pattern-directed invocation from assertions and goals. The actor model influenced the development of the Scheme programming language, the π-calculus, and served as an inspiration for several other programming languages.
Education and career
Hewitt obtained his PhD in mathematics at MIT in 1971, under the supervision of Seymour Papert, Marvin Minsky, and Mike Paterson. He began his employment at MIT that year, and retired from the faculty of the MIT Department of Electrical Engineering and Computer Science during the 1999–2000 school year. He became emeritus in the department in 2000. Among the doctoral students that Hewitt supervised during his time at MIT are Gul Agha, Henry Baker, William Clinger, Irene Greif, and Akinori Yonezawa.
From September 1989 to August 1990, Hewitt was the IBM Chair Visiting Professor in the Department of Computer Science at Keio University in Japan. He has also been a Visiting Professor at Stanford University.
Research
Hewitt was best known for his work on the actor model of computation. For the last decade, his work had been in "inconsistency robustness", which aims to provide practical rigorous foundations for systems dealing with pervasively inconsistent information. This work grew out of his doctoral dissertation focused on the procedural (as opposed to logical) embedding of knowledge, which was embodied in the Planner programming language.
His publications also include contributions in the areas of open information systems, organizational and multi-agent systems, logic programming, concurrent programming, paraconsistent logic and cloud computing.
Plann |
https://en.wikipedia.org/wiki/VaxTele%20SIP%20Server%20SDK | VaxTele SIP Server SDK (software development kit) is a development toolkit that allows software vendors and Internet telephony service providers (ITSPs) to build SIP servers and Session Initiation Protocol (SIP)-based VoIP systems for Microsoft Windows, providing computer-to-computer voice chat, chat rooms, IVR systems, call center services, calling card services, and the ability to dial and receive calls between computers and PSTN (public switched telephone network) or mobile phones.
SIP Server Development
VaxTele SIP Server SDK is based on Component Object Model (COM) technology and can be incorporated into any Microsoft Windows-based application. It also includes a technical manual, a demo SIP server, and sample code for the C#, Visual C++, Delphi and VB.NET languages, showing how to use VaxTele's COM component and accelerating SIP server development.
Develop IP-PBX Features
VaxTele SIP SDK allows developers to add many IP-PBX (private branch exchange) features: voice mail, call transfer, chat rooms, interactive voice response (IVR), multiuser conference calls, automatic call distribution, call queues and stealth listening.
Connection with PSTN and Mobile Networks
There are different ways to connect VaxTele integrated SIP Server to PSTN and/or mobile network to dial and receive PSTN and mobile calls.
Many SIP-based PSTN and mobile gateways are available to connect a VaxTele-based SIP server to PSTN and mobile networks; vendors include Cisco, Linksys, D-Link and Quintum.
Many Internet telephony service providers (ITSPs) already offer services for dialing and receiving SIP-based phone calls to and from PSTN and mobile networks. A VaxTele-based SIP server can connect to those providers to dial and receive PSTN and mobile (local and long-distance) calls. Some of them are: YuppyDialer, BroadVoice, inphonex and voxbone.
See also
Software Development Kit
Asterisk
Comparison of VoIP software
List of SIP software
IP P |
https://en.wikipedia.org/wiki/Type%20inference | Type inference refers to the automatic detection of the type of an expression in a formal language. These include programming languages and mathematical type systems, but also natural languages in some branches of computer science and linguistics.
Nontechnical explanation
Types in a most general view can be associated to a designated use suggesting and restricting the activities possible for an object of that type. Many nouns in language specify such uses. For instance, the word leash indicates a different use than the word line. Calling something a table indicates another designation than calling it firewood, though it might be materially the same thing. While their material properties make things usable for some purposes, they are also subject of particular designations. This is especially the case in abstract fields, namely mathematics and computer science, where the material is finally only bits or formulas.
To exclude unwanted, but materially possible uses, the concept of types is defined and applied in many variations. In mathematics, Russell's paradox sparked early versions of type theory. In programming languages, typical examples are "type errors", e.g. ordering a computer to sum values that are not numbers. While materially possible, the result would no longer be meaningful and perhaps disastrous for the overall process.
In a typing, an expression is opposed to a type. For example, 3, 2 + 1, and 6 − 3 are all separate terms with the type nat for natural numbers. Traditionally, the expression is followed by a colon and its type, such as 3 : nat. This means that the value 3 is of type nat. This form is also used to declare new names, e.g. n : nat, much like introducing a new character to a scene by the words "detective Decker".
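The idea of checking that an operation fits the types of its operands can be made concrete with a toy checker for a tiny expression language (a sketch in Python; the type names and operators here are invented for illustration, not part of any real system):

```python
def infer(expr):
    """Infer the type of a tiny expression: ints, strings, two operators."""
    if isinstance(expr, int):
        return "nat"
    if isinstance(expr, str):
        return "string"
    op, left, right = expr                 # e.g. ("+", 1, 2)
    lt, rt = infer(left), infer(right)
    if op == "+" and lt == rt == "nat":
        return "nat"                       # nat + nat : nat
    if op == "++" and lt == rt == "string":
        return "string"                    # string ++ string : string
    raise TypeError(f"cannot apply {op!r} to {lt} and {rt}")

print(infer(("+", 1, 2)))        # nat
print(infer(("++", "a", "b")))   # string
```

Asking it to add a number to a string, e.g. `infer(("+", 1, "a"))`, raises a type error, which is exactly the "unwanted but materially possible use" the text describes.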
Contrary to a story, where the designations slowly unfold, the objects in formal languages often have to be defined with their type from very beginning. Additionally, if the expressions are ambiguous, types may be needed to make the intended use explicit. For |
https://en.wikipedia.org/wiki/Pair%20production | Pair production is the creation of a subatomic particle and its antiparticle from a neutral boson. Examples include creating an electron and a positron, a muon and an antimuon, or a proton and an antiproton. Pair production often refers specifically to a photon creating an electron–positron pair near a nucleus. As energy must be conserved, for pair production to occur, the incoming energy of the photon must be above a threshold of at least the total rest mass energy of the two particles created. (As the electron is the lightest, hence, lowest mass/energy, elementary particle, it requires the least energetic photons of all possible pair-production processes.) Conservation of energy and momentum are the principal constraints on the process.
All other conserved quantum numbers (angular momentum, electric charge, lepton number) of the produced particles must sum to zero; thus the created particles have opposite values of each other. For instance, if one particle has an electric charge of +1, the other must have an electric charge of −1, and if one particle has strangeness of +1, the other must have strangeness of −1.
The probability of pair production in photon–matter interactions increases with photon energy and also increases approximately as the square of atomic number of (hence, number of protons in) the nearby atom.
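The threshold stated above can be made concrete with a one-line calculation (the electron rest energy value is assumed here from the CODATA recommended constants):

```python
# Minimum photon energy for electron–positron pair production:
# the photon must supply at least the combined rest energy of both particles.
ELECTRON_REST_ENERGY_MEV = 0.51099895  # m_e * c^2 in MeV (CODATA value)

threshold_mev = 2 * ELECTRON_REST_ENERGY_MEV
print(f"e-/e+ pair production threshold: {threshold_mev:.4f} MeV")
```

Any photon below roughly 1.022 MeV cannot produce an electron–positron pair, which is why pair production only becomes relevant for gamma rays.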
Photon to electron and positron
For photons with high photon energy (MeV scale and higher), pair production is the dominant mode of photon interaction with matter. These interactions were first observed in Patrick Blackett's counter-controlled cloud chamber, leading to the 1948 Nobel Prize in Physics. If the photon is near an atomic nucleus, the energy of a photon can be converted into an electron–positron pair:
γ → e− + e+   (in the field of a nucleus Z)
The photon's energy is converted to particle mass in accordance with Einstein's equation, E = mc²; where E is energy, m is mass and c is the speed of light. The photon must have higher energy than the sum of the rest mass energies of
https://en.wikipedia.org/wiki/Spiral%20ligament | The spiral ligament is a fibrous cushion located between the stria vascularis and the bony otic capsule.
The periosteum, forming the outer wall of the cochlear duct, is greatly thickened and altered in character.
Additional images |
https://en.wikipedia.org/wiki/Three-photon%20microscopy | Three-photon microscopy (3PEF) is a high-resolution fluorescence microscopy technique based on a nonlinear excitation effect. Unlike two-photon excitation microscopy, it uses three exciting photons. It typically uses lasers of 1300 nm or longer wavelength to excite fluorescent dyes with three simultaneously absorbed photons. The fluorescent dyes then emit one photon whose energy is (slightly smaller than) three times the energy of each incident photon. Compared to two-photon microscopy, three-photon microscopy suppresses fluorescence away from the focal plane more strongly: the out-of-focus signal falls off with the fourth power of distance from the focal plane, versus the second power for two-photon microscopy. In addition, three-photon microscopy employs near-infrared light, which is scattered less by tissue. This causes three-photon microscopy to have higher resolution than conventional microscopy.
Concept
Three-photon excited fluorescence was first observed by Singh and Bradley in 1964 when they estimated the three-photon absorption cross section of naphthalene crystals. In 1996, Stefan W. Hell designed experiments to validate the feasibility of applying three-photon excitation to scanning fluorescence microscopy, which further proved the concept of three-photon excited fluorescence.
Three-photon microscopy shares a few similarities with two-photon excitation microscopy. Both employ the point-scanning method, and both are able to image 3D samples by adjusting the position of the focus along the axial and lateral directions. Neither system requires a pinhole to block out-of-focus light. However, three-photon microscopy differs from two-photon excitation microscopy in its point spread function, resolution, penetration depth, resistance to out-of-focus light and strength of photobleaching.
In three-photon excitation, the fluorophore absorbs three photons almost simultaneously. The wavelength of the excitation laser is about 1200 nm or more in three photon microscopy with the emission wavelength slightly longer than one-th |
https://en.wikipedia.org/wiki/Schwinger%20effect | The Schwinger effect is a predicted physical phenomenon whereby matter is created by a strong electric field. It is also referred to as the Sauter–Schwinger effect, Schwinger mechanism, or Schwinger pair production. It is a prediction of quantum electrodynamics (QED) in which electron–positron pairs are spontaneously created in the presence of an electric field, thereby causing the decay of the electric field. The effect was originally proposed by Fritz Sauter in 1931 and further important work was carried out by Werner Heisenberg and Hans Heinrich Euler in 1936, though it was not until 1951 that Julian Schwinger gave a complete theoretical description.
The Schwinger effect can be thought of as vacuum decay in the presence of an electric field. Although the notion of vacuum decay suggests that something is created out of nothing, physical conservation laws are nevertheless obeyed. To understand this, note that electrons and positrons are each other's antiparticles, with identical properties except opposite electric charge.
To conserve energy, the electric field loses energy when an electron–positron pair is created, by an amount equal to 2mc², where m is the electron rest mass and c is the speed of light. Electric charge is conserved because an electron–positron pair is charge neutral. Linear and angular momentum are conserved because, in each pair, the electron and positron are created with opposite velocities and spins. In fact, the electron and positron are expected to be created at (close to) rest, and then subsequently accelerated away from each other by the electric field.
Mathematical description
Schwinger pair production in a constant electric field takes place at a constant rate per unit volume, commonly referred to as Γ. The rate was first calculated by Schwinger and at leading (one-loop) order is equal to
Γ = (e²E²)/(4π³ℏ²c) · exp(−πm²c³/(eEℏ)),
where m is the mass of an electron, e is the elementary charge, E is the electric field strength, and ℏ is the reduced Planck constant. This formula cannot be expanded in a Taylor series in ,
https://en.wikipedia.org/wiki/Beta-dual%20space | In functional analysis and related areas of mathematics, the beta-dual or -dual is a certain linear subspace of the algebraic dual of a sequence space.
Definition
Given a sequence space X, the β-dual of X is defined as
X^β := { y = (y_k) : Σ_k x_k y_k converges for all x = (x_k) ∈ X }.
If X is an FK-space, then each y in X^β defines a continuous linear form on X, given by f_y(x) := Σ_k x_k y_k.
Examples
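The following classical dualities are standard sequence-space facts, stated here to illustrate the definition (ω denotes the space of all scalar sequences and φ the space of finitely supported sequences):

```latex
c_0^{\beta} = \ell^{1}, \qquad
(\ell^{1})^{\beta} = \ell^{\infty}, \qquad
\omega^{\beta} = \varphi, \qquad
\varphi^{\beta} = \omega
```

For instance, Σ_k x_k y_k converges for every null sequence x exactly when y is absolutely summable, which gives the first identity.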
Properties
The beta-dual of an FK-space E is a linear subspace of the continuous dual of E. If E is an FK-AK space then the beta-dual is linearly isomorphic to the continuous dual.
https://en.wikipedia.org/wiki/TPK%20algorithm | The TPK algorithm is a simple program introduced by Donald Knuth and Luis Trabb Pardo to illustrate the evolution of computer programming languages. In their 1977 work "The Early Development of Programming Languages", Trabb Pardo and Knuth introduced a small program that involved arrays, indexing, mathematical functions, subroutines, I/O, conditionals and iteration. They then wrote implementations of the algorithm in several early programming languages to show how such concepts were expressed.
To explain the name "TPK", the authors referred to Grimm's law (which concerns the consonants 't', 'p', and 'k'), the sounds in the word "typical", and their own initials (Trabb Pardo and Knuth). In a talk based on the paper, Knuth said:
The algorithm
Knuth describes it as follows:
In pseudocode:
ask for 11 numbers to be read into a sequence S
reverse sequence S
for each item in sequence S
    call a function to do an operation
    if result overflows
        alert user
    else
        print result
The algorithm reads eleven numbers from an input device, stores them in an array, and then processes them in reverse order, applying a user-defined function to each value and reporting either the value of the function or a message to the effect that the value has exceeded some threshold.
Implementations
Implementations in the original paper
In the original paper, which covered "roughly the first decade" of the development of high-level programming languages (from 1945 up to 1957), they gave the following example implementation "in a dialect of ALGOL 60", noting that ALGOL 60 was a later development than the languages actually discussed in the paper:
TPK: begin integer i; real y; real array a[0:10];
real procedure f(t); real t; value t;
f := sqrt(abs(t)) + 5 × t ↑ 3;
for i := 0 step 1 until 10 do read(a[i]);
for i := 10 step -1 until 0 do
begin y := f(a[i]);
if y > 400 then write(i, 'TOO LARGE')
else write(i, y);
end
end T |
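For comparison, the same algorithm can be transcribed into a modern language (a sketch in Python, following the pseudocode above; the sample input in the driver loop is illustrative):

```python
import math

def f(t):
    # The paper's user-defined function: sqrt(|t|) + 5 * t^3
    return math.sqrt(abs(t)) + 5 * t ** 3

def tpk(values):
    # Process the 11 inputs in reverse order, reporting overflows (> 400)
    results = []
    for i in range(len(values) - 1, -1, -1):
        y = f(values[i])
        if y > 400:
            results.append((i, "TOO LARGE"))
        else:
            results.append((i, y))
    return results

for i, y in tpk([0.5 * k for k in range(11)]):
    print(i, y)
```

The structure maps one-to-one onto the ALGOL version: `f` is the user-defined function, the backwards `range` replaces `for i := 10 step -1 until 0`, and the overflow branch replaces the `TOO LARGE` write.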
https://en.wikipedia.org/wiki/Patch%20Code | Patch Code is a barcode developed by Kodak for use in automated scanning.
Symbology
A Patch Code consists of two wide bars and two narrow bars. The bars are separated by three narrow spaces, so the Patch Code symbols are a fixed length. There are six distinct permutations of the wide and narrow bars, so there are six Patch Codes. The patches are called:
Patch 2, WNNW, assigns image level 2 to the current document
Patch 3, WNWN, assigns image level 3 to the current document
Patch T, NWNW, assigns predefined image level to the next document (transfer patch)
Patch 1, WWNN, used for post scan image control
Patch 4, NWWN, used for post scan image control (toggle patch)
Patch 6, NNWW, used for post scan image control
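The six-pattern count follows from choosing which two of the four bar positions are wide, and can be verified with a quick enumeration (Python sketch):

```python
from itertools import permutations

# All distinct arrangements of two wide (W) and two narrow (N) bars
patterns = sorted({"".join(p) for p in permutations("WWNN")})
print(patterns)
# ['NNWW', 'NWNW', 'NWWN', 'WNNW', 'WNWN', 'WWNN']
# i.e.  Patch 6,  T,      4,      2,      3,      1
```

Since C(4, 2) = 6, the symbology has exactly six codes and no room for more at this fixed length.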
The Patch Code needs to be in a specific position on the page, but that position may vary with the image scanner used. The Patch Code is usually close to feed edge of the scanner. That way, the Patch Code can be detected early during the paper transport. Patch Codes are often printed along all four edges of a page. That covers the requirements for many scanners, and it allows the pages to work even if the page is upside down (rotated 180 degrees).
Sometimes, the Patch Code is the only information on a page. That is, the page is only used for controlling a scan. In that situation, the scanner may just take the required action and not transfer the Patch Code image to the computer.
Application
The Patch Codes aid in batch scanning of documents by telling capture software or content management software how to classify or organize documents. Several documents may need to be scanned, with each document ending up in a separate file. Instead of running separate jobs (loading the scanner with each document separately), the documents are loaded into the scanner with a Patch 2 sheet separating each document. When the scanner encounters a Patch 2, it starts copying the scans to the next file.
The California tax return Form 540 had a P |
https://en.wikipedia.org/wiki/Kerr/CFT%20correspondence | The Kerr/CFT correspondence is an extension of the AdS/CFT correspondence or gauge-gravity duality to rotating black holes (which are described by the Kerr metric).
The duality works for black holes whose near-horizon geometry can be expressed as a product of AdS3 and a single compact coordinate. The AdS/CFT duality then maps this to a two-dimensional conformal field theory (the compact coordinate being analogous to the S5 factor in Maldacena's original work), from which the correct Bekenstein entropy can then be deduced.
The original form of the duality applies to black holes with the maximum value of angular momentum, but it has now been speculatively extended to all lesser values.
See also
AdS black hole |
https://en.wikipedia.org/wiki/Point-pair%20separation | In a cyclic order, such as the real projective line, two pairs of points separate each other when they occur alternately in the order. Thus the ordering a b c d of four points has (a,c) and (b,d) as separating pairs. This point-pair separation is an invariant of projectivities of the line.
The concept was described by G. B. Halsted at the outset of his Synthetic Projective Geometry:
Given any pair of points on a projective line, they separate a third point from its harmonic conjugate.
A pair of lines in a pencil separates another pair when a transversal crosses the pairs in separated points.
See also
Separation relation |
https://en.wikipedia.org/wiki/D-MAC | Among the family of MAC or Multiplexed Analogue Components systems for television broadcasting, D-MAC is a reduced bandwidth variant designed for transmission down cable.
The data is duobinary coded with a data burst rate of 20.25Mbit/s so that 0° as well as ±90° phasors are used.
D-MAC has a bandwidth of 8.4 MHz versus 27 MHz for C-MAC.
Most cable systems work on EBU 7 MHz channel spacing, so this approach did not work universally.
D-MAC's bandwidth problems were later fixed by D2-MAC.
D2-MAC: A fix for D-MAC
D-MAC consumed too much bandwidth for many applications, so D2-MAC was designed for European cable TV systems.
Luminance and chrominance
MAC transmits luminance and chrominance data separately in time rather than separately in frequency (as other analog television formats do, such as composite video).
Audio and scrambling (selective access)
Audio, in a format similar to NICAM was transmitted digitally rather than as an FM subcarrier.
The MAC standard included a standard scrambling system, EuroCrypt, a precursor to the standard DVB-CSA encryption system.
History and politics
MAC was developed by the UK's Independent Broadcasting Authority (IBA) and in 1982 was adopted as the transmission format for the UK's forthcoming direct broadcast satellite (DBS) television services (eventually provided by British Satellite Broadcasting). The following year MAC was adopted by the European Broadcasting Union (EBU) as the standard for all DBS.
By 1986, despite there being two standards, D-MAC and D2-MAC, favoured by different countries in Europe, an EU Directive imposed MAC on the national DBS broadcasters, to provide a stepping stone from analogue PAL and Secam formats to the eventual high definition and digital television of the future, with European TV manufacturers in a privileged position to provide the equipment required.
However, the Astra satellite system was also starting up at this time (the first satellite, Astra 1A was launched in 1989) and that ope |
https://en.wikipedia.org/wiki/Duffy%20binding%20proteins | In molecular biology, Duffy binding proteins are found in Plasmodium. Plasmodium vivax and Plasmodium knowlesi merozoites invade Homo sapiens erythrocytes that express Duffy blood group surface determinants. The Duffy receptor family is localised in micronemes, an organelle found in all organisms of the phylum Apicomplexa.
The presence of Duffy-binding-like domains defines the family of erythrocyte binding-like proteins (EBL), a family of cell invasion proteins universal among Plasmodium. Other members of this family may use other receptors, for example glycophorin A. The other universal invasion protein family is the reticulocyte binding protein homologs. Both families are essential for cell invasion, as they function cooperatively.
A duffy-binding-like domain is also found in proteins of the family Plasmodium falciparum erythrocyte membrane protein 1.
See also
Genetic resistance to malaria |
https://en.wikipedia.org/wiki/Aspen%20Center%20for%20Physics | The Aspen Center for Physics (ACP) is a non-profit institution for physics research located in Aspen, Colorado, in the Rocky Mountains region of the United States. Since its foundation in 1962, it has hosted distinguished physicists for short-term visits during seasonal winter and summer programs, to promote collaborative research in fields including astrophysics, cosmology, condensed matter physics, string theory, quantum physics, biophysics, and more.
To date, sixty-six of the center’s affiliates have won Nobel Prizes in Physics and three have won Fields Medals in mathematics. Its affiliates have garnered a wide array of other national and international distinctions, among them the Abel Prize, the Dirac Medal, the Guggenheim Fellowship, the MacArthur Prize, and the Breakthrough Prize. Its visitors have included figures such as the cosmologist and gravitational theorist Stephen Hawking, the particle physicist Murray Gell-Mann, the condensed matter theorist Philip W. Anderson, and the former Prime Minister of the United Kingdom, Margaret Thatcher.
In addition to serving as a locus for physics research, the ACP’s mission has entailed public outreach: offering programs to educate the general public about physics and to stimulate interest in the subject among youth.
History & public outreach
The Aspen Center for Physics was founded in 1962 by three people: George Stranahan, Michael Cohen, and Robert W. Craig. George Stranahan, then a postdoctoral fellow at Purdue University, played a critical role in raising funds and early public support for the initiative. He later left physics to become a craft brewer, rancher, and entrepreneur, although he remained a lifelong supporter of the center. Stranahan’s enterprises included the Flying Dog Brewery. Michael Cohen, a professor at the University of Pennsylvania, is a condensed matter physicist whose work has investigated the properties of real-world material systems such as ferroelectrics, liquid helium, and biological |
https://en.wikipedia.org/wiki/Skip%20counting | Skip counting is a mathematics technique taught as a kind of multiplication in reform mathematics textbooks such as TERC. In older textbooks, this technique is called counting by twos (threes, fours, etc.).
In skip counting by twos, a person can count to 10 by naming only every second number: 2, 4, 6, 8, 10. Combining the base (two, in this example) with the number of groups (five, in this example) produces the standard multiplication equation: two multiplied by five equals ten.
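The relationship between skip counting and multiplication can be sketched in a few lines of Python (the variable names here are illustrative, not from the text):

```python
# Skip counting by twos: name every second number, for five groups.
base = 2
groups = 5

counts = [base * i for i in range(1, groups + 1)]
print(counts)          # [2, 4, 6, 8, 10]

# The last number named equals the product base x groups.
print(base * groups)   # 10
```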
https://en.wikipedia.org/wiki/Halting%20problem | In computability theory, the halting problem is the problem of determining, from a description of an arbitrary computer program and an input, whether the program will finish running, or continue to run forever. The halting problem is undecidable, meaning that no general algorithm exists that solves the halting problem for all possible program–input pairs.
A key part of the formal statement of the problem is a mathematical definition of a computer and program, usually via a Turing machine. The proof then shows, for any program f that might determine whether programs halt, that a "pathological" program g exists: g, called with some input, can pass its own source and its input to f and then specifically do the opposite of what f predicts g will do. No f can exist that handles this case, thus showing undecidability. This proof is significant to practical computing efforts, defining a class of applications which no programming invention can possibly perform perfectly.
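This diagonal construction can be sketched in a few lines of Python. The function names (`halts`, `make_pathological`) are illustrative assumptions, not part of the formal statement; the point is only that any candidate decider can be defeated by a program built from it:

```python
# Given ANY candidate halting-decider `halts(prog, inp) -> bool`,
# build a "pathological" program g that does the opposite of
# whatever the decider predicts about g itself.
def make_pathological(halts):
    def g(inp):
        if halts(g, inp):
            while True:        # decider said "halts" -> loop forever
                pass
        return "halted"        # decider said "loops" -> halt immediately
    return g

# Example: a (necessarily wrong) decider that claims nothing ever halts.
never_halts = lambda prog, inp: False
g = make_pathological(never_halts)
print(g(g))  # "halted" -- g halts, contradicting the decider's prediction
```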
Background
The halting problem is a decision problem about properties of computer programs on a fixed Turing-complete model of computation, i.e., all programs that can be written in some given programming language that is general enough to be equivalent to a Turing machine. The problem is to determine, given a program and an input to the program, whether the program will eventually halt when run with that input. In this abstract framework, there are no resource limitations on the amount of memory or time required for the program's execution; it can take arbitrarily long and use an arbitrary amount of storage space before halting. The question is simply whether the given program will ever halt on a particular input.
For example, in pseudocode, the program
while (true) continue
does not halt; rather, it goes on forever in an infinite loop. On the other hand, the program
print "Hello, world!"
does halt.
While deciding whether these programs halt is simple, more complex programs prove problematic. One approach t |
https://en.wikipedia.org/wiki/Multitape%20Turing%20machine | A multi-tape Turing machine is a variant of the Turing machine that utilizes several tapes. Each tape has its own head for reading and writing. Initially, the input appears on tape 1, and the others start out blank.
This model intuitively seems much more powerful than the single-tape model, but any multi-tape machine—no matter how many tapes—can be simulated by a single-tape machine using only quadratically more computation time. Thus, multi-tape machines cannot calculate any more functions than single-tape machines, and none of the robust complexity classes (such as polynomial time) are affected by a change between single-tape and multi-tape machines.
Formal definition
A k-tape Turing machine can be formally defined as a 7-tuple M = ⟨Q, Γ, b, Σ, δ, q0, F⟩, following the notation of a Turing machine:
Γ is a finite, non-empty set of tape alphabet symbols;
b ∈ Γ is the blank symbol (the only symbol allowed to occur on the tape infinitely often at any step during the computation);
Σ ⊆ Γ \ {b} is the set of input symbols, that is, the set of symbols allowed to appear in the initial tape contents;
Q is a finite, non-empty set of states;
q0 ∈ Q is the initial state;
F ⊆ Q is the set of final states or accepting states. The initial tape contents is said to be accepted by M if it eventually halts in a state from F.
δ: (Q \ F) × Γ^k → Q × (Γ × {L, R})^k is a partial function called the transition function, where k is the number of tapes, L is left shift, R is right shift.
A k-tape Turing machine M computes as follows. Initially, M receives its input on the leftmost positions of the first tape; the rest of the first tape, as well as the other tapes, is blank (i.e., filled with blank symbols). All the heads start on the leftmost position of their tapes. Once M has started, the computation proceeds according to the rules described by the transition function. The computation continues until the machine enters an accepting state, at which point it halts.
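A machine of this kind can be sketched as a small Python simulator. The dictionary encoding of the transition function and all names below are illustrative assumptions, not part of the formal definition:

```python
# Minimal k-tape Turing machine simulator (names and encoding are my own).
# delta maps (state, (sym_1, ..., sym_k)) -> (new_state, ((write_i, move_i), ...)).
BLANK = "_"

def run_multitape(delta, q0, accept, input_str, k, max_steps=10_000):
    tapes = [list(input_str) or [BLANK]] + [[BLANK] for _ in range(k - 1)]
    heads = [0] * k
    state = q0
    for _ in range(max_steps):
        if state in accept:                    # halted in an accepting state
            return state, ["".join(t).strip(BLANK) for t in tapes]
        read = tuple(tapes[i][heads[i]] for i in range(k))
        if (state, read) not in delta:         # no rule: halt without accepting
            return None, None
        state, actions = delta[(state, read)]
        for i, (sym, move) in enumerate(actions):
            tapes[i][heads[i]] = sym
            heads[i] += 1 if move == "R" else -1
            if heads[i] < 0:                   # grow tape to the left
                tapes[i].insert(0, BLANK)
                heads[i] = 0
            elif heads[i] == len(tapes[i]):    # grow tape to the right
                tapes[i].append(BLANK)
    raise RuntimeError("step limit reached")

# A 2-tape machine that copies tape 1 onto tape 2 in one left-to-right pass.
delta = {("copy", (c, BLANK)): ("copy", ((c, "R"), (c, "R"))) for c in "01"}
delta[("copy", (BLANK, BLANK))] = ("done", ((BLANK, "R"), (BLANK, "R")))

state, tapes = run_multitape(delta, "copy", {"done"}, "0110", k=2)
print(state, tapes)  # done ['0110', '0110']
```

Note that the copy task takes a single pass with two tapes; a single-tape machine would need repeated back-and-forth head movement, which is the intuition behind the quadratic simulation overhead mentioned above.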
Two-stack Turing machine
Two-stack Turing machines have a read-only input and two storage tapes. If a head moves left on either tape a blank is printed on |
https://en.wikipedia.org/wiki/1-Ethyl-3-%283-dimethylaminopropyl%29carbodiimide | 1-Ethyl-3-(3-dimethylaminopropyl)carbodiimide (EDC, EDAC or EDCI) is a water-soluble carbodiimide usually handled as the hydrochloride.
It is typically employed in the 4.0-6.0 pH range. It is generally used as a carboxyl activating agent for the coupling of primary amines to yield amide bonds. While other carbodiimides like dicyclohexylcarbodiimide (DCC) or diisopropylcarbodiimide (DIC) are also employed for this purpose, EDC has the advantage that the urea byproduct formed (often challenging to remove in the case of DCC or DIC) can be washed away from the amide product using dilute acid. Additionally, EDC can also be used to activate phosphate groups in order to form phosphomonoesters and phosphodiesters. Common uses for this carbodiimide include peptide synthesis, protein crosslinking to nucleic acids, but also in the preparation of immunoconjugates. EDC is often used in combination with N-hydroxysuccinimide (NHS) for the immobilisation of large biomolecules. Recent work has also used EDC to assess the structure state of uracil nucleobases in RNA.
Preparation
EDC is commercially available. It may be prepared by coupling ethyl isocyanate to N,N-dimethylpropane-1,3-diamine to give a urea, followed by dehydration:
Mechanism
EDC couples primary amines, and other nucleophiles, to carboxylic acids by creating an activated ester leaving group. First, the carboxylic acid attacks the central carbon of EDC's carbodiimide group, and a subsequent proton transfer gives an O-acylisourea active ester. The primary amine then attacks the carbonyl carbon of the acid, forming a tetrahedral intermediate that collapses, discharging the urea byproduct. The desired amide is obtained.
https://en.wikipedia.org/wiki/Peanut%20oil | Peanut oil, also known as groundnut oil or arachis oil, is a vegetable oil derived from peanuts. The oil usually has a mild or neutral flavor but, if made with roasted peanuts, has a stronger peanut flavor and aroma. It is often used in American, Chinese, Indian, African and Southeast Asian cuisine, both for general cooking, and in the case of roasted oil, for added flavor. Peanut oil has a high smoke point relative to many other cooking oils, so it is commonly used for frying foods.
History
Due to war shortages of other oils, use of readily available peanut oil increased in the United States during World War II.
Production
Uses
Unrefined peanut oil is used as a flavorant for dishes akin to sesame oil. Refined peanut oil is commonly used for frying large batches of foods like French fries and has a smoke point of 450 °F/232 °C.
Biodiesel
At the 1900 Paris Exhibition, the Otto Company, at the request of the French Government, demonstrated that peanut oil could be used as a source of fuel for the diesel engine; this was one of the earliest demonstrations of biodiesel technology.
Other uses
Peanut oil, as with other vegetable oils, can be used to make soap by the process of saponification. Peanut oil is safe for use as a massage oil.
Composition
Its major component fatty acids are oleic acid (46.8% as olein), linoleic acid (33.4% as linolein), and palmitic acid (10.0% as palmitin). The oil also contains some stearic acid, arachidic acid, behenic acid, lignoceric acid and other fatty acids.
Nutritional content
Peanut oil is 17% saturated fat, 46% monounsaturated fat, and 32% polyunsaturated fat (table).
Health issues
Toxins
If quality control is neglected, peanuts that contain the mold that produces highly toxic aflatoxin can end up contaminating the oil derived from them.
Allergens
Those allergic to peanuts can consume highly refined peanut oil, but should avoid first-press, organic oil. Most highly refined peanut oils remove the peanut allergens and have b |
https://en.wikipedia.org/wiki/CCIR%20System%20E | CCIR System E is an analog broadcast television system used in France and Monaco, associated with monochrome 819-line high resolution broadcasts. Transmissions started in 1949 and ended in 1985.
System E specifications
Some of the important specs are listed below:
Frame rate: 25 Hz
Interlace: 2/1
Field rate: 50 Hz
Lines/frame: 819
Line rate: 20475 Hz
Visual bandwidth: 10 MHz
Vision modulation: Positive
Preemphasis: none
Sound modulation: AM
Sound offset: +11.15 MHz on odd numbered channels, -11.15 MHz on even numbered channels
Channel bandwidth: 14 MHz
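The timing figures above are internally consistent; a quick arithmetic check (plain Python, values taken from the list above):

```python
# Sanity-check the System E timing figures listed above.
frame_rate_hz = 25
lines_per_frame = 819
fields_per_frame = 2        # 2:1 interlace

line_rate_hz = lines_per_frame * frame_rate_hz
field_rate_hz = fields_per_frame * frame_rate_hz

print(line_rate_hz)   # 20475, matching the listed line rate
print(field_rate_hz)  # 50, matching the listed field rate
```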
The System E implementation provided very good (near HDTV) picture quality, but with an uneconomical use of bandwidth. With the usual additions of a sound carrier and vestigial sideband, the result was a combined signal that demanded approximately two to three times the bandwidth of more moderately specified standards, even when color was added to them (as the color subcarrier resides within the luma signal space).
For this reason, France gradually abandoned it in favor of the 625-lines standard, implementing System L with SECAM color.
The final 819-line transmissions in Metropolitan France took place in Paris, from the Eiffel Tower, on 19 July 1983. TMC in Monaco were the last broadcasters to transmit 819-line television, closing down their System E transmitter in 1985.
Television channels were arranged as follows:
System F
CCIR System F was an adaptation of System E used in Belgium (1953, RTB) and Luxembourg (1955, Télé Luxembourg). With only half the vision bandwidth and approximately half the sound carrier offset, it allowed French 819-line programming to squeeze into the 7 MHz VHF broadcast channels used in those neighboring countries, albeit with a substantial loss of horizontal resolution. Use of System F was discontinued in Belgium in February 1968, and in Luxembourg in September 1971.
Some of the important specs are listed below:
Frame rate: 25 Hz
Interlace: 2/1
Field rate: 50 Hz
Lines/frame: |
https://en.wikipedia.org/wiki/Semiconductor%20intellectual%20property%20core | In electronic design, a semiconductor intellectual property core (SIP core), IP core, or IP block is a reusable unit of logic, cell, or integrated circuit layout design that is the intellectual property of one party. IP cores can be licensed to another party or owned and used by a single party. The term comes from the licensing of the patent or source code copyright that exists in the design. Designers of system on chip (SoC), application-specific integrated circuits (ASIC) and systems of field-programmable gate array (FPGA) logic can use IP cores as building blocks.
History
The licensing and use of IP cores in chip design came into common practice in the 1990s. There were many licensors and also many foundries competing on the market. In 2013, the most widely licensed IP cores were from Arm Holdings (43.2% market share), Synopsys Inc. (13.9% market share), Imagination Technologies (9% market share) and Cadence Design Systems (5.1% market share).
Types of IP cores
The use of an IP core in chip design is comparable to the use of a library for computer programming or a discrete integrated circuit component for printed circuit board design. Each is a reusable component of design logic with a defined interface and behavior that has been verified by its creator and is integrated into a larger design.
Soft cores
IP cores are commonly offered as synthesizable RTL in a hardware description language such as Verilog or VHDL. These are analogous to high-level languages such as C in the field of computer programming. IP cores delivered to chip designers as RTL permit chip designers to modify designs at the functional level, though many IP vendors offer no warranty or support for modified designs.
IP cores are also sometimes offered as generic gate-level netlists. The netlist is a boolean-algebra representation of the IP's logical function implemented as generic gates or process-specific standard cells. An IP core implemented as generic gates can be compiled for any proce |