https://en.wikipedia.org/wiki/Brockia | Brockia is a genus of thermophilic bacteria in the family Thermoanaerobacteraceae, with one known species, Brockia lithotrophica, an obligately anaerobic, spore-forming, rod-shaped microorganism. |
https://en.wikipedia.org/wiki/Glaze3D | Glaze3D was a family of graphics cards announced by BitBoys Oy on August 2, 1999, that would have produced substantially better performance than other consumer products available at the time. The family, which would have come in the Glaze3D 1200, Glaze3D 2400 and Glaze3D 4800 models, was supposed to offer full support for DirectX 7, OpenGL 1.2, AGP 4×, 4× anisotropic filtering, full-screen anti-aliasing and a host of other technologies not commonly seen at the time. The 1.5 million gate GPU would have been fabricated by Infineon on a 0.2 μm eDRAM process, later to be reduced to 0.17 μm with a minimum of 9 MB of embedded DRAM and 128 to 512 MB of external SDRAM. The maximum supported video resolution was 2048×1536 pixels.
Development history
The Glaze3D family of cards was developed in several generations, beginning with the original Glaze3D "400", which used multi-channel RDRAM instead of internal eDRAM. This design was offered only as licensable IP, but found no takers. Bitboys revised the design and decided to have it manufactured themselves, in cooperation with Infineon Technologies, the chip-fabrication arm of Siemens. They came up with a new Glaze3D pitched for release in Q1 2000. The card promised extremely high performance compared to contemporary consumer GPUs. As bug-hunting, validation and manufacturing problems delayed the launch, new features became necessary, and a DX7 variant with built-in hardware Transform & Lighting was announced, but never appeared.
The GPU was later redesigned under a new codename, Axe, to take advantage of DirectX 8 and keep pace with emerging competitors. The new version sported such features as an additional 3 MB of eDRAM, proprietary Matrix Antialiasing, a vastly improved fillrate, a programmable vertex shader, and a widened internal memory bus. The new card was to have been released as Avalanche3D by the end of 2001.
The third development, codenamed Hammer, started development as Axe lost viability toward the end of 2001. Thi |
https://en.wikipedia.org/wiki/Reconvergent%20fan-out | Reconvergent fan-out is a technique to make VLSI logic simulation less pessimistic.
Static timing analysis tries to figure out the best and worst case time estimate for each signal as they pass through an electronic device. Whenever a signal passes through a node, a bit of uncertainty must be added to the time required for the signal to transit that device. These uncertain delays add up so, after passing through many devices, the worst-case timing for a signal could be unreasonably pessimistic.
It is common for two signals to share an identical path, branch and follow different paths for a while, then converge back to the same point to produce a result. When this happens, a fair amount of uncertainty can be removed from the total delay, because the two signals shared a common path for part of the journey. Even though each signal has an uncertain delay, their delays were identical along the shared segment, so the total uncertainty can be reduced. This tightens the worst-case estimate of the signal delay, and usually allows a small but important speedup of the overall device.
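This tightening step is known in static timing analysis as common-path pessimism removal (CPPR). A toy numerical sketch of the idea (gate names and delay values are invented for illustration, not taken from any real STA tool):

```python
# Toy illustration of common-path pessimism removal (CPPR).
# Each gate has an uncertain delay given as a (min, max) pair, in ns.

def path_bounds(path, delays):
    """Earliest/latest arrival time along a path, summing per-gate bounds."""
    lo = sum(delays[g][0] for g in path)
    hi = sum(delays[g][1] for g in path)
    return lo, hi

def worst_skew(path_a, path_b, delays):
    """Worst-case skew between two reconvergent paths: the naive bound
    (both paths varied independently) and the CPPR-tightened bound."""
    # shared prefix: gates both signals traverse before the branch point
    shared = []
    for a, b in zip(path_a, path_b):
        if a != b:
            break
        shared.append(a)
    _, hi_a = path_bounds(path_a, delays)
    lo_b, _ = path_bounds(path_b, delays)
    naive = hi_a - lo_b  # treats every gate's uncertainty as independent
    # On the shared prefix the two signals see the *same* physical delay,
    # so its (max - min) spread cannot contribute to their skew.
    common_spread = sum(delays[g][1] - delays[g][0] for g in shared)
    return naive, naive - common_spread

delays = {"buf1": (1.0, 1.4), "buf2": (0.9, 1.2), "and1": (0.5, 0.7), "or1": (0.6, 0.8)}
naive, cppr = worst_skew(["buf1", "buf2", "and1"], ["buf1", "buf2", "or1"], delays)
```

Here the naive bound is 0.8 ns, but after subtracting the 0.7 ns of spread accumulated on the shared prefix, the true worst-case skew between the two branches is only 0.1 ns.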
This term is starting to be used in a more generic sense as well. Any time a signal splits into two and then reconverges, certain optimizations can be made. The term reconvergent fan-out has been used to describe similar optimizations in graph theory and static code analysis.
See also
Fan-out
Fan-in
External links
An example of reconvergent fan-out
Logic gates |
https://en.wikipedia.org/wiki/GATA%20zinc%20finger | In molecular biology, GATA zinc fingers are zinc-containing domains found in a number of transcription factors (including erythroid-specific transcription factor and nitrogen regulatory proteins). Some members of this class of zinc fingers specifically bind the DNA sequence (A/T)GATA(A/G) in the regulatory regions of genes, giving rise to the name of the domain. In these domains, a single zinc ion is coordinated by 4 cysteine residues. NMR studies have shown the core of the Znf to comprise 2 irregular anti-parallel beta-sheets and an alpha-helix, followed by a long loop to the C-terminal end of the finger. The N-terminal part, which includes the helix, is similar in structure, but not sequence, to the N-terminal zinc module of the glucocorticoid receptor DNA-binding domain. The helix and the loop connecting the 2 beta-sheets interact with the major groove of the DNA, while the C-terminal tail wraps around into the minor groove. Interactions between the Znf and DNA are mainly hydrophobic, explaining the preponderance of thymines in the binding site; a large number of interactions with the phosphate backbone have also been observed. Two GATA zinc fingers are found in GATA-family transcription factors. However, there are several proteins that only contain a single copy of the domain.
It is also worth noting that many GATA-type Znfs (such as those found in the proteins GATAD2B and MTA1) have not been experimentally demonstrated to be DNA-binding domains. Furthermore, several GATA-type Znfs have been demonstrated to act as protein-recognition domains. For example, the N-terminal Znf of GATA1 binds specifically to a zinc finger from the transcriptional coregulator FOG1 (ZFPM1). |
https://en.wikipedia.org/wiki/Wess%E2%80%93Zumino%E2%80%93Witten%20model | In theoretical physics and mathematics, a Wess–Zumino–Witten (WZW) model, also called a Wess–Zumino–Novikov–Witten model, is a type of two-dimensional conformal field theory named after Julius Wess, Bruno Zumino, Sergei Novikov and Edward Witten. A WZW model is associated to a Lie group (or supergroup), and its symmetry algebra is the affine Lie algebra built from the corresponding Lie algebra (or Lie superalgebra). By extension, the name WZW model is sometimes used for any conformal field theory whose symmetry algebra is an affine Lie algebra.
Action
Definition
For a Riemann surface $\Sigma$, a Lie group $G$, and a (generally complex) number $k$, let us define the $G$-WZW model on $\Sigma$ at the level $k$. The model is a nonlinear sigma model whose action is a functional of a field $\gamma : \Sigma \to G$:

$$S_k(\gamma) = -\frac{k}{8\pi} \int_\Sigma d^2x\, \mathcal{K}\!\left(\gamma^{-1}\partial^\mu\gamma,\, \gamma^{-1}\partial_\mu\gamma\right) + 2\pi k\, S^{\mathrm{WZ}}(\gamma)$$

Here, $\Sigma$ is equipped with a flat Euclidean metric, $\partial_\mu$ is the partial derivative, and $\mathcal{K}$ is the Killing form on the Lie algebra of $G$. The Wess–Zumino term of the action is

$$S^{\mathrm{WZ}}(\gamma) = -\frac{1}{48\pi^2} \int_{B^3} d^3y\, \epsilon^{ijk}\, \mathcal{K}\!\left(\gamma^{-1}\partial_i\gamma,\, \left[\gamma^{-1}\partial_j\gamma,\, \gamma^{-1}\partial_k\gamma\right]\right)$$

Here $\epsilon^{ijk}$ is the completely anti-symmetric tensor, and $[\cdot,\cdot]$ is the Lie bracket. The Wess–Zumino term is an integral over a three-dimensional manifold $B^3$ whose boundary is $\partial B^3 = \Sigma$.
Topological properties of the Wess–Zumino term
For the Wess–Zumino term to make sense, we need the field $\gamma$ to have an extension to $B^3$. This requires the homotopy group $\pi_2(G)$ to be trivial, which is the case in particular for any compact Lie group $G$.
The extension of a given $\gamma : \Sigma \to G$ to $B^3$ is in general not unique.
For the WZW model to be well-defined, $e^{2\pi i k\, S^{\mathrm{WZ}}(\gamma)}$
should not depend on the choice of the extension.
The Wess–Zumino term is invariant under small deformations of the extension, and only depends on its homotopy class.
Possible homotopy classes are controlled by the homotopy group $\pi_3(G)$.
For any compact, connected simple Lie group $G$, we have $\pi_3(G) = \mathbb{Z}$, and different extensions of $\gamma$ lead to values of $S^{\mathrm{WZ}}(\gamma)$ that differ by integers. Therefore, they lead to the same value of $e^{2\pi i k\, S^{\mathrm{WZ}}(\gamma)}$ provided the level obeys $k \in \mathbb{Z}$.
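The integrality argument can be spelled out in a few lines (a standard sketch; sign and normalization conventions vary between references, so the exact phase below is an assumption made consistent with the formulas in this article):

```latex
% Two extensions \gamma_1 : B_1 \to G and \gamma_2 : B_2 \to G of the same field
% \gamma glue along \partial B_1 = \partial B_2 = \Sigma into a map from a closed
% three-manifold to G. Such maps are classified by \pi_3(G) \cong \mathbb{Z}
% (G compact, connected, simple), so the two Wess--Zumino terms differ by an
% integer winding number n:
S^{\mathrm{WZ}}_{B_1}(\gamma) - S^{\mathrm{WZ}}_{B_2}(\gamma) = n, \qquad n \in \mathbb{Z}.
% The term enters the path integral only through a phase:
e^{2\pi i k\, S^{\mathrm{WZ}}_{B_1}(\gamma)}
  = e^{2\pi i k\, S^{\mathrm{WZ}}_{B_2}(\gamma)}\; e^{2\pi i k n},
% which is independent of the choice of extension exactly when
e^{2\pi i k n} = 1 \ \text{ for all } n \in \mathbb{Z}
  \iff k \in \mathbb{Z}.
```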
Integer values of the level also play an important role in the representation theory of the model's symmetry algebra, which is an affine |
https://en.wikipedia.org/wiki/Dementia%20and%20Alzheimer%27s%20disease%20in%20Australia | Dementia and Alzheimer's disease in Australia is a major health issue. Alzheimer's disease is the most common type of dementia in Australia. Dementia is an ever-increasing challenge as the population ages and life expectancy increases. As a consequence, the number of people with dementia is expected to grow, posing countless challenges to carers and the health and aged care systems. In 2018, an estimated 376,000 people had dementia; this number is expected to increase to 550,000 by 2030 and triple to 900,000 by 2050. The dementia death rate is increasing, and dementia shifted from the fourth to the second leading cause of death between 2006 and 2015. It is expected to become the leading cause of death within the next few years. In 2011, it was the fourth leading cause of disease burden and the third leading cause of disability burden. This is expected to remain the same until at least 2020.
Dementia primarily affects older people; approximately 95% of all dementia deaths occur after the age of 74. People aged 75 and over accounted for the majority (72%) of the burden due to dementia. It was the leading cause of death for women and the third leading cause of death for men. There is a sex bias: women have higher mortality rates, morbidity and burden of dementia than men. In 2018, 61% of people with dementia were women. The rate of dementia differs between population subgroups. Aboriginal and Torres Strait Islander people experience risk factors and prevalence of dementia at higher rates and earlier ages than non-Indigenous Australians.
Dementia is the ninth National Health Priority Area. For this reason, health and service policy and expenditure is especially focused on this significant burden of disease. Since dementia is typically not reversible, its extended illness and disability poses a significant financial burden to Australia. In 2016, total costs continued to increase to an estimated A$14.25 billion. Future costs are projected to reach $33.6 billion in 2050 (estimated fro |
https://en.wikipedia.org/wiki/Service%20mark%20symbol | The service mark symbol ℠ (the letters SM in small capitals and superscript style) is a symbol used in the United States and some other jurisdictions to provide notice that the preceding mark is a service mark. This symbol may be used for service marks not yet registered with the relevant national authority. Upon successful registration, registered services are marked with the same symbol as is used for registered trademarks, the registered trademark symbol ®. The proper manner to display the symbol is immediately following the service name, in superscript style.
Computer systems
The service mark symbol is mapped in Unicode as U+2120 ℠ SERVICE MARK, in the Letterlike Symbols block. The HTML entity is &#8480;.
Unlike the similar trademark symbol, there is no simple way to type the service mark symbol on Microsoft Windows or Apple macOS systems. However, the symbol may be selected from the Windows Character Map or the macOS Character Palette. On Linux and similar systems with a Compose key, it can be inserted using a Compose-key sequence.
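The code point can also be produced and confirmed programmatically; a small Python illustration (the language choice is ours, not the article's):

```python
# The service mark symbol lives at U+2120 in the Letterlike Symbols block.
import unicodedata

sm = "\u2120"                  # same as chr(0x2120)
print(sm)                      # ℠
print(unicodedata.name(sm))    # SERVICE MARK
print(f"&#x{ord(sm):X};")      # numeric HTML character reference: &#x2120;
```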
Related symbols
Registered trademark symbol is also used for registered service marks
Trademark symbol
See also
World Intellectual Property Organization |
https://en.wikipedia.org/wiki/WHUT-TV | WHUT-TV (channel 32) is the secondary PBS member television station in Washington, D.C. The station is owned by Howard University, a historically black college, and is sister to commercial urban contemporary radio station WHUR-FM (96.3). WHUT-TV's studios are located on the Howard University campus, and its transmitter is located in the Tenleytown neighborhood in the northwest quadrant of Washington.
WHUT airs a variety of standard PBS programming, as well as programs produced by Howard University, and international programs focusing on regions such as the Caribbean and Africa.
History
On June 25, 1974, Howard University was granted a construction permit to build a new television station on channel 32 in Washington, D.C. More than six years passed before the station signed on, on November 17, 1980. WHMM-TV (whose call letters stood for Howard University Mass Media) turned Howard, owner of the only radio station owned by an HBCU at the time, into the owner of the first Black-owned public television station. At the outset, the station suffered from some problems with its antenna and the need to train staff on the job. It also faced issues carving out an identity for itself and its mission, with standard PBS fare airing during much of the day; in 1983, its budget was one-third that of WETA-TV. However, within its first decade, it produced 1,000 Howard graduates trained in television production. The long-running Evening Exchange public affairs program, which debuted with the station, became a station staple; it was hosted by Kojo Nnamdi between 1985 and 2011.
Budget cuts at Howard in the late 1980s and 1990s prompted staff cuts in operations. Even as the station tried to significantly step up fundraising, its treatment as another academic department, requiring a different style of management, often hurt WHMM-TV. Staff levels were cut from 90 in 1988 to 65 five years later, when a blue-ribbon panel was convened by PBS to discuss the station's problems; that year, it had |
https://en.wikipedia.org/wiki/Mean-field%20theory | In physics and probability theory, mean-field theory (MFT), or self-consistent field theory, studies the behavior of high-dimensional random (stochastic) models by studying a simpler model that approximates the original by averaging over degrees of freedom (the number of values in the final calculation of a statistic that are free to vary). Such models consider many individual components that interact with each other.
The main idea of MFT is to replace all interactions to any one body with an average or effective interaction, sometimes called a molecular field. This reduces any many-body problem into an effective one-body problem. The ease of solving MFT problems means that some insight into the behavior of the system can be obtained at a lower computational cost.
MFT has since been applied to a wide range of fields outside of physics, including statistical inference, graphical models, neuroscience, artificial intelligence, epidemic models, queueing theory, computer-network performance and game theory, as in the quantal response equilibrium.
Origins
The idea first appeared in physics (statistical mechanics) in the work of Pierre Curie and Pierre Weiss to describe phase transitions. MFT has been used in the Bragg–Williams approximation, models on Bethe lattice, Landau theory, Pierre–Weiss approximation, Flory–Huggins solution theory, and Scheutjens–Fleer theory.
Systems with many (sometimes infinite) degrees of freedom are generally hard to solve exactly or compute in closed, analytic form, except for some simple cases (e.g. certain Gaussian random-field theories, the 1D Ising model). Often combinatorial problems arise that make things like computing the partition function of a system difficult. MFT is an approximation method that often makes the original solvable and open to calculation, and in some cases MFT may give very accurate approximations.
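The self-consistency idea can be made concrete with the textbook mean-field treatment of the Ising model, where the zero-field magnetization satisfies m = tanh(βJzm). A minimal Python sketch (the fixed-point solver and the parameter values are our illustrative choices, not from the article):

```python
import math

def mean_field_magnetization(beta_J_z, m0=0.5, tol=1e-12, max_iter=100_000):
    """Solve the Ising mean-field self-consistency equation m = tanh(beta*J*z*m)
    by fixed-point iteration; beta_J_z bundles inverse temperature, coupling
    strength and coordination number into one illustrative parameter."""
    m = m0
    for _ in range(max_iter):
        m_new = math.tanh(beta_J_z * m)
        if abs(m_new - m) < tol:
            return m_new
        m = m_new
    return m

# Below the mean-field critical point (beta*J*z > 1) a spontaneous nonzero
# magnetization appears; above it (beta*J*z < 1) the only solution is m = 0.
m_ordered = mean_field_magnetization(2.0)     # ordered phase: m ≈ 0.96
m_disordered = mean_field_magnetization(0.5)  # disordered phase: m ≈ 0
```

The iteration replaces the fluctuating field felt by one spin with the average field produced by its neighbors, exactly the "effective one-body" reduction described above.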
In field theory, the Hamiltonian may be expanded in terms of the magnitude of fluctuations around the mean of the fi |
https://en.wikipedia.org/wiki/IBUS%20%28device%29 | iBUS, or Intelligent BUS Utility System, is a bus monitoring and management device. It is intended to help address worsening traffic congestion in the mass-transport systems of metropolitan cities in developing countries.
How it works
iBUS digitally identifies a vehicle using machine-readable tags recorded in a database. Its operation is managed through computers; the system offers a level playing field for everyone involved in the traffic problem: bus drivers, enforcers, operators, and passengers. It is intended to make buses load and unload passengers only in properly designated areas.
The iBUS system can organize buses based on their machine-readable tags. It can group the buses and allocate designated stops or pick-up points, and it imposes a time limit on bus-stop usage. Each bus is programmed to stop and open its doors only at its allocated areas. The system also relies on a real-time location system (RTLS) to track the buses and predict their arrival at the designated loading and unloading areas. |
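The allocation and dwell-time rules described above might look something like the following toy sketch (tag IDs, stop names, and the 60-second limit are entirely hypothetical; the article does not specify the system's actual data model):

```python
# Toy model of iBUS-style stop allocation and dwell-time enforcement.
# All identifiers and limits below are made-up illustrations.

ALLOWED_STOPS = {                    # machine-readable tag -> allocated stops
    "TAG-0001": {"Stop A", "Stop C"},
    "TAG-0002": {"Stop B"},
}
DWELL_LIMIT_S = 60                   # maximum seconds a bus may occupy a stop

def may_open_doors(tag: str, stop: str) -> bool:
    """Doors open only at a stop allocated to this bus's tag."""
    return stop in ALLOWED_STOPS.get(tag, set())

def overstayed(arrival_s: float, now_s: float) -> bool:
    """True once a bus has exceeded the stop-usage time limit."""
    return now_s - arrival_s > DWELL_LIMIT_S
```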
https://en.wikipedia.org/wiki/Arthropod%20head%20problem | The (pan)arthropod head problem is a long-standing zoological dispute concerning the segmental composition of the heads of the various arthropod groups, and how they are evolutionarily related to each other. While the dispute has historically centered on the exact make-up of the insect head, it has been widened to include other living arthropods, such as chelicerates, myriapods, and crustaceans, as well as fossil forms, such as the many arthropods known from exceptionally preserved Cambrian faunas. While the topic has classically been based on insect embryology, in recent years a great deal of developmental molecular data has become available. Dozens of more or less distinct solutions to the problem, dating back to at least 1897, have been published, including several in the 2000s.
The arthropod head problem is popularly known as the endless dispute, the title of a famous paper on the subject by Jacob G. Rempel in 1975, referring to its seemingly intractable nature. Although some progress has been made since that time, the precise nature of the labrum in particular, and of the pre-oral region of arthropods in general, remains highly controversial.
Background
Some key events in the evolution of the arthropod body resulted from changes in certain Hox genes' DNA sequences. The trunks of arthropods comprise repeated segments, which are typically associated with various structures such as a pair of appendages, apodemes for muscle attachment, ganglia and (at least embryologically) coelomic cavities. While many arthropod segments are modified to a greater or lesser extent (for example, only three of the insect thorax and abdominal segments typically bear appendages), arthropodists widely assume that all of the segments were nearly identical in the ancestral state. However, while one can usually readily see the segmental organisation of the trunks of adult arthropods, that of the head is much less obvious. Arthropod heads are typically fused capsules that bear a variety of complex struc |
https://en.wikipedia.org/wiki/Amoeboid%20movement | Amoeboid movement is the most typical mode of locomotion in adherent eukaryotic cells. It is a crawling-like type of movement accomplished by protrusion of cytoplasm of the cell involving the formation of pseudopodia ("false-feet") and posterior uropods. One or more pseudopodia may be produced at a time depending on the organism, but all amoeboid movement is characterized by the movement of organisms with an amorphous form that possess no set motility structures.
Movement occurs when the cytoplasm slides and forms a pseudopodium in front to pull the cell forward. Some examples of organisms that exhibit this type of locomotion are amoebae (such as Amoeba proteus and Naegleria gruberi) and slime molds, as well as some cells in humans such as leukocytes. Sarcomas, or cancers arising from connective-tissue cells, are particularly adept at amoeboid movement, leading to their high rate of metastasis.
This type of movement has been linked to changes in action potential. While several hypotheses have been proposed to explain the mechanism of amoeboid movement, its exact mechanisms are not yet well understood.
Assembly and disassembly of actin filaments in cells may be important to the biochemical and biophysical mechanisms that contribute to different types of cellular movements in both striated muscle structures and nonmuscle cells.
Polarity gives cells distinct leading and lagging edges through the shifting of proteins selectively to the poles, and may play an important role in eukaryotic chemotaxis.
Types of amoeboid motion
Crawling
Crawling is one form of amoeboid movement which starts when an extension of the moving cell (pseudopod) binds tightly to the surface. The main bulk of the cell pulls itself toward the bound patch. By repeating this process the cell can move until the first bound patch is at the very end of the cell, at which point it detaches. The speed at which cells crawl can vary greatly, but generally crawling is faster than swimming, but s |
https://en.wikipedia.org/wiki/Gut-associated%20lymphoid%20tissue | Gut-associated lymphoid tissue (GALT) is a component of the mucosa-associated lymphoid tissue (MALT) which works in the immune system to protect the body from invasion in the gut.
Owing to its physiological function in food absorption, the mucosal surface is thin and acts as a permeable barrier to the interior of the body. Equally, its fragility and permeability create vulnerability to infection; in fact, the vast majority of infectious agents invading the human body use this route. The functional importance of GALT in the body's defense relies on its large population of plasma cells, which are antibody producers, whose number exceeds the number of plasma cells in the spleen, lymph nodes and bone marrow combined. GALT makes up about 70% of the immune system by weight; compromised GALT may significantly affect the strength of the immune system as a whole.
Structure
The gut-associated lymphoid tissue lies throughout the intestine, covering an area of approximately 260–300 m2. In order to increase the surface area for absorption, the intestinal mucosa is made up of finger-like projections (villi), covered by a monolayer of epithelial cells, which separates the GALT from the intestinal lumen and its contents. These epithelial cells are covered by a layer of glycocalyx on their luminal surface, which protects the cells from the acidic pH.
New epithelial cells derived from stem cells are constantly produced on the bottom of the intestinal glands, regenerating the epithelium (epithelial cell turnover time is less than one week). Although in these crypts conventional enterocytes are the dominant type of cells, Paneth cells can also be found. These are located at the bottom of the crypts and release a number of antibacterial substances, among them lysozyme, and are thought to be involved in the control of infections.
Underneath them, there is an underlying layer of loose connective tissue called lamina propria. There is also lymphatic circulation through the tissue connecte |
https://en.wikipedia.org/wiki/Magda%20Peligrad | Magda Peligrad is a Romanian mathematician and mathematical statistician known for her research in probability theory, and particularly on central limit theorems and stochastic processes.
She works at the University of Cincinnati, where she is Distinguished Charles Phelps Taft Professor of Mathematical Sciences.
Education and career
Peligrad obtained her Ph.D. in 1980 from the Center of Statistics of the Romanian Academy.
By 1983 she was working at the Sapienza University of Rome and by 1984 she had arrived at Cincinnati, where
since 1988 she has supervised the dissertations of seven doctoral students.
With Florence Merlevède and Sergey Utev, she is coauthor of the book Functional Gaussian Approximation for Dependent Structures (Oxford University Press, 2019).
Recognition
In 1995, Peligrad was elected as a Fellow of the Institute of Mathematical Statistics,
which she had served in 1990 as the Institute's representative to the Joint Committee on Women in Mathematical Sciences, an umbrella organization
for women in eight societies of mathematics and statistics.
A conference on "limit theorems for dependent data and applications" was organized in her honor in Paris in 2010, celebrating her 60th birthday, by the researchers at four Parisian universities.
She was named Taft professor in 2004. |
https://en.wikipedia.org/wiki/Eurytherm | A eurytherm is an organism, often an endotherm, that can function at a wide range of ambient temperatures. To be considered a eurytherm, all stages of an organism's life cycle must be considered, including juvenile and larval stages. These wide ranges of tolerable temperatures are directly derived from the temperature tolerance of a given eurythermal organism's proteins. Extreme examples of eurytherms include tardigrades (Tardigrada), the desert pupfish (Cyprinodon macularius), and green crabs (Carcinus maenas); moreover, nearly all mammals, including humans, are considered eurytherms. Eurythermy can be an evolutionary advantage: adaptations to cold temperatures, called cold-eurythermy, are seen as essential for the survival of species during ice ages. In addition, the ability to survive in a wide range of temperatures increases a species' ability to inhabit other areas, an advantage for natural selection.
Eurythermy is an aspect of thermoregulation in organisms. It is in contrast with the idea of stenothermic organisms, which can only operate within a relatively narrow range of ambient temperatures. Through a wide variety of thermal coping mechanisms, eurythermic organisms can either provide or expel heat for themselves in order to survive in cold or hot, respectively, or otherwise prepare themselves for extreme temperatures. Certain species of eurytherm have been shown to have unique protein synthesis processes that differentiate them from relatively stenothermic, but otherwise similar, species.
Examples
Tardigrades, known for their ability to survive in nearly any environment, are extreme examples of eurytherms. Certain species of tardigrade, including Mi. tardigradum, are able to withstand and survive temperatures ranging from –273 °C (near absolute zero) to 150 °C in their anhydrobiotic state.
The desert pupfish, a rare bony fish that occupies places like the Colorado River Delta in Baja California, small ponds in Sonora, Mexico, and drainage sites near the Salton Sea |
https://en.wikipedia.org/wiki/Staphylothermus | In taxonomy, Staphylothermus is a genus of the Desulfurococcaceae.[1]
Taxonomy
Desulfurococcaceae are anaerobic, sulfur-respiring, extreme thermophiles. Staphylothermus belongs to the same family as Desulfurococcus. Two species of Staphylothermus have been identified: S. marinus and S. hellenicus. Both are heterotrophic, anaerobic members of the domain Archaea.
Cell structure
Staphylothermus marinus has a unique morphology. When nutrient levels are low, it forms grape-like clusters of up to 100 cells, with individual cells 0.5–1.0 μm in diameter. At high nutrient levels, large clustered cells up to 15 μm in diameter are found. The S-layer is made of a glycoprotein called tetrabrachion, which is stable at high temperatures and resistant to chemicals that typically denature proteins.[11] Tetrabrachion is built from 92,000 kDa polypeptides forming projections that react with other tetrabrachion subunits, making a lattice framework that covers the cell.[7] S. marinus has a circular chromosome with 1,610 protein-coding genes and 49 RNA genes. Staphylothermus hellenicus does not have tetrabrachion in its cell wall. It is an aggregate-forming, coccoid, obligately anaerobic, heterotrophic archaeon whose cells grow 0.8–1.3 μm in diameter. It forms large aggregates of up to 50 cells and has a circular chromosome of 1,580,347 nucleotides, with 1,599 protein-coding genes and 50 RNA genes.
Metabolism
Staphylothermus marinus and Staphylothermus hellenicus have special enzymes called extremozymes, known to work well in extremely hot or cold environments where most enzymatic reactions could not occur.[9] Both are thermophiles whose heat-stable extremozymes work at particularly high temperatures. Both organisms are sulfur-dependent, extreme marine thermophiles. These archaeons require sulfur for growth but can produce hydrogen if sulfur becomes limited. Staphyloth |
https://en.wikipedia.org/wiki/Robert%20H.%20MacArthur%20Award | The Robert H. MacArthur Award is a biennial prize given by the Ecological Society of America to ecologists for their pivotal contributions to their field. The acceptance speeches of many recipients have been given at the annual meeting of the society and subsequently published in the ESA's journal, Ecology.
The following is a self-descriptive quote taken from the Robert H. MacArthur Award page on the ESA's website: "The Robert H. MacArthur Award is given biennially to an established ecologist in mid-career for meritorious contributions to ecology, in the expectation of continued outstanding ecological research. Nominees may be from any country and need not be ESA members. The recipient is invited to prepare an address for presentation at the annual meeting of the society and for publication in Ecology."
Recipients
Source: ESA
1983 Robert Treat Paine, United States
1984 Robert McCreadie May, United Kingdom
1986 Thomas W. Schoener, United States
1988 Simon Asher Levin, United States
1990 William W. Murdoch, United States
1992 Peter M. Vitousek, United States
1994 Henry Miles Wilbur, United States
1996 David Tilman, United States
1998 Robert V. O'Neill, United States
2000 Stephen R. Carpenter, United States
2002 James H. Brown, United States
2004 May Berenbaum, United States
2006 Alan Hastings, United States
2008 Monica Turner, United States
2010 Stephen W. Pacala, United States
2012 Anthony Ragnar Ives, United States
2014 Mercedes Pascual, United States
2016 Anurag A. Agrawal, United States
2018 Katharine N. Suding, United States
2020 Jonathan M. Levine, United States
2022 Priyanga Amarasekare, United States
See also
List of ecology awards |
https://en.wikipedia.org/wiki/Jakobid | Jakobids are an order of free-living, heterotrophic, flagellar eukaryotes in the supergroup Excavata. They are small (less than 15 μm), and can be found in aerobic and anaerobic environments. The order Jakobida, believed to be monophyletic, consists of only twenty species at present, and was classified as a group in 1993. There is ongoing research into the mitochondrial genomes of jakobids, which are unusually large and bacteria-like, evidence that jakobids may be important to the evolutionary history of eukaryotes.
Molecular phylogenetic evidence suggests strongly that jakobids are most closely related to Heterolobosea (Percolozoa) and Euglenozoa.
Structure and Biology
Jakobids have two flagella, inserted at the anterior end of the cell, and, like other members of the order Excavata, have a ventral feeding groove with associated cytoskeletal support. The posterior flagellum bears a dorsal vane and is aligned within the ventral groove, where it generates a current that the cell uses for food intake.
The nucleus is generally in the anterior part of the cell and bears a nucleolus. Most known jakobids have one mitochondrion, again located anteriorly, and different genera have flattened, tubular, or absent cristae. Food vacuoles are mostly located on the cell posterior, and in most jakobids the endoplasmic reticulum is distributed throughout the cell.
The sessile, loricate Histionidae and occasionally free-swimming Jakoba libera (Jakobidae) have extrusomes under the dorsal membrane that are theorized to be defensive structures.
Ecology
Jakobids are widely dispersed, having been found in soil, freshwater, and marine habitats, but they are generally not common. However, environmental DNA surveys suggest that Stygiellidae are abundant in anoxic marine habitats. Some are capable of surviving hypersaline and anoxic environments, though the histionids have only been found in freshwater ecosystems, where they attach themselves to algae or zooplankton. Outside of obligate sessile species, |
https://en.wikipedia.org/wiki/Biological%20warfare | Biological warfare, also known as germ warfare, is the use of biological toxins or infectious agents such as bacteria, viruses, insects, and fungi with the intent to kill, harm or incapacitate humans, animals or plants as an act of war. Biological weapons (often termed "bio-weapons", "biological threat agents", or "bio-agents") are living organisms or replicating entities (i.e. viruses, which are not universally considered "alive"). Entomological (insect) warfare is a subtype of biological warfare.
Offensive biological warfare in international armed conflicts is a war crime under the 1925 Geneva Protocol and several international humanitarian law treaties. In particular, the 1972 Biological Weapons Convention (BWC) bans the development, production, acquisition, transfer, stockpiling and use of biological weapons. In contrast, defensive biological research for prophylactic, protective or other peaceful purposes is not prohibited by the BWC.
Biological warfare is distinct from warfare involving other types of weapons of mass destruction (WMD), including nuclear warfare, chemical warfare, and radiological warfare. None of these are considered conventional weapons, which are deployed primarily for their explosive, kinetic, or incendiary potential.
Biological weapons may be employed in various ways to gain a strategic or tactical advantage over the enemy, either by threats or by actual deployments. Like some chemical weapons, biological weapons may also be useful as area denial weapons. These agents may be lethal or non-lethal, and may be targeted against a single individual, a group of people, or even an entire population. They may be developed, acquired, stockpiled or deployed by nation states or by non-national groups. In the latter case, or if a nation-state uses it clandestinely, it may also be considered bioterrorism.
Biological warfare and chemical warfare overlap to an extent, as the use of toxins produced by some living organisms is considered under the prov |
https://en.wikipedia.org/wiki/Apple%20Lisa | The Lisa is a desktop computer developed by Apple, released on January 19, 1983. It is generally considered the first mass-market personal computer operable through a graphical user interface (GUI). In 1983, a machine like the Lisa was still so expensive that it was marketed primarily to individuals and to small and medium-size businesses, as a groundbreaking new alternative to the much larger, much more expensive mainframe and minicomputers from firms such as IBM, which required costly consultancy from the supplier, specially trained personnel, or at least a much steeper learning curve to maintain and operate. Earlier GUI-controlled personal computers, such as the Xerox Alto, although manufactured in the thousands from the early to mid-1970s, were made only for Xerox, the University of California, Berkeley, and select partners in Xerox PARC's developments.
Development of project "LISA" began in 1978. It underwent many changes and shipped with a five-megabyte hard drive. Sales were hampered by its high price, insufficient software, unreliable Apple FileWare floppy disks, and the imminent release of the cheaper and faster Macintosh. Only 10,000 Lisa units were sold in two years.
Considered a commercial failure (albeit one with technical acclaim), Lisa introduced a number of advanced features that reappeared on the Macintosh and eventually IBM PC compatibles. Among them is an operating system with protected memory and a document-oriented workflow. The hardware was more advanced overall than the forthcoming Macintosh 128K; the Lisa included hard disk drive support, capacity for up to 2 megabytes (MB) of random-access memory (RAM), expansion slots, and a larger, higher-resolution display.
The complexity of the Lisa operating system and its associated programs (most notably its office suite), as well as the ad hoc protected memory implementation (due to the lack of a Motorola MMU), placed a high demand on the CPU and, to some extent, the stor |
https://en.wikipedia.org/wiki/Disc%20mill | A disc mill is a type of crusher that can be used to grind, cut, shear, shred, fiberize, pulverize, granulate, crack, rub, curl, fluff, twist, hull, blend, or refine. It works in a similar manner to the ancient Buhrstone mill in that the feedstock is fed between opposing discs or plates. The discs may be grooved, serrated, or spiked.
Applications
Typical applications for a single-disc mill are all three stages of the wet milling of field corn, manufacture of peanut butter, processing nut shells, ammonium nitrate, urea, producing chemical slurries and recycled paper slurries, and grinding chromium metal.
Double-disc mills are typically used for alloy powders, aluminum chips, bark, barley, borax, brake lining scrap, brass chips, sodium hydroxide, chemical salts, coconut shells, copper powder, cork, cottonseed hulls, pharmaceuticals, feathers, hops, leather, oilseed cakes, phosphates, rice, rosin, sawdust, and seeds.
Disc mills are relatively expensive to run and maintain, consume much more power than other shredding machines, and are not used where ball mills or hammermills produce the desired results at a lower cost.
Mechanism
Substances are crushed between the edge of a thick, spinning disc and either a stationary plate or a second disc. Some mills cover the edge of the disc in blades to chop up incoming matter rather than crush it.
Industrial equipment
Grinding mills |
https://en.wikipedia.org/wiki/Garbhanga%20Wildlife%20Sanctuary | Garbhanga Wildlife Sanctuary (formerly Garbhanga and Rani Reserve Forest) is a wildlife sanctuary on the southwestern side of Guwahati City, bordering the state of Meghalaya, India. The forested area is the key urban wildlife site and catchment area near Guwahati City.
Located approximately 15 km (10 miles) away from Guwahati, Garbhanga Wildlife Sanctuary is situated in the southern part of Assam, bordering the foothills of Meghalaya. It is located very close to Deepor Beel, and because of its location in an urban area it is considered a key wildlife area of Guwahati City.
Garbhanga Wildlife Sanctuary has a total land area of 117 km2 and lies between the Garbhanga and Rani ranges.
Etymology
The origin of the wildlife sanctuary's name is unclear. However, some believe that the name comes from the Karbi people, who came from the Markang area of Sonapur and eventually entered the hilly forest for jhum cultivation.
Garbhanga was declared a wildlife sanctuary by the second secretary to the government of Assam, Mr G.T. Lloyd. He was under the supervision of Major Briggs, who surveyed the forest in 1862.
Brief boundary description
The Garbhanga Wildlife Sanctuary is surrounded by Guwahati City and Dipor Bil to the south, the Meghalayan ranges to the east and north, and the Rani Range to the west.
North
The northern boundary starts at BSF headquarters near the VIP Road, then runs along the foothills of Matia, Chakradeo, Dipor Bil, Mahua Para, Pamohi, and Mainakhurung up to Paschdhora River, which is the common boundary between Garbhanga Reserve Forest and Rani Reserve Forest. From there, the boundary runs along Phalbama, Nawagaon, and Nalapara, up to Lokhara Village and to the Siva Temple, which is situated in the northeast corner.
East
On the east side, the boundary runs along the Basistha river, up to the front side of the Government Art School and then follows the stream. Permanent boundary pillars are situated near the Basistha River, then the eastern boundary runs |
https://en.wikipedia.org/wiki/Libraries.io | Libraries.io is an open source web service that lists software development project dependencies and alerts developers to new versions of the software libraries they are using.
Libraries.io is written by Andrew Nesbitt, who has also used the code as the basis for DependencyCI, a service that tests project dependencies. A key feature is that the service checks for software license compliance.
As of 17 April 2022, the web service monitors 6,921,905 open source libraries and supports 32 different package managers. To gather the information on libraries, it uses the dominant package manager for each programming language that is supported. The website organizes them by programming language, package manager, license (such as GPL or MIT), and by keyword.
On November 14, 2017, Libraries.io announced its acquisition by Tidelift, an open-source software support company, with an intention to continue to develop and operate the service.
The code that runs the web service is available on GitHub and under the GNU Affero General Public License.
External links
Libraries.io source code |
https://en.wikipedia.org/wiki/Common%20Terminology%20Criteria%20for%20Adverse%20Events | The Common Terminology Criteria for Adverse Events (CTCAE), formerly called the Common Toxicity Criteria (CTC or NCI-CTC), are a set of criteria for the standardized classification of adverse effects of drugs used in cancer therapy.
The CTCAE system is a product of the US National Cancer Institute (NCI).
The first iteration was released prior to 1998. In 1999, version 2.0 was released. CTCAE version 4.0 followed in 2009, with an update to version 4.03 in 2010. The current version, 5.0, was released on November 27, 2017. Many clinical trials, now extending beyond oncology, encode their observations using the CTCAE system.
It uses a range of grades from 1 to 5. Specific conditions and symptoms may have values or descriptive comments for each level, but the general guideline is:
1 - Mild
2 - Moderate
3 - Severe
4 - Life-threatening
5 - Death
Grade 1: Mild; asymptomatic or mild symptoms; clinical or diagnostic observations only; intervention not indicated.
Grade 2: Moderate; minimal, local, or noninvasive intervention indicated.
Grade 3: Severe or medically significant but not immediately life-threatening; may be disabling or limit self-care activities of daily living (ADL).
Grade 4: Life-threatening consequences; urgent intervention indicated.
Grade 5: Death related to the adverse event.
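The general 1–5 scale can be captured in a few lines of code. The sketch below is illustrative only (the function name and table are mine, not part of the CTCAE specification); real CTCAE terms attach condition-specific definitions to each grade.

```python
# Illustrative sketch: map CTCAE grades to their general guideline labels.
# Note: actual CTCAE terms carry per-condition definitions for each grade.

CTCAE_GRADES = {
    1: "Mild",
    2: "Moderate",
    3: "Severe",
    4: "Life-threatening",
    5: "Death",
}

def grade_label(grade: int) -> str:
    """Return the general severity label for a CTCAE grade (1-5)."""
    if grade not in CTCAE_GRADES:
        raise ValueError(f"CTCAE grades run from 1 to 5, got {grade}")
    return CTCAE_GRADES[grade]
```

For example, `grade_label(3)` returns `"Severe"`.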
https://en.wikipedia.org/wiki/Blue | Blue is one of the three primary colours in the RYB colour model (traditional colour theory), as well as in the RGB (additive) colour model. It lies between violet and cyan on the spectrum of visible light. The term blue generally describes colors perceived by humans observing light with a dominant wavelength between approximately 450 and 495 nanometres. Most blues contain a slight mixture of other colours; azure contains some green, while ultramarine contains some violet. The clear daytime sky and the deep sea appear blue because of an optical effect known as Rayleigh scattering. An optical effect called the Tyndall effect explains blue eyes. Distant objects appear more blue because of another optical effect called aerial perspective.
Blue has been an important colour in art and decoration since ancient times. The semi-precious stone lapis lazuli was used in ancient Egypt for jewellery and ornament and later, in the Renaissance, to make the pigment ultramarine, the most expensive of all pigments. In the eighth century Chinese artists used cobalt blue to colour fine blue and white porcelain. In the Middle Ages, European artists used it in the windows of cathedrals. Europeans wore clothing coloured with the vegetable dye woad until it was replaced by the finer indigo from America. In the 19th century, synthetic blue dyes and pigments gradually replaced organic dyes and mineral pigments. Dark blue became a common colour for military uniforms and later, in the late 20th century, for business suits. Because blue has commonly been associated with harmony, it was chosen as the colour of the flags of the United Nations and the European Union.
In the United States and Europe, blue is the colour that both men and women are most likely to choose as their favourite, with at least one recent survey showing the same across several other countries, including China, Malaysia, and Indonesia. Past surveys in the US and Europe have found that blue is the colour most commonly associ |
https://en.wikipedia.org/wiki/Variational%20principle | In science and especially in mathematical studies, a variational principle is one that enables a problem to be solved using calculus of variations, which concerns finding functions that optimize the values of quantities that depend on those functions. For example, the problem of determining the shape of a hanging chain suspended at both ends—a catenary—can be solved using variational calculus, and in this case, the variational principle is the following: The solution is a function that minimizes the gravitational potential energy of the chain.
Overview
Any physical law which can be expressed as a variational principle describes a self-adjoint operator. These expressions are also called Hermitian. Such an expression describes an invariant under a Hermitian transformation.
History
Felix Klein's Erlangen program attempted to identify such invariants under a group of transformations. In what is referred to in physics as Noether's theorem, the Poincaré group of transformations (what is now called a gauge group) for general relativity defines symmetries under a group of transformations which depend on a variational principle, or action principle.
Examples
In mathematics
The Rayleigh–Ritz method for solving boundary-value problems approximately
Ekeland's variational principle in mathematical optimization
The finite element method
The variational principle relating topological entropy and Kolmogorov–Sinai entropy.
In physics
Fermat's principle in geometrical optics
Maupertuis' principle in classical mechanics
The principle of least action in mechanics, electromagnetic theory, and quantum mechanics
The variational method in quantum mechanics
Gauss's principle of least constraint and Hertz's principle of least curvature
Hilbert's action principle in general relativity, leading to the Einstein field equations.
Palatini variation
Gibbons–Hawking–York boundary term |
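As a concrete instance of the Rayleigh–Ritz method listed above, the lowest eigenvalue λ₁ = π² ≈ 9.87 of −u″ = λu on [0, 1] with u(0) = u(1) = 0 can be bounded from above by the Rayleigh quotient of any admissible trial function. The sketch below (an illustration, not from the source) uses the trial function u(x) = x(1 − x), for which the quotient evaluates to exactly 10:

```python
# Rayleigh-Ritz sketch: upper-bound the lowest eigenvalue of -u'' = lambda*u
# on [0, 1] with u(0) = u(1) = 0 using the trial function u(x) = x*(1 - x).
# Exact answer: lambda_1 = pi^2 ~ 9.8696; this trial function gives 10.
import math

def rayleigh_quotient(u, du, n=100_000):
    """Evaluate R[u] = (int du^2 dx) / (int u^2 dx) on [0, 1] numerically."""
    h = 1.0 / n
    num = sum(du((i + 0.5) * h) ** 2 for i in range(n)) * h  # midpoint rule
    den = sum(u((i + 0.5) * h) ** 2 for i in range(n)) * h
    return num / den

estimate = rayleigh_quotient(lambda x: x * (1 - x), lambda x: 1 - 2 * x)
print(estimate, math.pi ** 2)  # ~10.0 vs ~9.8696: the quotient is an upper bound
```

Analytically, ∫(1 − 2x)² dx = 1/3 and ∫x²(1 − x)² dx = 1/30, so the quotient is 10, a few percent above the exact eigenvalue.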
https://en.wikipedia.org/wiki/Somatic%20theory | Somatic theory is a theory of human social behavior based on the somatic marker hypothesis of António Damásio, which proposes a mechanism by which emotional processes can guide (or bias) behavior, in particular decision-making; it also draws on the attachment theory of John Bowlby and the self-psychology of Heinz Kohut (especially as consolidated by Allan Schore).
It draws on various philosophical models: Friedrich Nietzsche's On the Genealogy of Morals, Martin Heidegger on das Man, Maurice Merleau-Ponty on the lived body as a center of experience, Ludwig Wittgenstein on social practices, and Michel Foucault on discipline, as well as theories of performativity emerging out of the speech act theory of J. L. Austin as developed by Judith Butler and Shoshana Felman. Some somatic theorists have also applied somatic theory to performance in the schools of acting developed by Konstantin Stanislavski and Bertolt Brecht.
Theorists
Barbara Sellers-Young
Barbara Sellers-Young applies Damasio’s somatic-marker hypothesis to critical thinking as an embodied performance and provides a review of the theoretical literature in performance studies that supports something like Damasio’s approach:
Howard Gardner’s theory of multiple intelligences, especially bodily-kinesthetic intelligence
Thomas Hanna’s belief that “we cannot sense without acting and we cannot act without sensing”
Bonnie Bainbridge Cohen's movement-pedagogy
Konstantin Stanislavski’s acting theory that “in every physical action, unless it is purely mechanical, there is concealed some inner action, some feelings. This is how the two levels of life in a part are created, the inner and the outer. They are intertwined. A common purpose brings them together and reinforces the unbreakable bond.”
Edward Slingerland
Edward Slingerland applies Damasio's somatic-marker hypothesis to the cognitive linguistics of Gilles Fauconnier and Mark Turner, as well as George Lakoff and Mark Jo
https://en.wikipedia.org/wiki/Anti-vibration%20compound | An anti-vibration compound is a temperature-resistant mixture of a liquid with fine particles, which is used to reduce oscillations in calender rolls and to dampen vibrations in fabricated structures such as machine beds and housings.
Use
Vibration may limit the performance of a calender or paper machine. It can have numerous sources such as bulk variations in the sheet, bearing problems, or misalignment of the driveshaft. Vibration manifests itself as a high frequency periodic movement of the roll body with an amplitude from less than one to several µm.
When anti-vibration compound is introduced to the center bores of the rolls, vibration is transferred from the solid roll structure to the incompressible fluid component of the anti-vibration compound. Its solid particles are less mobile due to their inertia. Thus the fluid is forced to oscillate around the solid components. The flow energy is absorbed by micro eddies by which the vibration is damped.
The benefits are a smoother running with increased operating speed and production, longer operating times of the polymer covers between re-grindings and improved product quality due to the reduction of barring.
Classical mechanics |
https://en.wikipedia.org/wiki/B-cell%20CLL/lymphoma | B-cell CLL/lymphoma refers to a family of genes associated with certain types of lymphoma and leukemia.
Although named for B-cell chronic lymphocytic leukemia, they can be associated with other malignancies.
Members include:
CCND1 (also known as "BCL1")
BCL2
BCL3
BCL5
BCL6, BCL6B
BCL7A, BCL7B, BCL7C
BCL8
BCL9
BCL10
BCL11A, BCL11B
See also
Bcl-2 family |
https://en.wikipedia.org/wiki/Oblique%20popliteal%20ligament | The oblique popliteal ligament (posterior ligament) is a broad, flat, fibrous ligament on the posterior knee. It is an extension of the tendon of the semimembranosus muscle. It attaches onto the intercondylar fossa and lateral condyle of the femur. It reinforces the posterior central portion of the knee joint capsule.
Anatomy
The oblique popliteal ligament is formed as a lateral expansion of the tendon of the semimembranosus muscle and represents one of the muscle's five insertions. The ligament blends with the posterior portion of the knee joint capsule. It exhibits a large opening through which nerves and vessels pass.
Attachments
The ligament extends superolaterally from the semimembranosus tendon to attach onto the intercondylar fossa and lateral condyle of the femur.
Relations
The oblique popliteal ligament forms part of the floor of the popliteal fossa; the popliteal artery lies upon the ligament. The ligament is pierced by the posterior division of the obturator nerve, as well as the middle genicular nerve, the middle genicular artery, and the middle genicular vein.
Clinical significance
The oblique popliteal ligament may be damaged, causing a valgus deformity. Surgical repair of the ligament often leads to better outcomes than conservative management.
The oblique popliteal ligament may be cut during arthroscopic meniscus repair surgery.
Additional images |
https://en.wikipedia.org/wiki/Emboliform%20nucleus | The emboliform nucleus (or anterior interposed nucleus) is a deep cerebellar nucleus that lies immediately to the medial side of the nucleus dentatus, and partly covering its hilum. It is one among the four pairs of deep cerebellar nuclei, which are from lateral to medial: the dentate, interposed (which consists of the emboliform and globose), and fastigial nuclei. These nuclei can be seen using Weigert's elastic stain.
Emboliform, from Ancient Greek, means "shaped like a plug or wedge".
Structure
The emboliform nucleus is a wedge-shaped structure of gray matter found at the medial side of the hilum of the dentate nucleus. Its neurons display a structure similar to those of the dentate nucleus. In some mammals the emboliform nucleus is continuous with the globose nucleus, together forming the interposed nucleus. When present, the interposed nucleus can be divided into an anterior and a posterior interposed nucleus, considered homologues of the emboliform and globose nuclei, respectively.
Function
As a part of the interposed nucleus, the emboliform participates in the spinocerebellum, a system that regulates the precision of limb movements. Axons leaving the emboliform exit through the superior cerebellar peduncle and reach the red nucleus in the midbrain and several thalamic nuclei which project into areas of the cerebral cortex that control limb movement. |
https://en.wikipedia.org/wiki/Cineromycin%20B | Cineromycin B is an antiadipogenic antibiotic with the molecular formula C17H26O4 which is produced by the bacterium Streptomyces cinerochromogenes. |
https://en.wikipedia.org/wiki/L%27Aquila%20saffron | L'Aquila saffron () is a saffron product of the cuisine of Abruzzo, Italy. It is traditionally cultivated on the Navelli plateau and in the Subequana Valley, in the Park Municipalities of Fagnano Alto, Fontecchio, Molina Aterno, and Tione degli Abruzzi. Saffron was introduced to Italy from Spain in the 13th century by a Dominican friar belonging to the Santucci family of Navelli. Production in the Navelli Plain is favored by the karst soil, which prevents the stagnation of water that is unfavorable to the growth of the plant.
Under its Italian name "Zafferano dell'Aquila", the product has been registered as a Protected Designation of Origin since 4 February 2005, while the establishment of the Consortium for the Protection of Zafferano dell'Aquila dates back to May 13, 2005. The name may only be used if the saffron is produced according to its specifications within the municipalities of Barisciano, Caporciano, Fagnano Alto, Fontecchio, L'Aquila, Molina Aterno, Navelli, Poggio Picenze, Prata d'Ansidonia, San Demetrio nei Vestini, S. Pio delle Camere, Tione degli Abruzzi, or Villa S. Angelo at an altitude of 350–1,000 metres above sea level. It is included in the Slow Food movement's Ark of Taste, an international catalogue of endangered heritage foods.
Production
The soil is prepared in spring with plowing to a depth of 30 cm and simultaneous fertilization with about 30 t/ha of manure; the use of any fertilizer is then prohibited during the vegetative cycle. The surface is subsequently refined and leveled, and 2 to 4 furrows are prepared at a distance of about 20 cm to accommodate the bulbs.
After a subsequent milling of the soil, the bulbs are transplanted in August, with a density of about 10 t/ha, corresponding to about 600,000 bulbs. The soil is not irrigated, and the bulbs are buried along the row, in contact with one another, at a depth of about 10 cm.
The first filiform leaves sprout with the first rains of September, growing up to 40 cm. The flowers have six petals of a
https://en.wikipedia.org/wiki/Oscillation%20%28cell%20signaling%29 | Oscillations are an important type of cell signaling characterized by the periodic change of the system in time. Oscillations can take place in a biological system in a multitude of ways. Positive feedback loops, on their own or in combination with negative feedback are a common feature of oscillating biological systems.
Examples
Genetic oscillation
One of the most common forms of biological oscillation is genetic oscillation, which can take place when a transcription factor binds and represses its own promoter. This type of regulatory system successfully describes the NF-κB–IκB and p53–Mdm2 biological oscillating systems.
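A minimal way to see such an oscillation numerically is a delayed negative-feedback model in which a protein represses its own production after a delay τ; for a sufficiently steep response and long delay the steady state destabilizes into sustained oscillations. All parameter values below are illustrative choices, not taken from the NF-κB or p53 literature:

```python
# Minimal delayed negative-feedback oscillator (illustrative parameters):
#   dx/dt = alpha / (1 + x(t - tau)**h) - gamma * x(t)
# Forward-Euler integration with a ring buffer for the delayed term.

alpha, gamma, h, tau = 10.0, 1.0, 4, 2.0  # production, decay, Hill exponent, delay
dt, t_end = 0.01, 60.0
delay_steps = int(tau / dt)

x = 0.1
history = [x] * delay_steps  # supplies x(t - tau) during the initial transient
trace = []
steps = int(t_end / dt)
for step in range(steps):
    x_delayed = history[step % delay_steps]   # value from delay_steps ago
    dxdt = alpha / (1.0 + x_delayed ** h) - gamma * x
    x += dt * dxdt
    history[step % delay_steps] = x           # overwrite slot for future use
    trace.append(x)

late = trace[steps // 2:]                     # discard the transient
amplitude = max(late) - min(late)
print(f"late-time amplitude ~ {amplitude:.2f}")
```

With these settings the delay is well past the Hopf threshold of the linearized model, so the late-time trajectory keeps a large peak-to-trough amplitude instead of settling to the fixed point.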
Relaxation oscillations
Relaxation oscillation takes place in the context of a bi-stable system. It is characterized by the periodic switching between two stable states. |
https://en.wikipedia.org/wiki/International%20Conference%20on%20Logic%20Programming | The International Conference on Logic Programming (ICLP) is the premier academic conference on the topic of logic programming, one of the main programming paradigms. It is organized annually by the Association for Logic Programming (ALP). The conference consists of peer-reviewed papers with the post-proceedings published in the international journal Theory and Practice of Logic Programming (TPLP), published by Cambridge University Press. The acceptance rate for TPLP papers is about 20%. Technical Communications are published as Electronic Proceedings in Theoretical Computer Science.
The first ICLP was held in September 1982 in Marseille, France; the complete list is available on the ALP website. Every 4 years, ICLP is held in conjunction with several other logic conferences, in the Federated Logic Conferences (FLoC) series.
ICLP ranks as A (top 14.55%) in the CORE conference ranking. |
https://en.wikipedia.org/wiki/Fouling | Fouling is the accumulation of unwanted material on solid surfaces. The fouling materials can consist of either living organisms (biofouling) or a non-living substance (inorganic or organic). Fouling is usually distinguished from other surface-growth phenomena in that it occurs on a surface of a component, system, or plant performing a defined and useful function and that the fouling process impedes or interferes with this function.
Other terms used in the literature to describe fouling include deposit formation, encrustation, crudding, deposition, scaling, scale formation, slagging, and sludge formation. The last six terms have a more narrow meaning than fouling within the scope of the fouling science and technology, and they also have meanings outside of this scope; therefore, they should be used with caution.
Fouling phenomena are common and diverse, ranging from the fouling of ship hulls and natural surfaces in the marine environment (marine fouling), and the fouling of heat-transfer components through ingredients contained in cooling water or gases, to the development of plaque or calculus on teeth or deposits on solar panels on Mars, among other examples.
This article is primarily devoted to the fouling of industrial heat exchangers, although the same theory is generally applicable to other varieties of fouling. In cooling technology and other technical fields, a distinction is made between macro fouling and micro fouling. Of the two, micro fouling is the one that is usually more difficult to prevent and therefore more important.
Components subject to fouling
Examples of components that may be subject to fouling and the corresponding effects of fouling:
Heat exchanger surfaces – reduces thermal efficiency, decreases heat flux, increases temperature on the hot side, decreases temperature on the cold side, induces under-deposit corrosion, increases use of cooling water;
Piping, flow channels – reduces flow, increases pressure drop, increases upstream pressure, incr |
https://en.wikipedia.org/wiki/Plant%20cover | The abundances of plant species are often measured by plant cover, which is the relative area covered by different plant species in a small plot. Plant cover is not biased by the size and distributions of individuals, and is an important and often measured characteristic of the composition of plant communities.
Usage
Plant cover data may be used to classify the studied plant community into a vegetation type, to test different ecological hypotheses on plant abundance, and in gradient studies, where the effects of different environmental gradients on the abundance of specific plant species are studied.
Measurement
The most common way to measure plant cover in herbaceous plant communities is to make a visual assessment of the relative area covered by the different species in a small plot (see quadrat). The visually assessed cover of a plant species is then recorded either as a continuous variable between 0 and 1 or divided into interval classes as an ordinal variable. An alternative methodology, called the pin-point method (or point-intercept method), has also been widely employed.
In a pin-point analysis, a frame with a fixed grid pattern is placed randomly above the vegetation, and a thin pin is inserted vertically through one of the grid points into the vegetation. The different species touched by the pin are recorded at each insertion. The cover of plant species k in a plot, q_k, is then assumed to be proportional to the number of “hits” by the pin,

q̂_k = y_k / n,

where y_k is the number of pins that hit species k out of a total of n pins. Since a single pin in multi-species plant communities will often hit more than a single species, the sum of the plant cover of the different species may be larger than unity when estimated by the pin-point method. The sum of the estimated plant cover is expected to increase with the number of plant species in a plot and with increasing 3-dimensional structuring of the plants in the community. Plant cover data obtained by the pin-point method may be mod
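The pin-point estimator is simple enough to sketch in a few lines. The simulation below uses hypothetical species names and cover values, and treats hits on different species as independent to mimic vertically overlapping vegetation layers, which also illustrates how summed pin-point estimates can exceed unity:

```python
# Pin-point (point-intercept) cover estimation sketch with simulated data.
# Species names and true cover values are hypothetical illustrations.
import random

random.seed(1)

def pinpoint_estimate(true_cover, n_pins):
    """Estimate cover q_k = y_k / n from simulated pin hits.

    Each species is hit independently with probability equal to its true
    cover, mimicking vertically overlapping vegetation layers.
    """
    hits = {sp: 0 for sp in true_cover}
    for _ in range(n_pins):
        for sp, q in true_cover.items():
            if random.random() < q:  # this pin touches species sp
                hits[sp] += 1
    return {sp: y / n_pins for sp, y in hits.items()}

# Two overlapping species whose true covers sum to 1.1
estimates = pinpoint_estimate({"grass": 0.7, "herb": 0.4}, n_pins=1000)
print(estimates, "sum =", round(sum(estimates.values()), 2))
```

Because overlapping species are recorded on the same pin, the estimated covers sum to roughly 1.1 here, above unity, exactly as the text describes.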
https://en.wikipedia.org/wiki/Logical%20equivalence | In logic and mathematics, statements p and q are said to be logically equivalent if they have the same truth value in every model. The logical equivalence of p and q is sometimes expressed as p ≡ q, p :: q, Epq, or p ⟺ q, depending on the notation being used.
However, these symbols are also used for material equivalence, so proper interpretation would depend on the context. Logical equivalence is different from material equivalence, although the two concepts are intrinsically related.
Logical equivalences
In logic, many common logical equivalences exist and are often listed as laws or properties. The following tables illustrate some of these.
General logical equivalences
Logical equivalences involving conditional statements
Logical equivalences involving biconditionals
Examples
In logic
The following statements are logically equivalent:
If Lisa is in Denmark, then she is in Europe (a statement of the form p → q).
If Lisa is not in Europe, then she is not in Denmark (a statement of the form ¬q → ¬p).
Syntactically, (1) and (2) are derivable from each other via the rules of contraposition and double negation. Semantically, (1) and (2) are true in exactly the same models (interpretations, valuations); namely, those in which either Lisa is in Denmark is false or Lisa is in Europe is true.
(Note that in this example, classical logic is assumed. Some non-classical logics do not deem (1) and (2) to be logically equivalent.)
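The semantic claim that (1) and (2) are true in exactly the same valuations can be checked mechanically by enumerating all truth assignments; the helper functions below are an illustrative sketch, not standard library functionality:

```python
# Check logical equivalence by brute-force truth-table enumeration.
from itertools import product

def implies(a, b):
    # Material conditional: a -> b is false only when a is true and b is false.
    return (not a) or b

def logically_equivalent(f, g, n_vars):
    """True if f and g agree on every truth assignment to n_vars variables."""
    return all(f(*vals) == g(*vals)
               for vals in product([False, True], repeat=n_vars))

# (1) p -> q   versus its contrapositive   (2) not q -> not p
stmt1 = lambda p, q: implies(p, q)
stmt2 = lambda p, q: implies(not q, not p)
print(logically_equivalent(stmt1, stmt2, 2))  # True: same value in every model
```

The converse q → p, by contrast, fails this check at the assignment p = True, q = False, so it is not logically equivalent to (1).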
Relation to material equivalence
Logical equivalence is different from material equivalence. Formulas p and q are logically equivalent if and only if the statement of their material equivalence (p ↔ q) is a tautology.
The material equivalence of p and q (often written as p ↔ q) is itself another statement in the same object language as p and q. This statement expresses the idea "p if and only if q". In particular, the truth value of p ↔ q can change from one model to another.
On the other hand, the claim that two formulas are logically equivalent is a statement in the metalanguage, which expresse
https://en.wikipedia.org/wiki/Podium%20%28company%29 | Podium is a private technology company headquartered in Lehi, Utah that develops cloud-based software related to messaging, customer feedback, online reviews, selling products, and requesting payments.
History
Podium was founded in 2014 by Eric Rea and Dennis Steele, who developed a tool to help small businesses "build their online reputation" through online reviews. Podium was initially known as RepDrive before rebranding as Podium in 2015. In 2015, Podium moved from a spare bedroom to a new location above a Provo bike shop. In March 2020, Podium added payments technology to its product suite. In November 2021, Podium raised $201 million in Series D funding and was valued at $3 billion.
Product
Podium is a software-as-a-service platform designed to improve a business's online reputation. It helps users manage business interactions in one tool. Users can handle reviews, text messages, chats, and payment requests directly within the app.
Awards and recognition
Podium was named one of the "Emerging Elite" by Mountain West Capital Network in 2016 and 2017.
Podium's CEO Eric Rea was interviewed for "The Top" podcast in December 2016.
Eric Rea was named one of the "highest rated CEOs" by Glassdoor in June 2017.
Named the "No. 1 Startup to Watch" by Utah Valley Magazine in September 2017.
Named Utah Business Medium Companies Best Companies to Work for in December 2017.
Ranked 16th on Glassdoor's "Best Places to Work" list in 2018.
Ranked 13th on the 2018 Inc. 5000.
Listed on Forbes' 2018 and 2021 "Cloud 100".
Recognized by Forbes as one of their "Next Billion-Dollar Startups".
Ranked as the ninth fastest-growing technology company, public or private, by Deloitte in 2018.
Named by Fast Company as one of the "World's Most Innovative Companies" in 2019 |
https://en.wikipedia.org/wiki/Wannier%20equation | The Wannier equation describes a quantum mechanical eigenvalue problem in solids where an electron in a conduction band and an electronic vacancy (i.e. hole) within a valence band attract each other via the Coulomb interaction. For one electron and one hole, this problem is analogous to the Schrödinger equation of the hydrogen atom; and the bound-state solutions are called excitons. When an exciton's radius extends over several unit cells, it is referred to as a Wannier exciton in contrast to Frenkel excitons whose size is comparable with the unit cell. An excited solid typically contains many electrons and holes; this modifies the Wannier equation considerably. The resulting generalized Wannier equation can be determined from the homogeneous part of the semiconductor Bloch equations or the semiconductor luminescence equations.
The equation is named after Gregory Wannier.
Background
Since an electron and a hole have opposite charges, their mutual Coulomb interaction is attractive. The corresponding Schrödinger equation, in the relative coordinate r, has the same form as that of the hydrogen atom:
−(ħ²/2μ) ∇² φ_ν(r) + V(r) φ_ν(r) = E_ν φ_ν(r),
with the potential given by
V(r) = −e² / (4π ε_r ε₀ |r|).
Here, ħ is the reduced Planck constant, ∇ is the nabla operator, μ is the reduced mass, −e (+e) is the elementary charge related to an electron (hole), ε_r is the relative permittivity, and ε₀ is the vacuum permittivity. The solutions of the hydrogen atom are described by the eigenfunctions φ_ν(r) and eigenenergies E_ν, where ν is a quantum number labeling the different states.
In a solid, the scaling of the eigenenergies and the wavefunction size are orders of magnitude different from the hydrogen problem because the relative permittivity ε_r is roughly ten and the reduced mass μ in a solid is much smaller than the electron rest mass m₀, i.e., μ ≪ m₀. As a result, the exciton radius can be large while the exciton binding energy is small, typically a few to hundreds of meV depending on the material, compared to several eV for the hydrogen problem.
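The scaling argument can be made concrete with a short calculation. The sketch below rescales the hydrogen Rydberg and Bohr radius; the material parameters μ/m₀ = 0.06 and ε_r = 12.9 are assumed, roughly GaAs-like values, not figures from this article:

```python
import math

# Hydrogen-like scaling of the Wannier exciton (illustrative sketch):
#   binding energy  E_X = (mu/m0) / eps_r**2 * Ry
#   exciton radius  a_X = (eps_r / (mu/m0)) * a_Bohr
RYDBERG_EV = 13.6          # hydrogen binding energy in eV
BOHR_RADIUS_NM = 0.0529    # hydrogen Bohr radius in nm

def exciton_binding_energy_ev(mu_over_m0, eps_r):
    return (mu_over_m0 / eps_r**2) * RYDBERG_EV

def exciton_radius_nm(mu_over_m0, eps_r):
    return (eps_r / mu_over_m0) * BOHR_RADIUS_NM

# Assumed, roughly GaAs-like parameters (NOT from the article):
e_b = exciton_binding_energy_ev(0.06, 12.9)   # a few meV: small binding energy
a_x = exciton_radius_nm(0.06, 12.9)           # ~10 nm: spans many unit cells
```

With these assumed values the binding energy comes out near 5 meV and the radius near 11 nm, illustrating why such excitons extend over many unit cells.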
The Fourier transformed version of the presented Hamiltonian can be written a |
https://en.wikipedia.org/wiki/Blue%20hour | The blue hour (from French l'heure bleue) is the period of twilight (in the morning or evening, around the nautical stage) when the Sun is at a significant depth below the horizon. During this time, the remaining sunlight takes on a mostly blue shade. This shade differs from the colour of the sky on a clear day, which is caused by Rayleigh scattering.
The blue hour occurs when the Sun is far enough below the horizon so that the sunlight's blue wavelengths dominate due to the Chappuis absorption caused by ozone. Since the term is colloquial, it lacks an official definition such as dawn, dusk, or the three stages of twilight. Rather, blue hour refers to the state of natural lighting that usually occurs around the nautical stage of the twilight period (at dawn or dusk).
Explanation and times of occurrence
A still commonly presented but incorrect explanation claims that the post-sunset and pre-sunrise atmosphere receives and disperses only the sun's shorter blue wavelengths while scattering away the longer, reddish wavelengths, and that this is why the hue of this hour is so blue. In fact, the blue hour occurs when the Sun is far enough below the horizon that the sunlight's blue wavelengths dominate due to the Chappuis absorption caused by ozone.
When the sky is clear, the blue hour can be a colourful spectacle, with the indirect sunlight tinting the sky yellow, orange, red, and blue. This effect is caused by the relative diffusibility of shorter wavelengths (bluer rays) of visible light versus the longer wavelengths (redder rays). During the blue "hour", red light passes out into space while blue light is scattered in the atmosphere, and it is the blue light that reaches Earth's surface.
Blue hour usually lasts about 20–96 minutes, right after sunset and right before sunrise. Time of year, location, and air quality all have an impact on its exact timing. For instance, in Egypt on 21 June, when sunset is at 7:59 PM, blue hour occurs from 7:59 PM to 9:35 PM. When sunrise is at 5:54 AM: b |
https://en.wikipedia.org/wiki/Exotheology | The term "exotheology" was coined in the 1960s or early 1970s for the examination of theological issues as they pertain to extraterrestrial intelligence. It is primarily concerned with either conjecture about possible theological beliefs that extraterrestrials might have, or how our own theologies would be influenced by evidence of and/or interaction with extraterrestrials.
One of the main themes of exotheology is to apply the concept of extraterrestrials who are sentient and, more to the point, endowed with a soul, as a thought experiment in the examination of a given theology, mostly Christian theology, occasionally also Jewish theology.
Christianity
The Christian writer C. S. Lewis, in a 1950s article in the Christian Herald contemplated the possibility of the Son of God incarnating on extraterrestrial worlds, or else that God could devise an entirely distinct plan of salvation for extraterrestrial communities from the one for humans.
Lutheran theologian Ted Peters (2003) said that the questions raised by the possibility of extraterrestrial life are not new to Christian theology and do not pose, as claimed by other authors, a threat to Christian dogma. Peters says that medieval theology had frequently considered the question of "what if God had created many worlds?", as had the earlier Church Fathers in discussion of the Antipodes.
The Catholic theologian Corrado Balducci often discussed the question in Italian popular media, and in 2001 published a statement UFOs and Extraterrestrials - A Problem for the Church?.
In a 2008 statement, José Gabriel Funes, head of the Vatican Observatory, said
"Just as there is a multiplicity of creatures on earth, there can be other beings, even intelligent, created by God. This is not in contrast with our faith because we can't put limits on God's creative freedom".
Smaller denominations also have similar treatments in passing in their key writings: Christian Science and the Course in Miracles treat extraterrestrials as |
https://en.wikipedia.org/wiki/171st%20meridian%20east | The meridian 171° east of Greenwich is a line of longitude that extends from the North Pole across the Arctic Ocean, Asia, the Pacific Ocean, New Zealand, the Southern Ocean, and Antarctica to the South Pole.
The 171st meridian east forms a great circle with the 9th meridian west.
From Pole to Pole
Starting at the North Pole and heading south to the South Pole, the 171st meridian east passes through:
{| class="wikitable plainrowheaders"
! scope="col" width="130" | Co-ordinates
! scope="col" | Country, territory or sea
! scope="col" | Notes
|-
| style="background:#b0e0e6;" |
! scope="row" style="background:#b0e0e6;" | Arctic Ocean
| style="background:#b0e0e6;" |
|-
| style="background:#b0e0e6;" |
! scope="row" style="background:#b0e0e6;" | East Siberian Sea
| style="background:#b0e0e6;" |
|-valign="top"
|
! scope="row" |
| Chukotka Autonomous Okrug Kamchatka Krai — from
|-
| style="background:#b0e0e6;" |
! scope="row" style="background:#b0e0e6;" | Bering Sea
| style="background:#b0e0e6;" |
|-valign="top"
| style="background:#b0e0e6;" |
! scope="row" style="background:#b0e0e6;" | Pacific Ocean
| style="background:#b0e0e6;" | Passing just east of the island of Mejit, (at )
|-
|
! scope="row" |
| Maloelap Atoll
|-valign="top"
| style="background:#b0e0e6;" |
! scope="row" style="background:#b0e0e6;" | Pacific Ocean
| style="background:#b0e0e6;" | Passing just west of Aur Atoll, (at ) Passing just west of Majuro Atoll, (at )
|-
|
! scope="row" |
| South Island — passing just east of the town of Oamaru (at )
|-
| style="background:#b0e0e6;" |
! scope="row" style="background:#b0e0e6;" | Pacific Ocean
| style="background:#b0e0e6;" |
|-
| style="background:#b0e0e6;" |
! scope="row" style="background:#b0e0e6;" | Southern Ocean
| style="background:#b0e0e6;" |
|-
|
! scope="row" | Antarctica
| Ross Dependency, claimed by
|-
|}
See also
170th meridian east
172nd meridian east
171st meridian west |
https://en.wikipedia.org/wiki/Regret%20%28decision%20theory%29 | In decision theory, regret is the human emotional response experienced when, after a fixed decision has been taken, information arrives about what the best course of action would have been; it can be measured as the difference in value between the decision made and the optimal decision.
The theory of regret aversion or anticipated regret proposes that when facing a decision, individuals may anticipate regret and thus incorporate into their choice their desire to eliminate or reduce this possibility. Regret is a negative emotion with a powerful social and reputational component, and is central to how humans learn from experience and to the human psychology of risk aversion. Conscious anticipation of regret creates a feedback loop that lifts regret from the emotional realm, where it is often modeled as mere human behavior, into the realm of the rational choice behavior modeled in decision theory.
Description
Regret theory is a model in theoretical economics simultaneously developed in 1982 by Graham Loomes and Robert Sugden, David E. Bell, and Peter C. Fishburn. Regret theory models choice under uncertainty taking into account the effect of anticipated regret. Subsequently, several other authors improved upon it.
It incorporates into the utility function a regret term that depends negatively on the realized outcome and positively on the best alternative outcome given the resolution of uncertainty. This regret term is usually an increasing, continuous, and non-negative function subtracted from the traditional utility index. Preferences of this type always violate transitivity in the traditional sense, although most satisfy a weaker version.
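As an illustration of such a regret-adjusted utility index, the sketch below uses a hypothetical linear regret term with scale k (the payoffs and k are invented for illustration, not taken from the literature) and compares a safe and a risky action over two equally likely states:

```python
# Hypothetical sketch of a regret-adjusted utility index: traditional
# utility minus an increasing, non-negative function of the gap between
# the best alternative outcome and the realized outcome.

def regret_utility(outcome, best_alternative, k=0.5):
    regret = max(best_alternative - outcome, 0.0)  # non-negative regret term
    return outcome - k * regret

def expected_regret_utility(payoffs, probs, all_payoffs):
    """Expected regret-adjusted utility of one action (list of state payoffs)."""
    total = 0.0
    for s, p in enumerate(probs):
        best = max(a[s] for a in all_payoffs)      # best outcome in state s
        total += p * regret_utility(payoffs[s], best)
    return total

safe, risky = [5.0, 5.0], [11.0, 0.0]              # payoffs per state (invented)
probs = [0.5, 0.5]
actions = [safe, risky]
eu_safe = expected_regret_utility(safe, probs, actions)    # 3.5
eu_risky = expected_regret_utility(risky, probs, actions)  # 4.25
```

With these invented numbers, the anticipated regret of missing the risky action's upside (a gap of 6 in the good state) outweighs the regret of its downside (a gap of 5 in the bad state), so the regret-adjusted ranking favors the risky action.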
Evidence
Several experiments over both incentivized and hypothetical choices attest to the magnitude of this effect.
Experiments in first price auctions show that by manipulating the feedback the participants expect to receive, significant differences in the average bids are observed. In particular, "Loser's regret |
https://en.wikipedia.org/wiki/HP%20Enterprise%20Security%20Products | The Micro Focus Enterprise Security Products business is part of the software business of Micro Focus. HP Enterprise Security Products was built from the acquired companies Fortify Software, ArcSight, TippingPoint (obtained through the acquisition of 3Com) and Atalla, which HP bought in 2010 and 2011. HPE has since sold TippingPoint and announced its intention to divest the entire HP Enterprise Software business unit by spinning it out and merging it with Micro Focus. The merger concluded on September 1, 2017.
Products
ArcSight and Fortify security technologies are designed to scan network activity and data to offer customers real-time application-level threat detection. ArcSight provides Security Information and Event Management (SIEM). Fortify provides application protection through the combination of static and dynamic application security testing.
Atalla products are cryptographic solutions and key management solutions.
TippingPoint products provide a network defence system. Announced in September 2013, TippingPoint Next-Generation Firewall (NGFW) is designed to block new risks introduced by cloud and mobile computing.
Divestiture and sale
HP Enterprise sold TippingPoint to Trend Micro on October 21, 2015, for approximately $300 million.
On September 7, 2016, HPE CEO Meg Whitman announced that the software assets of Hewlett Packard Enterprise, including HP Enterprise Security Products, would be spun out and then merged with Micro Focus to create an independent company of which HP Enterprise shareholders would retain the majority ownership. Micro Focus CEO Kevin Loosemore called the transaction "entirely consistent with our established acquisition strategy and our focus on efficient management of mature infrastructure products" and indicated that Micro Focus intended to "bring the core earnings margin for the mature assets in the deal - about 80 per cent of the total - from 21 per cent today to Micro Focus's existing 46 per cent level within three years."
HP Prote |
https://en.wikipedia.org/wiki/Phosphoric%20monoester%20hydrolases | Phosphoric monoester hydrolases (or phosphomonoesterases) are enzymes that catalyse the hydrolysis of O-P bonds by nucleophilic attack of phosphorus by cysteine residues or coordinated metal ions.
They are categorized with the EC number 3.1.3.
Examples include:
acid phosphatase
alkaline phosphatase
fructose-bisphosphatase
glucose-6-phosphatase
phosphofructokinase-2
phosphoprotein phosphatase
calcineurin
6-phytase
See also
phosphodiesterase
phosphatase
External links
Metabolism |
https://en.wikipedia.org/wiki/Ridged%20mirror | In atomic physics, a ridged mirror (or ridged atomic mirror, or Fresnel diffraction mirror) is a kind of atomic mirror, designed for the specular reflection of neutral particles (atoms) coming at a grazing incidence angle. In order to reduce the mean attraction of particles to the surface and increase the reflectivity, this surface has narrow ridges.
Reflectivity of ridged atomic mirrors
Various estimates for the efficiency of quantum reflection of waves from ridged mirror were discussed in the literature. All the estimates explicitly use the de Broglie theory about wave properties of reflected atoms.
Scaling of the van der Waals force
The ridges enhance the quantum reflection from the surface by reducing the effective constant of the van der Waals attraction of atoms to the surface. Such an interpretation leads to an estimate of the reflectivity in terms of the width of the ridges, the distance between ridges (the period), the grazing angle, the wavenumber, and the coefficient of reflection of atoms with that wavenumber from a flat surface at normal incidence. This estimate predicts enhancement of the reflectivity as the period increases, and it is valid only within a restricted range of parameters. See quantum reflection for the approximation (fit) of the flat-surface reflection coefficient.
Interpretation as Zeno effect
For narrow ridges with a large period, the ridges simply block part of the wavefront. The reflection can then be interpreted in terms of the Fresnel diffraction of the de Broglie wave, or the Zeno effect; such an interpretation leads to another estimate of the reflectivity, in which the grazing angle is supposed to be small. This estimate predicts enhancement of the reflectivity as the period is reduced, and it is valid in a parameter range complementary to that of the previous estimate.
Fundamental limit
For efficient ridged mirrors, both estimates above should predict high reflectivity. This implies reducing both the width of the ridges and the period. The width of the ridges cannot be smaller than the size of an atom; this sets the limit of performance of the ridged mirror. |
https://en.wikipedia.org/wiki/Quotient%20of%20subspace%20theorem | In mathematics, the quotient of subspace theorem is an important property of finite-dimensional normed spaces, discovered by Vitali Milman.
Let (X, ||·||) be an N-dimensional normed space. There exist subspaces Z ⊂ Y ⊂ X such that the following holds:
The quotient space E = Y / Z is of dimension dim E ≥ c N, where c > 0 is a universal constant.
The induced norm || · || on E, defined by
||e|| = min { ||y|| : y ∈ Y, y + Z = e },
is uniformly isomorphic to Euclidean. That is, there exists a positive quadratic form ("Euclidean structure") Q on E, such that
(Q(e))^(1/2) / K ≤ ||e|| ≤ K (Q(e))^(1/2) for e ∈ E,
with K > 1 a universal constant.
The statement is relatively easy to prove by induction on the dimension of Z (even for Z = 0, Y = X, c = 1) with a K that depends only on N; the point of the theorem is that K is independent of N.
In fact, the constant c can be made arbitrarily close to 1, at the expense of the
constant K becoming large. The original proof allowed
Notes |
https://en.wikipedia.org/wiki/Machine%20perception | Machine perception is the capability of a computer system to interpret data in a manner that is similar to the way humans use their senses to relate to the world around them. The basic method that the computers take in and respond to their environment is through the attached hardware. Until recently input was limited to a keyboard, or a mouse, but advances in technology, both in hardware and software, have allowed computers to take in sensory input in a way similar to humans.
Machine perception allows the computer to use this sensory input, as well as conventional computational means of gathering information, to gather information with greater accuracy and to present it in a way that is more comfortable for the user. These capabilities include computer vision, machine hearing, machine touch, and machine smelling, as artificial scents are, at the chemical-compound, molecular, and atomic level, indiscernible from natural ones.
The end goal of machine perception is to give machines the ability to see, feel and perceive the world as humans do and therefore for them to be able to explain in a human way why they are making their decisions, to warn us when it is failing and more importantly, the reason why it is failing. This purpose is very similar to the proposed purposes for artificial intelligence generally, except that machine perception would only grant machines limited sentience, rather than bestow upon machines full consciousness, self-awareness, and intentionality.
Machine vision
Computer vision is a field that includes methods for acquiring, processing, analyzing, and understanding images and high-dimensional data from the real world to produce numerical or symbolic information, e.g., in the forms of decisions. Computer vision has many applications already in use today such as facial recognition, geographical modeling, and even aesthetic judgment.
However, machines still struggle to interpret visual input accurately if that input is blurry, and if the viewpoint at which stimul |
https://en.wikipedia.org/wiki/Martellivirales | Martellivirales is an order of viruses.
Taxonomy
The following families are recognized:
Bromoviridae
Closteroviridae
Endornaviridae
Kitaviridae
Mayoviridae
Togaviridae
Virgaviridae |
https://en.wikipedia.org/wiki/Provider%20model | The provider model is a design pattern formulated by Microsoft for use in the ASP.NET Starter Kits and formalized in .NET version 2.0. It is used to allow an application to choose from one of multiple implementations or "condiments" in the application configuration, for example, to provide access to different data stores to retrieve login information, or to use different storage methodologies such as a database, binary to disk, XML, etc.
The .NET extensible provider model allows a "component" to have multiple implementations using an abstract factory pattern approach. Providers are a subclass of the ProviderBase class and typically instantiated using a factory method.
The provider model in ASP.NET 2.0 provides extensibility points for developers to plug their own implementation of a feature into the runtime. Both the membership and role features in ASP.NET 2.0 follow the provider pattern by specifying an interface, or contract. The provider model begins with the abstract class ProviderBase. ProviderBase exists to enforce the contract that all providers need public Name and Description properties, as well as a public Initialize method. Inheriting from ProviderBase are the MembershipProvider and RoleProvider abstract classes. These classes add additional properties and methods to define the interface for their specific areas of functionality.
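Outside .NET, the same contract can be sketched in a few lines. The following Python analogue is hypothetical (the class names, the validate_user method, and the config keys are invented for illustration, not the actual ASP.NET API): an abstract base enforces Name, Description, and Initialize, and a factory method instantiates whichever implementation the configuration names.

```python
from abc import ABC, abstractmethod

class ProviderBase(ABC):
    """Analogue of .NET's ProviderBase: common Name/Description/Initialize contract."""
    def __init__(self):
        self.name = ""
        self.description = ""

    def initialize(self, name, config):
        self.name = name
        self.description = config.get("description", "")

class MembershipProvider(ProviderBase):
    """Analogue of the abstract MembershipProvider: adds the feature-specific API."""
    @abstractmethod
    def validate_user(self, user, password): ...

class SqlMembershipProvider(MembershipProvider):
    def validate_user(self, user, password):
        # Stub: a real provider would query a database here.
        return (user, password) == ("admin", "secret")

class XmlMembershipProvider(MembershipProvider):
    def validate_user(self, user, password):
        # Stub: a real provider would consult an XML store here.
        return False

PROVIDERS = {"sql": SqlMembershipProvider, "xml": XmlMembershipProvider}

def create_provider(config):
    """Factory method: instantiate the implementation named in configuration."""
    provider = PROVIDERS[config["type"]]()
    provider.initialize(config["type"], config)
    return provider

membership = create_provider({"type": "sql", "description": "SQL store"})
```

Swapping the data store then means changing only the `"type"` entry in configuration, which is the point of the pattern: calling code depends on the abstract contract, never on a concrete provider.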
Strategy pattern renaming
It has been argued that the provider model is merely another name for the already existing strategy pattern, and that this should, therefore, be the preferred terminology for describing the design pattern at hand.
See also
Strategy pattern |
https://en.wikipedia.org/wiki/Congener%20%28beverages%29 | In the alcoholic beverages industry, congeners are substances, other than the desired type of alcohol, ethanol, produced during fermentation. These substances include small amounts of chemicals such as methanol and other alcohols (known as fusel alcohols), acetone, acetaldehyde, esters, tannins, and aldehydes (e.g. furfural). Congeners are responsible for most of the taste and aroma of distilled alcoholic beverages, and contribute to the taste of non-distilled drinks. Brandy, rum and red wine have the highest amount of congeners, while vodka and beer have the least.
Congeners are the basis of alcohol congener analysis, a sub-discipline of forensic toxicology which determines what a person drank.
There is some evidence that high-congener drinks induce more severe hangovers, but the effect is not well studied and is still secondary to the total amount of ethanol consumed.
See also
Alcohol (drug)
Alcohol congener analysis
Wine chemistry |
https://en.wikipedia.org/wiki/Password%20strength | Password strength is a measure of the effectiveness of a password against guessing or brute-force attacks. In its usual form, it estimates how many trials an attacker who does not have direct access to the password would need, on average, to guess it correctly. The strength of a password is a function of length, complexity, and unpredictability.
Using strong passwords lowers the overall risk of a security breach, but strong passwords do not replace the need for other effective security controls. The effectiveness of a password of a given strength is strongly determined by the design and implementation of the authentication factors (knowledge, ownership, inherence). The first factor is the main focus of this article.
The rate at which an attacker can submit guessed passwords to the system is a key factor in determining system security. Some systems impose a time-out of several seconds after a small number (e.g. three) of failed password entry attempts. In the absence of other vulnerabilities, such systems can be effectively secured with relatively simple passwords. However, the system must store information about the user's passwords in some form and if that information is stolen, say by breaching system security, the user's passwords can be at risk.
In 2019, the United Kingdom's NCSC analyzed public databases of breached accounts to see which words, phrases, and strings people used. The most popular password on the list was 123456, appearing in more than 23 million passwords. The second-most popular string, 123456789, was not much harder to crack, while the top five included "qwerty", "password", and 1111111.
Password creation
Passwords are created either automatically (using randomizing equipment) or by a human; the latter case is more common. While the strength of randomly chosen passwords against a brute-force attack can be calculated with precision, determining the strength of human-generated passwords is difficult.
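For the randomly generated case the calculation is simple: a password of length L drawn uniformly from an alphabet of N symbols has L·log2(N) bits of entropy, and a brute-force attacker needs on average half of the 2^bits search space. A short sketch (the alphabet sizes are illustrative assumptions, not values from this article):

```python
import math

def random_password_entropy_bits(length, alphabet_size):
    """Entropy of a uniformly random password: length * log2(alphabet size)."""
    return length * math.log2(alphabet_size)

def average_guesses(bits):
    """Average brute-force trials: half of the 2**bits search space."""
    return 2 ** bits / 2

# Illustrative alphabets (assumed): 10 digits vs. 94 printable ASCII symbols.
digits_only = random_password_entropy_bits(8, 10)   # ~26.6 bits
printable = random_password_entropy_bits(8, 94)     # ~52.4 bits
```

The comparison shows why alphabet size matters as much as length: eight random printable-ASCII characters carry roughly twice the entropy of eight random digits, squaring the average number of guesses required.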
Typically, humans are asked to choose a |
https://en.wikipedia.org/wiki/Parametric%20array | A parametric array, in the field of acoustics, is a nonlinear transduction mechanism that generates narrow, nearly side lobe-free beams of low frequency sound, through the mixing and interaction of high frequency sound waves, effectively overcoming the diffraction limit (a kind of spatial 'uncertainty principle') associated with linear acoustics. The main side lobe-free beam of low frequency sound is created as a result of nonlinear mixing of two high frequency sound beams at their difference frequency. Parametric arrays can be formed in water, air, and earth materials/rock.
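The difference-frequency mechanism can be illustrated numerically. In the sketch below, a quadratic nonlinearity stands in for the nonlinear medium, and the tone frequencies are arbitrary illustrative choices, not a model of any particular system; squaring the sum of two primary tones at f1 and f2 produces a strong spectral component at the difference frequency f1 − f2:

```python
import numpy as np

fs = 1_000_000                       # sample rate, Hz (illustrative)
t = np.arange(0, 0.02, 1 / fs)       # 20 ms of signal
f1, f2 = 40_000.0, 41_000.0          # two "primary" high-frequency tones (assumed)
primary = np.sin(2 * np.pi * f1 * t) + np.sin(2 * np.pi * f2 * t)

# A quadratic nonlinearity generates sum, difference, and harmonic terms:
# sin(a)*sin(b) -> cos(a-b)/2 - cos(a+b)/2, so primary**2 contains f1 - f2 = 1 kHz.
mixed = primary ** 2

spectrum = np.abs(np.fft.rfft(mixed))
freqs = np.fft.rfftfreq(len(mixed), 1 / fs)
band = (freqs > 500) & (freqs < 5000)                     # low-frequency band only
difference_peak = freqs[band][np.argmax(spectrum[band])]  # ≈ 1000 Hz
```

The low-frequency band of the spectrum is dominated by the 1 kHz difference tone, while the original 40–41 kHz primaries and their harmonics sit far above it, which is the essence of the parametric array's narrow low-frequency beam.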
History
Priority for discovery and explanation of the parametric array owes to Peter J. Westervelt, winner of the Lord Rayleigh Medal (currently Professor Emeritus at Brown University), although important experimental work was contemporaneously underway in the former Soviet Union.
According to Muir and Albers, the concept for the parametric array occurred to Dr. Westervelt while he was stationed at the London, England, branch office of the Office of Naval Research in 1951.
According to Albers, he (Westervelt) there first observed an accidental generation of low frequency sound in air by Captain H.J. Round (British pioneer of the superheterodyne receiver) via the parametric array mechanism.
The phenomenon of the parametric array, seen first experimentally by Westervelt in the 1950s, was later explained theoretically in 1960, at a meeting of the Acoustical Society of America. A few years after this, a full paper was published as an extension of Westervelt's classic work on the nonlinear Scattering of Sound by Sound.
Foundations
The foundation for Westervelt's theory of sound generation and scattering in nonlinear acoustic media owes to an application of Lighthill's equation for fluid particle motion.
The application of Lighthill’s theory to the nonlinear acoustic realm yields the Westervelt–Lighthill Equation (WLE). Solutions to this equation have been developed using Green's functions and |
https://en.wikipedia.org/wiki/Beating%20net | A beating net, also known as beating sheet, beat sheet or beating tray, is a device used to collect insects. It consists of a white cloth stretched out on a circular or rectangular frame which may be dismantled for transport. The beating tray is held under a tree or shrub and the foliage is then shaken or beaten with a stick. Insects fall from the plant and land on the cloth. They can then be examined or collected using a pooter.
The insect beating net was devised by George Carter Bignell. |
https://en.wikipedia.org/wiki/Varignon%27s%20theorem%20%28mechanics%29 | Varignon's theorem is a theorem of French mathematician Pierre Varignon (1654–1722), published in 1687 in his book Projet d'une nouvelle mécanique. The theorem states that the torque of a resultant of two concurrent forces about any point is equal to the algebraic sum of the torques of its components about the same point.
In other words, "If many concurrent forces are acting on a body, then the algebraic sum of torques of all the forces about a point in the plane of the forces is equal to the torque of their resultant about the same point."
Proof
Consider a set of force vectors F₁, F₂, ..., Fₙ that concur at a point P in space. Their resultant is:
F = F₁ + F₂ + ... + Fₙ.
The torque of each vector with respect to some other point O is:
Tᵢ = r × Fᵢ,
where r is the displacement vector from O to P. Adding up the torques and pulling out the common factor r, one sees that the result may be expressed solely in terms of F, and is in fact the torque of F with respect to the point O:
T₁ + T₂ + ... + Tₙ = r × F₁ + r × F₂ + ... + r × Fₙ = r × (F₁ + F₂ + ... + Fₙ) = r × F.
This proves the theorem: the sum of the torques about O is the same as the torque of the sum of the forces about the same point. |
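The identity is easy to verify numerically. The sketch below uses arbitrary example vectors (not values from the text) and checks that the summed torques of the components equal the torque of their resultant:

```python
import numpy as np

# Position of the common point of concurrency P, relative to the moment point O:
r = np.array([1.0, 2.0, 0.5])

# Arbitrary concurrent forces acting at P (illustrative values):
forces = [np.array([3.0, -1.0, 2.0]),
          np.array([0.5, 4.0, -1.0]),
          np.array([-2.0, 0.0, 1.5])]

sum_of_torques = sum(np.cross(r, F) for F in forces)   # sum of r x F_i
torque_of_resultant = np.cross(r, sum(forces))         # r x (sum of F_i)
```

The two results agree component by component, because the cross product distributes over vector addition, which is all Varignon's theorem asserts.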
https://en.wikipedia.org/wiki/Operculum%20%28botany%29 | In botany, an operculum or calyptra is a cap-like structure in some flowering plants, mosses, and fungi. It is a covering, hood or lid, describing a feature in plant morphology.
Flowering plants
In flowering plants, the operculum, also known as a calyptra, is the cap-like covering or "lid" of the flower or fruit that detaches at maturity. The operculum is formed by the fusion of sepals and/or petals and is usually shed as a single structure as the flower or fruit matures. The name is also used for the capping tissue of roots, the root cap.
In eucalypts (including Eucalyptus and Corymbia but not Angophora) there may be two opercula: an outer operculum formed by the fusion of the sepals and an inner operculum formed by the fusion of the petals. In that case, the outer operculum is shed early in the development of the bud, leaving a scar around the bud. In those species that lack an outer operculum, there is no bud scar. The inner operculum is shed just before flowering, when the stamens expand and shed their pollen.
In some species of monocotyledon, the operculum is an area of exine covering the pollen aperture.
In Plantago, the capsule has an opening covered by an operculum. When the operculum falls, the seed is sticky and is easily carried by animals that come into contact with it.
Pitcher plants have an operculum above the pitcher that serves to keep out rainwater that would otherwise dilute the digestive juices in the pitcher.
Bryophytes
In bryophytes, the calyptra (plural calyptrae) is an enlarged archegonial venter that protects the capsule containing the embryonic sporophyte. The calyptra is usually lost before the spores are released from the capsule. The shape of the calyptra can be used for identification purposes.
The sporangium of mosses usually opens when its operculum or "lid" falls off, exposing a ring of teeth that control the release of spores.
Fungi
There are two types of sexual spore-bearing asci of ascomycete fungi – those tha |
https://en.wikipedia.org/wiki/Du%20Bois%20singularity | In algebraic geometry, Du Bois singularities are a class of singularities of complex varieties.
The following characterisation of Du Bois singularities is known. Suppose that is a reduced closed subscheme of a smooth scheme .
Take a log resolution of in that is an isomorphism outside , and let be the reduced preimage of in . Then has Du Bois singularities if and only if the induced map is a quasi-isomorphism. |
https://en.wikipedia.org/wiki/Lymphoepithelial%20lesion | In pathology, lymphoepithelial lesion refers to a discrete abnormality that consists of lymphoid cells and epithelium, which may or may not be benign.
It may refer to a benign lymphoepithelial lesion of the parotid gland or benign lymphoepithelial lesion of the lacrimal gland, or may refer to the infiltration of malignant lymphoid cells into epithelium, in the context of primary gastrointestinal lymphoma.
In the context of GI tract lymphoma, it is most often associated with MALT lymphomas.
See also
Gastric lymphoma
MALT lymphoma |
https://en.wikipedia.org/wiki/Transcendental%20equation | In applied mathematics, a transcendental equation is an equation over the real (or complex) numbers that is not algebraic, that is, an equation in which at least one side involves a transcendental function.
Examples include:
A transcendental equation need not be an equation between elementary functions, although most published examples are.
In some cases, a transcendental equation can be solved by transforming it into an equivalent algebraic equation.
Some such transformations are sketched below; computer algebra systems may provide more elaborated transformations.
In general, however, only approximate solutions can be found.
Transformation into an algebraic equation
Ad hoc methods exist for some classes of transcendental equations in one variable to transform them into algebraic equations which then might be solved.
Exponential equations
If the unknown, say x, occurs only in exponents:
applying the natural logarithm to both sides may yield an algebraic equation, e.g.
transforms to , which simplifies to , which has the solutions
This will not work if addition occurs "at the base line", as in
if all "base constants" can be written as integer or rational powers of some number q, then substituting y = q^x may succeed, e.g.
transforms, using y = 2^x, to which has the solutions , hence is the only real solution.
This will not work if squares or higher powers of x occur in an exponent, or if the "base constants" do not "share" a common q.
sometimes, substituting y = x e^x may yield an algebraic equation; after the solutions for y are known, those for x can be obtained by applying the Lambert W function, e.g.:
transforms to which has the solutions hence , where and the denote the real-valued branches of the multivalued function.
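As a numerical illustration of the Lambert W route, the sketch below solves the invented example x·e^x = 5 (not an equation from the text) by plain bisection; by definition of the Lambert W function, the root it finds is the principal-branch value W(5):

```python
import math

def solve_bisect(f, lo, hi, tol=1e-12):
    """Find a root of f in [lo, hi] by bisection, assuming a sign change."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

# The substitution y = x * exp(x) turns x*exp(x) = 5 into y = 5, and the
# Lambert W function inverts the substitution: x = W(5) ≈ 1.3267.
root = solve_bisect(lambda x: x * math.exp(x) - 5.0, 0.0, 3.0)
```

For equations that do not reduce to a known inverse function like W, this kind of bracketing is the general fallback: as the text notes, in general only approximate solutions can be found.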
Logarithmic equations
If the unknown x occurs only in arguments of a logarithm function:
applying exponentiation to both sides may yield an algebraic equation, e.g.
transforms, using exponentiation to base to which has the solutions I |
https://en.wikipedia.org/wiki/Pico-ITXe | Pico-ITXe is a PC Pico-ITX motherboard specification created by VIA Technologies and SFF-SIG. It was announced by VIA Technologies on October 29, 2008, and released in December 2008. The Pico-ITXe specifications call for the board to be , which is half the area of Nano-ITX, and 12 layers deep. The processor can be a VIA C7 that uses VIA's NanoBGA2 technology. It uses DDR2 667/533 SO-DIMM memory, with support for up to 2GB. Video is supplied by VIA's Chrome9 HC3 GPU with built-in MPEG-2, 4, WMV9, and VC1 decoding acceleration. The BIOS is a 4 or 8 Mbit Award BIOS.
EPIA-P710
The first motherboard that was produced under this specification is called EPIA-P710. It was released in December 2008. It is and 12 layers deep. The operating temperature range is from 0 °C to about 50 °C. The operating humidity level (relative and non-condensing) can be from 0% to about 95%. It uses a 1 GHz VIA C7-M processor and a VIA VX800 chipset, and is RoHS compliant. It has onboard VGA video-out. Gigabit Ethernet is supplied by VIA's VT6122, but requires a connector board. HD 5.1-channel audio is provided by a VIA VT1708B chip.
The following are the standard I/O connections:
2× SUMIT QMS/QFS series connectors by Samtec
1× GigaLAN pin header
1× Audio Pin Connector for Line-out, Line-in, MIC-in
1× Front Panel pin header
1× CRT pin header
1× UDMA 100/133 44-pin PATA
1× 3 Gbit/s SATA
DVI and LVDS video-out
USB 2.0
COM
PS/2 Mouse & Keyboard
Up to four I/O expansion boards can be stacked upon each other using the SUMIT interface.
See also
Pico-ITX
Mini-ITX
Mobile-ITX
EPIA
Ultra-Mobile PC |
https://en.wikipedia.org/wiki/Ceragon | Ceragon Networks Ltd. is a networking equipment vendor, focused on wireless point-to-point connectivity, mostly used for wireless backhaul by mobile operators and wireless service providers as well as private businesses.
Ceragon's products include Short-Haul and Long-Haul wireless point-to-point systems in the licensed microwave (4–42 GHz) and millimeter-wave (57–88 GHz and, in the future, up to 170 GHz) spectrum ranges.
5G Wireless Backhaul Services
Ceragon is also a provider of 5G wireless transport, enabling broadband sites to be connected to the core network wirelessly. This is a common way to connect areas to broadband networks when, for various reasons, an optical-fiber connection is not an option.
Corporate history
Established in 1996 under the name Giganet, Ceragon Networks was first listed on the NASDAQ on September 6, 2000 (symbol: CRNT).
Ceragon designs and manufactures high-capacity communication systems for wireless backhaul, mid-haul, and front-haul – addressing the segment of the cellular market that connects a typical cell site to an operator's core network (backhaul) and different cell site functions that reside in separate geographical locations (mid-haul and front-haul). Ceragon provides wireless equipment with capacities of up to 20 Gbps and plans to add products, based on higher frequency bands, to support up to 100 Gbps.
Ceragon markets its products under the IP-20 and IP-50 brands. Ceragon has a customer base of over 230 service providers of all sizes, and hundreds of private networks in more than 130 countries across the globe.
Ceragon has numerous sales offices located throughout North and South America, EMEA, and Asia, handling direct sales. Partnerships with leading distributors, VARs, and system integrators around the world provide an active indirect channel. Its US headquarters was opened in 1999 and its European headquarters in 2000.
Ceragon reported worldwide revenue of $290.8 million US dollars for 20 |
https://en.wikipedia.org/wiki/Gibbard%E2%80%93Satterthwaite%20theorem | In social choice theory, the Gibbard–Satterthwaite theorem is a result published independently by philosopher Allan Gibbard in 1973 and economist Mark Satterthwaite in 1975. It deals with deterministic ordinal electoral systems that choose a single winner. It states that for every voting rule, one of the following three things must hold:
The rule is dictatorial, i.e. there exists a distinguished voter who can choose the winner; or
The rule limits the possible outcomes to two alternatives only; or
The rule is susceptible to tactical voting: in certain conditions, a voter's sincere ballot may not best defend their opinion.
While the scope of this theorem is limited to ordinal voting, Gibbard's theorem is more general, in that it deals with processes of collective decision that may not be ordinal: for example, voting systems where voters assign grades to candidates. Gibbard's 1978 theorem and Hylland's theorem are even more general and extend these results to non-deterministic processes, i.e. where the outcome may not only depend on the voters' actions but may also involve a part of chance.
Informal description
Consider three voters named Alice, Bob and Carol, who wish to select a winner among four candidates named , , and . Assume that they use the Borda count: each voter communicates his or her preference order over the candidates. For each ballot, 3 points are assigned to the top candidate, 2 points to the second candidate, 1 point to the third one and 0 points to the last one. Once all ballots have been counted, the candidate with the most points is declared the winner.
Assume that their preferences are as follows.
If the voters cast sincere ballots, then the scores are: . Hence, candidate will be elected, with 7 points.
But Alice can vote strategically and change the result. Assume that she modifies her ballot, in order to produce the following situation.
Alice has strategically upgraded candidate and downgraded candidate . Now, the scores are: . Hen |
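The manipulation can be reproduced in a few lines of Python; the ballots below are illustrative stand-ins (not the article's exact preference profile), chosen so that burying the sincere winner changes the outcome in the manipulator's favor.

```python
def borda_winner(ballots):
    """Each ballot ranks all candidates, best first; returns the Borda winner."""
    n = len(ballots[0])
    scores = {}
    for ranking in ballots:
        # top candidate gets n-1 points, next n-2, ..., last gets 0
        for points, cand in zip(range(n - 1, -1, -1), ranking):
            scores[cand] = scores.get(cand, 0) + points
    # iterate in sorted order so a hypothetical tie breaks alphabetically
    return max(sorted(scores), key=scores.get)

sincere = [
    ["b", "a", "c", "d"],  # Alice's true preference: b > a > c > d
    ["a", "b", "c", "d"],  # Bob
    ["c", "a", "b", "d"],  # Carol
]
winner = borda_winner(sincere)        # "a" wins with 7 points to b's 6

# Alice insincerely buries a at the bottom of her ballot:
strategic = [["b", "d", "c", "a"]] + sincere[1:]
new_winner = borda_winner(strategic)  # now "b" wins, whom Alice prefers to "a"
```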
https://en.wikipedia.org/wiki/Electromyography | Electromyography (EMG) is a technique for evaluating and recording the electrical activity produced by skeletal muscles. EMG is performed using an instrument called an electromyograph to produce a record called an electromyogram. An electromyograph detects the electric potential generated by muscle cells when these cells are electrically or neurologically activated. The signals can be analyzed to detect abnormalities, activation level, or recruitment order, or to analyze the biomechanics of human or animal movement. Needle EMG is an electrodiagnostic medicine technique commonly used by neurologists. Surface EMG is a non-medical procedure used to assess muscle activation by several professionals, including physiotherapists, kinesiologists and biomedical engineers. In computer science, EMG is also used as middleware in gesture recognition towards allowing the input of physical action to a computer as a form of human-computer interaction.
Clinical uses
EMG testing has a variety of clinical and biomedical applications. Needle EMG is used as a diagnostics tool for identifying neuromuscular diseases, or as a research tool for studying kinesiology, and disorders of motor control. EMG signals are sometimes used to guide botulinum toxin or phenol injections into muscles. Surface EMG is used for functional diagnosis and during instrumental motion analysis. EMG signals are also used as a control signal for prosthetic devices such as prosthetic hands, arms and lower limbs.
An acceleromyograph may be used for neuromuscular monitoring in general anesthesia with neuromuscular-blocking drugs, in order to avoid postoperative residual curarization (PORC).
Except in the case of some purely primary myopathic conditions, EMG is usually performed with another electrodiagnostic medicine test that measures the conducting function of nerves. This is called a nerve conduction study (NCS). Needle EMG and NCSs are typically indicated when there is pain in the limbs, weakness from spinal nerv
https://en.wikipedia.org/wiki/Balayage | In potential theory, a mathematical discipline, balayage (from French: balayage "scanning, sweeping") is a method devised by Henri Poincaré for reconstructing a harmonic function in a domain from its values on the boundary of the domain.
In modern terms, the balayage operator maps a measure μ on a closed domain D to a measure ν on the boundary ∂ D, so that the Newtonian potentials of μ and ν coincide outside . The procedure is called balayage since the mass is "swept out" from D onto the boundary.
For x in D, the balayage of δx yields the harmonic measure νx corresponding to x. Then the value of a harmonic function f at x is equal to |
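In symbols, this representation of a harmonic function through the harmonic measures ν_x takes the standard form (assuming f is harmonic in D and continuous up to ∂D):

```latex
f(x) = \int_{\partial D} f(y)\, \mathrm{d}\nu_x(y), \qquad x \in D,
```

where ν_x is the balayage of the Dirac measure δ_x onto ∂D.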
https://en.wikipedia.org/wiki/Pelargonium%20inquinans | Pelargonium inquinans, the scarlet geranium, is a species of plant in the genus Pelargonium (family Geraniaceae). It is a shrub endemic to South Africa, ranging from Mpumalanga to KwaZulu-Natal and Eastern Cape provinces. It is one of the ancestors of the hybrid line of horticultural pelargoniums, referred to as the zonal group. They can easily be propagated by seeds and cuttings.
Etymology and history
The generic name Pelargonium in scientific Latin derives from the Greek pelargós (πελαργός), meaning "stork", the shape of the fruit evoking the beak of that wader. The specific epithet inquinans ("staining") derives from the Latin verb inquino, "to dirty, to soil", because the leaves leave a brown trace on the fingers when touched.
Pelargonium inquinans was grown in the garden of the Bishop of London, Henry Compton, an admirer of exotic plants; when he died in 1713, the plant was found in his collection. The first illustration, from 1732, was made from a plant growing in the garden of British botanist James Sherard. Many hybrids have been derived from this species, but the true wild species can be recognized by its red glandular hairs.
Description
In the wild, Pelargonium inquinans is a small shrub, about 2 m tall, branched, with young succulent twigs becoming woody with age, bearing red glandular hairs.
The evergreen leaves, borne on long petioles, are orbicular (like those of Pelargonium × hortorum but without dark markings), incised into 5 to 7 crenate lobes, with a viscous pubescence giving a cottony appearance to both sides. To the touch, the leaves stain the fingers rust-brown.
The scarlet red flowers, sometimes pink or white, are grouped by 10 to 20 in pseudo-umbels. They are bilaterally symmetric (zygomorphic), with the 2 upper petals a little smaller than the 3 lower petals. Stamens and style are exserted. The filaments of the seven fertile stamens are joined over most of their length.
In South Africa flowering is spread throughout the year.
The pericardial fruit |
https://en.wikipedia.org/wiki/KIVA%20%28software%29 | KIVA is a family of Fortran-based Computational Fluid Dynamics software developed by Los Alamos National Laboratory (LANL). The software predicts complex fuel and air flows as well as ignition, combustion, and pollutant-formation processes in engines. The KIVA models have been used to understand combustion chemistry processes, such as auto-ignition of fuels, and to optimize diesel engines for high efficiency and low emissions. General Motors has used KIVA in the development of direct-injection, stratified charge gasoline engines as well as the fast burn, homogeneous-charge gasoline engine. Cummins reduced development time and cost by 10%–15% using KIVA to develop its high-efficiency 2007 ISB 6.7-L diesel engine that was able to meet 2010 emission standards in 2007. At the same time, the company realized a more robust design and improved fuel economy while meeting all environmental and customer constraints.
History
LANL's Computational Fluid Dynamics expertise hails from the very beginning of the Manhattan Project in the 1940s. When the United States found itself in the midst of the first energy crisis in the 1970s, this core Laboratory capability transformed into KIVA, an internal combustion engine modeling tool designed to help make automotive engines more fuel-efficient and cleaner-burning. A "kiva" is actually a round Pueblo ceremonial chamber that is set underground and entered from above by means of a ladder through its roof; drawing on LANL's southwestern heritage, an analogy is made with the typical engine cylinder in which the entrance and exit of gases is achieved through valves set in the cylinder.
The first public release of KIVA was made in 1985 through the National Energy Software Center (NESC) at Argonne National Laboratory, which served at the time as the official distribution hub for Department of Energy-sponsored software. Distribution of KIVA continued through the Energy Science and Technology Software Center (ESTSC) in Oak Ridge, Tennessee un |
https://en.wikipedia.org/wiki/Zeev%20Rudnick | Zeev Rudnick or Ze'ev Rudnick (born 1961 in Haifa, Israel) is a mathematician, specializing in number theory and in mathematical physics, notably quantum chaos. Rudnick is a professor at the School of Mathematical Sciences and the Cissie and Aaron Beare Chair in Number Theory at Tel Aviv University.
Education
Rudnick received his PhD from Yale University in 1990 under the supervision of Ilya Piatetski-Shapiro and Roger Evans Howe.
Career
Rudnick joined Tel Aviv University in 1995, after working as an assistant professor at Princeton and Stanford. In 2003–4 Rudnick was a Leverhulme visiting professor at the University of Bristol and in 2008–2010 and 2015–2016 he was a member of the Institute for Advanced Study at Princeton.
In 2012, Rudnick was inducted as a fellow of the American Mathematical Society.
Research
Rudnick has been studying different aspects of quantum chaos and number theory. He has contributed to one of the discoveries concerning the Riemann zeta function, namely, that the Riemann zeros appear to display the same statistics as those which are believed to be present in energy levels of quantum chaotic systems and described by random matrix theory. Together with Peter Sarnak, he has formulated the Quantum Unique Ergodicity conjectures for eigenfunctions on negatively curved manifolds, and has investigated the question arising from Quantum Chaos in other arithmetic models such as the Quantum Cat map (with Par Kurlberg) and the flat torus (with CP Hughes and with Jean Bourgain). Another interest is the interface between function field arithmetic and corresponding problems in number fields.
Education
Ph.D., 1990, Yale University.
M.Sc., 1985, The Hebrew University, Jerusalem.
B.Sc., 1984, Bar-Ilan University, Ramat Gan.
Awards and fellowships
ERC Advanced Grants, 1.7 million euro, 2013–2018 and 2019–2024.
Fellow of the American Mathematical Society, 2012–.
Annales Henri Poincaré Distinguished Paper Award, 2011.
Erdős Prize of the Isra |
https://en.wikipedia.org/wiki/Histone%20fold | A histone fold is a structurally conserved motif found near the C-terminus in every core histone sequence in a histone octamer responsible for the binding of histones into heterodimers.
The histone fold averages about 70 amino acids and consists of three alpha helices connected by two short, unstructured loops. When not in the presence of DNA, the core histones assemble into head-to-tail intermediates via extensive hydrophobic interactions between their histone fold domains, in a "handshake motif": H3 and H4 first assemble into heterodimers, two of which then fuse to form a tetramer, while H2A and H2B form heterodimers. The histone fold is also found in TATA box-binding protein-associated factors, core components of the transcription machinery.
The histone fold's evolution can be traced to different combinations of ancestral peptide sets that form a helix-strand-helix motif, derived from three ancestral fragments. These peptide chains are found in archaeal histones, which may themselves derive from a eukaryotic-like H3-H4 tetramer. Archaeal single-chain histones also occur in the bacterium Aquifex aeolicus, consistent with lateral gene transfer from the archaeal and eukaryotic lineages into bacteria. These folds assemble into an octameric, articulated protein endoskeleton for DNA compaction: a central segment of the fold mediates histone dimerization, while the end segments form dimer-dimer contacts and cap the protein superhelix of the octamer.
One species that has been studied is Drosophila: the subunits of its transcription initiation factor contain specific amino acid sequences with histone-fold characteristics, and two such proteins make up the subunits. Considering just the histone fold motif in Drosophila, the protein-protein and the protein-DNA interaction of the co
https://en.wikipedia.org/wiki/Slope%20number | In graph drawing and geometric graph theory, the slope number of a graph is the minimum possible number of distinct slopes of edges in a drawing of the graph in which vertices are represented as points in the Euclidean plane and edges are represented as line segments that do not pass through any non-incident vertex.
Complete graphs
Although closely related problems in discrete geometry had been studied earlier, e.g. by and ,
the problem of determining the slope number of a graph was introduced by , who showed that the slope number of an -vertex complete graph is exactly . A drawing with this slope number may be formed by placing the vertices of the graph on a regular polygon.
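A quick numerical check of the regular-polygon construction (a sketch using floating-point edge directions reduced mod π): for the complete graph K_n drawn on a regular n-gon, the edges fall into exactly n slope classes.

```python
from math import atan2, cos, sin, pi

def distinct_slopes_regular_kn(n):
    """Count distinct edge slopes of K_n drawn on a regular n-gon."""
    pts = [(cos(2 * pi * k / n), sin(2 * pi * k / n)) for k in range(n)]
    slopes = set()
    for i in range(n):
        for j in range(i + 1, n):
            dx = pts[j][0] - pts[i][0]
            dy = pts[j][1] - pts[i][1]
            ang = atan2(dy, dx) % pi   # direction mod pi identifies the slope
            key = round(ang, 6)        # absorb floating-point noise
            if key == round(pi, 6):    # angles ~pi and ~0 are the same slope
                key = 0.0
            slopes.add(key)
    return len(slopes)

# n vertices on a regular polygon give exactly n distinct slopes:
counts = [distinct_slopes_regular_kn(n) for n in range(3, 11)]
```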
Relation to degree
The slope number of a graph of maximum degree is clearly at least , because at most two of the incident edges at a degree- vertex can share a slope. More precisely, the slope number is at least equal to the linear arboricity of the graph, since the edges of a single slope must form a linear forest, and the linear arboricity in turn is at least .
There exist graphs with maximum degree five that have arbitrarily large slope number. However, every graph of maximum degree three has slope number at most four; the result of for the complete graph shows that this is tight. Not every set of four slopes is suitable for drawing all degree-3 graphs: a set of slopes is suitable for this purpose if and only if it forms the slopes of the sides and diagonals of a parallelogram. In particular, any degree 3 graph can be drawn so that its edges are either axis-parallel or parallel to the main diagonals of the integer lattice. It is not known whether graphs of maximum degree four have bounded or unbounded slope number.
Planar graphs
As showed, every planar graph has a planar straight-line drawing in which the number of distinct slopes is a function of the degree of the graph. Their proof follows a construction of for bounding the angular resolution of planar graphs as a function of d |
https://en.wikipedia.org/wiki/Netstalking | Netstalking is a searching activity carried out within the limits of Internet, aimed at finding little-known, inaccessible, forbidden, shocking and rarely-visited objects, including their analysis, systematisation and storage. The objects found are either aesthetically pleasing or informationally fulfilling to a seeker.
This mostly includes the deep web and the darknet, partially IoT devices, deprecated or developing web protocols. Although irrational, the activity develops web searching skills and mindful work with information.
The term of "netstalking" was most likely created in 2009 in Russian part of the Net, and refers to S.T.A.L.K.E.R.
Methods of netstalking
In netstalking, there are two general methods for finding unusual information: deli-search and net-random. Deli-search, or "deliberated search", is a targeted search for objects of interest whose characteristics are already known. This method usually uses the language of search queries and web archives, with which one can view old or deleted versions of pages. Net-random searches for hidden and unknown information through trial and error. For netstalkers, the second method is considered the most popular way to search for information, as it allows network researchers to find undefined hidden resources. Net-randoming is done either by scanning IP address ranges or by using content randomizers, such as PetitTube. Programs used for scanning include Advanced IP Scanner, Nmap/Zenmap, NESCA, and RouterScan by Stas'm.
Search areas
Netstalkers analyze the entire Internet, which is traditionally divided into several conditional segments.
Surface web
The surface web is the public Internet. In this part, one can find everything that is used by the average user of the network: social networks, blogs, encyclopaedias, news sites and others. In other words, the surface web is all that can be found using ordinary search engines (Google, Yahoo and others). The surface web |
https://en.wikipedia.org/wiki/Black%20rose%20symbolism | Black roses do not naturally exist but are symbols with different meanings or for various things.
Flowers
The flowers commonly called black roses do not really exist in that color; instead, they have a very dark shade, as in the "Black Magic", "Barkarole", "Black Beauty" and "Baccara" varieties. They can also be artificially colored.
In the language of flowers, roses have many different meanings. Black roses symbolize ideas such as hatred, despair, death or rebirth.
Anarchism
Black Rose Books is the name of the Montreal anarchist publisher and small press imprint headed by the libertarian-municipalist and anarchist Dimitrios Roussopoulos. One of the two anarchist bookshops in Sydney is Black Rose Books which has existed in various guises since 1982.
The Black Rose was the title of a respected journal of anarchist ideas published in the Boston area during the 1970s, as well as the name of an anarchist lecture series addressed by notable anarchist and libertarian socialists (including Murray Bookchin and Noam Chomsky) into the 1990s.
Black Rose Labour (organisation) is the name of a factional political organisation associated with the United Kingdom Labour Party, which defines itself as Libertarian Socialist.
Black Rose Anarchist Federation is a political organization that was founded in 2014, with a few local and regional groups in the United States.
See also
Anarchist symbolism |
https://en.wikipedia.org/wiki/Animal%20communication | Animal communication is the transfer of information from one or a group of animals (sender or senders) to one or more other animals (receiver or receivers) that affects the current or future behavior of the receivers. Information may be sent intentionally, as in a courtship display, or unintentionally, as in the transfer of scent from predator to prey with kairomones. Information may be transferred to an "audience" of several receivers. Animal communication is a rapidly growing area of study in disciplines including animal behavior, sociology, neurology and animal cognition. Many aspects of animal behavior, such as symbolic name use, emotional expression, learning and sexual behavior, are being understood in new ways.
When the information from the sender changes the behavior of a receiver, the information is referred to as a "signal". Signalling theory predicts that for a signal to be maintained in the population, both the sender and receiver should usually receive some benefit from the interaction. Signal production by senders and the perception and subsequent response of receivers are thought to coevolve. Signals often involve multiple mechanisms, e.g. both visual and auditory, and for a signal to be understood, the coordinated behaviour of both sender and receiver requires careful study.
Animal languages
The sounds animals make are important because they communicate the animals' state. Some animal species have been taught simple versions of human languages. Animals can use, for example, electrolocation and echolocation to communicate about prey and location. Keski-Korsu suggests that a challenge of human/animal communication is that humans do not recognize animals as self-aware and deliberately communicating.
Modes
Visual
Gestures: Most animals understand communication through a visual display of distinctive body parts or bodily movements. Animals will reveal or accentuate a body part to relay certain information. The parent herring gull displays its bright yell |
https://en.wikipedia.org/wiki/Merrimack%20Pharmaceuticals | Merrimack Pharmaceuticals is a pharmaceutical company based in Cambridge, Massachusetts, United States. They specialize in developing drugs for the treatment of cancer.
Merrimack's first FDA-approved drug, approved in 2015, was Onivyde, a liposome-encapsulated version of irinotecan used for treating pancreatic adenocarcinoma. It was approved for use in the European Union the following year.
History
Merrimack was founded by a group of scientists from MIT and Harvard University in 2000.
In 2016, Merrimack had 426 full-time employees, 103 of whom held an MD or a PhD.
In October 2016, CEO Robert Mulroy resigned and the company announced they would be laying off 20% of its employees. In January 2017, interim CEO Gary Crocker resigned and the board of directors appointed Richard Peters to be president and CEO. Peters previously worked at Sanofi and was a faculty member at Harvard University.
In January 2017, French pharmaceutical company Ipsen announced they would be purchasing Onivyde from Merrimack for approximately $1 billion.
On November 13, 2018, statistical programming director Songjiang Wang received six months in prison and one year of supervised release, following a guilty verdict handed down by a United States District Judge in July 2018 for securities fraud and conspiracy to commit securities fraud. On December 20, 2019, the United States Securities and Exchange Commission also charged Wang with insider trading.
Pipeline
Merrimack has four drugs in clinical development.
MM-302 – HER2 targeting antibody-drug conjugate
MM-121 (seribantumab) – anti-HER3 monoclonal antibody
MM-141 (istiratumab) – IGF-1R and HER3 bispecific monoclonal antibody
MM-151 – a mixture of anti-EGFR monoclonal antibodies |
https://en.wikipedia.org/wiki/Fermat%20Prize | The Fermat prize of mathematical research biennially rewards research works in fields where the contributions of Pierre de Fermat have been decisive:
Statements of variational principles
Foundations of probability and analytic geometry
Number theory.
The spirit of the prize is focused on rewarding the results of research accessible to the greatest number of professional mathematicians within these fields. The Fermat prize was created in 1989 and is awarded once every two years in Toulouse by the Institut de Mathématiques de Toulouse. The amount of the Fermat prize has been fixed at 20,000 Euros for the twelfth edition (2011).
Previous prize winners
Pierre Fermat medal
There has also been a Pierre Fermat medal, which has been awarded for example to chemist Linus Pauling (1957), mathematician Ernst Peschl (1965) and botanist Francis Raymond Fosberg.
Junior Fermat Prize
The Junior Fermat Prize is a mathematical prize, awarded every two years to a student in the first four years of university for a contribution to mathematics. The amount of the prize is 2000 Euros.
See also
List of mathematics awards |
https://en.wikipedia.org/wiki/Mageo | Mageo (until May 1998, MaMedia) was the oldest Czech message board, active from the end of 1995 until 2017. Milan Votava is considered the founder and spiritual father of the project, along with co-founder Aleš Němeček. In June 1997, the address mamedia.cz was recognized as the most visited Czech server with more than 2.5 million accesses per month. At the end of September, 2017, the operation of the discussion server was terminated and the domain was redirected to the project of a new social network with integrated game environment.
History
Mageo's predecessor, the MaMedia discussion board, was founded in August 1995 by Milan Votava and Aleš Němeček, who established the legal entities MA Media s.r.o. and MAMEDIA.COM s.r.o. (the initial letters MA were an abbreviation of the names Milan and Aleš). At first, it seemed that the project would not last long due to financial problems; however, with the help of Pavel Vojíř (then a Reflex reporter, Playboy editor, editor-in-chief of Melodie and editor-in-chief of Public Reality), MaMedia was able to survive and stay viable.
Due to the impossibility of registering the domain www.mamedia.com, MaMedia was renamed Mageo in May 1998; there were also fundamental changes in structure and graphics that lasted until its end. The genealogical portal genea.cz is based on the original Mageo auditorium from 1998.
The popularity of the MaMedia server increased rapidly, as evidenced by regular user events (the MaMedia/Mageo steamer, an event held every year and moderated several times in the past by Leoš Mareš), as well as a hacker attack on February 15, 1997. At the turn of the millennium, local phenomena (Drasťák, a serialized novel; the so-called MUSIL mania) grew into real-world events (sabotage of the Miss Internet competition; Štěpán Turek as a stellar infantry general in the 2nd year of Česko hledá SuperStar). Many people who have made a significant contribution to the Czech Internet have passe
https://en.wikipedia.org/wiki/Kernel%20method | In machine learning, kernel machines are a class of algorithms for pattern analysis, whose best known member is the support-vector machine (SVM). These methods involve using linear classifiers to solve nonlinear problems. The general task of pattern analysis is to find and study general types of relations (for example clusters, rankings, principal components, correlations, classifications) in datasets. For many algorithms that solve these tasks, the data in raw representation have to be explicitly transformed into feature vector representations via a user-specified feature map: in contrast, kernel methods require only a user-specified kernel, i.e., a similarity function over all pairs of data points computed using inner products. The feature map in kernel machines is infinite dimensional but only requires a finite dimensional matrix from user-input according to the Representer theorem. Kernel machines are slow to compute for datasets larger than a couple of thousand examples without parallel processing.
Kernel methods owe their name to the use of kernel functions, which enable them to operate in a high-dimensional, implicit feature space without ever computing the coordinates of the data in that space, but rather by simply computing the inner products between the images of all pairs of data in the feature space. This operation is often computationally cheaper than the explicit computation of the coordinates. This approach is called the "kernel trick". Kernel functions have been introduced for sequence data, graphs, text, images, as well as vectors.
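A minimal sketch of the kernel trick with a degree-2 polynomial kernel in two dimensions: the implicit evaluation (x·y + 1)² agrees with an inner product in an explicit six-dimensional feature space, which the kernel never has to construct.

```python
from math import sqrt

def poly_kernel(x, y):
    """Implicit evaluation: k(x, y) = (x . y + 1)**2, no feature map needed."""
    dot = sum(a * b for a, b in zip(x, y))
    return (dot + 1.0) ** 2

def explicit_features(x):
    """Explicit feature map phi for the same kernel (2D input -> 6D space)."""
    x1, x2 = x
    s = sqrt(2.0)
    return [1.0, s * x1, s * x2, x1 * x1, x2 * x2, s * x1 * x2]

x, y = (1.0, 2.0), (3.0, 0.5)
k_implicit = poly_kernel(x, y)
k_explicit = sum(a * b
                 for a, b in zip(explicit_features(x), explicit_features(y)))
# Both equal 25.0: the kernel computes the 6-D inner product without
# ever materializing the 6-D feature vectors.
```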
Algorithms capable of operating with kernels include the kernel perceptron, support-vector machines (SVM), Gaussian processes, principal components analysis (PCA), canonical correlation analysis, ridge regression, spectral clustering, linear adaptive filters and many others.
Most kernel algorithms are based on convex optimization or eigenproblems and are statistically well-founded. Typically, their statistical prop |
https://en.wikipedia.org/wiki/Thermodynamic%20beta | In statistical thermodynamics, thermodynamic beta, also known as coldness, is the reciprocal of the thermodynamic temperature of a system: (where is the temperature and is Boltzmann constant).
It was originally introduced in 1971 (as "coldness function") by , one of the proponents of the rational thermodynamics school of thought, based on earlier proposals for a "reciprocal temperature" function.
Thermodynamic beta has units reciprocal to that of energy (in SI units, reciprocal joules, J−1). In non-thermal units, it can also be measured in bytes per joule, or more conveniently, gigabytes per nanojoule; 1 K−1 is equivalent to about 13,062 gigabytes per nanojoule; at room temperature T = 300 K, β ≈ 44 GB/nJ ≈ 39 eV−1 ≈ 2.4×10^20 J−1. The conversion factor is 1 GB/nJ ≈ 5.5×10^18 J−1.
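These unit conversions can be checked directly from the Boltzmann constant; a sketch, counting entropy in bytes (one byte = 8 ln 2 nats):

```python
from math import log

K_B = 1.380649e-23          # Boltzmann constant, J/K (exact SI value)
LN2 = log(2.0)

T = 300.0                   # room temperature, K
beta_J = 1.0 / (K_B * T)    # beta in reciprocal joules (entropy in nats)

# One byte is 8 bits = 8*ln(2) nats, and 1 GB/nJ corresponds to
# 1e18 bytes per joule:
beta_GB_per_nJ = beta_J / (8.0 * LN2) / 1e18          # ~43.5 GB/nJ

# Coldness of 1 K^-1 expressed in GB/nJ (~13,062):
gb_per_nj_per_inverse_kelvin = (1.0 / K_B) / (8.0 * LN2) / 1e18
```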
Description
Thermodynamic beta is essentially the connection between the information-theoretic and statistical-mechanical interpretations of a physical system, through its entropy and the thermodynamics associated with its energy. It expresses the response of entropy to an increase in energy: if a system is challenged with a small amount of energy, then β describes the amount by which the system will randomize.
Via the statistical definition of temperature as a function of entropy, the coldness function can be calculated in the microcanonical ensemble from the formula β = (1/k_B) (∂S/∂E)_{V,N} (i.e., the partial derivative of the entropy S with respect to the energy E at constant volume V and particle number N).
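The definition just described is plain SI arithmetic; a quick numerical sketch (using the exact 2019 SI value of the Boltzmann constant, and treating one gigabyte of entropy as 8×10^9 ln 2 nats for the unit conversion):

```python
import math

k_B = 1.380649e-23          # Boltzmann constant, J/K (exact in SI since 2019)
T = 300.0                   # room temperature, K

beta = 1.0 / (k_B * T)      # thermodynamic beta, in reciprocal joules

# one gigabyte of entropy is 8e9 * ln(2) nats, so 1 GB/nJ = 8e9*ln(2)/1e-9 J^-1
gb_per_nj = beta / (8e9 * math.log(2) / 1e-9)

print(beta, gb_per_nj)      # ≈ 2.41e20 J^-1, ≈ 43.5 GB/nJ
```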
Advantages
Though completely equivalent in conceptual content to temperature, β is generally considered a more fundamental quantity than temperature owing to the phenomenon of negative temperature, in which β is continuous as it crosses zero whereas T has a singularity.
In addition, β has the advantage of being easier to understand causally: if a small amount of heat is added to a system, β is the increase in entropy divided by the increase in heat. Temperature is difficult to interpret in the same sense, as it is not possible to "add entropy" to a system excep |
https://en.wikipedia.org/wiki/Ion%20beam%20lithography | Ion-beam lithography is the practice of scanning a focused beam of ions in a patterned fashion across a surface in order to create very small structures such as integrated circuits or other nanostructures.
Details
Ion-beam lithography has been found to be useful for transferring high-fidelity patterns on three-dimensional surfaces.
Ion-beam lithography offers higher resolution patterning than UV, X-ray, or electron beam lithography because these heavier particles have more momentum. This gives the ion beam a smaller wavelength than even an e-beam and therefore almost no diffraction. The momentum also reduces scattering in the target and in any residual gas. There is also a reduced potential radiation effect to sensitive underlying structures compared to x-ray and e-beam lithography.
Ion-beam lithography, or ion-projection lithography, is similar to electron-beam lithography, but uses much heavier charged particles: ions. In addition to diffraction being negligible, ions move in straighter paths than electrons do, both through vacuum and through matter, so there seems to be a potential for very high resolution. Secondary particles (electrons and atoms) have a very short range, because of the lower speed of the ions. On the other hand, intense sources are more difficult to make, and higher acceleration voltages are needed for a given range. Due to the higher energy loss rate, higher particle energy for a given range and the absence of significant space charge effects, shot noise will tend to be greater.
Fast-moving ions interact differently with matter than electrons do, and, owing to their higher momentum, their optical properties are different. They have a much shorter range in matter and move straighter through it. At low energies, at the end of the range, they lose more of their energy to the atomic nuclei rather than to the electrons, so that atoms are dislocated rather than ionized. If the ions do not diffuse out of the resist, they dope it. The energy loss in m |
https://en.wikipedia.org/wiki/Human%E2%80%93animal%20breastfeeding | Human to animal breastfeeding has been practiced in some different cultures during various time periods. The practice of breastfeeding or suckling between humans and other species occurred in both directions: women sometimes breastfed young animals, and animals were used to suckle babies and children. Animals were used as substitute wet nurses for infants, particularly after the rise of syphilis increased the health risks of wet nursing. Goats and donkeys were widely used to feed abandoned babies in foundling hospitals in 18th- and 19th-century Europe. Breastfeeding animals has also been practised, whether for perceived health reasons – such as to toughen the nipples and improve the flow of milk – or for religious and cultural purposes. A wide variety of animals have been used for this purpose, including puppies, kittens, piglets and monkeys.
Breastfeeding by animals of humans
Terracotta feeding bottles surviving from the third millennium BC in Sumeria indicate that children who were not being breastfed were receiving animal milk, probably from cows. It is possible that some infants directly sucked lactating animals, which served as alternatives to wet nurses. Unless another lactating woman was available, a mother who lacked enough breast milk was likely to lose her child. To avert that possibility if a wet nurse was not available, an animal such as a donkey, cow, goat, sheep or dog could be employed. Suckling directly was preferable to milking an animal and giving the milk, as contamination by microbes during the milking process could lead to the infant contracting a deadly diarrheal disease. It was not until as late as the 1870s that stored animal milk became safe to drink due to the invention of pasteurisation and sterilisation.
The Jewish Talmud permits children to suckle animals if the child's welfare dictates it.
Mythology and stories
The suckling of infants by animals was a repeated theme in classical mythology. Most famously, twin brothers Romulus and R |
https://en.wikipedia.org/wiki/Project%20MinE | Project MinE is an independent large-scale whole genome research project that was initiated by two patients with amyotrophic lateral sclerosis and started on World ALS Day, June 21, 2013.
The symptoms of amyotrophic lateral sclerosis are caused by degeneration of motor nerve cells (motor neurons) in the spinal cord, brainstem, and motor cortex. The exact cause of this degeneration is unknown but it is thought that environmental exposures and genetic factors play a role in susceptibility to the disease. In 5-10% of patients the family history is positive for ALS. However, it is not always possible to establish the mode of inheritance in each pedigree and not all familial cases may suffer from a genuine Mendelian or monogenic disorder. Autosomal-dominant mutations in the C9orf72 and the SOD1 gene are found in a substantial number of familial ALS cases. Mutations in other genes (such as VAPB [2], ANG, TARDBP and FUS) have been reported, but are found at a much lower frequency and with variable penetrance, suggesting the involvement of other genes.
Project MinE is a research project to systematically interrogate the human genome for both common and rare genetic variation in ALS (genetic "data mining" explains the project name). The project consists of two phases and combines a genome-wide association study (GWAS) study with whole genome sequencing:
Phase 1 of Project MinE consists of whole genome sequencing of 300 DNA samples of ALS patients to detect relevant haplotypes with high fidelity (variant calling & haplotype detection). Subsequently, the current GWAS for ALS will be expanded by increasing the number of DNA samples investigated to 15,000 ALS samples and 20,000 healthy controls (35,000 samples in total), and imputation using the whole genome sequencing results will be performed. Combining these two processes means that a relatively small group of whole genome sequenced DNA samples will extend the > 500,000 single nucleotide polymorphi |
https://en.wikipedia.org/wiki/Swastika%20%28Germanic%20Iron%20Age%29 | The swastika design is known from artefacts of various cultures since the Neolithic, and it recurs with some frequency on artefacts dated to the Germanic Iron Age, i.e. the Migration period to Viking Age period in Scandinavia, including the Vendel era in Sweden, attested from as early as the 3rd century in Elder Futhark inscriptions and as late as the 9th century on Viking Age image stones.
In older literature, the symbol is known variously as gammadion, fylfot, crux gothica, flanged thwarts, or angled cross.
English use of the Sanskritism swastika for the symbol dates to the 1870s, at first in the context of Hindu and Buddhist traditions, but from the 1890s also in cross-cultural comparison.
Examples include a 2nd-century funerary urn of the Przeworsk culture, Poland, the 3rd century Værløse Fibula from Zealand, Denmark, the Gothic spearhead from Brest-Litovsk, Belarus, the 9th century Snoldelev Stone from Ramsø, Denmark, and numerous Migration Period bracteates. The swastika is drawn either left-facing or right-facing, sometimes with "feet" attached to its four legs. Medallions and bracteates featuring swastikas were issued in Central Europe of late antiquity by the Etruscans.
The symbol is closely related to the triskele, a symbol of three-fold rotational symmetry, which occurs on artefacts of the same period. When considered as a four-fold rotational symmetrical analogue of the triskele, the symbol is sometimes also referred to as tetraskele.
The swastika symbol in the Germanic Iron Age has been interpreted as having a sacral meaning, associated with either Odin or Thor; however, the Indo-European tradition associates the four-fold swastika with solar deities, and the deities preceding Thor are rather associated with three-fold or, more often, six-fold symbology.
Bracteates
A number of bracteates, with or without runic inscriptions, show a swastika. Most of these bracteates are of the "C" type, showing a human head above a quadruped, often interpreted as the Germanic |
https://en.wikipedia.org/wiki/Syntrophales | The Syntrophales are an order of Thermodesulfobacteriota. |
https://en.wikipedia.org/wiki/Influenza%20Genome%20Sequencing%20Project | The Influenza Genome Sequencing Project (IGSP), initiated in early 2004, seeks to investigate influenza evolution by providing a public data set of complete influenza genome sequences from collections of isolates representing diverse species distributions.
The project is funded by the National Institute of Allergy and Infectious Diseases (NIAID), a division of the National Institutes of Health (NIH), and has been operating out of the NIAID Microbial Sequencing Center at The Institute for Genomic Research (TIGR, which in 2006 became The Venter Institute).
Sequence information generated by the project has been continually placed into the public domain through GenBank.
Origins
In late 2003, David Lipman, Lone Simonsen, Steven Salzberg, and a consortium of other scientists wrote a proposal to begin sequencing large numbers of influenza viruses at The Institute for Genomic Research (TIGR). Prior to this project, only a handful of flu genomes were publicly available. Their proposal was approved by the National Institutes of Health (NIH), and would later become the IGSP. New technology development led by Elodie Ghedin began at TIGR later that year, and the first publication describing > 100 influenza genomes appeared in 2005 in the journal Nature.
Research goals
The project makes all sequence data publicly available through GenBank, an international, NIH-funded, searchable online database.
This research helps to provide international researchers with the information needed to develop new vaccines, therapies and diagnostics, as well as improve understanding of the overall molecular evolution of Influenza and other genetic factors that determine their virulence. Such knowledge could not only help mitigate the impact of annual influenza epidemics, but could also improve scientific knowledge of the emergence of pandemic influenza viruses.
Results
The project completed its first genomes in March 2005 and has rapidly accelerated since. By mid-2008, over 3000 isolates had bee |
https://en.wikipedia.org/wiki/E.%20H.%20Moore | Eliakim Hastings Moore (; January 26, 1862 – December 30, 1932), usually cited as E. H. Moore or E. Hastings Moore, was an American mathematician.
Life
Moore, the son of a Methodist minister and grandson of US Congressman Eliakim H. Moore, discovered mathematics through a summer job at the Cincinnati Observatory while in high school. He subsequently studied mathematics at Yale University, where he was a member of Skull and Bones and obtained a BA in 1883 and the PhD in 1885 with a thesis supervised by Hubert Anson Newton, on some work of William Kingdon Clifford and Arthur Cayley. Newton encouraged Moore to study in Germany, and thus he spent an academic year at the University of Berlin, attending lectures by Leopold Kronecker and Karl Weierstrass.
On his return to the United States, Moore taught at Yale and at Northwestern University. When the University of Chicago opened its doors in 1892, Moore was the first head of its mathematics department, a position he retained until his death in 1932. His first two colleagues were Oskar Bolza and Heinrich Maschke. The resulting department was the second research-oriented mathematics department in American history, after Johns Hopkins University.
Accomplishments
Moore first worked in abstract algebra, proving in 1893 the classification of the structure of finite fields (also called Galois fields). Around 1900, he began working on the foundations of geometry. He reformulated Hilbert's axioms for geometry so that points were the only primitive notion, thus turning David Hilbert's primitive lines and planes into defined notions. In 1902, he further showed that one of Hilbert's axioms for geometry was redundant. His work on axiom systems is considered one of the starting points for metamathematics and model theory. After 1906, he turned to the foundations of analysis. The concept of a closure operator first appeared in his 1910 Introduction to a form of general analysis. He also wrote on algebraic geometry, number theory, and |
https://en.wikipedia.org/wiki/Lymphology%20Association%20of%20North%20America | The Lymphology Association of North America, formerly known as the American Society of Lymphology, is a non-profit organization based in Kansas City, Missouri. The society provides current information and resources for professionals and patients interested in the healthy function and disorders of the lymphatic system, such as immune response, allergies, infectious disease, circulatory disorders, lymphedema, relation to other systems of the body (integument, cardiac, venous, etc.), anatomical structures and functions, cancers, and integrative therapies. It organizes resources, conferences, and produces various publications. |
https://en.wikipedia.org/wiki/Linear%20no-threshold%20model | The linear no-threshold model (LNT) is a dose-response model used in radiation protection to estimate stochastic health effects such as radiation-induced cancer, genetic mutations and teratogenic effects on the human body due to exposure to ionizing radiation. The model statistically extrapolates effects of radiation from very high doses (where they are observable) into very low doses, where no biological effects may be observed. The LNT model lies at a foundation of a postulate that all exposure to ionizing radiation is harmful, regardless of how low the dose is, and that the effect is cumulative over lifetime.
The LNT model is commonly used by regulatory bodies as a basis for formulating public health policies that set regulatory dose limits to protect against the effects of radiation. The model has also been used in the assessment of cancer risks of mutagenic chemicals. The validity of the LNT model, however, is disputed, and other significant models exist: the threshold model, which assumes that very small exposures are harmless, the radiation hormesis model, which says that radiation at very small doses can be beneficial, and the supra-linear model based on observational data. Whenever the cancer risk is estimated from real data at low doses, and not from extrapolation of observations at high doses, the supra-linear model is verified. It has been argued that the LNT model may have created an irrational fear of radiation.
Different organizations take different approaches to the LNT model. For example, the US Nuclear Regulatory Commission and United States Environmental Protection Agency endorse the model, while a number of other bodies deprecate it. One of the organizations for establishing recommendations on radiation protection guidelines internationally, the United Nations Scientific Committee on the Effects of Atomic Radiation (UNSCEAR) that previously supported the LNT model, no longer supports the model for very low radiation doses.
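The difference between the competing dose-response models above is easy to state numerically; in the toy comparison below, the slope and threshold values are invented for illustration and are not regulatory figures:

```python
# Illustrative dose-response models (hypothetical parameters, not real risk data).
SLOPE = 0.05      # excess risk per sievert (made-up)
THRESHOLD = 0.1   # sieverts (made-up)

def lnt_risk(dose_sv):
    """Linear no-threshold: risk stays proportional to dose all the way to zero."""
    return SLOPE * dose_sv

def threshold_risk(dose_sv):
    """Threshold model: no excess risk at all below the threshold dose."""
    return SLOPE * max(dose_sv - THRESHOLD, 0.0)

for d in (0.0, 0.05, 0.5):
    print(d, lnt_risk(d), threshold_risk(d))
```

Under LNT even a 0.05 Sv dose carries a nonzero estimated risk, while the threshold model assigns it zero; the two models only diverge meaningfully at low doses, which is exactly where observational data are sparse.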
Introduction
Stocha |
https://en.wikipedia.org/wiki/Titer | Titer (American English) or titre (British English) is a way of expressing concentration. Titer testing employs serial dilution to obtain approximate quantitative information from an analytical procedure that inherently only evaluates as positive or negative. The titre corresponds to the highest dilution factor that still yields a positive reading. For example, positive readings in the first 8 serial, twofold dilutions translate into a titer of 1:256 (i.e., 2^−8). Titres are sometimes expressed by the denominator only, for example 1:256 is written 256.
The term also has two other, conflicting meanings. In titration, the titer is the ratio of actual to nominal concentration of a titrant, e.g. a titer of 0.5 would require 1/0.5 = 2 times more titrant than nominal. This is to compensate for possible degradation of the titrant solution. Second, in textile engineering, titre is also a synonym for linear density.
Etymology
Titer has the same origin as the word "title", from the French word titre, meaning "title" but referring to the documented purity of a substance, often gold or silver. This comes from the Latin word titulus, also meaning "title".
Examples
Antibody titer
An antibody titer is a measurement of how much antibody an organism has produced that recognizes a particular epitope. It is conventionally expressed as the inverse of the greatest dilution level that still gives a positive result on some test. ELISA is a common means of determining antibody titers. For example, the indirect Coombs test detects the presence of anti-Rh antibodies in a pregnant woman's blood serum. A patient might be reported to have an "indirect Coombs titer" of 16. This means that the patient's serum gives a positive indirect Coombs test at any dilution down to 1/16 (1 part serum to 15 parts diluent). At greater dilutions the indirect Coombs test is negative. If a few weeks later the same patient had an indirect Coombs titer of 32 (1/32 dilution which is 1 part serum to 31 parts dilu |
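The serial-dilution arithmetic described above is mechanical, and can be sketched as a short function (the helper name and the example readings are hypothetical):

```python
def titer(readings, dilution_factor=2):
    """Return the reciprocal titer from a serial dilution.

    `readings` is an ordered list of booleans, one per tube, starting with
    the least dilute tube (a 1:2 dilution for the default twofold series).
    The titer is the dilution factor of the last tube that still reads
    positive before the first negative.
    """
    result = 0
    factor = 1
    for positive in readings:
        factor *= dilution_factor
        if not positive:
            break
        result = factor
    return result

# 8 positive twofold dilutions followed by negatives -> titer 1:256
print(titer([True] * 8 + [False] * 2))   # 256
# The indirect Coombs example from the text: positive down to 1/16
print(titer([True] * 4 + [False]))       # 16
```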
https://en.wikipedia.org/wiki/List%20of%20chutneys | This is a list of notable chutney varieties. Chutney is a sauce and condiment in Indian cuisine, the cuisines of the Indian subcontinent and South Asian cuisine. It is made from a highly variable mixture of spices, vegetables, or fruit. Chutney originated in India, and is similar in preparation and usage to a pickle. In contemporary times, chutneys and pickles are a mass-produced food product.
Chutneys
Blatjang —a South African chutney made from dried fruit.
Branston Pickle—a jarred, mass-produced pickled chutney first made in England in 1922 by Crosse & Blackwell. It is sweet and spicy with a chutney-like consistency, containing chunks of vegetables in a thick brown sticky sauce.
Chammanthi podi—a dry condiment and coconut chutney from the Indian state of Kerala.
Coconut chutney—a South Indian chutney side dish and condiment, it is common in South Indian states. It is made with coconut pulp ground with other ingredients such as tamarind, green chili peppers and coriander.
Coriander chutney—common in Indian cuisine.
Dahi chutney—strained yogurt mixed into a chutney of mint and onions, popular in South India.
Eromba—common in Manipuri cuisine.
Garlic chutney—prepared using fresh garlic, dry or fresh coconut, groundnuts and green or red chili peppers, prepared in both wet and dried forms.
Gooseberry chutney—gooseberry (amla) chutney or "amlakir chutney" is common in Bengali cuisine. It is prepared by boiling raw sliced gooseberries in spicy jaggery or sugar syrup.
Green mango chutney—an Indian chutney prepared using unripe mangoes.
Hara choley chutney—made with raw unripe green chickpeas, often mixed with green coriander leaves.
Hog plum chutney—common in Bengali and Karnataka cuisine. It is called "Amrar chutney" in West Bengal. Ambade (tulu) chutney made from hog plum is a special dish from coastal districts of the Karnataka state of India (Bharat).
Kachri ki chutney—made with kachri (wild melon).
Major Grey's Chutney—reputedly created by a 19th-cent |
https://en.wikipedia.org/wiki/Synthetic%20ribosome | Synthetic ribosomes are artificial small molecules that can synthesize peptides in a sequence-specific manner.
David Alan Leigh's lab built a synthetic ribosome using a chemical structure based on a rotaxane.
The Cédric Orelle research group created ribosomes with tethered and inseparable subunits (or Ribo-T). |
https://en.wikipedia.org/wiki/Extended%20finite%20element%20method | The extended finite element method (XFEM) is a numerical technique based on the generalized finite element method (GFEM) and the partition of unity method (PUM). It extends the classical finite element method (FEM) approach by enriching the solution space for solutions to differential equations with discontinuous functions.
History
The extended finite element method (XFEM) was developed in 1999 by Ted Belytschko and collaborators to help alleviate shortcomings of the finite element method, and has been used to model the propagation of various discontinuities: strong (cracks) and weak (material interfaces). The idea behind XFEM is to retain most advantages of meshfree methods while alleviating their negative sides.
Rationale
The extended finite element method was developed to ease difficulties in solving problems with localized features that are not efficiently resolved by mesh refinement. One of the initial applications was the modelling of fractures in a material. In this original implementation, discontinuous basis functions are added to standard polynomial basis functions for nodes belonging to elements that are intersected by a crack, to provide a basis that includes crack opening displacements. A key advantage of XFEM is that in such problems the finite element mesh does not need to be updated to track the crack path. Subsequent research has illustrated the more general use of the method for problems involving singularities, material interfaces, regular meshing of microstructural features such as voids, and other problems where a localized feature can be described by an appropriate set of basis functions.
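A minimal 1-D sketch of the Heaviside-enrichment idea just described (all nodal values are illustrative and not taken from any particular XFEM implementation): the standard linear shape functions are supplemented by the same shape functions multiplied by a sign function centered on the crack, so a single element can represent a displacement jump without the mesh ever conforming to the crack.

```python
import numpy as np

x_c = 0.4                       # crack location inside a single [0, 1] element

def shape(x):
    """Standard linear shape functions N1, N2 on the element [0, 1]."""
    return np.array([1.0 - x, x])

def heaviside(x):
    """Generalized Heaviside enrichment: -1 below the crack, +1 above."""
    return -1.0 if x < x_c else 1.0

def u_enriched(x, u_std, a_enr):
    """Enriched approximation: u(x) = sum_i N_i(x) u_i + H(x) sum_i N_i(x) a_i."""
    N = shape(x)
    return N @ u_std + heaviside(x) * (N @ a_enr)

u_std = np.array([0.0, 1.0])    # standard nodal displacements (illustrative)
a_enr = np.array([0.25, 0.25])  # enrichment dofs that control the jump

eps = 1e-9
jump = u_enriched(x_c + eps, u_std, a_enr) - u_enriched(x_c - eps, u_std, a_enr)
print(jump)                     # ≈ 2 * (N(x_c) @ a_enr) = 0.5
```

The displacement jump across the crack comes entirely from the enrichment degrees of freedom, which is why the underlying mesh never needs to be updated as the crack moves.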
Principle
Enriched finite element methods extend, or enrich, the approximation space so that it is able to naturally reproduce the challenging feature associated with the problem of interest: the discontinuity, singularity, boundary layer, etc. It was shown that for some problems, such an embedding of the problem's feature into the approximation spa |
https://en.wikipedia.org/wiki/Interpeduncular%20fossa | The interpeduncular fossa is a deep depression of the ventral surface of the midbrain between the two crura cerebri.
It has been found in humans and macaques, but not in rats or mice, suggesting that it is a relatively recent evolutionary structure.
Anatomy
The interpeduncular fossa is a somewhat rhomboid-shaped area of the base of the brain.
Features
The lateral wall of the interpeduncular fossa bears a groove - the oculomotor sulcus - from which rootlets of the oculomotor nerve emerge from the substance of the brainstem and aggregate into a single fascicle.
Anatomical relations
The ventral tegmental area lies at the depth of the interpeduncular fossa.
Boundaries
The interpeduncular fossa is bounded in front by the optic chiasma, behind by the antero-superior surface of the pons, antero-laterally by the converging optic tracts, and postero-laterally by the diverging cerebral peduncles.
The structures forming the floor of the interpeduncular fossa, from behind forward, are the posterior perforated substance, corpora mamillaria, tuber cinereum, infundibulum, and pituitary gland.
Contents
The contents of the interpeduncular fossa include the oculomotor nerves and the circle of Willis.
The basal veins pass alongside the interpeduncular fossa before joining the great cerebral vein.
Clinical significance
The most common locations for neurocutaneous melanosis are along the interpeduncular fossa, ventral brainstem, upper cervical cord, and ventral lumbosacral cord.
See also
Interpeduncular cistern
Cerebral peduncles
Additional images |
https://en.wikipedia.org/wiki/Lanmaoa%20pseudosensibilis | Lanmaoa pseudosensibilis is a fungus of the family Boletaceae native to the United States. First described officially in 1971 by mycologists Alexander H. Smith and Harry Delbert Thiers, it was transferred to the newly circumscribed genus Lanmaoa in 2015.
While edible, it is not recommended as it could be confused with toxic species.
See also
List of North American boletes |
https://en.wikipedia.org/wiki/Orienting%20response | The orienting response (OR), also called orienting reflex, is an organism's immediate response to a change in its environment, when that change is not sudden enough to elicit the startle reflex. The phenomenon was first described by Russian physiologist Ivan Sechenov in his 1863 book Reflexes of the Brain, and the term ('ориентировочный рефлекс' in Russian) was coined by Ivan Pavlov, who also referred to it as the Shto takoye? (Что такое? or What is it?) reflex. The orienting response is a reaction to novel or significant stimuli. In the 1950s the orienting response was studied systematically by the Russian scientist Evgeny Sokolov, who documented the phenomenon called "habituation", referring to a gradual "familiarity effect" and reduction of the orienting response with repeated stimulus presentations.
Researchers have found a number of physiological mechanisms associated with OR, including changes in phasic and tonic skin conductance response (SCR), electroencephalogram (EEG), and heart rate following a novel or significant stimulus. These observations all occur within seconds of stimulus introduction. In particular, EEG studies of OR have corresponded particularly with the P300 wave and P3a component of the OR-related event-related potential (ERP).
Neural correlates
Current understanding of the localization of the OR in the brain is still unclear. In one study using fMRI and SCR, researchers found that novel visual stimuli associated with SCR responses typical of an OR also corresponded to activation in the hippocampus, anterior cingulate gyrus, and ventromedial prefrontal cortex. These regions are also believed to be largely responsible for emotion, decision making, and memory. Increased activity was also recorded in the cerebellum and extrastriate cortex, areas significantly implicated in visual perception and processing.
Function
When an individual encounters a novel environmental stimulus, such as a bright flash of light or a sudden loud noise, they will pay attentio |
https://en.wikipedia.org/wiki/Pentagonal%20gyrobicupola | In geometry, the pentagonal gyrobicupola is one of the Johnson solids (). Like the pentagonal orthobicupola (), it can be obtained by joining two pentagonal cupolae () along their bases. The difference is that in this solid, the two halves are rotated 36 degrees with respect to one another.
The pentagonal gyrobicupola is the third in an infinite set of gyrobicupolae.
The pentagonal gyrobicupola can be obtained by taking a rhombicosidodecahedron, removing the middle parabidiminished rhombicosidodecahedron (), and joining the two opposing cupolae back together.
Formulae
The following formulae for volume and surface area can be used if all faces are regular, with edge length a: |
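The formulas themselves are cut off in this excerpt. Assuming regular faces with edge length a, they can be reconstructed by doubling the volume of the pentagonal cupola and summing the areas of the solid's 10 equilateral triangles, 10 squares, and 2 regular pentagons:

```latex
V = \frac{1}{3}\left(5 + 4\sqrt{5}\right)a^{3} \approx 4.6481\,a^{3}
\qquad
A = \left(10 + \frac{5\sqrt{3}}{2} + \frac{1}{2}\sqrt{25 + 10\sqrt{5}}\right)a^{2} \approx 17.771\,a^{2}
```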
https://en.wikipedia.org/wiki/Heavy%20isotope%20diet | A heavy isotope diet is one that contains nutrients in which some atoms are replaced with their heavier non-radioactive isotopes, such as deuterium (2H) or heavy carbon (13C). Biomolecules that incorporate heavier isotopes give rise to more stable molecular structures, which is hypothesized to increase resistance to damage associated with ageing or diseases.
Medicines with some hydrogen atoms substituted with deuterium are called deuterated drugs, while substances that are essential nutrients can be used as food constituents, making this food "isotopic". Consumed with food, these nutrients become building material for the body. Examples are deuterated polyunsaturated fatty acids, essential amino acids, DNA bases such as cytosine, and heavy water and glucose.
Suggested mechanism
One of the most pernicious and irreparable types of oxidative damage inflicted by reactive oxygen species (ROS) upon biomolecules involves carbon-hydrogen bond cleavage (hydrogen abstraction). Intriguingly, the biomolecules most susceptible to this type of damage belong to the group of essential nutrients (10 out of 20 amino acids; nucleosides under certain conditions (conditionally essential); all polyunsaturated fatty acids). In theory, replacing hydrogen with deuterium "reinforces" the bond due to the kinetic isotope effect, and such reinforced biomolecules taken up by the body will be more resistant to ROS.
Deuterated omega-6 fatty acids for humans with degenerative diseases
The company Retrotope pioneered the development of a deuterated omega-6 fatty acid, di-deuterated linoleic acid ethyl ester (RT001), as a food additive for the potential treatment of neurodegenerative diseases such as Friedreich's ataxia and infantile neuroaxonal dystrophy. The FDA has granted it an orphan drug designation, and it has passed Phase I/II clinical trials (as of 2018).
See also
Deuterated drug
Heavy water
RT001 |
https://en.wikipedia.org/wiki/Dense%20order | In mathematics, a partial order or total order < on a set X is said to be dense if, for all x and y in X for which x < y, there is a z in X such that x < z < y. That is, for any two elements, one less than the other, there is another element between them. For total orders this can be simplified to "for any two distinct elements, there is another element between them", since all elements of a total order are comparable.
Example
The rational numbers as a linearly ordered set are a densely ordered set in this sense, as are the algebraic numbers, the real numbers, the dyadic rationals and the decimal fractions. In fact, every Archimedean ordered ring extension of the integers is a densely ordered set.
On the other hand, the linear ordering on the integers is not dense.
Uniqueness for total dense orders without endpoints
Georg Cantor proved that every two non-empty dense totally ordered countable sets without lower or upper bounds are order-isomorphic. This makes the theory of dense linear orders without bounds an example of an ω-categorical theory where ω is the smallest limit ordinal. For example, there exists an order-isomorphism between the rational numbers and other densely ordered countable sets including the dyadic rationals and the algebraic numbers. The proofs of these results use the back-and-forth method.
Minkowski's question mark function can be used to determine the order isomorphisms between the quadratic algebraic numbers and the rational numbers, and between the rationals and the dyadic rationals.
Generalizations
Any binary relation R is said to be dense if, for all R-related x and y, there is a z such that x and z and also z and y are R-related. Formally: ∀x ∀y (xRy ⇒ ∃z (xRz ∧ zRy)).
Alternatively, in terms of composition of R with itself, the dense condition may be expressed as R ⊆ R ; R.
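The composition characterization R ⊆ R ; R translates directly into a finite check; a small sketch over explicit sets of pairs (the example relations are chosen for illustration):

```python
def is_dense(R):
    """A relation R (a set of pairs) is dense iff for every (x, y) in R
    there is some z with (x, z) in R and (z, y) in R, i.e. R ⊆ R;R."""
    elems = {e for pair in R for e in pair}
    return all(any((x, z) in R and (z, y) in R for z in elems)
               for (x, y) in R)

# Strict < on the finite set {0, 1, 2} is not dense: nothing lies between 0 and 1.
lt = {(0, 1), (0, 2), (1, 2)}
# Any reflexive relation is dense (take z = x), matching the sufficient
# conditions listed below.
refl = {(0, 0), (1, 1), (0, 1)}

print(is_dense(lt), is_dense(refl))  # False True
```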
Sufficient conditions for a binary relation R on a set X to be dense are:
R is reflexive;
R is coreflexive;
R is quasireflexive;
R is left or right Euclidean; or
R is symmetric and semi-connex and |
https://en.wikipedia.org/wiki/Ejection%20charge | Ejection charge (commonly Black Powder), also called expelling charge, is a pyrotechnic composition, a type of a pyrotechnic gas generator designed to produce a small short-term amount of thrust to burst open a container and eject its content.
In model rocketry, ejection charges are used to deploy a recovery system (usually a parachute or streamer). The ejection charge is ignited through a layer of delay composition, to fire shortly after the main engine burns out. Ejection charges can also be triggered by a timer or an altimeter. A small amount of granular black powder is usually used, but smokeless powder and other compositions are possible.
Ejection charges are also used in some flares to eject the light or smoke producing components out of the flare casing. In countermeasure flares, ejection charges are used to propel the flares out of their casing, or to eject pyrophoric fluids from their containers.
In cluster bombs, ejection charges are used to disperse the submunitions. Burst charges are also used to dispense leaflets from leaflet bombs. In chemical weapons, ejection charges are used to disperse the chemical agent from the bomb, submunition, grenade, or warhead.
See also
Burst charge |
https://en.wikipedia.org/wiki/MelsecNet | MelsecNet is a protocol developed and supported by Mitsubishi Electric for data delivery. MelsecNet supports 239 networks.
The MelsecNet protocol has two variants: MELSECNET/H and its predecessor MELSECNET/10, both of which combine high speed with redundancy functions to give deterministic delivery of large data volumes. Both variants can use either a coaxial bus or an optical loop for transmission. The coaxial bus type uses the token bus method, while the optical loop type uses the Token Ring method and supports greater overall distances. MELSECNET/H supports a maximum of 19,200 bytes/frame and a maximum communication speed of 25 Mbit/s; MELSECNET/10 supports 960 bytes/frame and a baud rate of 10 Mbit/s. Mitsubishi provides a manual for each of the two variants.
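From the figures above, the raw serialization time of a maximum-size frame can be estimated as frame bits divided by line rate; this ignores token passing and protocol overhead, so the helper below (names are ours) is only an order-of-magnitude illustration.

```python
# Per-variant maximums taken from the figures quoted above.
VARIANTS = {
    "MELSECNET/H":  {"frame_bytes": 19200, "bit_rate": 25_000_000},
    "MELSECNET/10": {"frame_bytes": 960,   "bit_rate": 10_000_000},
}

def frame_time_ms(variant):
    # Time to serialize one maximum-size frame, in milliseconds.
    v = VARIANTS[variant]
    return v["frame_bytes"] * 8 / v["bit_rate"] * 1000
```

A full MELSECNET/H frame thus takes about 6.1 ms on the wire, versus roughly 0.77 ms for a full MELSECNET/10 frame, so the newer variant moves twenty times the payload in under ten times the transmission time.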
Features
Easy personal computer, HMI and PLC connection
High-speed data communications with large data volumes
Reliable and robust data transfers
Redundancy functions
10/25 megabaud data transfer rates
Maximum network distance 30 km, up to 255 segments
Simple configuration, remote programming
Floating master |
https://en.wikipedia.org/wiki/Marine%20spatial%20planning | Marine spatial planning (MSP) is a process that brings together multiple users of the ocean – including energy, industry, government, conservation and recreation – to make informed and coordinated decisions about how to use marine resources sustainably. MSP generally uses maps to create a more comprehensive picture of a marine area – identifying where and how an ocean area is being used and what natural resources and habitat exist. It is similar to land-use planning, but for marine waters.
Through the planning and mapping process of a marine ecosystem, planners can consider the cumulative effect of maritime industries on our seas, seek to make industries more sustainable and proactively minimize conflicts between industries seeking to utilise the same sea area. The intended result of MSP is a more coordinated and sustainable approach to how our oceans are used – ensuring that marine resources and services are utilized, but within clear environmental limits to ensure marine ecosystems remain healthy and biodiversity is conserved.
Definition and concept
The most commonly used definition of marine spatial planning was developed by the Intergovernmental Oceanographic Commission (IOC) of UNESCO:
The main elements of marine spatial planning include an interlinked system of plans, policies and regulations; the components of environmental management systems (e.g. setting objectives, initial assessment, implementation, monitoring, audit and review); and some of the many tools that are already used for land use planning. Whatever the building blocks, the essential consideration is that they need to work across sectors and give a geographic context in which to make decisions about the use of resources, development, conservation and the management of activities in the marine environment.
Effective marine spatial planning has essential attributes:
Multi-objective. Marine spatial planning should balance ecological, social, economic, and governance objectives, but the over r |